In security operations, the bottleneck has always been clear: integrations. Every new data source means another parser to write, another schema mapping to create, another round of testing and debugging. Vendors ship integrations on their timeline, with their priorities, at their depth. If your particular flavor of Fortinet logs isn't supported? Wait for the next release. Found a parsing bug? Open a ticket and hope. Need deeper field extraction? Better have enterprise support. The vendor's integration catalog becomes the de facto limit of your visibility. The vendor's roadmap becomes your ceiling.
Today, we're shipping Tenzir MCP Server v0.4.0 to change this. Drop in a single log sample, and out comes a complete, tested, production-ready Tenzir package with both parsing and OCSF mapping. And we mean 100% hands-off-keyboard: you provide the sample, the agent hands back a 1-click deployable package. Your job is to... click. The power dynamic just flipped.
What's New: Full End-to-End Automation
Our initial release focused on OCSF mapping generation. The new release adds comprehensive code generation, package lifecycle management, execution and testing frameworks, and enhanced documentation discovery. These provide an end-to-end path from raw log to deployable package.
Here's what a typical user prompt looks like. Yes, that's really all it takes!
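An illustrative prompt of roughly that shape (the wording and the log line are hypothetical, not a verbatim transcript):

```
Here is a sample log line from our Fortinet firewall. Build a Tenzir
package that parses it and maps it to OCSF, including tests:

date=2024-01-15 time=10:32:01 type="dns" subtype="dns-query" qname="example.com" ...
```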
Your agent then performs the following steps with the support of the MCP server's tools:
Analyzes log format → Detects key-value structure
Generates parser → Creates fortinet::parse operator
Identifies OCSF class → Selects DNS Activity
Maps fields → Creates fortinet::ocsf::dns operator
Scaffolds package → Builds complete directory structure
Generates tests → Creates input/output test cases
Validates output → Runs tests, confirms OCSF compliance
The result: A complete package with operators, tests, and metadata. Generated hands-off-keyboard. One conversation. From raw log to 1-click deployable integration.
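One plausible shape for the generated package, assuming a conventional layout (the file and directory names here are illustrative; the scaffolder determines the actual structure):

```
fortinet/
├── package.yaml        # package metadata and operator definitions
└── tests/
    ├── input.log       # the original log sample
    └── output.json     # expected OCSF events
```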
Customer Empowerment Over Vendor Control
Here's what really changed: the fundamental power structure of security tooling just got inverted.
The Old Model: Vendor Lock-In
Traditional security platforms operate on a simple premise: control the integrations, control the customer. The vendor decides which data sources to support, which fields to extract, and when bugs get fixed. Customers are passive consumers of a pre-packaged integration catalog. Pay more, get more integrations. It's artificial scarcity dressed up as enterprise features.
This creates predictable problems:
No timeline control: Need support for a new data source? Get in line behind enterprise customers. Maybe next year.
No quality control: Parser extracting fields incorrectly? Submit a ticket. Wait for triage. Hope your support tier matters.
No depth control: Want more fields extracted or custom transformations? That's a "feature request." Translation: not happening.
Artificial differentiation: Basic parsers gatekept as "premium features" to justify price tiers. Pay to parse CSV files differently.
The integration catalog becomes a moat. Customer dependency becomes the business model. Everyone pretends this makes sense.
The New Model: Bring Your Own Integrations
With our MCP server, we flip this entirely. You own the integration pipeline. You control the timeline. You define the quality bar. You decide the depth.
Need a parser for a niche appliance? Generate it in 5 minutes. Found a bug in field extraction? Fix it yourself in real-time. Want to enrich with custom context? Add it directly to your package. Your security stack, your timeline, your way.
Vendors still matter. But they're no longer gatekeepers rationing basic functionality to justify tier pricing. Tenzir becomes the engine that executes your integrations, not the middleman charging rent for access.
The Strategic Implications
For security teams, this changes planning fundamentally. Stop negotiating integration roadmaps with vendors like you're asking permission to see your own data. Instead:
Respond immediately to new threats: New attack technique leveraging an obscure log source? Generate the integration in minutes, not months.
Customize without compromise: Need deeper field extraction for your compliance requirements? Add it yourself. No enterprise sales call required.
Fix issues in production timelines: Critical parsing bug affecting detection? Deploy a fix in hours. Not quarters. Not "scheduled for next release."
Build institutional knowledge: Your integrations live as code in your repository, documented and testable. Not locked in a vendor's black box where you can't see or change them.
The vendor relationship shifts from dependency to partnership. Tenzir provides the execution engine. You own the logic. As it should be.
The Architecture: Why Structured Tools Win
The architecture here matters more than the automation itself.
Structured Tools Over Prompt Engineering
Large language models are powerful, but raw prompting is brittle. Ask GPT-5 to "generate a parser for this log" and you get inconsistent results—sometimes valid code, sometimes hallucinated operators, sometimes subtly broken logic that passes cursory inspection. Production systems can't run on maybes.
The MCP server takes a different approach: structured tools with intelligent prompting. Instead of dumping everything into a prompt and hoping, we provide focused tools like docs_read, ocsf_get_class, and run_pipeline that give the AI specific capabilities at the right moments.
The high-level guide tools (make_parser, make_ocsf_mapping) orchestrate these atomic tools into coherent workflows. They tell the AI: "First, read the documentation for the parse_kv operator. Then analyze the log sample. Then generate code. Then test it with run_pipeline. Then iterate if needed." This structure dramatically improves reliability and consistency.
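The generate-test-iterate loop these guide tools enforce can be sketched in a few lines. This is a conceptual stand-in, not the MCP server's real API: `generate` and `run_pipeline` are hypothetical callables representing the LLM call and the validation tool.

```python
# Sketch of the guide-tool loop: generate code, validate it, feed errors back.
# `generate` and `run_pipeline` are stand-ins for LLM and MCP tool calls.

def make_parser(sample, generate, run_pipeline, max_attempts=3):
    """Return parser code that passes validation, or raise after retries."""
    feedback = None
    for _ in range(max_attempts):
        code = generate(sample, feedback)          # LLM call with error context
        ok, feedback = run_pipeline(code, sample)  # deterministic validation
        if ok:
            return code
    raise RuntimeError("no valid parser after retries")
```

The key design point is that the validator's error output flows back into the next generation attempt, so the AI converges on working code instead of guessing blind.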
Model Agnostic by Design
Because the value is in the tools, not the model, the MCP server naturally supports any MCP-aware client and any underlying LLM. Want to use Claude Sonnet? Great. Prefer GPT-5? Works fine. Experimenting with open models? Go ahead. We don't care. Why would we? The differentiation is in what the tools do, not which AI calls them.
This matters strategically. LLM capabilities are advancing rapidly—what's state-of-the-art today will be commodity in six months. Companies betting on model lock-in are building on quicksand. By orthogonalizing our value from the model choice, we let customers ride the advancement curve without switching costs.
It also matters practically. Different teams have different model preferences based on cost, privacy requirements, or deployment constraints. Model lock-in is just another form of vendor lock-in. We're not playing that game.
Retrieval-Augmented Generation at Scale
The MCP server provides AI assistants with authoritative, up-to-date context through RAG. When generating TQL code, the AI queries our documentation in real-time. When creating OCSF mappings, it fetches the latest schema definitions. When testing pipelines, it gets immediate validation feedback.
This eliminates the staleness problem that plagues fine-tuned models. Our TQL language evolves. OCSF schemas release new versions. The MCP server always provides current information, ensuring generated code reflects best practices and current standards.
Deterministic Validation
The secret sauce: Tenzir's native OCSF operators (ocsf::cast, ocsf::derive) provide deterministic validation. AI-generated mappings aren't just syntactically correct—they're semantically validated against the official OCSF schema. Fields are type-checked. Required attributes are enforced. The output is guaranteed to be schema-compliant.
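As a conceptual illustration only (this is not Tenzir's implementation, and the schema stub is hand-written, not a real OCSF definition), schema-driven validation boils down to enforcing required attributes and checking types:

```python
# Conceptual illustration of deterministic schema validation.
# SCHEMA is a tiny hand-written stand-in for an OCSF class definition.

SCHEMA = {
    "class_uid": (int, True),    # (expected type, required?)
    "activity_id": (int, True),
    "query": (dict, False),
}

def validate(event, schema=SCHEMA):
    """Return a list of schema violations; an empty list means compliant."""
    errors = []
    for field, (typ, required) in schema.items():
        if field not in event:
            if required:
                errors.append(f"missing required attribute: {field}")
        elif not isinstance(event[field], typ):
            errors.append(f"{field}: expected {typ.__name__}")
    return errors

print(validate({"class_uid": 4003, "activity_id": 1}))  # → []
```

Because this check is deterministic, a mapping either passes or yields concrete errors, which is exactly the feedback an AI needs to self-correct.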
This combination—AI for generation, deterministic validation for correctness—is what makes 100% automation possible. You get the creativity and adaptability of AI without the hallucinations. Code that actually works, not code that looks plausible.
The Bigger Picture: AI as an Interface Layer
The release of the Model Context Protocol kicked off industry discussion about whether AI-driven interfaces would turn established SaaS vendors into "dumb infrastructure." As we noted in our previous post, for companies built on UI lock-in, this is existential anxiety. For us, it validates everything we've been building.
Tenzir's value lives in the pipeline engine, not the UI wrapper. The MCP server is another interface, joining our web UI, CLI, and API. Each interface exposes the same powerful engine in different ways. Let the AI abstract the UI—the engine is where the differentiation lives.
A high-performance, flexible, open-source pipeline execution platform that scales from edge devices to cloud infrastructure. That's the moat. That's what we're building.
The MCP server elevates Tenzir from infrastructure to intelligent platform. A programmable, OCSF-aware, documentation-rich toolkit that makes security teams more productive. The kind of smart building block that thrives in an AI-augmented workflow.
Try It Today
The Tenzir MCP Server is available now as open source at github.com/tenzir/mcp. Run it locally on your infrastructure with any MCP-aware AI client—Claude Code, Codex, Cursor, or custom harnesses. No vendor approval required.
To get started:
Follow our installation guide.
Try the OCSF mapping guide with your own log samples.
Explore the full MCP tool reference.
We're building the future of AI-catalyzed data pipelines in the open. No lock-in, no gatekeeping, no artificial constraints. No asking permission to parse your own logs. Just powerful infrastructure that puts you in control.
The integration bottleneck is being solved. The vendor gatekeeper is obsolete. Now build what you couldn't before.
Join our community Discord to share what you're building.

