CVE-2026-33980: KQL Injection in Azure Data Explorer MCP Server - What AI Agent Operators Need to Know

CVE-2026-33980: KQL Injection in Azure Data Explorer MCP Server - What AI Agent Operators Need to Know

The Threat: CVE-2026-33980

A critical vulnerability in the Azure Data Explorer MCP Server highlights a fundamental security gap in AI agent infrastructure: injection attacks via tool parameters. Designated CVE-2026-33980, this flaw allows prompt-injected AI agents to execute arbitrary Kusto Query Language (KQL) commands through unsanitized table_name parameters in three tool handlers. The vulnerability was patched in commit 0abe0ee, but its implications extend far beyond this single server.

This isn't just another injection bug—it's a pattern that's repeating across the MCP ecosystem. When AI agents can manipulate database queries through natural language prompts, the trust boundary between user intent and system execution collapses.

How the Attack Works

The vulnerability exploits unsafe string interpolation in KQL query construction. When an AI agent invokes a tool handler, the table_name parameter gets directly inserted into the query without validation or parameterization:

# VULNERABLE PATTERN (hypothetical reconstruction)
def query_table(table_name: str, filter_condition: str):
    query = f"""
    {table_name}
    | where {filter_condition}
    | take 100
    """
    return execute_kql(query)

An attacker crafts a prompt that tricks the AI into passing malicious input as the table_name:

"Show me data from logs_table; SecurityEvent | where EventID == 4624 | summarize count() by Account"

The agent, following its instructions to query tables, passes this entire string as table_name. The resulting KQL executes both the intended query AND the injected command, potentially exfiltrating authentication logs or sensitive security events.

Three tool handlers were affected, suggesting this wasn't an isolated coding error but a systemic pattern in the server's query construction approach.

Why This Matters for AI Agent Deployments

MCP servers act as the bridge between AI agents and external systems. Unlike traditional APIs where developers control request formatting, MCP servers must handle unpredictable input from language models that can be manipulated through prompt injection.

The attack chain is straightforward but devastating:

  1. Prompt Injection: Attacker embeds malicious instructions in user input or external content
  2. Tool Invocation: Compromised agent calls MCP server with poisoned parameters
  3. Query Execution: Unsanitized input reaches the database with full privileges
  4. Data Exfiltration: Sensitive data flows back through the agent to the attacker

This vulnerability demonstrates that traditional input validation assumptions fail in AI-mediated workflows. The agent itself becomes part of the attack surface.

Defensive Measures for Operators

If you're running MCP servers—especially database connectors—immediate action is required:

1. Parameter Validation and Whitelisting

Never interpolate user-controlled input directly into queries. Implement strict allowlists for table names:

ALLOWED_TABLES = {"logs", "metrics", "events", "users"}

def validate_table_name(table_name: str) -> bool:
    return table_name in ALLOWED_TABLES

def query_table(table_name: str, filter_condition: str):
    if not validate_table_name(table_name):
        raise ValueError(f"Invalid table name: {table_name}")

    # Use parameterized queries where possible
    query = f"""
    {table_name}
    | where {sanitize_filter(filter_condition)}
    | take 100
    """
    return execute_kql(query)

2. Query Construction Isolation

Separate query building from execution. Use prepared statements or query builders that don't support string concatenation:

from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class TableQuery:
    table: Literal["logs", "metrics", "events"]
    limit: int = 100

    def to_kql(self) -> str:
        # No string interpolation possible - table is enum-constrained
        return f"{self.table.value} | take {self.limit}"

3. Defense in Depth

Implement multiple layers of protection:

  • Input validation: Reject unexpected characters in table names
  • Query analysis: Parse KQL before execution to detect multiple statements
  • Least privilege: Run queries with read-only credentials
  • Rate limiting: Prevent rapid-fire query exploitation
  • Audit logging: Log all tool invocations with full parameter capture

4. Agent-Side Controls

Configure your AI agents to recognize and resist injection attempts:

# Example: Tool call validation
class SecureToolHandler:
    INJECTION_PATTERNS = [
        r";\s*\w+\s*\|",  # Statement separators
        r"\|\s*\w+\s*\(",  # Pipe to function
    ]

    def validate_input(self, params: dict) -> bool:
        for param_value in params.values():
            for pattern in self.INJECTION_PATTERNS:
                if re.search(pattern, str(param_value), re.IGNORECASE):
                    return False
        return True

Immediate Actions

If you're using the Azure Data Explorer MCP Server:

  • Update immediately to the patched version (commit 0abe0ee or later)
  • Audit query logs for suspicious patterns (semicolons, multiple pipes, unexpected table references)
  • Review agent conversations for potential injection attempts
  • Implement query result filtering to prevent sensitive data exposure

For all MCP server deployments:

  • Scan your tool handlers for string interpolation in query construction
  • Apply strict input validation at the server boundary
  • Consider using security-focused MCP servers like Semgrep's MCP integration for code analysis or Mobb Vibe Shield for vulnerability detection
  • Enable comprehensive logging and monitoring for tool invocations

Key Takeaways

CVE-2026-33980 exemplifies a broader pattern: AI agents amplify the impact of traditional injection vulnerabilities. The same unsanitized input that might cause a minor bug in a web application becomes a critical security flaw when an AI agent can be manipulated to exploit it at scale.

The fix isn't just about patching one server—it's about recognizing that MCP servers require defense-in-depth strategies that account for adversarial AI behavior. Parameter validation, query isolation, and comprehensive logging aren't optional enhancements; they're fundamental requirements for secure AI agent infrastructure.

For the original vulnerability disclosure and technical details, see the NVD entry at https://nvd.nist.gov/vuln/detail/CVE-2026-33980.

AgentGuard360

Built for agents and humans. Comprehensive threat scanning, device hardening, and runtime protection. All without data leaving your machine.

Coming Soon