A critical vulnerability (CVE-2026-33980) in the Azure Data Explorer MCP Server exposes AI assistants to KQL injection attacks through unsanitized parameter interpolation. The vulnerability affects three tool handlers in which the table_name parameter is embedded directly into Kusto Query Language (KQL) statements without validation or parameterization, allowing malicious prompts to inject arbitrary KQL and giving AI agents unintended query access to sensitive data stores.
This vulnerability highlights a fundamental architectural risk in Model Context Protocol implementations: the trust boundary between AI-generated content and database query execution is often thinner than developers assume.
How the Attack Works
The vulnerability stems from a classic injection pattern adapted for the AI agent context. In the affected Azure Data Explorer MCP Server, tool handlers accept user input—often originating from AI agent prompts—and interpolate these values directly into KQL query templates.
Consider a tool designed to fetch data from a specific table. The implementation might construct a query like:
# VULNERABLE PATTERN - DO NOT USE
query = f"{table_name} | limit 100"
When an AI agent processes a user request like "show me data from orders where customer_id is 12345", the agent might extract "orders" as the table_name parameter. However, if a malicious user steers the agent into supplying a payload such as "orders | union SensitiveTable //" as the table name, the resulting query becomes:
orders | union SensitiveTable // | limit 100
The trailing // (KQL's comment marker) neutralizes the rest of the template, and the injected union pulls in a table the tool was never meant to expose. The query runs with the MCP server's permissions, so it can exfiltrate sensitive information, and deployments that also accept management commands risk destructive operations such as .drop table. This is particularly dangerous because AI agents often hold broad permissions to run analytical queries across multiple tables.
Why AI Agents Amplify This Risk
Traditional injection attacks require attackers to manually craft payloads. With AI agents, the attack surface expands significantly. A compromised or manipulated prompt can trigger the vulnerable code path without the attacker writing a single line of KQL.
The attack chain follows this pattern:
1. Attacker crafts a prompt injection targeting the AI agent's tool-calling behavior
2. AI agent processes the prompt and extracts parameters for the MCP tool
3. MCP server receives parameters and constructs the KQL query
4. Malicious payload executes with the permissions of the MCP server's service account
This is especially concerning in multi-tenant environments where the same Azure Data Explorer MCP Server instance serves multiple AI agents or user sessions. Cross-tenant data access becomes possible if table names aren't strictly validated against allowed schemas.
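To make the chain concrete, here is a minimal, hypothetical sketch of a vulnerable tool handler. The function and parameter names are illustrative assumptions, not the actual Azure Data Explorer MCP Server code:

```python
# Hypothetical MCP tool handler illustrating the vulnerable flow.
# Names are illustrative; this is not the actual server code.

def handle_get_table_sample(table_name: str) -> str:
    # VULNERABLE: agent-supplied input interpolated straight into KQL
    return f"{table_name} | limit 100"

# Benign call, as the tool author intended:
print(handle_get_table_sample("orders"))
# → orders | limit 100

# Injected call, where a prompt steered the agent's extracted parameter:
print(handle_get_table_sample("orders | union SensitiveTable //"))
# → orders | union SensitiveTable // | limit 100
```

Note that the attacker never writes this code path directly; the agent's parameter extraction does it for them.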
Immediate Defensive Measures
Organizations running the Azure Data Explorer MCP Server should immediately verify their deployment version and apply the patch from commit 0abe0ee. Beyond patching, implement these defensive layers:
1. Strict Input Validation
Validate table names against an explicit allowlist before any query construction:
ALLOWED_TABLES = {"orders", "customers", "products", "analytics_events"}

def validate_table_name(table_name: str) -> str:
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"Table '{table_name}' not in allowed set")
    return table_name

# Safe query construction
query = f"{validate_table_name(table_name)} | limit 100"
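When the set of tables is dynamic and an explicit allowlist is impractical, a strict identifier pattern check is a reasonable fallback. The pattern below is an assumption about typical naming conventions, not a KQL rule; adjust it to your schema:

```python
import re

# Assumed naming convention: leading letter/underscore, then letters,
# digits, or underscores, max 64 characters total.
_TABLE_NAME_RE = re.compile(r"[A-Za-z_][A-Za-z0-9_]{0,63}\Z")

def validate_table_identifier(table_name: str) -> str:
    if not _TABLE_NAME_RE.match(table_name):
        raise ValueError(f"Invalid table identifier: {table_name!r}")
    return table_name

validate_table_identifier("analytics_events")  # passes
# validate_table_identifier("orders | union Secrets //")  # raises ValueError
```

Pattern checks reject injection operators like pipes, semicolons, and comment markers outright, but an allowlist remains the stronger control because it also prevents access to valid-but-unauthorized tables.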
2. Parameterized Queries
Where possible, use parameterized query mechanisms that separate data from commands. KQL supports declared query parameters that the client binds separately from the query text (note that entity references such as table names have limited parameterization support in some contexts, so allowlist validation remains advisable alongside this):

# Safer approach: declare a KQL query parameter and bind it client-side
query = "declare query_parameters(TableName:string); table(TableName) | limit 100"
params = {"TableName": validated_table_name}
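The key property of parameterization can be demonstrated even without a live cluster: the query text stays constant no matter what the caller supplies, and hostile input is confined to the parameter map. This is a minimal sketch; in a real deployment the binding would go through the Kusto client's request properties:

```python
# Fixed template: user input never becomes part of this string.
QUERY_TEMPLATE = (
    "declare query_parameters(TableName:string); "
    "table(TableName) | limit 100"
)

def build_request(table_name: str) -> tuple[str, dict]:
    # The query text is immutable; input only enters the parameter map.
    return QUERY_TEMPLATE, {"TableName": table_name}

query, params = build_request("orders | union SensitiveTable //")
assert query == QUERY_TEMPLATE   # template unchanged by hostile input
assert "union" not in query      # payload never reaches the query text
```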
3. Principle of Least Privilege
Ensure the Azure Data Explorer service account used by the MCP server has minimal permissions. It should not have DROP, ALTER, or admin privileges unless absolutely necessary for specific tools. Create dedicated service principals for MCP server operations with read-only access to specific tables.
4. Query Auditing and Rate Limiting
Implement comprehensive logging of all KQL queries executed through MCP tools. Set up alerts for anomalous patterns such as:
- Queries attempting to access multiple tables in sequence
- Unusually complex query structures
- Queries containing semicolons or comment markers
- High-frequency requests from single AI agent sessions
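The alert rules above can be sketched as a simple pre-execution check. The thresholds and token list are illustrative assumptions to tune for your environment, not definitive detection rules:

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds and patterns; tune for your environment.
MAX_QUERIES_PER_MINUTE = 30
SUSPICIOUS_TOKENS = (";", "//", "external_table", "union")

_session_times: dict[str, deque] = defaultdict(deque)

def audit_query(session_id: str, query: str) -> list[str]:
    """Return alert reasons for this query (empty list means clean)."""
    alerts = [
        f"suspicious token {tok!r}"
        for tok in SUSPICIOUS_TOKENS
        if tok in query
    ]
    # Sliding one-minute window per AI agent session
    now = time.monotonic()
    times = _session_times[session_id]
    times.append(now)
    while times and now - times[0] > 60:
        times.popleft()
    if len(times) > MAX_QUERIES_PER_MINUTE:
        alerts.append("rate limit exceeded for session")
    return alerts
```

In practice the alerts would feed your logging pipeline rather than block outright, since tokens like "union" also appear in legitimate analytical queries.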
Architectural Recommendations
For teams building MCP servers, treat all AI-agent-provided parameters as untrusted input. The trust boundary exists at the MCP server interface, not at the AI model level. Even with prompt injection defenses in the LLM, the MCP server must independently validate all inputs.
Consider implementing a query builder pattern that constructs KQL through structured objects rather than string concatenation:
from dataclasses import dataclass, field
from typing import Optional

class SecurityError(Exception):
    """Raised when a query references an unauthorized entity."""

# Tables the MCP server is permitted to query
ALLOWED_TABLES = {"orders", "customers", "products", "analytics_events"}

@dataclass
class KQLQuery:
    table: str
    filters: list[str] = field(default_factory=list)
    limit: Optional[int] = 100

    def _is_valid_table(self, table: str) -> bool:
        # Validate table against schema allowlist
        return table in ALLOWED_TABLES

    def to_kql(self) -> str:
        if not self._is_valid_table(self.table):
            raise SecurityError("Invalid table reference")
        query_parts = [self.table]
        query_parts.extend(self.filters)
        if self.limit:
            query_parts.append(f"take {self.limit}")
        return " | ".join(query_parts)
This approach eliminates string interpolation vulnerabilities while providing clear audit trails of query intent.
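A builder still has to render filter values safely. For string literals, a minimal escaping helper along these lines (a sketch based on KQL's backslash-escaped double-quoted string syntax) keeps data from being interpreted as operators:

```python
def kql_string_literal(value: str) -> str:
    # Escape backslashes and double quotes, then wrap in double quotes,
    # following KQL's backslash-escaped string literal syntax.
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

# A hostile "value" stays inside the literal instead of becoming an operator:
kql_string_literal('x" | union SensitiveTable //')
```

Combined with table allowlisting, this confines every piece of user-influenced input to either a validated identifier or a quoted literal.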
Key Takeaways
CVE-2026-33980 demonstrates that MCP server security requires the same rigor as traditional web application security. The convenience of AI agent tool calling must not bypass established secure coding practices.
Organizations should audit their MCP server implementations for similar injection vulnerabilities, particularly where user-facing parameters influence database queries. The Azure Data Explorer patch provides a template for remediation, but proactive code review of all MCP tool handlers is essential.
For detailed technical information on this vulnerability, refer to the original NVD entry: https://nvd.nist.gov/vuln/detail/CVE-2026-33980
The intersection of AI agents and database access creates powerful capabilities—and equally powerful risks. Security must be built into the protocol layer, not retrofitted after incidents occur.