A critical vulnerability in Graphiti's MCP server demonstrates how graph database queries can become the Achilles' heel of AI agent security. CVE-2026-32247 reveals that attacker-controlled labels via SearchFilters.node_labels were concatenated directly into Cypher queries without validation, creating a pathway for prompt injection attacks through the knowledge graph layer.
This pattern—where LLM-controlled inputs flow directly into database queries—is becoming increasingly common as AI agents rely on knowledge graphs for memory and reasoning.
## How the Attack Works
Graphiti builds temporal context graphs that AI agents use to maintain memory. The MCP server exposes search functionality through SearchFilters.node_labels, which accepts labels to filter graph nodes during retrieval.
The vulnerability exists because these labels are interpolated directly into Cypher query text without sanitization:

```cypher
// Vulnerable query construction: {label} is attacker-controlled
MATCH (n:{label}) WHERE n.content CONTAINS $search_term
RETURN n
```
When an attacker crafts a malicious label like `Person) DETACH DELETE n //`, the resulting query becomes:

```cypher
MATCH (n:Person) DETACH DELETE n //) WHERE n.content CONTAINS $search_term
RETURN n
```

The injected `)` closes the node pattern early, and the `//` starts a Cypher line comment that swallows the rest of the template line, so the injected `DETACH DELETE` executes against every `Person` node.
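The flaw is easiest to see in a few lines of Python. This is a hypothetical reconstruction of the vulnerable pattern, not Graphiti's actual source:

```python
# Hypothetical reproduction of the vulnerable pattern: the label is
# pasted into the query text, while only search_term is a bound parameter.
def build_query(label: str) -> str:
    return f"MATCH (n:{label}) WHERE n.content CONTAINS $search_term RETURN n"

print(build_query("Person"))
print(build_query("Person) DETACH DELETE n //"))
```

The second call produces a query in which the comment marker disables everything the developer wrote after the label.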
This injection enables multiple attack vectors:

- Data exfiltration: Modifying RETURN clauses to extract sensitive properties
- Graph manipulation: Injecting DELETE or MERGE operations to corrupt the knowledge base
- Query logic bypass: Breaking WHERE clauses to retrieve unauthorized nodes
MCP deployments compound this risk. An attacker can exploit this through direct API access or by manipulating the LLM to generate malicious labels via prompt injection.
## Real-World Implications
When knowledge graphs serve as the memory layer for autonomous agents, injection vulnerabilities compromise more than data integrity—they undermine the agent's decision-making foundation.
An agent using Graphiti to maintain customer history could suffer:

- Poisoned understanding of customer preferences
- False context influencing recommendations
- Exfiltrated conversation history through crafted injections
- Corrupted relationship mappings between entities
The temporal nature of Graphiti makes this particularly insidious. Attackers can inject nodes appearing to have historical precedent, rewriting the agent's "memory" of past interactions.
## Defensive Measures

### Parameterized Query Patterns
Replace string concatenation with allowlist validation:
```python
# Vulnerable approach - NEVER USE
def search_nodes_vulnerable(label, search_term):
    query = f"MATCH (n:{label}) WHERE n.content CONTAINS $search_term RETURN n"
    return graph.run(query, search_term=search_term)


# Secure approach: Cypher cannot bind labels as query parameters,
# so validate them against an allowlist before interpolation.
ALLOWED_LABELS = {"Person", "Organization", "Event", "Concept"}

def search_nodes_secure(label, search_term):
    if label not in ALLOWED_LABELS:
        raise ValueError(f"Invalid label: {label}")
    query = f"MATCH (n:{label}) WHERE n.content CONTAINS $search_term RETURN n"
    return graph.run(query, search_term=search_term)
```
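When the label set cannot be enumerated ahead of time, a complementary option (not from the advisory, but standard Neo4j practice) is to backtick-quote dynamic identifiers, escaping embedded backticks by doubling them:

```python
def quote_label(label: str) -> str:
    # Doubling embedded backticks, then wrapping the whole string in
    # backticks, forces Neo4j to treat it as one identifier, not syntax.
    return "`" + label.replace("`", "``") + "`"

# The injected payload becomes a (nonexistent) label name, not Cypher:
query = f"MATCH (n:{quote_label('Person) DETACH DELETE n //')}) RETURN n"
```

An allowlist remains the stronger control; quoting only guarantees the input is parsed as an identifier.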
### Input Validation Middleware
Implement validation for all LLM-controlled inputs:
```python
import re

class GraphQueryValidator:
    LABEL_PATTERN = re.compile(r'^[A-Z][a-zA-Z0-9]*$')

    @staticmethod
    def validate_label(label: str) -> bool:
        return bool(GraphQueryValidator.LABEL_PATTERN.match(label))

    @staticmethod
    def sanitize_search_filters(filters: dict) -> dict:
        if "node_labels" in filters:
            filters["node_labels"] = [
                label for label in filters["node_labels"]
                if GraphQueryValidator.validate_label(label)
            ]
        return filters
```
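Dropped into a request path, the middleware silently filters anything that is not a bare label. The snippet below recaps the validation logic so it runs on its own:

```python
import re

# Same pattern as the validator above: one capitalized bare word.
LABEL_PATTERN = re.compile(r'^[A-Z][a-zA-Z0-9]*$')

def sanitize_node_labels(filters: dict) -> dict:
    filters = dict(filters)  # don't mutate the caller's dict
    if "node_labels" in filters:
        filters["node_labels"] = [
            label for label in filters["node_labels"]
            if LABEL_PATTERN.match(label)
        ]
    return filters

clean = sanitize_node_labels(
    {"node_labels": ["Person", "Person) DETACH DELETE n //", "Event"]}
)
```

The injected "label" never reaches query construction; only `Person` and `Event` survive.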
### Layered Content Moderation
Add moderation checks before processing inputs:
```python
from openai import OpenAI

client = OpenAI()

class SecurityError(Exception):
    pass

def safe_search(agent_input: str, filters: dict):
    # Layer 1: Content moderation
    response = client.moderations.create(input=agent_input)
    if response.results[0].flagged:
        raise SecurityError("Input flagged")
    # Layer 2: Label validation
    safe_filters = GraphQueryValidator.sanitize_search_filters(filters)
    # Layer 3: Execute query
    return graphiti.search(agent_input, filters=safe_filters)
```
### MCP Server Hardening
- Strict label schemas: Define allowed labels in configuration
- Read-only connections: Use database users lacking write permissions for search
- Query logging: Log all Cypher queries for anomaly detection
- Rate limiting: Implement per-user limits on graph operations
- Schema validation: Validate query results match expected structures
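The rate-limiting bullet can be as small as a per-caller token bucket. This is an illustrative sketch with placeholder limits, not Graphiti configuration:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow a burst of `capacity` graph operations per caller,
    refilled at `refill_per_sec` tokens per second."""

    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = defaultdict(lambda: float(capacity))
        self.last_seen = defaultdict(time.monotonic)

    def allow(self, caller: str) -> bool:
        # Refill proportionally to time elapsed, capped at capacity.
        now = time.monotonic()
        elapsed = now - self.last_seen[caller]
        self.last_seen[caller] = now
        self.tokens[caller] = min(
            float(self.capacity),
            self.tokens[caller] + elapsed * self.refill_per_sec,
        )
        if self.tokens[caller] >= 1.0:
            self.tokens[caller] -= 1.0
            return True
        return False

# With refill disabled, the fourth call in a burst is rejected:
bucket = TokenBucket(capacity=3, refill_per_sec=0.0)
results = [bucket.allow("agent-1") for _ in range(5)]
```

Buckets keyed per caller keep one compromised agent session from exhausting the graph for everyone else.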
## Key Takeaways
CVE-2026-32247 highlights a critical pattern: as AI agents gain access to sophisticated data stores, the security boundaries between natural language processing and database operations blur. The Graphiti vulnerability demonstrates that prompt injection can cascade into database injection without proper validation.
Immediate actions:

- Audit graph database queries for string concatenation patterns
- Implement label allowlists based on your schema
- Add input validation middleware between LLM outputs and database operations
- Deploy content moderation for agent-controlled inputs
- Review MCP server configurations for query vulnerabilities
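The first audit step can be partially mechanized. This coarse heuristic flags f-strings that interpolate a value into a Cypher MATCH clause; it will miss `+` concatenation and `.format` calls unless extended:

```python
import re

# Flags f-strings like f"MATCH (n:{label}) ...". Deliberately coarse:
# review every hit by hand rather than trusting the regex.
SUSPICIOUS = re.compile(r'f["\'].*MATCH\s*\(.*\{[^}]+\}')

def audit_line(line: str) -> bool:
    return bool(SUSPICIOUS.search(line))

hit = audit_line('query = f"MATCH (n:{label}) RETURN n"')
miss = audit_line('query = "MATCH (n:Person) RETURN n"')
```

Running this over a codebase gives a starting worklist for the allowlist migration.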
The original disclosure is available at NVD CVE-2026-32247.
Knowledge graphs remain essential for agent memory. Their security depends on treating every LLM-controlled input as potentially hostile and implementing defense-in-depth at every layer.