n8n Python Sandbox Escape: Critical Vulnerability in AI Agent Workflows (GHSA-8398-gmmx-564h)

A critical Python sandbox escape vulnerability in the n8n workflow automation platform (GHSA-8398-gmmx-564h) allows authenticated users to break out of the Python Code node sandbox and execute arbitrary code on the host system. The vulnerability affects deployments with Task Runners and Python enabled, and was patched in n8n v2.4.8. For AI agent operators using n8n as an orchestration layer, this represents a severe supply chain risk that could compromise entire agent infrastructure.

How the Attack Works

The vulnerability resides in n8n's Python Code node, which provides sandboxed execution of Python scripts within workflows. Sandboxing in workflow automation platforms typically relies on process isolation, restricted system calls, and limited access to host resources. However, the Python standard library contains numerous modules that can be weaponized to escape these containment boundaries.

Attackers with authenticated access to an n8n instance can craft Python payloads that leverage modules like os, subprocess, or pty to break out of the sandbox environment. Once escaped, the attacker gains the same privileges as the n8n Task Runner process, which often has broader system access than intended. This is particularly dangerous in containerized deployments where the escape might provide access to the container runtime or host Docker socket.
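The underlying problem is that denylisting imports is not enough: Python's object introspection lets code reach os without ever writing an import statement. As a minimal illustration of this class of technique (not the specific GHSA-8398-gmmx-564h payload, which is not fully public), the well-known _wrap_close walk recovers os module functions from any expression context:

```python
# Illustrative only: recover os functions without an import statement.
# Works in CPython because os is imported during interpreter startup.
subclasses = ().__class__.__base__.__subclasses__()   # every subclass of object
wrap_close = next(c for c in subclasses if c.__name__ == '_wrap_close')  # class defined in os.py
os_globals = wrap_close.__init__.__globals__          # the os module's namespace
# os_globals['system']('id')  # equivalent to os.system('id')
```

Filters that only block import statements or strip __builtins__ do nothing against this; meaningful containment has to happen at the process or container level.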

The attack surface expands significantly when n8n workflows process untrusted input or when multiple users share a single n8n instance. An attacker could embed malicious Python code in workflow data, trigger it through API calls, or exploit webhook-triggered workflows that pass user input directly to Python Code nodes.

Real-World Implications for AI Agent Deployments

AI agent architectures increasingly rely on workflow automation platforms like n8n to coordinate tool calls, manage state, and route between different LLM providers. When an agent framework delegates code execution to a compromised n8n instance, the blast radius extends beyond the workflow platform itself.

Consider a typical agent deployment where n8n handles Python-based data transformations between an LLM and external APIs. If that Python Code node is compromised, the attacker gains access to:

  • API credentials stored in n8n's credential vault
  • Network access to internal services reachable from the n8n host
  • File system access to workflow data and potentially the broader host
  • Ability to pivot to connected MCP (Model Context Protocol) servers

For operators running n8n in Kubernetes or Docker environments, a sandbox escape could lead to container breakout scenarios. If the n8n pod runs with elevated privileges or has access to the Kubernetes API, the compromise could extend to the entire cluster. This transforms a single workflow vulnerability into an infrastructure-wide incident.

Immediate Defensive Measures

If you're running n8n with Python Code nodes and Task Runners, verify your version immediately:

# Check current n8n version
n8n --version

# If below 2.4.8, upgrade immediately
docker pull n8nio/n8n:2.4.8
# or for npm installs
npm install -g n8n@2.4.8

Beyond patching, implement defense in depth for agent workflows:

1. Network Segmentation: Isolate n8n Task Runners in restricted network segments with explicit egress allowlisting. Use network policies (Kubernetes) or security groups (AWS/Azure) to limit what the runner can reach.

# Example Kubernetes NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: n8n-runner-restricted
spec:
  podSelector:
    matchLabels:
      app: n8n-runner
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: trusted-services
    ports:
    - protocol: TCP
      port: 443

2. Principle of Least Privilege: Run n8n Task Runners as non-root users with minimal filesystem permissions. Mount volumes read-only where possible and drop all Linux capabilities:

# Dockerfile snippet for hardened n8n runner
FROM n8nio/n8n:2.4.8
USER node
RUN mkdir -p /home/node/.n8n

# docker-compose.yml snippet (cap_drop is Compose syntax, not a Dockerfile directive)
services:
  n8n:
    image: n8nio/n8n:2.4.8
    read_only: true
    cap_drop:
      - ALL

3. Input Validation and Sanitization: Before passing any user input to Python Code nodes, implement strict validation. Consider using allowlisted function patterns rather than raw Python execution:

# Safer pattern: restricted execution environment
import ast
import builtins

ALLOWED_FUNCTIONS = {'sum', 'len', 'sorted', 'filter'}

def safe_execute(user_code: str, context: dict):
    tree = ast.parse(user_code)
    for node in ast.walk(tree):
        # Block imports and attribute access, the usual escape routes
        if isinstance(node, (ast.Import, ast.ImportFrom, ast.Attribute)):
            raise ValueError("Imports and attribute access are not allowed")
        if isinstance(node, ast.Call):
            # Inspect node.func directly to get the called name; allow only
            # plain calls to allowlisted builtins
            if not isinstance(node.func, ast.Name) or node.func.id not in ALLOWED_FUNCTIONS:
                raise ValueError("Only allowlisted functions may be called")
    # Execute with no builtins beyond the explicit allowlist
    safe_globals = {"__builtins__": {}}
    safe_globals.update({name: getattr(builtins, name) for name in ALLOWED_FUNCTIONS})
    exec(compile(tree, '<string>', 'exec'), safe_globals, context)

4. Monitoring and Alerting: Configure detection for anomalous Python execution patterns. Monitor for imports of suspicious modules (pty, socket, subprocess) and unexpected network connections from Task Runner processes.
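Static scanning of workflow code can complement runtime monitoring. A minimal sketch, using Python's ast module; the module denylist is illustrative and should be tuned to your environment:

```python
import ast

# Illustrative denylist; extend for your environment
SUSPICIOUS_MODULES = {'os', 'subprocess', 'pty', 'socket', 'ctypes'}

def flag_suspicious_imports(source: str) -> set:
    """Return the names of suspicious top-level modules imported by the code."""
    flagged = set()
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split('.')[0]
                if root in SUSPICIOUS_MODULES:
                    flagged.add(root)
        elif isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split('.')[0]
            if root in SUSPICIOUS_MODULES:
                flagged.add(root)
    return flagged
```

Note that this catches only literal import statements; as shown earlier, introspection-based escapes bypass static import checks entirely, so treat scanner hits as a triage signal rather than a guarantee.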

Audit Your Agent Infrastructure

This vulnerability highlights a broader pattern in AI agent architectures: the trust boundary between orchestration layers and code execution environments is often weaker than assumed. When building agent systems that delegate to external workflow platforms, treat every code execution node as a potential privilege escalation point.

Audit your current deployments:

  • Inventory all n8n instances and their versions
  • Identify workflows using Python Code nodes
  • Review which credentials and APIs those workflows access
  • Assess whether Task Runners run with elevated privileges
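The inventory steps can be partially automated against n8n's public REST API. A sketch, assuming an API key with workflow read access; the node-type string n8n-nodes-base.code and the language parameter value are assumptions based on the current Code node schema, so verify them against your n8n version:

```python
import json
import urllib.request

def find_python_code_nodes(workflows):
    """Return (workflow name, node name) pairs for Python-language Code nodes."""
    hits = []
    for wf in workflows:
        for node in wf.get('nodes', []):
            if (node.get('type') == 'n8n-nodes-base.code'
                    and node.get('parameters', {}).get('language') == 'python'):
                hits.append((wf.get('name', '?'), node.get('name', '?')))
    return hits

def fetch_workflows(base_url, api_key):
    """Fetch workflows via n8n's public REST API (pagination omitted for brevity)."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/workflows",
        headers={"X-N8N-API-KEY": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]
```

Run the result against your credential audit: any workflow flagged here that also holds credentials in n8n's vault deserves priority review.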

For agent developers, consider whether n8n's Python Code node is necessary for your use case. Many data transformations can be handled through native n8n nodes or delegated to purpose-built microservices with stricter isolation guarantees.

The GHSA-8398-gmmx-564h disclosure serves as a reminder that workflow automation platforms in the AI agent stack require the same security scrutiny as the LLMs and MCP servers they orchestrate. Patch promptly, segment aggressively, and never assume sandbox boundaries are impenetrable.


References:

  • Original advisory: GHSA-8398-gmmx-564h
  • n8n Security Updates: n8n.io/security
