Claude Desktop 0-Click RCE: Tool Poisoning via Malicious Calendar Events

A critical zero-click vulnerability in Claude Desktop Extensions has exposed over 10,000 users to remote code execution attacks through malicious Google Calendar events. This attack demonstrates how the Model Context Protocol (MCP) can be weaponized to chain seemingly low-risk data sources with high-privilege tools without user consent - a textbook example of tool poisoning in AI agent deployments.

The vulnerability, reported by CybersecurityNews, allows attackers to execute arbitrary code on victim systems simply by sending calendar invitations containing malicious payloads. This represents a significant escalation in AI agent attack vectors, moving beyond prompt injection to direct system compromise through trusted integrations.

How the Attack Works

The attack exploits the trust relationship between Claude Desktop Extensions and integrated services like Google Calendar. When a user has calendar integration enabled, the extension automatically processes calendar events and their associated data. Attackers craft malicious calendar events containing specially formatted payloads that appear as legitimate meeting details.

The core issue lies in how MCP handles data flow between different privilege levels. Calendar data, typically considered low-risk, gets processed by the extension and can trigger high-privilege tool executions. This creates an attack path where external attackers can influence internal system operations without any user interaction - the "0-click" component that makes this particularly dangerous.

Defensive Measures and Code Examples

Implementing proper defense against tool poisoning requires multiple layers of security controls. First, establish strict input validation for all external data sources:

import re
from typing import Dict, Any

class CalendarInputValidator:
    def __init__(self):
        self.allowed_patterns = {
            'title': r'^[a-zA-Z0-9\s\-_\.]+$',
            'description': r'^[a-zA-Z0-9\s\-_\.,:;@]+$'
        }

    def validate_calendar_event(self, event_data: Dict[str, Any]) -> bool:
        for field, pattern in self.allowed_patterns.items():
            if field in event_data:
                if not re.match(pattern, event_data[field]):
                    return False
        return True

# Usage in MCP extension
validator = CalendarInputValidator()
calendar_event = get_calendar_event()

if not validator.validate_calendar_event(calendar_event):
    log_security_event("Suspicious calendar event blocked")
    return None

Second, implement privilege separation between data ingestion and tool execution:

class PrivilegeManager:
    def check_permission(self, action: str, context: str) -> bool:
        if context == 'calendar_integration' and action in ['system_exec', 'file_write']:
            return False  # Block high-privilege actions from calendar context
        return True

Third, configure the Anthropic SDK with security-focused settings:

from anthropic import Anthropic

client = Anthropic(
    max_retries=0,  # Disable automatic retries
    timeout=30,     # Set reasonable timeout limits
)

Immediate Actions for Operators

Organizations using AI agents with calendar integrations should take immediate steps to assess and mitigate their exposure:

  1. Audit current integrations: Inventory all connected services and their permission levels. Disable calendar integration in Claude Desktop Extensions until patches are applied.

  2. Implement input sanitization: Deploy strict validation for all external data sources. Never trust data based solely on its source.

  3. Enable security monitoring: Log all tool executions triggered by external data sources. Set up alerts for suspicious patterns.

  4. Review privilege boundaries: Ensure low-risk data sources cannot trigger high-privilege operations.

  5. Update and patch: Monitor for security updates and apply patches immediately when available.

Key Takeaways

The Claude Desktop Extensions vulnerability demonstrates how tool poisoning attacks can transform trusted integrations into system compromise vectors. For AI agent developers and operators, this incident underscores three critical principles: validate all external inputs regardless of source, implement strict privilege separation, and maintain continuous monitoring of AI system behaviors.

Organizations should immediately audit their AI agent deployments and implement the defensive measures outlined above. As AI systems become more interconnected, proactive security measures become essential for maintaining system integrity.

Reference: Original vulnerability report - CybersecurityNews

AgentGuard360

Built for agents and humans. Comprehensive threat scanning, device hardening, and runtime protection. All without data leaving your machine.

Coming Soon