AI agents have access to your credentials, your data, and your systems. Protecting them isn't optional — it's the difference between a useful tool and an open door.
Why do AI agents need special security practices?
Traditional applications receive structured inputs through forms and APIs. AI agents receive natural language instructions that get interpreted and executed — often with broad permissions to read files, make API calls, and run commands.
This creates new attack surfaces:

- Prompt injection: malicious instructions hidden in seemingly innocent requests
- Command injection: dangerous payloads embedded in tool parameters
- Supply chain attacks: compromised packages in your dependency tree
- Credential theft: agents often have access to API keys, database passwords, and cloud credentials
A request like "summarize this file: ; rm -rf /" reads as an ordinary instruction to the model, but if the text after the colon is spliced unvalidated into a shell command, the semicolon terminates the intended command and the destructive payload runs next.
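To see why, consider a minimal sketch of the vulnerable pattern: a hypothetical `summarize_file` tool that splices its argument into a shell string (with a harmless `echo` standing in for the destructive payload):

```python
import subprocess

def summarize_file(filename: str) -> str:
    # VULNERABLE: the user-supplied filename is spliced into a shell
    # string, so any shell metacharacters in it become live commands.
    result = subprocess.run(
        f"head -n 5 {filename}", shell=True, capture_output=True, text=True
    )
    return result.stdout

# A benign-looking "filename" carrying an injected command
# (`echo pwned` stands in for `rm -rf /`).
print(summarize_file("notes.txt; echo pwned"))
# The shell runs `head -n 5 notes.txt`, then runs the injected `echo pwned`.
```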
What are the core security practices to follow?
1. Validate inputs before execution
Never pass raw user input directly to code execution, shell commands, or tool calls. Extract structured parameters through strict parsing and validate against explicit allowlists.
Key patterns:

- Block dangerous shell metacharacters: semicolons, pipes, backticks, dollar signs
- Allowlist permitted commands rather than denylisting bad ones
- Treat all user-supplied content as untrusted until validated
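A minimal sketch of these checks, assuming a tool that runs a command against a file; the allowlist, the filename rule, and the function name are illustrative, not a complete policy:

```python
import re
import subprocess

# Assumption: these are the only commands this agent legitimately runs.
ALLOWED_COMMANDS = {"head", "wc", "cat"}

# Assumption: valid filenames are simple relative paths with no
# shell metacharacters (no semicolons, pipes, backticks, or dollar signs).
SAFE_FILENAME = re.compile(r"[\w][\w.\-/]*")

def run_tool(command: str, filename: str) -> str:
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowlisted: {command!r}")
    if not SAFE_FILENAME.fullmatch(filename):
        raise ValueError(f"rejected unsafe filename: {filename!r}")
    # Argument list with shell=False (the default): the filename is
    # passed as a single argument and never interpreted by a shell.
    result = subprocess.run([command, filename], capture_output=True, text=True)
    return result.stdout

# run_tool("head", "notes.txt")            # executes safely
# run_tool("head", "notes.txt; rm -rf /")  # raises ValueError before anything runs
```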
2. Audit your dependencies
Supply chain attacks target AI developers specifically because these projects have large dependency trees and developers often install packages quickly. Real-world examples like the chai-as-chain NPM malware show how attackers embed credential-stealing code in packages that mimic legitimate libraries.
Defense steps:
- Check package names carefully for typosquatting
- Verify download counts and maintainer history before installing
- Run `npm audit` or `pip-audit` in your workflow
- Use lockfiles to pin exact versions
- Consider automated blocking of known malicious packages — tools like AgentGuard360 maintain databases of 11,000+ malicious packages and block them before installation
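As a lightweight complement to these steps, a pre-install check can flag candidate names that are suspiciously close to packages you already trust. A sketch using only the standard library; the trusted set and similarity threshold are assumptions for illustration:

```python
from difflib import SequenceMatcher

# Assumption: packages your project already depends on and trusts.
TRUSTED = {"requests", "numpy", "openai"}

def typosquat_warnings(candidate: str, threshold: float = 0.85) -> list[str]:
    """Flag trusted packages whose names are suspiciously similar
    to, but not the same as, the candidate being installed."""
    warnings = []
    for name in TRUSTED:
        if name == candidate:
            return []  # exact match: this is the package you meant
        similarity = SequenceMatcher(None, candidate, name).ratio()
        if similarity >= threshold:
            warnings.append(
                f"{candidate!r} is {similarity:.0%} similar to trusted {name!r}"
            )
    return warnings

print(typosquat_warnings("requets"))   # one-letter typo of requests: flagged
print(typosquat_warnings("requests"))  # exact trusted name: no warning
```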
3. Run with minimal permissions
Agents should have only the permissions they need for their specific task. If an agent only needs to read from S3, don't give it write access. If it doesn't need shell access, don't grant it.
This limits blast radius: a compromised agent with minimal permissions causes less damage than one with admin access.
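As a concrete illustration of scoping, here is what a read-only policy for that S3 case might look like, expressed as an AWS-style IAM document (the bucket name is a placeholder, and your cloud provider's syntax may differ):

```python
import json

# Placeholder bucket ARN for an agent that only needs to read one bucket.
BUCKET_ARN = "arn:aws:s3:::example-agent-data"

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Read actions only: no PutObject, no DeleteObject, no "*".
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

If this agent is compromised, the attacker can read one bucket and nothing more.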
4. Secure your credentials
Never hardcode API keys or passwords. Use environment variables or secret managers, and ensure credentials don't end up in git history, logs, or error messages.
For agents specifically:

- Create scoped API keys for each agent, not shared credentials
- Rotate keys regularly
- Monitor for credential exposure with automated scanning
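A minimal pattern for the first point, assuming the key arrives via the environment (the variable and function names are illustrative): fail fast when a secret is missing instead of falling back to a hardcoded default, and never echo the value itself:

```python
import os

def require_secret(name: str) -> str:
    """Read a secret from the environment; fail loudly if absent.
    The error names the variable but never prints its value."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# One scoped key per agent, never a shared credential.
SUMMARIZER_API_KEY = require_secret("SUMMARIZER_AGENT_API_KEY")
```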
5. Monitor and log everything
You can't detect problems you're not watching for. Log all agent actions, especially commands executed, files accessed, and external API calls. Set up alerts for unusual patterns.
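One lightweight way to get that visibility, assuming your tools are plain Python functions, is to route every call through an audit wrapper; the logger name and example tool are illustrative:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("agent.audit")

def audited(tool):
    """Log every invocation of an agent tool: name, arguments, outcome."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        log.info("tool=%s args=%r kwargs=%r", tool.__name__, args, kwargs)
        try:
            result = tool(*args, **kwargs)
        except Exception:
            log.exception("tool=%s failed", tool.__name__)
            raise  # surface the failure after recording it
        log.info("tool=%s ok", tool.__name__)
        return result
    return wrapper

@audited
def read_file(path: str) -> str:  # illustrative agent tool
    with open(path) as f:
        return f.read()
```

An alerting rule can then watch these log lines for unusual patterns, such as repeated failures or commands the agent has never run before.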
What are common mistakes to avoid?
- Passing unvalidated user input to shell commands or code interpreters
- Installing packages suggested by AI assistants without verification
- Running agents as root or with overly broad permissions
- Storing secrets in code, configs, or git history
- Skipping security because "it's just a side project" (attackers don't check your funding status)
- Treating security as a one-time setup instead of ongoing practice