Framework Guides
Google ADK Integration
Integrate Saf3AI with Google Agent Development Kit for automatic tracing and security.
Saf3AI provides native integration with Google’s Agent Development Kit (ADK), offering seamless security scanning and observability for your ADK agents.
Installation
```bash
pip install saf3ai-sdk google-adk
```
Quick Setup
Step 1: Set Environment Variables
Create a .env file in your project root:
```bash
SAF3AI_COLLECTOR_AGENT=https://your-collector-endpoint.com
SAF3AI_SERVICE_NAME=my-adk-agent
SAF3AI_API_KEY=your-api-key-here
SAF3AI_API_KEY_HEADER=X-API-Key
SAF3AI_API_ENDPOINT=https://your-scanner-endpoint.com
```
Step 2: Initialize the SDK
```python
import os

from dotenv import load_dotenv
from saf3ai_sdk import init

# Load environment variables
load_dotenv()

# Initialize SDK for ADK
init(
    service_name=os.getenv("SAF3AI_SERVICE_NAME", "adk-agent"),
    framework="adk",
    agent_id="my-adk-agent",
    api_key=os.getenv("SAF3AI_API_KEY"),
    api_key_header_name=os.getenv("SAF3AI_API_KEY_HEADER", "X-API-Key"),
    safeai_collector_agent=os.getenv("SAF3AI_COLLECTOR_AGENT"),
)
```
Important: Call init() once at the start of your application, before creating any agents.
Step 3: Define Security Policy
```python
def security_policy(text: str, scan_results: dict, text_type: str) -> bool:
    """
    Return True to allow, False to block.

    Args:
        text: The text that was scanned
        scan_results: Dict containing detection results
        text_type: Either "prompt" or "response"
    """
    detections = scan_results.get("detection_results", {})
    return not any(
        result.get("result") == "MATCH_FOUND"
        for result in detections.values()
    )
```
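Because the policy is plain Python, you can exercise it directly before wiring it into a callback. The payload shapes below are assumptions inferred from the fields the policy reads (`detection_results` mapping threat types to `{"result": ...}`), not the scanner's documented schema; the function is repeated so the snippet is self-contained:

```python
def security_policy(text: str, scan_results: dict, text_type: str) -> bool:
    # Block if any detector reported a match
    detections = scan_results.get("detection_results", {})
    return not any(
        result.get("result") == "MATCH_FOUND"
        for result in detections.values()
    )

# Hypothetical scan payloads for illustration
clean = {"detection_results": {"PromptInjection": {"result": "NO_MATCH"}}}
flagged = {"detection_results": {"PromptInjection": {"result": "MATCH_FOUND"}}}

print(security_policy("hi", clean, "prompt"))    # True: nothing detected
print(security_policy("hi", flagged, "prompt"))  # False: blocked
```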
Step 4: Create Security Callback
```python
from saf3ai_sdk import create_security_callback

# Create security callback
security_callback = create_security_callback(
    api_endpoint=os.getenv("SAF3AI_API_ENDPOINT"),
    on_scan_complete=security_policy,
    scan_responses=True,  # Optional: also scan AI responses
)
```
Step 5: Create ADK Agent with Callback
```python
from google.adk.agents import LlmAgent

# Create ADK agent with security callback
agent = LlmAgent(
    name="my_agent",
    model="gemini-2.5-flash",
    before_model_callback=security_callback,
)

# Use agent
response = agent.run("Hello, how are you?")
```
Complete Example
```python
import os

from dotenv import load_dotenv
from saf3ai_sdk import init, create_security_callback
from google.adk.agents import LlmAgent

# Step 1: Load environment
load_dotenv()

# Step 2: Initialize SDK
init(
    service_name=os.getenv("SAF3AI_SERVICE_NAME", "adk-agent"),
    framework="adk",
    agent_id="my-adk-agent",
    api_key=os.getenv("SAF3AI_API_KEY"),
    api_key_header_name=os.getenv("SAF3AI_API_KEY_HEADER", "X-API-Key"),
    safeai_collector_agent=os.getenv("SAF3AI_COLLECTOR_AGENT"),
)

# Step 3: Define security policy
def security_policy(text: str, scan_results: dict, text_type: str) -> bool:
    detections = scan_results.get("detection_results", {})
    return not any(
        result.get("result") == "MATCH_FOUND"
        for result in detections.values()
    )

# Step 4: Create security callback
security_callback = create_security_callback(
    api_endpoint=os.getenv("SAF3AI_API_ENDPOINT"),
    on_scan_complete=security_policy,
    scan_responses=True,
)

# Step 5: Create ADK agent with callback
agent = LlmAgent(
    name="my_agent",
    model="gemini-2.5-flash",
    before_model_callback=security_callback,
)

# Step 6: Use agent
response = agent.run("Hello, how are you?")
```
Multi-Agent Systems
For multi-agent ADK systems, apply the security callback to each agent:
```python
from google.adk.agents import LlmAgent, SequentialAgent

# Create security callback (reusable)
security_callback = create_security_callback(
    api_endpoint=os.getenv("SAF3AI_API_ENDPOINT"),
    on_scan_complete=security_policy,
    scan_responses=True,
)

# Agent 1: Research
research_agent = LlmAgent(
    name="research_agent",
    model="gemini-2.5-flash",
    instruction="Research the given topic and provide key findings.",
    before_model_callback=security_callback,
    output_key="research_findings",
)

# Agent 2: Summarizer
summarizer_agent = LlmAgent(
    name="summarizer_agent",
    model="gemini-2.5-flash",
    instruction="Summarize the research findings: {research_findings}",
    before_model_callback=security_callback,
)

# Create pipeline
pipeline = SequentialAgent(
    name="research_pipeline",
    sub_agents=[research_agent, summarizer_agent],
)

# Run pipeline
result = pipeline.run("Tell me about quantum computing")
```
Workflow Agents
For SequentialAgent, ParallelAgent, and LoopAgent workflows:
```python
from google.adk.agents import SequentialAgent, ParallelAgent, LlmAgent

# Create multiple agents with security
agent_a = LlmAgent(
    name="agent_a",
    model="gemini-2.5-flash",
    before_model_callback=security_callback,
    output_key="result_a",
)

agent_b = LlmAgent(
    name="agent_b",
    model="gemini-2.5-flash",
    before_model_callback=security_callback,
    output_key="result_b",
)

merger = LlmAgent(
    name="merger",
    model="gemini-2.5-flash",
    instruction="Combine results: {result_a} and {result_b}",
    before_model_callback=security_callback,
)

# Parallel execution then merge
pipeline = SequentialAgent(
    name="parallel_pipeline",
    sub_agents=[
        ParallelAgent(
            name="parallel_fetch",
            sub_agents=[agent_a, agent_b],
        ),
        merger,
    ],
)
```
Using ADK Tools with Security
When defining custom tools for your ADK agents, they’re automatically traced:
```python
from google.adk.agents import LlmAgent

def search_database(query: str) -> dict:
    """Search the company database for information."""
    # Your search logic here
    return {"results": ["Result 1", "Result 2"]}

def send_email(to: str, subject: str, body: str) -> dict:
    """Send an email to a recipient."""
    # Your email logic here
    return {"status": "sent"}

# Create agent with tools
agent_with_tools = LlmAgent(
    name="assistant",
    model="gemini-2.5-flash",
    instruction="You are a helpful assistant with access to database and email tools.",
    tools=[search_database, send_email],
    before_model_callback=security_callback,
)
```
Security Policy Examples
Basic Policy (Block All Threats)
```python
def basic_policy(text: str, scan_results: dict, text_type: str) -> bool:
    """Block any detected threats."""
    detections = scan_results.get("detection_results", {})
    return not any(
        result.get("result") == "MATCH_FOUND"
        for result in detections.values()
    )
```
Selective Policy (Block Specific Threats)
```python
def selective_policy(text: str, scan_results: dict, text_type: str) -> bool:
    """Block only specific threat types."""
    detections = scan_results.get("detection_results", {})
    blocked_types = {"CSAM", "Dangerous", "HateSpeech", "PromptInjection"}
    for threat_type, result in detections.items():
        if threat_type in blocked_types and result.get("result") == "MATCH_FOUND":
            return False
    return True
```
Custom Guardrails Policy
```python
def custom_guardrails_policy(text: str, scan_results: dict, text_type: str) -> bool:
    """Use custom guardrails configured for this agent."""
    custom_matches = scan_results.get("custom_rule_matches", [])
    # Block if any custom rule matched with a "block" action
    for match in custom_matches:
        if match.get("action") == "block":
            return False
    return True
```
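As with the basic policy, this one can be checked in isolation. The `custom_rule_matches` entries below are hypothetical and shaped only after the fields the policy reads (a list of dicts with an `action` key); the function is repeated so the snippet runs on its own:

```python
def custom_guardrails_policy(text: str, scan_results: dict, text_type: str) -> bool:
    # Block only when a matched custom rule carries a "block" action
    custom_matches = scan_results.get("custom_rule_matches", [])
    for match in custom_matches:
        if match.get("action") == "block":
            return False
    return True

# Hypothetical scan results for illustration
blocked = {"custom_rule_matches": [{"rule": "no-pii", "action": "block"}]}
warned = {"custom_rule_matches": [{"rule": "tone", "action": "warn"}]}

print(custom_guardrails_policy("x", blocked, "prompt"))  # False: rule blocks
print(custom_guardrails_policy("x", warned, "prompt"))   # True: warn-only rule
```

Note that rules with non-blocking actions (such as a warn) pass through; only an explicit `"block"` action stops the request.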
ADK Callback Hook Points
Saf3AI integrates at these ADK lifecycle points:
| Hook | Purpose | Usage |
|---|---|---|
| `before_model_callback` | Scan prompts before LLM call | Block malicious inputs |
| `after_model_callback` | Scan responses after LLM call | Filter harmful outputs |
| `before_tool_callback` | Intercept tool calls | Audit tool usage |
| `after_tool_callback` | Process tool results | Validate tool outputs |
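The tool-level hooks are where auditing would plug in. The sketch below is framework-agnostic: a plain audit function with a hypothetical `(tool_name, args)` signature (the actual ADK tool-callback signature may differ — check the version you have installed) that records each call before it runs:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)

# In-memory audit trail; a real deployment would persist this
audit_log: list[dict] = []

def audit_tool_call(tool_name: str, args: dict) -> None:
    """Record a tool invocation for later review.

    Sketch of a before_tool_callback body; the signature is an
    assumption for illustration, not the ADK-defined one.
    """
    entry = {"tool": tool_name, "args": args}
    audit_log.append(entry)
    logging.info("tool call: %s", json.dumps(entry))

audit_tool_call("search_database", {"query": "quantum computing"})
```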
Automatic Tracing
With ADK integration, Saf3AI automatically captures:
- Agent executions: Name, timing, inputs/outputs
- LLM calls: Model, tokens, latency, prompts/responses
- Tool usage: Tool name, arguments, results
- Workflow steps: Sequential, parallel, and loop execution
- Errors: Full exception details and context
Configuration Options
```python
security_callback = create_security_callback(
    api_endpoint=os.getenv("SAF3AI_API_ENDPOINT"),  # Required: Scanner API URL
    on_scan_complete=security_policy,               # Optional: Custom policy
    scan_responses=True,                            # Optional: Scan LLM outputs
    timeout=30,                                     # Optional: API timeout (seconds)
)
```
Troubleshooting
SDK not initializing
- Check that all environment variables are set in the `.env` file
- Verify the `.env` file is in the project root
- Ensure `load_dotenv()` is called before `init()`
Callbacks not working
- Verify the SDK is initialized before creating callbacks
- Check that `framework="adk"` is set in `init()`
- Ensure the `google-adk` package is installed: `pip install google-adk`
- Verify the callback is passed to the agent before invocation
Agent not blocking threats
- Check your `on_scan_complete` function logic
- Verify the scanner API is accessible
- Check that the scan results format matches your policy
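When initialization fails silently, the usual cause is an unset variable. A small preflight helper like this (generic stdlib code, not part of the Saf3AI SDK) can confirm what `load_dotenv()` actually loaded before you call `init()`:

```python
import os

# Variables the setup steps above rely on
REQUIRED_VARS = [
    "SAF3AI_COLLECTOR_AGENT",
    "SAF3AI_SERVICE_NAME",
    "SAF3AI_API_KEY",
    "SAF3AI_API_ENDPOINT",
]

def missing_env_vars(required=REQUIRED_VARS) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

# Run after load_dotenv() and before init()
missing = missing_env_vars()
print("missing:", missing)
```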