Agent Recipes¶
Ready-to-use code patterns for building agents with the Akordi Agents SDK. Each recipe is complete and can be copied directly into your project.
Working Examples in Repository¶
The examples/ folder contains production-ready implementations you can run directly:
| Example File | Description | Run Command |
|---|---|---|
examples/lang_tool.py |
LangGraph agent with weather tool | poetry run python examples/lang_tool.py |
examples/simple_tool.py |
Agent with tool (no LangGraph) | poetry run python examples/simple_tool.py |
examples/talk_to_document.py |
RAG agent with knowledge base | poetry run python examples/talk_to_document.py |
examples/agent_with_guardrails.py |
Agent with AWS Bedrock guardrails | poetry run python examples/agent_with_guardrails.py |
examples/agent_without_guardrails.py |
Basic agent without guardrails | poetry run python examples/agent_without_guardrails.py |
examples/agent_orchestration_langgraph.py |
Multi-agent orchestration | poetry run python examples/agent_orchestration_langgraph.py |
examples/agent_to_agent_flow.py |
Agent-to-agent communication | poetry run python examples/agent_to_agent_flow.py |
Setup: Copy examples/.env.example to examples/.env and configure your AWS credentials and API keys.
Example Code Explanations¶
examples/lang_tool.py - LangGraph Agent with Tool¶
This example demonstrates how to build a LangGraph-enabled agent with custom tool support.
What it does:
- Creates a
WeatherToolthat calls the WeatherAPI.com service - Creates a custom
LangGraphLLMServicewrapper for AWS Bedrock - Creates a
RiskValidatorfor input validation (guardrails) - Uses
create_langgraph_agent()to build the agent with all components
Key code sections:
# 1. Define a custom tool by extending the Tool base class
class WeatherTool(Tool):
def get_name(self) -> str:
return "weather_tool" # Unique identifier for the tool
def get_description(self) -> str:
# Description helps the LLM decide when to use this tool
return "Fetches current weather information for any city"
def get_input_schema(self) -> dict:
# JSON Schema defines what parameters the tool accepts
return {
"type": "object",
"properties": {
"city": {"type": "string", "description": "City name"}
},
"required": [] # City is optional, defaults to Auckland
}
def execute(self, **kwargs) -> dict:
# Actual tool logic - calls external API
city = kwargs.get("city", "Auckland, New Zealand")
response = requests.get(url, params={"key": API_KEY, "q": city})
return {"weather_data": response.json(), "success": True}
# 2. Create the agent using the factory function
agent = create_langgraph_agent(
name="weather_risk_agent",
llm_service=LangGraphLLMService(model_id), # Custom LLM wrapper
tools=[WeatherTool()], # List of tools
validator=RiskValidator(), # Input validation
config={
"enable_validation": True, # Enable guardrails
"enable_tools": True, # Enable tool execution
"enable_tracing": True, # Enable LangSmith tracing
"max_iterations": 10, # Max tool call loops
"temperature": 0.1, # LLM temperature
},
)
# 3. Process a request - agent automatically decides to use tools
response = agent.process_request({
"query": "What is the weather in London?",
"system_message": "You are a weather assistant...",
})
How it works:
- User sends query → Agent receives request
RiskValidatorvalidates input (guardrails check)- LLM analyzes query and decides to use
weather_tool WeatherTool.execute()calls WeatherAPI and returns data- LLM generates final response using tool results
- Response returned to user with tool usage metadata
examples/simple_tool.py - Agent with Tool (No LangGraph)¶
This example shows how to build an agent with manual tool handling without LangGraph workflows.
What it does:
- Creates a
SimpleToolLLMServicethat handles tool execution manually - Implements a tool execution loop inside the LLM service
- Uses
AgentBuilderwithout.with_langgraph()to create a basic agent
Key difference from lang_tool.py:
# In simple_tool.py - Manual tool handling in LLM service
class SimpleToolLLMService(LLMServiceInterface):
def generate_response(self, prompt, **kwargs):
# Manual tool execution loop
while iteration < self.max_tool_iterations:
# 1. Call LLM with tools available
llm_response = self.llm_service.invoke_model(prompt, tools=tools_config)
# 2. Check if LLM wants to use tools
tool_uses = self._extract_tool_uses(llm_response.content)
if not tool_uses:
# No tools needed - return final response
return {"response": llm_response.content}
# 3. Execute each requested tool
for tool_use in tool_uses:
result = self._execute_tool(tool_use["name"], tool_use["input"])
tool_results.append(result)
# 4. Send tool results back to LLM for next iteration
messages.append({"role": "user", "content": tool_results})
# Build agent WITHOUT LangGraph
builder = AgentBuilder("agent_name")
builder.with_llm_service_instance(SimpleToolLLMService(model_id, tools=[WeatherTool()]))
# NO .with_langgraph() call - tools handled manually
agent = builder.build()
When to use this approach:
- When you need custom tool execution logic
- When you want more control over the tool loop
- When LangGraph's automatic workflow doesn't fit your use case
examples/talk_to_document.py - RAG Agent with Knowledge Base¶
This example demonstrates Retrieval-Augmented Generation (RAG) using AWS Bedrock Knowledge Base.
What it does:
- Creates a custom LLM service that integrates with knowledge base search
- Retrieves relevant documents from the knowledge base
- Uses retrieved context to generate accurate answers
Key code sections:
# 1. Create search handler for knowledge base
from akordi_agents.handlers.search_handler import get_search_handler
search_handler = get_search_handler() # Gets AWSBedrockSearchHandler
# 2. Custom LLM service that uses knowledge base
class DocumentQALLMService(LLMServiceInterface):
def generate_response(self, prompt, **kwargs):
# Build search configuration
knowledge_base_config = SearchConfig(
knowledge_base_id=kwargs.get("knowledge_base_id"),
query=prompt,
retrieval_config=self.search_handler.get_filter_config(params),
)
# Call LLM with knowledge base search
llm_response = self.llm_service.invoke_model(
prompt=prompt,
knowledge_base=knowledge_base_config, # Enables RAG
)
return {
"response": llm_response.content,
"search_results": llm_response.search_results, # Retrieved docs
}
# 3. Process request with knowledge base
response = agent.process_request({
"query": "What are the safety requirements?",
"knowledge_base_id": "YOUR_KB_ID", # Required for RAG
"max_results": 10, # Number of docs to retrieve
})
How RAG works:
- User query → Search knowledge base for relevant documents
- Retrieved documents added to LLM context
- LLM generates answer based on retrieved context
- Response includes both answer and source documents
examples/agent_with_guardrails.py - AWS Bedrock Guardrails¶
This example shows how to integrate AWS Bedrock Guardrails for content safety.
What it does:
- Creates (or uses existing) AWS Bedrock Guardrail
- Wraps LLM calls with guardrail protection
- Tests various attack vectors to demonstrate guardrail effectiveness
Key code sections:
# 1. Create guardrail using BedrockGuardrail helper
from akordi_agents.guard_kit.bedrock.bedrock import BedrockGuardrail
guardrail_client = BedrockGuardrail(region_name="ap-southeast-2")
response = guardrail_client.create_default_guardrail(name_prefix="my_guardrail")
guardrail_id = response["guardrailId"]
# 2. Custom LLM service with guardrail integration
class GuardrailEnabledLLMService(LLMServiceInterface):
def __init__(self, model_id, guardrail_id, guardrail_version="1"):
self.guardrail_id = guardrail_id
self.guardrail_version = guardrail_version
def generate_response(self, prompt, **kwargs):
# Pass guardrail parameters to LLM call
llm_response = self.llm_service.invoke_model(
prompt=prompt,
guardrailIdentifier=self.guardrail_id, # Guardrail ID
guardrailVersion=self.guardrail_version, # Version
trace="ENABLED", # Enable trace logging
)
return {"response": llm_response.content}
# 3. Create agent with guardrails
agent = create_langgraph_agent(
name="safe_agent",
llm_service=GuardrailEnabledLLMService(model_id, guardrail_id),
config={"enable_validation": False}, # Using AWS guardrails instead
)
What guardrails protect against:
- Prompt injection attacks ("ignore previous instructions...")
- PII exposure (SSN, email, phone numbers)
- Harmful content requests
- Jailbreak attempts
- System prompt extraction
examples/agent_orchestration_langgraph.py - Multi-Agent Orchestration¶
This example demonstrates coordinating multiple specialized agents.
What it does:
- Creates specialized agents (weather, risk, finance, HR)
- Registers agents with capabilities in an
AgentRegistry - Uses orchestration patterns (Coordinator, Peer-to-Peer, Hierarchical)
Key code sections:
# 1. Create specialized agents with different tools
weather_agent = create_langgraph_agent(
name="weather_specialist",
llm_service=llm_service,
tools=[WeatherTool()],
)
risk_agent = create_langgraph_agent(
name="risk_specialist",
llm_service=llm_service,
tools=[RiskAnalysisTool()],
)
# 2. Register agents with capabilities
from akordi_agents.core.langgraph.orchestration import (
AgentRegistry, AgentCapability, AgentRole,
CoordinatorOrchestrator,
)
registry = AgentRegistry()
registry.register_agent(
agent_id="weather_specialist",
agent=weather_agent,
capabilities=[AgentCapability(
name="weather_analysis",
description="Analyze weather conditions",
domains=["weather", "climate"],
)],
role=AgentRole.WORKER,
)
# 3. Create orchestrator with pattern
orchestrator = CoordinatorOrchestrator(
coordinator_id="coordinator",
registry=registry,
)
# 4. Execute orchestrated query
async def run():
result = await orchestrator.execute(
query="Assess weather and construction risks in London",
context={"project_type": "construction"},
)
return result
Orchestration patterns:
| Pattern | How it works |
|---|---|
| Coordinator | Central agent delegates tasks to specialists, aggregates results |
| Peer-to-Peer | Agents communicate directly, share information |
| Hierarchical | Multi-level delegation with supervisors and workers |
Recipe 1: Basic Conversational Agent¶
Use case: Simple question-answering agent without tools or knowledge base.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
Required environment variables:
Complete code with explanations:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
# Step 1: Set AWS region for Bedrock API calls
os.environ["AWS_REGION"] = "us-east-1"
# Step 2: Create LLM service
# AWSBedrockService connects to AWS Bedrock and handles Claude API calls
# It uses the default model (Claude 3 Sonnet) unless specified
llm_service = AWSBedrockService()
# Step 3: Create agent using factory function
# create_langgraph_agent() builds a LangGraphAgent with:
# - ToolUseWorkflow for request processing
# - State management across workflow nodes
# - Automatic response generation
agent = create_langgraph_agent(
name="conversational_agent", # Unique agent identifier
llm_service=llm_service, # LLM service for generating responses
config={
"temperature": 0.1, # Low temperature = more deterministic responses
"max_iterations": 5, # Max workflow iterations (for tool loops)
}
)
# Step 4: Process a request
# The request dict contains all parameters for the LLM call
response = agent.process_request({
"query": "What is machine learning?", # User's question
"system_message": "You are a helpful AI assistant. Provide clear, concise answers.", # System prompt
"max_tokens": 1000, # Max response length
})
# Step 5: Handle the response
# Response structure:
# {
# "success": True/False,
# "llm_response": {
# "response": "The generated text...",
# "model_info": {"model_id": "...", "provider": "bedrock"},
# "token_usage": {"input_tokens": X, "output_tokens": Y}
# }
# }
if response.get("success"):
answer = response["llm_response"]["response"]
print(answer)
else:
print(f"Error: {response.get('error')}")
What happens internally:
create_langgraph_agent()creates aLangGraphAgentwrapping aCustomAgentprocess_request()initializes the workflow state with query and parameters- The workflow executes: Validation → Response Generation
AWSBedrockServicecalls Claude via AWS Bedrock API- Response is formatted and returned with metadata
Recipe 2: Agent with Custom Tool¶
Use case: Agent that can execute custom tools to perform actions.
Working example: See examples/lang_tool.py for a production-ready implementation with WeatherAPI integration.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.tools import Tool
Required environment variables:
Complete code with explanations:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.tools import Tool
os.environ["AWS_REGION"] = "us-east-1"
# Step 1: Define a custom tool by extending the Tool base class
# Every tool must implement 4 methods: get_name, get_description, get_input_schema, execute
class WeatherTool(Tool):
"""Tool to get weather information for a location."""
def get_name(self) -> str:
# Unique identifier - LLM uses this name to call the tool
return "get_weather"
def get_description(self) -> str:
# Description helps LLM decide WHEN to use this tool
# Be specific about what the tool does and when to use it
return "Get the current weather for a specified city or location"
def get_input_schema(self) -> dict:
# JSON Schema defines the tool's input parameters
# LLM uses this to know what arguments to pass
return {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city name, e.g., 'London' or 'New York'"
}
},
"required": ["location"] # Required parameters
}
def execute(self, **kwargs) -> dict:
# Actual tool logic - called when LLM decides to use this tool
# kwargs contains the parameters from get_input_schema
location = kwargs.get("location", "Unknown")
# Replace with actual weather API call in production
return {
"location": location,
"temperature": "22°C",
"conditions": "Partly cloudy",
"humidity": "65%"
}
# Step 2: Create tool instance
weather_tool = WeatherTool()
# Step 3: Create LLM service
llm_service = AWSBedrockService()
# Step 4: Create agent with tool
# When tools are provided, the agent creates a ToolUseWorkflow that:
# 1. Sends query to LLM with tool definitions
# 2. If LLM requests tool use, executes the tool
# 3. Sends tool results back to LLM
# 4. Repeats until LLM generates final response
agent = create_langgraph_agent(
name="weather_agent",
llm_service=llm_service,
tools=[weather_tool], # List of Tool instances
config={
"enable_tools": True, # Enable tool execution in workflow
"temperature": 0.1, # Low temperature for consistent tool use
"max_iterations": 10, # Max tool call iterations
}
)
# Step 5: Process request - agent automatically decides to use tools
response = agent.process_request({
"query": "What's the weather like in London?",
"system_message": "You are a helpful weather assistant. Use the weather tool to get current conditions.",
"max_tokens": 500,
})
# Step 6: Handle response with tool metadata
if response.get("success"):
# Main response text
print(response["llm_response"]["response"])
# Check which tools were used (useful for debugging/logging)
tools_used = response.get("workflow_metadata", {}).get("tools_used", [])
print(f"Tools used: {tools_used}")
# Tool results are also available
tool_results = response.get("workflow_metadata", {}).get("tool_results", [])
How tool execution works:
User Query: "What's the weather in London?"
↓
LLM analyzes query and available tools
↓
LLM decides: "I should use get_weather tool with location='London'"
↓
Agent executes: WeatherTool.execute(location="London")
↓
Tool returns: {"location": "London", "temperature": "22°C", ...}
↓
LLM receives tool result and generates final response
↓
Response: "The weather in London is 22°C and partly cloudy..."
Recipe 3: Agent with Knowledge Base (RAG)¶
Use case: Agent that searches a knowledge base to answer questions.
Working example: See examples/talk_to_document.py for a production-ready RAG implementation.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.handlers import AWSBedrockSearchHandler
Required environment variables:
Complete code:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.handlers import AWSBedrockSearchHandler
os.environ["AWS_REGION"] = "us-east-1"
# Create search handler for knowledge base
search_handler = AWSBedrockSearchHandler(
knowledge_base_id="YOUR_KNOWLEDGE_BASE_ID", # Replace with your KB ID
)
# Create LLM service
llm_service = AWSBedrockService()
# Create RAG agent
agent = create_langgraph_agent(
name="rag_agent",
llm_service=llm_service,
search_handler=search_handler,
config={
"temperature": 0.1,
"max_iterations": 5,
}
)
# Process request with knowledge base search
response = agent.process_request({
"query": "What are the company policies on remote work?",
"system_message": "You are a helpful assistant. Answer based on the provided context from the knowledge base.",
"knowledge_base_id": "YOUR_KNOWLEDGE_BASE_ID",
"max_tokens": 1000,
})
if response.get("success"):
print("Answer:", response["llm_response"]["response"])
# Show sources
search_results = response.get("search_results", [])
if search_results:
print("\nSources:")
for result in search_results[:3]:
print(f" - {result.get('text', '')[:100]}...")
Recipe 4: Agent with Chat History Persistence¶
Use case: Agent that remembers conversation context across multiple messages.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
Required environment variables:
AWS_REGION=us-east-1
CHAT_SESSIONS_TABLE_NAME=my-chat-sessions
CHAT_MESSAGES_TABLE_NAME=my-chat-messages
Complete code:
import os
import uuid
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
# Configure environment
os.environ["AWS_REGION"] = "us-east-1"
os.environ["CHAT_SESSIONS_TABLE_NAME"] = "my-chat-sessions"
os.environ["CHAT_MESSAGES_TABLE_NAME"] = "my-chat-messages"
# Create agent
llm_service = AWSBedrockService()
agent = create_langgraph_agent(
name="chat_agent",
llm_service=llm_service,
config={"temperature": 0.1}
)
# Generate IDs for conversation
user_id = "user-123"
chat_id = str(uuid.uuid4())
# First message
response1 = agent.process_request({
"query": "Hi! My name is Sarah and I'm a software engineer.",
"user_id": user_id,
"chat_id": chat_id,
"system_message": "You are a friendly assistant. Remember details about the user.",
"max_tokens": 500,
})
print("Agent:", response1["llm_response"]["response"])
# Second message - agent remembers context
response2 = agent.process_request({
"query": "What's my name and profession?",
"user_id": user_id,
"chat_id": chat_id,
"max_tokens": 500,
})
print("Agent:", response2["llm_response"]["response"])
# Expected: "Your name is Sarah and you're a software engineer."
Recipe 5: Agent with Input Validation (Guardrails)¶
Use case: Agent with content safety validation before processing.
Working example: See examples/agent_with_guardrails.py for AWS Bedrock Guardrails integration.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.core.interfaces import ValidatorInterface
from akordi_agents.models.validation_models import ValidationResult, ValidationError
from akordi_agents.services import AWSBedrockService
Required environment variables:
Complete code:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.core.interfaces import ValidatorInterface
from akordi_agents.models.validation_models import ValidationResult, ValidationError
from akordi_agents.services import AWSBedrockService
os.environ["AWS_REGION"] = "us-east-1"
class ContentSafetyValidator(ValidatorInterface):
"""Validator that checks for prohibited content."""
def __init__(self):
self.prohibited_terms = ["harmful", "illegal", "dangerous", "hack"]
self.max_query_length = 5000
def validate(self, data: dict) -> ValidationResult:
query = data.get("query", "").lower()
errors = []
# Check query length
if len(query) > self.max_query_length:
errors.append(ValidationError(
field="query",
message=f"Query exceeds maximum length of {self.max_query_length} characters"
))
# Check for prohibited content
for term in self.prohibited_terms:
if term in query:
errors.append(ValidationError(
field="query",
message=f"Query contains prohibited content"
))
break
return ValidationResult(
is_valid=len(errors) == 0,
errors=errors
)
def get_validator_name(self) -> str:
return "content_safety_validator"
# Create validator
validator = ContentSafetyValidator()
# Create agent with validation
llm_service = AWSBedrockService()
agent = create_langgraph_agent(
name="safe_agent",
llm_service=llm_service,
validator=validator,
config={
"enable_validation": True,
"temperature": 0.1,
}
)
# Test with safe query
response = agent.process_request({
"query": "What is Python programming?",
"system_message": "You are a helpful assistant.",
"max_tokens": 500,
})
if response.get("success"):
print("Response:", response["llm_response"]["response"])
else:
print("Validation errors:", response.get("validation_errors", []))
Recipe 6: Agent with Multiple Tools¶
Use case: Agent with multiple tools for different capabilities.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.tools import Tool
Required environment variables:
Complete code:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.tools import Tool
os.environ["AWS_REGION"] = "us-east-1"
class CalculatorTool(Tool):
"""Tool for mathematical calculations."""
def get_name(self) -> str:
return "calculator"
def get_description(self) -> str:
return "Perform mathematical calculations. Supports basic operations: +, -, *, /, and parentheses."
def get_input_schema(self) -> dict:
return {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate, e.g., '2 + 3 * 4'"
}
},
"required": ["expression"]
}
def execute(self, **kwargs) -> dict:
expression = kwargs.get("expression", "0")
try:
# Safe evaluation (in production, use a proper math parser)
allowed_chars = set("0123456789+-*/().% ")
if all(c in allowed_chars for c in expression):
result = eval(expression)
return {"result": result, "expression": expression}
return {"error": "Invalid characters in expression"}
except Exception as e:
return {"error": str(e)}
class DateTimeTool(Tool):
"""Tool for date and time operations."""
def get_name(self) -> str:
return "datetime"
def get_description(self) -> str:
return "Get current date, time, or calculate date differences"
def get_input_schema(self) -> dict:
return {
"type": "object",
"properties": {
"operation": {
"type": "string",
"enum": ["current_date", "current_time", "current_datetime"],
"description": "The datetime operation to perform"
}
},
"required": ["operation"]
}
def execute(self, **kwargs) -> dict:
from datetime import datetime
operation = kwargs.get("operation", "current_datetime")
now = datetime.now()
if operation == "current_date":
return {"date": now.strftime("%Y-%m-%d")}
elif operation == "current_time":
return {"time": now.strftime("%H:%M:%S")}
else:
return {"datetime": now.strftime("%Y-%m-%d %H:%M:%S")}
# Create tools
calculator = CalculatorTool()
datetime_tool = DateTimeTool()
# Create agent with multiple tools
llm_service = AWSBedrockService()
agent = create_langgraph_agent(
name="multi_tool_agent",
llm_service=llm_service,
tools=[calculator, datetime_tool],
config={
"enable_tools": True,
"temperature": 0.1,
"max_iterations": 10,
}
)
# Test calculation
response = agent.process_request({
"query": "What is 15% of 250, and what's the current date?",
"system_message": "You are a helpful assistant with access to calculator and datetime tools.",
"max_tokens": 500,
})
if response.get("success"):
print(response["llm_response"]["response"])
print("Tools used:", response.get("workflow_metadata", {}).get("tools_used", []))
Recipe 7: Agent from DynamoDB Configuration¶
Use case: Load agent configuration from DynamoDB for dynamic agent management.
Required imports:
Required environment variables:
DynamoDB table schema:
{
"TableName": "my-agent-config-table",
"KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
"AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}]
}
Agent record structure:
{
"id": "my-agent-001",
"name": "Customer Support Agent",
"model": "anthropic.claude-3-sonnet-20240229-v1:0",
"active": true,
"description": "Agent for customer support queries"
}
Complete code:
import os
from akordi_agents.utils.agent import get_agent, get_system_prompt
# Configure environment
os.environ["AWS_REGION"] = "us-east-1"
os.environ["AKORDI_AGENT_TABLE"] = "my-agent-config-table"
# Load agent from DynamoDB
agent = get_agent("my-agent-001")
# Optionally load system prompt from DynamoDB
system_prompt = get_system_prompt(agent_code="my-agent-001")
# Process request
response = agent.process_request({
"query": "How do I reset my password?",
"system_message": system_prompt or "You are a helpful customer support agent.",
"max_tokens": 500,
})
if response.get("success"):
print(response["llm_response"]["response"])
Recipe 8: Agent with AgentBuilder Pattern¶
Use case: Fine-grained control over agent construction using the builder pattern.
Required imports:
import os
from akordi_agents.core import AgentBuilder
from akordi_agents.services import AWSBedrockService
from akordi_agents.handlers import AWSBedrockSearchHandler
from akordi_agents.tools import Tool
Required environment variables:
Complete code:
import os
from akordi_agents.core import AgentBuilder
from akordi_agents.services import AWSBedrockService
from akordi_agents.handlers import AWSBedrockSearchHandler
from akordi_agents.tools import Tool
os.environ["AWS_REGION"] = "us-east-1"
class SearchTool(Tool):
"""Tool for web search."""
def get_name(self) -> str:
return "web_search"
def get_description(self) -> str:
return "Search the web for information"
def get_input_schema(self) -> dict:
return {
"type": "object",
"properties": {
"query": {"type": "string", "description": "Search query"}
},
"required": ["query"]
}
def execute(self, **kwargs) -> dict:
query = kwargs.get("query", "")
# Replace with actual search API
return {"results": [f"Result for: {query}"]}
# Create components
llm_service = AWSBedrockService()
search_tool = SearchTool()
# Build agent with full configuration
agent = (
AgentBuilder("advanced_agent")
.with_llm_service_instance(llm_service)
.with_tools([search_tool])
.with_config({
"provider": "AWS_BEDROCK",
"model_id": "anthropic.claude-3-sonnet-20240229-v1:0",
})
.with_langgraph(
enable=True,
config={
"enable_validation": False,
"enable_tools": True,
"max_iterations": 10,
"temperature": 0.1,
}
)
.build()
)
# Use the agent
response = agent.process_request({
"query": "Search for the latest AI news",
"system_message": "You are a research assistant with web search capabilities.",
"max_tokens": 1000,
})
if response.get("success"):
print(response["llm_response"]["response"])
Recipe 9: Streaming Response Agent¶
Use case: Stream responses for real-time output in chat applications.
Required imports:
import os
import asyncio
from akordi_agents.core.langgraph import ToolUseWorkflow, WorkflowConfig
from akordi_agents.services import AWSBedrockService
Required environment variables:
Complete code:
import os
import asyncio
from akordi_agents.core.langgraph import ToolUseWorkflow, WorkflowConfig
from akordi_agents.services import AWSBedrockService
os.environ["AWS_REGION"] = "us-east-1"
async def stream_agent_response(query: str, system_message: str):
"""Stream response from agent."""
# Create LLM service
llm_service = AWSBedrockService()
# Create workflow
workflow = ToolUseWorkflow(
name="streaming_agent",
config=WorkflowConfig(
enable_validation=False,
enable_tools=False,
enable_tracing=False,
temperature=0.1,
),
llm_service=llm_service,
)
# Stream response
print("Agent: ", end="", flush=True)
async for chunk in workflow.stream({
"query": query,
"system_message": system_message,
}):
print(chunk, end="", flush=True)
print() # Newline at end
# Run streaming
asyncio.run(stream_agent_response(
query="Explain quantum computing in simple terms",
system_message="You are a helpful science educator."
))
Recipe 10: Token Usage Tracking Agent¶
Use case: Track token consumption for cost monitoring and analytics.
Required imports:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.services.token_usage_service import TokenUsageService
Required environment variables:
Complete code:
import os
from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.services.token_usage_service import TokenUsageService
# Configure environment
os.environ["AWS_REGION"] = "us-east-1"
os.environ["AKORDI_TOKEN_USAGE_TABLE"] = "my-token-usage-table"
# Create agent
llm_service = AWSBedrockService()
agent = create_langgraph_agent(
name="tracked_agent",
llm_service=llm_service,
config={"temperature": 0.1}
)
# Initialize token tracking
token_service = TokenUsageService()
agent_id = "tracked_agent"
token_service.initialize_agent_usage(
agent_id=agent_id,
model_id="anthropic.claude-3-sonnet-20240229-v1:0",
metadata={"environment": "production"}
)
# Process request
response = agent.process_request({
"query": "What is the capital of France?",
"system_message": "You are a helpful assistant.",
"max_tokens": 500,
})
if response.get("success"):
print("Response:", response["llm_response"]["response"])
# Get token usage from response
token_usage = response["llm_response"].get("token_usage", {})
print(f"Input tokens: {token_usage.get('input_tokens', 0)}")
print(f"Output tokens: {token_usage.get('output_tokens', 0)}")
# Get cumulative usage
stats = token_service.get_agent_usage(agent_id)
print(f"Total tokens used: {stats.total_token_usage}")
print(f"Total LLM calls: {stats.number_of_llm_calls}")
Quick Reference: Common Imports¶
# Core agent creation
from akordi_agents.core import (
create_langgraph_agent, # Factory function for LangGraph agents
AgentBuilder, # Builder pattern for agents
CustomAgent, # Base agent class
LangGraphAgent, # LangGraph-enabled agent
)
# Services
from akordi_agents.services import (
AWSBedrockService, # AWS Bedrock LLM service
ChatHistoryService, # Chat persistence
ModelService, # Model information
)
from akordi_agents.services.token_usage_service import TokenUsageService
# Handlers
from akordi_agents.handlers import (
AWSBedrockSearchHandler, # Knowledge base search
)
# Tools
from akordi_agents.tools import Tool # Base class for custom tools
# Interfaces
from akordi_agents.core.interfaces import (
ValidatorInterface, # Input validation
LLMServiceInterface, # Custom LLM services
SearchHandlerInterface, # Custom search handlers
DataModelInterface, # Data validation
AgentInterface, # Custom agents
)
# Models
from akordi_agents.models.validation_models import (
ValidationResult,
ValidationError,
)
# LangGraph
from akordi_agents.core.langgraph import (
ToolUseWorkflow,
WorkflowConfig,
)
# Utilities
from akordi_agents.utils.agent import (
get_agent, # Load agent from DynamoDB
get_agent_by_code, # Get agent config from DynamoDB
get_system_prompt, # Get prompt from DynamoDB
)
Recipe 11: Multi-Agent Orchestration¶
Use case: Coordinate multiple specialized agents for complex tasks.
Working example: See examples/agent_orchestration_langgraph.py for production-ready multi-agent orchestration.
# Coordinator pattern
poetry run python examples/agent_orchestration_langgraph.py --pattern coordinator --query "Assess weather and construction risks in London"
# Peer-to-peer pattern
poetry run python examples/agent_orchestration_langgraph.py --pattern peer_to_peer --query "Analyze project risks"
# Hierarchical pattern
poetry run python examples/agent_orchestration_langgraph.py --pattern hierarchical --query "Full risk assessment"
Required imports:
import os
import asyncio
from akordi_agents.core import create_langgraph_agent, LLMServiceInterface
from akordi_agents.core.langgraph.orchestration import (
AgentCapability,
AgentRegistry,
AgentRole,
CoordinatorOrchestrator,
PeerToPeerOrchestrator,
HierarchicalOrchestrator,
)
from akordi_agents.tools import Tool
Required environment variables:
Complete code:
import os
import asyncio
from akordi_agents.core import create_langgraph_agent, LLMServiceInterface
from akordi_agents.core.langgraph.orchestration import (
AgentCapability,
AgentRegistry,
AgentRole,
CoordinatorOrchestrator,
)
from akordi_agents.services import AWSBedrockService
from akordi_agents.tools import Tool
os.environ["AWS_REGION"] = "us-east-1"
# Define specialized tools
class WeatherTool(Tool):
def get_name(self) -> str:
return "weather_tool"
def get_description(self) -> str:
return "Get current weather for a city"
def get_input_schema(self) -> dict:
return {
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"]
}
def execute(self, **kwargs) -> dict:
return {"temperature": "22°C", "conditions": "Sunny"}
class RiskTool(Tool):
def get_name(self) -> str:
return "risk_tool"
def get_description(self) -> str:
return "Analyze project risks"
def get_input_schema(self) -> dict:
return {
"type": "object",
"properties": {"activity": {"type": "string"}},
"required": ["activity"]
}
def execute(self, **kwargs) -> dict:
return {"risk_level": "Medium", "factors": ["weather", "equipment"]}
# Create specialized agents
llm_service = AWSBedrockService()
weather_agent = create_langgraph_agent(
name="weather_specialist",
llm_service=llm_service,
tools=[WeatherTool()],
config={"enable_tools": True}
)
risk_agent = create_langgraph_agent(
name="risk_specialist",
llm_service=llm_service,
tools=[RiskTool()],
config={"enable_tools": True}
)
# Create agent registry
registry = AgentRegistry()
registry.register_agent(
agent_id="weather_specialist",
agent=weather_agent,
capabilities=[AgentCapability(
name="weather_analysis",
description="Analyze weather conditions",
domains=["weather", "climate"]
)],
role=AgentRole.WORKER
)
registry.register_agent(
agent_id="risk_specialist",
agent=risk_agent,
capabilities=[AgentCapability(
name="risk_assessment",
description="Assess project risks",
domains=["risk", "safety"]
)],
role=AgentRole.WORKER
)
# Create coordinator
coordinator_agent = create_langgraph_agent(
name="coordinator",
llm_service=llm_service,
config={"temperature": 0.1}
)
registry.register_agent(
agent_id="coordinator",
agent=coordinator_agent,
capabilities=[AgentCapability(
name="coordination",
description="Coordinate specialist agents",
domains=["coordination"]
)],
role=AgentRole.COORDINATOR
)
# Create orchestrator
orchestrator = CoordinatorOrchestrator(
coordinator_id="coordinator",
registry=registry
)
async def run_orchestration():
result = await orchestrator.execute(
query="What is the weather in London and what are the construction risks?",
context={"project_type": "construction"}
)
print(f"Result: {result}")
# Run
asyncio.run(run_orchestration())
Orchestration Patterns:
| Pattern | Description | Use Case |
|---|---|---|
CoordinatorOrchestrator |
Central coordinator delegates to specialists | Clear task delegation |
PeerToPeerOrchestrator |
Agents communicate directly | Collaborative tasks |
HierarchicalOrchestrator |
Multi-level delegation | Complex workflows |
Quick Reference: Environment Variables¶
| Variable | Required | Description |
|---|---|---|
AWS_REGION |
Yes | AWS region for Bedrock and DynamoDB |
CHAT_SESSIONS_TABLE_NAME |
For chat history | DynamoDB table for chat sessions |
CHAT_MESSAGES_TABLE_NAME |
For chat history | DynamoDB table for chat messages |
AKORDI_TOKEN_USAGE_TABLE |
For token tracking | DynamoDB table for token usage |
AKORDI_AGENT_TABLE |
For DynamoDB agents | DynamoDB table for agent configs |
AKORDI_LLM_MODELS_TABLE |
For model service | DynamoDB table for model info |
GUARDRAIL_ID |
For AWS guardrails | Bedrock guardrail ID |
GUARDRAIL_VERSION |
For AWS guardrails | Guardrail version (default: DRAFT) |
LANGCHAIN_TRACING_V2 |
For tracing | Enable LangSmith tracing |
LANGCHAIN_API_KEY |
For tracing | LangSmith API key |
Quick Reference: Response Structure¶
# Successful response
{
"success": True,
"agent": "agent_name",
"llm_response": {
"response": "The generated text response",
"model_info": {
"model_id": "anthropic.claude-3-sonnet-20240229-v1:0",
"provider": "bedrock"
},
"token_usage": {
"input_tokens": 150,
"output_tokens": 200,
"total_tokens": 350
},
"chat_id": "chat-uuid",
"tools_used": ["tool_name"]
},
"workflow_metadata": {
"status": "completed",
"tools_used": ["tool_name"],
"tool_results": [{"key": "value"}]
},
"search_results": [
{"text": "...", "score": 0.95, "metadata": {}}
]
}
# Error response
{
"success": False,
"agent": "agent_name",
"error": "Error message",
"validation_errors": [
{"field": "query", "message": "Validation error message"}
]
}
Quick Reference: Tool Schema¶
class MyTool(Tool):
def get_name(self) -> str:
return "tool_name" # Unique identifier
def get_description(self) -> str:
return "Clear description of what the tool does"
def get_input_schema(self) -> dict:
return {
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "Description of param1"
},
"param2": {
"type": "integer",
"description": "Description of param2"
}
},
"required": ["param1"] # Required parameters
}
def execute(self, **kwargs) -> dict:
param1 = kwargs.get("param1")
param2 = kwargs.get("param2", 0)
# Tool logic here
return {"result": "value"}