# Agents
Agents are the core building blocks of the Akordi Agents SDK. They process user requests, orchestrate tools, and generate responses using LLMs.
## How to Create an Agent
There are three ways to create an agent, from simplest to most customizable:
### Method 1: Factory Function (Recommended)

Use `create_langgraph_agent` for most use cases:

```python
import os

from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService

os.environ["AWS_REGION"] = "us-east-1"

agent = create_langgraph_agent(
    name="my_agent",
    llm_service=AWSBedrockService(),
    tools=[],             # Optional: list of Tool instances
    validator=None,       # Optional: ValidatorInterface instance
    search_handler=None,  # Optional: SearchHandlerInterface instance
    config={
        "enable_validation": True,
        "enable_tools": True,
        "temperature": 0.1,
        "max_iterations": 10,
    },
)
```
### Method 2: AgentBuilder Pattern

Use `AgentBuilder` for fine-grained control:

```python
import os

from akordi_agents.core import AgentBuilder
from akordi_agents.services import AWSBedrockService

os.environ["AWS_REGION"] = "us-east-1"

agent = (
    AgentBuilder("my_agent")
    .with_llm_service_instance(AWSBedrockService())
    .with_tools([])  # Optional
    .with_config({"temperature": 0.1})
    .with_langgraph(enable=True, config={"enable_tools": True})
    .build()
)
```
### Method 3: Load from DynamoDB

Load an agent configuration from DynamoDB:

```python
import os

from akordi_agents.utils.agent import get_agent

os.environ["AWS_REGION"] = "us-east-1"
os.environ["AKORDI_AGENT_TABLE"] = "my-agent-table"

agent = get_agent("agent-id-from-dynamodb")
```
## Agent Types
### CustomAgent

The base agent class that processes requests with configured components:

```python
from akordi_agents.core import CustomAgent
from akordi_agents.services import AWSBedrockService

agent = CustomAgent(
    name="my_agent",
    llm_service=AWSBedrockService(),  # Required: LLM service
    validator=None,                   # Optional: input validation
    data_model=None,                  # Optional: data validation
    search_handler=None,              # Optional: knowledge base search
    config={"temperature": 0.1},      # Optional: configuration
)

# Initialize before use
agent.initialize({})
```

`CustomAgent` features:
- Input validation with custom validators
- Data model validation
- Knowledge base search integration
- LLM response generation
- Configurable behavior
### LangGraphAgent

Enhanced agent with LangGraph workflow capabilities. Created automatically when using `create_langgraph_agent` or `AgentBuilder.with_langgraph(enable=True)`:

```python
from akordi_agents.core import CustomAgent, LangGraphAgent
from akordi_agents.services import AWSBedrockService

# Create the base agent first
base_agent = CustomAgent(
    name="base",
    llm_service=AWSBedrockService(),
)

# Wrap it with LangGraph capabilities
agent = LangGraphAgent(
    name="workflow_agent",
    base_agent=base_agent,
    tools=[],  # Optional: list of Tool instances
    config={
        "enable_validation": True,
        "enable_tools": True,
        "max_iterations": 10,
    },
)
```

`LangGraphAgent` features:
- All CustomAgent features
- Intelligent tool orchestration
- State-aware workflow execution
- Conditional routing
- Chat history persistence
## Agent Builder Pattern

The `AgentBuilder` provides a fluent API for constructing agents:

```python
from akordi_agents.core import AgentBuilder

agent = (
    AgentBuilder("my_agent")
    .with_llm_service_instance(llm_service)
    .with_validator_instance(validator)
    .with_search_handler_instance(search_handler)
    .with_tools([tool1, tool2])
    .with_config({"temperature": 0.1})
    .with_langgraph(enable=True, config={
        "enable_validation": True,
        "enable_tools": True,
    })
    .build()
)
```
### Builder Methods

| Method | Description |
|---|---|
| `with_validator(name)` | Add validator by registered name |
| `with_validator_instance(validator)` | Add validator instance |
| `with_data_model(name)` | Add data model by registered name |
| `with_data_model_instance(model)` | Add data model instance |
| `with_search_handler(name)` | Add search handler by registered name |
| `with_search_handler_instance(handler)` | Add search handler instance |
| `with_llm_service(name)` | Add LLM service by registered name |
| `with_llm_service_instance(service)` | Add LLM service instance |
| `with_tools(tools)` | Add a list of tools |
| `with_config(config)` | Add a configuration dictionary |
| `with_langgraph(enable, config)` | Enable LangGraph workflow |
| `build()` | Build and return the agent |
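The chaining above works because each `with_*` method returns the builder itself. A minimal, SDK-independent sketch of the pattern (the `MiniBuilder` class is illustrative, not part of the SDK):

```python
from typing import Any, Dict, List


class MiniBuilder:
    """Illustrative fluent builder: each setter returns self so calls chain."""

    def __init__(self, name: str):
        self.name = name
        self.tools: List[Any] = []
        self.config: Dict[str, Any] = {}

    def with_tools(self, tools: List[Any]) -> "MiniBuilder":
        self.tools = tools
        return self  # returning self is what enables chaining

    def with_config(self, config: Dict[str, Any]) -> "MiniBuilder":
        self.config.update(config)
        return self

    def build(self) -> Dict[str, Any]:
        # A real builder would construct an agent; here we return a spec dict.
        return {"name": self.name, "tools": self.tools, "config": self.config}


spec = MiniBuilder("demo").with_tools(["t1"]).with_config({"temperature": 0.1}).build()
# spec == {"name": "demo", "tools": ["t1"], "config": {"temperature": 0.1}}
```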
## Factory Function

For quick agent creation, use the factory function:

```python
from akordi_agents.core import create_langgraph_agent

agent = create_langgraph_agent(
    name="quick_agent",
    llm_service=llm_service,
    tools=[weather_tool],
    validator=my_validator,
    search_handler=search_handler,
    config={
        "enable_validation": True,
        "enable_tools": True,
        "enable_tracing": False,
        "max_iterations": 10,
        "temperature": 0.1,
    },
)
```
## Processing Requests

### Request Format

```python
response = agent.process_request({
    "query": "User's question or request",
    "system_message": "System prompt for the LLM",
    "max_tokens": 1000,
    "temperature": 0.1,
    "user_id": "user-123",
    "chat_id": "chat-456",
    "chat_history": [
        {"role": "user", "content": "Previous message"},
        {"role": "assistant", "content": "Previous response"},
    ],
    "metadata": {
        "custom_key": "custom_value",
    },
})
```
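For repeated calls it can help to centralize payload construction. A hypothetical helper (not part of the SDK) that assembles a request in the shape shown above:

```python
from typing import Any, Dict, List, Optional


def make_request(
    query: str,
    system_message: str = "You are a helpful assistant.",
    chat_history: Optional[List[Dict[str, str]]] = None,
    **overrides: Any,
) -> Dict[str, Any]:
    """Assemble a request dict in the shape expected by process_request."""
    request: Dict[str, Any] = {
        "query": query,
        "system_message": system_message,
        "max_tokens": 1000,
        "temperature": 0.1,
        "chat_history": chat_history or [],
        "metadata": {},
    }
    request.update(overrides)  # e.g. user_id, chat_id, custom metadata
    return request


req = make_request("What is the weather?", user_id="user-123")
```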
### Response Format

```python
{
    "success": True,
    "agent": "my_agent",
    "llm_response": {
        "response": "The generated response text",
        "model_info": {
            "model_id": "anthropic.claude-3-sonnet",
            "provider": "bedrock",
        },
        "token_usage": {
            "input_tokens": 150,
            "output_tokens": 200,
            "total_tokens": 350,
        },
        "chat_id": "chat-456",
        "tools_used": ["weather_tool"],
    },
    "workflow_metadata": {
        "status": "completed",
        "tools_used": ["weather_tool"],
        "tool_results": [{"location": "London", "temp": "15°C"}],
    },
    "search_results": [],
}
```
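Downstream code usually needs only the generated text and the token count. A small sketch, assuming the response shape shown above, that pulls both out defensively (`summarize_response` is illustrative, not an SDK helper):

```python
from typing import Any, Dict, Tuple


def summarize_response(response: Dict[str, Any]) -> Tuple[str, int]:
    """Extract the generated text and total token count from a response dict."""
    llm = response.get("llm_response", {})
    text = llm.get("response", "")
    total_tokens = llm.get("token_usage", {}).get("total_tokens", 0)
    return text, total_tokens


sample = {
    "success": True,
    "llm_response": {
        "response": "It is 15°C in London.",
        "token_usage": {"input_tokens": 150, "output_tokens": 200, "total_tokens": 350},
    },
}
text, tokens = summarize_response(sample)  # → ("It is 15°C in London.", 350)
```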
## Agent Lifecycle

```mermaid
sequenceDiagram
    participant U as User
    participant A as Agent
    participant V as Validator
    participant T as Tools
    participant L as LLM
    U->>A: process_request(data)
    A->>A: initialize()
    A->>V: validate(data)
    V-->>A: ValidationResult
    alt Validation Failed
        A-->>U: Error Response
    else Validation Passed
        A->>T: execute_tools()
        T-->>A: Tool Results
        A->>L: generate_response()
        L-->>A: LLM Response
        A-->>U: Success Response
    end
```
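The sequence above can be expressed as plain control flow. A sketch with every component stubbed (all functions here are illustrative stand-ins, not SDK calls):

```python
from typing import Any, Dict, List


def validate(data: Dict[str, Any]) -> bool:
    """Stub validator: require a non-empty query."""
    return bool(data.get("query"))


def execute_tools(data: Dict[str, Any]) -> List[Any]:
    """Stub tool execution step."""
    return []


def generate_response(data: Dict[str, Any], tool_results: List[Any]) -> str:
    """Stub LLM call."""
    return f"Answer to: {data['query']}"


def process_request(data: Dict[str, Any]) -> Dict[str, Any]:
    """Mirror the sequence diagram: validate, run tools, then call the LLM."""
    if not validate(data):
        return {"success": False, "error": "validation failed"}
    tool_results = execute_tools(data)
    response = generate_response(data, tool_results)
    return {"success": True, "response": response}


ok = process_request({"query": "hi"})   # success path
bad = process_request({})               # validation-failure path
```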
## Custom Agents

Implement the `AgentInterface` to create custom agents:

```python
from typing import Any, Dict

from akordi_agents.core.interfaces import AgentInterface


class MyCustomAgent(AgentInterface):
    def __init__(self, name: str):
        self.name = name
        self._initialized = False

    def initialize(self, config: Dict[str, Any]) -> None:
        """Initialize the agent with configuration."""
        self.config = config
        self._initialized = True

    def process_request(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
        """Process a request and return a response."""
        if not self._initialized:
            self.initialize({})

        # Your custom processing logic here
        query = request_data.get("query", "")
        return {
            "success": True,
            "agent": self.name,
            "response": f"Processed: {query}",
        }

    def get_agent_name(self) -> str:
        """Get the name of this agent."""
        return self.name
```
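Because the interface is plain Python, agents built this way can be exercised without any SDK infrastructure. A duck-typed stand-in that follows the same contract (no `AgentInterface` import, so this sketch runs anywhere):

```python
from typing import Any, Dict


class EchoAgent:
    """Duck-typed stand-in following the same interface contract."""

    def __init__(self, name: str):
        self.name = name
        self._initialized = False

    def initialize(self, config: Dict[str, Any]) -> None:
        self.config = config
        self._initialized = True

    def process_request(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
        if not self._initialized:
            self.initialize({})  # lazy-initialize on first call
        return {
            "success": True,
            "agent": self.name,
            "response": f"Processed: {request_data.get('query', '')}",
        }

    def get_agent_name(self) -> str:
        return self.name


agent = EchoAgent("echo")
result = agent.process_request({"query": "ping"})
# result == {"success": True, "agent": "echo", "response": "Processed: ping"}
```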
## Component Registry

Register custom components for use with `AgentBuilder`:

```python
from akordi_agents.core import (
    register_validator,
    register_llm_service,
    register_search_handler,
)

# Register a custom validator
register_validator("my_validator", MyValidator)

# Register a custom LLM service
register_llm_service("my_llm", MyLLMService)

# Register a custom search handler
register_search_handler("my_search", MySearchHandler)

# Use the registered names in AgentBuilder
agent = (
    AgentBuilder("my_agent")
    .with_validator("my_validator")
    .with_llm_service("my_llm")
    .with_search_handler("my_search")
    .build()
)
```
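Under the hood, a name-based registry is typically just a mapping from strings to classes. A hypothetical sketch of the idea (the SDK's actual registries may differ):

```python
from typing import Any, Dict, Type

# Illustrative registry: maps a string name to a component class.
_REGISTRY: Dict[str, Type[Any]] = {}


def register(name: str, cls: Type[Any]) -> None:
    """Associate a component class with a lookup name."""
    _REGISTRY[name] = cls


def resolve(name: str) -> Any:
    """Instantiate the class registered under the given name."""
    try:
        return _REGISTRY[name]()
    except KeyError:
        raise ValueError(f"No component registered under {name!r}") from None


class MyValidator:
    def validate(self, data: Dict[str, Any]) -> bool:
        return bool(data.get("query"))


register("my_validator", MyValidator)
validator = resolve("my_validator")
```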
## Best Practices

### 1. Use Dependency Injection

```python
# Good: injectable components
agent = create_langgraph_agent(
    name="agent",
    llm_service=llm_service,  # Injected
    validator=validator,      # Injected
)

# Avoid: hardcoded dependencies
class BadAgent:
    def __init__(self):
        self.llm = AWSBedrockService()  # Hardcoded
```
### 2. Enable Validation

Always validate user input in production:

```python
agent = create_langgraph_agent(
    name="safe_agent",
    llm_service=llm_service,
    validator=content_validator,
    config={"enable_validation": True},
)
```
### 3. Handle Errors Gracefully

```python
# Inside a request handler:
try:
    response = agent.process_request(request_data)
    if not response.get("success"):
        handle_error(response.get("error"))
except Exception as e:
    logger.error(f"Agent error: {e}")
    return {"success": False, "error": str(e)}
```
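This pattern can be packaged as a reusable wrapper. A minimal sketch with a stub agent to exercise the error path (`safe_process` and `FailingAgent` are illustrative names, not part of the SDK):

```python
import logging
from typing import Any, Dict

logger = logging.getLogger(__name__)


def safe_process(agent: Any, request_data: Dict[str, Any]) -> Dict[str, Any]:
    """Run a request, converting exceptions into a uniform error response."""
    try:
        response = agent.process_request(request_data)
        if not response.get("success"):
            logger.warning("Agent reported failure: %s", response.get("error"))
        return response
    except Exception as exc:
        logger.error("Agent error: %s", exc)
        return {"success": False, "error": str(exc)}


class FailingAgent:
    """Stub that always raises, to exercise the error path."""

    def process_request(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
        raise RuntimeError("boom")


result = safe_process(FailingAgent(), {"query": "hi"})
# result == {"success": False, "error": "boom"}
```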
### 4. Use an Appropriate Temperature

Lower temperatures make output more deterministic; higher temperatures allow more varied, creative output:

```python
# Factual queries: low temperature for consistent answers
agent = create_langgraph_agent(
    name="factual_agent",
    llm_service=llm_service,
    config={"temperature": 0.1},
)

# Creative tasks: higher temperature for more variety
agent = create_langgraph_agent(
    name="creative_agent",
    llm_service=llm_service,
    config={"temperature": 0.7},
)
```
## Related Documentation
- Recipes - Complete, copy-paste code examples for common agent patterns
- LangGraph Workflows - Advanced workflow orchestration
- Tools - Create custom tools
- Services - LLM and search services
- API Reference - Complete API documentation