Configuration¶
This guide covers all configuration options for the Akordi Agents SDK.
Environment Variables¶
AWS Configuration¶
| Variable | Description | Required | Default |
|---|---|---|---|
| `AWS_REGION` | AWS region for services | Yes | - |
| `AWS_DEFAULT_REGION` | Fallback AWS region | No | - |
| `AWS_ACCESS_KEY_ID` | AWS access key | No* | - |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key | No* | - |

*Not required if using IAM roles or instance profiles.
Chat History (DynamoDB)¶
| Variable | Description | Required | Default |
|---|---|---|---|
| `CHAT_SESSIONS_TABLE_NAME` | DynamoDB table for sessions | Yes** | - |
| `CHAT_MESSAGES_TABLE_NAME` | DynamoDB table for messages | Yes** | - |

**Required only if using chat history features.
Token Usage Tracking¶
| Variable | Description | Required | Default |
|---|---|---|---|
| `AKORDI_TOKEN_USAGE_TABLE` | DynamoDB table for usage | No | - |
Agent Configuration¶
| Variable | Description | Required | Default |
|---|---|---|---|
| `AKORDI_AGENT_TABLE` | DynamoDB table for agent configs | No* | - |

*Required when using `get_agent()` or `get_agent_by_code()` from agent utilities.
Guardrails¶
| Variable | Description | Required | Default |
|---|---|---|---|
| `GUARDRAIL_ID` | AWS Bedrock guardrail ID | No | - |
| `GUARDRAIL_VERSION` | Guardrail version | No | `DRAFT` |
LangSmith Tracing¶
| Variable | Description | Required | Default |
|---|---|---|---|
| `LANGCHAIN_TRACING_V2` | Enable LangSmith tracing | No | `false` |
| `LANGCHAIN_API_KEY` | LangSmith API key | No | - |
| `LANGCHAIN_PROJECT` | LangSmith project name | No | `default` |
| `LANGCHAIN_ENDPOINT` | LangSmith endpoint | No | - |
Agent Configuration¶
WorkflowConfig¶
Configuration for LangGraph workflows:
```python
from akordi_agents.core.langgraph import WorkflowConfig

config = WorkflowConfig(
    enable_validation=True,   # Enable input validation
    enable_tools=True,        # Enable tool usage
    enable_tracing=False,     # Enable LangSmith tracing
    max_iterations=10,        # Max workflow iterations
    temperature=0.1,          # LLM temperature
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `enable_validation` | `bool` | `True` | Enable input validation |
| `enable_tools` | `bool` | `False` | Enable tool orchestration |
| `enable_tracing` | `bool` | `False` | Enable LangSmith tracing |
| `max_iterations` | `int` | `10` | Maximum workflow iterations |
| `temperature` | `float` | `0.1` | LLM temperature (0.0-1.0) |
LLM Configuration¶
Configure LLM behavior:
```python
from akordi_agents.models.llm_models import ClaudeConfig

llm_config = ClaudeConfig(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    max_tokens=4096,
    temperature=0.1,
    top_p=0.9,
    top_k=250,
    stop_sequences=["Human:", "Assistant:"],
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model_id` | `str` | - | Bedrock model identifier |
| `max_tokens` | `int` | `4096` | Maximum response tokens |
| `temperature` | `float` | `0.1` | Sampling temperature |
| `top_p` | `float` | `0.9` | Nucleus sampling |
| `top_k` | `int` | `250` | Top-k sampling |
| `stop_sequences` | `list` | `[]` | Stop generation sequences |
Search Configuration¶
Configure knowledge base search:
```python
from akordi_agents.models.llm_models import SearchConfig

search_config = SearchConfig(
    knowledge_base_id="your-kb-id",
    max_results=10,
    search_type="SEMANTIC",
    min_score_threshold=0.5,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `knowledge_base_id` | `str` | - | Bedrock KB ID |
| `max_results` | `int` | `10` | Max search results to retrieve |
| `search_type` | `str` | `SEMANTIC` | Search type |
| `min_score_threshold` | `float` | `0.0` | Minimum relevance score |
Client Parameters¶
Configure client request parameters:
```python
from akordi_agents.models.types import ClientParams

client_params = ClientParams(
    query="What are the safety requirements?",
    knowledge_base_id="your-kb-id",
    max_results=10,
    context_results_limit=50,  # Number of search results to include in LLM context
    temperature=0.1,
)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `query` | `str` | - | User query (required) |
| `knowledge_base_id` | `str` | `None` | Bedrock Knowledge Base ID |
| `max_results` | `int` | `5` | Max search results to retrieve from knowledge base |
| `context_results_limit` | `int` | `50` | Max search results to include in LLM context |
| `temperature` | `float` | `0.1` | LLM sampling temperature |
| `top_p` | `float` | `1.0` | Nucleus sampling parameter |
| `top_k` | `int` | `250` | Top-k sampling parameter |
| `override_search_type` | `str` | `HYBRID` | Search type (HYBRID, SEMANTIC, KEYWORD) |
| `file_keys` | `List[str]` | `None` | S3 keys to filter search results |
| `bucket_name` | `str` | `None` | S3 bucket name for file filtering |
| `system_message` | `str` | `None` | Custom system prompt |
| `chat_history` | `List[Dict]` | `[]` | Previous conversation messages |
| `model_id` | `str` | `None` | Override default model ID |
**`context_results_limit` vs `max_results`**

- `max_results`: Controls how many results are retrieved from the knowledge base
- `context_results_limit`: Controls how many of those results are included in the LLM context

For example, you might retrieve 100 results but only include the top 10 in the context to manage token usage.
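The interplay between the two limits can be sketched in plain Python; this is a hypothetical helper for illustration, not part of the SDK:

```python
def select_context(results, max_results=100, context_results_limit=10):
    """Retrieve up to max_results, then keep only the top-ranked
    context_results_limit hits for the LLM context."""
    retrieved = results[:max_results]  # what the KB search returns
    ranked = sorted(retrieved, key=lambda r: r["score"], reverse=True)
    return ranked[:context_results_limit]  # what actually reaches the LLM

# 120 candidate hits with decreasing relevance scores
hits = [{"id": i, "score": round(1.0 - i * 0.01, 2)} for i in range(120)]
context = select_context(hits, max_results=100, context_results_limit=10)
# Only the 10 highest-scoring of the 100 retrieved hits remain
```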
Prompt Configuration¶
PromptConfig¶
Configure prompt templates:
```python
from akordi_agents.config import PromptConfig

prompt_config = PromptConfig(
    system_template="You are a helpful assistant specializing in {domain}.",
    user_template="{query}",
    context_template="Context: {context}",
)
```
Using Prompt Manager¶
```python
from akordi_agents.services import get_prompt_manager

manager = get_prompt_manager()

# Get prompt for a persona
prompt = manager.get_prompt(
    persona="construction_expert",
    context={"project_type": "residential"}
)
```
DynamoDB Table Schemas¶
Chat Sessions Table¶
```json
{
  "TableName": "chat-sessions",
  "KeySchema": [
    {"AttributeName": "user_id", "KeyType": "HASH"},
    {"AttributeName": "chat_id", "KeyType": "RANGE"}
  ],
  "AttributeDefinitions": [
    {"AttributeName": "user_id", "AttributeType": "S"},
    {"AttributeName": "chat_id", "AttributeType": "S"}
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```
Chat Messages Table¶
```json
{
  "TableName": "chat-messages",
  "KeySchema": [
    {"AttributeName": "chat_id", "KeyType": "HASH"},
    {"AttributeName": "message_id", "KeyType": "RANGE"}
  ],
  "AttributeDefinitions": [
    {"AttributeName": "chat_id", "AttributeType": "S"},
    {"AttributeName": "message_id", "AttributeType": "S"}
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```
Token Usage Table¶
```json
{
  "TableName": "token-usage",
  "KeySchema": [
    {"AttributeName": "agent_id", "KeyType": "HASH"},
    {"AttributeName": "timestamp", "KeyType": "RANGE"}
  ],
  "AttributeDefinitions": [
    {"AttributeName": "agent_id", "AttributeType": "S"},
    {"AttributeName": "timestamp", "AttributeType": "S"}
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```
Agent Configuration Table¶
```json
{
  "TableName": "agent-config",
  "KeySchema": [
    {"AttributeName": "id", "KeyType": "HASH"}
  ],
  "AttributeDefinitions": [
    {"AttributeName": "id", "AttributeType": "S"}
  ],
  "BillingMode": "PAY_PER_REQUEST"
}
```
Agent record structure:
```json
{
  "id": "my-agent-001",
  "name": "My Agent",
  "model": "anthropic.claude-3-sonnet-20240229-v1:0",
  "active": true,
  "description": "Agent description",
  "created_at": "2024-01-15T10:00:00Z"
}
```
Complete Configuration Example¶
```python
import os

from akordi_agents.core import create_langgraph_agent
from akordi_agents.services import AWSBedrockService
from akordi_agents.handlers import AWSBedrockSearchHandler

# Environment configuration
os.environ.update({
    "AWS_REGION": "us-east-1",
    "CHAT_SESSIONS_TABLE_NAME": "my-sessions",
    "CHAT_MESSAGES_TABLE_NAME": "my-messages",
    "AKORDI_TOKEN_USAGE_TABLE": "my-token-usage",
    "LANGCHAIN_TRACING_V2": "true",
    "LANGCHAIN_API_KEY": "your-api-key",  # placeholder
})

# Service configuration
llm_service = AWSBedrockService()
search_handler = AWSBedrockSearchHandler(
    knowledge_base_id="your-kb-id",
)

# Workflow configuration
workflow_config = {
    "enable_validation": True,
    "enable_tools": True,
    "enable_tracing": True,
    "max_iterations": 15,
    "temperature": 0.2,
}

# Create fully configured agent (my_tool and my_validator are defined elsewhere)
agent = create_langgraph_agent(
    name="production_agent",
    llm_service=llm_service,
    search_handler=search_handler,
    tools=[my_tool],
    validator=my_validator,
    config=workflow_config,
)
```
Best Practices¶
Temperature Settings¶
| Use Case | Temperature | Description |
|---|---|---|
| Factual Q&A | 0.0-0.1 | Deterministic responses |
| General chat | 0.3-0.5 | Balanced creativity |
| Creative writing | 0.7-0.9 | More varied outputs |
| Brainstorming | 0.9-1.0 | Maximum creativity |
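One way to keep these recommendations close to the code is a small lookup helper; this is hypothetical, not part of the SDK, and the values are starting points within the ranges above:

```python
# Suggested starting temperatures per use case (from the table above).
SUGGESTED_TEMPERATURE = {
    "factual_qa": 0.0,
    "general_chat": 0.4,
    "creative_writing": 0.8,
    "brainstorming": 0.95,
}

def pick_temperature(use_case: str, default: float = 0.1) -> float:
    """Return a reasonable LLM temperature for the given use case."""
    return SUGGESTED_TEMPERATURE.get(use_case, default)
```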
Token Limits¶
| Model | Max Input | Max Output | Recommended |
|---|---|---|---|
| Claude 3 Haiku | 200K | 4K | 2K-3K |
| Claude 3 Sonnet | 200K | 4K | 3K-4K |
| Claude 3 Opus | 200K | 4K | 3K-4K |
Security Recommendations¶
- Never hardcode credentials - Use environment variables or AWS IAM roles
- Enable guardrails - Use AWS Bedrock guardrails for production
- Limit token usage - Set appropriate `max_tokens` limits
- Monitor costs - Enable token usage tracking
- Validate input - Always use validators for user input
Next Steps¶
- Quick Start - Create your first agent
- Concepts - Understand the architecture
- API Reference - Detailed API documentation