Debugging

Step 1: Check Logs

# View agent logs
tail -f logs/agent.log

# Check errors
tail -f logs/errors.log

# Filter by date
grep "2025-01-15" logs/agent.log

Step 2: Test API Connections

# Test LLM API
from agent_builder_framework import test_llm_connection
test_llm_connection('anthropic', api_key)

# Test platform API
from agent_builder_framework import test_platform_connection
test_platform_connection('twitter', credentials)

Step 3: Validate Configuration

# Check environment variables
python agent_[ID].py --check-config

# Validate API keys
python agent_[ID].py --validate-keys

# Test mode (no actual posts)
python agent_[ID].py --test

Step 4: Monitor Resources

# Check memory usage
ps aux | grep python

# Monitor network
netstat -an | grep ESTABLISHED

# View process info
top -p "$(pgrep -d, -f agent_)"

Getting Help

Documentation:

  • Read this documentation thoroughly

  • Check README files

  • Review code comments

Community Support:

  • Discord community

  • GitHub discussions

  • Twitter/X: @GraceOS

Direct Support:

When contacting support directly, include the following information:

  1. Agent ID

  2. Error messages

  3. Transaction hash (if payment related)

  4. Steps to reproduce

  5. Browser/system information

  6. Screenshots (if applicable)


Performance Optimization

LLM Cost Management

Token Usage:

Understanding token costs:

  • Input tokens: Usually cheaper

  • Output tokens: Usually more expensive

  • Context window: Longer contexts increase input-token spend and latency on every call
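
As a worked example (rates here are illustrative, not any provider's actual pricing): at $3 per million input tokens and $15 per million output tokens, an interaction with 1,000 input tokens and 500 output tokens costs 1,000 × $0.000003 + 500 × $0.000015 = $0.0105. The output tokens account for most of the cost despite being a third of the volume.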

Optimization Strategies:

  1. Prompt Optimization:

    • Shorter, clearer prompts

    • Remove redundant instructions

    • Use system messages efficiently

    • Cache stable prompts

  2. Response Length Control:

    • Set max_tokens appropriately

    • Use stop sequences

    • Implement length checking

    • Truncate when necessary

  3. Model Selection:

    • Use smaller models for simple tasks

    • Reserve powerful models for complex queries

    • Implement a tiered response system (see the sketch after this list)

    • Monitor cost per interaction
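
A minimal sketch of tiered routing, assuming a generate() wrapper around your own LLM client (the model names, heuristic, and max_tokens value are all illustrative):

# Route simple queries to a cheap model; reserve the strong model for hard ones.
CHEAP_MODEL = "small-model"    # placeholder name
STRONG_MODEL = "large-model"   # placeholder name

def is_complex(query):
    # Naive heuristic; replace with whatever signals matter for your agent
    return len(query) > 500 or any(
        word in query.lower() for word in ("analyze", "compare", "debug")
    )

def respond(query, generate):
    model = STRONG_MODEL if is_complex(query) else CHEAP_MODEL
    # Cap output length so per-interaction cost stays predictable
    return generate(model=model, prompt=query, max_tokens=300)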

Cost Tracking:

# Example cost tracking (rates below are illustrative placeholders)
PRICING = {
    # model: (USD per 1K input tokens, USD per 1K output tokens)
    "claude-3-haiku": (0.00025, 0.00125),
}

def calculate_cost(model, input_tokens, output_tokens):
    input_rate, output_rate = PRICING[model]
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

class CostTracker:
    def __init__(self):
        self.total_cost = 0.0
        self.interactions = 0
    
    def log_usage(self, model, input_tokens, output_tokens):
        cost = calculate_cost(model, input_tokens, output_tokens)
        self.total_cost += cost
        self.interactions += 1
    
    def get_average_cost(self):
        return self.total_cost / self.interactions if self.interactions > 0 else 0.0
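
Usage, with token counts taken from your LLM client's response metadata (the model key must exist in PRICING):

tracker = CostTracker()
tracker.log_usage("claude-3-haiku", input_tokens=1200, output_tokens=300)
print(f"Average cost per interaction: ${tracker.get_average_cost():.4f}")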

Caching Strategies

Response Caching:

Cache frequently asked questions:

import hashlib

class ResponseCache:
    def __init__(self, max_size=1000):
        self.cache = {}  # dicts preserve insertion order in Python 3.7+
        self.max_size = max_size
    
    def get_cache_key(self, query):
        return hashlib.md5(query.encode()).hexdigest()
    
    def get(self, query):
        key = self.get_cache_key(query)
        return self.cache.get(key)
    
    def set(self, query, response):
        if len(self.cache) >= self.max_size:
            # Evict the oldest entry (first key in insertion order)
            oldest = next(iter(self.cache))
            del self.cache[oldest]
        key = self.get_cache_key(query)
        self.cache[key] = response
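
Typical usage wraps the LLM call; generate_response here is a stand-in for your own generation function:

cache = ResponseCache(max_size=1000)
response = cache.get(user_query)
if response is None:
    response = generate_response(user_query)  # your LLM call
    cache.set(user_query, response)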

API Response Caching:

  • Cache platform API responses

  • Implement TTL (time-to-live) expiry (see the sketch after this list)

  • Use Redis for distributed caching

  • Clear cache strategically
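
A minimal in-process TTL cache, as one way to implement the expiry idea above (the 300-second default is an arbitrary example; Redis provides the same behavior via key expiry in the distributed case):

import time

class TTLCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)
    
    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:
            # Expired: drop the entry and report a miss
            del self.store[key]
            return None
        return value
    
    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)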

Scaling Strategies

Horizontal Scaling:

  • Multiple agent instances

  • Load balancing

  • Distributed processing

  • Shared state management (see the Redis sketch below)
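
One common pattern for shared state is an atomic counter in Redis, so that rate limits hold across every instance; a sketch using the redis-py client (host, key prefix, and limit are placeholders):

import time
import redis

r = redis.Redis(host="localhost", port=6379)

def try_acquire_post_slot(limit=100):
    # Count actions across all agent instances for the current hour
    key = "agent:posts:" + time.strftime("%Y%m%d%H")
    count = r.incr(key)  # atomic across processes and machines
    if count == 1:
        r.expire(key, 3600)  # first writer starts the hourly window
    return count <= limit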

Vertical Scaling:

  • Increase memory

  • More CPU cores

  • Faster network

  • SSD storage

Queue-Based Processing:

import queue
import threading

class AgentQueue:
    def __init__(self, num_workers=4):
        self.queue = queue.Queue()
        self.workers = []
        for _ in range(num_workers):
            # Daemon threads won't keep the process alive on shutdown
            worker = threading.Thread(target=self.process_queue, daemon=True)
            worker.start()
            self.workers.append(worker)
    
    def add_task(self, task):
        self.queue.put(task)
    
    def process_queue(self):
        while True:
            task = self.queue.get()
            try:
                task.execute()
            except Exception as exc:
                # A failed task shouldn't kill the worker thread
                print(f"Task failed: {exc}")
            finally:
                self.queue.task_done()
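
Tasks can be any object that exposes an execute() method; my_task below is a placeholder:

agent_queue = AgentQueue(num_workers=4)
agent_queue.add_task(my_task)
agent_queue.queue.join()  # block until all queued tasks finish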
