# 🧬 SIA-1 Documentation
SIA-1 is an advanced self-evolving AI agent system featuring automated testing, error correction, LLM-guided evolution, and comprehensive monitoring capabilities.
## 🚀 Quick Start
### Prerequisites
- Docker Desktop - Container orchestration (must be running)
- Python 3.9+ - Main runtime environment
- LM Studio - Local LLM service for AI-powered evolution
### Installation

```bash
# Clone the repository
git clone <repository-url>
cd SIA-1

# Install Python dependencies
pip install -r requirements.txt

# Verify Docker is running
docker --version
docker ps
```
### Configuration

```bash
# Configure your local LLM (interactive setup)
python configure_llm.py

# This will:
# - Test the connection to LM Studio
# - Detect available models
# - Update the configuration automatically
```
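If you want to verify the LM Studio server by hand, a minimal check against its OpenAI-compatible endpoint looks like the sketch below (this assumes the default server address `http://localhost:1234` and the third-party `requests` package):

```python
import requests

# Ask the LM Studio server which models are currently loaded.
resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()
print([model["id"] for model in resp.json().get("data", [])])
```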
### Running the System

```bash
# Option 1: Standard Evolution
python main.py

# Option 2: Automated Testing with Error Fixing
python automated_test_runner.py
```
### What Happens Next
The system will:
- 🌐 Start the monitoring dashboard at `http://localhost:8080`
- 🧬 Begin evolution with generation 0
- 🔍 Detect and auto-fix errors using AI (up to 3 attempts per issue)
- 📈 Continuously evolve agents with never-ending improvement
- 🔄 Handle graceful shutdown with cleanup on Ctrl+C (an illustrative pattern follows this list)
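The Ctrl+C behavior follows the standard Python signal-handling pattern; the sketch below illustrates the idea and is not SIA-1's actual shutdown code:

```python
import signal
import sys

def handle_shutdown(signum, frame):
    # Stand-in for SIA-1's cleanup: stopping containers, flushing logs,
    # and saving evolution state before exiting.
    print("Shutting down gracefully...")
    sys.exit(0)

signal.signal(signal.SIGINT, handle_shutdown)  # SIGINT is raised by Ctrl+C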
## 🏢 System Architecture

### Evolution Pipeline
Each generation moves through the following stages (sketched in code after this list):
- Intent Generation - AI analyzes research content and previous-generation feedback
- Code Evolution - The LLM generates improved agent code with full validation
- Automated Testing - Docker-based execution with comprehensive error detection
- Error Correction - AI automatically fixes detected issues (up to 3 attempts)
- Performance Evaluation - Multi-criteria benchmarking and scoring
- Continuous Improvement - Never-ending evolution with robust fallbacks
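The self-contained sketch below shows how these stages fit together as a loop. All function bodies are stand-in stubs with placeholder names; the real implementations live in SIA-1's modules (`main.py`, `llm_interface.py`, `automated_test_runner.py`):

```python
# Stubs standing in for SIA-1's real components.
def generate_intent(research, feedback): return "improve error handling"
def evolve_code(code, intent): return code + f"\n# evolved: {intent}"
def run_in_docker(code): return {"ok": True, "errors": []}
def fix_errors(code, errors): return code
def evaluate(result): return 1.0 if result["ok"] else 0.0

def evolution_loop(agent_code, research, max_generations=3):
    feedback, best_score = "", 0.0
    for generation in range(max_generations):  # SIA-1 itself never stops
        intent = generate_intent(research, feedback)   # 1. Intent Generation
        candidate = evolve_code(agent_code, intent)    # 2. Code Evolution
        result = run_in_docker(candidate)              # 3. Automated Testing
        for _ in range(3):                             # 4. Error Correction
            if result["ok"]:
                break
            candidate = fix_errors(candidate, result["errors"])
            result = run_in_docker(candidate)
        score = evaluate(result)                       # 5. Performance Evaluation
        if result["ok"] and score >= best_score:       # 6. Continuous Improvement
            agent_code, best_score = candidate, score
        feedback = f"generation {generation}: score={score}"
    return agent_code

print(evolution_loop("# base agent", "research notes"))
```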
### Key Components

| Component | Description |
|-----------|-------------|
| `main.py` | Main evolution orchestrator with graceful shutdown |
| `llm_interface.py` | LM Studio integration with configurable models |
| `automated_test_runner.py` | AI-powered testing and error fixing |
| `configure_llm.py` | Easy LLM model configuration utility |
| `monitoring_dashboard.py` | Real-time web dashboard with container metrics |
## 🧬 Base Agent Evolution
SIA-1 can evolve any base agent placed in the `/DNA` folder. The system continuously improves your agent's functionality through AI-guided modifications.
### How It Works
- Place Your Agent: Put your base agent code in the `/DNA` folder as `agent.py`
- Start Evolution: Run SIA-1 and it will analyze your agent's structure and functionality
- AI-Guided Improvement: The LLM generates evolved versions with enhanced capabilities
- Automatic Testing: Each evolved version is tested in isolated Docker containers
- Performance Selection: The best-performing agents survive to the next generation
- Continuous Evolution: The process repeats indefinitely, creating better agents
### Basic Agent Template
Your base agent should follow this basic structure to ensure proper evolution. Save it as `agent.py` in your `/DNA` folder. The skeleton below is minimal; the class name `Agent` and the field values are illustrative:

```python
# Minimal agent implementation that meets SIA-1's evolution requirements.
class Agent:
    def __init__(self):
        self.name = "base_agent"  # REQUIRED: initialize the agent

    def run(self):
        # REQUIRED: main execution method; returns True/False for success
        return True

    def get_status(self):
        # REQUIRED: return agent status information
        return {"name": self.name, "status": "idle"}

    # OPTIONAL methods that enhance evolution:
    # get_performance_metrics() for detailed performance data,
    # _monitor_performance() for performance monitoring.
```
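To sanity-check the template before starting evolution, you can append a small entry point to the bottom of `agent.py`:

```python
if __name__ == "__main__":
    agent = Agent()
    print("run succeeded:", agent.run())
    print("status:", agent.get_status())
```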
### Agent Requirements
For successful evolution, your base agent should include:
- Measurable Performance: Clear metrics that can be benchmarked, such as accuracy, speed, or efficiency (see the sketch after this list)
- Modular Structure: Well-organized code that can be easily modified and improved
- Error Handling: Robust error handling to prevent crashes during evolution
- Documentation: Clear comments explaining the agent's functionality
- Dependencies: All required imports and dependencies clearly specified
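As an example of measurable performance, here is a hedged sketch of what the optional `get_performance_metrics()` method might return; the metric names are illustrative, not a required schema:

```python
import time

class MeasurableAgent:
    """Toy agent exposing benchmarkable metrics (placeholder names)."""

    def run(self):
        time.sleep(0.01)  # stand-in for real work
        return True

    def get_performance_metrics(self):
        start = time.perf_counter()
        success = self.run()
        return {
            "success": success,
            "runtime_seconds": time.perf_counter() - start,
            "accuracy": 0.0,  # replace with a task-specific quality score
        }

print(MeasurableAgent().get_performance_metrics())
```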
### Evolution Possibilities
The AI can evolve your agent in many ways:
- Algorithm Optimization: Improve core algorithms and logic
- Performance Enhancements: Optimize speed and resource usage
- Feature Addition: Add new capabilities and functionality
- Error Handling: Improve robustness and error recovery
- Code Quality: Refactor for better maintainability and readability
- Architecture Improvements: Enhance overall design and structure
### Evolution Tracking
Monitor your agent's evolution progress:
- Web Dashboard: Real-time evolution metrics at `http://localhost:8080`
- Evolution History: All evolved versions saved in `/DNA_evolved/` (see the listing sketch below)
- Performance Graphs: Visual tracking of improvement over generations
- Generation Logs: Detailed logs of each evolution cycle
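A quick way to inspect the saved generations from a script, assuming evolved versions are stored as Python files directly in `/DNA_evolved/`:

```python
from pathlib import Path

# List evolved agent versions in the order they were created.
for path in sorted(Path("DNA_evolved").glob("*.py"), key=lambda p: p.stat().st_mtime):
    print(path.name)
```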
## 🎯 LLM Configuration Guide

### Optimal Settings for LM Studio
With LM Studio's maximum output length set to 8000 tokens, the recommended settings are as follows.

Current configuration (in `config.py`):
```python
# LLM settings
LLM_MAX_TOKENS_INTENT = 500  # For intent generation
LLM_MAX_TOKENS_CODE = 4000   # For code generation
LLM_TIMEOUT_SECONDS = 180    # 3-minute timeout
```
Recommended optimizations:

```python
# Increase these values in config.py for better results
LLM_MAX_TOKENS_INTENT = 800  # More detailed intents
LLM_MAX_TOKENS_CODE = 6000   # More complete code (~75% of the 8000-token limit)
LLM_TIMEOUT_SECONDS = 240    # 4 minutes for complex generation
```
### LM Studio Server Settings
Recommended server-side settings (shown as a per-request example after this list):
- Context Length: Set to 8192 or higher
- Max Tokens: 8000 (as configured)
- Temperature: 0.7 (a good balance between creativity and consistency)
- Top P: 0.95
- Repeat Penalty: 1.1
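Most of these values can also be supplied per request through LM Studio's OpenAI-compatible chat endpoint. A minimal sketch, assuming the default server address and a placeholder model name (repeat penalty is not part of the standard OpenAI request body, so it is left to the server settings):

```python
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "qwen3-coder-30b-a3b-instruct",  # placeholder; use your loaded model
        "messages": [{"role": "user", "content": "Say hello."}],
        "temperature": 0.7,
        "top_p": 0.95,
        "max_tokens": 200,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```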
### Supported LLM Models
Tested and optimized for:
- Qwen Coder Models: `qwen3-coder-30b-a3b-instruct` (recommended)
- CodeLlama Models: `codellama-34b-instruct`
- DeepSeek Coder: `deepseek-coder-33b-instruct`
- Other OpenAI-compatible APIs: Any model supporting the OpenAI chat format
## 🔍 Troubleshooting FAQ
**Q: LM Studio connection failed - how do I fix this?**
A: Follow these steps to resolve LM Studio connection issues:
- Open the LM Studio application on your computer
- Load a code-generation model (e.g., Qwen Coder, CodeLlama, or DeepSeek)
- Click the "Start Server" button (green play button) in LM Studio
- Verify the endpoint in your config matches (usually `http://localhost:1234`)
- Test the connection by running `python configure_llm.py`
**Q: Docker is not running - what should I do?**
A: Docker must be running for SIA-1 to work properly:
- Windows: Start Docker Desktop from your Start menu or system tray
- Linux: Run `sudo systemctl start docker` in a terminal
- macOS: Start Docker Desktop from Applications
- Verify: Run `docker ps` to confirm Docker is working
**Q: Port 8080 is already in use - how can I change it?**
A: You can change the dashboard port in the configuration:
- Open `config.py` in your SIA-1 directory
- Find the `SYSTEM_CONFIG` section
- Change the dashboard port to an available one (e.g., 8081, 8082)
- Example: `"port": 8081` instead of `"port": 8080`
- Restart SIA-1 and access the dashboard at the new port
Q: Getting "Model Context Overflow" errors - what does this mean?
A: This happens when your LLM model has a smaller context window than expected:
- Edit
config.py
and reduce max_context_tokens
(try 4096 for smaller models)
- Enable auto-truncation by setting
truncate_input: True
- Consider using a model with a larger context window (8K+ tokens recommended)
- Reduce the complexity of your evolution research content
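For example, the adjustment in `config.py` might look like the sketch below; the option names come from this FAQ, but the surrounding structure (here an `LLM_CONFIG` dict) is hypothetical, so match your actual file:

```python
# Hypothetical shape; only the option names are taken from this FAQ.
LLM_CONFIG = {
    "max_context_tokens": 4096,  # try smaller values for small models
    "truncate_input": True,      # auto-truncate oversized prompts
}
```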
**Q: Container builds are failing - how do I debug this?**
A: Container build failures are usually due to resource or syntax issues:
- Check available disk space: `docker system df`
- Clean up old containers: `docker system prune -f`
- Check the generated agent code in `/DNA_evolved/` for syntax errors
- View specific container logs: `docker logs [container_name]`
- Try running the cleanup utility: `python cleanup_containers.py`
**Q: The system stops evolving after a few generations - why?**
A: This is usually due to accumulated errors or resource constraints:
- Check system logs for error patterns
- Ensure LM Studio is still running and responsive
- Verify Docker has sufficient resources (CPU, memory, disk)
- Check that generated code is valid Python (look in `/DNA_evolved/`)
- Restart with the automated test runner: `python automated_test_runner.py`
**Q: Dashboard shows "No data" or won't load - what's wrong?**
A: Dashboard issues are typically network or database related:
- Confirm the dashboard is running on the correct port (check the terminal output)
- Try accessing `http://localhost:8080` directly in your browser
- Check that the SQLite database file `agents.db` exists (a quick check is sketched below)
- Restart SIA-1 to reinitialize the dashboard
- Check the browser console for JavaScript errors
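The database check can be done with Python's standard library; the filename `agents.db` comes from the answer above, while the table names will depend on your version:

```python
import sqlite3
from pathlib import Path

db = Path("agents.db")
print("exists:", db.exists())
if db.exists():
    # List the tables the dashboard database currently contains.
    conn = sqlite3.connect(db)
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    print("tables:", [name for (name,) in tables])
    conn.close()
```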
**Q: Evolution is too slow - how can I speed it up?**
A: Several factors affect evolution speed (example tweaks follow this list):
- Reduce token limits: Lower `LLM_MAX_TOKENS_CODE` in `config.py`
- Faster model: Use a smaller, faster LLM model in LM Studio
- Reduce population: Lower `POPULATION_SIZE` in the configuration
- Hardware: Ensure adequate CPU/GPU resources for your LLM
- Network: Use a local LLM (LM Studio) instead of API calls
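For example, `config.py` tweaks that trade some output quality for speed; the constant names are the ones referenced in this FAQ and the configuration guide, and the values are illustrative:

```python
# Faster, lower-cost evolution cycles; tune to taste.
LLM_MAX_TOKENS_CODE = 2000  # shorter code generations
POPULATION_SIZE = 3         # fewer containers per generation
LLM_TIMEOUT_SECONDS = 120   # fail fast on slow generations
```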
**Q: Getting Unicode/encoding errors in generated code - how to fix?**
A: Unicode errors have been largely fixed, but if they occur:
- Ensure all files are saved with UTF-8 encoding
- Check that your LLM model handles Unicode properly
- The system automatically cleans Unicode characters from generated code (an illustration follows this list)
- If the errors persist, try a different LLM model that handles encoding better
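If you want to pre-clean generated code yourself, one common approach is sketched below; this illustrates the idea and is not SIA-1's exact routine:

```python
import unicodedata

def clean_code(text: str) -> str:
    # Normalize, then drop any characters that cannot be encoded as ASCII.
    normalized = unicodedata.normalize("NFKD", text)
    return normalized.encode("ascii", "ignore").decode("ascii")

print(clean_code("x = \u201csmart quotes\u201d"))  # -> x = smart quotes
```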
**Q: How do I view logs and debug information?**
A: SIA-1 provides several logging options:
- Real-time logs: `tail -f system.log` (Linux/macOS), or open the `system.log` file directly
- Test execution logs: Check files named `test_execution_log_*.txt`
- Container activity: Run `docker stats` to monitor resource usage
- Specific container logs: `docker logs [container_name]`
- Evolution history: Check the `/DNA_evolved/` directory
**Q: Installation fails with permission errors - what should I do?**
A: Permission issues are common during setup:
- Windows: Run the terminal as Administrator
- Linux/macOS: Use `sudo` for system-level installations
- Python packages: Try `pip install --user -r requirements.txt`
- Docker: Ensure your user is in the `docker` group (Linux)
- File permissions: Check that the SIA-1 directory is writable
**Q: The system uses too much memory/CPU - how can I limit resources?**
A: Resource usage can be controlled through configuration:
- Container limits: Set Docker resource limits in `docker-compose.yml`
- Population size: Reduce `POPULATION_SIZE` to create fewer containers
- Cleanup frequency: Clean up finished containers more often
- LLM model: Use a smaller, less resource-intensive model
- Generation limit: Set `MAX_GENERATIONS` to stop after a fixed number of generations