# Troubleshooting
Quick fixes for common issues.
## Installation

### “command not found: insidellms”

```bash
# Activate the virtual environment where insideLLMs is installed
source .venv/bin/activate
# or invoke the CLI module directly
python -m insideLLMs.cli --version
```

### pip fails

```bash
pip install --upgrade pip
python --version  # Must be 3.10+
```

### Missing dependencies

```bash
pip install -e ".[all]"
```
## API Key Issues

### “OPENAI_API_KEY not set”

**Cause:** Environment variable not configured.

**Solutions:**

```bash
# Set for current session
export OPENAI_API_KEY="sk-..."

# Add to ~/.bashrc or ~/.zshrc for persistence
echo 'export OPENAI_API_KEY="sk-..."' >> ~/.zshrc

# Verify
echo $OPENAI_API_KEY
```
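If you prefer to check from Python (handy inside notebooks or scripts), a minimal sketch that avoids printing the secret itself:

```python
import os

def check_api_key(name: str = "OPENAI_API_KEY") -> bool:
    """Report whether an API key variable is set and non-empty."""
    value = os.environ.get(name, "")
    if not value:
        print(f"{name} is not set")
        return False
    # Show only a masked preview, never the full key.
    print(f"{name} is set ({value[:3]}***)")
    return True
```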
### “Invalid API key”

**Cause:** Incorrect key format or expired key.

**Solutions:**

- Verify the key format (OpenAI: `sk-...`, Anthropic: `sk-ant-...`)
- Check the key in the provider dashboard
- Regenerate if expired
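A cheap prefix check can catch a key pasted in from the wrong provider before any request is made. The `EXPECTED_PREFIXES` mapping below is illustrative, not part of the insideLLMs API:

```python
EXPECTED_PREFIXES = {
    "openai": "sk-",
    "anthropic": "sk-ant-",
}

def looks_like_key(provider: str, key: str) -> bool:
    """Sanity-check the key's prefix; does not validate the key itself."""
    prefix = EXPECTED_PREFIXES.get(provider.lower())
    return prefix is not None and key.startswith(prefix)
```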
### Rate limiting errors

**Cause:** Too many requests.

**Solutions:**

```yaml
# Reduce concurrency
async: true
concurrency: 3  # Lower number

# Add rate limiting
rate_limit:
  requests_per_minute: 60
```
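If the config-level limiter is not enough, retrying with exponential backoff on the client side is a common fallback. This sketch is generic and not tied to the insideLLMs API:

```python
import random
import time

def with_backoff(call, retries: int = 5, base_delay: float = 1.0):
    """Retry `call` with exponential backoff plus jitter on failure."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus up to 1s of jitter.
            time.sleep(base_delay * 2 ** attempt + random.random())
```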
## Configuration Issues

### “Dataset file not found”

**Cause:** Relative path resolved incorrectly.

**Important:** Paths are relative to the config file’s directory, not the current working directory.

**Solution:**

```yaml
# If config is at /project/configs/harness.yaml
dataset:
  path: ../data/test.jsonl  # Resolves to /project/data/test.jsonl
```
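The resolution rule can be reproduced with `pathlib` when debugging a path by hand (a sketch of the rule, not the harness's internal code):

```python
from pathlib import Path

def resolve_dataset_path(config_path: str, dataset_path: str) -> Path:
    """Resolve a dataset path relative to the config file's directory."""
    return (Path(config_path).parent / dataset_path).resolve()
```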
### YAML syntax errors

**Cause:** Indentation or formatting issues.

**Solutions:**

```bash
# Validate YAML
python -c "import yaml; yaml.safe_load(open('config.yaml'))"
```

Common fixes:

- Use spaces, not tabs
- Ensure consistent indentation (2 spaces)
- Quote strings with special characters
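Tab characters in indentation are the most common culprit and are invisible in many editors; a small helper (hypothetical, not part of insideLLMs) can flag them:

```python
def find_tab_indented_lines(text: str) -> list[int]:
    """Return 1-based line numbers whose indentation contains a tab."""
    bad = []
    for i, line in enumerate(text.splitlines(), 1):
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(i)
    return bad
```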
### “Unknown model type”

**Cause:** Registry not initialized or typo.

**Solutions:**

```python
from insideLLMs.registry import ensure_builtins_registered
ensure_builtins_registered()

# Check available types
from insideLLMs.registry import model_registry
print(model_registry.list())
```
## Runtime Issues

### “Refusing to overwrite directory”

**Cause:** Safety guard preventing overwrite of non-run directories.

**Solutions:**

```bash
# Use the --overwrite flag
insidellms run config.yaml --run-dir ./my_run --overwrite

# Or use a new directory
insidellms run config.yaml --run-dir ./my_run_v2

# Or delete manually
rm -rf ./my_run
```
### Memory errors with large datasets

**Cause:** Loading the entire dataset into memory.

**Solutions:**

```yaml
# Limit examples
max_examples: 1000

# Use streaming (if supported)
dataset:
  format: jsonl
  path: large_file.jsonl
  streaming: true
```
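Streaming JSONL amounts to parsing one record at a time instead of loading the whole file; a generic sketch of the idea (not insideLLMs internals):

```python
import json
from typing import Iterator

def iter_jsonl(path: str) -> Iterator[dict]:
    """Yield one parsed record at a time; memory use stays constant."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                yield json.loads(line)
```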
### Timeout errors

**Cause:** Slow API responses.

**Solutions:**

```yaml
model:
  type: openai
  args:
    model_name: gpt-4o
    timeout: 120  # Increase timeout
```
### “Resume validation failed”

**Cause:** Prompt set changed between runs.

**Solutions:**

- Use the same config for resume
- Or start fresh: delete the run directory and re-run
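One way to see whether two configs would produce the same prompt set is to fingerprint the prompts. This helper is hypothetical, not part of insideLLMs:

```python
import hashlib
import json

def prompt_fingerprint(prompts: list[str]) -> str:
    """Stable SHA-256 over an ordered prompt list; any edit changes it."""
    payload = json.dumps(prompts, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()
```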
## Diff Issues

### “Unexpected changes in diff”

**Cause:** Non-deterministic model outputs.

**Solutions:**

```yaml
# Use DummyModel for determinism testing
models:
  - type: dummy
    args:
      response: "Fixed response"
```
### “Diff fails on latency”

**Cause:** Comparing volatile fields.

**Solutions:**

```bash
insidellms diff baseline candidate --ignore-fields latency_ms
```
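The idea behind `--ignore-fields` is to drop volatile keys before comparing records; a generic sketch of that comparison:

```python
def records_equal(a: dict, b: dict, ignore: set[str]) -> bool:
    """Compare two records after dropping fields expected to vary."""
    strip = lambda r: {k: v for k, v in r.items() if k not in ignore}
    return strip(a) == strip(b)
```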
## Output Issues

### “Invalid JSONL record”

**Cause:** Corrupted or incomplete write.

**Solutions:**

```bash
# Validate file
python -c "
import json
with open('records.jsonl') as f:
    for i, line in enumerate(f, 1):
        try:
            json.loads(line)
        except json.JSONDecodeError:
            print(f'Error on line {i}')
"

# Resume from the incomplete run
insidellms run config.yaml --run-dir ./my_run --resume
```
### Empty report.html

**Cause:** No records to report.

**Solutions:**

- Check that `records.jsonl` exists and has content
- Run the harness with `--skip-report false`
- Generate manually: `insidellms report ./run_dir`
## Environment Check

Run diagnostics:

```bash
insidellms doctor --verbose
```

This checks:

- Python version
- Required dependencies
- Optional dependencies
- API key environment variables
- Write permissions
## Getting Help

### Check logs

```bash
insidellms run config.yaml --verbose
```

### Debug mode

```bash
insidellms run config.yaml --debug
```

### Report an issue

Include:

- insideLLMs version: `insidellms --version`
- Python version: `python --version`
- OS: `uname -a`
- Full error message
- Minimal reproducible config