## Common Issues
### Authentication Errors
#### 401 Unauthorized - Authentication credentials not provided

Make sure every request includes your API key (for example, in the Authorization header).
#### 403 Forbidden - Invalid API key
- Log in to https://app.aimon.ai
- Go to Account → API Key
- Generate a new API key
- Update your application with the new key
#### 429 Too Many Requests - Rate limit exceeded
- Implement exponential backoff
- Add delays between requests
- Cache responses when possible
- Contact support for higher limits
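The first two suggestions can be sketched in Python. This is a generic retry pattern, not an official client feature; `call_with_backoff` and the `status_code` attribute on the raised exception are illustrative assumptions:

```python
import random
import time

def backoff_delays(max_retries=5, base=1.0, cap=30.0):
    """Exponential backoff with jitter: ~1s, ~2s, ~4s, ... capped at `cap`."""
    for attempt in range(max_retries):
        delay = min(cap, base * 2 ** attempt)
        yield delay + random.uniform(0, delay * 0.1)  # small jitter avoids thundering herd

def call_with_backoff(request_fn):
    """Retry request_fn on rate-limit errors.

    request_fn is any callable that raises an exception carrying a
    status_code of 429 when rate-limited (hypothetical convention).
    """
    for delay in backoff_delays():
        try:
            return request_fn()
        except Exception as exc:
            if getattr(exc, "status_code", None) != 429:
                raise  # not a rate-limit error; don't retry
            time.sleep(delay)
    raise RuntimeError("rate limit: retries exhausted")
```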
### Dataset Issues
#### File upload fails

Possible causes:
- File too large (>100 MB)
- File corrupted
- Incorrect encoding (not UTF-8)
- Unsupported file format
What to try:
- Check file size: `ls -lh yourfile.csv`
- Verify the file opens correctly
- Convert to UTF-8: `iconv -f ISO-8859-1 -t UTF-8 input.csv > output.csv`
- Ensure the file has the correct extension
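As a pre-upload sanity check, UTF-8 validity can also be tested locally. A minimal Python sketch (`is_utf8` is a hypothetical helper, not part of any official tooling):

```python
def is_utf8(path):
    """Return True if the file at `path` decodes cleanly as UTF-8."""
    try:
        with open(path, "rb") as f:
            f.read().decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```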
#### ZIP file rejected for MULTI_FOLDER

The ZIP must satisfy:
- Folders must be at ZIP root, not nested
- No empty folders
- No files at root level
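These layout rules can be checked locally before uploading. A Python sketch using the standard `zipfile` module — `check_multi_folder_zip` is a hypothetical pre-flight helper, and the server's exact validation may differ:

```python
import zipfile

def check_multi_folder_zip(path):
    """Report layout problems: files must live exactly one folder deep,
    with folders at the ZIP root and no files at the root level."""
    problems = []
    with zipfile.ZipFile(path) as zf:
        # Ignore directory entries; inspect actual files only.
        names = [n for n in zf.namelist() if not n.endswith("/")]
        for name in names:
            if "/" not in name:
                problems.append(f"file at root level: {name}")
            elif name.count("/") > 1:
                problems.append(f"nested too deep: {name}")
    return problems
```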
#### Wrong dataset type chosen
The dataset type cannot be changed after creation. To switch types:
- Delete the dataset
- Create new dataset with correct type
- Re-upload files
### Analysis & Specification Issues
#### Analysis takes too long
- Check status endpoint for errors
- Verify dataset files are not corrupt
- Try with smaller dataset first
- Contact support if persistent
#### Analysis fails with error

Possible causes:
- Dataset files corrupted or unreadable
- Unsupported file format
- Files not UTF-8 encoded
- Dataset too small (<3 samples)
What to try:
- Check the error message in the status response
- Verify all files are valid
- Ensure at least 3-5 samples in dataset
- Test with known-good data first
#### Generated spec doesn't capture requirements
- Manually edit the specification
- Add specific requirements explicitly
- Provide more diverse seed data
- Add examples of edge cases
#### YAML syntax error when updating spec
- Validate YAML: https://www.yamllint.com/
- Use spaces for indentation, not tabs
- Escape special characters
- Use `|` for multiline strings
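For example, a multiline value written with `|` and two-space indentation (the field names here are illustrative, not a confirmed spec schema):

```yaml
requirements:
  format_notes: |
    Each record must contain a header row.
    Dates use ISO 8601 (YYYY-MM-DD).
```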
### Generation Issues
#### Generation stuck or very slow

Normal generation times:
- Short samples: 5-30 seconds each
- Long samples: 2-10 minutes each
- Check status for progress updates
- High system load may slow processing
- Large batches take longer
- Contact support if no progress for 30+ minutes
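Status checks can be automated with a polling loop. A Python sketch — the `status`/`error` response fields and the READY/FAILED values are assumptions borrowed from this guide's terminology; adapt them to the actual API:

```python
import time

def wait_until_ready(fetch_status, timeout_s=600, poll_s=5):
    """Poll a status-returning callable until READY, FAILED, or timeout.

    fetch_status: callable returning a dict like {"status": ..., "error": ...}
    (hypothetical response shape -- check the API's actual schema).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        info = fetch_status()
        if info["status"] == "READY":
            return info
        if info["status"] == "FAILED":
            raise RuntimeError(f"task failed: {info.get('error')}")
        time.sleep(poll_s)  # avoid hammering the status endpoint
    raise TimeoutError("task did not finish in time")
```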
#### Generation fails immediately

Possible causes:
- Specification not in READY status
- Wrong sample type for dataset
- Invalid parameters
- Model timeout
What to try:
- Verify the spec status is READY
- Use long samples for structured data (CSV, JSON)
- Check error message in status response
- Try with smaller batch (5-10 samples)
#### "Short samples not supported" error
- Use `sample_type: "long"` instead
- Short samples don't work for:
  - Single-file CSV/JSON/JSONL datasets
  - Complex structured data
#### Poor sample quality
- Review and clarify specification requirements
- Add more specific constraints
- Provide better quality seed data
- Try different model (e.g., claude-sonnet vs haiku)
- Adjust temperature (0.5-0.8 for balance)
#### Samples too similar / not enough variation
- Increase temperature (try 0.8-0.9)
- Add more variation axes to specification
- Ensure variation values are distinct
- Generate larger batch for better distribution
- Provide more diverse seed data
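Taken together, a variation-focused generation request might look like the sketch below. Every field name here is an assumption based on this guide's wording, not a confirmed API schema:

```python
# Hypothetical generation parameters assembled from this guide's advice.
params = {
    "sample_type": "long",                    # required for CSV/JSON/JSONL datasets
    "model": "anthropic/claude-sonnet-4-5",   # default, recommended
    "temperature": 0.85,                      # 0.8-0.9 increases variation
    "num_samples": 50,                        # larger batches distribute better
}
```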
### Evaluation Issues
#### Low conformance rate (<70%)
- Review failed samples to identify patterns
- Clarify vague requirements in spec
- Simplify complex or contradictory requirements
- Add format examples to specification
- Try different model
#### Unbalanced classification distribution
- Clarify variation axis descriptions
- Make all variation values equally specific
- Reduce number of variation axes
- Generate larger batch
- Regenerate with adjusted spec
#### Evaluation takes too long
- Wait up to 15 minutes
- Check evaluation status periodically
- Large batches take proportionally longer
- Contact support if stuck >30 minutes
### Local Development Issues
#### Docker containers won't start

Common causes:
- Docker not running
- Port conflicts
- Insufficient resources
#### LocalStack not responding
#### Database migration errors
#### Service can't connect to database/redis
- Verify all services are running: `docker compose ps`
- Check service logs: `docker compose logs postgres redis`
- Verify your /etc/hosts configuration
- Restart affected services: `docker compose restart service-name`
## Error Messages Reference
### Common Error Codes
| Error | Status | Meaning | Solution |
|---|---|---|---|
| Authentication failed | 401 | Invalid API key | Check API key format and validity |
| Permission denied | 403 | Insufficient permissions | Verify API key has access |
| Resource not found | 404 | Dataset/spec doesn’t exist | Check resource ID is correct |
| Invalid parameters | 400 | Bad request data | Review API documentation |
| Rate limit exceeded | 429 | Too many requests | Implement backoff/retry logic |
| Server error | 500 | Internal error | Retry or contact support |
| Timeout | 504 | Request took too long | Reduce batch size or retry |
## Getting Help
### Before Contacting Support
Gather this information:

- Error message (complete error text)
- Request details (endpoint, parameters)
- Resource IDs (dataset_id, spec_id, task_id)
- Timestamp of when error occurred
- Steps to reproduce the issue
### Debugging Tips
Enable verbose logging in your HTTP client so you can capture complete request and response details.

## FAQ
### How long does generation take?
- Short samples: 5-30 seconds each
- Long samples: 2-10 minutes each
- Total time depends on batch size and available workers
- Large batches (100+) may take several hours
### What's the maximum dataset size?
- Maximum file size: 100 MB per file
- Maximum samples: 1000 per dataset
- Recommended: 10-50 samples for best results
- Minimum: 3-5 samples for meaningful analysis
### Can I cancel a running generation?
### How do I improve sample quality?
- Clarify specification requirements
- Provide higher quality seed data
- Use appropriate sample type (short vs long)
- Adjust temperature (0.7 is default)
- Try different models
- Iterate: generate small batches, review, refine
### What models are supported?
- `anthropic/claude-sonnet-4-5` (default, recommended)
- `anthropic/claude-haiku-4-5` (faster, lower cost)
- `gemini/gemini-2.5-pro` (complex reasoning)
- `openai/gpt-4.1` (alternative high quality)
### Are generated samples copyright-free?

That depends on:
- Your seed data licensing
- Your use case and jurisdiction
- Consult legal counsel for specific questions

