This comprehensive guide covers the complete workflow from seed data upload through result analysis.

Upload seed data

Choosing an upload mode

Dataframer supports three upload modes depending on your data structure:
Single file mode
When to use: Structured datasets in tabular or line-delimited format
Supported formats:
  • CSV: Tabular data with optional headers
  • JSONL: One flat JSON object per line (no nesting)
  • JSON: Array of flat objects only (no nesting)
Constraints:
  • Max file size: 50MB
  • Max columns/fields: 40
Example use cases:
  • Product reviews CSV with columns: review_text, rating, date
  • Chat conversations JSONL with fields: user_message, assistant_response, tone
  • API responses JSON with array of objects
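For instance, a chat-conversations seed file for this mode might look like the following JSONL (illustrative values; field names taken from the example use case above):

{"user_message": "How do I reset my password?", "assistant_response": "Click 'Forgot password' on the login page.", "tone": "helpful"}
{"user_message": "Cancel my subscription, please.", "assistant_response": "Sure. I have started the cancellation process.", "tone": "professional"}

Note that every object is flat: nested objects or arrays inside a line are not supported.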
Multi-file mode
When to use: Collection of independent samples, each in its own file
Supported formats: TXT, MD, JSON, CSV, JSONL
Constraints:
  • Max 1,000 files
  • 1MB per file
  • 50MB total
  • All files must use same format
Structure: Flat folder of files, each file = one sample
Example use cases:
  • Folder of 100 product descriptions (100 .txt files)
  • Collection of Python functions (100 .py files)
  • Set of news articles (100 .md files)
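For example, the product-descriptions use case above would be uploaded as a flat folder of one file per sample (hypothetical file names):

product_descriptions/
  description_001.txt
  description_002.txt
  ...
  description_100.txt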
Multi-folder mode
When to use: Multi-file samples where each sample consists of related files
Supported formats: MD, TXT, JSON, CSV, JSONL
Constraints:
  • Minimum 2 folders required
  • Max 20 files per folder
  • Max 1,000 files total
  • 1MB per file
  • 50MB total across all folders
  • Max depth: parent/subfolder/file.txt (3 levels)
Structure: Parent folder → subfolders → files (each subfolder = one sample)
Example use cases:
  • Code repositories (homogeneous): Each folder contains main.py + utils.py + config.json
  • Mixed document sets (heterogeneous): Folder 1 has report.pdf, Folder 2 has article.md + references.json, Folder 3 has presentation.pptx + notes.txt
  • Flexible data samples (heterogeneous): Folder 1 has data.csv, Folder 2 has schema.sql + queries.sql, Folder 3 has output.json
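For example, the homogeneous code-repository use case above could be laid out as follows (hypothetical folder names; each subfolder is one sample):

code_repositories/
  repo_01/
    main.py
    utils.py
    config.json
  repo_02/
    main.py
    utils.py
    config.json

Note the layout stays within the parent/subfolder/file depth limit.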

Seed data best practices: quality over quantity

Generation heavily depends on the quality of the seeds.
  • “Quality” is defined relative to what you want to achieve: if you want to generate data with imperfections, then your seed examples should display those imperfections.
  • In some cases you can fix seed issues with the “Generation Objectives” feature, but in general it is recommended to use very high-quality seed examples.
Number of samples:
  • The minimum is 2 samples, and even 2 is often enough, but loading a more substantial number of seed examples is encouraged.
  • More samples improve inference of data properties and distributions.

Create specifications

Specification creation workflow

  1. Navigate to your uploaded dataset
  2. Click Create Spec
  3. Configure spec settings
  4. Submit and wait for analysis (1-5 minutes)
  5. Review generated spec
  6. Edit if needed

Configuration options

Generation Objectives
Natural language guidance to influence property discovery.
Purpose: Help the analyzer understand what matters in your data
Examples:
Include writing style and formality as separate properties
Don't consider text length as a variable - let it vary naturally
Add neutral sentiment alongside positive/negative
Make formal tone 80% likely, casual 20%
(The example above only works if “Generate probability distributions” is enabled)
For Python code, use Django/Flask frameworks; for JavaScript code, use Express/React frameworks
(The example above works only if “Include conditional distributions” is enabled, as it creates dependencies between properties)
How objectives flow into generation: Generation objectives influence the specification contents (both shared properties and variable properties). The spec in turn influences the distribution of generated data. Objectives only indirectly affect generation, through the spec.
Important: After spec creation completes, always review the generated spec to verify your objectives were correctly captured. If the spec doesn’t match your intent, edit it manually or regenerate with refined objectives.
Best practices:
  • Be specific about properties you want captured
  • Explicitly exclude properties you don’t want
  • Feel free to list specific values, or give instructions for generating those values
  • Suggest probability adjustments if you have strong preferences
  • Review the generated spec to ensure objectives were met
Model used to analyze seeds and create the specification. A powerful model, such as Claude Sonnet Thinking, is recommended here.
Generate probability distributions
Create explicit probability distributions for each property value.
Default: ON
When enabled:
  • Each property gets probabilities: formal: 0.6, casual: 0.4
  • Properties are sampled independently using these probabilities (unless conditional distributions are enabled)
  • Enables more controlled generation
When disabled:
  • Properties discovered but no explicit probabilities
  • Generation samples uniformly from observed values, unless you manually add probabilities in the spec
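For example, a tone property with explicit probabilities could appear in the spec roughly as follows (a minimal sketch using the same YAML structure as the conditional-distributions example below; the axis name and values are illustrative):

axis: Tone
possible_values: [formal, casual]
base_probabilities: [0.6, 0.4]  # formal: 0.6, casual: 0.4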
Include conditional distributions
Model dependencies between properties.
Default: OFF
Requires: “Generate probability distributions” must be ON
What conditional distributions are: Conditional distributions override the default probabilities based on previously selected property values. Each property has:
  • Base probabilities: Default probabilities used when no condition matches
  • Conditional probabilities: Alternative probabilities used when specific conditions are met
Example YAML structure:
axis: Framework
possible_values: [Django, Flask, Express, React, Spring]
base_probabilities: [0.2, 0.2, 0.2, 0.2, 0.2]  # Default: equal probability
conditional_probabilities:
  Language:
    Python: [0.4, 0.4, 0.1, 0.1, 0.0]  # If Language=Python: favor Django/Flask
    JavaScript: [0.0, 0.0, 0.45, 0.45, 0.1]  # If Language=JavaScript: favor Express/React
    Java: [0.0, 0.0, 0.0, 0.0, 1.0]  # If Language=Java: only Spring
How sampling works: Properties are sampled sequentially in the order they appear in the spec. For each property:
  1. Check if any conditional rule applies (based on already-selected property values)
  2. If a matching conditional rule is found, use those probabilities
  3. If no conditional rule matches, fall back to base probabilities
  4. If multiple conditional rules could apply, the first matching one is used (order defined by spec)
Example: If Language is sampled first as “Python”, then when Framework is sampled, the conditional rule Language: Python applies, so Framework uses [0.4, 0.4, 0.1, 0.1, 0.0] instead of base probabilities. (A minimal code sketch of this sampling logic appears below, after the trade-offs.)
When to enable:
  • Dataset has obvious correlations
  • You need compatibility constraints (file extension matches language)
Trade-offs:
  • More complex to edit in the UI: deleting a property requires deleting all other properties that depend on it; there are more values to view and edit
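For illustration, here is a minimal Python sketch of the sampling logic described above. It is not Dataframer’s actual implementation; it assumes the spec has been parsed into a list of dicts mirroring the YAML structure shown earlier:

import random

def sample_properties(spec):
    # 'spec' is a list of property dicts with keys: axis, possible_values,
    # base_probabilities, and (optionally) conditional_probabilities.
    selected = {}
    for prop in spec:  # properties are sampled in spec order
        probs = prop["base_probabilities"]
        # Use the first matching conditional rule, based on already-selected values
        for cond_axis, rules in prop.get("conditional_probabilities", {}).items():
            chosen = selected.get(cond_axis)
            if chosen in rules:
                probs = rules[chosen]
                break
        selected[prop["axis"]] = random.choices(prop["possible_values"], weights=probs)[0]
    return selected

With the Framework example above, once Language has been selected as Python, the Language: Python rule matches and Framework is drawn with weights [0.4, 0.4, 0.1, 0.1, 0.0].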
Discover properties not explicitly present in seeds.
Default: OFF
When enabled:
  • Example: Seeds have colors but don’t differ in brightness → spec still contains a new brightness property to create a different type of variation
When disabled:
  • Only the types of variation explicitly present in seeds are included
  • Example: Seeds have colors but don’t differ in brightness → spec will not contain a brightness property, only a color property.
Suggest values not present in seeds for each property.
Default: ON
When enabled:
  • Expands possible values beyond seeds
  • Example: Seeds have red/blue → suggests green/yellow/purple
  • Example: Seeds have Python/JavaScript → suggests Java/Go/Rust
When disabled:
  • Use when you want generation limited to observed values
  • Example: Seeds have Python/JavaScript → spec programming language property only contains Python/JavaScript
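For instance, with value suggestion ON, a programming language property discovered from Python/JavaScript seeds might end up with an expanded value list (illustrative sketch):

axis: Programming Language
possible_values: [Python, JavaScript, Java, Go, Rust]  # Java/Go/Rust suggested beyond the seeds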

Editing specifications

Once created, specs can be edited:
  • Add new properties and remove existing ones
  • Add/remove property values
  • Edit the probability distributions
  • Create and edit conditional probability distributions
The spec is only saved once you click “Save”; this creates a new numbered version of the spec. You can select the current or past versions of the spec when configuring a run.
For single-file datasets containing SQL schema and query columns, these columns are automatically recognized. This works if there is one SQL schema column and one or more query columns corresponding to that schema. Verify SQL column detection under Specs → click on the Spec → “Advanced Settings” at the bottom of the page. If the correct columns are selected there, all generated schemas and queries are guaranteed to be valid in MySQL, SQLite, and PostgreSQL.
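As an illustration, a qualifying single-file CSV seed could look like this (hypothetical column names; one schema column and one query column):

schema,query
"CREATE TABLE users (id INT, name TEXT);","SELECT name FROM users WHERE id = 1;"
"CREATE TABLE orders (id INT, total REAL);","SELECT COUNT(*) FROM orders;"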

Create runs

We recommend using Long Samples mode for all workloads. It provides advanced features like revisions, outlines, and validation that work for both short and long content. Short Samples mode is available but has fewer features.

Run configuration

Navigate to Runs → Create Run → Select spec and version.

Long Samples configuration

Number of samples to generate. Same as for short samples. Range: 1-20,000
Model used to generate document parts.
Recommended: Claude Sonnet 4.5
See the Short Samples section for the full model list.
Model used to create the document blueprint.
Recommended: Claude Sonnet 4.5 Thinking
Enable revisions
Turn on quality improvement cycles after generation.
Default: ON (recommended)
It’s recommended to always keep this ON when generating structured documents (e.g. JSON/CSV) in multi-file / multi-folder seed dataset modes. On the other hand, it’s recommended to turn it OFF for structured documents in single-file mode, due to higher cost and slower generation.
When enabled:
  • Revision model performs quality passes
  • Multiple revision types, including passes for coherence and for conformance to specified properties
When disabled:
  • Raw generated output without refinement
  • Faster and cheaper, but works worse for very dense, complex, or structured documents
Model used to perform quality improvements.
Recommended: Claude Sonnet 4.5 Thinking. Weak models are not recommended, as revision is a complex task.
Only used if “Enable revisions” is ON.
Number of quality improvement passes.
Range: 1-5
For highest quality, or whenever you see issues with generated data, increase this setting to the maximum (5). Note this increases cost and generation time.
For SQL query generation only.
Options:
  • syntax: Fast syntax check only
  • syntax+schema: Syntax check plus schema execution
  • syntax+schema+execute: Full execution test (recommended, selected by default)
With syntax+schema or syntax+schema+execute, all schemas are guaranteed to be valid in all three DB types: SQLite, PostgreSQL, and MySQL. With syntax+schema+execute, queries are guaranteed to be valid as well.
Whether and how much to shuffle seeds when composing the various generation prompts.
Default: No shuffling (recommended)
More shuffling might slightly increase diversity in some cases, but generation time and cost increase steeply.
Max Examples in Prompt
Limit the number of seeds shown to the model.
Default: As many seeds as possible are packed into the model context window.
You can override this with an integer to cap the examples supplied to every generation prompt. This can drastically reduce generation costs when the seed dataset has a substantial number of samples (e.g. 300 small samples, or 30 huge documents). In these cases, it’s recommended to set Max Examples in Prompt to a small number like 15.

Short Samples configuration

How many samples to generate.
Range: 1-20,000
Model used to generate samples.
Recommended: Claude Sonnet 4.5
Model used to evaluate quality in the evaluation-generation loop.
Recommended: Claude Sonnet 4.5
Only relevant if max_iterations > 0.
max_iterations
Number of generation-evaluation-revision cycles.
Range: 0-10
Settings:
  • 0: Generation-only mode (default, fastest, no evaluation)
  • 2-4: Some evaluations with balanced quality/cost
Break each generation into multiple LLM calls.
Default: OFF
Accumulate feedback from all evaluation iterations.
Default: OFF
How many seed examples to show the model.
Default: 5
Range: 1-50
Trade-off: More examples = better understanding but more tokens

Monitor runs and access generated data

Click a run to see detailed information:
Overview Tab:
  • Configuration parameters (models, settings used)
  • Spec and dataset references
  • Metrics: success rate, failure rate, duration, iterations
Generated Dataset Tab:
  • File explorer
  • LLM-as-a-judge labels for samples are visible in the dataset viewer and/or the inline file viewer, depending on dataset type.
  • Manually label samples with key-value annotation tags. These tags are saved and downloaded together with the data.
  • Download individual files or all files together.
Evaluation Tab (appears after completion):
  • Distribution Analysis: Expected vs observed distributions (bar charts) for different properties
  • Chat Interface: Ask questions about generated dataset