GET /api/dataframer/evaluations/{evaluation_id}
Python
import os
from dataframer import Dataframer

client = Dataframer(
    api_key=os.environ.get("DATAFRAMER_API_KEY"),  # This is the default and can be omitted
)
evaluation = client.dataframer.evaluations.retrieve(
    "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(evaluation.id)
{
  "id": "f4e64db3-3cac-4706-9fd0-c6695ae4694a",
  "run_id": "a98715da-921d-4326-bbf8-208f8bcc2956",
  "status": "SUCCEEDED",
  "conformance_score": 85.5,
  "conformance_explanation": "Generated samples largely conform to the spec with minor deviations in distribution.",
  "distribution_analysis": [
    {
      "property_name": "sentiment",
      "total_samples": 100,
      "requested_distributions": {
        "positive": 40,
        "negative": 30,
        "neutral": 30
      },
      "expected_distributions": {
        "positive": 42,
        "negative": 30,
        "neutral": 28
      },
      "evaluated_distributions": {
        "positive": 45,
        "negative": 28,
        "neutral": 27
      },
      "total_samples_analyzed": 100
    }
  ],
  "sample_classifications": [
    {
      "id": "ad7913d9-a0aa-4a80-83a7-70026e3c1f1d",
      "evaluation_id": "f4e64db3-3cac-4706-9fd0-c6695ae4694a",
      "sample_identifier": "sample_1",
      "classifications": {
        "sentiment": "positive",
        "topic": "technology"
      },
      "created_at": "2025-01-15T10:30:00Z"
    }
  ],
  "started_at": "2025-01-15T10:30:00Z",
  "completed_at": "2025-01-15T10:31:00Z",
  "error_message": null,
  "created_by_email": "[email protected]",
  "created_at": "2025-01-15T10:30:00Z",
  "duration_seconds": 60
}

Use this endpoint to poll for evaluation completion and retrieve results. When an evaluation completes successfully, the response includes:
  • conformance_score: Overall score (0-100) measuring how well samples match expected distributions
  • distribution_analysis: Per-property comparison of expected vs observed percentages
  • sample_classifications: How each generated sample was classified for each property
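The polling pattern described above can be sketched as a small helper. This is not part of the Dataframer SDK; it is a generic loop written against the status enum documented below (PENDING, PROCESSING, SUCCEEDED, FAILED), where the `retrieve` callable, interval, and timeout are all assumptions for illustration:

```python
import time

# Terminal statuses per the status enum in this reference.
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED"}

def poll_evaluation(retrieve, interval_seconds=5.0, timeout_seconds=300.0,
                    sleep=time.sleep):
    """Call `retrieve()` until the evaluation reaches a terminal status.

    `retrieve` is any zero-argument callable returning an object with a
    `status` attribute -- for example a lambda wrapping
    client.dataframer.evaluations.retrieve(evaluation_id).
    """
    deadline = time.monotonic() + timeout_seconds
    while True:
        evaluation = retrieve()
        if evaluation.status in TERMINAL_STATUSES:
            return evaluation
        if time.monotonic() >= deadline:
            raise TimeoutError("evaluation did not reach a terminal status in time")
        sleep(interval_seconds)
```

Once the returned evaluation has status SUCCEEDED, the conformance_score, distribution_analysis, and sample_classifications fields are populated.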

Authorizations

Authorization
string
header
required

API Key authentication. Format: "Bearer YOUR_API_KEY"
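When calling the endpoint without the SDK, the header can be built directly from the format stated above. The helper name and the fallback value are illustrative, not part of the API:

```python
import os

def auth_headers(api_key):
    # Per the Authorizations section: 'Bearer YOUR_API_KEY'.
    return {"Authorization": f"Bearer {api_key}"}

# The env var name matches the Python SDK snippet above.
headers = auth_headers(os.environ.get("DATAFRAMER_API_KEY", "YOUR_API_KEY"))
```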

Path Parameters

evaluation_id
string<uuid>
required

Unique identifier of the evaluation

Response

Evaluation details

Full evaluation details including distribution analysis and sample classifications

id
string<uuid>
read-only

Unique identifier for the evaluation

run_id
string<uuid>
read-only

ID of the run being evaluated

status
enum<string>

Current status of the evaluation

Available options:
PENDING,
PROCESSING,
SUCCEEDED,
FAILED
conformance_score
number | null

Overall conformance score (0-100) measuring how well generated samples match the spec's expected distributions. Null until evaluation completes.

conformance_explanation
string | null

Human-readable explanation of the conformance score and any notable deviations

distribution_analysis
object[] | null

Per-property comparison of expected vs observed distributions. Null until evaluation completes.
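A common use of distribution_analysis is to measure how far the evaluated counts drift from the requested ones per category. The helper below is a sketch, not an SDK function; it reads the requested_distributions and evaluated_distributions keys shown in the response example above:

```python
def distribution_deviations(entry):
    """Absolute per-category gap between requested and evaluated counts
    for one distribution_analysis entry."""
    requested = entry["requested_distributions"]
    evaluated = entry["evaluated_distributions"]
    return {
        category: abs(evaluated.get(category, 0) - requested.get(category, 0))
        for category in requested
    }

# Values taken from the sample response above.
sentiment = {
    "property_name": "sentiment",
    "requested_distributions": {"positive": 40, "negative": 30, "neutral": 30},
    "evaluated_distributions": {"positive": 45, "negative": 28, "neutral": 27},
}
# distribution_deviations(sentiment) -> {"positive": 5, "negative": 2, "neutral": 3}
```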

sample_classifications
object[]
read-only

Classification results for each generated sample. Empty until evaluation completes.

started_at
string<date-time> | null
read-only

When evaluation processing started

completed_at
string<date-time> | null
read-only

When evaluation completed

error_message
string | null
read-only

Error message if evaluation failed

created_by_email
string
read-only

Email of the user who created the evaluation

created_at
string<date-time>
read-only

When the evaluation was created

duration_seconds
number | null
read-only

Time taken to complete the evaluation in seconds

company_id
string<uuid>
read-only

ID of the company that owns this evaluation

status_display
string
read-only

Human-readable status display

conformant_areas
string | null
read-only

Description of areas where samples conform well to the spec

non_conformant_areas
string | null
read-only

Description of areas where samples deviate from the spec

trace
object
read-only

Internal trace information including task_id and evaluation model used

created_by
integer
read-only

ID of the user who created this evaluation

updated_at
string<date-time>
read-only

When the evaluation was last updated