GET /api/dataframer/evaluations/{evaluation_id}
Python
import os
from dataframer import Dataframer

client = Dataframer(
    api_key=os.environ.get("DATAFRAMER_API_KEY"),  # This is the default and can be omitted
)
evaluation = client.dataframer.evaluations.retrieve(
    "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(evaluation.id)
{
  "id": "f4e64db3-3cac-4706-9fd0-c6695ae4694a",
  "run_id": "a98715da-921d-4326-bbf8-208f8bcc2956",
  "status": "SUCCEEDED",
  "conformance_score": 85.5,
  "conformance_explanation": "Generated samples largely conform to the spec with minor deviations in distribution.",
  "distribution_analysis": [
    {
      "property_name": "sentiment",
      "total_samples": 100,
      "expected_distributions": {
        "positive": 40,
        "negative": 30,
        "neutral": 30
      },
      "observed_distributions": {
        "positive": 45,
        "negative": 28,
        "neutral": 27
      },
      "total_samples_analyzed": 100
    }
  ],
  "sample_classifications": [
    {
      "id": "ad7913d9-a0aa-4a80-83a7-70026e3c1f1d",
      "evaluation_id": "f4e64db3-3cac-4706-9fd0-c6695ae4694a",
      "sample_identifier": "sample_1",
      "classifications": {
        "sentiment": "positive",
        "topic": "technology"
      },
      "sub_file_classifications": null,
      "created_at": "2025-01-15T10:30:00Z"
    }
  ],
  "started_at": "2025-01-15T10:30:00Z",
  "completed_at": "2025-01-15T10:31:00Z",
  "error_message": null,
  "created_by_email": "[email protected]",
  "created_at": "2025-01-15T10:30:00Z",
  "duration_seconds": 60
}

Authorizations

Authorization
string
header
required

API Key authentication. Format: "Bearer YOUR_API_KEY"
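The header value can be built directly from the environment variable the SDK example reads. A minimal sketch; falling back to the literal placeholder when DATAFRAMER_API_KEY is unset is illustrative only:

```python
import os

# Construct the Authorization header the API expects: "Bearer YOUR_API_KEY".
# The placeholder fallback is for illustration; real requests need a valid key.
api_key = os.environ.get("DATAFRAMER_API_KEY", "YOUR_API_KEY")
headers = {"Authorization": f"Bearer {api_key}"}
print(headers["Authorization"])
```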

Path Parameters

evaluation_id
string<uuid>
required

Unique identifier of the evaluation

Response

Evaluation details

Full evaluation details including distribution analysis and sample classifications

id
string<uuid>

Unique identifier for the evaluation

run_id
string<uuid>

ID of the run being evaluated

status
enum<string>

Current status of the evaluation

Available options:
PENDING,
PROCESSING,
SUCCEEDED,
FAILED
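Because result fields such as conformance_score stay null until the evaluation reaches a terminal status, callers typically poll this endpoint. A minimal sketch, assuming a configured client as in the Python example above; the helper name, poll interval, and timeout handling are illustrative, not part of the SDK:

```python
import time

# SUCCEEDED and FAILED are the terminal statuses listed above.
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED"}


def wait_for_evaluation(client, evaluation_id, poll_interval=5.0, timeout=300.0):
    """Poll the retrieve endpoint until the evaluation finishes or `timeout` elapses.

    `client` is assumed to be a configured Dataframer client; this helper
    is an illustrative sketch, not part of the SDK.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        evaluation = client.dataframer.evaluations.retrieve(evaluation_id)
        if evaluation.status in TERMINAL_STATUSES:
            return evaluation
        time.sleep(poll_interval)
    raise TimeoutError(f"evaluation {evaluation_id} not finished after {timeout}s")
```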
conformance_score
number | null

Overall conformance score (0-100) measuring how well generated samples match the spec's expected distributions. Null until evaluation completes.

conformance_explanation
string | null

Human-readable explanation of the conformance score and any notable deviations

distribution_analysis
object[] | null

Per-property comparison of expected vs observed distributions. Null until evaluation completes.
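Each entry can be compared locally once the evaluation completes. A minimal sketch using the shape of the example response above; the per-label difference metric is illustrative, not how the server computes conformance_score:

```python
# One distribution_analysis entry, mirroring the example response above.
analysis = {
    "property_name": "sentiment",
    "expected_distributions": {"positive": 40, "negative": 30, "neutral": 30},
    "observed_distributions": {"positive": 45, "negative": 28, "neutral": 27},
}

# Per-label difference (observed - expected); positive means over-represented.
deviations = {
    label: analysis["observed_distributions"][label] - expected
    for label, expected in analysis["expected_distributions"].items()
}
print(deviations)  # → {'positive': 5, 'negative': -2, 'neutral': -3}
```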

sample_classifications
object[]

Classification results for each generated sample. Empty until evaluation completes.
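The per-sample classifications can be tallied to reproduce an observed distribution. A minimal sketch using the field shape from the example response; the second sample entry is invented for illustration:

```python
from collections import Counter

# Entries shaped like the sample_classifications array in the example
# response above; sample_2 is invented for illustration.
samples = [
    {"sample_identifier": "sample_1", "classifications": {"sentiment": "positive"}},
    {"sample_identifier": "sample_2", "classifications": {"sentiment": "negative"}},
]

# Count how often each sentiment label was assigned.
observed = Counter(s["classifications"]["sentiment"] for s in samples)
print(dict(observed))  # → {'positive': 1, 'negative': 1}
```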

started_at
string<date-time> | null

When evaluation processing started

completed_at
string<date-time> | null

When evaluation completed

error_message
string | null

Error message if evaluation failed

created_by_email
string

Email of the user who created the evaluation

created_at
string<date-time>

When the evaluation was created

duration_seconds
number | null

Time taken to complete the evaluation in seconds
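In the example response, duration_seconds (60) matches the gap between started_at and completed_at; whether the server derives it exactly this way is an assumption. A minimal sketch using the example timestamps:

```python
from datetime import datetime

# The example timestamps, with the trailing "Z" written as an explicit
# UTC offset for compatibility with older Python versions.
started = datetime.fromisoformat("2025-01-15T10:30:00+00:00")
completed = datetime.fromisoformat("2025-01-15T10:31:00+00:00")

duration = (completed - started).total_seconds()
print(duration)  # → 60.0
```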

company_id
string<uuid>

ID of the company that owns this evaluation

status_display
string

Human-readable status display

conformant_areas
string | null

Description of areas where samples conform well to the spec

non_conformant_areas
string | null

Description of areas where samples deviate from the spec

trace
object

Internal trace information including task_id and evaluation model used

created_by
integer

ID of the user who created this evaluation

updated_at
string<date-time>

When the evaluation was last updated