Accuracy

Up to 99.999% detection accuracy across supported entity types, combining AI model recognition with pattern-based rules.

Pricing

Starting at $0.10 per million tokens processed. Only detected and transformed tokens count toward usage.
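At that rate, per-job cost is simple to estimate. A minimal sketch (the helper name and any billing details beyond the stated rate are illustrative assumptions, not a Dataframer API):

```python
def estimate_cost(billable_tokens: int, rate_per_million: float = 0.10) -> float:
    """Estimate job cost: only detected and transformed tokens are billable."""
    return billable_tokens / 1_000_000 * rate_per_million

# e.g. a job that detects and transforms 2.5M tokens
print(estimate_cost(2_500_000))  # 0.25
```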
Dataframer’s detection, anonymization, and augmentation feature provides three modes of operation for handling sensitive data across your datasets:
  • Detection — identify sensitive entities and surface them for review, without modifying the data
  • Anonymization (redaction) — replace detected entities with mask tokens to remove sensitive information
  • Augmentation — transform detected entities into synthetic but realistic replacements, preserving data utility while eliminating real sensitive values
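Conceptually, the three modes are different operations over the same set of detected spans. The sketch below is purely illustrative (a single regex stands in for the real detection models, and none of these names are Dataframer's API):

```python
import re

# Illustrative detector: one email pattern standing in for the real entity models
EMAIL = re.compile(r"[\w.]+@[\w.]+\.\w+")

def detect(text):
    """Detection: surface entities for review without modifying the data."""
    return [(m.start(), m.end(), "email") for m in EMAIL.finditer(text)]

def anonymize(text):
    """Anonymization: replace detected entities with a mask token."""
    return EMAIL.sub("<EMAIL>", text)

def augment(text):
    """Augmentation: substitute a synthetic but realistic replacement."""
    return EMAIL.sub("jane.doe@example.com", text)

record = "Contact alice@corp.io for access."
print(detect(record))     # [(8, 21, 'email')]
print(anonymize(record))  # Contact <EMAIL> for access.
print(augment(record))    # Contact jane.doe@example.com for access.
```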
Detection covers eight categories of sensitive information:
  • PII — Personally Identifiable Information (names, dates, contact details, demographics)
  • PHI — Protected Health Information (diagnoses, medications, medical record numbers)
  • PCI — Payment Card Industry data (credit cards, bank accounts, routing numbers)
  • Financial Data — Tax IDs, IBANs, and other financial identifiers
  • Credentials / Secrets — Passwords, API keys, tokens
  • Government ID Data — Passports, driver licenses, national IDs, voter IDs
  • Device / Digital Identifiers — IP addresses, MAC addresses, device IDs, URLs
  • Employment / Professional Data — Employee IDs, salaries, job titles, company names
Anonymization jobs overview

Creating a job

Step 1: Select dataset

Choose a seed dataset from your library as the input for the job. Give the job a descriptive name so you can identify it later.

[Screenshot: Step 1 – Select a dataset for anonymization]

Step 2: Detection configuration

Configure how sensitive entities are detected and which model evaluates the results.

[Screenshot: Step 2 – Detection configuration: choose detection method, confidence threshold, and evaluation judge]

Detection methods

  • AIMon-PII-M1 — Uses the AIMon PII detection model exclusively, relying on learned entity recognition without rule-based augmentation.
  • LLM + patterns — Combines an LLM for contextual detection with fast pattern-based rules. Useful when you want LLM judgment alongside deterministic patterns.
  • LLM — Delegates all detection to an LLM. The most flexible option for unusual or domain-specific entity types.
  • AIMon-PII-Simple — Pattern-based detection only. The fastest option, with deterministic behavior but lower recall on context-dependent entities.
  • All (union) — Combines AIMon-PII-M1, LLM, and AIMon-PII-Simple in a union. Best for maximum coverage when false negatives are unacceptable.
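The union strategy amounts to running every detector and merging their span sets, which maximizes recall at the cost of more candidates to review. A minimal sketch (the detector stand-ins and their interface are assumptions for illustration):

```python
def union_detect(text, detectors):
    """Run every detector and union their (start, end, label) spans."""
    spans = set()
    for d in detectors:
        spans.update(d(text))
    return sorted(spans)

# Hypothetical stand-ins for AIMon-PII-M1, the LLM, and AIMon-PII-Simple
model_detector   = lambda t: {(0, 5, "person_name")}
llm_detector     = lambda t: {(0, 5, "person_name"), (10, 22, "email")}
pattern_detector = lambda t: {(10, 22, "email")}

print(union_detect("…", [model_detector, llm_detector, pattern_detector]))
# [(0, 5, 'person_name'), (10, 22, 'email')]
```

Overlapping detections from different methods collapse into one span here; a production merge would also need a policy for partially overlapping spans.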

Confidence threshold

The confidence threshold controls the trade-off between recall and precision. Lower values (e.g., 0.1) produce more detections with more potential false positives. Higher values (e.g., 0.9) produce fewer detections but with higher certainty. The default of 0.30 works well for most datasets.
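In effect, the threshold is a filter over candidate detections by model confidence. A minimal sketch (the detection record's field names are illustrative):

```python
def apply_threshold(detections, threshold=0.30):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

candidates = [
    {"entity": "person_name", "confidence": 0.92},
    {"entity": "city",        "confidence": 0.25},
]
print(apply_threshold(candidates, 0.30))       # keeps only person_name
print(len(apply_threshold(candidates, 0.10)))  # 2 — lower threshold, more detections
```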

Evaluation judge model

After the transform completes, an LLM evaluates the quality of the anonymization or augmentation. Select the model you want to use for this post-processing evaluation step.

Step 3: Entity types & masks

Select which entity types to detect and configure how each one is handled in the output—either replaced with a mask token (anonymization) or substituted with a synthetic value (augmentation).

[Screenshot: Step 3 – Select sensitive entity types and configure mask tokens]

The full set of supported entity types is organized by category:
  • PII — Person Name, Date of Birth, Date, Age, Gender, Nationality, Ethnicity, Marital Status, Email, Phone Number, Address, ZIP Code, City, State, Country
  • PHI — Medical Record Number, Diagnosis, Medication, Health Plan Number, Patient ID, Lab Result
  • PCI — Credit Card, Bank Account, Routing Number
  • Financial Data — Social Security Number, Tax ID, IBAN
  • Credentials / Secrets — Password, Username, API Key, Token
  • Government ID Data — Passport Number, Driver License, National ID, Voter ID
  • Device / Digital Identifiers — IP Address, URL, MAC Address, Device ID
  • Employment / Professional Data — Company Name, Job Title, Employee ID, Salary
For anonymization, each selected type maps to a mask token in the output—for example, person_name → <NAME> or date_of_birth → <DOB>. You can customize the mask token for each type. For augmentation, detected values are replaced with synthetic equivalents that preserve the format and context of the original.
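Applied to a record, the mask mapping works as a span substitution using the entity-to-token table. The mask names follow the examples above; the replacement logic itself is an illustrative sketch, not Dataframer's implementation:

```python
MASKS = {"person_name": "<NAME>", "date_of_birth": "<DOB>"}

def mask_spans(text, detections, masks=MASKS):
    """Replace each detected (start, end, type) span with its mask token.
    Spans are applied right-to-left so earlier offsets stay valid."""
    for start, end, etype in sorted(detections, reverse=True):
        text = text[:start] + masks.get(etype, f"<{etype.upper()}>") + text[end:]
    return text

record = "Jane Roe, born 1990-04-12"
spans = [(0, 8, "person_name"), (15, 25, "date_of_birth")]
print(mask_spans(record, spans))  # <NAME>, born <DOB>
```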

Step 4: Review & submit

Review your full configuration before submitting. The summary shows the job name, dataset, detection method, confidence threshold, evaluation judge, and all selected entity types with their configured transforms (mask tokens for anonymization, or synthetic replacement rules for augmentation).

[Screenshot: Step 4 – Review and submit the anonymization job]

After submission, the job processes in the background. You can monitor progress on the job detail page.