A buyer's directory

Who offers AI to help fill in complex compliance forms? Three answers, three different products.

The published lists for this question put Vanta AI, Drata, Sprinto, Scytale, Centraleyes, IBM watsonx.governance, UiPath, and Power Automate in one bucket and call it a day. That flattens a category that is actually three categories. A SIG questionnaire, a faxed claim form, and a Fiserv onboarding screen are all called “compliance forms” in plain English, but they live on three different surfaces and demand three different products. This page is a directory, not a leaderboard. It splits the vendors by the form they actually reach and by the artifact each one leaves behind for the auditor to read.

Matthew Diakonov · 12 min

The category is three categories

Strip away the vendor marketing and there are three distinct surfaces to fill. The vendors split along these surfaces. Most of them only touch one. A small number reach two. None of them, including Mediar, covers all three with a single primitive, and a buyer who tries to force a single-vendor answer ends up with a beautifully filled questionnaire and an empty Epic registration screen, or vice versa.

Layer 1. The web questionnaire

SIG, CAIQ, NIST 800-171, vendor RFPs, modern HTML or Excel-attachment forms. The answer is mostly text drafted from a knowledge base. The artifact is the submitted questionnaire export plus an answer-source log.

Layer 2. The scanned document

PDF claim forms, faxed enrollment paperwork, broker submission packets. The job is to read the document, classify the fields, and pass structured data downstream. The artifact is an extraction record with confidence scores per field.

Layer 3. The system-of-record screen

The form inside Epic, Fiserv, Jack Henry, Guidewire, SAP GUI, Oracle EBS, an AS/400 emulator. No API, no DOM, validation across screens. The artifact is the workflow definition checked into source control plus a per-step trace.

The rest of this page walks each bucket. Inside each one I name the vendors a buyer is most likely to evaluate, what they actually fill, and what artifact each one produces when the auditor asks. The third bucket is where Mediar lives, so that section is denser and grounded in the product source.

Layer 1. AI for the web questionnaire

This is the loudest bucket. A prospect or regulator sends a structured template (SIG Lite, SIG Core, CAIQ v4, NIST 800-171 attestation, an ISO 27001 questionnaire, a custom RFP). A model with access to the customer's trust evidence drafts text answers, a security analyst reviews them, and the platform writes them back into the questionnaire portal or exports them to Excel. The audit artifact is the submitted questionnaire plus, on the better products, a per-answer source citation pointing at the policy or evidence row the model drew from.

Layer 1: questionnaire and trust portal

Vendors in this bucket

Vanta AI

Drafts SIG / CAIQ / vendor questionnaire answers from your trust center evidence.

Drata

Trust questionnaire AI plus controls evidence collection across SOC 2, HIPAA, ISO 27001.

Sprinto

Autonomous trust platform with AI agents that map evidence to control families and respond to questionnaires.

Scytale

AI compliance copilot covering questionnaire response and policy generation across frameworks.

Centraleyes

GRC platform with AI-assisted assessment intake and audit-ready report generation.

Inventive AI

Knowledge-base RFP and security-questionnaire answer drafting; review-then-submit flow.

Responsive (RFPIO)

Long-running RFP and questionnaire response platform; AI Assistant drafts from your answer library.

Spellbook

AI copilot for legal and compliance lawyers reviewing or drafting contractual compliance language.

The good ones (Vanta AI, Drata, Sprinto, Scytale, Centraleyes, Inventive, Responsive) are mature products. They genuinely shrink a two-week SIG turnaround into something a small team can do in days. What they do not do is type into the screen inside Epic where the patient record actually lives. That is not a criticism; it is a surface limitation: their tool stack is HTML and JSON, and the regulated screen they would otherwise have to fill is neither.

Audit artifact in this layer: a questionnaire export (XLSX, PDF, or portal submission) plus an answer-source log for each answer the model drafted. Compliance signs off on the answer text and the citation. The runtime that wrote the answer back is rarely audited because the surface (a web form) is well-understood.

Layer 2. AI for the scanned document

A claim arrives by fax. A PDF first-notice-of-loss lands in an inbox. A KYC packet shows up as a multi-page TIFF. Layer 2 vendors classify the document, find the fields, extract values with confidence scores, and pass the structured result downstream. They are not primarily form-fillers; they are field extractors that feed forms. When a buyer asks “does AI fill this claim form?” the honest answer is that AI extracts the fields, and a separate downstream system writes them into the actual record.

Layer 2: document understanding and extraction

Vendors in this bucket

Hyperscience

Document classification and field extraction for forms-heavy industries (insurance, government, finance).

Rossum

AI document gateway for invoices, claim forms, and structured paperwork; routes extracted fields downstream.

Klarity

Document AI focused on revenue and order operations; reads contracts and order forms.

Docupanda

Schema-driven extraction from any document into structured JSON.

Instabase

Document understanding platform widely used in banking onboarding and KYC.

ABBYY

Long-standing document capture and intelligent processing toolkit, common in regulated back offices.

The mature ones (Hyperscience, Rossum, ABBYY, Instabase) carry years of regulated-industry deployments. The newer entrants (Klarity, Docupanda) are easier to set up but less battle-tested in production back offices. The pairing question that matters: once the fields are extracted, what writes them into the system of record? That is where this layer hands off to the next.

Audit artifact in this layer: a structured extraction record with a confidence score per field, a copy of the original document, and the model version that produced the extraction. Reviewers care about field-level confidence and the human review queue, not about the downstream write.
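As a concrete sketch of what that artifact can look like, here is an illustrative extraction record in TypeScript. The field names, the `documentId`/`modelVersion` shape, and the 0.9 review threshold are assumptions for illustration, not any specific vendor's schema.

```typescript
// Illustrative only: shape and threshold are assumptions, not a vendor schema.
interface ExtractedField {
  name: string;
  value: string;
  confidence: number; // 0..1, scored per field by the extraction model
}

interface ExtractionRecord {
  documentId: string;   // points at the archived copy of the original document
  modelVersion: string; // which model produced the extraction
  fields: ExtractedField[];
}

// Route any field below the confidence threshold to the human review queue.
function fieldsNeedingReview(
  rec: ExtractionRecord,
  threshold = 0.9,
): ExtractedField[] {
  return rec.fields.filter((f) => f.confidence < threshold);
}
```

The point of the sketch is the review queue: the reviewer samples low-confidence fields, not the downstream write, which is exactly the split described above.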

Layer 3. AI for the system-of-record screen

This is the bucket the published lists usually skip. A patient access coordinator types a registration into Epic. A bank associate opens a new commercial account through 17 screens of Fiserv DNA. A claims adjuster files a first notice of loss across three tabs in Guidewire. A finance clerk posts a journal entry through F-02 in SAP GUI. None of these screens is a web form. None has a documented public API that covers the full screen flow. None is reachable by a tool whose surface is HTML and JSON. Filling these forms is a desktop-automation problem with a regulated audit overlay on top.

Layer 3: system-of-record and legacy desktop

Vendors in this bucket

Mediar

AI agent watches a workflow once, then a deterministic Rust runtime replays it through Windows accessibility APIs.

UiPath

Long-incumbent enterprise RPA platform with newer Autopilot / Agentic Automation surface.

Automation Anywhere

RPA suite with Document Automation and an Automator AI layer for less-structured tasks.

Microsoft Power Automate

Cloud and desktop flows; AI Builder for forms and document processing inside the Microsoft 365 stack.

Blue Prism (SS&C)

Enterprise RPA used in regulated financial services; on-prem digital workforce.

Sola

Newer browser and desktop automation focused on AI-recorded workflows.

Inside layer 3 the meaningful split is between agentic runtimes (a model decides the next click each run, which is fast to deploy but harder for a compliance reviewer to sign) and recording-and-replay runtimes (a model authors a workflow once during recording, then a deterministic process executes it). Mediar sits firmly in the recording-and-replay camp. UiPath and Automation Anywhere have both shipped agentic surfaces in the last two years; their classic studios remain selector-driven RPA. Power Automate covers all of this from inside the Microsoft stack with the trade-offs that implies. Sola is a newer entrant with an AI-first recording flow.

Audit artifact in this layer: the workflow definition (a TypeScript file in Mediar's case, a XAML file in UiPath's, a JSON flow definition in Power Automate's) plus a per-run trace. Compliance signs off on the workflow definition the same way they would sign off on a stored procedure, then samples the trace to confirm the runtime did what the definition said.
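That sign-off-then-sample pattern can be sketched in a few lines of TypeScript. The `WorkflowStep` and `TraceEntry` shapes here are hypothetical stand-ins, not Mediar's, UiPath's, or Power Automate's actual formats; the sketch only shows the check a compliance reviewer runs when sampling a trace against the approved definition.

```typescript
// Hypothetical shapes, for illustration of the sampling check only.
interface WorkflowStep {
  title: string; // the step as approved in the signed-off definition
}

interface TraceEntry {
  stepTitle: string;      // what the runtime says it executed
  elementMatched: string; // which UI element the locator resolved to
  typed?: string;         // what was typed, if the step typed anything
}

// A sampled run passes if the trace walks the definition step for step.
function traceMatchesDefinition(
  def: WorkflowStep[],
  trace: TraceEntry[],
): boolean {
  if (def.length !== trace.length) return false;
  return def.every((s, i) => s.title === trace[i].stepTitle);
}
```

The check is trivial precisely because the runtime is deterministic: with an agentic runtime there is no fixed definition to diff the trace against, which is the sign-off problem described later on this page.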

The artifact most buyers forget to ask for

When a regulator asks “show me how this record was filled” six months after the fact, every vendor in this directory has to produce something. Layer 1 produces a question-answer log. Layer 2 produces an extraction record. Layer 3 produces a workflow file plus a trace. The shape of that workflow file is the part most buyers do not interrogate, and it is where the layer-3 vendors actually differ.

Inside Mediar, every recorded step lands as a structured record defined by the StepAnalysis struct in apps/desktop/src-tauri/src/recording_processor.rs (lines 31 to 49). The struct names eight fields per step: step_title, step_summary, events_that_happened, how_content_changed, results_if_any, what_was_clicked, what_was_typed, and user_intent. Those eight fields are serialized into a TypeScript file the runtime executes. The file is the audit artifact.
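As an illustration of what a serialized step looks like to a reviewer, here is a hypothetical TypeScript shape using the eight field names from the struct. The names come from the source file cited above; the exact types and file layout Mediar actually emits may differ (for instance, whether events are a list or a string is an assumption here).

```typescript
// Field names from StepAnalysis in recording_processor.rs; exact types
// in the emitted TypeScript file are an assumption for illustration.
interface RecordedStep {
  step_title: string;
  step_summary: string;
  events_that_happened: string[];
  how_content_changed: string;
  results_if_any: string;
  what_was_clicked: string;
  what_was_typed: string;
  user_intent: string;
}

// What a reviewer would read on a pull request, one record per step:
const step: RecordedStep = {
  step_title: "Enter patient MRN",
  step_summary: "Typed the MRN into the registration search field",
  events_that_happened: ["focus moved to MRN field", "text committed"],
  how_content_changed: "MRN field went from empty to populated",
  results_if_any: "Patient record lookup triggered",
  what_was_clicked: "MRN search field",
  what_was_typed: "12345678",
  user_intent: "Locate the existing patient before starting registration",
};
```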


Eight named fields per step is the difference between an audit artifact a JavaScript-literate reviewer can read on a normal pull request, and a XAML blob or a Microsoft-proprietary flow definition that requires a vendor-specific tool to inspect.

StepAnalysis struct, recording_processor.rs lines 31 to 49

UiPath records the same intent into XAML, an XML dialect designed for Workflow Foundation. Power Automate records it into a Microsoft-proprietary flow definition. Both are inspectable, but inspection requires the vendor's studio. A TypeScript file with named semantic fields is the same shape a software team already reviews on every pull request, which is the integration point that actually matters for a compliance program staffed by software engineers and risk officers in the same room.

Where the typing actually happens

One layer below the workflow file, the runtime emits a single MCP tool call to fill a field: type_into_element, defined in apps/desktop/src-tauri/src/mcp_converter.rs. The tool takes a structured locator and a string. The locator resolver tries the recorded automation id first, then the window handle plus bounds, then the visible text content, then the parent window as a last fallback. Three of those four strategies are position-independent, so a routine UI tweak (a button shifts a row, a panel reorders, a form gets a new tab) usually resolves through one of the first three. None of this involves a model call at runtime. The same primitive types into Epic, Cerner, Fiserv, Jack Henry, Guidewire, Oracle EBS, SAP GUI, and any AS/400 emulator that exposes a Windows accessibility tree.
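The fallback order can be sketched as follows. This is a TypeScript illustration of the strategy chain, not the Rust implementation in mcp_converter.rs; the `Locator` and `Finder` shapes are hypothetical stand-ins.

```typescript
// Illustrative sketch of the four-strategy fallback chain; not Mediar's code.
interface Locator {
  automationId?: string;
  windowHandle?: number;
  bounds?: { x: number; y: number; w: number; h: number };
  textContent?: string;
  parentWindow?: string;
}

type Element = { name: string };
type Finder = (loc: Locator) => Element | null;

function makeResolver(
  byAutomationId: Finder,     // position-independent
  byHandleAndBounds: Finder,  // the one positional strategy
  byTextContent: Finder,      // position-independent
  byParentWindow: Finder,     // position-independent last resort
): (loc: Locator) => Element {
  return (loc) => {
    // Try each strategy in order; first non-null match wins.
    const el =
      byAutomationId(loc) ??
      byHandleAndBounds(loc) ??
      byTextContent(loc) ??
      byParentWindow(loc);
    if (!el) throw new Error("locator did not resolve on this screen");
    return el;
  };
}
```

Because only the second strategy depends on screen position, a button that shifts a row still resolves through the automation id or the visible text, which is the resilience claim in the paragraph above.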

The buyer's mental model usually flips once

A common arc on first calls: a buyer arrives convinced they need “an AI for our compliance forms,” meaning the SIG queue. Twenty minutes in they realize that piece is solved by Vanta AI or Drata, and the painful one was always the Epic registration flow or the Fiserv onboarding screen. The mental model swaps from “one AI for forms” to “a stack of three layers, with the hardest one being the screen inside the legacy app.”

Before and after the directory

The "before" mindset sounds like this: we need an AI to fill our compliance forms, so there must be one tool that does it; most lists put Vanta, UiPath, and Hyperscience next to each other, so the category must be a single market; whichever vendor scores highest on the comparison page is the one to pilot.

  • Treats one phrase as one product market
  • Picks based on a flat comparison list
  • Runs out the pilot before discovering the surface mismatch

A short answer when someone asks at the dinner table

If you are filling a vendor questionnaire, look at Vanta AI, Drata, Sprinto, Scytale, Centraleyes, Inventive, Responsive, or Spellbook. They are all in the same market and a head-to-head comparison is fair.

If you are extracting fields from a scanned document, look at Hyperscience, Rossum, Klarity, Docupanda, Instabase, or ABBYY. The right pick depends on volume, document complexity, and which downstream system you are feeding.

If you are filling the form inside the legacy desktop app where the regulated record actually lives, look at Mediar, UiPath, Automation Anywhere, Power Automate, Blue Prism, or Sola. Inside this layer the decision is mostly about runtime determinism and what the audit artifact looks like when it is checked into source control. That is the part most flat lists never tell you.

Bring a real form, see the runtime fill it

If your bottleneck is a layer 3 form (Epic, Fiserv, Jack Henry, Guidewire, SAP GUI, or any Win32 app), book a call. We will record the workflow live, show the TypeScript audit artifact the recording produces, and replay it against your test environment in the same call.

Frequently asked questions

Why does a question this simple need three answers?

Because the word 'form' is doing too much work. A 380-question SIG that a procurement team sends as a web link, a 14-page faxed claim packet that arrives as a PDF, and a 17-screen new-account onboarding flow inside Fiserv DNA are all 'compliance forms' in the way buyers talk about them, but they live on three different surfaces. A single product cannot reach all three with the same primitive. The vendors split along that line, even when their marketing pages do not.

How do I tell which bucket a vendor belongs to without a demo?

Read the integration list at the bottom of their page. Layer 1 vendors integrate with HubSpot, Loopio, the customer's trust portal, and Slack. Layer 2 vendors integrate with email inboxes, S3 buckets, and downstream systems via API. Layer 3 vendors describe screen recording, accessibility tree access, or 'works with apps that have no API.' If a vendor names SAP GUI, AS/400, Epic, or Jack Henry by application name, they are in layer 3. If they name Vanta, Drata, or Loopio, they are in layer 1. If they name Snowflake or Workday, they are usually layer 2.
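That heuristic is mechanical enough to write down. The sketch below encodes it directly; the keyword lists are lifted from the examples in the answer above and are nowhere near exhaustive, so treat it as a first-pass triage, not a classifier.

```typescript
// First-pass triage from a vendor's integration list; keywords are the
// examples named above, not an exhaustive taxonomy.
const LAYER3_NAMES = ["SAP GUI", "AS/400", "Epic", "Jack Henry"];
const LAYER1_NAMES = ["Vanta", "Drata", "Loopio", "HubSpot"];
const LAYER2_NAMES = ["Snowflake", "Workday"];

function guessLayer(integrations: string[]): 1 | 2 | 3 | null {
  // Layer-3 signals are the strongest tell, so check them first.
  if (integrations.some((n) => LAYER3_NAMES.includes(n))) return 3;
  if (integrations.some((n) => LAYER1_NAMES.includes(n))) return 1;
  if (integrations.some((n) => LAYER2_NAMES.includes(n))) return 2;
  return null; // no signal; go read the demo
}
```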

What does Mediar do that the layer 3 incumbents do not?

Two things. The runtime is deterministic Rust calling Windows UI Automation, with zero model calls in the hot path. UiPath and Automation Anywhere both have agentic AI layers now, but the agent decides clicks at runtime, which means two identical inputs can produce two different action sequences and a compliance officer cannot sign the workflow off the way they sign off a SQL stored procedure. The other difference is the recording artifact: Mediar emits a TypeScript file with an eight-field semantic record per step, derived from the StepAnalysis struct in apps/desktop/src-tauri/src/recording_processor.rs at lines 31 to 49. UiPath emits a XAML file plus a Studio project; Power Automate emits a Microsoft-proprietary flow definition. Each shape is auditable in its own way; the Mediar shape is the one a JavaScript-literate reviewer can read on a normal pull request.

Can a layer 1 vendor like Vanta fill a form in Epic?

No, and the reason is mechanical, not strategic. Vanta AI's surface is HTML and JSON: it writes into the customer's trust portal and into structured questionnaire fields. An Epic registration screen is a Win32 control reachable through the Windows UI Automation accessibility tree. There is no way for a tool that knows HTML to type a value into a UIA edit control without a separate runtime that runs on Windows and walks the accessibility tree. That runtime is what layer 3 sells.

What audit artifact does each layer produce?

Layer 1 produces an export of question and answer pairs, ideally with a citation to the knowledge-base source for each answer. Layer 2 produces an extraction record with field-level confidence scores and a copy of the original document. Layer 3 produces a workflow definition (a TypeScript file in Mediar's case, a XAML file in UiPath's, a JSON flow definition in Power Automate's) plus a per-run trace that records which element was matched, what was typed, and what the application replied. A complete compliance program collects all three, because real audits ask all three questions.

Are any of these vendors interchangeable?

Inside a layer, mostly yes; across layers, no. Vanta AI, Drata, and Sprinto compete with each other for the questionnaire-and-evidence job, and a buyer evaluating those three is making a fair comparison. Mediar, UiPath, and Power Automate compete for the system-of-record job, and a buyer evaluating those three is also making a fair comparison. A buyer evaluating Vanta AI against Mediar is comparing two different products that share a noun, and one of them will be the wrong tool for whichever form the buyer actually has in mind.

Where does the open-source Terminator SDK fit in?

Mediar's execution layer (the part that resolves a locator and types into a Win32 control) is open source under MIT at github.com/mediar-ai/terminator. A team that wants to build a custom layer-3 form-fill bot without paying for the cloud product can call those primitives directly: type_into_element, click_element, set_value, and a few others. The orchestration layer, the recording pipeline, and the no-code builder are commercial. The open-source piece is enough to fill a regulated form on a Windows machine; the commercial piece is what turns it into a queue with retries, scheduling, validation rules, and SOC 2-grade audit logs.

What about the AI agents marketed as 'autonomous compliance assistants'?

The honest read is that they are valuable for layer 1 work and increasingly capable inside browser-based SaaS. They struggle on layer 3 for the same reason all browser agents struggle on layer 3: an Epic session, an SAP GUI session, or an AS/400 emulator session is not a browser, and the agent's tool layer assumes one. A few vendors are gluing computer-use models onto desktop screen capture, but the determinism and recoverability that compliance teams need are not yet there in general-purpose computer-use. A recording-and-replay primitive (capture once, run deterministically) is the bridge between a model that can plan and a runtime an auditor can sign.

Do any vendors span more than one layer cleanly?

Microsoft is the closest: Power Automate covers cloud flows, desktop flows, and AI Builder for documents, all under one tenant. The trade-off is the Microsoft stack lock-in and the fact that desktop flows on Windows are still selector-and-coordinate-based for many regulated apps, which puts them closer to legacy RPA than to a recording-first runtime. UiPath also has document understanding alongside its desktop runner, but the same selector-fragility argument applies. A multi-layer story is not the same as a layer-3 story; if your bottleneck is the screen inside Epic, single-vendor consolidation can come at the cost of the runtime quality on that specific screen.

Is there a short version of the vendor decision?

Yes. If your bottleneck is questionnaire turnaround or trust portal answer drafting, pick a layer 1 product. If your bottleneck is reading a structured document and forwarding fields downstream, pick a layer 2 product. If your bottleneck is filling the form inside the legacy Windows app where the regulated record lives, pick a layer 3 product, and inside layer 3 weigh whether the runtime is deterministic or agentic, because that determines whether your compliance reviewer can sign the workflow itself or only the post-hoc trace.