Core Revenue Recovery Beta

VMS Reconciliation Agent

Four match strategies run in order — Learned, Exact, Fuzzy, AI — and every result is routed by confidence tier. ≥95% auto-approves. 70–94% goes to human review. <70% is flagged. Side-by-side VMS↔ATS compare for every match.

VMSMatchReview — expanded row with side-by-side VMS↔ATS compare and AI reasoning pill.

The 2–5% revenue leak happens in the VMS↔ATS gap.

Fieldglass, Beeline, VectorVMS, and Magnit all hold the same placements as your ATS — just under different IDs, with different field names, and occasionally different rates or hours. Reconciling by hand is slow, error-prone, and bottlenecked on institutional knowledge (one senior analyst typically owns it). Whatever slips through becomes an underbilled or overbilled invoice, or an aging dispute the client won't pay.

Four match strategies, run in order, per record.

For every VMS record the agent receives, it runs four matching strategies in sequence. The highest-confidence result wins and is attached to the record as its “match candidate.” Every candidate is persisted with its confidence score, the matching strategy that produced it, and the fields that agreed vs. disagreed.
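A minimal sketch of that waterfall, assuming hypothetical names (MatchCandidate, ExactStrategy, and the dict shapes are illustrative, not the shipped implementation). The four strategies themselves are described in the cards below:

```python
from dataclasses import dataclass

@dataclass
class MatchCandidate:
    # Every candidate is persisted with its score, winning strategy,
    # and the fields that agreed vs. disagreed.
    ats_placement_id: str
    confidence: float            # 0.0–1.0
    strategy: str                # "learned" | "exact" | "fuzzy" | "ai"
    agreed_fields: list[str]
    disagreed_fields: list[str]

class ExactStrategy:
    """Stub standing in for one of the four strategies."""
    def match(self, vms_record: dict) -> MatchCandidate | None:
        if vms_record.get("external_id") == "ATS-1042":   # pretend key hit
            return MatchCandidate("ATS-1042", 0.97, "exact",
                                  ["external_id", "name"], [])
        return None

def best_candidate(vms_record: dict, strategies: list) -> MatchCandidate | None:
    """Run the strategies in order; the highest-confidence result wins."""
    candidates = [c for s in strategies if (c := s.match(vms_record)) is not None]
    return max(candidates, key=lambda c: c.confidence) if candidates else None

# Usage: the real agent would pass [learned, exact, fuzzy, ai] in that order.
winner = best_candidate({"external_id": "ATS-1042"}, [ExactStrategy()])
```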

1 · Learned

Previously confirmed pair

The same VMS ID ↔ ATS placement pair was confirmed in a previous reconciliation. Near-instant, highest confidence.

Typical: 98–100%
2 · Exact

Deterministic key match

Exact match on one or more of: external ID, placement number, candidate email, SSN last-4 combined with name.

Typical: 95–99%
3 · Fuzzy

Tolerant string + field

Normalized-name similarity, rate within tolerance, overlapping start/end dates, same client — weighted into a score (a sketch follows these cards).

Typical: 70–94%
4 · AI

LLM reasoning with citations

Structured LLM compare over name, client, role, rate, dates. Emits a confidence score and a reasoning pill for every match (a sketch of the structured output also follows these cards).

Typical: 60–90%
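For the Fuzzy strategy's weighted score (card 3 above), here is one way the blend could look. The weights, the ±2% rate tolerance, and difflib standing in as the similarity measure are all assumptions for illustration:

```python
from difflib import SequenceMatcher

# Illustrative weights and tolerance; real values would be tenant-tuned.
WEIGHTS = {"name": 0.40, "rate": 0.25, "dates": 0.20, "client": 0.15}
RATE_TOLERANCE = 0.02  # assumed: within ±2% counts as "rate within tolerance"

def _norm(s: str) -> str:
    return " ".join(s.lower().split())

def name_similarity(a: str, b: str) -> float:
    """Normalized-name similarity in [0, 1]."""
    return SequenceMatcher(None, _norm(a), _norm(b)).ratio()

def fuzzy_score(vms: dict, ats: dict) -> float:
    """Weight the four signals into a single 0–1 confidence."""
    signals = {
        "name": name_similarity(vms["name"], ats["name"]),
        "rate": 1.0 if abs(vms["rate"] - ats["rate"]) <= RATE_TOLERANCE * ats["rate"] else 0.0,
        # Overlapping start/end dates (ISO date strings compare correctly).
        "dates": 1.0 if vms["start"] <= ats["end"] and ats["start"] <= vms["end"] else 0.0,
        "client": 1.0 if vms["client"] == ats["client"] else 0.0,
    }
    return sum(WEIGHTS[k] * v for k, v in signals.items())

# e.g. "Jon Smith" @ $85.00 vs. "John Smith" @ $87.50 on the same client and
# dates scores ≈ 0.73 — inside the 70–94% human-review band.
```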
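And for the AI strategy (card 4), a sketch of the kind of structured output that yields both a confidence score and a reasoning pill. The JSON field names and shape are assumptions, not the agent's actual contract:

```python
import json
from dataclasses import dataclass

@dataclass
class AIMatchResult:
    confidence: float            # 0.0–1.0, surfaced as the match confidence
    reasoning: str               # surfaced as the reasoning pill
    agreed_fields: list[str]
    disagreed_fields: list[str]

def parse_ai_compare(raw: str) -> AIMatchResult:
    """Parse the model's structured compare over name, client, role, rate, dates."""
    payload = json.loads(raw)
    return AIMatchResult(
        confidence=float(payload["confidence"]),
        reasoning=payload["reasoning"],
        agreed_fields=list(payload.get("agreed_fields", [])),
        disagreed_fields=list(payload.get("disagreed_fields", [])),
    )
```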

Confidence-tier routing — humans only see what humans need to.

Once a candidate has a winning strategy and confidence score, it’s routed automatically. The cutoffs are configurable per tenant in Agent Settings, but the defaults mirror what experienced middle-office analysts already do by hand.

≥ 95%
Auto-approve. Match is written to the audit log and appears in the Alert Queue as "Resolved (auto)." Reversible for 7 days.
70–94%
Human review. Surfaced in VMSMatchReview with the side-by-side compare and AI reasoning pill. One-click Approve / Reject / Re-match.
< 70%
Flagged. Held for manual resolution; does not block the rest of the batch. Typically requires data-quality fix on one side or the other.

Every match opens to a side-by-side compare.

When a human reviews a match, the row expands to show the VMS record on one side and the ATS placement on the other, field-by-field, with agreement indicators on each field. Above the compare sits the AI reasoning pill explaining why the agent scored the match the way it did. One click to approve. One click to reject. One click to request a re-match with different fields.

Everything you do here writes to the audit log: actor, timestamp, decision, and the match candidate that was accepted or rejected. If a re-match is requested, the new candidate shows up in the queue immediately.
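Sketched as a record shape, an audit entry carries exactly those four things. The field and function names here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    actor: str                   # the reviewing user
    decision: str                # "approve" | "reject" | "rematch"
    candidate_id: str            # the match candidate accepted or rejected
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def record_decision(actor: str, decision: str, candidate_id: str) -> AuditEntry:
    entry = AuditEntry(actor, decision, candidate_id)
    # Hypothetical: append to the tenant's audit log. A "rematch" decision
    # would also enqueue a new candidate so it shows up in the queue immediately.
    return entry
```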

Recover the revenue hiding in the VMS gap.

Currently in Beta on the Transform tier. Every tenant starts in dry-run for the first full reconciliation batch.

How agents work inside the Command Center →