OCR, document AI, and inspection
Pixel-level decisions in regulated environments are not the same problem as consumer image classification. A misread on an invoice is an accounting issue. A misread on a clinical document, a quality-control inspection, or a compliance form is a regulatory one. The cost of being wrong shapes the entire system - what gets escalated to a human, how confidence is exposed, what is logged, and how the model version is tied to every decision it produced.
Where this earns its place is the long tail of work that humans currently absorb because nothing else has worked: invoices in fifteen layouts, intake forms with handwriting, bills of lading (BOLs) and discharge summaries that don't fit a clean template, quality inspection on lines moving faster than a human can sustain. Document AI and visual inspection done well don't replace the expert; they push the rote pattern-matching into the system and leave the judgment with the person.
We build these systems to fail loudly. Confidence below a threshold routes to a queue; every decision is traceable to the inspecting model version; the eval harness includes the cases the operations team flagged last quarter. In medical imaging adjacents, the posture is firm - we build triage assistants that augment radiologists, not systems that pretend to replace them.
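The routing posture above can be sketched in a few lines. This is a minimal illustration, not production code: the threshold value, field names, and model version string are all hypothetical, and a real system would tune thresholds per field and document type.

```python
from dataclasses import dataclass

# Hypothetical threshold; in practice this is tuned per field and document type.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float

@dataclass
class Decision:
    extraction: Extraction
    model_version: str       # every decision is traceable to the model that made it
    routed_to_human: bool    # below-threshold reads fail loudly into a review queue

def route(extraction: Extraction, model_version: str) -> Decision:
    """Escalate low-confidence extractions instead of silently accepting them."""
    return Decision(
        extraction=extraction,
        model_version=model_version,
        routed_to_human=extraction.confidence < CONFIDENCE_THRESHOLD,
    )

# A low-confidence handwriting field lands in the human queue,
# stamped with the (hypothetical) model version that produced it.
decision = route(Extraction("invoice_total", "1,204.50", 0.62), "docai-v3.1.4")
```

The point of the sketch is the shape, not the numbers: confidence is never hidden, and the model version travels with the decision so the eval harness can replay last quarter's flagged cases against the exact model that handled them.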

Three ways this shows up in production.
Document AI
Invoices, intake forms, BOLs, and clinical notes - parsed and routed.
Visual inspection
Quality control on the line, flagged before it leaves the floor.
Medical imaging adjacents
Triage assistants that augment radiologists, not replace them.