Portfolio case study · UST HealthProof

Designing for the gap between diagnosis and coverage

A systems-level concept for UST HealthProof’s risk adjustment ecosystem — using AI to turn unstructured medical records into actionable signals across four connected workflows.

Conceptual/speculative project for portfolio purposes.

Role · Product designer
Type · Concept / speculative
Domain · Healthcare · Risk adjustment · BPaaS
Screens · 4 connected surfaces
01

The problem

Unstructured records create gaps that the plan can't close

Health plans operating in risk adjustment ecosystems like UST HealthProof’s face a persistent challenge: records arrive as PDFs, scanned documents, and handwritten notes, making diagnosis capture slow and inconsistent. When chronic conditions are missed or left uncoded, HCC scores are incomplete, which raises audit risk and the likelihood of plan mismatch.

The issue cascades across the care pipeline: coders can miss conditions hidden in chart notes, enrollment analysts may make tier decisions on incomplete data, and providers may start visits without a clear view of the patient’s known conditions.

When a member’s documented conditions are not reflected in their risk score, plans can lose accuracy, revenue, and member trust.

Revenue at risk
A single missed HCC condition can materially reduce annual reimbursement.
Hidden cost
Manual chart retrieval and diagnosis extraction remain major productivity drains in coding workflows.
CMS RADV
Audit exposure increases when submitted risk scores are not supported by clinical documentation.
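The revenue exposure can be made concrete with a back-of-the-envelope calculation. This sketch assumes a placeholder HCC risk coefficient and monthly base rate; real figures vary by model year, condition, and plan, so both constants are illustrative only.

```python
# Illustrative only: both constants are placeholders, not real CMS figures.
HCC_COEFFICIENT = 0.25   # assumed risk weight of the missed condition
MONTHLY_BASE_RATE = 900  # assumed per-member monthly base payment (USD)

def annual_revenue_at_risk(coefficient: float, base_rate: float) -> float:
    """Annual payment impact of one uncaptured HCC:
    risk coefficient x monthly base rate x 12 months."""
    return coefficient * base_rate * 12

print(annual_revenue_at_risk(HCC_COEFFICIENT, MONTHLY_BASE_RATE))  # 2700.0
```

Even with conservative placeholder values, one uncaptured condition compounds across a member population for the full payment year.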
02

Research

Primary and secondary

For primary research, I would conduct structured interviews with medical coders and care managers in the risk adjustment workflow, observing their interactions with Enrollment+™ to surface friction points, cognitive load triggers, and workaround behaviours. Findings would be mapped to a journey anchored to the chart review and HCC gap closure process.

For secondary research, I reviewed CMS-HCC risk adjustment model documentation and OIG audit reports on risk adjustment data validation to understand the regulatory and financial stakes of coding inaccuracies.

03

Users

Three actors, one pipeline

The product concept serves three distinct roles across the payer and provider ecosystem. Each has a different relationship to the data, a different definition of success, and a different tolerance for interface complexity.

Primary · Payer-side
Medical coder / risk adjustment analyst
Reviews AI-flagged conditions, accepts or corrects HCC codes, and closes gaps for CMS submission. High volume, compliance-critical, needs speed and trust.
Secondary · Payer-side
Enrollment ops analyst
Uses confirmed HCC data to assign risk tiers and match members to appropriate plans. Needs accurate segmentation, not raw clinical detail.
Tertiary · Provider-side
Physician / care provider
Sees a pre-visit briefing with clinical context — conditions to confirm, open care gaps. Needs clinical framing, not billing language.
04

Screens

Four surfaces, one connected system

Each screen corresponds to a distinct moment in the pipeline — from document upload to clinical encounter. The design decisions in each surface reflect who's using it, what they need to trust, and what action the system is asking them to take.

SCREEN 01
AI interpretation card
Coder · chart review queue
SCREEN 02
Coder override interface
Coder · condition correction
SCREEN 03
Enrollment segmentation dashboard
Enrollment analyst · plan assignment
SCREEN 04
Provider pre-visit briefing
Physician · EHR embedded panel
Screen 01 AI interpretation card — chart review queue
AI interpretation card showing three flagged conditions — Type 2 diabetes at 94% high confidence, CKD stage 3 rejected, and hypertensive heart disease at 61% needs review — each with evidence snippets and accept/reject actions

Each scanned document produces one card. Three conditions are flagged from a discharge summary: Type 2 diabetes mellitus at high confidence (94%), CKD stage 3 rejected with a reason ("not sufficient for coding"), and hypertensive heart disease at 61% needing review. Each flag shows the exact evidence snippet from the source text and an Accept / Reject / Add note action set. Coders can bulk-accept all high-confidence flags at the bottom without reviewing each individually.
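The card's three review bands reduce to a simple triage rule. The 0.85 and 0.50 thresholds below are assumptions for illustration; the concept specifies bands (high confidence, needs review, rejected) rather than exact cut-offs, and the CKD confidence value is invented.

```python
from dataclasses import dataclass

# Assumed thresholds; the concept defines bands, not exact cut-offs.
HIGH_CONFIDENCE = 0.85
REVIEW_FLOOR = 0.50

@dataclass
class Flag:
    condition: str
    confidence: float

def triage(flag: Flag) -> str:
    """Route an AI-extracted condition flag into one of the card's bands."""
    if flag.confidence >= HIGH_CONFIDENCE:
        return "high-confidence"   # eligible for bulk acceptance
    if flag.confidence >= REVIEW_FLOOR:
        return "needs-review"      # coder inspects the evidence snippet
    return "rejected"              # shown with a reason, never auto-coded

flags = [
    Flag("Type 2 diabetes mellitus", 0.94),
    Flag("Hypertensive heart disease", 0.61),
    Flag("CKD stage 3", 0.32),  # confidence value invented for illustration
]
print([triage(f) for f in flags])
# ['high-confidence', 'needs-review', 'rejected']
```

Only the first band is eligible for bulk acceptance, which is why the bulk action can be offered without undermining the compliance review of borderline flags.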

Screen 02 Coder override interface — inline correction
Coder override interface showing an inline reason dropdown expanded with options: not documented, condition already coded, insufficient clinical evidence, incorrect code suggested, and other

Override expands inline — no modal or navigation away from the queue. A required reason dropdown surfaces five options: not documented in this record, condition already coded, insufficient clinical evidence, incorrect code suggested, and other. Selecting "Incorrect code suggested" would reveal a code substitution field, feeding a correction signal back to the AI model. Friction is calibrated to the flag's confidence — high-confidence overrides require a reason; lower-confidence ones do not.
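The confidence-calibrated friction described above amounts to a small guard. The 0.85 threshold and the `incorrect_code` reason key are assumptions; only the behaviour (a required reason for high-confidence overrides, plus a feedback signal on code substitutions) comes from the concept.

```python
HIGH_CONFIDENCE = 0.85  # assumed threshold, matching the card's bands

def override_requires_reason(confidence: float) -> bool:
    """High-confidence flags get friction; lower-confidence ones do not."""
    return confidence >= HIGH_CONFIDENCE

def apply_override(confidence: float, reason=None) -> dict:
    """Record an override. The 'incorrect_code' reason also feeds a
    correction signal back to the extraction model (key name illustrative)."""
    if override_requires_reason(confidence) and reason is None:
        raise ValueError("A reason is required to override a high-confidence flag")
    return {
        "overridden": True,
        "reason": reason,
        "model_feedback": reason == "incorrect_code",
    }
```

The asymmetry is deliberate: disagreement with a confident model is the most valuable training signal, so that is where the structured reason is demanded.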

Screen 03 Enrollment segmentation dashboard — shared view
Enrollment segmentation dashboard showing four KPI tiles, a filterable member table with risk tiers and HCC gap status, and an expanded profile drawer for Arjun Menon showing RAF score, confirmed HCC codes, and a plan match recommendation at 94% confidence

Shared by risk adjustment and enrollment ops. Four KPI tiles surface the pipeline at a glance: 247 members pending segmentation, 84 high risk, 12 flagged for critical review, and a 91% HCC gap closure rate. Each member row shows risk tier, gap status, and plan match. Expanding a row (shown for Arjun Menon) reveals the RAF score bar, confirmed HCC codes, and a plan match recommendation with confidence score — all actionable with Confirm, Export, or Override.
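A minimal sketch of how the drawer's RAF score and risk tier could be derived. The additive structure (a demographic factor plus one coefficient per confirmed HCC) follows the CMS-HCC model; the tier boundaries and coefficient values are invented for illustration and would in practice come from the plan's actuarial rules.

```python
def raf_score(demographic: float, hcc_coefficients: list[float]) -> float:
    """RAF is additive in the CMS-HCC model: a demographic factor
    plus one coefficient per confirmed HCC."""
    return demographic + sum(hcc_coefficients)

def risk_tier(raf: float) -> str:
    """Bucket a member for the segmentation dashboard (boundaries invented)."""
    if raf >= 2.0:
        return "critical review"
    if raf >= 1.2:
        return "high risk"
    if raf >= 0.8:
        return "moderate"
    return "low"

# e.g. a member with two confirmed HCCs (placeholder coefficients)
score = raf_score(0.5, [0.25, 0.25])
print(score, risk_tier(score))  # 1.0 moderate
```

Because tiers derive deterministically from confirmed codes, the enrollment analyst never needs the raw clinical detail, only the closed-gap HCC set.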

Screen 04 Provider pre-visit briefing — EHR embedded panel
EHR-embedded pre-visit briefing panel showing patient complexity badge, four conditions to confirm with Active/Uncertain/Resolved toggles, and two open care gaps including a high-priority HbA1c flag

A sidebar panel embedded in the existing EHR alongside the patient chart. No ICD-10 codes visible — condition names only, keeping the frame clinical rather than billing-oriented. Four conditions are listed with Active / Uncertain / Resolved toggles for the provider to confirm currency at the visit. Below, two open care gaps are framed as clinical recommendations: a high-priority HbA1c flag and a nephrology referral. The CDI nudge appears last, after clinical context has established trust.

05

Design decisions

Choices worth explaining

Five decisions shaped the system's character — each one a deliberate answer to a tension between competing design values.

06

Hypothesised impact

What success looks like

As a concept project, impact is framed as a hypothesis grounded in industry benchmarks for retrospective chart review and HCC gap closure programmes.

30–40%
Estimated reduction in chart review time per coder, driven by AI pre-extraction and bulk acceptance of high-confidence conditions
↑ HCC accuracy
Fewer missed conditions in CMS submission, reducing retrospective audit exposure and retroactive payment adjustments
Faster enrollment
Risk-stratified member segmentation based on complete HCC data enables confident plan placement decisions earlier in the enrollment cycle
CDI loop closed
Provider pre-visit briefing surfaces documentation gaps before the encounter — shifting CDI from retrospective correction to prospective prevention
07

Transformation

From manual chart review to AI-assisted coding

The concept reimagines a high-friction legacy workflow as a faster, more scalable review experience while preserving coder judgment and compliance controls.

Before
Manual coding workflow screen
  • ~23 min average review time
  • Conditions buried in notes
  • Manual lookup + evidence search
  • Higher miss / inconsistency risk
After
AI-assisted coding workflow screen
  • 8–12 min assisted review target
  • Conditions surfaced automatically
  • Evidence snippets + confidence
  • Bulk actions + coder override