AI Innovation Program & Product Incubator
Designing a scalable framework for evaluating, validating, and governing AI-enabled healthcare products.
Overview
Breastcancer.org needed a repeatable way to evaluate and pilot AI-enabled digital products while protecting medical trust. I built a structured innovation and incubation framework that aligned stakeholders, defined guardrails, and accelerated pilot readiness.
Problem / Opportunity
What was broken (or missing), and why it mattered.
Innovation demand was rising, but there wasn’t a formal system to evaluate, prioritize, and pilot AI-enabled products.
Efforts were spread across departments without clear governance, decision criteria, or shared language.
Healthcare content required explicit boundaries, sourcing transparency, escalation paths, and bias/hallucination risk controls.
My role
Explicit ownership and responsibilities.
- Product discovery
- Innovation governance design
- Stakeholder alignment
- AI workflow planning
- Pilot prioritization
- Product requirements development
- Risk and trust evaluation
- Cross-functional coordination (research, content, tech, leadership)
Process / Approach
Where the strategy, structure, governance, and tradeoffs live.
- 1) Innovation framework + intake
Designed a repeatable process for proposing, evaluating, and incubating AI product concepts with clear gates and criteria.
- 2) Discovery workflows
Created structured discovery to identify patient pain points, workflow friction, and engagement gaps grounded in real usage signals.
- 3) MVP definitions + validation criteria
Produced product sheets, MVP scopes, and validation criteria for multiple AI-enabled concepts to reduce ambiguity and speed alignment.
- 4) Governance guardrails
Established boundaries for AI-generated medical content, sourcing transparency, escalation pathways, and review requirements.
- 5) Pilot sequencing + metrics
Defined success metrics and evaluation approaches so pilots could be compared, prioritized, and governed consistently.
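The intake-and-gating flow above can be sketched as a simple scoring function. This is an illustrative sketch only: the criteria names, 1-5 scale, and thresholds are hypothetical placeholders, not the actual rubric used in the program.

```python
from dataclasses import dataclass

# Hypothetical gate thresholds (1-5 scale, 5 = strongest); illustrative only.
GATES = {
    "clinical_risk": 3,           # minimum acceptable safety score
    "sourcing_transparency": 3,   # can every claim be traced to a vetted source?
    "feasibility": 2,             # engineering and content effort
}

@dataclass
class Concept:
    name: str
    scores: dict  # criterion -> score assigned by the review panel

def evaluate(concept: Concept) -> str:
    """Advance a concept only if every gate threshold is met; otherwise hold."""
    failed = [c for c, floor in GATES.items()
              if concept.scores.get(c, 0) < floor]
    return "advance" if not failed else "hold: " + ", ".join(failed)

symptom_logger = Concept(
    "conversational symptom logger",
    {"clinical_risk": 4, "sourcing_transparency": 5, "feasibility": 3})
auto_advice = Concept(
    "auto-generated treatment advice",
    {"clinical_risk": 1, "sourcing_transparency": 2, "feasibility": 4})

print(evaluate(symptom_logger))  # advance
print(evaluate(auto_advice))     # hold: clinical_risk, sourcing_transparency
```

The design choice the sketch illustrates: gates are pass/fail floors rather than a weighted average, so a concept cannot buy its way past a trust or safety failure with a high feasibility score.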
Optimized for trust and governance first, even when it reduced speed, because the long-term cost of a trust failure is higher than the cost of slower iteration.
Explicitly evaluated hallucinations, recommendation bias, content staleness, and patient trust concerns before implementation planning.
Designed frameworks that scale across multiple products: AI-assisted search, retrieval, guided journeys, conversational logging, and structured education.
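One way to picture how the guardrails and the retrieval products fit together is an "answer only from vetted sources" pattern: if no grounded passage is found, the system escalates instead of generating. The corpus, the naive word-overlap scoring (standing in for vector search), and the escalation message below are all hypothetical placeholders, sketched under those assumptions.

```python
# Hypothetical vetted corpus; real systems would use a curated content store.
CORPUS = [
    {"id": "bco-123", "text": "Mammograms are recommended annually for many patients over 40."},
    {"id": "bco-456", "text": "Tamoxifen is a hormonal therapy used for some breast cancers."},
]

def retrieve(query: str, min_overlap: int = 2):
    """Rank passages by naive word overlap; a production system would use
    embeddings and a vector index instead."""
    q = set(query.lower().split())
    scored = [(len(q & set(d["text"].lower().split())), d) for d in CORPUS]
    scored = [(s, d) for s, d in scored if s >= min_overlap]
    if not scored:
        return None
    return max(scored, key=lambda t: t[0])[1]

def answer(query: str) -> str:
    """Answer with an attached source id, or refuse and escalate."""
    doc = retrieve(query)
    if doc is None:
        # Escalation path: no grounded source, so do not generate an answer.
        return "No vetted source found; routing to clinical review."
    return f'{doc["text"]} [source: {doc["id"]}]'

print(answer("are mammograms recommended annually"))
print(answer("unrelated question"))
```

The refusal branch is the guardrail: sourcing transparency comes from the `[source: ...]` tag on every answer, and the escalation path covers everything retrieval cannot ground.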
Visual systems diagrams
Placeholders for architecture maps, flows, governance models, and lifecycle diagrams.
A repeatable pipeline for intake, evaluation, approval, pilot gating, and post-pilot decisions.
A phased experimentation model showing validation criteria and review points.
Technology
Tools and systems involved.
- OpenAI APIs
- Claude
- AI agents
- Vector search / retrieval systems
- Supabase
- SQL
- Figma
- Next.js
- Analytics frameworks
- Workflow automation systems
- Product documentation systems
Outcome / Impact
The organizational and product-level effect.
- Established a formal innovation + AI product evaluation framework
- Accelerated ideation and pilot planning across departments
- Improved stakeholder alignment around governance and implementation
- Reduced ambiguity in prioritization and experimentation
- Validated multiple product concepts for future pilot deployment