Technical Program Manager

I build enablement programs.
These are my AI experiments.

Ten years designing L&D and certification programs at Google and TikTok. These prototypes explore what AI can do in the domain I know best.

3 AI prototypes I built to explore my domain.

Early-stage work. Not production systems. Just me testing what's possible.

AI Experiment · Hosted on Railway

JTA Exam Builder Orchestra

An attempt to automate a process I've been running by hand since my Google days.

14 AI Agents
5 Human Gates
22.5h Build Time

I've run exam design workshops and psychometric reviews professionally at TikTok and Google. This pipeline uses 14 AI agents to replicate that process: evidence gathering, SME simulation, bias review, and blueprint generation. It follows those professional standards as a guide, not as a certified claim, and five human-in-the-loop gates keep a person in the decision chain. Built as a personal experiment to see how far AI can go in a domain I know well.
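The shape of that pipeline can be sketched in a few lines. This is a minimal illustration, not the project's actual code; the stage names and the approve/run signatures are my assumptions. The key idea is that human gates are first-class stages: the pipeline cannot advance past one without an explicit approval.

```typescript
// Hypothetical sketch: agent stages transform a working draft, while human
// gates must explicitly approve the draft before the pipeline continues.
type Stage =
  | { kind: "agent"; name: string; run: (draft: string) => string }
  | { kind: "gate"; name: string; approve: (draft: string) => boolean };

function runPipeline(stages: Stage[], input: string): string {
  let draft = input;
  for (const stage of stages) {
    if (stage.kind === "agent") {
      draft = stage.run(draft); // e.g. evidence gathering, bias review
    } else if (!stage.approve(draft)) {
      // A rejected gate halts the run; nothing downstream executes.
      throw new Error(`Halted at human gate: ${stage.name}`);
    }
  }
  return draft;
}

// Usage: two agent stages around one human gate.
const result = runPipeline(
  [
    { kind: "agent", name: "evidence", run: (d) => d + " +evidence" },
    { kind: "gate", name: "SME review", approve: () => true },
    { kind: "agent", name: "blueprint", run: (d) => d + " +blueprint" },
  ],
  "task",
);
```

The design choice worth noting: gates throw rather than skip, so a rejected review stops the whole run instead of quietly routing around the human.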

Node.js TypeScript Claude API Multi-model Supabase SSE Railway
AI Experiment · Hosted on Vercel

Candidate Experience Portal

A minimal candidate portal for the kind of certification program I've been managing since 2016.

18 E2E Tests
RLS DB Security
Magic Links

Having managed certification portals at Google for Education and TikTok Academy, I wanted to see how quickly I could prototype one from scratch using AI. This covers the candidate lifecycle: auth, eligibility tracking, prerequisite validation, and cross-domain exam handoff. Early-stage. Not designed for scale.
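The prerequisite-validation step above reduces to a small pure function. This is an illustrative sketch under my own assumptions (the `Exam` shape and credential IDs are invented); the real portal enforces the same rule against Supabase with row-level security.

```typescript
// Hypothetical sketch of eligibility: a candidate may register for an exam
// only if every prerequisite credential has already been earned.
interface Exam {
  id: string;
  prerequisites: string[]; // credential IDs required before registration
}

function isEligible(exam: Exam, earned: Set<string>): boolean {
  return exam.prerequisites.every((p) => earned.has(p));
}

// Usage: the advanced exam requires the foundation credential first.
const advanced: Exam = { id: "adv-101", prerequisites: ["found-100"] };
const withPrereq = isEligible(advanced, new Set(["found-100"])); // true
const withoutPrereq = isEligible(advanced, new Set());           // false
```

Keeping the rule as a pure function makes it trivial to cover in the E2E suite and to mirror server-side, so the client check is a convenience rather than the security boundary.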

Next.js 16 React 19 TypeScript Tailwind 4 Supabase Playwright
AI Experiment · Hosted on Vercel

AI Fluency Performance-Based Exam

An AI-graded hands-on exam environment. Built around an assessment model I've run on paper for most of my career.

4D AI Rubric
Auto Codespaces
Score Tracks

Performance-based exams are the hardest part of certification design to get right. This prototype provisions an isolated coding environment per candidate, runs deterministic validation, then uses a four-dimension AI rubric (Delegation, Description, Discernment, Diligence) to assess how well they worked with AI. An experiment in whether AI can grade what humans currently review.
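The rubric aggregation can be sketched simply. The four dimension names come from the project; the 0–4 scale and equal weighting are my assumptions, not the system's actual scoring model.

```typescript
// Hypothetical sketch: aggregate the four rubric dimensions into one score.
type Dimension = "delegation" | "description" | "discernment" | "diligence";
type Scores = Record<Dimension, number>; // assumed 0–4 per dimension

function overallScore(scores: Scores): number {
  const values = Object.values(scores);
  // Equal weighting assumed; a real rubric might weight dimensions differently.
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

const sample: Scores = { delegation: 3, description: 4, discernment: 2, diligence: 3 };
const overall = overallScore(sample); // 3
```

Separating deterministic validation (did the code pass?) from rubric scoring (how well did they collaborate with AI?) is what lets the AI grade the second part without being trusted with the first.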

Next.js 15 TypeScript GitHub API Claude API Supabase Vercel
Restricted Access

What I actually do.

L&D & Training Design

Instructional design & curriculum development
Learning experience mapping & journey design
Blended learning (ILT, vILT, eLearning)
Learning platform strategy (Docebo, Intellum)

Certifications

Exam development & JTA facilitation
Psychometrics & item writing
Credentialing & digital badging
ANAB / ISO/IEC 17024 program design

Program Management

0-to-1 product and program launch
Cross-functional stakeholder management
Vendor and BPO operations
Data-driven program iteration
Agile / Scrum facilitation

AI Experimentation

Anthropic Claude API
Multi-agent workflow design
Human-in-the-loop systems
Supabase · Vercel · Railway
Prompt engineering & iteration

Data & Platforms

SQL / BigQuery / Data Studio
ETL pipeline design
Salesforce / CRM operations
Google Analytics / GTM
LMS administration

Let's work on something real.

Open to TPM roles in AI, EdTech, certifications, L&D, and eCommerce platforms.