Product Idea: MedSchools.ai Lecture Notes — AI-Powered Study Companion for Med Students
P3 (Low)

MedSchools.ai Lecture Notes — Product Spec (Idea Stage)
Origin
Henry Kim (CEO), March 11, 2026. Strategic expansion of MedSchools.ai from pre-med admissions tool into an in-school companion platform.
Vision
Extend MedSchools.ai's value beyond admissions into the med school experience itself. Students who used MedSchools.ai to GET IN now use it to SUCCEED once enrolled. This creates a lifecycle product: admissions → enrolled student → residency.
Core Concept
A mobile + web app where med students can:
- Record lectures — audio or audio+video capture from phone or laptop
- Get AI-processed notes — automatic transcription, highlighted summaries, key-concept extraction, visual content recognition
- Share with classmates — students in the same school/class can access shared recordings and notes
- Prep for exams — AI synthesizes all lecture content into targeted study guides for midterms, finals, and board exams (USMLE Step 1/2/3)
User Flow
- Download app → Login with existing MedSchools.ai account (or create new)
- Select medical school → Browse or create class (e.g., "Biochemistry — Spring 2026")
- Hit record during lecture → audio/video captured locally, uploaded in background
- AI processes recording and generates:
  - Full searchable transcript
  - Highlighted summary with key points, definitions, clinical correlations
  - Concept tags and topic categorization
  - Visual content extraction (whiteboard content, projected slides, diagrams — if video)
  - Linkage to USMLE/COMLEX topic outlines
- Student reviews/edits AI summary → publishes to class group
- Classmates who missed lecture or want another perspective access the recording + summary
- Before exams: select a date range or topic → AI generates comprehensive study guide from all relevant lectures
- Board exam prep: AI cross-references lecture content with official exam topic outlines, identifies coverage gaps
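The flow above implies a small data model linking classes, lectures, and study guides. A minimal TypeScript sketch of the "select a date range or topic" step (all type and field names are illustrative assumptions, not an existing API):

```typescript
// Illustrative data model for the user flow above (names are assumptions).
interface Lecture {
  id: string;
  classId: string;          // e.g. "Biochemistry — Spring 2026"
  recordedAt: Date;
  topics: string[];         // AI-extracted concept tags
  transcript: string;       // full searchable transcript
  summaryPublished: boolean;
}

// "Select a date range or topic → AI generates study guide":
// choosing the relevant lectures is a simple filter before the AI step.
function selectLecturesForStudyGuide(
  lectures: Lecture[],
  from: Date,
  to: Date,
  topic?: string
): Lecture[] {
  return lectures.filter(
    (l) =>
      l.recordedAt.getTime() >= from.getTime() &&
      l.recordedAt.getTime() <= to.getTime() &&
      (topic === undefined || l.topics.includes(topic))
  );
}
```

The actual study-guide generation would then run over the selected lectures' transcripts and summaries.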
Why Multimodal Embeddings Are Critical
This is the ideal use case for advanced embedding technology like Gemini Embedding 2:
- Audio embeddings — Semantically index lecture audio segments. A student searches "mitral valve regurgitation" and jumps directly to the 43-minute mark where the professor discussed it.
- Video frame embeddings — Capture and index whiteboard drawings, projected slides, anatomical diagrams, clinical images shown during lecture.
- Cross-modal search — Student types a text query → system searches across audio transcripts, video frames, slides, and handouts simultaneously, returning the most relevant moments.
- PDF/document embeddings — Professors' handouts, syllabus materials, textbook excerpts all embedded alongside lecture content in the same vector space.
- Unified semantic space — A single query like "explain the Krebs cycle" retrieves: the lecture audio segment, the relevant slide, the professor's whiteboard diagram, the textbook page, AND related board exam practice questions.
This is NOT just text RAG — it's audio+video+document+text in one searchable knowledge base, organized per student, per class, per school.
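The mechanics of the unified semantic space can be sketched in a few lines: if every modality is embedded by the same model, one text-query vector can be scored against audio segments, video frames, and documents in a single index. A toy TypeScript version (cosine similarity over an in-memory list; a real system would use a vector database such as pgvector):

```typescript
// Sketch of cross-modal search in a unified vector space. Assumption: all
// modalities are embedded by the same model, so their vectors are comparable.
type Modality = "audio" | "videoFrame" | "slide" | "document";

interface EmbeddedItem {
  modality: Modality;
  ref: string;        // e.g. "lecture-12@43:00" or "handout.pdf#p7"
  vector: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// One text query searches every modality at once and returns the top-k hits.
function crossModalSearch(query: number[], index: EmbeddedItem[], k = 3) {
  return index
    .map((item) => ({ ...item, score: cosine(query, item.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The same ranking logic applies whether the hit is a transcript chunk or a whiteboard frame, which is what makes the "jump to the 43-minute mark" experience possible.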
Revenue Model
Free tier (drives adoption + network effects):
- Record lectures (unlimited)
- Share recordings with classmates
- Basic transcript (unformatted)
- Class/school organization
Premium (part of MedSchools.ai subscription — for enrolled students):
- AI-generated highlighted summaries
- Smart concept extraction and tagging
- Visual content recognition (slides, whiteboard)
- Exam prep study guide generation
- Board exam topic mapping (USMLE/COMLEX alignment)
- Cross-lecture concept linking ("every time mitral valve was mentioned across all classes")
- Advanced semantic search across all content
- Spaced repetition flashcard generation from lecture content
Pricing thought: Could be a separate tier or add-on — "MedSchools.ai Student" vs current "MedSchools.ai Applicant" plan.
Strategic Value
- Lifecycle retention — Students stay on platform through all 4 years of med school, not just the admissions cycle (12 months → 48+ months LTV)
- Viral network effects — One student records → whole class benefits → organic growth within cohorts. Each new class year inherits recordings from previous years.
- Massive data moat — Corpus of med school lecture content organized by school/class/professor/topic. No competitor has this.
- High-intent upsell — Free recording → paid AI features during exam crunch time (students have extremely high willingness to pay before Step 1)
- Board prep disruption — Competes with Anki, Sketchy, Pathoma, First Aid but with PERSONALIZED content from students' ACTUAL lectures, not generic pre-made material
- Pipeline continuity — Pre-med (admissions) → Med student (lectures/exams) → Residency match (future product)
- School partnerships — Potential B2B angle: med schools license the platform for their students
Competitive Landscape
| Competitor | What They Do | Our Advantage |
|---|---|---|
| Notion AI / Otter.ai | General transcription | Not med-school specific, no exam prep, no class structure |
| Lecturio / Osmosis | Pre-made video content | Not from students' actual lectures, can't personalize |
| Anki | Flashcards | Manual card creation, no lecture capture, no AI summaries |
| Fireflies.ai | Meeting transcription | Optimized for business meetings, not academic lectures |
| Recall.ai | AI note-taking | General purpose, no medical terminology optimization |
Our unique advantage: med-school-specific AI that
- understands medical terminology and clinical context,
- integrates with school/class/professor structure,
- connects to the broader MedSchools.ai ecosystem (admissions data, school profiles), and
- uses multimodal embeddings for audio+video+text search.
Technical Architecture (High Level)
- Mobile app — React Native (cross-platform iOS + Android)
- Web app — SvelteKit (consistent with MedSchools.ai stack)
- Audio processing — OpenAI Whisper for transcription → Gemini Embedding 2 for semantic audio indexing
- Video processing — Frame extraction at key moments → Gemini Embedding 2 for visual content embedding
- Storage — Supabase (metadata + auth) + S3/R2 (media files) + pgvector (embeddings)
- AI summarization — GPT/Claude for generating summaries, study guides, flashcards
- Real-time sharing — Supabase Realtime for class group collaboration
- Exam prep engine — RAG across all course lectures + USMLE/COMLEX topic mapping
- Search — Multimodal vector search (text query → audio + video + document results)
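One concrete piece of the exam prep engine is the coverage-gap check mentioned under board exam prep. A minimal sketch, assuming board outlines and lecture concept tags have been normalized to shared topic strings (the normalization itself is the hard, AI-assisted part):

```typescript
// Sketch of the coverage-gap check in the exam prep engine. Assumption:
// board outline entries and lecture tags use comparable topic strings.
function findCoverageGaps(
  boardOutline: string[],    // e.g. USMLE Step 1 topic list
  lectureTags: string[][]    // concept tags per recorded lecture
): string[] {
  const covered = new Set(lectureTags.flat().map((t) => t.toLowerCase()));
  return boardOutline.filter((topic) => !covered.has(topic.toLowerCase()));
}
```

In practice the match would be semantic (via embeddings) rather than exact-string, but the output shape is the same: the list of board topics no recorded lecture has touched.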
Key Metrics to Track
- Lectures recorded per week per active user
- Share rate (% of lectures shared with classmates)
- Class group size (students per class)
- Conversion: free recorder → paid AI features
- Exam prep usage spike (weeks before exams)
- Retention: MAU through academic year
- NPS among med students
- Storage cost per active user
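Two of the metrics above (share rate, free-to-paid conversion) are simple ratios over event counts; a toy TypeScript computation, with illustrative field names:

```typescript
// Toy funnel-metric computation from raw weekly counts (names illustrative).
interface WeeklyCounts {
  lecturesRecorded: number;
  lecturesShared: number;
  freeRecorders: number;     // users who recorded on the free tier
  paidConversions: number;   // of those, how many upgraded
}

function funnelMetrics(c: WeeklyCounts) {
  return {
    shareRate: c.lecturesShared / c.lecturesRecorded,
    conversionRate: c.paidConversions / c.freeRecorders,
  };
}
```

Tracking these weekly should make the predicted pre-exam usage spike directly visible in the share-rate and conversion series.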
Open Questions
- Build native app or PWA first? A PWA is faster to ship; native is better for reliable background recording.
- Storage costs at scale — 1 hour of audio ≈ 50MB, video ≈ 500MB. Need per-user cost model.
- FERPA / privacy compliance — Lecture recording policies vary by school. Some prohibit recording. Need legal review.
- Professor consent — Some schools require professor permission to record. App could include consent workflow.
- Partnership model — Approach med schools directly? Or organic student-led adoption?
- Content moderation — What happens if inappropriate content is shared?
- Timing — Build after MedSchools.ai core is generating revenue, or prototype in parallel?
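On the per-user storage cost question, a back-of-envelope model using the estimates above (1 h audio ≈ 50 MB, 1 h video ≈ 500 MB) and an assumed object-storage price of $0.015 per GB-month (roughly Cloudflare R2's published storage rate; verify before modeling):

```typescript
// Back-of-envelope storage cost for one user's newly recorded media in a
// month. Note: retained media accumulates, so total cost grows each month
// by roughly this increment unless old recordings are pruned or tiered.
function monthlyStorageCostUSD(
  audioHoursPerMonth: number,
  videoHoursPerMonth: number,
  pricePerGBMonth = 0.015 // assumed object-storage rate, USD per GB-month
): number {
  const gb = (audioHoursPerMonth * 50 + videoHoursPerMonth * 500) / 1000;
  return gb * pricePerGBMonth;
}
```

Under these assumptions, even a heavy recorder (20 h audio + 5 h video per month, about 3.5 GB) adds only a few cents of storage per month; transcription and embedding compute, not storage, likely dominate the cost model.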
Priority & Timeline
P2 — Strategic idea for post-launch phase.
Document now. Prototype after MedSchools.ai core (admissions product) hits revenue targets. The multimodal embedding tech should be cheaper and more mature by then (Gemini Embedding 2 is currently in preview at $0.20/MTok — expect GA pricing to drop significantly).
Ideal launch window: Summer/Fall 2027 — ahead of the new academic year, when incoming M1 students are most receptive to new tools.
Created: Wed, Mar 11, 2026, 4:23 PM by bob
Updated: Wed, Mar 11, 2026, 4:37 PM