Learning objectives are tagged to Bloom's cognitive levels at input — this metadata drives downstream AI prompt selection, assessment difficulty, and activity type assignment automatically.
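A minimal sketch of how Bloom-level metadata could drive downstream selection. All names here (`BLOOM_CONFIG`, `select_generation_params`, the specific prompt styles) are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical sketch: a Bloom's-level tag on a learning objective
# looks up the AI prompt style, assessment difficulty, and activity
# type automatically. Field names and values are assumptions.
from dataclasses import dataclass

BLOOM_CONFIG = {
    "remember":   {"prompt_style": "definition_recall",   "difficulty": 1, "activity": "flashcard"},
    "understand": {"prompt_style": "explain_in_own_words", "difficulty": 2, "activity": "concept_check"},
    "apply":      {"prompt_style": "worked_scenario",      "difficulty": 3, "activity": "practice_problem"},
    "analyze":    {"prompt_style": "compare_contrast",     "difficulty": 4, "activity": "case_study"},
}

@dataclass
class LearningObjective:
    text: str
    bloom_level: str  # tagged once, at input time

def select_generation_params(obj: LearningObjective) -> dict:
    """Resolve generation settings purely from the Bloom tag."""
    return BLOOM_CONFIG[obj.bloom_level]

lo = LearningObjective("Apply Ohm's law to series circuits", "apply")
params = select_generation_params(lo)
```

The point of the lookup table is that no downstream component re-decides difficulty or activity type; the tag made at input is the single source of truth.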
AI generates modular content blocks — never full lessons. Each block has a defined schema (concept, scenario, quiz-item) allowing independent review, versioning, and reuse across courses.
Each component is a self-contained module with its own metadata, learning objective mapping, and accessibility attributes — enabling cross-platform reuse and A/B testing of individual components.
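The block schema described above can be sketched roughly as follows. The field names (`block_id`, `objective_ids`, `accessibility`) and the validation rule are assumptions for illustration, not the production schema:

```python
# Illustrative sketch of a self-contained content block carrying its
# own metadata, objective mapping, version, and accessibility
# attributes. All field names are assumptions.
from dataclasses import dataclass, field

ALLOWED_TYPES = {"concept", "scenario", "quiz-item"}

@dataclass
class ContentBlock:
    block_id: str
    block_type: str               # one of ALLOWED_TYPES
    version: int                  # versioned independently of any lesson
    objective_ids: list           # mapping back to learning objectives
    body: str
    accessibility: dict = field(default_factory=dict)  # e.g. alt text

    def is_valid(self) -> bool:
        """Minimal schema check enabling independent review and reuse."""
        return (
            self.block_type in ALLOWED_TYPES
            and bool(self.objective_ids)
            and "alt_text" in self.accessibility
        )

block = ContentBlock(
    block_id="phys-101-ohms-law-qz-003",
    block_type="quiz-item",
    version=2,
    objective_ids=["LO-4.2"],
    body="A 9 V battery drives a 3 ohm resistor...",
    accessibility={"alt_text": "Circuit diagram: battery and resistor"},
)
```

Because every block carries its own objective mapping and version, two variants of the same `block_id` can be A/B tested, and a block authored for one course can be reused in another without dragging a whole lesson along.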
AI generates — humans validate — the system standardizes. Every AI output passes through a minimum of two human reviewers before entering the production pipeline. This is non-negotiable regardless of AI confidence scores.
The assembled lesson below is the output of this phase — a fully interactive MindTap experience built from the modular components produced in Phases 1–5.
The Student Assistant is trained per course and per chapter. It knows which question the student is working on, but its responses are architecturally prevented from producing the answer — it can only guide the student's thinking toward it, using Socratic questioning and textbook references.
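One way such a guardrail could work is a response filter that sits between the model and the student: the system knows the active question's answer, and any draft reply that would reveal it is replaced with a Socratic redirect. This is a simplified sketch under that assumption; `guard_response` and the fallback prompt are hypothetical:

```python
# Illustrative guardrail sketch: the assistant's draft reply is checked
# against the known answer for the active question before it reaches
# the student. A leaking reply is swapped for a Socratic redirect.
# Real systems would need fuzzier matching than a substring test.
def guard_response(draft_reply: str, known_answer: str) -> str:
    """Block replies containing the answer; redirect Socratically."""
    if known_answer.lower() in draft_reply.lower():
        return ("Let's reason it through: which equation from the "
                "chapter relates the quantities in this problem?")
    return draft_reply
```

The enforcement lives outside the model, which is what "architecturally prevented" implies: even a model that tries to answer directly cannot get that text past the filter.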