Topics
1: Introduction
1.1: Purpose of this Guide
1.2: Target Audience
1.3: Scope
1.4: Why PrairieLearn Questions Require a Different Approach
2: PrairieLearn As A Measurement Tool
2.1: Implementing in Homework Problems, Tests, and In-Class Activities
2.2: Observable Student Behavior vs. Intended Learning Outcomes
2.3: Feedback Loops Between Question Design and Teaching
3: Defining Learning Objectives
3.1: Translating Course Goals into Assessable Skills
3.2: Concept Isolation - Helping Students Master Specific Topics
3.3: Cognitive Skill Classification and Level of Understanding per Topic
3.4: Mapping Objectives to Question Types
4: The Anatomy of a High-Quality PrairieLearn Question
4.1: Clear Concept Focus
4.2: Controlled Difficulty
4.3: Randomization Without Changing Cognitive Demand
4.4: Designing for Interpretability of Student Errors
4.5: Alignment with Learning Objectives
5: Common Failure Modes In Question Design
5.1: Ambiguous Prompts
5.2: Over-Randomization
5.3: Guessable Correctness
5.4: Sub-Question Barring (as discussed by Dan Garcia)
5.5: Mismatch Between Objectives and Assessments
6: Question Design Workflow
6.1: Define the Learning Objective
6.2: Identify the Correct Reasoning Path
6.3: Identify Common Incorrect Reasoning Paths
6.4: Select the Appropriate PrairieLearn Question Type
6.5: Implement and Debug the Autograder Logic
6.6: Add Randomization Strategically
6.7: Internal Testing and Validation
7: Instrumenting Questions For Data Collection
7.1: Metadata and Tagging Strategy
7.2: Concept Labels
7.3: Difficulty Labels and Question-Smell Categorization
7.4: Skill Type Labels
7.5: Versioning and Change Tracking (optional)
8: PrairieLearn Analytics Core Metrics
8.1: First-Attempt Success Rate
8.2: Number of Attempts per Student
8.3: Time to First Submission After an Attempt Is Started
8.4: Time Between Attempts
8.5: Answer Distribution Analysis
8.6: Variant Performance Comparison
9: Interpreting Student Data
9.1: Identifying Misconceptions
9.2: Distinguishing Productive Struggle from Friction
9.3: Detecting Guessing Behavior
9.4: Evaluating Question Difficulty
9.5: Evaluating Fairness Across Variants
10: Iterative Improvement Cycle
10.1: Post-Deployment Review Process
10.2: Deciding When a Question Needs Revision
10.3: Revising Distractors and Feedback
10.4: Revising Randomization Parameters / Range
10.5: Version Comparison Across Course Offerings
11: Integration With Course Pedagogy
11.1: Using Question Data to Inform Lecture Content
11.2: Using Question Data to Inform Discussion Sections
11.3: Aligning Homework, Exams, and Practice Questions
11.4: Supporting Mastery-Based Learning
12: Implementation Strategy For Course Staff
12.1: Roles and Responsibilities
12.2: Authoring Workflow for Teams
12.3: Review and Quality Assurance Process
12.4: Timeline for Question Development
12.5: Maintaining a Shared Question Bank
13: Case Study Template
13.1: Original Learning Objective
13.2: Initial Question Design
13.3: Observed Student Data
13.4: Identified Issues
13.5: Redesigned Question
13.6: Post-Redesign Outcomes
14: Best Practices Checklist
15: Future Directions And Research Opportunities
15.1: Potential for Cross-Course Analytics
15.2: Opportunities for Automated Insight Generation
5: Common Failure Modes In Question Design
5.4: Sub-Question Barring (as discussed by Dan Garcia)