Mission:
- To teach aspiring medical school students how to analyze MCAT questions using critical reasoning skills, including a breakdown of the syntactic structure of test questions, answer choices, and associated passages.
- To determine whether the MCAT lacks measurement equivalence and, if so, whether that lack manifests at the component level of individual question structure, across sets of questions, or both.
Primary outcome:
- To teach critical analysis skills that prompt students to analyze MCAT questions at a component level.
- To measure response variance in outcomes on sample MCAT questions using statistical techniques drawn from Item Response Theory (IRT).
- To determine how response variance manifests and to categorize the specific levels of test structure at which that variance produces statistically significant differences in outcomes.
Secondary outcome:
- To gather qualitative, unstructured feedback from students to determine whether outcome variance emerges from perceived biases, whether or not those biases are quantifiable as measurement variance.
- To understand how structural variations in exam design affect the exam experience and to elucidate perceptions of inequity.
Introduction:
Item response theory (IRT) uses statistical modeling to explain the relationship between unobservable characteristics or attributes (latent traits) and observed outcomes, such as specific item responses or scores. We postulate that the MCAT contains undue test variance that results in a lack of measurement equivalence; that is, the structure of the MCAT, despite being a standardized test, leads to disparities in outcomes even among test takers of equal competency.
This phenomenon is often labeled as cultural bias emanating from the exam itself.
We hope to examine this further through rigorous scrutiny of the exam's structure, focusing on component aspects of exam questions, including:
- Syntactic structure of the question stem, answer choices, and, when applicable, the associated passage.
- Pattern functioning of questions in relation to one another, sequentially and spatially, for both passage-based and discrete questions.
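To make the IRT framing above concrete, the following is a minimal sketch in Python of the two-parameter logistic (2PL) item response function, one of the standard IRT models; the function name and parameter values are illustrative only and do not commit the study to a particular model.

    import numpy as np

    def item_response_probability(theta, a, b):
        # Two-parameter logistic (2PL) model: probability that a test taker
        # with latent ability theta answers an item correctly, given the
        # item's discrimination (a) and difficulty (b).
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Example: two items of equal difficulty but different discrimination.
    abilities = np.linspace(-3, 3, 7)
    print(item_response_probability(abilities, a=1.5, b=0.0))
    print(item_response_probability(abilities, a=0.5, b=0.0))

Under this framing, a lack of measurement equivalence appears as item parameters that differ across groups of test takers of equal latent ability.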
Method:
We will take a cohort of ten (10) students and enroll them in a six (6) week course that examines MCAT question structure at the component level, organized as follows:
Week 1: We will review the foundational concepts underlying the MCAT material covered in the rest of the course, ensuring that all students are familiar with the fundamentals of the MCAT content presented.
Week 2: We will begin with an analysis of syntactic structure, drawing on semiotic frameworks and recursion theory. Students will learn to analyze word phraseology, sentence structure, and paragraph assembly.
Week 3: We will move to a component-level analysis of sample MCAT questions, emphasizing the structural modeling of the questions themselves and encouraging students to view each question through the foundational principles presented and the design of the question itself.
Week 4: Students will complete the first block of fifty (50) Science Section questions and fifty (50) Critical Analysis and Reasoning Skills (CARS) questions.
Week 5: Students will complete the second block of fifty (50) Science Section questions and fifty (50) Critical Analysis and Reasoning Skills (CARS) questions.
Week 6: Students will review their test results and discuss their experience working through the question blocks using the critical reasoning skills they developed. Students will share their thoughts in open conversation and write a letter to the AAMC summarizing their experiences.
Questions:
Questions will be sourced from the AAMC MCAT Official Prep Practice Exam. Each block will contain questions from the Science Sections and the Critical Analysis and Reasoning Skills (CARS) Section. Science Sections include questions from Chemical and Physical Foundations of Biological Systems; Biological and Biochemical Foundations of Living Systems; and Psychological, Social, and Biological Foundations of Behavior.
Analysis of questions will be done separately for the Science Sections and for the Critical Analysis and Reasoning Skills (CARS) Sections.
Outcomes will be analyzed both at the level of individual test items and their structure and at the level of the total score, delineated by the levels at which variance is most pronounced.
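As one possible illustration of the item-level analysis, the sketch below applies a logistic-regression screen for uniform differential item functioning (DIF), which flags items whose probability of a correct response differs between groups after matching on total score. The function name, inputs, and grouping variable are hypothetical placeholders; the final analysis plan may rely on other IRT-based methods.

    import numpy as np
    import statsmodels.api as sm

    def uniform_dif_screen(item_correct, total_score, group):
        # Logistic-regression DIF screen for a single item: regress item
        # correctness (0/1) on the matching criterion (total score) and a
        # binary group indicator. A statistically significant group
        # coefficient suggests the item functions differently for test
        # takers of comparable overall performance.
        predictors = sm.add_constant(np.column_stack([total_score, group]))
        fit = sm.Logit(item_correct, predictors).fit(disp=0)
        return fit.params[2], fit.pvalues[2]  # group effect and its p-value

Items flagged by such a screen would then be examined against the syntactic and structural features described above.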
Risk:
We acknowledge that this approach may produce flawed inferences. We hope that by integrating open-ended feedback from students, we can contextualize the outcome responses in a way that gives greater meaning to the measured variance and to potential disparities not adequately demonstrated in the data.
Data Sharing:
We will share the outcomes from the examinations alongside the transcribed conversation and letters. Students will have the option of sharing their information in a de-identified manner. Information will be available at the end of the cohort.
Initial Cohort:
- Initial cohort to begin in April.
- We will conduct the classes over a secure webinar platform.
- Exam questions will be presented to enrolled students through the secure webinar platform.