Research Review – Center for Teaching and Learning

Generative and retrieval tasks: Does the sequence matter and do sequence effects depend on learning task delay? (April 15, 2026)

This 2026 study investigated whether the order in which students complete generative tasks (like generating their own examples of concepts) and retrieval tasks (like cued recall) affects learning outcomes. Using a 3×2 experimental design with 208 university students, the researchers compared three task sequences — generative-before-retrieval, retrieval-before-generative, and restudy-before-generative — under two timing conditions: completing tasks immediately after an initial study phase or completing them two days later. Students were tested one week after finishing the learning tasks on both retention and comprehension of four psychology concepts.

The study found that the order of generative and retrieval tasks generally made little difference to learning outcomes. The one notable exception was that retrieval task performance was significantly better for the generative-before-retrieval group when a two-day delay was involved, likely because those students used the open-book generative task as an opportunity to re-engage with the material first.

Key Takeaways

  • Order of activities matters less than you might think. You can feel free to sequence generative activities (discussions, example-generation, concept mapping) and retrieval activities (quizzes, recall prompts) in whatever order suits your course design.
  • Retrieval practice still outperforms restudy for retention. Replacing quizzes or recall activities with simply re-reading course material is ineffective. Students who only reread retained significantly less after one week. Low-stakes quizzes and retrieval activities remain worth keeping.
  • Task design quality matters more than sequence. The authors note that earlier research finding sequence effects likely reflected a poorly designed generative task (no feedback, no revision opportunity). When both task types are well-designed with feedback built in, sequence effects largely disappear. This is a reminder that how activities are designed is more important than their order.

Read the full article here:

Obergassel, N., Renkl, A., Endres, T., Nückles, M., Carpenter, S. K., & Roelle, J. (2026). Generative and retrieval tasks: Does the sequence matter and do sequence effects depend on learning task delay? Applied Cognitive Psychology, 40(2), e70188. 

Learning with concept maps: the effect of activity structure and the type of task (April 10, 2026)

This research examined how 226 undergraduate students learned using concept maps under different conditions, comparing task types (fill-in-the-blanks, shuffled concepts, self-constructed, and summaries) with activity structures (individual only, individual-then-collaborative, and collaborative-then-individual). The study measured learning outcomes through comprehension and recall tests while analyzing nearly 4,200 verbal exchanges during collaborative activities. Results revealed a significant interaction between task type and activity structure: students who individually self-constructed concept maps and then discussed them collaboratively (I+C) achieved the strongest learning outcomes, particularly for delayed recall.

The analysis of dialogue quality showed that self-constructed maps triggered deeper, more dialogic conversations characterized by reasoned justification, mutual engagement, and co-construction of knowledge. In contrast, simpler tasks (completing or ordering pre-made maps) produced more superficial exploratory talk with minimal argumentation. Summary-writing generated the least dialogic interaction overall, with patterns of one student dictating while the other transcribed. These findings suggest that task complexity paired with structured collaboration—allowing individual work before peer discussion—creates optimal conditions for both learning and high-quality dialogue.

Key Takeaways:

  • Prioritize self-construction over passive consumption: Have students create concept maps from scratch rather than completing or organizing pre-made maps. The cognitive demand of self-construction, when paired with collaboration, activates deeper reasoning and argumentation. However, you might consider providing more support for learners who are completely new to the content or might otherwise struggle with the task.
  • Use the I+C structure: Assign individual concept map creation as homework or in-class work first, then facilitate 15-30 minute pair discussions where students negotiate a shared map. This sequence generates more exploratory dialogue and better learning than jumping directly into group work or keeping work entirely individual.
  • Avoid summary writing as the primary task: While summaries are common, this study found they produced the weakest learning outcomes and the least meaningful peer dialogue. If summaries are required, pair them with concept mapping activities or ensure they include structured peer review to increase dialogic engagement.

Read the full article here:

Amante, C., Lucero, M., & Montanero, M. (2026). Learning with concept maps: The effect of activity structure and the type of task. Instructional Science, 54, Article 12.

Hyflex learning and student engagement in higher education: a systematic literature review (April 8, 2026)

This open-access systematic literature review, published in Frontiers in Education, synthesizes current research on HyFlex (Hybrid-Flexible) course models in higher education — a format in which students choose, session by session, whether to attend in person, join synchronously online, or engage asynchronously. The review draws on studies from across institutional contexts to examine how this radical flexibility affects student engagement, attendance, and learning outcomes. Rather than advocating for one modality over another, the authors investigate what conditions make flexible course designs succeed or fail, and the findings challenge some widely held assumptions about what students actually do when given a choice.

The most striking finding is that HyFlex flexibility does not, as many instructors fear, lead to declining attendance or disengagement. On the contrary, students who needed flexibility tended to use it as a tool to stay current with coursework rather than to disengage entirely, suggesting that choice itself can function as a retention mechanism. More significant, however, is what the research reveals about the true driver of engagement: belonging. Students who felt a strong sense of connection and support remained highly engaged regardless of which modality they chose, while students who felt disconnected showed lower engagement even with maximum freedom. This points to a finding with broad implications: modality is largely secondary to the relational and emotional climate of the course. Instructor presence, defined as timely communication, responsiveness, and visible enthusiasm, consistently emerged as a critical factor in sustaining that climate across all attendance modes.

Key Takeaways for Faculty

  • Belonging matters more than modality. Whether you teach in person, online, or in a blended format, students who feel seen and supported engage more deeply. Investing in the relational dimensions of your course, such as check-ins, responsive feedback, and community-building activities, may have a greater impact on student success than any structural or technological choice.
  • Flexibility can be a retention tool, not a risk. Giving students some agency over how or when they engage does not necessarily lead to avoidance. When students trust that the course structure supports them, flexibility tends to help them stay on track during difficult weeks rather than fall behind.
  • Instructor presence is the throughline across all formats. The research consistently identifies faculty visibility, warmth, and timely responsiveness as central to student engagement in every modality studied. How present and approachable you appear to students may be the single most transferable lesson from HyFlex research for any course format.

Read the full article here:

Mahmud, M. M., Teh, J. K. L., & Azizan, S. N. (2026). Hyflex learning and student engagement in higher education: A systematic literature review. Frontiers in Education, 11. 

Effects of teacher, peer and self-feedback on student improvement in online assessment: the role of individuals’ presumptions and feedback literacy (March 25, 2026)

This study examines how teacher, peer, and self-feedback influence student learning in an online assessment context. Using a quasi-experimental design with university students, the authors compared how students perceive different feedback types versus how those feedback types actually impact writing improvement.

Key findings show a clear disconnect between perception and effectiveness: students rated teacher feedback as most valuable, but peer feedback produced the greatest improvement in essay quality. Additionally, students’ ability to benefit from feedback depended significantly on their feedback literacy (their ability to understand and use feedback), while their initial preferences or assumptions about feedback types had no effect on learning outcomes.

Key Takeaways for Faculty

  1. Don’t let student preference dictate feedback design. Students consistently favor teacher feedback, but this study found peer feedback produced the only statistically significant improvement. Faculty should feel confident integrating peer assessment even when students express resistance to it.
  2. Teach feedback literacy explicitly. Since feedback literacy mediated learning gains, faculty should build in scaffolded activities that help students learn how to read, interpret, and act on feedback — not just receive it. This could include reflection prompts, revision protocols, or structured rubric training.
  3. Use structured tools to support peer feedback quality. This study used a detailed rubric with descriptors for each performance level, which helped peers provide more consistent and actionable feedback. Clear structures reduce student anxiety about peer assessment and improve its reliability.
  4. Combine feedback modes strategically. The authors recommend using teacher, peer, and self-feedback together in a dialogic, iterative way rather than treating them as competing options. Each mode offers distinct cognitive benefits — peer feedback in particular encourages deeper engagement with the content.
  5. Use exemplars to support self-assessment. Providing high-quality model essays as anchors for self-assessment helped students engage in meaningful self-reflection. This is a low-cost, scalable strategy that also builds evaluative judgment over time.
  6. Measure actual learning, not just student satisfaction. Faculty and program assessors should track measurable improvements in student work rather than relying on course evaluations or student satisfaction surveys to gauge feedback effectiveness.

Read the full article here:

Heil, J., & Ifenthaler, D. (2026). Effects of teacher, peer and self-feedback on student improvement in online assessment: The role of individuals’ presumptions and feedback literacy. Assessment & Evaluation in Higher Education, 51(2), 281–300. 

Want more tips on feedback? Check out the recording and resources for our Lunch & Learn 

Instructional Illusions: Ten Things in Education that Look Right but Aren’t (March 18, 2026)

This is not a research article, but a summary provided by Paul Kirschner, a widely cited cognitive psychologist, highlighting 10 common instructional illusions that impact student learning:

  1. The engagement illusion
  2. The expertise illusion
  3. The student-centred illusion
  4. The transfer illusion
  5. The easy-wins illusion
  6. The motivation illusion
  7. The discovery illusion
  8. The uniqueness illusion
  9. The performance illusion
  10. The innovation illusion

Each of these illusions contains an element of truth that makes it appealing and enduring, but none has evidence to support its effectiveness for student learning. The blog post explains each illusion, describes why it is harmful to learning, and provides guidance for improving instruction.

Read the full post here: Kirschner, P. (2026). Instructional Illusions: Ten Things in Education that Look Right but Aren’t. KirschnerEd.

Flipped Feedback: Engaging Students With the Feedback Process to Enhance Evaluative Judgement (March 11, 2026)

This study examines a “flipped feedback” model where students engage with feedback before final submission. Students submit a draft, review generic feedback on common errors, self-assess using a rubric, predict their grade, revise the assignment, and request targeted feedback.

Compared with previous cohorts using traditional feedback, students using flipped feedback showed significant improvements between draft and final submissions and higher overall marks. Most students also reported that the approach helped them better understand assessment criteria and apply feedback to improve their work.

Key Takeaways

  • Provide feedback earlier: Give guidance before final submission so students can revise their work.
  • Use drafts and revision: Iterative submissions support improvement and deeper learning.
  • Promote self-assessment: Having students evaluate their work against rubrics builds evaluative judgement.
  • Offer targeted feedback: Let students request specific feedback areas to increase relevance and efficiency.
  • Provide clear guidance: Strong rubrics and examples help students assess their work accurately.

Read the full article here:

Francis, N., Coates, K., Bodger, O., & Winstone, N. (2026). Flipped feedback: Engaging students with the feedback process to enhance evaluative judgement. Active Learning in Higher Education, 0(0).

For more on feedback strategies, join us for our Faculty Lunch & Learn

Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them (March 4, 2026)

This article argues that many problems in psychological and behavioral research stem not only from statistical practices but also from how researchers define and measure constructs. The authors introduce the concept of questionable measurement practices (QMPs)—research decisions about measurement that raise doubts about the validity of a study’s conclusions. When such decisions are hidden or poorly documented, it becomes difficult for readers or other researchers to evaluate threats to construct validity, internal validity, statistical validity, and external validity, which ultimately undermines the credibility and replicability of research findings.

A key argument of the paper is that research culture often treats measurement as secondary to statistical analysis, creating what the authors call a “measurement schmeasurement” attitude. This mindset allows substantial researcher flexibility in selecting or modifying measures without transparent reporting, producing results that appear rigorous but may rest on unstable foundations. The authors emphasize that even well-powered studies with sophisticated analyses cannot compensate for poor measurement. To address this issue, they advocate greater transparency about measurement decisions, such as clearly defining constructs, reporting how items were chosen or modified, documenting reliability and validity evidence, and making measurement materials openly available. Such practices would allow others to evaluate, replicate, and build upon research more effectively.

The article’s insights translate directly to assessment of student learning, where educators frequently rely on tests, rubrics, and surveys to infer what students know or can do. Just as in research, questionable measurement practices can occur if instructors use poorly aligned assessments or rely on instruments that do not validly capture the intended learning outcomes. The authors’ emphasis on construct clarity suggests that educators should first define precisely what a learning outcome represents and then ensure assessments genuinely measure those constructs rather than convenient proxies such as recall or participation. Increased transparency, such as sharing rubric design, validation processes, and examples of student work, could strengthen the credibility of learning assessments, improve comparability across courses or programs, and support more meaningful interpretations of evidence about student learning.

Read the full article here:

Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. 

Highly-Cited “AI Erodes Critical Thinking” Study Appears To Be AI Generated Slop (February 25, 2026)

This week, I want to highlight a Substack post critiquing a recent research article. The author, Nebu Pookins, critically examines a widely referenced paper claiming that increased AI use degrades critical thinking skills. Pookins argues that the study’s design and methodology are fundamentally flawed: the sample isn’t representative, the survey measures self-reported beliefs rather than actual critical thinking performance, and many items intended to measure different constructs are essentially redundant. Because of these flaws, he asserts that the paper does not provide reliable evidence that AI use causes a decline in critical thinking, meaning that its frequent citation in media and academic discussions may be misleading or premature. Moreover, he points to evidence that the paper itself may have been AI-generated.

Why is this such an important post to read? The key issue isn’t whether AI might affect cognition. That is a broader and ongoing research question supported by diverse studies on cognitive offloading and educational impacts. Instead, how we interpret and communicate evidence is what is critical here. The post highlights the importance of scrutinizing research methodology before adopting headlines about AI’s harms or benefits. In teaching and policy conversations, this means encouraging nuanced engagement with research on AI and critical thinking, distinguishing between correlation and causation, and integrating AI in ways that support, rather than inadvertently replace, deep learning and reasoning.

If you have found a quality research article on the impact of AI on learning, please share it with us by emailing it to umpi-ctl@maine.edu.

Read the full post here:

Pookins, N. (2026, February 15). Highly-Cited “AI Erodes Critical Thinking” study appears to be AI generated slop. Nebu’s Newsletter. Substack. 

Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies (February 18, 2026)

This is a systematic review and meta-analysis of experimental research on ChatGPT’s impact on student learning (69 studies from 2022–2024). The goal was to move beyond simple correlations and look at causal effects of ChatGPT use in education settings.

The review found positive associations between ChatGPT use and several student outcomes:

  • Improved academic performance compared with control conditions.
  • Boosts in affective-motivational states, meaning students felt more motivated or positive about learning tasks.
  • Increases in higher-order thinking propensities, suggesting students may engage more with critical thinking when ChatGPT is used thoughtfully.
  • Reduced mental effort reported by learners in some contexts.

Interestingly, the review did not find that ChatGPT use changed students’ self-efficacy (their confidence in their own learning ability). This suggests that while tools can help performance, they don’t automatically make students feel more capable. The article doesn’t just stop at findings — it critiques the quality of current research and offers propositions for future work.

The review suggests that ChatGPT used as part of regular classroom practice — not just as an add-on — shows the strongest effects. These findings support integrating ChatGPT in ways that promote higher-order thinking, e.g., through scaffolding questions, collaborative prompts, or guided inquiry.

Read the full article here:

Deng, R., Jiang, M., Yu, X., Lu, Y., & Liu, S. (2025). Does ChatGPT enhance student learning? A systematic review and meta-analysis of experimental studies. Computers & Education, 227, 105224. 

Learning in double time: The effect of lecture video speed on immediate and delayed comprehension (February 11, 2026)

Researchers examined how lecture video playback speed affects student learning by having undergraduates watch recorded lectures at normal speed (1x), faster speeds (1.5x, 2x, 2.5x), or by watching videos more than once at increased speed. Students completed comprehension tests immediately after viewing and again one week later. The study focused on whether faster playback harms understanding or long-term retention, a common concern among instructors using recorded lectures.

The key finding was that watching lecture videos at up to 2x speed did not significantly reduce comprehension, either immediately or after a delay, compared to watching at normal speed. Notably, students who watched a lecture twice at double speed often performed as well as—or better than—students who watched once at normal speed, particularly on delayed tests.

Interestingly, students’ intuitions about learning did not align with outcomes. While most students believed slower playback was better for learning, their test performance showed that faster viewing was equally effective. This suggests instructors may not need to discourage increased playback speed and could instead help students think strategically about when fast review is appropriate, such as when reviewing for an exam.

For teaching practice, the study suggests that recorded lectures can support efficient learning, freeing students’ time for deeper engagement activities such as practice problems or retrieval exercises. However, the authors caution that results may not fully generalize to highly complex or technical material, where slower pacing or pausing may still be necessary.

Read the full article here:

Murphy, D. H., Hoover, K. M., Agadzhanyan, K., Kuehn, J. C., & Castel, A. D. (2022). Learning in double time: The effect of lecture video speed on immediate and delayed comprehension. Applied Cognitive Psychology, 36(1), 69–82. 
