Generative and retrieval tasks: Does the sequence matter and do sequence effects depend on learning task delay?

This 2026 study investigated whether the order in which students complete generative tasks (like generating their own examples of concepts) and retrieval tasks (like cued recall) affects learning outcomes. Using a 3×2 experimental design with 208 university students, the researchers compared three task sequences — generative-before-retrieval, retrieval-before-generative, and restudy-before-generative — under two timing conditions: completing tasks immediately after an initial study phase or completing them two days later. Students were tested one week after finishing the learning tasks on both retention and comprehension of four psychology concepts.

Learning with concept maps: the effect of activity structure and the type of task

This research examined how 226 undergraduate students learned using concept maps under different conditions, comparing task types (fill-in-the-blanks, shuffled concepts, self-constructed, and summaries) with activity structures (individual only, individual-then-collaborative, and collaborative-then-individual). The study measured learning outcomes through comprehension and recall tests while analyzing nearly 4,200 verbal exchanges during collaborative activities. Results revealed a significant interaction between task type and activity structure: students who individually self-constructed concept maps and then discussed them collaboratively (I+C) achieved the strongest learning outcomes, particularly for delayed recall.

Hyflex learning and student engagement in higher education: a systematic literature review

This open-access systematic literature review, published in Frontiers in Education, synthesizes current research on HyFlex (Hybrid-Flexible) course models in higher education — a format in which students choose, session by session, whether to attend in person, join synchronously online, or engage asynchronously. The review draws on studies from across institutional contexts to examine how this radical flexibility affects student engagement, attendance, and learning outcomes. Rather than advocating for one modality over another, the authors investigate what conditions make flexible course designs succeed or fail, and the findings challenge some widely held assumptions about what students actually do when given a choice.

Effects of teacher, peer and self-feedback on student improvement in online assessment: the role of individuals’ presumptions and feedback literacy

This study examines how teacher, peer, and self-feedback influence student learning in an online assessment context. Using a quasi-experimental design with university students, the authors compared how students perceive different feedback types versus how those feedback types actually impact writing improvement.

Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them

This article argues that many problems in psychological and behavioral research stem not only from statistical practices but also from how researchers define and measure constructs. The authors introduce the concept of questionable measurement practices (QMPs)—research decisions about measurement that raise doubts about the validity of a study’s conclusions. When such decisions are hidden or poorly documented, it becomes difficult for readers or other researchers to evaluate threats to construct validity, internal validity, statistical validity, and external validity, which ultimately undermines the credibility and replicability of research findings.

Highly-Cited “AI Erodes Critical Thinking” Study Appears To Be AI Generated Slop

This week, I want to highlight a Substack post critiquing a recent research article. The author, Pookins, critically examines a widely referenced paper claiming that increased AI use degrades critical thinking skills. He argues that the study’s design and methodology are fundamentally flawed: the sample isn’t representative, the survey measures self-reported beliefs rather than actual critical thinking performance, and many items intended to measure different constructs are essentially redundant. Because of these flaws, he asserts that the paper does not provide reliable evidence that AI use causes a decline in critical thinking, meaning that its frequent citation in media and academic discussions may be misleading or premature. Moreover, he points to evidence that the paper itself may have been AI-generated.

Learning in double time: The effect of lecture video speed on immediate and delayed comprehension

Researchers examined how lecture video playback speed affects student learning by having undergraduates watch recorded lectures at normal speed (1x), faster speeds (1.5x, 2x, 2.5x), or by watching videos more than once at increased speed. Students completed comprehension tests immediately after viewing and again one week later. The study focused on whether faster playback harms understanding or long-term retention, a common concern among instructors using recorded lectures.