Assessment – Center for Teaching and Learning

Introduction to the VALUE Rubrics: An Authentic Approach to Assessment
Wed, 08 Apr 2026 | /ctl/introduction-to-the-value-rubrics-an-authentic-approach-to-assessment/

This free, short, self-paced course was co-created with the American Association of Colleges and Universities and includes short videos, activities, and discussion prompts. After completing it, you will be able to:

  • Explain what VALUE rubrics are and why they are important to the sector, institutions, and individual educators.
  • Describe how a rubric has been constructed so that you can engage in discussion and debate with faculty and students.
  • Adapt VALUE rubrics to your context, using good-practice principles derived from research, assessment data, and pedagogical innovation.

Check it out here.

Did a Robot Write this Report? Managing AI Cheating
Wed, 11 Mar 2026 | /ctl/did-a-robot-write-this-report-managing-ai-cheating/

Generative AI is a powerful tool that can be used to support teachers and students. Unfortunately, just as AI can be used to generate lesson plans, provide helpful feedback, and serve as a personalized tutor, it can also be used to write a paper, provide answers, and do students’ work.

But how can we manage this? Over the last several years, Educational Technologist Eric Curtis has been having this academic-integrity conversation with thousands of educators around the world. He has assembled their feedback, along with resources, into a one-hour webinar with supporting materials.

Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them
Wed, 04 Mar 2026 | /ctl/measurement-schmeasurement-questionable-measurement-practices-and-how-to-avoid-them/

This article argues that many problems in psychological and behavioral research stem not only from statistical practices but also from how researchers define and measure constructs. The authors introduce the concept of questionable measurement practices (QMPs)—research decisions about measurement that raise doubts about the validity of a study’s conclusions. When such decisions are hidden or poorly documented, it becomes difficult for readers or other researchers to evaluate threats to construct validity, internal validity, statistical validity, and external validity, which ultimately undermines the credibility and replicability of research findings.

A key argument of the paper is that research culture often treats measurement as secondary to statistical analysis, creating what the authors call a “measurement schmeasurement” attitude. This mindset allows substantial researcher flexibility in selecting or modifying measures without transparent reporting, producing results that appear rigorous but may rest on unstable foundations. The authors emphasize that even well-powered studies with sophisticated analyses cannot compensate for poor measurement. To address this issue, they advocate greater transparency about measurement decisions, such as clearly defining constructs, reporting how items were chosen or modified, documenting reliability and validity evidence, and making measurement materials openly available. Such practices would allow others to evaluate, replicate, and build upon research more effectively.

The article’s insights translate directly to assessment of student learning, where educators frequently rely on tests, rubrics, and surveys to infer what students know or can do. Just as in research, questionable measurement practices can occur if instructors use poorly aligned assessments or rely on instruments that do not validly capture the intended learning outcomes. The authors’ emphasis on construct clarity suggests that educators should first define precisely what a learning outcome represents and then ensure assessments genuinely measure those constructs rather than convenient proxies such as recall or participation. Increased transparency, such as sharing rubric design, validation processes, and examples of student work, could strengthen the credibility of learning assessments, improve comparability across courses or programs, and support more meaningful interpretations of evidence about student learning.
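To make one of these practices concrete: reporting reliability evidence often starts with an internal-consistency estimate such as Cronbach’s alpha. The short Python sketch below shows how alpha could be computed and reported for a multi-item measure; the function, the five-student-by-four-item score matrix, and the rating scale are illustrative assumptions, not data or code from the article.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        # scores: rows = respondents, columns = items on the same scale.
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # per-item variance
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical ratings: 5 students scored on 4 rubric items (1-4 scale).
    scores = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 3],
        [1, 2, 1, 2],
        [3, 3, 4, 4],
    ])
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")

Reporting a coefficient like this alongside the items themselves is exactly the kind of transparency that lets readers judge whether a measure behaves consistently enough to support a study’s conclusions.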

Read the full article here:

Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. 

Exploring the Impact of Required Justifications in Multiple-Choice Elaboration Questions on Student Experiences and Performance
Wed, 04 Feb 2026 | /ctl/exploring-the-impact-of-required-justifications-in-multiple-choice-elaboration-questions-on-student-experiences-and-performance/

This study investigated a hybrid assessment format called Multiple-Choice with Elaboration Questions (MCEQs), in which students not only select a multiple-choice answer but must also justify their choice in writing. The research was conducted across four sections of an upper-division psychology research methods course at a large public university. The researcher found that students performed better on the questions that required elaboration than on traditional multiple-choice questions, and students also rated them more positively, viewing them as better assessments of their learning. The study suggests that MCEQs can capture deeper student understanding than traditional multiple-choice items and may support stronger memory of and engagement with the material.

This research offers several actionable insights for faculty:

  1. MCEQs combine efficiency and depth: they retain the quick grading benefits of multiple-choice items while also requiring students to demonstrate their reasoning.
  2. Requiring justification can help students engage more critically with content, potentially improving conceptual understanding and reducing surface guessing.
  3. Written responses provide richer data for instructors to identify student misconceptions and tailor instruction or feedback accordingly.
  4. Students may view assessments as fairer and more authentic when they can explain their answers, which might support motivation and reduce test anxiety.
  5. Implementing MCEQs requires careful rubric design and possibly more grading time than standard multiple-choice. However, the educational payoff may justify the added effort.

Read the full article here:

Overono, A. (2025). Exploring the impact of required justifications in multiple-choice elaboration questions on student experiences and performance. Journal of the Scholarship of Teaching and Learning, 25(4). 

Feedback in your voice
Wed, 26 Nov 2025 | /ctl/feedback-in-your-voice/

Rubrics are handy tools for providing clear expectations and consistent feedback to learners, but students also welcome authentic feedback that sounds like it came from you. You can add your own “voice” through the commenting tool on the rubric in Brightspace or by adding multimedia feedback.

The screenshots below show where you can add specific feedback tied to a rubric criterion as well as text-based feedback for a whole assignment. You can also easily record audio or video feedback from within Brightspace.

[Screenshot: the Brightspace rubric scoring interface, with the Add Feedback option highlighted]
[Screenshot: the Brightspace feedback interface, with the text box and the record audio and record video options highlighted]

Choose your Assessment
Wed, 19 Nov 2025 | /ctl/choose-your-assessment/

The institution behind this database has a fantastic team supporting its CBE (competency-based education) program, including a psychometrician (an expert in the measurement of mental capacities and processes) who developed a taxonomy of assessment types. While the taxonomy is still in development, you can look up the verb used in your learning outcome and see helpful related information for that verb. For one example verb, the entry includes the following (a small sketch of how such an entry might be structured follows the list):

  1. the most likely related behavior (here, think critically-prospective)
  2. whether the behavior is internal or external (internal)
  3. whether the behavior can be completed independently or requires interaction (independent)
  4. the preferred context for the assessment (hypothetical/simulated)
  5. the preferred stimulus task (a scenario)
  6. other possible stimuli (a simulation, a real-world project, or a real-world demonstration)
  7. learner choice of stimulus (decided by the program)
  8. preferred deliverables (none, because this behavior must be paired with an external behavior)
  9. other possible deliverables
  10. learner choice of deliverables
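
Here is a minimal Python sketch of how one record in such a taxonomy might be represented; the field names, and the stand-in verb analyze, are assumptions for illustration rather than the database’s actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class VerbEntry:
        related_behavior: str      # most likely related behavior
        internal: bool             # internal (True) vs. external behavior
        independent: bool          # independent (True) vs. requires interaction
        preferred_context: str     # e.g., "hypothetical/simulated"
        preferred_stimulus: str    # e.g., "scenario"
        other_stimuli: list[str] = field(default_factory=list)
        stimulus_choice: str = "decided by program"
        preferred_deliverables: str = "none"

    # "analyze" is a stand-in verb; the attribute values mirror the example above.
    taxonomy = {
        "analyze": VerbEntry(
            related_behavior="think critically-prospective",
            internal=True,
            independent=True,
            preferred_context="hypothetical/simulated",
            preferred_stimulus="scenario",
            other_stimuli=["simulation", "real-world project",
                           "real-world demonstration"],
        ),
    }

    entry = taxonomy["analyze"]
    print(f"Preferred stimulus: {entry.preferred_stimulus}")

Structuring the taxonomy this way makes the alignment question mechanical: given the verb in your learning outcome, the entry tells you what kind of stimulus and deliverable an aligned assessment should use.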

Use this tool to help you think through the alignment of your assessments with your course and program learning outcomes. With gratitude to Dr. Courtney Castle for sharing.


Know What You’re Looking For
Wed, 12 Nov 2025 | /ctl/know-what-youre-looking-for/

How do you know if your instruction is effective? What evidence do you look for? Are you looking to see if your students are engaged? Are you looking at performance on assessments?

Determining instructional effectiveness requires a variety of data points, measured both formally and informally. Before your next class, choose your ruler: this could be the degree of eye contact, correct-ish answers when cold-calling on students, the quality of posts on the discussion board, fewer questions about what to do on the next assignment, the number of questions answered correctly on a quiz, etc. The possible data points are endless. But if you want to improve your effectiveness, you need to know what you’re looking for.

Need help coming up with ideas for data and measures? Reach out to CTL at umpi-ctl@maine.edu. Let us know if you are interested in piloting the iClicker platform in use at UMaine for synchronous and asynchronous courses to collect more data and improve learner engagement. The iClicker enterprise agreement includes access for about $3 per student.

Help with Grading? Yes, please!
Wed, 29 Oct 2025 | /ctl/help-with-grading-yes-please/

Admit it: sometimes there are other things you would rather be doing than grading. You can speed up the process with a little help from Gemini, which protects student data privacy under our UMS contract (you must access it through your UMS account).

Try the following:

  1. First, paste this into the Gemini chat box:
    “I would like your assistance with evaluating this student assignment submission. I will give you the grading rubric and the student submission. Please assess it according to the rubric and provide me with a list of ways the student work has met the proficiencies described in the rubric as well as which areas are not yet proficient. Include action-oriented suggestions for the student to improve their work to meet the standards. Use positive and encouraging language.”
  2. Then, copy & paste or upload your rubric.
  3. Next, copy & paste or upload the student submission file.
  4. Click the submit button or hit enter.
  5. Read through the output from Gemini.
  6. Open the student submission to see how accurate Gemini was with its assessment.
  7. Provide Gemini with feedback if it missed something important so that it can refine its assessment.
  8. Mark the rubric in Brightspace.
  9. Copy and paste any of the feedback you agree with, modifying as necessary to sound like you.
  10. Add your own encouragement and publish!

Bonus: If you require students to reference specific course materials in their responses, you can upload those, too, and ask Gemini to assess how well they used those resources and which information likely came from the web or can’t be verified. If you would like in-line feedback on student writing, just include in your prompt:
“Please print the student submission with in-line commentary in brackets.”
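
For instructors comfortable with a bit of scripting, the same workflow can be sketched in code. The example below uses Google’s google-generativeai Python SDK; the model name, file names, and API-key handling are assumptions for illustration, and a personal API key is not covered by the UMS privacy contract described above, so treat this as a pattern rather than a drop-in tool for real student work.

    import os
    import google.generativeai as genai

    # An API key is assumed to be set in the GOOGLE_API_KEY environment variable.
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

    GRADING_PROMPT = (
        "I would like your assistance with evaluating this student assignment "
        "submission. I will give you the grading rubric and the student "
        "submission. Please assess it according to the rubric and provide me "
        "with a list of ways the student work has met the proficiencies "
        "described in the rubric as well as which areas are not yet proficient. "
        "Include action-oriented suggestions for the student to improve their "
        "work to meet the standards. Use positive and encouraging language."
    )

    # Hypothetical file names; substitute your own rubric and submission.
    with open("rubric.txt", encoding="utf-8") as f:
        rubric = f.read()
    with open("submission.txt", encoding="utf-8") as f:
        submission = f.read()

    response = model.generate_content(
        [GRADING_PROMPT, "RUBRIC:\n" + rubric, "SUBMISSION:\n" + submission]
    )
    print(response.text)  # Always review and edit before returning feedback.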

In the spirit of transparency, it is a good idea to let students know that you used a secure platform to assist with your evaluation of their work so that you could provide more specific and thorough feedback.
