Assessing Digital Projects
Assessing digital projects can be one of the more challenging components of bringing digital literacy and digital creativity into the classroom. This challenge only expands if students are allowed to pursue different outputs (video, podcast, webtext, etc.) for the same assignment. As such, when thinking about assessment practices, I begin with a few simple orienting questions:
- What am I trying to assess? Product? Process? Learning?
- What types of artifacts/evidence are needed to assess this work?
- Does the project belong to a genre? If so, does it already have established conventions and expectations?
- Should I apply a general heuristic for all projects or generate output-specific criteria? Also, should I generate the heuristic on my own or in consultation with students?
There is no one-size-fits-all model for digital project assessment. Not only does each discipline carry its own variety of expectations and implications, but each project addresses its own unique audience and each medium invites its own unique expectations. What might be a good fit for Project A just doesn't work for Project B. As such, each assignment (for each student/group) will likely necessitate its own set of assessment practices.
But I've outlined four basic approaches below as a starting place. They vary in depth, in application, and in what they prioritize in assessment, but all have worked for me over the years.
- SEC Approach (Self-Evaluative Criteria) - Students generate the criteria by which they want to be evaluated and then submit a self-evaluation using those criteria.
- Genre Approach - Students (in conjunction with faculty) do a genre analysis and identify key features. Then they turn those features into assessment criteria.
- Kuhn+2 Model - Provides a holistic set of criteria (6 interrelated categories) for responding to digital projects, anchoring the assessment conceptually and rhetorically.
- Ungrading / Learning Record Method - Ungrading is a trend that has emerged over the past decade or so and that prioritizes instructor feedback and student reflection on learning. Students ultimately create and curate artifacts throughout a project or semester that can serve as evidence of their learning and development, and then they use that evidence to "make the case" for the grade they feel they deserve.
This "case" is often in relation to performance specifications (i.e., criteria) and the final deliverables often include something more akin to a portfolio: a collection of artifacts in conjunction with a reflective learning statemetn.
SEC Approach
Perhaps the lowest-hanging fruit on the assessment tree is the Self-Evaluative Criteria approach. This approach works well when dealing with a lot of uncertainties: trying an assignment in a class for the first time, completing projects whose output doesn't have well-established conventions, having students create things for which there are few (good) examples, etc.
This is one of my favorite assessment approaches because it offers a lot of flexibility and puts the responsibility for establishing assessment criteria on the students. In other words, the basic practice is to invite students to generate their own criteria:
- Invite students to determine (individually or collectively) the values upon which they want to be evaluated.
- Require students to provide a detailed explanation of those criteria and how they see them operating in practice.
- Require students to evaluate themselves using those criteria.
- Instructor uses the student criteria to offer an evaluation (using individual student rationales and self-evaluations as a guide to how the criteria apply).
Sample of Student-Generated Criteria
Does the video meet the assignment requirements?
- Meets requirements
- Fails requirements
Does the video employ rhetorical principles (as discussed in class) and offer a compelling experience?
- Exceeds Expectations
- Meets Expectations
- Somewhat Meets Expectations
- Does not offer a Compelling Experience
Does the video effectively use multiple media?
- Excellent in its integration of media
- Effectively uses multiple media
- Somewhat effectively uses multiple media
- Fails to meaningfully use multiple media
Does the project indicate a substantial effort?
- Student has clearly put in effort above and beyond expectations
- Student has put in a lot of effort
- Student has put in moderate effort
- Student has clearly not put in a serious effort.
Genre Approach
If the assignment output fits into an established genre (e.g., interview-based podcast), then this approach can work well not only for assessment but also for helping students understand practices related to specific disciplines. The basic approach is to study examples of the genre as a class and do in-class research to determine the key features of the genre. I even extend it into a relatively formal genre analysis:
- What are the key features of this genre? How do they relate to the author's purpose? How do they operate in relation to the target audience?
- What elements are common (or even required)? i.e., what makes this genre a genre? Style? Tone?
- What components are included and/or rhetorical strategies are used in "good" examples of the genre?
Using what we've learned through research and discussion, we collectively generate the evaluative criteria to be used in assessing projects.
Most Common Use --> Podcasts
When inviting students to do podcast projects, I have them listen to several podcasts, identify a podcast genre, and then identify what makes a good podcast in that genre good.
Kuhn+2 Model
The Kuhn+2 model offers rigor with great flexibility. It provides a holistic set of criteria for responding to digital projects and anchors that assessment conceptually and rhetorically. The model itself comes from Virginia Kuhn's (2008) "The Components of Scholarly Multimedia." In 2010, Kuhn, with DJ Johnson and David Lopez, extended the model in specific relation to the Institute for Multimedia Literacy programs at the University of Southern California (see "Speaking with Students: Profiles in Digital Pedagogy"). Cheryl Ball (2012) then refined it further in "Assessing Scholarly Multimedia."
The Kuhn+2 model offers a heuristic ecology focused on 6 key areas:
- Conceptual Core
- Research Component
- Form & Content
- Creative Realization
- Audience
- Timeliness
Conceptual Core
- What is the project's controlling idea? Is it apparent in the work?
- Is the project productively aligned with one or more multimedia genres? (If so, what are they? How do you know?)
- Does the project effectively engage with the primary issue of the subject area into which it is intervening?
Research Component
- Does the project display evidence of substantive research and thoughtful engagement with the subject matter?
- Does it use a variety of credible (and appropriate) sources and cite them appropriately?
- Does the project deploy more than one approach to the issue?
Form & Content
- Do the project's structural/formal elements serve the conceptual core?
- Do the project's design decisions appear deliberate and controlled? Are they defensible?
- Is the project's efficacy unencumbered by technical problems?
Creative Realization
- Does the project approach the subject in a creative or innovative manner?
- Does the project use media and design principles effectively?
- Does the project achieve significant goals that could not be realized on paper?
Audience
- Is the target audience for the project apparent in the work?
- Does the project work at the appropriate levels (of language, design, function, etc.) for its target audience?
- Has the project been created with an attentiveness to the experience it offers its targeted audience?
Timeliness
- Is the project timely in its engagement/focus?
- If not, does the project attempt to demonstrate why it is relevant to contemporary matters/concerns?
Kuhn+2 Modified in Rubric for First Year Writing - Scrolling Digital Essay (Adobe Express webpage)
ACTIVITY 2 - Applied Assessment
Using either the SEC Criteria or the Kuhn+2 Method, work in pairs (or in groups of 3) to assess one of the two projects below.
Sample Video Project - Rachel Yoakum (Indiana University)
Both the SEC approach and Kuhn+2 can work on this video project. I often default to these approaches for projects that are not fully predictable in output. I also tend to default to the SEC approach for first-attempt projects.
Sample Project - Bash Reno by Jaedyn Young, Wyatt Layland, and Benedict Nagy (UN Reno)
I most commonly employ Kuhn+2 for multi-faceted projects or for projects whose output (final form) I am unable to anticipate. For example, one might use the Kuhn+2 model to assess the Bash Reno Instagram Project from Dr. William Macauley's Advanced Non-Fiction class (UN Reno), published at JUMP+.
Ungrading / Learning Record Method (LRM)
Ungrading has gained in popularity over the past decade, but it, like its early-2000s predecessor the Learning Record Method, focuses on measuring growth and students' awareness of that growth rather than on a summative score. But, in higher ed, we still typically have to assign a grade, so ungrading approaches either combine with "specs" grading (specifications grading) to set easily identifiable achievement levels or involve some form of student self-evaluation / instructor final assessment (see LRM).
Specs Grading
Specs grading places the emphasis on mastering (at different levels) the essential goals of a course. Specs grading can be used as an individual rubric for a single assignment or can be constructed for an entire course. It works by the instructor designating the essential objectives to be achieved for a specific level of performance; students then choose which level they want to work toward:
- To be A-eligible, students must successfully complete the 4 major course assignments, attend and actively participate in 90% of classes, complete 15 (or more) discussion posts, and be able to demonstrate they've achieved all 6 Learning Outcomes for the Course (see Learning Outcomes in Canvas).
- To be B-eligible, students must successfully complete the 4 major course assignments, attend and actively participate in 85% of classes, complete 12 (or more) discussion posts, and be able to demonstrate they've achieved at least 5 of the Learning Outcomes for the Course (see Learning Outcomes in Canvas).
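Because each level is just a bundle of pass/fail specifications, instructors who track these thresholds in a gradebook script can check eligibility mechanically. Here is a minimal Python sketch of the two levels above; the record fields, the "below B" fallback, and the idea of automating the check at all are illustrative assumptions, not part of any specs-grading standard:

```python
# Minimal sketch of specs-grading logic: each grade level is a bundle of
# pass/fail specifications, and a student earns the highest level whose
# specs they fully meet. Thresholds mirror the two example bullets above.
from dataclasses import dataclass

@dataclass
class StudentRecord:          # hypothetical record kept by the instructor
    majors_completed: int     # of the 4 major course assignments
    attendance_rate: float    # 0.0-1.0, attendance + active participation
    discussion_posts: int
    outcomes_demonstrated: int  # of the 6 course Learning Outcomes

# (grade, min majors, min attendance, min posts, min outcomes)
SPECS = [
    ("A", 4, 0.90, 15, 6),
    ("B", 4, 0.85, 12, 5),
]

def grade_eligibility(s: StudentRecord) -> str:
    for grade, majors, attend, posts, outcomes in SPECS:
        if (s.majors_completed >= majors
                and s.attendance_rate >= attend
                and s.discussion_posts >= posts
                and s.outcomes_demonstrated >= outcomes):
            return grade
    return "below B"  # further levels (C, D) would extend SPECS

print(grade_eligibility(StudentRecord(4, 0.92, 16, 6)))  # -> A
print(grade_eligibility(StudentRecord(4, 0.86, 13, 5)))  # -> B
```

Note that every specification is all-or-nothing: a student who misses one spec for a level simply drops to the next level, which is what distinguishes specs grading from partial-credit point totals.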
Learning Record Method (LRM)
The LRM comes from the UK but was made prominent in the US in the early 2000s via the work of Margaret Syverson and Mary Barr. It is essentially an ungrading approach, but one that asks students to gather evidence of their learning all semester (i.e., to keep a record) and then, at the end, to engage critically in self-reflection as part of the project work.
To gather evidence, students create work logs, curate email exchanges, produce/archive self-reflections, and the like. At the end of the semester/project, they use that evidence (gathered in their learning record) to argue (i.e., make the case) for the grade they feel they deserve.
The assessed grade, then, is based partially on what was created, but it more overtly focuses on the student's ability to showcase their learning and/or to demonstrate the value of that learning. While what they have produced matters (in terms of quality and intent), what they've learned and can demonstrate is given precedence.
The LRM, like most ungrading approaches, works by combining a course portfolio with a student self-evaluation/reflective statement. It, like ungrading, shifts the focus toward what a student has learned (and understands about that learning) rather than some specific level of performance (e.g., "A-level writer").
For more on Ungrading, see the many works of Susan D. Blum as a place to start.
For a fuller exploration of LRM, visit FairTest.org.
NOTE: LRM used to be fully represented at learningrecord.org, but as of August 10th, that website was no longer active.
And now we make our own assessment criteria ... :)
Challenge 3 - 2 Stages (Create and Reflect) & 3 Paths:
- create your own / modify existing criteria
- adapt/modify one of the 4 approaches above
- partner with Generative AI to create criteria