Codio Grading Types

Overview

Codio supports multiple ways to evaluate student work—from fully manual review to scripted auto-grading and lightweight self-checks. This article explains the grading types we use, when to choose each, and how to set them up so facilitators can see submission and grading status clearly while students receive appropriate feedback.

A curated set of relevant examples is available in the associated Canvas course, Examples of Codio Units. This course provides sample units that demonstrate common patterns, expectations, and best practices you can reference as you work through Codio-based activities.

Manual vs. Auto-graded vs. Auto-assessed (self-check)

This section outlines the three primary assessment approaches in Codio—manual, auto-graded, and auto-assessed (self-check)—and provides guidance on when to use each based on instructional intent, feedback needs, and grading requirements.

  • Manual: Should be used when human judgment is essential, such as evaluating nuanced, open-ended, or subjective work. In these cases, Codio rubrics are preferred to support consistency and clarity in evaluation.
  • Auto-graded: Best when scale, consistency, or time savings are a priority. These activities allow student work to be evaluated automatically using predefined criteria, making them well suited for objective or repeatable tasks.
  • Auto-assessed (self-check): Intended for practice and formative feedback rather than formal grading. These activities let students check their understanding without receiving a recorded score. If a grade is still required, capture it separately through another assessment mechanism.

Environment shortcuts

These shortcuts summarize which environment to use for common assessment and grading needs.

  • Base Codio IDE: Use for Advanced Code Tests (ACT) and script-based grading.
  • Jupyter: Use NBGrader for auto-grading and hidden solutions.
  • RStudio: Use a Guide page with a “Test Code” button that runs a comparison script.
  • SQL (OmniDB): Use an external OmniDB tab with a Codio comparison script.
  • Quizzes: Use Codio Quizzes or Canvas New Quizzes; Codio sandbox may be embedded if needed.

Manually Graded

Codio Rubrics (manually graded)

NOTE: Codio rubrics are available only in the Base Codio IDE.

Use Codio’s teacher interface to grade with a Codio rubric or numeric score. This keeps submission and grading status in one place, avoids cross-referencing in Canvas, and minimizes facilitator overhead. It is also the only approach that supports the teacher tools available exclusively in the Base Codio IDE (see below).

  • Flow: Open the student’s work in Codio → apply the rubric or score → LTI passback sends the grade to the Canvas gradebook.
  • Considerations: This is generally the cleanest and most efficient workflow for facilitators.
  • Limitations: In rare cases, LTI passback issues have occurred (for example, in some CIS-level courses). When this happens, capture timestamps and affected users and escalate to both Codio and Canvas support.

Use Canvas rubrics and SpeedGrader only in legacy or mixed sequences where a single Canvas rubric must cover both Codio and non-Codio items. Outside of those cases, this approach is not recommended: it forces facilitators to cross-reference Codio submissions against Canvas grading status, which adds extra steps and increases the risk of grading errors.

Access to Student Work & Assigning Grades

For Codio-based assignments, view and grade student work directly in the Codio teacher interface. This keeps submission and grading status aligned and reduces extra steps. Enter the student’s assignment from Codio, review the work, apply the rubric or score, and allow LTI to send the grade to Canvas. Avoid using Canvas SpeedGrader for Codio submissions, as switching tools can cause state mismatches and unnecessary clicks.

  • Best practice: Grade in Codio for Codio work.
  • Avoid SpeedGrader for Codio submissions to prevent state drift and extra clicks.

Ancillary: Codio Teacher Resources (Base IDE only)

Codio provides additional teacher tools in the Base IDE, including plagiarism detection and code playback. These features help facilitators review how a solution was produced and identify potential integrity issues. Note that they are not available in Jupyter- or RStudio-based units.

  • Plagiarism checker: Compares student submissions against peers and external sources to flag potential overlap.
  • Code playback: Provides a time-stamped replay of how a student’s code was written, not just the final result.

Auto Graded

Auto-grading vs. Auto-assessing

Before choosing tools, decide whether an activity should produce a score or simply give feedback. Auto-graded activities generate a score in Codio and pass it to Canvas; auto-assessed activities provide immediate feedback for practice but do not pass a grade. In the Codio IDE, this distinction comes down to configuration and wiring. You can attach a unit-level script to let students run self-checks (auto-assessing), and you can use an Advanced Code Test (ACT) to pass a grade when the student selects Mark as Complete. In Jupyter, keep auto-check cells separate from graded NBGrader cells so students always know what affects their score.

  • Auto-graded: Produces a score and passes a grade to Canvas
  • Auto-assessed: Produces feedback only (self-check); no grade pass

In Codio IDE: A unit-level script can let students self-check (auto-assess), while an ACT (Advanced Code Test) handles the grade pass when the student selects Mark as Complete.

In Jupyter: Keep auto-check cells (feedback only) separate from graded NBGrader cells, and use the same test code in both so the self-check accurately predicts the grade students will receive on submission.
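
Here is a minimal sketch of that shared-test pattern, assuming a hypothetical add(a, b) notebook exercise; the function names and assertions are illustrative, not from any specific course.

```python
# Sketch: define the checks once so the self-check cell and the graded
# NBGrader cell can never drift apart. add(a, b) is a hypothetical exercise.

def check_add(add_fn):
    """Assertions shared by the self-check cell and the graded test cell."""
    assert add_fn(2, 3) == 5, "add(2, 3) should return 5"
    assert add_fn(-1, 1) == 0, "add(-1, 1) should return 0"

def add(a, b):  # stands in for the student's implementation
    return a + b

# Self-check cell (feedback only, never graded):
check_add(add)
print("Self-check passed; the graded cell runs these exact checks.")

# Graded NBGrader test cell (autograded on submission):
check_add(add)
```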

Autograded — Quizzes

Quizzes are the simplest way to provide automated checks. Inside Codio, you can build multiple-choice items, use our customized fill-in-the-blank format that shows per-blank scores and highlights correct answers, and add free-text items that are evaluated by a grading script. These options work well for lightweight checks inside a Guide. If you prefer a Canvas-first experience, create a Canvas New Quiz and embed a Codio sandbox either in the quiz instructions or inside individual questions. Students can try code live in Codio and then answer in Canvas, which keeps the quiz workflow familiar while still giving access to a coding environment.

Codio Quizzes

  • Types used: Basic multiple-choice questions; a customized fill-in-the-blank format (used in eCornell Python courses) that provides per-blank scoring and highlights correct answers; and free-text questions evaluated by a grading script (see the sketch after this list). Some formats also support structured content such as tables.
  • Good for: Lightweight, low-overhead checks embedded directly inside Codio Guides, where students can quickly verify understanding without leaving the coding environment.

Canvas New Quizzes (with Codio sandbox)

  • Patterns: A Codio sandbox can be embedded either in the quiz instructions or within individual quiz questions, allowing students to experiment with code while completing the quiz. Choose this approach when you want a Canvas-native quiz experience but still need students to interact with live code in Codio before submitting their answers.
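
To show what a free-text grading script can look like, here is a minimal sketch of the comparison logic in Python. The accepted variants, normalization rules, and function names are all assumptions for illustration; the real script must read the student answer and report the score through Codio's grading-script interface.

```python
# Sketch: normalize a free-text answer and compare against accepted variants.
# ACCEPTED and grade_free_text() are illustrative names, not a Codio API.

ACCEPTED = {
    "select * from orders",   # canonical answer
    "select * from orders;",  # trailing-semicolon variant
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting alone can't fail a student."""
    return " ".join(text.lower().split())

def grade_free_text(student_answer: str) -> bool:
    return normalize(student_answer) in {normalize(a) for a in ACCEPTED}

print(grade_free_text("SELECT *  FROM Orders"))  # True
print(grade_free_text("DROP TABLE orders"))      # False
```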

Autograded — IDE Patterns

Base Codio IDE — Advanced Code Tests (ACT)

In the Base Codio IDE, Advanced Code Tests (ACT) are the primary mechanism. An ACT runs tests, renders readable results in the Guide (including custom HTML/CSS reporters), and passes the grade to Canvas when the student marks the unit complete. We use this across several course families, including Python Boot Camp; CIS550s (with a mix of “Walker” home-grown and refit ExerPy-style items); JavaScript and Front-End Web (Jasmine and Mocha/Selenium); and CTECH/CAC series. If you need rich feedback formatting and a grade, confirm the ACT is wired to do both; if not, split the build into a self-check ACT and a separate grade-passing ACT.

  • What: Primary mechanism to run tests, render results, and pass grades
  • Examples / Families: ACTs are used across several course families, with patterns varying by discipline and instructional need.
    • In Python Boot Camp, units often include two distinct ACTs: one focused on generating detailed, student-facing output (often styled with custom CSS), and a second ACT responsible solely for sending the final grade to Canvas.
    • In the CIS550s, multiple ACT styles are in use. Some units rely on "Walker White home-grown tests", while others use refactored ExerPy-style items that have been adapted to behave more like native Codio activities.
    • In JavaScript and Front-End Web courses, ACTs typically leverage standard testing frameworks such as Jasmine or Mocha with Selenium. In at least one CIS560s unit, a bash-based ACT is used to execute and validate tests.
    • This ACT pattern is also standard in the CTECH400s and CAC100s, where automated validation, clear feedback, and reliable grade passback are required.
  • Authoring notes
    • Use custom reporters for readable HTML/CSS feedback in the Guide
    • If you need both polished feedback and grade pass, verify ACT wiring supports both; otherwise split assess vs. grade (see above under Python Boot Camp)
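
As a concrete reference, here is a minimal sketch of the kind of test-runner script an ACT might execute. It assumes a simple convention in which the script runs the student's program, prints readable feedback, and signals pass/fail via its exit code; the file name, sample input, and expected output are hypothetical, so confirm the actual scoring contract against Codio's ACT documentation before reusing this shape.

```python
# Sketch: an ACT-style runner (assumed convention: feedback on stdout,
# nonzero exit code on failure). student.py is a hypothetical submission.
import subprocess
import sys

STUDENT_FILE = "student.py"
SAMPLE_INPUT = "3 4\n"
EXPECTED = "7"

def run_student(stdin_text: str) -> str:
    """Run the student's program with sample input and capture its stdout."""
    result = subprocess.run(
        [sys.executable, STUDENT_FILE],
        input=stdin_text,
        capture_output=True,
        text=True,
        timeout=10,  # keep runaway student code from hanging the grader
    )
    return result.stdout.strip()

if __name__ == "__main__":
    actual = run_student(SAMPLE_INPUT)
    if actual == EXPECTED:
        print("PASS: your program printed the correct sum.")
        sys.exit(0)
    print(f"FAIL: expected {EXPECTED!r} but got {actual!r}. Check your addition logic.")
    sys.exit(1)
```
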
Jupyter

In Jupyter, use NBGrader for auto-grading and to hide solution code. Design notebooks so student edits happen inside clearly marked BEGIN SOLUTION / END SOLUTION regions. Avoid mid-line edits or mixing student code inside instructor blocks, which can break solution tagging and grading.

  • NBGrader: Auto-grades Jupyter notebooks and keeps instructor solution code hidden from students.
  • Solution regions: Keep all student work within clearly marked BEGIN SOLUTION / END SOLUTION blocks, with no mid-line edits (see the sketch below).
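
To make the solution-region rule concrete, here is a minimal sketch of a correctly structured NBGrader solution cell, using a hypothetical mean() exercise. The markers must sit on their own lines with the entire student-editable body between them.

```python
# Correct: the whole student-edited body sits between the markers, each on
# its own line, with no instructor scaffold mixed inside the region.
def mean(values):
    ### BEGIN SOLUTION
    return sum(values) / len(values)
    ### END SOLUTION

# Incorrect (breaks solution tagging): a mid-line edit such as
#   result = ### BEGIN SOLUTION sum(values) / len(values) ### END SOLUTION
# because NBGrader expects the markers on separate lines.
```
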
RStudio

In RStudio, there is no NBGrader equivalent. The supported pattern is to place a Test Code button in the Guide. That button runs a comparison script that evaluates key variables, plots, or outputs against an instructor solution and returns results to the Guide. The grade pass still occurs through Codio’s assessment side, not through an in-IDE autograder.

  • No NBGrader equivalent: RStudio has no NBGrader-style tool for in-notebook auto-grading or solution hiding.
  • Pattern: Use a Guide page with a Test Code button that runs an ACT-backed comparison script to evaluate key variables, plots, or outputs against an instructor solution (sketched below).
  • Roadmap: Alternatives to NBGrader have been discussed historically, but there is currently no supported replacement in Codio.
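
The comparison logic itself is straightforward. Here is a minimal sketch, written in Python to stay consistent with the other examples in this article; an actual RStudio unit would implement the same idea in an R script. The file names and compared values are assumptions for illustration.

```python
# Sketch: compare a student's saved results against an instructor solution.
# File names are hypothetical (e.g., the student's R code writes results.csv
# before the Test Code button triggers this comparison).
import csv
import sys

STUDENT_RESULTS = "results.csv"
SOLUTION_RESULTS = "solution/results.csv"

def load_rows(path: str) -> list:
    with open(path, newline="") as f:
        return list(csv.reader(f))

if __name__ == "__main__":
    if load_rows(STUDENT_RESULTS) == load_rows(SOLUTION_RESULTS):
        print("PASS: your results match the expected output.")
        sys.exit(0)
    print("FAIL: your results differ from the expected output; "
          "re-check your filtering and summary steps.")
    sys.exit(1)
```
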
SQL (OmniDB)

For SQL activities, we use OmniDB in a separate browser tab. Students run their queries in OmniDB; Codio then re-runs those queries against a known dataset and compares the results to determine correctness before passing the grade. This comparison-script approach is used across several SQL-heavy labs.

  • Pattern: OmniDB opens in a separate tab; a Codio script re-runs the query against a known dataset and compares results (sketched below).
  • Used for: SQL-heavy labs across multiple courses
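
Here is a minimal sketch of that comparison step, assuming a SQLite copy of the known dataset and treating row order as insignificant. The database file, table, and queries are illustrative, and a production script would also need to retrieve the student's actual query text from the OmniDB session.

```python
# Sketch: re-run the student's query and the solution query against a known
# dataset and compare result sets order-insensitively. All names illustrative.
import sqlite3

DB_PATH = "known_dataset.db"
STUDENT_SQL = "SELECT name, total FROM orders"   # would come from the student
SOLUTION_SQL = "SELECT name, total FROM orders"  # instructor's reference query

def run_query(conn: sqlite3.Connection, sql: str) -> list:
    """Return the result set as a sorted list so row order doesn't matter."""
    return sorted(conn.execute(sql).fetchall())

if __name__ == "__main__":
    with sqlite3.connect(DB_PATH) as conn:
        match = run_query(conn, STUDENT_SQL) == run_query(conn, SOLUTION_SQL)
    print("PASS" if match else "FAIL: result set differs from the expected output.")
```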

Appendix

Combination Patterns

Some activities benefit from both automated checks and human judgment. In these cases, combine auto-grading for correctness with manual grading for qualities that automation cannot reliably assess (style, reasoning, documentation, naming, narrative explanation).

Example (CEEM620s): A Jupyter hybrid where NBGrader evaluates code cells automatically, and a Codio rubric is used to review written justification and code quality. NBGrader provides fast, consistent scoring on testable outputs, while the rubric captures the parts that require expert review.

Why this matters:

  • Scalability: Auto-grading handles repetitive correctness checks across many students without adding grading time.
  • Quality of feedback: Manual review focuses human effort where it’s most valuable—clarity, approach, and standards that are difficult to encode in tests.
  • Fairness and clarity: Separating “does it work?” from “is it written well and explained?” makes criteria explicit for students and easier to defend.
  • Operational reliability: If either side (auto or manual) has an issue, the other still provides usable signals; you’re not relying on a single mechanism.

Choosing & Designing (Checklist)

Use this checklist before you build or revise any Codio activity. It helps you match the grading pattern to your staffing, simplify facilitator workflows, and set clear expectations for students. Start by confirming who will grade and whether you need automation for scale, then plan the facilitator experience by keeping grading in Codio when the work lives in Codio. Give students self-checks where they help learning, and be explicit that self-checks don’t affect grades. Finally, account for environment constraints (Base IDE, Jupyter, RStudio, SQL), note any requirement to use a single Canvas rubric across mixed items, and include passback monitoring in your rollout so issues are caught early and documented.

  • People: Do we have graders? If staffing is thin, lean auto-graded.
  • Facilitator UX: Prefer Codio-side grading to keep the submission/grade view unified.
  • Student UX: Provide self-checks where helpful; clearly state that self-check ≠ grade.
  • Environment constraints: Base IDE → teacher tools (plagiarism, playback); Jupyter → NBGrader; RStudio → Guide + comparison script; SQL → OmniDB + comparison.
  • Rubrics: If leadership requires one Canvas rubric across mixed items, document the tradeoffs.
  • Passback reliability: Add monitoring to the rollout plan; collect diagnostics if issues appear.

Troubleshooting

Use this troubleshooting table to quickly diagnose common issues that arise during Codio builds and grading. Start by matching your symptom to the relevant row, then follow the listed steps in order—most problems resolve with basic linkage checks, a resync, or clarifying test expectations. If a passback issue persists after these steps, capture the requested details and open tickets with both vendors so resolution isn’t blocked.

  • Codio → Canvas passback fails: Confirm correct LTI link/assignment pairing and check for duplicates. Try a resync, then re-open/submit a test. Capture course, assignment, user, and timestamps. Open tickets with both vendors and include logs/screens.
  • “Technically correct but failed tests”: Align grader acceptance criteria with instructions (contract tests). Provide sample inputs/outputs and note acceptable variants (see the sketch after this list).
  • NBGrader oddities: Re-tag cells; ensure hidden tests aren’t exposed; avoid mid-line edits.
  • SpeedGrader confusion: If the submission lives in Codio, don’t grade in SpeedGrader.
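
For the “technically correct but failed tests” row, the usual fix is to make the grader tolerant of harmless variation. Here is a minimal sketch, assuming the instructions permit differences in case and surrounding whitespace; encode whichever variants your assignment actually allows.

```python
# Sketch: tolerant output comparison, so formatting variants the instructions
# never ruled out (case, stray whitespace) don't fail a correct answer.

def outputs_match(expected: str, actual: str) -> bool:
    """Compare line by line, ignoring case and leading/trailing whitespace."""
    def norm(s: str) -> list:
        return [line.strip().lower() for line in s.strip().splitlines()]
    return norm(expected) == norm(actual)

assert outputs_match("Total: 7", "  total: 7  ")   # accepted variant
assert not outputs_match("Total: 7", "Total: 8")   # genuinely wrong answer
```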

Open Items & Ownership

  • Update job aids for Codio manual grading (Owner: ITG)
  • Document/monitor passback issue pattern (Owner: ITG + Support)
  • Audit teacher-tool availability by environment, Base IDE vs. Jupyter/RStudio (Owner: ITG)
  • Examples library: link to exemplar units such as Python Boot Camp, CIS550s, Front-End Web, CEEM620s hybrid, and OmniDB labs (Owner: ITG)
