SIM GPT Simulation Manual

Ted Blanchard

This manual outlines a new, flexible method for creating AI-driven simulations, walking you through the process required to develop, integrate, test, and refine the OpenAI GPTs that power them.

By customizing how these simulations interpret and respond to learners, you’ll create dynamic interactions that feel more personalized, relevant, and effective in reinforcing key concepts. This tailored approach to development ensures that your simulation not only reflects and assesses the right materials, but engages students with targeted, iterative, and meaningful learning experiences.

What is a SIM GPT Simulation?

A SIM GPT Simulation is an interactive AI tool that guides students through scenario-driven decision pathways, delivering feedback tied directly to the course materials, topic, and objective(s) you identify as an ID. Whether you want to assess comprehension of video assets, non-video assets, or a mix of the two, these simulations provide a flexible means of assessing course topics/objectives at a customizable breadth/depth.

By following the steps in this manual, you will help design the SIM GPT powering that tool, customizing it to create interactions that:

  • Prompt student decision-making around the topic and objectives you define.
  • Draw from, refer to, and are guided by the course materials you select.
  • Offer coaching advice if students make a mistake or go off-topic.
  • Continue to prompt students until the simulation objectives you defined have been met.

Why a Manual?

Given the ID input required to configure a SIM GPT, we’ve compiled a set of guidelines and resources to help you manage that process. This manual guides you through each step to ensure that your simulation is both aligned with eCornell standards and delivers the intended experience and outcome for students:

  • Detailed, Stepwise Guidance: The first section of this manual walks you through the three key steps required to customize, configure, and test/deploy a SIM GPT in Canvas.
    • Step 1 covers extracting key conceptual details, customizing the template, and providing context for your SIM GPT.
    • Step 2 covers configuring, embedding, testing, and, if necessary, refining your SIM GPT.
    • Step 3 covers copy editing, quality assurance, faculty approval, and, ultimately, deployment of your SIM GPT.
  • Appendix: The second section of the manual includes variants of a customizable template for SIM GPT system instructions and additional resources that support implementation of the first section.

Feel free to use this manual as a reference when necessary to help clarify the abbreviated version of those steps present in associated “Add SIM to Catalog” Wrike tasks.


Step 1: Gather and Prepare Simulation Resources

This step walks you through gathering and organizing relevant details and content, leveraging those details to customize the appropriate SIM GPT Template for your target application, and draft both ID- and student-facing context for your simulation. By following the sub-steps outlined below, you ensure that:

  • Only content, context, and files that are directly relevant to your simulation have been gathered and organized into a Google Drive folder structure specific to your simulation.
  • Customization of the appropriate SIM GPT Template was limited to the boldface <...> placeholders and the surrounding details within asterisks, in keeping with the eCornell standard.
  • Simulation placement was justified and effectively contextualized from both an ID (design) and student (Canvas) perspective.
1.1: Extract Key Details for Your Simulation

Identify an Opportunity for Your Simulation

  • Navigate to the “Retrofit Batch Deployment” tab of the Simulation Deployment Tracker, locate the row for the simulation you were assigned by finding your name in the “SIM Author” column of that row, and set the “Status” column cell to “In Development”.
  • Navigate to the Wrike project embedded in the “Code” column cell of that row (the project this sub-task is present within) and assign yourself to all tasks/subtasks currently assigned to the generic “Instructional Designer”, leaving untouched any with a blank assignee field or assigned to Karen Shepherd.
  • ✅ Wrike Task: Set the status of Steps 1 and 1.1 as In-Progress.
  • Navigate to the SIM GPTs folder in Google Drive, which is a shared space for IDs to store, reference, use, and refine SIM GPT resources, and identify whether a folder for the course your simulation will be embedded within already exists in the SIM GPTs folder.
    • If it doesn’t, create a new sub-folder in the SIM GPTs folder for your course titled “CourseCode” (e.g., CREA502) before proceeding to the next bullet.
    • If it does, proceed to the next bullet.
  • Navigate to the “ID Review” tab of the Simulation Deployment Tracker, identify the row for your simulation, and review notes outlining the opportunity for a simulation identified via “ID Review” in the “Notes” column cell of that row.
    • Note: While you are not required to leverage the opportunity identified there, identifying and leveraging your own opportunity would require significant justification given the time/effort logged for the deeper pedagogical review conducted to provide that jumping-off point.
  • Navigate to the -M version of your course and identify an opportunity for a simulation in the context of the “ID Review” conducted for your simulation.
    • Review the course overview, objectives, project preview page, and module structure overall to identify gaps in the existing scaffolding/pedagogy that a simulation could help bridge.
    • Determine whether the opportunity identified via “ID Review” is aligned with any opportunities you identified and either leverage that opportunity or go with one you feel justified exploring.
  • Make a note of the Canvas page content and/or files of the intended host module relevant to your target application for the simulation.
    • While the relevance of content can vary depending on what your simulation is meant to assess, it would likely include one or more of the following:
      • Canvas page text regardless of page type (Read, Watch, Tool, Activity, Discussion, etc.)
      • Questions and answers for an Activity page.
      • Video transcripts for Watch pages.
      • Downloadable file for a tool.
    • The goal at this stage is to provide content that is directly inclusive/reflective of the concepts your simulation is meant to assess. This ensures that ChatGPT-4o has adequate context to extract core concepts, potential learning objectives, and scenarios that both assess and are reflective of your target application for the simulation.

Gather Relevant Content/Context for Your Simulation

  • Navigate to the Google Docs Template Gallery, click on the “CourseCode_Module_Customization” Google Doc template to create a copy in your Drive, re-title it to reflect the course and intended host module of the simulation (e.g., CREA502_M2_Customization), and move it to the “CourseCode” folder you just found or created.
  • Download any files (Tools, Documents, etc.) that you identified as being relevant to your simulation, convert them to .txt format if/where possible (open them in whichever application is relevant to the file type, then “Download As” or “Save As” .txt), and upload them to your “CourseCode” folder.
  • Then navigate back to your “CourseCode_Module_Customization” document and embed titles/links to both those uploaded files and any Canvas pages of the -M version of your course (or -DEV for courses in development) in the Gathered Content field of that document.
    • Note: While the process to construct a vector store for your SIM GPT is automated, those files are still relevant as input content/context for your Step 1.1 prompt.
  • Draft a statement in the Simulation Context field of the “CourseCode_Module_Customization” document that explicitly and comprehensively contextualizes the purview of Gathered Content and intended pedagogical nature of student interactions with respect to your target application for the simulation from three key perspectives:
    • Describe the high-level thematic framework you are shooting for, ensuring that the nature/breadth of that framework is appropriate for, reflective of, and aligned with the concepts you want the simulation to assess.
      • Outline the topic you want the simulation to assess, contextualizing gathered content by name (Canvas page title) using the specific terminology those assets comprise.
      • Ensure that your topic outline effectively encapsulates intended assessment goals, as it will help provide a thematic foundation for their presentation and assessment.
    • Describe a few general assessment goals for the simulation and the intended sequence (to an extent), breadth, and depth of concept assessment in that context.
      • Clarify the scope of assessment you envision, as it will help determine the breadth, depth, and accuracy of that assessment.
      • A narrow scope (less and/or more specific goals) generally lends itself to depth, while a broad scope (more and/or less specific goals) lends itself to breadth.
      • Either way, accuracy is dependent on the extent to which you’ve tied your thematic aesthetic and assessment goals to gathered content and the concepts/terminology they comprise.
    • Describe the scenario details/conditions best suited to the thematic framework and assessment goals you’ve outlined to reinforce thematic/assessment alignment.
      • By default, your simulation will infer and generate a scenario and associated interactions based on the topic and assessment goals you’ve established in the context of the content you’ve gathered.
      • Describing the types of scenarios and associated interactions you feel would be relevant, ensuring that you’ve layered in terminology that maps directly to the theme and assessment goals you’ve described, helps tie everything together.

Prompt the Extraction of Key Details for Your Simulation

  • Navigate to https://chat.ai.it.cornell.edu/ to access Cornell’s instance of ChatGPT, log in via SSO, and select openai.gpt-4o as the model being employed. Claude can be a great tool for everyday tasks, and may work to an extent for Step 1.1, but it doesn’t have the context length, processing power, or training to adequately address subsequent steps.
  • Incorporate Gathered Content and Simulation Context beneath a prompt that is limited to this step (1.1) exclusively. While you want to draft all Step 1 prompts in the same thread, the prompts need to be separate to encourage depth and creativity in ChatGPT-4o’s response:
    • Prompt: Please analyze the following content and file(s) uploaded to this prompt and identify a topic, key concepts, potential learning objectives, and scenarios relevant for an interactive simulation that both assesses and is reflective of both. Summarize these elements succinctly:
      [Paste Gathered Content + Simulation Context here]
      *Remember to upload relevant files downloaded from Canvas if applicable*
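If you prefer to assemble this prompt programmatically rather than by hand, the sub-steps above can be sketched in Python. This is a minimal sketch under assumptions: the prompt text is abridged, the local folder of .txt files stands in for the files you downloaded from Canvas, and the `assemble_prompt` helper is hypothetical, not an eCornell tool:

```python
from pathlib import Path

# Abridged Step 1.1 prompt text; paste the full prompt from this manual.
PROMPT = "Please analyze the following content and file(s) uploaded to this prompt...\n\n"

def assemble_prompt(content_dir: str, simulation_context: str) -> str:
    """Concatenate the Step 1.1 prompt, gathered .txt content, and context."""
    sections = [PROMPT]
    for txt in sorted(Path(content_dir).glob("*.txt")):
        # Label each file by name so ChatGPT-4o can refer to it by title.
        sections.append(f"--- {txt.stem} ---\n{txt.read_text()}\n")
    sections.append(f"--- Simulation Context ---\n{simulation_context}")
    return "\n".join(sections)
```

Either way, remember to upload the original files to ChatGPT-4o where applicable; pasted text supplements, rather than replaces, file uploads.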

Review and Potentially Iterate on ChatGPT-4o’s Response

  • The quality of ChatGPT-4o’s response is wholly dependent on the quantity and quality of the content, context, and any files that you provided. Ideally, ChatGPT-4o’s response would list the following:
    • Topic (e.g., “Negotiation Skills”)
    • Objectives (e.g., “Practice reflective listening and handle discount requests”)
    • Scenarios (e.g., “Salesperson dealing with a demanding client”)
  • If the response went in a pedagogically and/or topically irrelevant direction, it’s likely a sign that you provided too much or too little content/context. If the response was close, try responding with follow-up prompts to nudge ChatGPT-4o in a more desirable direction.
    • Note: Best practice at this point would likely be to start fresh in a new thread, providing better context and limiting materials to those directly relevant to the target application.
  • Once you’re happy with the response, copy/paste your prompt and ChatGPT-4o’s response separately without formatting (command + shift + v) in the Step 1.1 “Prompt:” and “Response:” fields of the “CourseCode_Module_Customization” document and reformat as desired for interpretability.
  • ✅ Wrike Task: Set the status of Step 1.1 as Done and Step 1.2 as To Do.
1.2: Customize the SIM GPT Template

Prompt Customization of the SIM GPT Template

  • ✅ Wrike Task: Set the status of Step 1.2 as In-Progress.
  • Draft a prompt limited to this step in the same ChatGPT-4o thread as Step 1.1 to populate and contextualize boldface <...> placeholders and the context around them within asterisks of the appropriate Appendix A template.
  • Input the following instructions as your query for the Step 1.2 prompt, including the appropriate SIM GPT Template beneath that query:
    • Prompt: Based on your analysis of the content I gathered for an interactive simulation, populate and contextualize the <...> placeholders of the following template with the topic and primary objectives you identified as relevant to that simulation. Then populate/contextualize any other details identified as relevant to that simulation around those placeholders within the asterisks of the template. Finally, format all content within the asterisks in boldface; keep the rest of the template content, structure, and format unchanged, and never modify anything outside of those asterisks.
      [Paste the appropriate Appendix A SIM GPT Template here]
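Before pasting ChatGPT-4o’s response into your “Customization” document, it can help to spot-check that the customization stayed within bounds. The sketch below is a hypothetical helper based on the <...> placeholder and *...* asterisk conventions described above; it is not part of the official workflow:

```python
import re

def check_customization(template_text: str) -> list[str]:
    """Return a list of problems found in a customized SIM GPT template."""
    problems = []
    # Any surviving <...> placeholder means ChatGPT-4o left it unpopulated.
    for leftover in re.findall(r"<[^<>\n]+>", template_text):
        problems.append(f"Unpopulated placeholder: {leftover}")
    # Asterisk-delimited spans should come in pairs; an odd count suggests
    # customization bled outside the intended *...* boundaries.
    if template_text.count("*") % 2 != 0:
        problems.append("Unbalanced asterisks: customization may have "
                        "escaped its *...* boundaries.")
    return problems
```

An empty list is a good sign, but it never replaces the line-by-line review described in the next sub-step.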

Review and Potentially Iterate on ChatGPT-4o’s Response

  • Once you’ve gotten a response from ChatGPT-4o, it is critical that you evaluate the extent to which it adhered to prompt guidelines. Ideally, boldface <...> placeholders of the SIM GPT Template and the content immediately around them (within asterisks) were populated with boldface, contextualized versions of key details extracted by ChatGPT-4o in Step 1.1.
  • In evaluating both the initial response and any subsequent iteration, verify that those details were effectively populated and contextualized within the asterisks of that template alone. If ChatGPT-4o’s response went off the rails, respond with follow-up prompts to nudge it in the topical/pedagogical direction you’d prefer the template align with.
    • Note: Formatting inconsistent with the SIM GPT Template and/or content adjusted outside of asterisks, as harmless as it may seem, will impact the behavior of your SIM GPT. If either are true, identify where it went off the rails for ChatGPT-4o and ask that it honor the stipulations of the Step 1.2 prompt.
  • Once you’re happy with the response, copy/paste your prompt and ChatGPT-4o’s response separately without formatting (command + shift + v) in the Step 1.2 “Prompt:” and “Response:” fields of the “CourseCode_Module_Customization” document, verifying that you’ve preserved the boldface formatting of “within asterisks” customization consistent with the appropriate Appendix A template.
  • ✅ Wrike Task: Set the status of Step 1.2 as Done and Step 1.3 as To Do.
1.3: Draft ID/Student-facing Context for Your SIM GPT

Prompt the Drafting of Design/Canvas Context for Your SIM GPT

  • ✅ Wrike Task: Set the status of Step 1.3 as In-Progress.
  • In the same ChatGPT-4o thread as Steps 1.1 and 1.2, draft two prompts limited to this step that leverage the output of those interactions: one for ID-facing context explaining the design/pedagogical intent of your simulation, and one for student-facing page introduction content that sets the stage for simulation engagement while implicitly satisfying those intentions.
  • Input the following instructions and sample content as your query for the Design Context prompt, including a few examples of design context from previously deployed SIMs in the SIM Development Sandbox Canvas shell:
    • Prompt 1 (Design-Context): Given everything we’ve discussed, I need your help drafting a pair of ID-facing paragraphs that explains why this simulation exists, what it is meant to assess, and its relationship with preceding/succeeding assets in the module. Model your approach on the following examples:
      [Paste sample Design Context for two or more SIMs here]
  • Input the following instructions and sample content as your query for the Canvas Content prompt, including a few examples of Canvas content from the same previously deployed SIMs as the previous prompt in the SIM Development Sandbox Canvas shell:
    • Prompt 2 (Canvas-Context): Given everything we’ve discussed, I need your help drafting a pair of student-facing paragraphs meant to provide students with a concise, engaging preview of what they’re about to do and why it matters.
      [Paste sample Canvas Content for the same two or more SIMs used in Prompt 1 here]

Review and Potentially Iterate on ChatGPT-4o’s Responses

  • Once you’ve gotten responses to Prompts 1 and 2 of Step 1.3 from ChatGPT-4o, it is critical that you evaluate the extent to which they are accurate/relevant. Note, however, that accuracy/relevance in this context is largely dependent on your approach to Step 1.1:
    • Gathered Content: Did you explicitly name and provide gathered content within your prompt for Step 1.1?
    • Simulation Context: Did you explicitly describe how named/provided content relates to and/or is assessed by the simulation within your prompt for Step 1.1?
  • If you weren’t satisfied with the accuracy/relevance of ChatGPT-4o responses to Design and/or Canvas Content prompts, iterate on those interactions:
    • Provide and/or clarify gathered content and simulation context.
    • Provide additional examples and explain what did/didn’t go well.
  • Once you’re happy with the responses, copy/paste your prompts and ChatGPT-4o’s responses without formatting (command + shift + v) in the appropriate Step 1.3 “Prompt:” and “Response:” fields of the “CourseCode_Module_Customization” document and reformat as desired for interpretability.
    • Note: If you are unable to get accurate/relevant responses for Design/Canvas context, it is likely a sign that your initial approach to Step 1.1 was insufficient. If that is the case, starting over at this point will likely save you a significant amount of time in the long run.
  • Navigate to your “CourseCode” folder, create a new folder titled “CourseCode_Module_Simulation Name” to reflect the name of your simulation as determined by ChatGPT-4o and the course/module it will be embedded within (e.g., CREA502_M2_Building Connections Between Entrepreneurs), then move the “CourseCode_Module_Customization” document into that new sub-folder along with any other files uploaded to or used in your Step 1.1 prompt.
  • ✅ Wrike Task: Set the status of Steps 1 and 1.3 as Done and Steps 2 and 2.1 as To Do.
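The Drive housekeeping across Steps 1.1–1.3 follows a single naming convention. As a sanity check, that structure can be sketched in Python (the `create_sim_folder` helper and local paths are hypothetical; the real folders live in Google Drive and are created by hand):

```python
from pathlib import Path

def create_sim_folder(root: str, course: str, module: str, sim_name: str) -> Path:
    """Create the CourseCode/CourseCode_Module_Simulation Name folder pair
    described in Steps 1.1 and 1.3, returning the inner simulation folder."""
    course_dir = Path(root) / course                      # e.g., CREA502
    sim_dir = course_dir / f"{course}_{module}_{sim_name}"
    sim_dir.mkdir(parents=True, exist_ok=True)
    return sim_dir

# The resulting structure, using the example names from this manual:
# CREA502/
#     CREA502_M2_Building Connections Between Entrepreneurs/
#         CREA502_M2_Customization
```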

Step 2: Prep, Configure, and Vet Your Simulation

This step walks you through the process of configuring and embedding the SIM GPT for your simulation in Canvas, testing/refining that simulation yourself, and then sending it to a third party for end user acceptance testing. By following the sub-steps outlined below, you ensure that your simulation:

  • Has been embedded with the Design/Canvas Context from Step 1 in an archival Canvas shell.
  • Has been embedded in a student-facing target page and is ready for testing.
  • Has been tested, refined if necessary, and is ready for QA.
2.1: Prepare Canvas for the Simulation

Organize the Resources Required to Configure Your SIM GPT

  • ✅ Wrike Task: Set the status of Step 2.1 as In-Progress.
  • Navigate to the Google Docs Template Gallery, click on the “CourseCode_Module_Configuration” Google Doc template to create a copy in your Drive, re-title it to reflect the target course/module the SIM GPT will be embedded within (e.g., CREA502_M2_Configuration), and move it to your “Simulation Name” folder.
  • Populate the relevant fields of the “Configuration” document using your “CourseCode_Module_Customization” document:
    • Simulation Name: Re-title the “CourseCode_Module_Simulation Name” text to reflect the target course/module and title of your simulation as determined by ChatGPT-4o in Step 1.2 (e.g., CREA502_M2_Building Connections Between Entrepreneurs).
      1. Note: Do not include the word “Simulation” in the name, as the simulation will present itself as such on its own given System Instructions inclusion.
    • Simulation Instructions:
      Paste the full SIM GPT Template customized via Step 1.2 within the [Instructions Start] and [Instructions End] placeholders, ensuring that the formatting of your template matches the appropriate Appendix A SIM GPT Template. More specifically, make sure that you’ve preserved boldface formatting of your populated/contextualized <Topic> and <Primary Objective(s)> placeholders and any content integrated around them (within asterisks), which should have been captured in your “Customization” document per Step 1.2 instructions.
      1. Note: The formatting of the “Simulation Instructions” field of your “Configuration” document must be identical to that of the Appendix A template leveraged in Step 1.2 to help clarify your approach to simulation configuration in Step 2.2.

Create and Populate the eCornell “Archival” Canvas Page for Your Simulation

  • Navigate to the “SIM Dev Templates” module of the SIM Development Sandbox, duplicate the page template that pertains to the appropriate Appendix A template type (limited to D for now) for the simulation you configured, and move it into the module reflective of the appropriate “Batch” per the “Retrofit Batch Deployment” tab of the Simulation Deployment Tracker:
    • D - CourseCode_Module_SimulationName
  • Navigate to the archival page you just duplicated, click “Edit” in the upper right hand corner, and address each of the fields on that page given your work thus far:
    • Page Title + Header: Change the “CourseCode_Module_SimulationName” text of both fields to reflect the “Simulation Name” field of your “CourseCode_Module_Configuration” document, ensuring that you preserve the prefix reflective of the Appendix A template leveraged in both (do NOT include the word “Simulation”).
    • Source Inputs: Paste the titles of the Canvas pages/files added to the Gathered Content field of your “Customization” document, then embed links to those files from your “Simulation Name” folder.
    • Target Course/Page:
      1. Update the “CourseCode” portion of the “CourseCodeDEV2025-SIM” text field to reflect the target course code.
      2. Update the “Simulation Name” text field for consistency with the “Simulation Name” field of your configuration document (do not include CourseCode or Module, just the name).
    • Context/Content: Paste the “Design Context” and “Canvas Content” from your “Customization” document that you generated as a consequence of Step 1.3 interactions with ChatGPT-4o and any subsequent iteration for accuracy, specificity, and relevance.

Create and Populate the “Target” Canvas Page for Your Simulation

  • Open a new tab in Chrome, navigate to the -M version of the target course for your simulation, make a “Target Course” copy of the -M Canvas shell (never adjust the -M version), and then update the course/module of the “target course” copy as necessary given simulation scope:
    1. Make a “target course” copy of the -M Canvas shell:
      1. Name: [CourseCode]DEV2025-SIM: Course Name
        1. Course Code: [CourseCode]DEV2025-SIM
        2. Subaccount: eCornell
        3. Term: Default Term
      2. Adjust the module intro, wrap-up, and adjacent pages of the host module of that -M copy as necessary/relevant to account for SIM inclusion. If your SIM is meant to replace an existing activity, for example, make sure that activity is actually removed once you’ve created its replacement (the simulation).
      3. Adjust the home page to account for the simulation as an Activity in the host module.
  • Navigate to the host module for the simulation, click “+” in the upper right hand corner of that module, select “Page”, select “Create Page”, add “Activity: Simulation Name” as the Page Name, and move that new Activity page to the appropriate location within the host module (do NOT include the word “Simulation” in the title).
  • Open that new Activity page, click “Edit” in the upper right hand corner of the page, switch to the HTML Editor (</> symbol), and paste the following HTML framework:
    <h2 class="activity">Simulation Name</h2>
    <p>TBD</p>
    <p>TBD</p>
    <div style="background-color: #f5f5f5; border: 1px solid #cecece; border-radius: 10px; padding: 0px 10px;">
    <p><span>This simulation uses AI and is experimental.</span></p>
    <p style="margin-top: -5px;">If you encounter issues, please click the <strong>Report a concern</strong> button below to share feedback.</p>
    </div>
  • Then switch back to the Rich Content Editor (the same </> toggle) and use your duplicated archival page in the SIM Development Sandbox to populate the remaining fields of the target page as follows:
    • Header: Update for consistency with the “Simulation Name” page title, including the name of your simulation as drafted in the “Configuration” document (do NOT include the word “Simulation” in the header).
    • TBD(s): Replace TBD text with the “Canvas Content” as drafted in the “Customization” document that you generated as a consequence of Step 1.3 interactions with ChatGPT-4o and any subsequent iteration for accuracy, specificity, and relevance.
  • Save and publish the target page for your simulation, navigate back to your duplicated archival page in the SIM Development Sandbox, and copy/embed links to the “Modules” page of the target course and the “Target” page for the simulation itself in the following text fields of the archival page:
    • CourseCodeDEV2025-SIM: Embed a link to the “Modules” page of the target course, which will provide quicker access to simulation context for future reference.
    • Simulation Name: Embed a link to the “Target” page for the simulation in that course, which will provide direct access to the simulation as students themselves will experience it for both reference and testing purposes.
  • Save and publish your duplicated archival page in the SIM Development Sandbox, then compare both your “Target” and “Archival” pages to those created by other IDs for previous batches of deployed simulations to ensure that your approach to both pages is aligned with eCornell standards (content, context, formatting).
    • Note: Your simulation is not expected to load yet, as that will be achieved in the following step; you are simply comparing existing page copy to that of examples authored by other IDs for previous batch releases.
  • ✅ Wrike Task: Set the status of Step 2.1 as Done and 2.2 as To Do.
2.2: Configure and Test/Refine Your SIM GPT

Configure and Embed your Simulation via Firefox Automation Utility

  • ✅ Wrike Task: Set the status of Step 2.2 as In-Progress.
  • Open Firefox (NOT Google Chrome), navigate to the “Target” page for your simulation, click on the red utilities button in the upper left-hand corner, and click on “New SIM Utility” to open the SIM automation tool. For this to work, you must NOT be in “Edit” view; the tool can only recognize and/or create, embed, or update a SIM from a non-Edit view on page load.
    • Note: You will not see the utilities button nor will it work correctly even if you can see it unless you have enabled the eCornell DnD Canvas Utilities extension and are running an updated version of Firefox.
  • Use your “Configuration” document to populate and/or update the top three fields of the resulting terminal depending on whether there is or isn’t already a simulation present on your “Target” page.
    • “Simulation Name”: Verify that content populated here matches that of the “Simulation Name” field of your “Configuration” document. If it doesn’t, copy/paste that field content from your “Configuration” document, making sure that you do not include the word “Simulation”.
      • Note: Remove the “DEV2025” language from this field in the terminal, as input here should match the “Simulation Name” field of your “Configuration” document.
    • “Simulation Instructions”: Populate the three input fields of the utility, reflective of the three levers for SIM customization/refinement, with all of the corresponding content within asterisks of those same three fields in the “Simulation Instructions” field of your “Configuration” document.
    • “Audio SIM”: Audio will be toggled on by default, but if your SIM isn’t meant to or can’t involve audio as a consequence of faculty preference or customization, that option must be toggled off.
      • Note: If you are in a situation where audio must be toggled off, please notify ID-Leads of that decision, as it may have an impact on simulation functionality.
  • Compare the “Text Preview” field below the input fields of the utility with the “Simulation Instructions” field of your “Configuration” document and verify that content has been populated as intended. If not, iterate on field population until it is.
  • Review and revise the course content pre-selected for inclusion in the vector store in the “Course Materials’ Reference Content” section of the SIM utility terminal, de-selecting all modules that are irrelevant to the simulation. By default, all content up to and including the host module for your simulation will be selected; unless otherwise required, only content up to the “Target Page” within the host module should be selected (make sure that you’ve deselected the “Target Page” itself as well, given that it isn’t meant to be used as a reference).
    • Note: If you are in a situation where you need content from other modules included as well, please notify ID Leads, as it will impact where your simulation can be embedded (e.g., ODs). Additionally, if you find that course content isn’t available for inclusion in the vector store, make sure that you are on the “Target” page which has the Modules reference content available for selection (the “Archival” page lacks that content).
  • Once you’ve addressed/updated each field as described above, click on the “Create & Embed” button at the bottom of the SIM utility terminal to create, configure, and embed the HTML framework for your simulation on the “Target” page. Then, once embedding has been confirmed by the utility, refresh the “Target” page in Firefox to verify that it was embedded.
  • Then switch back to Google Chrome, open the “Target” and “Archival” pages, click “Edit” in the upper right hand corner of both pages, switch to the HTML Editor (</> symbol), copy the following HTML framework that was embedded by the SIM utility at the bottom of the “Target” page, and paste that same HTML framework at the bottom of the “Archival” page.
    <div id="AISimulation" data-modalities="text,audio" data-assistant-id="[Assistant ID]"></div>
  • Save the “Target” and “Archival” pages, then navigate to the Simulation Deployment Tracker and populate the following column cells of the row for your simulation as indicated:
    • Type: Select “Dialogue-Driven,” which is the only simulation type currently available for development.
    • Drive: Add the target “CourseCode_Module” text (e.g., CREA502_M2) and embed a link to your “CourseCode_Module_Simulation Name” folder.
    • Archival: Embed a link to your duplicated “Archival” page in the SIM Development Sandbox.
    • Target: Embed a link to the “Target” page for your simulation.
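For reference, the embed snippet the utility produces follows a fixed shape. A minimal sketch of that shape (the `embed_div` helper is hypothetical, and a real assistant ID is generated by the SIM utility, never hand-written):

```python
def embed_div(assistant_id: str, audio: bool = True) -> str:
    """Build the AISimulation embed div described in Step 2.2."""
    # The data-modalities attribute mirrors the "Audio SIM" toggle above.
    modalities = "text,audio" if audio else "text"
    return (f'<div id="AISimulation" data-modalities="{modalities}" '
            f'data-assistant-id="{assistant_id}"></div>')
```

Knowing this shape makes it easier to confirm that the snippet you copy to the “Archival” page is intact and that the audio toggle took effect.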

Conduct an Initial Round of Testing for Expected/Unexpected Behavior

  • Navigate to the Google Docs Template Gallery, click on the “CourseCode_Module_Validation” Google Doc template to create a copy in your Drive, re-title it to reflect the target course/module the SIM GPT was embedded within (e.g., CREA502_M2_Validation), and move it to your “CourseCode_Module_Simulation Name” folder.
  • Open and populate the “Simulation Resources” section of that document with links to the Google Drive repository for your simulation, documents that contributed to the development/configuration of your SIM GPT, and Canvas pages it is embedded within.
    • In the “Drive Resources” section of the “Validation” document, re-title the “CourseCode_Module_SimulationName”, “CourseCode_Module_Customization”, and “CourseCode_Module_Configuration” text and embed links in that text to your “Simulation Name” Drive folder, “Customization” document, and “Configuration” document, respectively.
    • In the “Canvas Pages” section of the “Validation” document, re-title the “CourseCode_Module_SimulationName” and “Simulation Name” text and embed links in that text to your duplicated “Archival” page in the SIM Development Sandbox and the “Target” page for your simulation, respectively.
  • Navigate to the “Simulation Name” “Target Page” for your simulation and conduct Phase 1-4 testing as outlined in the “Testing” section of the “Validation” document.
    • Classify observed behavior for test criteria as “E” (Expected) or “UE” (Unexpected) in the “E/UE” column of each phase table.
    • Document the observations that justify each classification in the “Observations” column of each phase table.
    • Synthesize and summarize observations/classifications for each phase in the “Phase Summary” section underneath phase tables.
  • Once you’ve classified and documented observed behavior for each test criterion, justified each of those classifications, and provided a high-level synthesis/summary for each phase, paste those phase summaries into the appropriate cells of the “Test Summary” column of the “Refinement” section table and determine whether remediation is required to refine your simulation.
    • If your justified classification for all test criteria of Phases 1-4 was “E” (Expected):
      1. Provide an “Approved for Deployment” or “Tentatively Approved for Deployment” sign-off recommendation in the “Sign-off” section of the “Validation” document.
      2. Re-title the text for and embed links to the “Archival” and “Target” pages for your simulation in the “Resource Hand-off (ID)” section of the Wrike task description for Step 2.3:
        1. Archival: Type - CourseCode_Module_Simulation Name
        2. Target: Simulation Name
      3. Make a note of the “Batch”, “SIM Author”, and “EUA Tester” column cell entries for your simulation row in the Simulation Deployment Tracker, and record them in the “Resource Hand-off” section of the Wrike task descriptions for Step 2.3 (in preparation for EUA testing), Step 3.1 (in preparation for QA/CE), and Step 3.2 (in preparation for faculty outreach/approval), so the parties respectively responsible for those steps can reference them.
      4. Navigate to the Simulation Deployment Tracker, set the “Status” column cell of your simulation row as “Ready for EUAT”, and identify the ID assigned to your SIM for End User Acceptance testing (Step 2.3) in the “EUA Tester” column cell. If an ID wasn’t assigned for End User Acceptance testing, @mention Jen Hynes or Susan Herman in a comment posted to the Wrike task for Step 2.3 requesting ID EUA tester assignment and, once one has been assigned, ensure their first name and the first initial of their last name have been added to the “EUA Tester” column cell of your simulation row.
      5. ✅ Wrike Task: Set the status of Step 2.2 as Done, Step 2.3 as To Do.
    • If your justified classification for one or more test criteria of Phases 1-4 was “UE” (Unexpected), determine the extent to which that behavior is an issue and either hand off for Step 2.3 as described above or proceed to the following sub-step.

Iteratively Remediate/Test Your Simulation to Refine Observed Behavior

  • Given the nature of generative AI, perfectly consistent alignment with Phase 1-4 test criteria isn’t realistically attainable (at least with the current model), but it is important that the frequency/severity of unexpected behavior is mitigated to the extent possible.
    • Phases 1-3: Attempts should be made to remediate simulation resources to address unexpected behavior here, particularly in Phase 1, as Phase 1-3 test criteria are reflective of the engagement expected of a typical student.
    • Phase 4: Given that this phase is reflective of edge case student engagement and meant to stress test the simulation, attempts should only be made to address particularly egregious/consistent violations of test criteria.
  • If a decision has been made to remediate simulation resources to address observed behavior, review the summaries of each phase added to the “Refinement” table of the “CourseCode_Module_Validation” document, identify the specific observations you feel are most critical to address, and determine any relationships that may exist between them to find the root cause(s).
  • Review the “CourseCode_Module_Customization” and “CourseCode_Module_Configuration” documents, along with the files set for upload to the SIM GPT, against Step 1 and 2 instructions to determine whether an error was made at some point along the way that might explain the unexpected behavior and/or whether there is an opportunity to refine observed behavior.
    • Major Issues: If unexpected behavior was significant, frequent, and/or observed across numerous criteria/phases, it is likely that a mistake was made at some point across Steps 1 and/or 2. If this is the case, the best course of action is to completely re-do Steps 1 and 2, as manual edits are unlikely to address the root cause(s).
    • Minor Issues: If unexpected behavior is insignificant and/or only present across a few criteria/phases, it is likely that the issues are simulation-specific nuances that can be addressed through manual edits to the system instructions or re-work of aspects of Step 1.
  • Refine simulation resources to remediate observed unexpected behavior on that basis, leveraging the three levers (the customizable portions of the template) to do so:
    • First Lever - Topic: Provides a high-level thematic framework for the simulation, ensuring that the nature/breadth of the simulation is appropriate for and aligned with the concepts it is meant to assess in its host module.
      1. A learning-objective-style topic is best here, and you want to ensure that the terminology used is aligned with/reflective of the terminology faculty use in the assets your simulation is meant to assess and the host module more generally.
      2. Additionally, the topic should effectively encapsulate the objectives (next lever), as it is a way of capturing the theme for the presentation and assessment of those objectives.
    • Second Lever - Primary Objectives: Establish assessment goals for the simulation and determine the sequence (to an extent), breadth, and depth of the student’s interaction with the simulation.
      1. Scope/specificity are key here, as those goals will determine the breadth, depth, and accuracy of concepts explored/assessed.
      2. The narrower the scope, the greater the depth, specificity, and accuracy of simulation interactions in the context of the assets the simulation is meant to assess.
      3. The more specific those objectives are about concepts, terminology, strategies, techniques, acronyms, etc. (in alignment with faculty content), the more accurate/aligned simulation interactions will be in the context of module concepts/context.
    • Third Lever - Conditional Details: Provides a framework for the scenario/interactions generated, which includes the topic by default, but can be enhanced to provide more detailed, specific starting point scenario/decision point conditions for simulation interactions.
      1. By default, the only course specific language included in this section is the topic, but this is another touchpoint that can have a significant impact.
      2. Nouns and adjectives added in this section to better specify/clarify the thematic/conceptual details you feel are required to set the stage are often sufficient.
      3. While it is important to rely on the creative power of these tools to generate scenarios based on the model’s interpretation of the topic/objectives (through the lens of simulation instructions and your course transcript), this is where you can ensure that the stage is set more effectively, creating a more relevant scenario and/or initial starting conditions, with associated downstream effects on the simulation as a whole.
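As a quick mental model, the three levers can be visualized as a small configuration object. The sketch below is purely illustrative: the field names are not the actual SIM utility schema, and the sample topic, objectives, and details are hypothetical.

```python
# Illustrative only: these field names and values are NOT the real SIM
# utility schema; they just visualize how the three levers fit together.
levers = {
    # Lever 1 - Topic: a learning-objective-style theme using faculty terminology.
    "topic": "Building connections between entrepreneurs and investors",
    # Lever 2 - Primary Objectives: narrow, specific assessment goals that set
    # the sequence, breadth, and depth of the interaction.
    "primary_objectives": [
        "Identify the criteria investors use when evaluating an early-stage pitch",
        "Apply relationship-building techniques in a first investor meeting",
    ],
    # Lever 3 - Conditional Details: nouns/adjectives that set the scenario stage.
    "conditional_details": "A first-time founder meeting a skeptical angel investor",
}
```

Tightening any one of these fields (a narrower objective, a more specific conditional detail) is the kind of targeted remediation the refinement cycle relies on.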
  • Once you’ve addressed one or more of the levers available to refine simulation output, navigate back to the “New SIM Utility” via Firefox, make any necessary adjustments, and click “Update SIM” to re-configure your SIM GPT. Then test your simulation again to gather additional observations, refine on that basis, and continue that iterative cycle until unexpected behavior has either been addressed or mitigated in frequency/significance to the extent that it wouldn’t be a meaningful issue for expected student engagement with the simulation.
    • Note: Remember that you must navigate to the “Target” page via Firefox and make adjustments via the utility there to reconfigure your simulation. For testing, by contrast, navigate to the “Target” page in Chrome, ensuring that you have refreshed the page once any updates have been confirmed in the utility.
  • Document the results of each testing/remediation cycle as outlined in the “Validation” document until you feel that an “Approved for Deployment” or “Tentatively Approved for Deployment” recommendation in the “Sign-off” section of the “Validation” document is justified, ensuring that you update your “Configuration” document with each iteration as a track record of those changes (version history).
    • Note: If you are struggling to address unexpected behavior after a few testing/remediation cycles, reach out to an ID-Lead (John C, Jason C, Vince IC, Dan T, or Rachel H) for help in the sim-dnd Slack channel.
  • Re-title the text for and embed links to the “Archival” and “Target” pages for your simulation in the “Resource Hand-off (ID)” section of the Wrike task description for Step 2.3:
    • Archival: Type - CourseCode_Module_Simulation Name
    • Target: Simulation Name
  • Make a note of the “Batch”, “SIM Author”, and “EUA Tester” column cell entries for your simulation row in the Simulation Deployment Tracker, and record them in the “Resource Hand-off” section of the Wrike task descriptions for Step 2.3 (in preparation for EUA testing), Step 3.1 (in preparation for QA/CE), and Step 3.2 (in preparation for faculty outreach/approval), so the parties respectively responsible for those steps can reference them.
  • Navigate to the Simulation Deployment Tracker, set the “Status” column cell of your simulation row and assign subsequent tasks as appropriate given the nature of your refinement:
    • If your refinement was a consequence of your own Step 2.2 testing:
      1. Set the status column cell of your simulation row as “Ready for EUAT” and assign Step 2.3 to the ID assigned to test your simulation as indicated in the “EUA Tester” column cell of your simulation row.
      2. ✅ Wrike Task: Set the status of Step 2.2 as Done and Step 2.3 as To Do.
    • If your refinement was a consequence of Step 2.3 EUAT feedback:
      1. Set the status column cell of your simulation row as “Ready for EUAT” and tag the “EUA Tester” already assigned to your simulation to a comment posted to Step 2.3 alerting them to its readiness for subsequent testing.
      2. ✅ Wrike Task: Set the status of Step 2.2 as Done and Step 2.3 as To Do.
    • If your refinement was a consequence of Step 3.1 or 3.2 feedback:
      1. Set the status column cell of your simulation row as “Ready for QA” and tag Karen Shepherd in a comment posted to Step 3.1 alerting her to its readiness for another round of QA.
      2. ✅ Wrike Task: Set the status of Step 2.2 as Done and Step 2.3 as To Do.
2.3: End User Acceptance Testing (EUA Tester)

Familiarize Yourself with the Context for this Simulation (2.3.1)

  • ✅ Wrike Task: Set the status of Step 2.3 as In-Progress.
  • This evaluation is meant as a final layer of simulation review to identify any persistent technical, pedagogical, and/or experiential issues before it is deployed live. In addressing testing substeps, please adopt the perspective of a student when possible.
    • Time Investment: Plan for 1-2 hours total testing time.
    • Testing Requirement: Complete 5-9 full simulation runs to collectively address and iterate on Step 2.3 testing phases.
    • Desired Outcome: Provide an informed assessment of deployment readiness.
  • Identify the CourseCode of this simulation and review the “Deployment Tracker” portions of the “Resource Hand-off (ID)” section at the top of the Wrike task description for Step 2.3 to identify the row for this simulation in the Simulation Deployment Tracker.
  • Once the row for this simulation in the Simulation Deployment Tracker has been identified, set the “Status” column cell of that row as “In EUA Testing”.
  • Navigate to the “Archival” reference page for that simulation linked in the “Canvas Pages” section of the “Resource Hand-off” section in the Wrike task description for Step 2.3 and review content in the “Design Context” and “Canvas Content” drop-downs drafted by the authoring ID against the listed “Source Inputs” from the host module of the target course to get a sense of the foundation/purpose of the simulation and better understand the role it is meant to play within that module.
    • Design Context (ID-facing):
      1. Review the specific “Source Input” course materials this simulation was built around and meant to assess.
      2. Identify the concepts covered by those course materials that the simulation is meant to assess and the intended depth/breadth of that assessment.
      3. Identify the pedagogical role this simulation is meant to serve within the host module.
    • Canvas Content (Student-facing):
      1. Review and compare the student-facing Canvas copy with drafted “Design Context”.
      2. Assess the extent to which it effectively sets the stage for student engagement with, and assessment by, the simulation on that basis.
  • Then, armed with that context, navigate to the “Target” page for the simulation linked in the “Canvas Pages” section of the “Resource Hand-off” section in the Wrike task description for Step 2.3 and evaluate the extent to which it and associated source inputs in the target course are aligned with the “Design Context” drafted by the authoring ID on the “Archival” page for the simulation.

SIM End User Acceptance Testing (2.3.2)

  • Complete testing phases 1-4 (Initial Output → Interactive Structure → Termination → Edge Cases) in order on the “Target” page for the simulation linked in the “Canvas Pages” section of the “Resource Hand-off” section in the Wrike task description for Step 2.3 as though you were a student encountering this simulation for the first time. To make the best use of your time and more efficiently assess test criteria, however, consider using ChatGPT to generate responses to counterpart dialogue for you.
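If you do script counterpart replies with ChatGPT, one hedged approach is sketched below. It assumes the `openai` Python SDK; the model name is a placeholder, and the network call itself is left commented out so the sketch runs without credentials.

```python
# Hypothetical helper for drafting student-side replies during EUA test runs.
# Assumes the `openai` Python SDK; the actual API call is commented out.

def build_student_messages(counterpart_dialogue: str,
                           persona: str = "an honest, on-topic student") -> list:
    """Assemble a chat request asking a model to reply as a student would."""
    return [
        {"role": "system",
         "content": f"Reply in first person as {persona} "
                    "engaging with a course simulation."},
        {"role": "user", "content": counterpart_dialogue},
    ]

messages = build_student_messages(
    "As the investor, I have to ask: why should I back your venture?")
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

Paste the generated reply into the simulation input box as usual; the simulation itself is still tested manually on the “Target” page.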
  • Once you’re done, check the following boxes as attestation of your alignment with each:
  • Phase 1 - Simulation Load (Initial Output)
    • Evaluate the initial output of the simulation once it has loaded.
      1. Content may vary between sessions; this is expected.
      2. Focus on required elements being present, not exact wording.

E/UE | Phase 1 Test Criteria | Observations
---- | --------------------- | ------------
  | Simulation loads successfully without errors. |
  | Includes a title, introductory context, simulation objectives, notice of termination conditions and coaching context, and a scenario for that simulation. |
  | Instructionally clear, unambiguously formatted/organized, effectively sets SIM expectations, and provides sufficient content/context for student engagement. |
  | Appropriately relevant/detailed/specific scenario provided given simulation objectives, followed by a “Start this Simulation” button. |
  | Output is contextually relevant given module scope/placement, limited to taught materials at this point, and is implicitly aligned with the asset(s) it is meant to assess (Design Context + Canvas Content). |

Phase 1 Summary:

[Summarize E/UE and associated observations here.]

  • Phase 2 - Interaction Structure (Subsequent Responses):
    • Evaluate subsequent output of the simulation.
      1. Provide input reflective of a student honestly engaging with the simulation in the course.
      2. Whether you input correct or incorrect answers, ensure they are always on-topic.

E/UE | Phase 2 Test Criteria | Observations
---- | --------------------- | ------------
  | Upon clicking the “Start this Simulation” button, initial output is always limited to a first person counterpart dialogue response. |
  | Audio and/or text input can be easily/intuitively entered into and submitted/accepted via the input box. |
  | SIM output (counterpart dialogue + coaching support) is quick, relevant, and adaptive to student input in the context of simulation objectives. |
  | Across interaction turns, required student input is always a first person response to counterpart dialogue implicitly assessing one or more of the simulation objectives. |
  | Coaching support is always limited to contextually relevant guidance implicitly reflective of the objective(s) being assessed and ends with a request that students respond to the last counterpart dialogue in that context; example responses or solutions to counterpart dialogue are never provided. |
  | Output (counterpart dialogue + coaching support) is contextually relevant given module scope/placement, limited to taught materials at this point, and is implicitly aligned with the asset(s) it is meant to assess (Design Context + Canvas Content). |

Phase 2 Summary:

[Summarize E/UE and associated observations here.]

  • Phase 3 - Termination (Final Output)
    • Evaluate termination conditions and associated final output of the simulation.
      1. Test both successful and unsuccessful achievement of simulation objectives.
      2. Like Phase 2, ensure that your input is always on-topic.

E/UE | Phase 3 Test Criteria | Observations
---- | --------------------- | ------------
  | SIM terminates successfully when defined objectives have been met, achievement against each of which is implicitly self-evident given the nature/adaptation of counterpart dialogue across interactions. |
  | SIM terminates unsuccessfully after 3 failed attempts at any single objective, failure against any of which is implicitly self-evident given the nature/adaptation of counterpart dialogue. (Note that sufficiently off-topic attempts result in instant termination, a phenomenon covered in Phase 4.) |
  | Upon termination, one final counterpart response is provided, followed by summative feedback reflective of objective achievement across interactions as a whole (all of which should be explicitly reflective of the objectives provided via Phase 1 initial output). |
  | Output (final dialogue + summative feedback) is contextually relevant given module scope/placement, limited to taught materials at this point, and is implicitly aligned with the asset(s) it is meant to assess (Design Context + Canvas Content). |

Phase 3 Summary:

[Summarize E/UE and associated observations here.]

  • Phase 4 - Edge Cases (Stress Testing)
    • Evaluate how the simulation responds to edge case inputs at different points during interaction.
      • Run 2–4 fresh sessions of the simulation.
      • Your goal is to identify boundaries for termination and confirm the SIM responds appropriately to problematic/unexpected input.

E/UE | Phase 4 Test Criteria | Observations
---- | --------------------- | ------------
  | Short 2-3 word inputs are accepted and handled appropriately given their relevance to the objective being implicitly assessed (via counterpart dialogue). |
  | Long 2000+ word inputs are accepted and handled appropriately given their relevance to the objective being implicitly assessed (via counterpart dialogue). |
  | Special characters and/or formatting are accepted and handled appropriately given their relevance to the objective being implicitly assessed (via counterpart dialogue). |
  | Both audio and text inputs are accepted and handled appropriately given their relevance to the objective being implicitly assessed (via counterpart dialogue). Note: To effectively assess this criterion, it is critical that you test with audio input exclusively, text input exclusively, and a mix of both across separate simulation runs. |
  | Foreign language inputs are accepted and handled appropriately given their relevance to the objective being implicitly assessed (via counterpart dialogue). Counterpart dialogue should reflect language input, but coaching advice and initial/final simulation output should always remain grounded in English. |
  | Off-topic inputs are handled appropriately given extent, where slight deviation prompts counterpart preference for relevant input and major deviation results in instant termination. |
  | Incorrect and/or unprofessional (but on-topic) inputs are handled appropriately given extent, where slight deviation prompts counterpart preference for ‘correct and/or appropriate’ input and major deviation results in instant termination. |
  | Coaching support is only provided once per counterpart response; subsequent requests prompt a request that students address that response. |

Phase 4 Summary:

[Summarize E/UE and associated observations here.]
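Assembling Phase 4 inputs by hand can be tedious. The helper below is a hypothetical convenience script (not part of the official procedure) for generating the stress-test strings named above; paste its output into the simulation input box manually, swapping the arbitrary filler sentence for on-topic course language.

```python
# Hypothetical convenience script for Phase 4 stress-test inputs; the filler
# sentence is arbitrary and should be replaced with on-topic course language.

def short_input() -> str:
    """A 2-3 word reply."""
    return "Sounds reasonable."

def long_input(min_words: int = 2000) -> str:
    """A reply padded past the 2000-word threshold."""
    filler = "I would approach this decision carefully and weigh every option. "
    per_sentence = len(filler.split())  # 10 words per repetition
    return (filler * (min_words // per_sentence + 1)).strip()

def special_characters_input() -> str:
    """A reply mixing special characters and formatting."""
    return "My plan:\n* listen first\n* ask «why?» & summarize <b>key</b> points (100%!)"
```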

SIM End User Deployment Sign-Off (2.3.3)

  • Review Phase 1-4 test summaries and check whichever of the following boxes is most closely aligned with the collective synthesis of E vs. UE classifications and the observations justifying them.
  • Approved for Deployment:
    * The SIM meets all functional, pedagogical, and experiential expectations.
    * It is fully ready for student use, with no outstanding issues or concerns.
  • Tentatively Approved for Deployment: (2 unexpected breakdowns as noted)
    * The SIM meets core functional, pedagogical, and experiential expectations, but has minor issues or potential areas for improvement.
    * Suitable for student use as-is, though refinement is recommended if/when feasible.
  • Revision Suggested:
    * The SIM mostly meets core functional, pedagogical, and/or experiential expectations, but has moderate issues or critical areas for improvement.
    * Likely suitable for student use as-is, though re-work/refinement is recommended.
  • Revision Required:
    * The SIM does not meet core functional, pedagogical, and/or experiential expectations.
    * Not suitable for student use as-is, re-work/refinement is required.
  • Post a comment to this Wrike task (Step 2.3) that synthesizes Phase 1-4 summaries and provides a deployment sign-off recommendation aligned with and justifying whichever box you checked above, aware that this is the last layer of review before the simulation is deployed live:
    • Review and synthesize Step 2.3.2 phase summaries with respect to the potential benefits/concerns they highlight from an experiential, pedagogical, and/or functional (technical) perspective.
    • Revisit and review Step 2.3.1 context (Design Context and Content) and evaluate the extent to which those conditions were relevant/fulfilled.
    • Connect that evaluation to the box you checked above in providing a deployment sign-off recommendation.
  • ✅ Wrike Task: Set the status of Step 2.3 as Planned and Step 2.2 as To Do if you’ve selected either “Revision Suggested” or “Revision Required”, navigate to the Simulation Deployment Tracker and set the “Status” column cell of the row for this simulation as “In Development”, and tag the authoring ID in the Wrike comment you posted.
  • ✅ Wrike Task: Set the status of Steps 2 and 2.3 as Done and Steps 3 and 3.1 as To Do if you’ve selected either “Approved for Deployment” or “Tentatively Approved for Deployment”, navigate to the Simulation Deployment Tracker and set the “Status” column cell of the row for this simulation as “Ready for QA”, and tag both the Authoring ID and QA in the comment you posted.

Step 3: Finalize, Validate, and Deploy Your Simulation

In this step, the “Target” page for your simulation (and all associated pages) will be copy-edited and undergo QA; you will then pass that finalized version to an Ambassador to obtain faculty approval and, once approved, it will be passed back to QA for live deployment. By following the sub-steps outlined below, you ensure that your simulation:

  • Has been copy edited and undergone QA.
  • Has been reviewed and approved (or rejected) by the Faculty Author of the Target course.
  • Has been deployed live in the -M version of your target course.
3.1: Simulation Quality Assurance (QA)

Identify and Review Simulation Resources in the Deployment Tracker

  • ✅ Wrike Task: Set the status of Step 3.1 as In-Progress.
  • Review the “Deployment Tracker” portion of the “Resource Hand-off” section in the Wrike task description for Step 3.1 along with the CourseCode present in the Wrike task title to identify the row for this simulation in the Simulation Deployment Tracker.
  • Once the row for this simulation in the Simulation Deployment Tracker has been identified, open the “Archival” column cell link of that row (also embedded in the “Deployment Tracker” portion of the “Resource Hand-off” section in the Wrike task description for Step 3.1), which provides access to the eC DnD-facing “Archival” reference page for the simulation.
    • Note: The “Archival” page is meant as an archival record of the simulation and reference for the source inputs to and context for its design. While this page should be consistent with the “Target” page, it isn’t ever meant to be deployed live.
  • Review the “Design Context” and “Canvas Content” sections of the “Archival” page, referencing the “Source Inputs” listed on that same page as necessary to get a sense of the pedagogical intent/design of that simulation with respect to the Canvas pages, content, and concepts it is meant to assess.
    • Source Inputs: The Canvas pages/resources that directly contributed to and/or are assessed by the simulation, which simulation output should be implicitly aligned with.
    • Design Context: An explanation of the purview of the “Source Inputs” with respect to the design and pedagogical intent of the simulation, including mention of any pages associated with the simulation that were adjusted or replaced as a consequence of its inclusion.
    • Canvas Content: The student-facing Canvas page introduction drafted to implicitly satisfy/realize drafted “Design Context” and explicitly set the stage for student engagement with the simulation.
  • Once you’ve reviewed the “Archival” column cell linked reference page for the simulation, open the “Target” column cell link of that same row in the Simulation Deployment Tracker (also embedded in the “Canvas Pages” portion of the “Resource Hand-off” section in the Wrike task description for Step 3.1), which provides access to the simulation page where CE/QA is meant to take place.
    • Note: The “Target” page for a simulation being retrofitted to a catalogue course that has already been developed MUST be present in a “CourseCodeDEVYYYY-SIM” copy of the current -M version of that course. If the -M was directly adjusted, that ID needs to be notified immediately and re-directed to Step 2.1 instructions highlighting that rule (to correct that mistake).

CE/QA the Simulation and Associated Pages of the Target Course

  • CE/QA the “Target” page for the simulation, using your cross-referencing/evaluation of the “Source Inputs”, “Design Context”, and “Canvas Content” sections of the “Archival” page as necessary/appropriate to guide those efforts:
    • The simulation should invariably load a few seconds after the Canvas page loads.
      1. If it doesn’t, the HTML framework for the simulation was likely embedded/populated incorrectly.
      2. While this should never happen given that both ID and EUA testing take place on this page, reach out to the authoring ID if it does.
    • The page title, <h2> header, and name presented by the simulation itself should be consistent with each other.
      1. All three should be consistent aside from the word “simulation”, as the simulation will always refer to itself as such. For example:
        1. Activity: Building Connections Between Entrepreneurs and Investors
        2. <h2 class="activity">Building Connections Between Entrepreneurs and Investors</h2>
        3. “Welcome to the Building Connections Between Entrepreneurs and Investors Simulation!”
      2. While it would likely be best to reach out to the authoring ID if that isn’t the case, the page title and/or <h2> header should be updated to reflect whatever title the simulation presents on page load, as a title change within the simulation itself would require re-configuration.
    • The “Target” page introduction copy should be error-free, grammatically valid, consistent with the “Canvas Content” outlined on the “Archival” page, and implicitly (or explicitly to the extent necessary) reflective of the “Design Context” outlined on that same page.
      1. Verify that the introduction copy of the “Target” page is the same as the “Canvas Content” outlined on the “Archival” page, then copy edit that content.
      2. Verify that the introduction copy of the “Target” page is implicitly reflective of the goal it was meant to satisfy as captured in the “Design Context” of the “Archival” page.
      3. If consistency issues between the “Target” page and “Canvas Content” and/or “Design Context” are found, reach out to the ID to determine why and make corrections where necessary to achieve appropriate consistency.
    • The following HTML standard must be maintained on the “Target” page, although variation for added context in the “Canvas Content” section (<p> content tags) is acceptable:
      <h2 class="activity">Simulation Name</h2>
      <p>Canvas Content</p>
      <p>Canvas Content</p>
      <div style="background-color: #f5f5f5; border: 1px solid #cecece; border-radius: 10px; padding: 0px 10px;">
      <p><span>This simulation uses AI and is experimental.</span></p>
      <p style="margin-top: -5px;">If you encounter issues, please click the <strong>Report a concern</strong> button below to share feedback.</p>
      </div>
      <div id="AISimulation" data-modalities="text,audio" data-assistant-id="[Assistant ID]"></div>
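The required framework can also be checked mechanically during CE/QA. The sketch below uses only the Python standard library to confirm the simulation div and its required attributes are present; it is an illustrative aid (the sample page fragment and assistant ID are hypothetical), not an official tool, and it does not replace a visual pass.

```python
# Sketch of a mechanical check for the embed framework above; it verifies
# only the simulation div and its required attributes, nothing more.
from html.parser import HTMLParser

class SimFrameworkCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found_sim_div = False
        self.has_modalities = False
        self.has_assistant_id = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("id") == "AISimulation":
            self.found_sim_div = True
            self.has_modalities = attrs.get("data-modalities") == "text,audio"
            self.has_assistant_id = bool(attrs.get("data-assistant-id"))

# Hypothetical page fragment with a placeholder assistant ID.
page = ('<div id="AISimulation" data-modalities="text,audio" '
        'data-assistant-id="asst_EXAMPLE123"></div>')
checker = SimFrameworkCheck()
checker.feed(page)
assert checker.found_sim_div and checker.has_modalities and checker.has_assistant_id
```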
    • The inter/intra-modular placement of the “Target” page should be reflective of the “Design Context” outlined on the “Archival” page.
      1. Verify that the “Target” page was placed in the correct module and in the correct location within that module as outlined in the “Design Context”.
      2. If it wasn’t, or if there are other inconsistencies with respect to the “Target” course/module per “Design Context”, reach out to the ID to determine why and ask that they course correct on the “Target” and/or “Archival” page on that basis.
    • If the “Target” page was meant to explicitly complement/enhance existing pages per “Design Context”, then that relationship should be explicitly clear.
      1. If the simulation was meant to explicitly complement/enhance existing pages, then the role that the “Target” and associated pages play in that relationship should be explicitly clear.
      2. The extent to which that relationship can/should be made explicit might be subjective, but if it feels inappropriate given “Design Context” stipulations, reach out to the authoring ID for clarity.
    • If the “Target” page was meant to replace an existing page or pages (e.g., traditional activity) per “Design Context”, it is critical that the page or pages being replaced were actually unpublished and removed from the course.
      1. This is often overlooked, but it is critical to verify, as leaving replaced pages in place would likely lead to significant confusion for students.
      2. If this appears to be the case, reach out to the authoring ID for clarity, as it could have been an intentional decision made at a later stage in the process.
    • Verify that the “Home” page of the “Target” course has been updated to account for all changes identified through QA/CE steps taken to this point.
      1. Verify that the simulation activity is accounted for in the correct host module asset count.
      2. Verify that any adjustments to and/or removal of existing pages associated with the simulation are reflected in the host module asset count.
  • Post a comment to the Wrike task for Step 3.1 that outlines adjustments implemented and clarifies remaining concerns, tagging the authoring ID for review, feedback, and/or revision.
    • ✅ Wrike Task: Set the status of Step 3.1 as Waiting for Resources if any of the issues identified require authoring ID input but are minor in nature; wait for the ID to address them before proceeding.
    • ✅ Wrike Task: Set the status of Step 3.1 as Planned and Step 2.2 as To Do if any of the issues identified require authoring ID input and are more major in nature, kicking SIM dev back to the refinement stage.
  • Navigate to the Simulation Deployment Tracker and set the “Status” column cell of the simulation row as “Ready for FR” if no issues were identified, if identified issues didn’t require ID input to address, or if identified issues were ultimately addressed by the authoring ID.
  • Finally, identify the person responsible for faculty outreach/approval in the “Ambassador” column cell of the simulation row, assign them to Step 3.2, and tag them in a comment posted to that task notifying them of the simulation’s readiness for faculty approval.
  • ✅ Wrike Task: Set the status of Step 3.1 as Done and Step 3.2 as To Do.
3.2: Simulation Faculty Approval (Ambassador/ID)
  • ✅ Wrike Task: Set the status of Step 3.2 as In-Progress.
  • Identify the CourseCode of this simulation and review the “Deployment Tracker” entries in the “Resource Hand-off (ID)” section at the top of the Wrike task description for Step 3.2 to identify the row for this simulation in the Simulation Deployment Tracker.
  • Once the row for this simulation in the Simulation Deployment Tracker has been determined, identify the Product Owner and Authoring Faculty in the “PO” and “Faculty” column cells of that simulation row and populate the “Product Owner” and “Authoring Faculty” fields of the “Resource Hand-Off (Ambassador)” section above accordingly.
  • Open the “CourseCode_Module_SimulationName” link in the “Drive” column cell of that row, which provides access to the eC DnD-facing development folder for the simulation, and evaluate the “Faculty” column across the current and preceding/succeeding batches in development to determine whether any other simulations are being developed for the faculty author(s) involved.
    • If this is the only simulation in or planned for development in the current and adjacent batches of simulations for the authoring faculty:
      1. Navigate to the Google Docs Template Gallery, open the “CourseCode_Module_Approval” Google Doc template, re-title it to reflect the target course/module of the simulation (e.g., CREA502_M2_Approval).
      2. Move that file into the “Drive” development folder you just opened, then paste the share link for that “Approval” document in the “Approval Draft Link” field of the “Resource Hand-off (Ambassador)” section above.
      3. Navigate back to your “Approval” document and replace its [Field] brackets with the content you populated in the corresponding fields of the “Resource Hand-off (Ambassador)” section above, then proceed to the next step.
    • If this is one of several simulations in or planned for development in the current and adjacent batches for the authoring faculty, identify which “Ambassadors” are associated with those simulations via that column of the Simulation Deployment Tracker and communicate with them to determine whether an “Approval” document has already been created. If it has, ensure that a shortcut to that file was added to the “Drive” folder for your simulation, or request one if not. If an “Approval” document hasn’t been created:
      1. Navigate to the Google Docs Template Gallery, open the “CourseCode_Module_Approval” Google Doc template, re-title it to reflect the target certificate(s) (e.g., CREA500s_Approval).
      2. Replace the [Field] brackets of your “Approval” document with the content you populated in corresponding fields of the “Resource Hand-off (Ambassador)” section above and as appropriate given the other simulations involved.
      3. Add a shortcut for that file to the “Drive” development folders for the other simulations included in faculty outreach and paste the share link for that “Approval” document in the “Approval Draft Link” fields of the “Resource Hand-off (Ambassador)” section of Step 3.2 above.
      4. ✅ Wrike Task: Set the status of Step 3.2 as Waiting for Resources and the “Status” column cell of the Simulation Deployment Tracker to “Waiting for FR” while awaiting faculty review readiness of all simulations associated with the authoring faculty across preceding/succeeding batches.
  • Add the NetID(s) for the “Authoring Faculty” to the “Resource Hand-off (Ambassador)” section of the Wrike task description for Step 3.2. Then navigate to the “Target Course” associated with the “Target Page” linked in the “Resource Hand-off (ID)” section of that same task description, verify that the “Target Course” is published, and add the same NetID(s) as students to provide the requisite access for simulation review/approval.
    • Note: As a student, the faculty author(s) will only have one opportunity to engage with the simulation. Providing additional opportunities will require that their account be reset or granted a higher level of access.
  • Use the “Approval Draft” you populated and embedded in the “Resource Hand-off (Ambassador)” section to draft and send emails requesting simulation approval from the “Faculty Author(s)” identified in that same section, as stipulated in that document, copying the member of Product identified in the “Product Owner” field of the “Resource Hand-off (Ambassador)” section above for awareness. Post the body of those emails, and any responses by faculty, as comments to this task.
    • ✅ Wrike Task: Set the status of Step 3.2 as Waiting for Info and the “Status” column cell of the Simulation Deployment Tracker to “In Faculty Review” while awaiting an initial conditional or explicit approval/rejection response from the faculty author(s).
  • Depending on whether and how the faculty author responds to outreach, update the “Status” column cell of the Simulation Deployment Tracker and the associated Wrike task statuses for this simulation as appropriate to communicate deployment readiness:
    • Explicit Rejection: If the faculty author communicates explicit/resolute rejection of simulation inclusion as designed in their course.
      1. Post a comment to the Wrike task for Step 3.2 that outlines the perspective expressed by the faculty author in rejecting the simulation, tagging the Authoring ID, Product Owner, and QA for awareness.
      2. Navigate to the Simulation Deployment Tracker and set the “Status” column cell to “Faculty Rejected”.
      3. ✅ Wrike Task: Set the status of Step 3.2 as Backlog and Step 3.3 as “Not Applicable”.
    • Tentative Approval: If the faculty author responds with explicit concerns and/or suggested revisions.
      1. Post a comment to the Wrike task for Step 3.2 that outlines the nature of any concerns and/or suggested revisions and next steps to be implemented, tagging the Authoring ID, Product Owner, and QA for awareness.
      2. Navigate to the Simulation Deployment Tracker and set the “Status” column cell to “In Development”.
      3. ✅ Wrike Task: Set the status of Steps 3 and 3.1-3.2 as Planned and Steps 2 and 2.2 as To Do to send the simulation back to the Authoring ID for refinement.
    • No Response or Explicit Approval: If the faculty author either explicitly approved of the simulation or never responded to initial/subsequent outreach.
      1. Post a comment to the Wrike task for Step 3.2 that clarifies either condition and officiates simulation readiness for deployment in that context, tagging the Authoring ID, Product Owner, and QA for awareness.
      2. Navigate to the Simulation Deployment Tracker and set the “Status” column cell to “Deploy Ready”.
      3. ✅ Wrike Task: Set the status of Step 3.2 as Done and Step 3.3 as To Do.
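As a purely illustrative aside, the faculty-enrollment step in 3.2 (adding faculty NetIDs as students) could in principle be scripted against the Canvas REST API (POST /api/v1/courses/:course_id/enrollments). The host, token, and course ID below are hypothetical placeholders, not eCornell values; in practice this is typically done through the Canvas UI.

```python
# Sketch only: building a Canvas enrollment request with the standard library.
# BASE_URL and API_TOKEN are placeholders (assumptions), not real values.
from urllib import parse, request

BASE_URL = "https://canvas.example.edu"  # assumption: your Canvas host
API_TOKEN = "<your-token>"               # assumption: a token with enrollment rights

def build_enrollment_request(course_id: int, net_id: str) -> request.Request:
    # Canvas accepts "sis_user_id:<id>", so a NetID can be used directly
    # without first resolving it to a numeric Canvas user ID.
    payload = parse.urlencode({
        "enrollment[user_id]": f"sis_user_id:{net_id}",
        "enrollment[type]": "StudentEnrollment",
        "enrollment[enrollment_state]": "active",
    }).encode()
    return request.Request(
        f"{BASE_URL}/api/v1/courses/{course_id}/enrollments",
        data=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )

# Sending the request requires a live Canvas instance and a valid token:
# response = request.urlopen(build_enrollment_request(1234, "abc123"))
```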
3.3: Simulation Deployment (QA)

Prepare the Host Course of the Simulation for Deployment

  • ✅ Wrike Task: Set the status of Step 3.3 as In-Progress.
  • Review the “Deployment Tracker” portion of the “Resource Hand-off” section in the Wrike task description for Step 3.1 along with the CourseCode present in the Wrike task title to identify the row for this simulation in the Simulation Deployment Tracker.
  • Once the row for this simulation in the Simulation Deployment Tracker has been identified, open the “Archival” and “Target” column cell links of that row (which are also embedded in the “Canvas Pages” portion of the “Resource Hand-off” section in the Wrike task description for Step 3.1), which provide access to the eC reference and target pages for the simulation respectively.
  • Review the QA/CE and Ambassador notes left as comments on Steps 3.1/3.2 for context regarding any adjustments implemented, and consult the “Archival” page as a reference for “Design Context”, to ensure all requisite changes are made to the -M version of the “Target Course”.
  • Then use the “Target Course” as a content reference/source to incorporate the simulation into the -M version of that course and replicate all changes made to accommodate that simulation as well.
    • Navigate to the -M version of the host course for the simulation.
    • Create a duplicate of the simulation activity page in the appropriate inter/intra modular location of the -M Canvas shell.
    • Implement any changes made to pages of the “Target” course associated with the simulation in the -M Canvas shell.
    • Apply any changes made to the “Home” page of the “Target” course to the “Home” page of the -M Canvas shell.
    • Generate and embed an updated course transcript in the “Home” page of the -M Canvas shell reflective of simulation inclusion and all associated changes.
    • Blueprint sync adjustments made to the -M Canvas shell into the -T (Training) and -D (Demo) versions of that same course.
  • Navigate to the Simulation Deployment Tracker and address column cells of the row for this simulation as follows:
    • “Status”: Set the status as “Deployed Live” to reflect -M placement.
    • “Deploy”: Paste the current date and embed a link to the “Target” page in the -M course.
    • “Live”: Paste the date of the first planned student-facing copy of the -M course.
    • “Notes”: Paste any notes pertinent to initial/subsequent “Deployed” and “Live” column cell milestone dates.
  • ✅ Wrike Task: Set the status of Steps 3 and 3.3 as Done to officiate simulation deployment and post a comment to this task that outlines steps taken, tagging the authoring ID for awareness.
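Similarly illustrative: the Blueprint sync in the deployment steps above corresponds to Canvas’s Blueprint migration endpoint (POST /api/v1/courses/:course_id/blueprint_templates/default/migrations). The host, token, and course ID below are hypothetical placeholders; the sync is normally triggered from the Canvas UI.

```python
# Sketch only: queueing a Blueprint migration that pushes -M changes to the
# associated -T and -D shells. BASE_URL and API_TOKEN are placeholder assumptions.
from urllib import parse, request

BASE_URL = "https://canvas.example.edu"  # assumption: your Canvas host
API_TOKEN = "<your-token>"               # assumption: a token with Blueprint rights

def build_sync_request(blueprint_course_id: int, comment: str) -> request.Request:
    # "default" is the template ID Canvas uses for a course's primary Blueprint.
    payload = parse.urlencode({
        "comment": comment,
        "send_notification": "true",
    }).encode()
    return request.Request(
        f"{BASE_URL}/api/v1/courses/{blueprint_course_id}"
        "/blueprint_templates/default/migrations",
        data=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )

# Sending the request requires a live Canvas instance and a valid token:
# response = request.urlopen(build_sync_request(1234, "SIM deployment sync"))
```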

Appendix: SIM GPT Templates


Dialogue-driven Simulation

It is critical that you limit customization of this template to the three levers for refinement set between double asterisks, which should, at a minimum, include population/contextualization of the <Topic> and <Primary Objective(s)> placeholders.

[Template Start]

You are **a <Topic>** interactive, dialogue-driven role-play simulation tasked with using “course materials” uploaded to the vector store to assess the ability of a student to **<Primary Objective(s)>**. Begin with a well-formatted introduction that sets immediate context for the simulation topic and objectives and provide **a controversial/colloquial <Topic> scenario**.

[Template End]
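To make the levers concrete, here is a minimal, purely hypothetical sketch of populating the <Topic> and <Primary Objective(s)> placeholders programmatically; the topic and objective values are invented examples, not eCornell content.

```python
# Illustrative only: fill the template's placeholder levers with
# hypothetical values (here, a negotiation-strategy simulation).
TEMPLATE = (
    "You are **a <Topic>** interactive, dialogue-driven role-play "
    "simulation tasked with using \u201ccourse materials\u201d uploaded to the "
    "vector store to assess the ability of a student to "
    "**<Primary Objective(s)>**."
)

def populate(template: str, topic: str, objectives: str) -> str:
    """Substitute the two placeholder levers in the template."""
    return (
        template
        .replace("<Topic>", topic)
        .replace("<Primary Objective(s)>", objectives)
    )

prompt = populate(
    TEMPLATE,
    "negotiation-strategy",
    "identify and counter common anchoring tactics",
)
```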

