Assessment in higher education should aim for a symbiotic relationship with generative AI technologies, one that maintains academic integrity and enriches student learning. The following short-term strategies and approaches can be folded into assessment design to enhance academic integrity and the student learning experience in the age of generative AI.
Talk with your UTS course director, subject coordinator, head of faculty and students about the use of large language models and other generative AI. Send an announcement in Canvas to your students to encourage them to engage with you on this issue and find out where your assessments could be adapted.
Encourage students to think critically about AI and its role in their learning by posing open questions for discussion.
Set clear guidelines for your students – refer to Guidelines for Generative AI for some good examples of the kinds of guidelines you can set.
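If you coordinate several large cohorts, the Canvas announcement mentioned above can also be posted programmatically. Below is a minimal sketch using the Canvas LMS REST API (announcements are discussion topics created with is_announcement set to true); the base URL, course ID and token are placeholders you would replace with your own, and posting through the Canvas web interface works just as well.

```python
import requests

CANVAS_BASE = "https://canvas.uts.edu.au"  # assumption: substitute your institution's Canvas URL
API_TOKEN = "YOUR_CANVAS_ACCESS_TOKEN"     # personal access token (Account > Settings in Canvas)
COURSE_ID = 12345                          # hypothetical course ID

def post_announcement(title: str, message_html: str) -> dict:
    """Create a course announcement via the Canvas LMS REST API.

    Announcements are discussion topics created with is_announcement=True.
    """
    response = requests.post(
        f"{CANVAS_BASE}/api/v1/courses/{COURSE_ID}/discussion_topics",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "title": title,
            "message": message_html,
            "is_announcement": True,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

post_announcement(
    "Generative AI in this subject",
    "<p>Let's talk about how tools like ChatGPT fit into our assessment tasks. "
    "Reply in the discussion board or raise it in this week's tutorial.</p>",
)
```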
Below are some suggestions for minor adjustments that can be implemented in a reasonably short timeframe and should not require additional committee approval. Your discipline, direction from your faculty, previously published material and equity considerations may all affect their suitability for your subject.
Be ready to answer questions about AI usage in your live classes. Students will need clear and consistent guidance on what they can and can’t do with AI in each subject. If AI is allowed as part of the assessment task, provide clear guidelines on when it is appropriate to use it (e.g. for draft development only).
Also consider encouraging students to acknowledge when they use AI as part of referencing. If you are part of a teaching team, check with your Subject Coordinator to ensure your message is consistent across the cohort.
Because the text that ChatGPT generates is not retrievable or sharable, it falls into APA’s catch-all ‘personal communication’ category, which you cite with an in-text-only citation:
(OpenAI, personal communication, January 16, 2023)
or
(Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 7, 2023).
However, this is not an entirely satisfactory option, especially because the technology is so new; both students and instructors are still learning about this resource and how to use it ethically.
Consider, then, making the ChatGPT conversation retrievable by including the text as an appendix or as online supplemental material. If you do so, then readers may be referred to the appendix or the online supplemental material (where the ChatGPT response may be contextualized) when the ChatGPT conversation is cited.
It would be good practice to describe, in the narrative or a note, the prompt that generated the specific ChatGPT response. This too will help inform the understanding of the technology and its outputs.
You could take an old question (or write one specially), generate an AI answer to it, prepare an annotated (‘marked’) version, and use that to show students the limitations of the AI, such as factual and referencing errors that would be obvious to a human marker. Compare this with the marking criteria for the task and explain why the answer would not do well without a lot more work.
You could extend this demonstration by outlining the amount of work it would take to ‘fact-check’ and correct the answer, then to produce accurate references. The subtext is that the AI doesn’t save time (at the moment) if you want a good-quality answer, or possibly even a pass. Alternatively, you could challenge students to find all the errors in the output and correct them. Make sure you practise any such task yourself first to ensure it demonstrates what you want it to.
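To prepare this demonstration you will need an AI-generated answer to annotate. A minimal sketch using the official openai Python package (v1 interface) is below; the model name and prompt are placeholders, and you could just as easily paste the question into the free ChatGPT interface and copy the response.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

exam_question = (
    "Critically evaluate ..."  # paste in the old or purpose-written question here
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat model available to you
    messages=[{"role": "user", "content": exam_question}],
)

draft_answer = response.choices[0].message.content
print(draft_answer)  # save this output, then annotate it against your marking criteria
```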
If you currently have a written assessment task, consider setting a question that requires students to reflect on a personal, local or authentic (real life) aspect of their learning. You could frame a written assessment around unique in-person components of the subject, like guest presentations not available elsewhere, a field trip, laboratory task or a discussion or debate held in a live class during the current semester. Basing a task on very current content (at least post-2021) may provide a level of short-term protection. Be aware of the possibility that recent task materials pasted into AI may be integrated into the AI dataset. Incorporating visual prompts, unique datasets, multimedia resources or relating questions to specific texts may also provide some protection at the moment. For more information see reflective writing tasks.
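If you want to take the ‘unique datasets’ idea further, a short script can give every student their own reproducible data file. The sketch below is only an illustration (the column names and value ranges are made up): seeding the random generator with the student ID means each student works on different numbers, but you can regenerate their exact dataset when marking.

```python
import csv
import random
from typing import Dict, List, Optional

def make_student_dataset(student_id: str, n_weeks: int = 52,
                         path: Optional[str] = None) -> List[Dict]:
    """Generate a small dataset that is unique to, and reproducible for, one student."""
    rng = random.Random(student_id)  # deterministic: the same ID always gives the same data
    rows = [
        {"week": week, "sales": round(rng.uniform(200, 900), 2)}
        for week in range(1, n_weeks + 1)
    ]
    if path:  # optionally write a CSV the student can download
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["week", "sales"])
            writer.writeheader()
            writer.writerows(rows)
    return rows

# Example: regenerate the dataset issued to a (hypothetical) student number
data = make_student_dataset("12345678", path="12345678_sales.csv")
```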
Asking students to talk through their work in class or via video link will help you determine whether they have approached the task in a way that has helped them learn the skills and knowledge needed to complete it independently. Asking questions about specific aspects of the ideas they have presented, how they came to make the choices they made, what they would change if they had more time, and what they have learnt through the process will also help them to reflect on their own learning.
Consider the weighting of criteria within your marking guides and rubrics. Can you adjust these to place greater emphasis on criteria that assess learning beyond what AI tools are capable of producing? Identifying the key criteria and elevating their importance will require students to exercise evaluative judgement and undertake substantial revisions to attain a passing grade, even if they do use AI. Factual accuracy and correct referencing to appropriate scholarly sources (that exist!) could be given greater weighting as these are known weaknesses of AI (at present). Requiring all references to include active hyperlinks should highlight false references, both to students and markers.
Consider whether holistic rubrics would be more flexible than analytic rubrics in this evolving situation. If you are using analytic rubrics, consider not allocating percentages or marks to particular criteria, or adjusting the allocations you presently have. For more information see designing rubrics.
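To see how much difference re-weighting can make, here is a small worked example (the criteria, scores and weights are hypothetical). A fluent but factually shaky, poorly referenced answer scores well under the original weights, but drops to a bare pass once accuracy and referencing carry more weight.

```python
# Marker's scores out of 5 for a fluent but factually shaky, poorly referenced answer
scores = {
    "argument_and_analysis": 4,
    "factual_accuracy": 2,
    "referencing_real_sources": 1,
    "structure_and_style": 5,
}

original_weights = {"argument_and_analysis": 30, "factual_accuracy": 15,
                    "referencing_real_sources": 15, "structure_and_style": 40}

revised_weights = {"argument_and_analysis": 30, "factual_accuracy": 30,
                   "referencing_real_sources": 30, "structure_and_style": 10}

def weighted_percentage(scores: dict, weights: dict) -> float:
    """Combine criterion scores (out of 5) into a percentage using the given weights."""
    total_weight = sum(weights.values())
    return sum(scores[c] / 5 * weights[c] for c in scores) / total_weight * 100

print(weighted_percentage(scores, original_weights))  # 73.0 under the original weights
print(weighted_percentage(scores, revised_weights))   # 52.0 once accuracy and referencing dominate
```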
Challenge students with a question like ‘Why learn to write in 2023?’, gather responses (perhaps using Mentimeter) and outline some reasons they might not have thought of.
Remind students that this is their opportunity to practise and learn in a supportive environment so that they’ll be able to operate effectively in their chosen career. Outsourcing their learning will leave them vulnerable in the future workplace.
The LX Adaptable Resources for Teaching with Technology provide practical strategies for Building Belonging into your first classes. This is essential when taking a holistic approach to students’ education and is a proven indicator of long-term motivation. You may also want to Support students with a values affirmation exercise, which further reduces barriers to their success, particularly for students from under-represented backgrounds.
If your assessment requires references, ask students to include an active link to each of their references. This might be as simple as a hyperlinked DOI like https://doi-org.ezproxy.lib.uts.edu.au/10.4324/9780429324178 or a permalink from the UTS Library.
This will assist markers, but it will also make students realise when the references provided by ChatGPT don’t exist or aren’t reliable. This is a key learning moment for students and part of cultivating their own sense of academic integrity.
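For markers (or students checking their own work), even a rough automated check of those links can surface fabricated references quickly. Below is a minimal sketch using the requests library to test whether each DOI actually resolves via doi.org; it only confirms that a record exists, not that the source says what the text claims.

```python
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI resolves at doi.org, i.e. the reference at least exists."""
    url = f"https://doi.org/{doi}"
    try:
        # Some publishers reject HEAD requests, so fall back to GET if needed
        response = requests.head(url, allow_redirects=True, timeout=timeout)
        if response.status_code == 405:
            response = requests.get(url, allow_redirects=True, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        return False

# Example: check a reference list (the second DOI is made up)
for doi in ["10.4324/9780429324178", "10.1234/not-a-real-doi"]:
    print(doi, "->", "resolves" if doi_resolves(doi) else "does not resolve")
```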
Stay connected with colleagues from across UTS who are confronting the same AI issues as you. One of the best ways to do this is by subscribing to the LX Blog and attending our scheduled events. If you have an approach to AI in your subject that you would like to share, email us at LX.lab@uts.edu.au
Get in touch with the LX.lab team by logging a ticket via ServiceConnect. We'll be in touch shortly.