What should faculty teaching this fall know about the rapidly expanding artificial intelligence (AI) landscape? How can we design assignments and exams that foster ethical use of AI tailored to specific learning objectives while discouraging inappropriate use? Our new series of captioned videos – with companion audio files and transcripts – is part of an ongoing effort to support Teaching & Learning in the Age of Artificial Intelligence launched by CTLE and CLASS last spring. Please follow this link to learn more: Peer-to-Peer Advice: A Timely Conversation with Faculty Colleagues [No Login Required]
From Defining AI to Ensuring Equitable Access: Six Points to Consider in Teaching This Fall
With thanks to the members of the CTLE-CLASS Informal Faculty Working Group on Teaching & Learning in the Age of Artificial Intelligence whose ideas and conversations inspired these points: Jonna Gilfus (Writing Studies, Rhetoric, and Composition); Roger Hallas (English); Denise Heckman (Design); Jay Henderson (Biomedical and Chemical Engineering); Jing Lei (Education); Zeke Leonard (Design); Laura Lisnyczyj (Languages, Literatures, and Linguistics); and Bei Yu (School of Information Studies).
- AI Literacy We need to prepare students – and ourselves – for a world that is changing rapidly as AI expands and transforms work, workplaces, and higher education.
- Defining AI Artificial Intelligence long predates ChatGPT, the platform that rocketed AI into popular conversation. Whether we want students to use AI broadly, use it in specific ways in specific assignments, or steer clear of it in exams, we need to define our terms clearly. In your course, does “AI” include Grammarly? Google Translate? Spell Check?
- Bias and Error in AI Because AI software is developed using voluminous data from the Internet, it is subject to broader biases found online, including stereotypes about people of color, women, and members of marginalized groups. In addition, AI software can produce information that appears factual but is inaccurate, such as fabricated quotations or false claims.
- Ethical Usage of AI Using AI can save time and foster learning. But there are also situations in the classroom and the workplace in which using AI would be inappropriate or unethical. Given the newness of AI, transparency is a good starting point. This can take the form of conversations with students about situations in which using AI might – and might not – be a good idea. More broadly, ethical use also encompasses concerns about AI and copyright, as well as uncertainty around user privacy: it is not always clear whether user-generated prompts and related information are incorporated into and become part of an AI tool.
- Equity & Fairness in AI Access Please keep in mind that many AI tools are moving behind paywalls or being released in multiple versions, the latest and most sophisticated of which require a subscription fee. In addition, AI tools may not meet the Web Content Accessibility Guidelines (WCAG), the technical standards that ensure access for all users.
- AI Detection Perhaps it is fitting, or at least unsurprising, that AI detection is evolving quickly and bumpily, like the rest of AI. Detection tools are proliferating. None is foolproof. As Turnitin acknowledges on its own website: “Our AI writing detection model may not always be accurate…further scrutiny and human judgment” are needed to determine whether text is human or AI generated. For more on this, please view, listen to, or read our “Peer-to-Peer Advice” on responding to potential academic integrity violations.
CTLE and CLASS Tips and Strategies for Faculty and Instructors: What We Know about ChatGPT and Options for Responding [Answers: Requires Login with SUID]
Resource guide to the AI software ChatGPT [LibGuide on Library website]