Academic integrity, assessment, and artificial intelligence: Leading the decision-making process

Join me on Thursday, February 19th, at 1 pm Eastern (10 am Pacific) for an interactive, virtual conversation about guiding others through the process of AI decision-making for teaching and learning. There will be lots of time for discussion and questions!

As digital learning leaders, we’re often called upon to advise and guide when issues related to academic integrity and assessment arise in relation to technology use. There are many ways that institutions and individuals are responding to the widespread use of AI (not to mention a wide range of strong opinions!), and oftentimes it falls to us to help others navigate the path forward.

Ever since GenAI became a “thing,” I’ve had many conversations with individuals in a variety of roles in higher ed about whether or not AI use should be allowed in teaching and learning. In these discussions, I try to steer folks away from black-and-white thinking about a single academic task and toward the big-picture view of preparing students for success in their future careers.

With the acknowledgement that generative AI has not been around long enough for any of us to have a clear idea of its long-term impacts (or best practices for its use), here are three considerations that I recommend when guiding higher ed decision-makers through AI policy development.

  1. Focus on learning outcomes and workforce relevance first. What competencies do students need to develop over the course of their academic program? How will they be expected to use AI once they enter the workforce in their field of study? If we prevent students from using the AI technologies that their employers are expecting them to use, then we are doing them a disservice and sending them into the workforce at a disadvantage.

  2. Assume a lack of understanding, not ill intent, when mistakes are made. Mistakes will be made . . . plenty of them! Wherever possible, create touch points and opportunities for teachable moments before taking disciplinary action. My recent publication on how to de-risk taking risks emphasizes that a key part of digital literacy development is creating a culture where people can try new (or new-to-them) ways of using technology without fear of negative consequences if they don’t get it right the first time.

  3. Whether AI use is appropriate depends heavily on context. What counts as appropriate AI use in one context might be completely inappropriate in another. Students in the earlier stages of a program might need to demonstrate that they are competent in foundational skills without AI assistance; however, it may make sense for them to leverage AI at a later stage as they work toward employment readiness. And some AI tools are better suited to certain academic disciplines than others.

I’d love to hear your thoughts and best advice for those tasked with making decisions about AI use during our next Digital Learning Leadership Community webinar. All are welcome!

You’ll have the opportunity to share your insights in response to several group discussion questions. These questions are designed to help us think through how we can address the assessment and academic integrity challenges that higher education institutions are facing as a result of AI.
