After more than a year in the generative AI “evolution,” academic integrity continues to be the number one concern teachers bring up in my workshops. I know there are already a lot of ideas for “AI-resistant” assessment out there, but I think it takes a multi-faceted approach to solve the AI-as-a-cheat-bot dilemma. Before we dive into solutions, let’s first frame the cheating situation in education.
Why do students cheat?
This, to me, is the most essential question we as educators should be asking ourselves. It’s a question that, quite honestly, most of us want to avoid because it forces us to reflect on our own practices. In my book Learning Evolution, I share an analogy that ties cheating to another poor student behavior: vaping. If you catch a student vaping and you take away the vape, did you address the behavior? No, you just put a temporary roadblock in front of it. Cheating with AI is much the same.
If a student uses AI to cheat and you take away the AI tool they used, did you address the behavior? Rather than blocking tools, we should address the motivation behind why students cheat. Here are just a few examples a recent group of high school students (who prefer to remain anonymous) shared with me as to why they cheat:
- Pressure to get the grade – The American educational experience is built to encourage competition between students. There’s a lot of academic pressure on students to perform and get a high grade as it determines their future access to higher education institutions. In this scenario, learning takes a back seat to getting the best grade.
- Lack of interest in the topic – Not every student will need to use Calculus or Biology in their life. We won’t all become amazing authors or historians either. However, we subject students to antiquated and uninspiring curriculum rooted in the early 1900s. Then we deliver it using a model we learned in the 1970s, and we’re surprised when they aren’t interested in our subject?
- Lack of motivation or laziness – Again, they don’t really want to take the time to learn a topic, so they look to cut corners.
- Lack of understanding – Students all learn in different ways. However, we often teach the same way…to the middle. While I think AI tools could help us finally personalize learning and help with various learning styles, we aren’t quite there yet. If a student struggles to understand a concept, even if they want to learn about it, they might feel the need to cheat until they figure it out.
- Risk-taking behavior can be exciting – I’ll be honest, I did a lot of stupid, risky stuff as a teenager. It’s part of growing independence. Teenagers push boundaries. Trying to “beat the system” or find the “cheat code” to finish a game is exciting. Take that innate behavior in many kids and then add a lack of motivation and interest in a topic with pressure to succeed, and the recipe for cheating becomes evident.
How prevalent is cheating now with AI?
I remember vividly “my friend” writing small words in pen on their hand before a test. A couple of decades ago, students were using their TI-81 calculator to store answers before they took a test. Cheating is not new, but is it increasing in this age of generative AI? The answer might surprise you.
According to research from Stanford University released this past fall, students’ proclivity to cheat has remained constant even with the introduction of AI. Their research shows that nearly 70% of students admit to cutting a corner at some point, and that number was the same before the introduction of ChatGPT as it was after. This goes back to my earlier point: it’s not necessarily the tool, but rather the behavior, that should be addressed.
Could blocking AI tools cause an equity issue?
When a student hires a tutor to help them write their college essay, society believes this is an acceptable form of support and not cheating. When a student has a parent help them with an assignment, society again accepts this as “not cheating.” When a teacher helps a single student or provides additional support to a small group of students, society doesn’t have an issue with this. Even when a student uses a tool like autocorrect, society largely doesn’t feel like this is cheating.
The problem is, not every student has the funds to pay for a tutor. Not every student has a parent at home who can help them with their homework. Not every child has a supportive teacher. When we ban a tool that can help with editing, reviewing, brainstorming, and tutoring, we create a major equity gap between those “societally acceptable” supports and the ones we label as cheating.
So if we can’t block the tools and we don’t necessarily feel the need to change up what we are teaching, how can we actually discourage students from using AI tools unethically? Here are three different “what if” scenarios that I propose for educators to consider.
What if…we shifted our focus from final product to learning process?
One way to address cheating is to change where we place weight when evaluating learning. Modern education has focused almost solely on evaluating the final product of learning rather than the process. Whenever you place weight and pressure on a final product (think term paper or presentation), you signal to students what is most important.
The truth is, we should be evaluating the final product AND the learning process. If we shift even a little of the weight in our grading practices toward the ‘how’ and ‘why’ of student learning rather than just the ‘what,’ we make cheating obsolete. A student can get ChatGPT to write a paper on the American Revolution, but can they get it to form their opinion on the forces that led to it? Or describe what it would have felt like to live through that period of our country’s history?
The easiest solution is to have students reflect on what they learned. This could be a written, verbal, or recorded reflection about the things they learned and the mistakes they made along the way. Reflection, or reflective observation, is a major part of the learning cycle. It should be part of their final product, and shifting focus from product to process eliminates the concern that chatbots will just do everything for them. This does mean educators will need to lean on rubrics more to evaluate student learning versus just checking multiple-choice quiz answers, which could mean more time spent grading. But it doesn’t have to be that way.
What if…we asked students to show their learning process?
Educators already face a shortage of time in their day-to-day teaching lives. Grading a final product is the quick and easy way to check a box for student learning, even if it’s not the best way. However, spending additional hours we don’t have to evaluate the learning process is also challenging. I suggest a different strategy: have students prove their learning process to you.
I’ve spent the last several months looking for tools that could help students and teachers accomplish this. Using free tools like the Chrome extension Learnics, students “capture” part of their learning process and then submit that along with their final product to prove their own learning. Here’s a video example of me doing some research on a distant relative while capturing my work:
What if…we asked students to use AI as a rough draft?
Rather than block or ban the tool, encourage its responsible use. My friend Dr. Nick Cull, professor of public diplomacy at USC, did this very thing. The assignment was to take the point of view of another country lobbying the United States for a particular cause. Rather than tell his students not to use AI, he told them to have ChatGPT write the entire paper for them. He then graded this first draft as a “B-” paper and instructed his students to “make it into an A.” The students had to find the misinformation in the original draft and improve upon it. In some ways, they had to be smarter than the AI.
The papers the students turned in were some of the best he’d ever seen, and as a result, the students understood the concepts more deeply than they had before. (Check out my podcast interview with him here.) Using generative AI as a tool for learning, and teaching the digital literacy and ethics around its use, can make for powerful student learning.
We are all still learning how best to use AI with our students. We know it will be a part of their future, so blocking the tool only hinders their development. We also know that students cheat, so rather than focus our energy on the tool used for cheating, we should tackle some of the reasons why they cheat. Ultimately, as an educator reading this post, you should reflect on what you value most in your classroom: learning or grades?
Carl Hooker is an international speaker and trainer. He works with schools and events across the country to thoughtfully integrate tools like AI into learning. His latest book Learning Evolution shares several examples, strategies and ideas like this one. If you are interested in booking Carl for your next event or professional development day, fill out this speaking form to get more information.
