Using AI is the norm these days.
There are simply too many good AI tools online not to use. From grammar checker Grammarly to productivity platform Notion, there’s a tool for everything, so it’s no surprise that students are making the most of them.
But when hearing reports of schools giving zero marks to students using AI tools, it’s worth taking a step back and thinking: what counts as cheating when you’re using AI in schools?
First, a necessary disclaimer: every university, department, and lecturer has their own stance on students using AI on their assignments; Study International strongly recommends you check with your lecturer, department, and university before doing so.

Pro-tip: thinking about using AI for your assignment? The best way to avoid getting caught is to not use it at all. Source: AFP
What counts as cheating when using AI
If there’s one thing we know for certain, it’s this: putting your assignment into any AI tool and copying the answers word for word is cheating.
The same goes for stitching different pieces of AI-generated work into one cohesive assignment. This is known as patchwork plagiarism: you might think you’re creating a new piece of original work, but you’re not.
There are quite literally thousands of examples worldwide.
The Guardian reported almost 7,000 cases of UK university students cheating with AI in the 2023-24 academic year. An “AI in Higher Education: Student Perspectives” study found that 79% of its 8,000 student participants in Australia use AI to answer their questions. And more than half of the 1,000 students surveyed by BestColleges said they have used AI on assignments or exams.

You can still sound human while using AI. Source: AFP
What doesn’t count as cheating when using AI
You’re not using AI to cheat if, well, you’re not using it to generate the work in any capacity.
Case in point: the NTU case, where one of three accused students (two of whom admitted to using generative AI in their assignments) made three citation mistakes and used a reference organiser – a tool that collects and organises references and citations – to list her references in alphabetical order.
After a long dispute, the student successfully cleared herself of academic fraud: having looked closely at how the alphabetiser works, the academic panel concluded that her assignment involved no generative AI.
That said, we emphasise that if you’re ever in doubt about a particular AI tool, run it by your lecturer or department before using it.

Unfortunately, you’re more likely than not to be accused of using AI in your assignment these days. Source: AFP
The grey areas when it comes to using AI in schools
Unfortunately, much of what students do with AI in their assignments falls into a grey area.
Take Marley Stevens from the University of North Georgia, for example.
Stevens, who wrote an academic paper with only the help of Grammarly’s grammar and spell-checking features, was accused of using AI to write it when Turnitin’s AI-detection system flagged it as robot-written.
This resulted in zero marks on the paper, a final grade below what she needed to qualify for a scholarship, placement on academic probation for academic misconduct, and a US$105 fee to attend a seminar about cheating.
“I’ve had other teachers at this same university recommend that I use [Grammarly] for papers,” said Stevens in a TikTok video. “So are they trying to tell us that we can’t use autocorrect or spellcheckers or anything? What do they want us to do, type it into, like, a Notes app and turn it in that way?”
In response, the university sent an email to students, urging them to “be aware that some online tools used to assist students with grammar, punctuation, sentence structure, etc., utilise generative artificial intelligence (AI); which can be flagged by Turnitin.” The email also cited Grammarly as one of the most commonly flagged websites, advising students to exercise caution when deciding to use it.
Stevens’s professor for the class later shared that he had run her paper through another AI-detection tool, Copyleaks, which flagged her work as bot-written. However, when Stevens did the same, the work was deemed human-written instead.
“If I’m running it through now and getting a different result, that just goes to show that these things aren’t always accurate,” said Stevens.