AI Detection Tools

During the rapid initial rollout of generative AI tools, some organizations, such as Turnitin, developed and released tools designed to detect AI-generated text. Turnitin initially released its detector as part of its standard plagiarism detection software, and Northwestern instructors had access to a report flagging text likely to be AI-generated. Turnitin subsequently announced that it would remove this capability from its standard software in 2024 and would instead require separate licensing for universities wanting AI detection services.

Northwestern’s Position on AI Detection Tools

Northwestern has decided not to pursue purchasing AI detection services for the following reasons:

  1. AI detection tools appear to be biased against non-native English writers. Research has demonstrated that original prose written by non-native English writers is consistently misidentified as AI-generated.

  2. Writers can easily bypass AI detection tools through intermediate prompting strategies. Research and intuition suggest that generating text through multiple stages of prompting sharply limits the possibility of detecting AI use. (For example, simply prompting ChatGPT to self-edit with the prompt, “Elevate the provided text by employing literary language,” greatly reduced detection rates.)

  3. Given the problematic and ineffective nature of these tools (see #1 and #2), paying for them is not a good use of university resources.

Principles for Mitigating AI Tool Misuse

How can faculty members mitigate and respond to the misuse of AI tools without detection software? The following principles can help.

  1. Clearly communicate with students about acceptable AI tool usage for each assignment. Do not assume that acceptable use will be understood without explicit communication of expectations. Be specific about what type of use is acceptable, keeping in mind that some usage will be very difficult to prevent or detect and may be unnecessary to prevent (just as students may use a Google search as a very early research step but may still use robust research practices for the final product).

    1. In any case, consider requiring students to include with written work a link to relevant AI conversations (where possible) and a short appendix reflecting on their process: what was challenging or rewarding, and how they used AI or similar tools along the way.

  2. Design learning activities to minimize the temptation to misuse AI tools.

    1. Get to know the skill level and communication style of each student. Use a variety of activity types (discussion forums, video recordings, live class discussions, reflection papers, etc.) rather than relying on a small number of high-stakes papers to assess student learning.

    2. Explain clearly what students stand to lose by opting for a shortcut. Frame it directly for students: “The exercises are for you. It’s about how good an engineer you want to become.”

    3. Clarify for students the relevance and growth provided by each learning activity. Choosing to do something effortful, inconvenient, or inefficient is more appealing when we understand its purpose and believe it helps us grow. If you don’t already, consider adding a student-centered purpose sentence or two to each graded task.

    4. Foster student investment in projects. If it is easier or more enjoyable for students to use their own skills and ideas than to generate quality AI content, they are more likely to do so.

      1. Allow student choice in assessment topics and formats, including media or graphical options such as videos, concept maps, or infographics rather than written essays.

      2. Invite students to submit written works in lower-stakes process stages, such as one or more of the following before the final submission: topic selection, annotated bibliography, outline, conference draft, shown revisions, etc.

Principles for Responding to Suspected AI Tool Misuse

  1. Look for tangible evidence of student AI tool misuse or plagiarism. While it is usually impossible to prove that AI tools were misused, when you suspect misuse, ask yourself what basis you have for that suspicion. Examples include a verbal style that is inconsistent with the student’s other submitted assignments or a treatment of the topic that seems generic or out of sync with the course materials. Investigating potential cheating involves an element of discernment and judgment, but be sure to treat all students with respect and presumed innocence throughout the initial investigation of possible AI tool misuse or plagiarism.

  2. Without prejudging, have a conversation with the student. Let the student know that you have some questions about the assignment. Ask the student about the process he or she used to produce the material. In an in-person or virtual meeting, ask the student for a brief verbal summary of the points made in the paper. These types of interventions usually make the situation clear.

    1. Students may be able to demonstrate original writing by showing a document’s revision history with incremental additions and changes.

  3. If AI tool misuse or cheating has occurred, follow established procedures for dealing with student cheating. See this page for more information.

Resources

  1. Scholarly

    1. GPT detectors are biased against non-native English writers

    2. Can AI-Generated Text be Reliably Detected?

    3. On the Possibilities of AI-Generated Text Detection

    4. The Science of Detecting LLM-Generated Texts

  2. Popular

    1. From OpenAI

      1. Educator FAQ | OpenAI Help Center

      2. Teaching with AI: a guide for teachers using ChatGPT in their classroom, including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.

    2. What to do when you’re accused of AI cheating

    3. OpenAI confirms that AI writing detectors don’t work  

    4. Why AI writing detectors don’t work