During the rapid initial rollout of generative AI tools, some organizations, such as Turnitin, developed and released tools intended to detect AI-generated text. Turnitin initially released its tool as part of its standard plagiarism detection software, and Northwestern instructors had access to a report that flagged text likely to have been generated by AI. Turnitin subsequently announced that it would remove this capability from its standard software in 2024 and would instead require separate licensing for universities wanting AI detection services.

Northwestern’s Position on AI Detection Tools

Northwestern has decided not to pursue purchasing AI detection services for the following reasons:

  1. AI detection tools appear to be biased against non-native English writers. Research has demonstrated that prose written by non-native English writers is consistently misidentified as AI-generated.

  2. Writers can easily bypass AI detection tools with intermediate prompting strategies. Research and intuition alike suggest that generating text through multiple stages of prompting sharply limits the possibility of detecting AI use. (For example, simply prompting ChatGPT to self-edit with the instruction, “Elevate the provided text by employing literary language,” greatly reduced detection rates.)

  3. Given the problematic and ineffective nature of these tools (see #1 and #2), paying for them is not a good use of university resources.

Principles for Mitigating AI Tool Misuse

How can faculty members mitigate and respond to the misuse of AI tools without relying on detection software? The following principles can serve these purposes.

  1. Clearly communicate with students about acceptable AI tool usage for each assignment. Do not assume that acceptable use will be understood without explicit communication of expectations. Be specific about what type of use is acceptable, keeping in mind that some usage will be very difficult to prevent or detect and may not need to be prevented (just as students may use a Google search as a very early research step yet still apply robust research practices to the final product).

  2. Structure course activities to help you get to know the skill level and communication style of each student. Use a variety of activity types (discussion forums, video recordings, live class discussions, reflection papers, etc.) rather than relying on a small number of high-stakes papers to assess student learning.

  3. Look for tangible evidence of student AI tool misuse or plagiarism. While it is usually impossible to prove that AI tools were misused, when you suspect misuse, ask yourself what basis you have for the suspicion. Examples include a writing style that is inconsistent with the student’s other submissions or a treatment of the topic that seems generic or out of sync with the course materials. Investigating potential cheating involves an element of discernment and judgment, but be sure to treat all students with respect throughout the initial inquiry into possible AI tool misuse or plagiarism.

  4. Without prejudging, have a conversation with the student. Let the student know that you have some questions about the assignment. Ask the student to describe the process they used to produce the material. In an in-person or virtual meeting, ask the student for a brief verbal summary of the points made in the paper. These types of interventions usually make the situation clear.

  5. If AI tool misuse or cheating has occurred, follow established procedures for dealing with student cheating. See this page for more information.
