How Do AI Detectors Work? | Methods & Reliability

AI detectors (also called AI writing detectors or AI content detectors) are tools designed to detect when a text was partially or entirely generated by artificial intelligence (AI) tools such as ChatGPT.

AI detectors can give an indication of whether a piece of writing was likely generated by AI. This is useful, for example, to educators who want to check that their students are doing their own writing or to moderators trying to remove fake product reviews and other spam content.

However, these tools are quite new and experimental, and they’re generally considered somewhat unreliable for now. Below, we explain how they work, how reliable they really are, and how they’re being used.

Note
Universities and other institutions are still developing their stances on how ChatGPT and similar tools may be used. Always follow your institution’s guidelines over any suggestions you read online. Check out our guide to current university policies on AI writing for more information.

You may also be interested in our research into the best AI detectors.

How do AI detectors work?

AI detectors are usually based on language models similar to those used in the AI writing tools they’re trying to detect. The language model essentially looks at the input and asks “Is this the sort of thing that I would have written?” If the answer is “yes,” it concludes that the text is probably AI-generated.

Specifically, the models look for two things in a text: perplexity and burstiness. The lower these two variables are, the more likely the text is to be AI-generated. But what do these unusual terms mean?

Perplexity

Perplexity is a measure of how unpredictable a text is: how likely it is to perplex (confuse) the average reader (i.e., make no sense or read unnaturally).

  • AI language models aim to produce texts with low perplexity, which are more likely to make sense and read smoothly but are also more predictable.
  • Human writing tends to have higher perplexity: more creative language choices, but also more typos.

Language models work by predicting what word would naturally come next in a sentence and inserting it. For example, in the sentence “I couldn’t get to sleep last …” there are more and less plausible continuations, as shown in the table below.

Levels of perplexity

  • Low perplexity: “I couldn’t get to sleep last night.” Probably the most likely continuation.
  • Low to medium perplexity: “I couldn’t get to sleep last time I drank coffee in the evening.” Less likely, but it makes grammatical and logical sense.
  • Medium perplexity: “I couldn’t get to sleep last summer on many nights because of how hot it was at that time.” Coherent, but quite unusually structured and long-winded.
  • High perplexity: “I couldn’t get to sleep last pleased to meet you.” Grammatically incorrect and illogical.
Low perplexity is taken as evidence that a text is AI-generated.
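
To make this concrete, here is a minimal sketch of how a perplexity score can be computed with an off-the-shelf language model. It assumes the Hugging Face transformers library and the small GPT-2 model, which are illustrative choices only; real detectors use their own models, calibration, and decision thresholds.

```python
# A minimal sketch of perplexity scoring, assuming the Hugging Face
# "transformers" library and the small GPT-2 model (illustrative choices only).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for the text (lower = more predictable)."""
    encodings = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input as labels makes the model return the average
        # cross-entropy loss of its next-word predictions for this text.
        outputs = model(encodings.input_ids, labels=encodings.input_ids)
    return torch.exp(outputs.loss).item()

print(perplexity("I couldn't get to sleep last night."))                # low
print(perplexity("I couldn't get to sleep last pleased to meet you."))  # high
```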

Burstiness

Burstiness is a measure of variation in sentence structure and length—something like perplexity, but on the level of sentences rather than words:

  • A text with little variation in sentence structure and sentence length has low burstiness.
  • A text with greater variation has high burstiness.

AI text tends to be less “bursty” than human text. Because language models predict the most likely word to come next, they tend to produce sentences of average length (say, 10–20 words) and with conventional structures. This is why AI writing can sometimes seem monotonous.

Low burstiness indicates that a text is likely to be AI-generated.
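
Burstiness has no single standard formula, but a rough proxy is how much sentence length varies across a text. The sketch below assumes plain Python and a deliberately naive sentence splitter; it reports the standard deviation of sentence lengths relative to their mean, where a low value corresponds to low burstiness.

```python
# A rough, illustrative proxy for burstiness: variation in sentence length.
# The sentence splitter is deliberately naive; real detectors use richer features.
import re
import statistics

def burstiness_proxy(text: str) -> float:
    """Standard deviation of sentence lengths (in words) relative to their mean.
    Values near 0 mean very uniform sentences, i.e., low burstiness."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

monotone = "The cat sat on the mat. The dog sat on the rug. The bird sat on the perch."
varied = ("The cat sat on the mat. Suddenly, without any warning at all, "
          "the dog burst through the door and chased it around the room. Chaos.")
print(burstiness_proxy(monotone))  # close to 0: low burstiness
print(burstiness_proxy(varied))    # much higher: high burstiness
```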

A potential alternative: Watermarks

OpenAI, the company behind ChatGPT, claims to be working on a “watermarking” system: text generated by the tool would carry an invisible watermark that another system could then detect, confirming for certain that the text was AI-generated.

However, this system has not been developed yet, and the details of how it might work are unknown. It’s also not clear whether the proposed watermarks will remain when the generated text is edited. So while this may be a promising method of AI detection in the future, we just don’t know yet.
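
The details of any such watermark are not public, but published research on statistical text watermarking gives a sense of how a scheme like this could work: the generator is quietly nudged toward a pseudorandom “green list” of words, and a detector then checks whether a text contains suspiciously many of them. The toy sketch below illustrates only that general idea; the key, the hashing, and the scoring are invented for illustration and are not OpenAI’s method.

```python
# A toy illustration of statistical text watermarking. This is NOT OpenAI's
# (unpublished) scheme; the key, hashing, and scoring are invented for illustration.
import hashlib

SECRET_KEY = "shared-secret"  # hypothetical key known to generator and detector

def is_green(word: str) -> bool:
    """Pseudorandomly assign roughly half of all words to a secret 'green list'."""
    digest = hashlib.sha256((SECRET_KEY + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Share of words in the text that are green-listed."""
    words = text.split()
    return sum(is_green(w) for w in words) / len(words) if words else 0.0

# A watermarking generator would quietly prefer green words while writing, so its
# output would score well above 0.5 here; ordinary text sits near 0.5 by chance.
print(green_fraction("This sentence was written without any watermark at all."))
```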

How reliable are AI detectors?

In our experience, AI detectors normally work well, especially with longer texts, but they can fail if the AI output was prompted to be less predictable or was edited or paraphrased after being generated. Detectors can also misidentify human-written text as AI-generated if it happens to match the criteria (low perplexity and low burstiness).

Our research into the best AI detectors indicates that no tool can provide complete accuracy; the highest accuracy we found was 84% for a premium tool, while the best free tool scored 68%.

These tools give a useful indication of how likely it is that a text was AI-generated, but we advise against treating them as evidence on their own. As language models continue to develop, it’s likely that detection tools will always have to race to keep up with them.

Even the more confident providers usually admit that their tools can’t be used as definitive evidence that a text is AI-generated, and universities so far don’t put much faith in them.

Note
The tactics people might use to make AI writing less detectable can also simply make the text look highly suspicious or inappropriate for its intended purpose.

For example, inserting spelling mistakes or illogical word choices into a text will make it less likely to be flagged by an AI detector. But a text full of spelling mistakes and illogical word choices is not going to be graded well as a piece of academic writing.

AI detectors vs. plagiarism checkers

AI detectors and plagiarism checkers may both be used by universities to discourage academic dishonesty, but they differ in terms of how they work and what they’re looking for:

  • AI detectors try to find text that looks like it was generated by an AI writing tool. They do this by measuring specific characteristics of the text (perplexity and burstiness)—not by comparing it to a database.
  • Plagiarism checkers try to find text that is copied from a different source. They do this by comparing the text to a large database of previously published sources, student theses, and so on, and detecting similarities—not by measuring specific characteristics of the text.

However, we’ve found that plagiarism checkers do flag parts of AI-generated texts as plagiarism. This is because AI writing draws on sources that it doesn’t cite. While it usually generates original sentences, it may also produce sentences that are copied directly from existing texts, or at least very similar to them.

This is most likely to happen with popular or general-knowledge topics and less likely with more specialized topics that have been written about less. Moreover, as more AI-generated text appears online, AI writing may become more likely to be flagged as plagiarism—simply because other similarly worded AI-generated texts already exist on the same topic.

So, while plagiarism checkers aren’t designed to double as AI detectors, they may still flag AI writing as partially plagiarized in many cases. But they’re certainly less effective at finding AI writing than an AI detector.
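
To make the difference concrete, here is a toy sketch of the matching approach a plagiarism checker relies on: comparing a submission against a “database” of sources using word 5-gram overlap. The database, the example texts, and the scoring are all invented for illustration; commercial checkers use vast indexes and much more robust matching.

```python
# A toy illustration of the plagiarism-checking approach: compare a submission
# against a small in-memory "database" of sources using word 5-gram overlap.
# Everything here is invented for illustration.

def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub_grams = ngrams(submission, n)
    return len(sub_grams & ngrams(source, n)) / len(sub_grams) if sub_grams else 0.0

database = [
    "The mitochondrion is the powerhouse of the cell and produces most of its energy",
]
submission = "As is well known, the mitochondrion is the powerhouse of the cell today"
print(max(overlap_score(submission, source) for source in database))
```

An AI detector, by contrast, never consults such a database; it scores the text itself, much like the perplexity and burstiness sketches above.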

What are AI detectors used for?

AI detectors are intended for anyone who wants to check whether a piece of text might have been generated by AI. Potential users include:

  • Educators (teachers and university instructors) who want to check that their students’ work is original
  • Publishers who want to ensure that they only publish human-written content
  • Recruiters who want to ensure that candidates’ cover letters are their own writing
  • Web content writers who want to publish AI-generated content but are concerned that it may rank lower in search engines if it is identified as AI writing
  • Social media moderators, and others fighting automated misinformation, who want to identify AI-generated spam and fake news

Because of concerns about their reliability, most users are reluctant to rely fully on AI detectors for now, but the tools are already gaining popularity as a way to support existing suspicions that a text was AI-generated.

Detecting AI writing manually

As well as using AI detectors, you can also learn to spot the identifying features of AI writing yourself. It’s difficult to do so reliably—human writing can sometimes seem robotic, and AI writing tools are becoming more and more convincingly human—but you can develop a good instinct for it.

The specific criteria that AI detectors use—low perplexity and burstiness—are quite technical, but you can try to spot them manually by looking for text:

  • That reads monotonously, with little variation in sentence structure or length
  • With predictable, generic word choices and few surprises

You can also use approaches that AI detectors don’t, by watching out for:

  • Overly polite language: Chatbots like ChatGPT are designed to play the role of a helpful assistant, so their language is very polite and formal by default—not very conversational.
  • Hedging language: Look for a lack of bold, original statements and for a tendency to overuse generic hedging phrases (these can even be counted automatically, as sketched after this list): “It’s important to note that …” “is widely regarded as …” “is considered …” “Some might say that …”
  • Inconsistency in voice: If you know the usual writing style and voice of the person whose writing you’re checking (e.g., a student), then you can usually see when they submit something that reads very differently from how they normally write.
  • Unsourced or incorrectly cited claims: In the context of academic writing, it’s important to cite your sources. AI writing tools tend not to do this or to do it incorrectly (e.g., citing nonexistent or irrelevant sources).
  • Logical errors: AI writing, although it’s increasingly fluent, may not always be coherent in terms of its actual content. Look for points where the text contradicts itself, makes an implausible statement, or presents disjointed arguments.
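
As a small illustration of the hedging-language cue, the sketch below counts a handful of generic hedging phrases in a text. The phrase list is our own illustrative choice rather than any standard one, and a high count is at most a hint, never proof that a text is AI-generated.

```python
# A simple, illustrative counter for generic hedging phrases. The phrase list is
# an arbitrary example, and a high count is only a hint, never proof of AI use.
HEDGING_PHRASES = [
    "it's important to note that",
    "it is important to note that",
    "is widely regarded as",
    "is considered",
    "some might say that",
]

def count_hedges(text: str) -> int:
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in HEDGING_PHRASES)

sample = ("It is important to note that the novel is widely regarded as a classic. "
          "Some might say that its influence is considered unmatched.")
print(count_hedges(sample))  # 4
```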

In general, just trying out some AI writing tools, seeing what kinds of texts they can generate, and getting used to their style of writing are good ways to improve your ability to spot text that may be AI-generated.

AI image and video detectors

AI image and video generators such as DALL-E, Midjourney, and Synthesia are also gaining popularity, and it’s increasingly important to be able to detect AI images and videos (also called “deepfakes”) to prevent them from being used to spread misinformation.

Due to the technology’s current limitations, there are some obvious giveaways in a lot of AI-generated images and videos: anatomical errors like hands with too many fingers; unnatural movements; inclusion of nonsensical text; and unconvincing faces.

But as these AI images and videos become more advanced, they may become harder to detect manually. Some AI image and video detectors are already out there: for example, Deepware, Intel’s FakeCatcher, and Illuminarty. We haven’t tested the reliability of these tools.

Other interesting articles

If you want more tips on using AI tools, understanding plagiarism, and citing sources, make sure to check out some of our other articles with explanations, examples, and formats.

Frequently asked questions

How accurate are AI detectors?

AI detectors aim to identify the presence of AI-generated text (e.g., from ChatGPT) in a piece of writing, but they can’t do so with complete accuracy. In our comparison of the best AI detectors, we found that the 10 tools we tested had an average accuracy of 60%. The best free tool had 68% accuracy, the best premium tool 84%.

Because of how AI detectors work, they can never guarantee 100% accuracy, and there is always at least a small risk of false positives (human text being marked as AI-generated). Therefore, these tools should not be relied upon to provide absolute proof that a text is or isn’t AI-generated. Rather, they can provide a good indication in combination with other evidence.

How can I detect AI writing?

Tools called AI detectors are designed to label text as AI-generated or human-written. AI detectors work by looking for specific characteristics in the text, such as a low level of randomness in word choice and sentence length. These characteristics are typical of AI writing, allowing the detector to make a good guess at when a text is AI-generated.

But these tools can’t guarantee 100% accuracy. Check out our comparison of the best AI detectors to learn more.

You can also manually watch for clues that a text is AI-generated—for example, a very different style from the writer’s usual voice or a generic, overly polite tone.

Can I cite ChatGPT?

Yes, in some contexts it may be appropriate to cite ChatGPT in your work, especially if you use it as a primary source (e.g., you’re studying the abilities of AI language models).

Some universities may also require you to cite or acknowledge it if you used it to help you in the research or writing process (e.g., to help you develop research questions). Check your institution’s guidelines.

Since ChatGPT isn’t always trustworthy and isn’t a credible source, you should not cite it as a source of factual information.

In APA Style, you can cite a ChatGPT response as a personal communication, since the answers it gave you are not retrievable for other users. Cite it like this in the text: (ChatGPT, personal communication, February 11, 2023).

How do I access ChatGPT?

You can access ChatGPT by signing up for a free account:

  1. Follow this link to the ChatGPT website.
  2. Click on “Sign up” and fill in the necessary details (or use your Google account). It’s free to sign up and use the tool.
  3. Type a prompt into the chat box to get started!

A ChatGPT app is also available for iOS, and an Android app is planned for the future. The app works similarly to the website, and you log in with the same account for both.

How long will ChatGPT be free?

It’s not clear whether ChatGPT will stop being available for free in the future—and if so, when. The tool was originally released in November 2022 as a “research preview.” It was released for free so that the model could be tested on a very large user base.

The framing of the tool as a “preview” suggests that it may not be available for free in the long run, but so far, no plans have been announced to end free access to the tool.

A premium version, ChatGPT Plus, is available for $20 a month and provides access to features like GPT-4, a more advanced version of the model. It may be that this is the only way OpenAI (the publisher of ChatGPT) plans to monetize it and that the basic version will remain free. Or it may be that the high costs of running the tool’s servers lead them to end the free version in the future. We don’t know yet.
