As generative AI tools become increasingly sophisticated and widely used, the ability to distinguish between AI-generated and human-written text has become a hot topic. While AI can be an incredibly powerful tool for writing and research, it’s important to understand its strengths, limitations, and potential “tells” that may indicate its use. Below are some tips for detecting AI-generated text and, more importantly, how to improve and personalize AI-assisted writing to make it truly your own.

Last week’s Inside Higher Ed offers tips on how to distinguish AI-generated text from human-written text.  The author, a literature professor, generated 50+ essays with AI and compared them to the characteristics she had come to expect from text written by humans.  Her findings are bulleted below, supplemented by my own experience.

  • AI-generated essays are often confidently wrong.

Oh yes, I’ve found the confidence thing to be absolutely true.  Generative AI is like the world’s biggest people pleaser.  It tells you what it thinks you want to hear and goes all in on its response. But it’s worth noting that many times, it’s confidently right.  So here’s my first tip, and it’s a big one: It’s up to you as the user to do your due diligence in vetting the response.

  • AI essays tend to get straight to the point, and
  • AI-generated essays are often list-like

Yeah, these two can be AI “tells,” but they also point to one of the major strengths of generative AI:  organization of ideas.  I frequently use AI to refine my own writing to help me order my thoughts and cut to the chase.

Sometimes it organizes its responses into lists.  So, here’s my second tip, and it’s a theme I’ll be repeating throughout: if you don’t want a list, then ask it to present the material in narrative or some other format.  Generative AI is really good at following directions, so tell it exactly what you want.

  • AI-generated work is often banal,
  • AI-generated essays are often repetitive,
  • the paragraphs of AI-generated essays also often begin with formulaic transitional phrases, and
  • AI-generated text tends to remain in the third person

Yes, I’ve seen all these things in AI-generated writing.  Out-of-the-box, it can be formulaic and fairly bland.  But I’ve also seen it generate some really wild and creative stuff when I’ve asked it to do so.

So, I’m going to repeat my second tip: If you don’t want it to be banal, repetitive, formulaic, or in the third person, then tell it that.  Be descriptive enough in your prompts to direct it to do what you want.  Tell it who you are, who your reader is, and what tone you’re looking for.  Think of the responses it provides as suggestions for you to take or leave.  In exchanges with AI, I’ll often say something like, “No, I don’t like that.  Try this instead.”  Or “Yes, that’s what I was thinking of.  Give me more of that.”
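For readers who script their AI interactions, the tip above can be made concrete.  Here’s a minimal sketch of a prompt builder that bakes in the three elements I mentioned (who you are, who your reader is, and the tone you want); the function name and parameters are my own illustrative choices, not any particular tool’s API:

```python
def build_prompt(task, author_role, audience, tone, output_format="narrative prose"):
    """Assemble a descriptive prompt from the elements the tip calls out:
    who you are, who your reader is, and the tone and format you want."""
    return (
        f"You are assisting {author_role}. "
        f"The reader is {audience}. "
        f"Write in a {tone} tone, formatted as {output_format} (no bullet lists). "
        f"Task: {task}"
    )

# Example: a descriptive prompt instead of a bare "summarize this" request.
prompt = build_prompt(
    task="Summarize the fair-use factors in two paragraphs.",
    author_role="a law librarian writing a blog post",
    audience="first-year law students",
    tone="conversational but precise",
)
```

The point isn’t the code itself, which is trivial, but the habit it enforces: every request carries an author, an audience, a tone, and a format, so the model never has to guess.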

And remember that first tip again: AI may have helped you or even fully generated the text, but if you’re putting that text out into the world, you’d better make sure you’re comfortable and confident having your name on it.  Always vet the content.

  • AI-produced text tends to discuss “readers” being “challenged” to “confront” ideologies or being “invited” to “reflect” on key topics

Yeah, using flowery, sophisticated-sounding words is something I’ve observed.  In fact, there’s a new study from Cornell that explores “excess word usage” as a way to detect generative AI use in academic texts.  The authors examined abstracts in PubMed from 2010 to 2024 and found an “unprecedented increase in excess style words” in recent scholarship, which they attribute to ChatGPT usage.

Per the article, the following real 2023 abstracts illustrate this ChatGPT-style flowery excess language:

By meticulously delving into the intricate web connecting […] and […], this comprehensive chapter takes a deep dive into their involvement as significant risk factors for […].

A comprehensive grasp of the intricate interplay between […] and […] is pivotal for effective therapeutic strategies.

Initially, we delve into the intricacies of […], accentuating its indispensability in cellular physiology, the enzymatic labyrinth governing its flux, and the pivotal […] mechanisms.
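The study’s core idea, counting how often certain “style words” appear, is simple enough to sketch.  Below is an illustrative toy version: the word list is my own small sample of the kinds of words flagged in those excerpts, not the study’s actual vocabulary, and the real analysis compared frequencies against pre-ChatGPT baselines rather than just counting hits:

```python
import re
from collections import Counter

# Illustrative sample of "excess style words" of the kind the study flags;
# not the study's actual word list.
STYLE_WORDS = {"delve", "delves", "delving", "intricate", "intricacies",
               "meticulously", "pivotal", "comprehensive", "interplay"}

def style_word_hits(text):
    """Count occurrences of flagged style words in a piece of text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if t in STYLE_WORDS)

# Try it on a sentence in the style of the quoted 2023 abstracts.
abstract = ("By meticulously delving into the intricate web connecting these "
            "pathways, this comprehensive chapter takes a deep dive into "
            "their pivotal involvement.")
hits = style_word_hits(abstract)
```

A single sentence lighting up five flagged words is exactly the kind of signal the study measured at scale, though no word counter is proof of AI use on its own.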

So, going back to my second tip: if you don’t want it to use these types of flowery, sophisticated-sounding excess words, then prompt it to use another tone.  You’d be amazed at the many different tones and voices it can give you, such as first-year law student, new associate, or experienced attorney.  You can also ask it to write in the tone of a specific Supreme Court Justice or well-known author.  Whether this raises IP issues is another ball of wax.

I recommend reading the full Inside Higher Ed article for further discussion of each point.

As someone who uses AI almost every day and teaches law students about the ethical and effective use of generative AI, I believe it’s valuable to be aware of these potential “tells” while also recognizing their limitations. While there are indeed characteristics that may hint at AI-generated content, it’s important to remember that they are not foolproof indicators.

The key is not to focus solely on detection, but rather on how to use AI effectively and ethically as a writing tool. By providing clear, specific prompts and actively refining AI-generated content, we can use the power of AI while maintaining our unique voice and ensuring the final product reflects our own thoughts and insights. This approach transforms AI from a potential shortcut into a valuable writing assistant, much like spell-check or grammar tools.

Remember, the goal is not to outsource our thinking to AI, but to use it as a tool to enhance our own abilities and productivity. As we continue to explore the possibilities and challenges of AI in research and writing, I encourage you to experiment with these tools responsibly and always prioritize your own critical thinking and analysis.

And by way of full disclosure: I used Claude.AI to help me organize my ideas and suggest phrasing for this post.  Can you tell?