Generative AI still cannot consistently or reliably create content without substantial inaccuracies, racial and gendered biases, or outright fabrications.

In our previous Generative Artificial Intelligence 101 blog post, we defined Generative AI and discussed its many benefits. This time, we will discuss why we cannot yet rely on Generative AI completely.

While Generative AI is leaps and bounds more advanced than any machine learning program that came before it, the current models are still plagued with bugs that result in unreliable and inconsistent output. This is largely because Generative AI programs still cannot consistently sort reliable information from misinformation.

Extensive evidence documents that current Generative AI programs produce material exhibiting biases that perpetuate or amplify Western stereotypes, going so far as to apply bizarre clichés to basic objects, such as toys or homes.[1] Examples include image outputs from generators such as Stable Diffusion and DALL-E 3 that respond to prompts with a wide variety of offensive stereotypes: “attractive” people are white and young; “leaders” are men; Muslims are men who wear turbans or other head coverings; people who receive welfare services are Black; and so on.[2]

Another bug is Generative AI’s tendency to manipulate input images into output that is “bizarre and grotesque,” as Getty Images argued in its lawsuit against Stability AI.[3] Generative AI output images can depict humans with missing or extra appendages, fail to accurately follow standards of perspective and depth, or include unrelated elements in the background of an image.[4]

Probably the best-known and most concerning issue is Generative AI’s tendency to “hallucinate,” or essentially make up data to include in its output. This happens in part because false or inaccurate input data informs the program’s output, and Generative AI does not have the capacity to determine what is true or false. Generative AI’s ability to create new material, however, takes this fabrication a step further. Made-up answers to prompts are often difficult to detect because generators integrate false information with accurate facts, blurring details and conflating historical individuals, events, and ideas. Particularly concerning is the trend of laypersons using Generative AI chatbots to request legal and medical information and, in turn, relying upon the output without knowing that some or all of the information included is fabricated.[5]

There is hope, though, that as developers continue to tweak and update Generative AI programs, they will find ways to address these various bugs. Scientists have already made progress on the issue of hallucinations by developing a new method for detecting when an AI tool is likely to be hallucinating. Research published in June 2024 in the peer-reviewed scientific journal Nature found that this new method can discern between correct and incorrect AI-generated answers approximately 79% of the time.[6] Advancements like these suggest that Generative AI may become a genuinely useful tool in the future, but it cannot function as a standalone resource at this time.

Up next: will using Generative AI get you sued? We will explore the serious intellectual property risks associated with Generative AI content, especially copyright and trademark infringement.


[1] Tiku, Nitasha; Schaul, Kevin; and Chen, Szu Yu, “These fake images reveal how AI amplifies our worst stereotypes,” The Washington Post (2023), https://www.washingtonpost.com/technology/interactive/2023/ai-generated-images-bias-racism-sexism-stereotypes/ (last visited Jul 9, 2024).

[2] Shipman, Matt, “Can AI Do That? The Challenges, Limitations, and Opportunities of Generative AI,” Medium (2024), https://medium.com/@shiplives/can-ai-do-that-the-challenges-limitations-and-opportunities-of-generative-ai-a1e3c0e0bc00 (last visited Jul 9, 2024).

[3] Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135-JLH (D. Del.).

[4] Di Placido, Dani, “The Problem With AI-Generated Art, Explained,” Forbes (2023), https://www.forbes.com/sites/danidiplacido/2023/12/30/ai-generated-art-was-a-mistake-and-heres-why/ (last visited Jul 9, 2024).

[5] Weise, Karen and Metz, Cade, “When A.I. Chatbots Hallucinate,” The New York Times (2023), https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html (last visited Jul 9, 2024).

[6] Perrigo, Billy, “Scientists Develop New Algorithm to Spot AI ‘Hallucinations’,” Time (2024), https://time.com/6989928/ai-artificial-intelligence-hallucinations-prevent/ (last visited Jul 9, 2024).
