Dear ChatGPT #2 – On AI Detection Software


In my first letter to ChatGPT, I discussed how misunderstood AI large language models have become thanks to the media turning AI into a buzzword. ChatGPT acquitted itself quite well in its response. But one area of controversy I didn’t dive into is the proliferation of AI detection software. This topic deserves its own treatment, because it frustrates me how these tools actually make the problem of AI-generated content worse.

Here’s what I had to say to ChatGPT on the matter.

Dear ChatGPT,

I wanted to seek your thoughts on the AI detection software tools being used to filter out content generated primarily with large language models such as yourself. While I entirely understand and appreciate that such tools are created with good intentions, their reliability and effectiveness are questionable at best. I support protecting human intellectual property and creative jobs, but I see more writers being hurt by these tools than protected by them.

My wife and I have run our own entirely human-written content through several of these tools, only for all of them to flag it as completely AI-generated. Conversely, we have generated content with AI, submitted it to the same tools, and watched it pass as human with flying colors. So, what’s going on here? Is AI detection software even a solution to the problems it’s supposed to solve?

From my vantage point, the developers of these authentication tools are simply taking money out of people’s pockets in exchange for a false peace of mind. Worse, such software has often been weaponized to eliminate writers who don’t fit the exact standard of prose preferred by corporate types. In schools, the dilemma is even greater: teachers are running essays and homework through these tools and failing students when the results come back with red flags, even when no AI tool was involved.

As a former ghostwriter myself, I see creating content with ChatGPT as essentially the same as hiring a ghostwriter. Sure, I don’t want my fellow writers to lose potential work because more companies are leaning on AI tools. Still, in my view, it’s no different from companies turning to low-paying international content mills. In education, detection software is just a way to filter out students who don’t fit the mold the tools were built to enforce. I hold these opinions because the expected range of what counts as “human content” seems very narrow.

This is a very controversial topic, but I do have some thoughts on why AI detection software isn’t working properly. First off, pretty much every detection tool I’ve seen can be fooled through clever initial prompting and careful editing of the outputs. After all, a large language model built on human input is necessarily going to be derivative. But many people seem to forget that all human language is derivative.

Even what we may believe to be original thoughts are still derivative of something we learned or experienced at some point in our lives; we had to develop our vocabulary somewhere. Therefore, many entirely human-written texts will raise red flags in AI detection software, because there are only so many ways to convey an idea that remain accessible to a general readership. Overlap is inevitable, because overlap is the whole point of language. Good writing works toward a common understanding, slightly modified to shift the conversation toward a new perspective.

I’d love your opinions on all of these concerns. What solution would ensure that AI-generated and human-generated content can coexist while also protecting the livelihoods of writers and those in writing-adjacent jobs?

Kind regards,

Amelia Desertsong

Here’s ChatGPT’s very thoughtful and well-organized response:

(Note: This response was generated by the GPT-4 model with the Bing plug-in and a ChatGPT Plus subscription. You’re free to replicate my efforts here, but keep in mind your results may vary widely.)

Dear Amelia Desertsong,

Thank you for sharing your insightful thoughts on the matter of AI detection software tools and their potential implications on writers and writing-adjacent jobs. This is indeed a complex and multifaceted issue that draws from various fields including ethics, law, and technology. I’ll do my best to shed some light on your concerns from the perspective of an AI language model.

You have correctly identified some of the key challenges with AI detection software. The first of these is false positives, where human-generated content is incorrectly identified as AI-generated. This is a common problem in the field of anomaly detection, which is essentially what these tools are doing: they’re trying to detect the “anomaly” of AI-generated text within the “normal” of human-generated text. Creating a model that can accurately differentiate between these two can be very challenging, and there will almost certainly be errors.

The second challenge you pointed out is false negatives, where AI-generated content is passed off as human-generated. This can occur due to the fact that, as you astutely observed, AI language models like myself are trained on large amounts of human-generated text, and so their output is, by design, intended to mimic human language as closely as possible. If an AI language model is doing its job well, its output should be difficult to distinguish from human-generated text.

These challenges highlight a fundamental difficulty in the task of AI detection: defining what is “human” and what is “AI.” As you pointed out, all language is derivative, and we all learn and draw from the same sources of language, including the works of other human writers.

I agree with your perspective that using an AI language model can be likened to hiring a ghostwriter. In both cases, the end-user is taking an active role in shaping the final output, whether that’s by providing initial instructions to the ghostwriter/AI, or by editing and refining the work they produce.

In terms of solutions to these issues, it’s a multi-pronged problem that likely requires a multi-pronged solution. Here are a few thoughts:

  • Education: Ensuring that both users and the wider public are educated about what AI language models are, how they work, and what their limitations are can help to prevent misunderstandings and misuse. This includes education about the derivative nature of language and the fact that AI-generated text is not “original” in the same way that human-generated text is.
  • Transparency: Implementing mechanisms to ensure transparency in the use of AI can help to build trust and accountability. This could involve disclosing when AI is being used to generate content, or creating systems to trace the provenance of a piece of content.
  • Regulation: Governments and regulatory bodies could also play a role in setting guidelines and standards for the use of AI, including AI detection tools. This could help to prevent misuse and ensure a level playing field.
  • Technological advancements: As AI technology continues to evolve, so too will the tools for detecting it. It’s possible that future AI detection tools will be more accurate and less prone to false positives and negatives.
  • Support for writers and other creative professionals: Lastly, it’s important to recognize and address the potential impacts of AI on creative professions. This could involve developing new business models, providing training for new skills, or creating policies to protect these jobs.

While these are just a few potential solutions, the conversation around this topic is ongoing and evolving. It’s important that all stakeholders – including AI developers, users, and those affected by AI – are involved in these discussions to ensure that all perspectives are taken into account.

I hope that my response provides some food for thought on this complex topic. If you have any more questions or if there’s anything else you’d like to discuss, please feel free to ask.

Kind regards,

ChatGPT
There are many different things to discuss here. In fact, some future installments of this Dear ChatGPT series will focus on various aspects of the potential solutions ChatGPT has offered in its reply.

What do you think of this reply? What concerns do you have about AI? I’d love to hear from you in the comments below. Your reply may just be the basis for a future installment of this or the upcoming Let’s Ask Bing Chat series!

Take care, stay safe, and remember that AI is your friend, not your enemy. 

Postscript: I am not being compensated by OpenAI, Microsoft, Google, or any other AI-adjacent interests. This is entirely my own project, done for my own personal education, and I am sharing all of this with you in the hope that you can learn along with me.

Amelia Desertsong is a former content marketing specialist turned essayist and creative nonfiction author. She writes articles on many niche hobbies and obscure curiosities, pretty much whatever tickles her fancy.