28 Sep 2025
Opinion piece: AI disclosure? Maybe it's nunya.
An argument against AI disclosure from AI Consultant Kelly Webb-Davies

This is a blog post written by our AI Consultant, Kelly Webb-Davies. You can read her original post on Kelly's Substack.
If there is one principle that appears to be broadly accepted on both sides of the generative AI in writing/education debate, it’s that people should be transparent and disclose their use.
I’d like to present a counterargument based on the time-honoured Aussie tradition of being told “nunya”.
That, actually, it’s none of your business if someone has used generative AI when writing.
I don’t think I really need to go into a huge amount of detail to convince you that people are biased against AI writing. The term “AI slop” has been coined for writing with perceived AI features, and I’ve been hearing the term “AI shaming” more often. I don’t need to rely solely on social media posts and conversations at conferences to confirm this, because there is already plenty of research demonstrating that people will rate human writing lower if it is labelled as AI-written, that there is a social cost that comes with disclosure, and that declared AI use erodes trust.
But writing really is what large language models do best. They are LANGUAGE technology, and they offer huge affordances in terms of linguistic accessibility for groups who experience barriers to writing. Generative AI can help with standardised English expression for people who speak English as an additional language (EAL) and for those who speak stigmatised varieties of English — groups who are at a disadvantage due to their linguistic distance from prestige forms of language and therefore find it harder to access academic and other professional spaces. The multimodal capabilities of generative AI can help neurodivergent folk with the writing process in a variety of ways. Digital language tools have been assisting writers for decades (e.g. speech-to-text, spellcheck, Grammarly…) and we have overall accepted that as long as a human is in control of the tool, then the output is the author’s own work. In my mind, using generative AI to adjust the form of language while maintaining the author’s ideas doesn’t seem so far removed from that — it’s just further along on the spectrum of digital assistance. We even accept that if someone’s writing is translated into an entirely different language, they still retain authorship over the ideas they created. On the face of it, the question of authorship might be less about the form and method of writing production than where the ideas themselves originated.
Maybe what is most important, then, is HOW generative AI is used in the process — I agree we don’t just want people entering a prompt and accepting the output uncritically — hence the arguments for transparency and disclosure. While process transparency serves legitimate purposes in some contexts (such as research methodology), when it comes to using AI for language assistance, the issue is more complex. Given the bias against AI-assisted writing, the marginalised groups who benefit most from using the technology for accessibility will simply be further discriminated against if they must disclose their use of it. It’s a lose/lose situation for them.
But the bias against AI-assisted writing is only part of a much deeper problem.
It’s linguistic discrimination all the way down.
We can see this in action when people hunt for AI markers in writing so they can declare it “AI slop” and dismiss it. But people are not very good at distinguishing AI-generated from human-written text. This means there is a very real risk of discriminating against people who might not even be using AI in their writing.
Attempting to identify AI-assisted writing creates a clear bias problem that compounds the discrimination. People can only spot the obvious markers in writing from authors who lack the linguistic skills to edit AI output to remove the markers, or who are unable to afford the most advanced AI models. Meanwhile, the skilfully edited AI-assisted writing of privileged users remains invisible, reinforcing the false confidence that AI detection based on vibes actually works. AI detection doesn’t just fail; it fails in ways that further disadvantage the groups who get the most benefit from AI language assistance.
There is a legitimate concern that AI-assisted writing may be reinforcing formulaic writing and flattening language. I often come across the sentiment: don’t use AI because we want to hear your “voice” in your writing. But much of what gets identified as “formulaic, dull AI writing” could just be human writing that doesn’t conform to expectations of voice or style. A writer’s voice should be their choice, and what is it to you if they happen to (or even choose to) write in a style that resembles the way ChatGPT writes?
Additionally, anyone using AI assistance in their writing still has to make stylistic decisions about what output to keep and what to change. All of that post-editing requires writing skill.
And frankly, we don’t actually welcome most people’s authentic “voice” in their writing anyway.
We begin suppressing voices as soon as children start learning to write. The education system enforces standardised English and penalises those who don’t comply. Society judges people’s intelligence and education level in large part based on how they communicate using standard language – but standardised English is not most people’s “voice”. AI writing flattens language because it’s trained on trillions of examples of the very same over-represented, human-generated standardised language that razes the linguistic diversity we could benefit from if we actually accepted people writing in voices that truly represented their identities.
Many people are anxious about writing (rightly so, I would argue, as I certainly experience it) because if they don’t perform language the “correct” way, they know they will be discriminated against not necessarily for the content of their ideas, but for the way they express them. But now, there’s a standard language machine widely available to everyone. Why would they not be well within their rights to use it? And what incentive could there possibly be for them to disclose use when doing so will almost certainly lead to further discrimination?
My approach when reading something that has the hallmarks of AI assistance is to consider it the same way I do when I read something written with grammatical or spelling “mistakes” – I do my best to ignore the form and instead focus on the content. If a person has put their name to it, they are accountable for it, no matter what process they used to write it. I view it as a message they want to communicate, and I accept it at face value.
People will understandably hide what they know will be used against them. I’ve lost count of how many times, after giving a talk about the affordances of AI in writing processes, I have been approached by attendees thanking me for being open and unapologetic about using AI in my writing as an accessibility tool for my ADHD. They say that because they don’t share the privileges I do (my role, institution, race…) they cannot do the same themselves. They know they will be judged if they disclose their assistive AI use, and, given the AI (and linguistic) shaming that is ubiquitous, they are not wrong.
But it’s not only people with accessibility needs who are open with me about their AI use. This also applies to the numerous experienced academics who have admitted to me that they use AI assistance in their writing frequently. I’ve never seen any of them declare it (and I couldn’t care less that they don’t; your secret is safe with me). They are in a position of privilege in that they have the language skills to make their use genuinely undetectable. The out-of-date institutional plagiarism policies they work under would fail students for doing the same thing less skilfully.
Until we create systems that empower people to write using their genuine (unstandardised/stigmatised) voice, or we allow them to use AI assistance to meet societal linguistic norms without judging them for it, we can’t blame writers for not disclosing AI assistance. If we want people to be transparent, we need to give them a reason to be, which means abolishing linguistic discrimination.
Until then, it’s nunya.