Register Policing: The New Culture Fit

Picture a regular staff meeting. Nothing dramatic on the calendar. No formal review. Just a document on the screen and a senior person deciding, in front of everyone, that a colleague’s writing does not sound right.
“This looks AI-generated.”
“This isn’t how we write here.”
“The substance is wrong too.”
That is the whole package. Authorship, voice, and substance collapse into one public critique. Senior witnesses nod. The colleague, three months into the company, takes the hit because there is no clean way to answer all three claims at once.
A week later, HR looks at the incident, or fails to, and the pattern starts to show. The same accusation keeps landing on non-native English speakers. On neurodivergent employees who use AI as a writing accommodation. On employees with disabilities who rely on AI tooling. On newer hires who have not yet learned the company’s private dialect.
One incident looks like judgment.
The file looks different.
The Bundle
What makes the AI accusation different from ordinary peer review is that it almost never arrives alone. It comes bundled with two other claims: a voice critique (“this isn’t our register,” “this doesn’t sound like one of us”) and a substance critique (“the content is also wrong”).
Each of those can be legitimate on its own. Work can be wrong. Voice can miss the audience. AI use can cross a line. But when the three arrive together, they create a critique that is almost impossible to answer in real time.
If the target defends the substance, the AI accusation undermines their credibility. The implied logic is simple: if they did not really write it, they do not really know it. If they defend their authorship, the voice critique anchors them as outside the in-group. If they defend their voice, the substance critique comes back.
The bundle is the mechanism. Pulling the three claims apart is the only defensive move with a chance, and most people, surprised in a public setting, do not know to do it.
Who Gets Hit, and Why
The people who take this hit most often share one thing: their writing can look slightly different from the dominant in-group’s writing for reasons that have nothing to do with AI.
Non-native English speakers often learn formal English through instruction rather than immersion. Vocabulary from textbooks. Sentence structure from grammar drills. Register from formal correspondence. Their natural professional register can come out polished and slightly formal, and AI also produces polished, slightly formal output. To an ear trained on native casual English, both can sound similar for the same reason: neither carries the irregularities of native casual register. The “AI detector,” biological or algorithmic, confuses the two.
Neurodivergent employees, particularly those with ADHD, autism-spectrum profiles, or executive-function differences, may use AI as a writing accommodation. Drafting is the hardest part of writing for some people. AI handles the first pass. The human supplies substance and editing. That is a legitimate accessibility use, closer to voice-to-text or grammar correction than most managers realize. It can also look indistinguishable from “AI-authored” content to a casual reader.
Employees with disabilities face the same pattern when AI is part of how work becomes possible: those with visual impairments using summarization, those with motor impairments using drafting tools, those with cognitive disabilities using restructuring.
Newer hires have a different problem. They have not absorbed insider voice yet. Insider voice is not one thing; it is a pile of micro-conventions. What gets said directly. What gets implied. Which aphorisms are allowed. Which terminology belongs to senior leadership. What tone you use when writing to the leadership team. None of this appears in the employee handbook. It is learned by osmosis. A new hire can write something accurate and well structured, then get told it feels “off” because they have not picked up the camouflage.
In each case, what looks like an “AI-generated tell” may be a non-AI signal being misread: language origin, neurology, disability, tenure.
Register Policing
We need a name for what’s happening, because “tone policing” is taken and doesn’t quite fit.
Tone policing critiques emotional content: “you sound angry,” “calm down,” “be more constructive.”
What’s happening here critiques register: formality level, vocabulary choice, sentence structure, density of aphorism, which professional voice you sound like. It is invisible to the in-group because the in-group does not notice it is enforcing anything. They experience their own register as normal and any deviation as off.
I will call it register policing: critiquing colleagues’ professional output not on substance or accuracy, but on whether their formality, vocabulary, and rhetorical structure match the in-group’s transmitted norms.
Register policing predates AI. AI tools made it visible because they broadened the population producing polished written output. That population now includes more outsiders than the in-group is comfortable with.
Two things make register policing hard to fight. First, it is unfalsifiable. There is no rulebook. No published style guide codifies “we don’t open paragraphs with conditional clauses” or “we don’t use the word regime.” The norms transmit by inheritance and get enforced by senior practitioners.
Second, the critique borrows the social weight of consensus without requiring proof. Everyone knows. But nobody has to write down what everyone supposedly knows.
Put those two together and you get a critique that cannot be objectively disproved, arrives with the force of common knowledge, and lands hardest on people already underrepresented in the dominant register. That is cultural credentialism with a new label.
Why This Is a Disparate-Impact Problem
US employment law does not require intent to create liability. Disparate-impact analysis, under Title VII for national origin and under the ADA for disability, looks at effect, not motive.
If a company’s senior employees disproportionately critique non-native English speakers’ writing as “AI-generated” or “not our voice,” and those critiques materially affect performance reviews, promotion velocity, or scope assignment, that is a disparate-impact pattern, regardless of whether anyone is acting in bad faith.
Three signals matter:
- A documented pattern of who receives the critique.
- Material consequence on employment terms: review language, promotion decisions, project assignments.
- The absence of a published policy distinguishing acceptable AI tool use from unacceptable AI authorship.
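The first signal can be made concrete. The EEOC's Uniform Guidelines use a "four-fifths rule" as a rough screen for adverse impact in selection rates, and a similar tally can be run on who receives the AI accusation. A minimal sketch, assuming a simple incident log; the field names are illustrative, and applying the four-fifths screen to critique incidence is an analogy for spotting skew, not a legal standard:

```python
from collections import Counter

def accusation_rates(incidents, headcounts):
    """Per-group rate of employees who received an
    'AI-generated' / 'not our voice' critique.

    incidents: list of (employee_id, group) pairs, one per accused employee.
    headcounts: dict of group -> number of employees in that group.
    """
    accused = Counter()
    for _employee, group in incidents:
        accused[group] += 1
    return {g: accused[g] / headcounts[g] for g in headcounts}

def four_fifths_flag(rates):
    """EEOC-style screen, used here by analogy: flag any group whose
    rate of favorable treatment (NOT being accused) falls below 80% of
    the most favorably treated group's rate."""
    favorable = {g: 1 - r for g, r in rates.items()}
    best = max(favorable.values())
    return {g: fav / best < 0.8 for g, fav in favorable.items()}

# Hypothetical log: 3 of 10 non-native speakers accused vs 1 of 40 native speakers.
rates = accusation_rates(
    [("p1", "non_native"), ("p2", "non_native"),
     ("p3", "non_native"), ("p4", "native")],
    {"native": 40, "non_native": 10},
)
flags = four_fifths_flag(rates)
```

A flag from a tally like this is a reason to investigate, not a conclusion. Its real value is the second-order effect: it forces the organization to keep the log at all.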
Most companies do not have the policy. Many are not tracking the pattern. And plenty of senior employees still treat AI accusations as craft critique rather than a management risk.
A Leadership Question, Not a Tooling Question
The instinct is to treat this as an AI-policy problem. It is bigger than that. AI tools are changing who can produce polished output. Existing culture reacts by tightening who is allowed to. That is not a tooling failure mode. It is a leadership failure.
In Agentic Organizations, the framework I write toward is the CCD Triad: Culture, Communication, and Delegation. Register policing sits at the intersection of Culture and Communication. AI broadens communication capacity. Existing culture responds by enforcing tighter membership criteria.
The healthy adaptation is to redesign the cultural-communication interface for the new capacity. The unhealthy adaptation is to let whoever is loudest in the meeting enforce the old one.
What Organizations Should Do
Publish an actual AI-use policy. Not a ban. Not a free-for-all. A written document that distinguishes acceptable AI-assisted drafting from unacceptable AI-authored output. The substance is the human’s. The prose can be assisted. The human remains responsible for both. In the absence of that policy, the line between acceptable and unacceptable gets enforced by whoever critiques most aggressively in meetings.
Treat AI-accusation patterns the way you would treat dress-code disparate-impact patterns. If senior employees disproportionately enforce “professional appearance” against women, Black employees, or Muslim employees, HR knows what kind of question it is looking at. If senior employees disproportionately enforce “not AI-generated” or “not our voice” against non-native English speakers, disabled employees, neurodivergent employees, or newer hires, the shape is similar. The vocabulary changed. The risk did not.
Separate substance critique from voice critique in performance review language. “The technical claim on page four is wrong” is a substance critique. “This doesn’t sound like us” is a voice critique. Voice critique should not appear in performance review language unless there is a documented standard behind it. Otherwise, register policing becomes career-affecting without ever becoming explicit.
What Individuals Should Do
If you are on the receiving end of the bundle, do not argue it on its own terms. Pull it apart. The AI claim, the voice claim, and the substance claim are three separate questions. Defending all three simultaneously is the trap. Answer them one at a time, on the record, in writing where you can.
Document silently. Save the comments, the timing, the witnesses, and who else has been targeted with the same vocabulary. Do not surface the file during an active critique cycle if you can avoid it. In that moment, even a valid record can get reframed as retaliation. Build the file. Keep the option live.
Know the legal terrain. If a pattern recurs and affects your employment terms, the appropriate channel may be HR or an employment attorney, not the manager chain. That decision tree is separate from the in-the-moment defense. Conflating the two weakens both.
A Closing Note
I am writing this because I have watched it happen, and I have felt it. The non-native English speaker reading their own polished output and being told it cannot be theirs. The neurodivergent colleague defending an accommodation as a craft choice. The new hire absorbing a public hit because they have not yet learned which words are out-of-bounds in their organization.
The pattern is not theoretical, and it is not specific to any one company. It is becoming one of the ways insider and outsider sorting happens in organizations where AI tools have outpaced the cultural infrastructure around them.
Register policing is not a slip in the craft conversation. It is a leadership question. The organizations that take it seriously will keep people the rest are about to lose.
Daron Yondem advises senior technology leaders on AI-driven organizational transformation.