Humanising AI Content: Fact or Fiction?
Tools that “humanise” AI-generated content are not making text genuinely human—they are post-processing outputs to disrupt the statistical signals that AI detectors rely on. In effect, they try to increase linguistic entropy and introduce stylistic irregularities so the text resembles human writing patterns more closely.
Here is a precise breakdown of how they work.
1. Increasing Perplexity (Reducing Predictability)
In Natural Language Processing terms, AI-generated text is typically too predictable.
Humanising tools:
- Replace common phrases with less probable alternatives
- Introduce less frequent vocabulary
- Alter phrasing to reduce “next-word predictability”
Example:
- AI: “This study examines the impact of leadership on performance.”
- Humanised: “This study takes a closer look at how leadership shapes performance outcomes in practice.”
The second version is slightly less predictable → higher perplexity.
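The effect of rarer word choices on perplexity can be sketched with a toy unigram model. This is purely illustrative: the probability table below is invented, and real detectors score text with large neural language models rather than word-frequency lookups.

```python
import math

def perplexity(tokens, probs, floor=1e-4):
    """Perplexity of a token sequence under a toy unigram model.
    Tokens missing from the table get a small floor probability."""
    log_sum = sum(math.log(probs.get(t, floor)) for t in tokens)
    return math.exp(-log_sum / len(tokens))

# Invented probabilities: common words are likely, rare ones are not.
probs = {"this": 0.05, "study": 0.01, "examines": 0.008,
         "the": 0.07, "impact": 0.006, "of": 0.06,
         "leadership": 0.004, "on": 0.04, "performance": 0.005}

predictable = "this study examines the impact of leadership on performance".split()
reworded = "this study takes a closer look at how leadership shapes performance".split()

print(f"predictable: {perplexity(predictable, probs):.1f}")
print(f"reworded:    {perplexity(reworded, probs):.1f}")
```

Because the reworded version contains more low-probability tokens, its perplexity comes out higher, which is exactly the signal humanising tools push on.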
2. Injecting Burstiness (Sentence Variation)
AI outputs tend to have uniform sentence structure. Humanising tools deliberately disrupt this.
They:
- Mix short and long sentences
- Insert fragments or emphasis sentences
- Vary punctuation and rhythm
Example:
- AI: consistent, evenly structured sentences
- Humanised: alternating flow, e.g. “The results were significant. But not in the way we expected.”
This mimics natural human inconsistency.
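One crude but common proxy for burstiness is the spread of sentence lengths. A minimal sketch, using sentence-length standard deviation (real stylometric tools use richer features than this):

```python
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths (in words):
    a crude proxy for how 'bursty' the writing is."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = ("The model was trained on data. The results were then evaluated. "
           "The metrics were reported in full.")
varied = ("We trained the model. Evaluation followed, covering accuracy, recall, "
          "and several secondary metrics across subsets. Surprising.")

print(f"uniform: {burstiness(uniform):.2f}")
print(f"varied:  {burstiness(varied):.2f}")
```

Humanising tools effectively rewrite until the second number rises, by splitting, merging, and fragmenting sentences.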
3. Paraphrasing via Synonym Substitution
A core mechanism is multi-layer paraphrasing:
- Replace words with synonyms
- Reorder sentence structure
- Shift from passive → active voice (or vice versa)
Advanced tools use transformer models (similar to BERT) to:
- Preserve meaning
- Alter surface structure enough to evade pattern detection
However, naive synonym replacement often leads to:
- Awkward phrasing
- Semantic drift
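The naive end of this technique, and its failure mode, fit in a few lines. The synonym table below is invented for illustration, and deliberately includes a bad mapping to show semantic drift:

```python
# Naive synonym substitution via a fixed lookup table.
# "impact" -> "collision" is fine in physics, wrong in social science:
# context-free swaps are exactly how semantic drift happens.
SYNONYMS = {"examines": "scrutinises", "impact": "collision", "performance": "execution"}

def naive_humanise(text):
    """Replace each word with its table synonym, if any."""
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

out = naive_humanise("This study examines the impact of leadership on performance")
print(out)  # "This study scrutinises the collision of leadership on execution"
```

Transformer-based paraphrasers avoid the worst of this by conditioning on surrounding context, but the trade-off between surface change and preserved meaning remains.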
4. Introducing Controlled Imperfection
Human writing is not perfectly optimised. Humanising tools simulate this by:
- Adding mild redundancy
- Slightly loosening logical flow
- Occasionally softening transitions (“however,” “in many cases,” “it seems”)
Some tools even:
- Insert hedging language
- Reduce overly formal tone
This counters the “overly polished” signal typical of AI text.
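Injecting controlled imperfection can be as simple as seeding hedges into some sentences. A minimal sketch (the hedge list and insertion rate are invented; real tools vary placement more carefully):

```python
import random

HEDGES = ["however,", "in many cases,", "it seems that"]

def soften(sentences, rate=0.5, seed=42):
    """Prepend a hedge to a random subset of sentences,
    loosening an overly polished tone."""
    rng = random.Random(seed)
    out = []
    for s in sentences:
        if rng.random() < rate:
            hedge = rng.choice(HEDGES)
            s = hedge.capitalize() + " " + s[0].lower() + s[1:]
        out.append(s)
    return out

print(soften(["The effect was strong.", "The sample was small."]))
```

The seed makes runs reproducible; production tools would randomise and also vary where in the sentence the hedge lands.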
5. Stylometric Noise Injection
From a stylometry perspective, tools attempt to blur detectable patterns by:
- Varying function word frequency
- Altering sentence openers
- Changing paragraph rhythm
Essentially, they try to obscure the “statistical fingerprint” of AI writing.
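Function-word frequency, the first bullet above, is one of the oldest stylometric features. A minimal sketch of extracting that profile (the word set here is a small illustrative sample, not a standard list):

```python
from collections import Counter

# A small sample of English function words; real stylometric
# feature sets use hundreds of them.
FUNCTION_WORDS = {"the", "of", "and", "to", "in", "a", "that", "is", "it", "for"}

def function_word_profile(text):
    """Relative frequency of each function word in the text:
    a basic stylometric fingerprint."""
    tokens = text.lower().split()
    counts = Counter(t for t in tokens if t in FUNCTION_WORDS)
    total = len(tokens)
    return {w: counts[w] / total for w in FUNCTION_WORDS}

print(function_word_profile("the cat sat on the mat"))
```

A humanising tool that perturbs these ratios, swapping "that" for "which", dropping articles, reordering clauses, is injecting exactly this kind of stylometric noise.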
6. Prompt Engineering + Rewriting Layers
Many “humanisers” are actually pipelines:
- Generate base content with an AI model
- Re-prompt another model to:
  - “Make this more conversational”
  - “Add personal tone”
  - “Vary sentence structure”
- Optionally run multiple rewrite passes
This layered transformation increases distance from the original generation pattern.
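The pipeline structure above can be sketched in outline. Here `rewrite` stands in for a call to some LLM API; it is stubbed with a string transform purely so the control flow is runnable, and the prompts are taken from the list above:

```python
REWRITE_PROMPTS = [
    "Make this more conversational",
    "Add personal tone",
    "Vary sentence structure",
]

def rewrite(text, instruction):
    # Stub: a real pipeline would send `text` plus `instruction`
    # to a language model and return its rewrite.
    return f"[{instruction}] {text}"

def humanise_pipeline(base_text, passes=REWRITE_PROMPTS):
    """Apply each rewrite instruction in sequence; every pass
    moves the text further from the original generation pattern."""
    text = base_text
    for instruction in passes:
        text = rewrite(text, instruction)
    return text

print(humanise_pipeline("This study examines leadership."))
```

The key design point is the chaining: each pass operates on the previous pass's output, so the statistical distance from the base generation compounds.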
7. Adding Pseudo-Personalisation
Some tools insert:
- Light subjective framing
- Generalised “experience-like” phrasing
Example:
- “In many real-world situations…”
- “From a practical standpoint…”
This mimics human perspective without requiring real experience.
8. Why This Sometimes Works
Humanising tools are effective only because detectors rely on imperfect proxies such as:
- Perplexity
- Burstiness
- Surface-level stylistic cues
By manipulating these signals, the text may:
- Fall outside typical AI probability ranges
- Appear closer to human-written distributions
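To see why a single manipulated signal can flip a verdict, here is a deliberately naive detector built on one proxy (sentence-length spread). The threshold is invented; real detectors combine many features, but each individual proxy is gameable in exactly this way:

```python
import re
import statistics

def naive_detector(text, spread_threshold=3.0):
    """Toy detector: flag text whose sentence lengths are too uniform.
    The threshold is arbitrary and for illustration only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    return "likely AI" if spread < spread_threshold else "likely human"

uniform = "The data was collected. The model was trained. The results were reported."
varied = ("We collected data. Training took far longer than anyone on the team "
          "had expected. Done.")

print(naive_detector(uniform))
print(naive_detector(varied))
```

Rewriting the first passage into the shape of the second changes nothing about authorship, yet it crosses the proxy's threshold, which is the whole business model of humanising tools.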
9. Limitations and Risks
i. Meaning Degradation
Repeated paraphrasing can:
- Distort technical accuracy
- Introduce ambiguity
ii. Detectability Arms Race
Detectors are evolving to:
- Look deeper than surface features
- Identify rewritten AI patterns
iii. False Sense of Security
Even “humanised” text can still be flagged because:
- Underlying structure may remain AI-like
- Advanced detectors analyse deeper coherence patterns
iv. Academic Integrity Concerns
In academic or professional contexts, using such tools may:
- Breach institutional policies
- Undermine authenticity of reflective work
10. What Actually Makes Writing Genuinely “Human”
Humanising tools approximate—but do not replicate—key human elements:
- Authentic reflection (real experiences, not generic phrasing)
- Cognitive inconsistency (nuanced, sometimes imperfect reasoning)
- Contextual judgement (tailoring ideas to specific situations)
- Voice and identity (consistent personal perspective)
These are difficult to simulate algorithmically.
Bottom Line
“Humanising” tools work by manipulating statistical signals (predictability, variation, style) so that text more closely resembles human writing distributions.
They do this through:
- Paraphrasing
- Sentence restructuring
- Variability injection
- Controlled imperfection
However, they operate at a surface level, and their effectiveness is inherently limited.
This article was written by ChatGPT.