eCommerceNews UK - Technology news for digital commerce decision-makers

The dominant dialect… and why ChatGPT thought I was a man

Thu, 5th Mar 2026

AI has changed the way I do most things in life, personally and professionally. But after a series of conversations, I began noticing small cues that felt slightly off - a tone misaligned, assumptions misplaced. Curious, I asked the model to generate an image of the person it believed it was speaking to. It hesitated. I insisted. Then it produced a picture of a man.

Large language models are trained on enormous volumes of internet text, much of it dating back to the early 1990s - a web that grew from universities, defence labs and engineering communities. Online authority clustered around academic publishing, open-source forums, finance blogs and start-up culture. These were spaces overwhelmingly male in authorship and leadership. AI systems don't resolve that history. They inherit it. Earlier distributions don't disappear simply because the conversation has shifted.

Modern systems include guardrails and fine-tuning designed to mitigate distortion. Yet the structural issue remains: these models were trained on archives of who historically wrote, led, funded, published and held authority. Now they draft job descriptions, filter applications, summarise performance reviews and recommend candidates. Historical patterns risk becoming operational logic.

When I asked the model to explain its reasoning, something became clear. It provided me with a neat list of conversational indicators that, presumably, would be associated with a male counterpart. My conversations ranged from technology and finance to plenty of subjects more commonly associated with women. But the model flagged not just topic - it flagged tone. Analytical register, directness and compressed reasoning: these were markers it associated with male speakers.

Every field develops a dominant dialect - a way of signalling authority. In finance and technology, that dialect favours certainty, brevity and a particular analytical register. Speaking it fluently reduces friction and signals belonging. The style is not inherently male, but it has historically been male-dominated. If AI systems are trained on decades of communication in that register, they will treat it as typical of authority. The corrective question shifts: is the goal simply to increase female representation within existing leadership archetypes, or to question the archetypes themselves?

For years, the advice to women entering male-dominated industries has been straightforward: study hard, build skills, make yourself fluent in the dominant culture. The uniforms changed - suits became hoodies. The vocabulary softened. The optics improved. But authority retained a recognisable silhouette - and the model saw it clearly, even when the room claimed otherwise.

The unsettling moment was not the error itself. It was noticing a flicker of validation - the sense that my tone and reasoning style placed me inside the historical pattern of who "sounds" authoritative. AI did not invent that association. It inherited it from decades of authorship. And I was ashamed to realise I had internalised it too. 

If those patterns are now embedded in systems that evaluate us, the question is not whether bias becomes immortal. It is whether it becomes archived - quietly sustained long after we believe we have moved on. Language evolves socially. Infrastructure evolves slowly. The gap between the two is where the problem lives.

(PS: Did I correct it? I didn't. Based on everything AI knows about us as a society, I didn't want it to talk to me like I'm a woman)
