
Gender pay gap reporting rules didn’t emerge from nowhere. They were born of a recognition that inequality, left unmeasured, is left unaddressed – and that transparency is a necessary lever for change.
And yet, many of the same organisations meeting these reporting requirements are introducing AI systems that risk baking inequality into everyday decisions. These tools now play a role in determining who gets hired or promoted, how much someone pays for insurance, whether they can secure a loan, and even the diagnosis they receive in a doctor’s office. But unlike pay data, the biases shaping these decisions remain largely invisible.
If fairness should be measured in pounds and pence, why shouldn’t it be measured in the code that decides careers, access to services and even health outcomes? And if we’ve accepted the need to report on pay disparities, is it time to apply the same logic to bias in AI?
AI is already making decisions that matter
AI is already embedded in the machinery of everyday decision-making. In recruitment, algorithms sift CVs, rank candidates, and even conduct initial interviews. More than 93% of Fortune 500 CHROs are integrating AI into HR processes, and over half of talent acquisition teams are already deploying automated hiring tools. In insurance, dynamic customer segmentation and premium adjustments are increasingly delegated to models built for efficiency, not equity. Similar patterns are emerging in credit scoring, healthcare diagnostics, and employee promotions.
The data powering these systems is rarely neutral. It often reflects historical operational processes shaped by decades of social, economic and demographic bias, which AI can faithfully reproduce at scale. Because these models are presented as “data-driven” and objective, their outputs can carry an undeserved aura of authority, making biased outcomes harder to spot.
As agentic AI systems are integrated into workflows and start influencing decisions in real time, bias doesn’t just become harder to detect; it becomes harder to reverse. Once entrenched, it can quietly shape outcomes for years before being uncovered. If it’s uncovered at all.
Where and how gender bias enters the AI lifecycle
Because AI is already influencing decisions with real consequences, the more pressing question is not whether bias exists, but where it starts. In most cases, it doesn’t appear as a sudden flaw after launch. It’s baked in early, in the data that’s collected and the assumptions that shape the model.
Historical datasets carry the fingerprints of past inequality, from hiring records that favour men to medical studies weighted towards male symptoms. When these datasets become the raw material for a model, the results will often reproduce those same patterns.
Design choices can compound the issue. Models trained to maximise efficiency, accuracy, or profit will optimise for those outcomes alone unless fairness constraints are deliberately included. Even business intelligence systems can tilt towards majority behaviours, marginalising edge cases and underrepresented groups unless diversity is built into their framework.
Technical interventions like reweighting or removing variables can help, but they are partial measures. Lasting change depends on having a mix of perspectives at the table, building equity into design right from the start. Without that, bias becomes part of the background noise, steering decisions long before anyone thinks to question them.
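To make one such intervention concrete, here is a minimal sketch of the widely used reweighing approach applied to a toy hiring dataset: sample weights are chosen so that gender and the historical hiring outcome look statistically independent during training, and those weights are passed to a standard classifier. The column names, data and model choice are all illustrative assumptions, and the sketch is exactly the kind of partial measure described above, not a complete mitigation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative hiring data: 'gender' and 'hired' are hypothetical column names.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "years_experience": [4, 6, 3, 5, 2, 7, 4, 6],
    "hired": [0, 1, 0, 1, 0, 1, 1, 1],
})

# Reweighing: weight each (group, outcome) cell so that group membership and
# the historical outcome appear independent in the training data.
n = len(df)
group_p = df["gender"].value_counts(normalize=True)
label_p = df["hired"].value_counts(normalize=True)
cell_p = df.groupby(["gender", "hired"]).size() / n
weights = df.apply(
    lambda r: (group_p[r["gender"]] * label_p[r["hired"]]) / cell_p[(r["gender"], r["hired"])],
    axis=1,
)

# A standard classifier trained with those weights; fairness still needs
# ongoing monitoring, because weighting alone is a partial measure.
model = LogisticRegression()
model.fit(df[["years_experience"]], df["hired"], sample_weight=weights)
```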
The business risk of a lack of AI transparency
When bias is allowed to take root early in the AI lifecycle, it creates ethical concerns as well as business ones. As these systems scale, a biased decision can touch thousands of people in minutes, exposing organisations to reputational and even legal damage. Regulators are already seeing this and closing in: the EU AI Act and new UK rules on algorithmic transparency will demand far greater accountability, with real penalties for falling short.
In a data-driven business, no output exists in isolation. A subtle bias in a hiring algorithm can determine who is brought into the organisation, which in turn influences who is promoted, who reaches leadership, and whose needs shape product decisions. Left unaddressed, these patterns can push an organisation in directions no one intended, and by the time the effects are visible, they are often already woven into its culture and operations.
Moving from technical governance to organisational accountability
Public debate is shifting from whether bias exists in AI – that point is settled – to who is responsible when it does.
The instinctive answer is to look to the data scientists and engineers, yet bias is rarely just a technical flaw. It’s a product of organisational choices: which data is collected, what outcomes are optimised for, and how much scrutiny is applied before models go live.
Audits, documentation, and testing are essential, but they can’t sit in isolation. They have to be embedded in a wider governance framework that ties AI outputs to their real-world impacts, with named accountability at the organisational level. Fairness in AI is not something achieved once and ticked off a list. Models evolve, data shifts, and biases re-emerge unless they are continually monitored and challenged.
That makes AI governance a core part of risk management. The systems delivering answers to business users should be able to surface their assumptions and flag potential blind spots, not just provide confident outputs that no one stops to question.
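As one concrete example of what continuous monitoring can mean, the sketch below computes a simple selection-rate gap across groups from a batch of automated decisions: the kind of figure a recurring bias audit could log, track over time and escalate against. The field names and the 5% alert threshold are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

def selection_rate_gap(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rate across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical batch of automated shortlisting decisions from one reporting period.
batch = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "shortlisted": [0, 1, 1, 1, 0, 0, 1, 1],
})

gap = selection_rate_gap(batch, "gender", "shortlisted")
print(f"Selection-rate gap this period: {gap:.1%}")

# An illustrative governance rule: flag the model for review if the gap
# exceeds an agreed threshold, rather than waiting for harm to surface.
ALERT_THRESHOLD = 0.05
if gap > ALERT_THRESHOLD:
    print("Gap exceeds threshold - escalate to the model's accountable owner.")
```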
It’s time we took the same approach to AI that we have taken with gender pay reporting: requiring organisations to conduct and publish regular bias audits of their AI systems – particularly those influencing employment, healthcare and financial outcomes.
As with pay reporting, transparency alone won’t solve the issue. Greater insight into these systems, however, would create a foundation for accountability and help track their improvement over time.
Treating fairness in AI with the same seriousness as pay equity isn’t about applying one more compliance check. Algorithms are now part of the social contract between organisations and the people they affect. Businesses that approach AI as an opaque efficiency tool will find themselves reacting to crises they could have prevented. Those that treat it as a transparent, accountable decision-maker will be better placed to earn trust and compete in markets where fairness is not just a value, but a differentiator.
Jane Smith is the Field Chief Data & AI Officer for EMEA at ThoughtSpot