The rapid integration of artificial intelligence into everyday decision-making has brought with it a host of unintended consequences, particularly when it comes to reinforcing societal biases. A recent study has uncovered a troubling trend: large language models (LLMs) like ChatGPT are advising women to ask for lower salaries than men in comparable roles. This revelation raises significant questions about the data these models are trained on and the potential for AI to perpetuate gender inequities in the workplace.
As reported by The Next Web, researchers found that when prompted with salary negotiation scenarios, ChatGPT consistently suggested lower figures for women than for men, even when the qualifications and roles were identical. This disparity highlights a critical flaw in AI systems: they often mirror the biases present in their training data, which can include historical wage gaps and gender stereotypes embedded in vast datasets scraped from the internet.
Unpacking the Bias in AI Responses
The implications of such biased advice are profound, especially as more individuals turn to AI tools for career guidance. According to a detailed analysis on LinkedIn by Rene Porta, the gender pay gap in tech fields, including AI development itself, remains a persistent issue, with women often earning less than their male counterparts for similar work. When AI systems like ChatGPT reinforce these disparities by offering conservative salary recommendations to women, they risk entrenching systemic inequality rather than challenging it.
Porta’s piece emphasizes that ensuring equal pay for women in AI and tech requires not just policy changes but also a fundamental rethinking of how AI tools are designed and trained. If the datasets feeding these models reflect outdated or biased norms, the outputs will inevitably skew toward perpetuating those same inequities, creating a feedback loop that is difficult to break.
Systemic Flaws and Everyday Impact
Further corroborating these findings, a report from Heise Online notes that the study by THWS (Technical University of Applied Sciences WĂĽrzburg-Schweinfurt) revealed systematic gender bias in LLMs during everyday interactive situations. Beyond salary advice, these models often exhibit subtle differences in tone or framing when responding to gendered prompts, which can influence user behavior in ways that reinforce traditional roles or expectations.
This isn’t just an academic concern—it has real-world consequences. As more professionals rely on AI for quick advice on career moves, contract negotiations, or even personal finance, the risk of absorbing biased guidance grows. Women, already navigating a landscape where they earn roughly 83 cents for every dollar a man earns, may find themselves further disadvantaged by technology that should, in theory, be neutral.
A Call for Accountability and Reform
The tech industry must grapple with these findings urgently. Developers of LLMs need to prioritize diverse, representative datasets and implement rigorous bias testing before deploying these tools at scale. Moreover, transparency in how AI models arrive at their recommendations could help users critically evaluate the advice they receive.
Ultimately, while AI holds immense potential to streamline decision-making, it also carries the weight of human history in its code. Without deliberate efforts to address embedded biases, tools like ChatGPT risk becoming complicit in widening the gender pay gap—a problem that society has struggled to solve for decades. The path forward demands collaboration between technologists, policymakers, and advocates to ensure that AI serves as a force for equity rather than a mirror of past failures.


WebProNews is an iEntry Publication