In the rapidly evolving world of artificial intelligence, a troubling pattern has emerged: the replication of age-old misogyny through cutting-edge tools. Laura Bates, a prominent feminist activist and author, delves into this issue in her latest book, “The New Age of Sexism,” highlighting how biases are embedded in everything from chatbots to virtual realities. During a recent interview on PBS NewsHour, Bates explained that AI systems, trained on vast datasets reflecting societal prejudices, often amplify sexist stereotypes, portraying women in subservient roles or as objects of desire.
This isn’t mere coincidence. As Bates notes, technologies like sex robots and AI companions are designed with gendered features that reinforce harmful norms. For instance, virtual assistants such as Siri and Alexa default to female voices, subtly normalizing the idea of women as helpers, a point echoed in a 2019 UNESCO report shared via posts on X, where users discussed how such designs perpetuate gender biases at scale.
The Hidden Biases in AI Training Data
The root of the problem lies in the data fueling these systems. AI models learn from internet-sourced text that is rife with misogynistic content. A Newsweek article from July 2025 quotes Bates warning of a "widening gap" in women's access to technology, one that could devastate their economic participation. She argues that without diverse datasets, AI will continue to downgrade women's resumes in hiring algorithms, as happened with Amazon's experimental recruiting tool, built in 2014 and later scrapped after it penalized female applicants, a case highlighted in recent X discussions of recruitment bias.
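The mechanism is easy to demonstrate in miniature. The toy sketch below (all data and the scoring scheme are hypothetical, not Amazon's actual system) trains a naive word-frequency scorer on past hiring decisions that disfavored resumes mentioning "women's"; the model then penalizes any resume containing that word, even when everything else is identical.

```python
# Hypothetical sketch: a naive scorer trained on biased hiring history
# reproduces the bias it was trained on.
from collections import Counter

# Past decisions (invented data): resumes mentioning "women's" were
# disproportionately rejected, mirroring the reported Amazon case.
history = [
    ("led engineering team", 1),
    ("built ML pipeline", 1),
    ("captain women's chess club", 0),
    ("women's coding society organizer", 0),
    ("shipped production systems", 1),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Sum per-word scores: words seen in hired resumes add,
    words seen in rejected resumes subtract."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two resumes differing only in one gendered token:
a = score("built ML pipeline led engineering team")
b = score("built ML pipeline led women's engineering team")
print(a, b)  # the second scores lower purely because of "women's"
```

No one programmed a rule against women here; the disparity is inherited entirely from the skewed historical labels, which is exactly the failure mode Bates describes.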
Moreover, emerging technologies like the metaverse introduce new arenas for harassment. Bates describes virtual spaces where avatars face sexualized attacks, mirroring real-world violence but amplified by anonymity. This aligns with findings in a WIRED piece published in September 2025, which explores how deepfakes and sexbots are “reinventing misogyny,” often creating non-consensual explicit content targeting women.
From Virtual Assistants to Sex Robots: A Spectrum of Harm
The spectrum extends to physical embodiments. Sex robots, marketed primarily to men, embody exaggerated female forms and raise ethical questions about objectification. Bates' book, described on its Amazon listing as a "harrowing account" of how technology is being weaponized against women, argues that such products drag progress backward. Recent reporting from the Fuller Project in May 2025 reinforces this, with Bates asserting that misogyny is "baked into" AI design, fueling a surge in tech-driven sexual violence.
Industry insiders must confront these issues head-on. UN Women’s 2024 explainer on AI and gender equality, still relevant in 2025 discussions on X, points to the gender digital divide, where only 20% of women in low-income countries are online, resulting in biased data that skews AI outputs.
Bridging the Gender Gap in Tech Adoption
Compounding the problem is unequal adoption. A Wall Street Journal study referenced in September 2025 X posts reveals that women are less likely than men to use AI tools like ChatGPT, fearing a "competence penalty": being seen as less capable for relying on them, a concern felt most acutely by older women and those in male-dominated fields. This disparity risks entrenching inequalities in workplaces from healthcare to finance.
Efforts to mitigate bias are underway, but they're piecemeal. Companies like Google have pledged more diverse training data, yet as Bates told PBS, systemic change requires regulatory oversight. Posts on X from tech ethicists in 2025 stress how quickly AI amplifies skewed patterns at scale, urging immediate action.
Toward Equitable AI: Challenges and Pathways Forward
Looking ahead, the stakes are high. The Global AI Summit 2025, as announced on Bennett University’s site, aims to address emerging technologies, including bias mitigation. However, without inclusive design, AI could exacerbate divisions. Bates’ work, lauded on Goodreads for its urgency, serves as a rallying cry: technology must evolve to dismantle, not rebuild, patriarchal structures.
For industry leaders, this means auditing algorithms rigorously and promoting women’s roles in tech development. As a Waterstones blog from May 2025 outlines, failing to act risks entrenching misogyny in society’s digital fabric. The path forward demands vigilance, ensuring AI serves all equitably rather than perpetuating exclusion.
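One concrete form such an audit can take is a disparate-impact check. The sketch below (hypothetical outcomes, simplified to the "four-fifths" screen used in US hiring guidance) compares a model's selection rates across groups and flags when the lower rate falls below 80% of the higher one.

```python
# Hypothetical sketch of one audit step: a four-fifths disparate-impact
# screen over a model's selection decisions.
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, sel in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(sel)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest group rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Invented decisions: 8/10 men selected vs. 5/10 women.
outcomes = ([("men", True)] * 8 + [("men", False)] * 2
            + [("women", True)] * 5 + [("women", False)] * 5)

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(rates, ratio)  # 0.5 / 0.8 = 0.625, below the 0.8 threshold
```

A single ratio is of course no substitute for the systemic oversight Bates calls for, but routine checks like this make disparities visible before a tool ships rather than after it has screened real applicants.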