The CIO’s ‘Smell Test’: Why a Top Firm Is Betting on Human Instinct Over Algorithms to Police AI

Ernst & Young’s Global CIO, Steve George, is championing a “smell test” to detect AI-generated work, favoring human intuition over flawed algorithms. This strategy is part of a broader push to cultivate responsible AI use, emphasizing human oversight and accountability in an era of automated perfection.
Written by Maya Perez

NEW YORK—In an era where corporations are racing to deploy algorithmic safeguards for every conceivable risk, Steve George, the Global Chief Information Officer at consulting giant Ernst & Young, is placing his trust in something far more analog: the human nose. Or, more precisely, a professional’s intuition for work that smells just a little too perfect, a little too polished, to be human.

As generative artificial intelligence floods the corporate world, producing everything from marketing copy to complex code, executives are grappling with a fundamental question of trust and authenticity. Mr. George’s method for detecting the uncredited use of AI isn’t a sophisticated piece of software, but what he calls a “smell test.” He told Business Insider that AI-generated text often has a telltale sheen of flawlessness. “It’s very articulate, it’s very verbose, it’s grammatically correct, it’s spelled correctly,” he noted, contrasting it with the slightly “messy” and imperfect nature of authentic human work. It’s a strategy that relies on managers knowing their teams well enough to spot an uncanny valley in a PowerPoint slide.

A Calculated Bet on Culture Over Code

This reliance on human judgment is not a sign of technological denial. On the contrary, it is a core component of EY’s billion-dollar AI strategy. The firm is not trying to ban generative AI; it is actively encouraging its 400,000 employees to use it. Last year, EY announced a sweeping $1.4 billion investment in the technology, which included the launch of its own proprietary large language model platform, EY.ai EYQ. This internal system was developed to provide the power of generative AI within a secure, private environment, allowing teams to query sensitive client data without exposing it to public models.

The goal, according to company statements, is to augment, not replace, human capability. By providing a sanctioned tool, EY aims to channel the use of AI productively while educating its workforce on its proper application. Mr. George’s “smell test” is therefore less a tool of enforcement and more a cultural barometer, a way to ensure that employees are using AI as a co-pilot for brainstorming and first drafts, rather than a ghostwriter for final, client-facing deliverables. The emphasis is on transparency and the critical final step of human verification and refinement.

The Flawed Pursuit of Algorithmic Certainty

EY’s instinct-driven approach stands in stark contrast to the burgeoning market for AI detection software, which has produced mixed and often unreliable results. The academic world has already seen the pitfalls of over-reliance on these tools, with students being falsely accused of cheating by algorithms prone to false positives. A recent comprehensive study by the U.S. National Institute of Standards and Technology (NIST) evaluated a range of AI detectors and found their performance to be inconsistent, particularly when confronted with text from more advanced AI models or text that has been lightly edited by a human. The NIST report underscores the difficulty in creating a foolproof technological solution, validating the perspective that human oversight remains indispensable.

This technological unreliability is forcing organizations to confront a more complex challenge. Rather than seeking a silver-bullet algorithm, business leaders are finding they must invest in governance and education. The conversation is shifting from “Can we detect it?” to “How should we use it?” Companies are now focused on establishing clear guardrails, a task that involves answering fundamental questions about data privacy, intellectual property, and accountability. As detailed in the Harvard Business Review, creating a robust generative AI policy requires a nuanced approach that balances enabling innovation with mitigating profound risks.

Navigating the Risks of AI’s ‘Confident Stupidity’

The stakes for a firm like EY are immense. An unverified, AI-generated report presented to a client could be catastrophic. The primary danger lies in what the industry has termed AI “hallucinations”—instances where the model generates plausible-sounding but entirely fabricated information. These fabrications can range from citing non-existent legal precedents to inventing market data, posing a significant threat to the integrity of professional advice. This phenomenon, which technology experts have described as a form of “confident stupidity,” is a known bug in all current large language models, and as Reuters reports, it presents a major hurdle for enterprise adoption where accuracy is non-negotiable.

Mr. George’s concern about work being “too perfect” is directly linked to this risk. An AI model can produce a beautifully structured, grammatically immaculate report built on a foundation of falsehoods. A human expert, drawing on “messy,” experience-driven knowledge, is the last line of defense, capable of spotting a statistic that feels wrong or a conclusion that defies industry logic. The “smell test” is, in essence, a call for professionals to apply this deep-seated domain expertise, a skill that AI cannot yet replicate.

Redefining Productivity and Professional Skills

This new dynamic is reshaping the very definition of a valuable employee in knowledge-based industries. The ability to write a clean first draft is becoming commoditized. In its place, a new set of skills is rising in prominence: the art of crafting the perfect query (prompt engineering), the critical thinking required to evaluate and challenge an AI’s output, and the creative intelligence to synthesize that output into novel, insightful strategies. The focus is shifting from the generation of content to its curation, verification, and application.

This evolution in the labor market is a central theme in the World Economic Forum’s recent analysis of the future of jobs, which highlights analytical and creative thinking as the most critical skills for the coming decade. EY’s strategy implicitly recognizes this shift. By encouraging employees to use EY.ai EYQ, the firm is training them in these new core competencies, positioning its workforce not as operators who follow instructions, but as managers of a powerful new technological resource. The expectation is that an EY consultant’s value will increasingly be measured by the quality of their questions and the rigor of their review.

A Mandate for Human Accountability

Ultimately, EY’s approach is a pragmatic acknowledgment that in the age of AI, accountability cannot be outsourced to an algorithm. When a report is delivered or advice is given, the name on the account is that of a human being and a firm, not a language model. Mr. George’s message to his global team is clear: use the tools, embrace the efficiency, but never abdicate your professional responsibility. The final product must be owned, vetted, and vouched for by a human expert.

This philosophy suggests a future where the most successful organizations will not be those that simply adopt AI, but those that master the human-machine partnership. The “smell test” may seem like a quaint, low-tech solution in a high-tech world, but it represents a profound strategic choice—a bet that in the final analysis, the imperfect, intuitive, and messy process of human judgment remains the most valuable asset of all.
