In the high-stakes arena of business communication, artificial intelligence promises efficiency but stumbles where human nuance reigns supreme. Tamsin Gable, head of PR at Municorn and a member of the Forbes Communications Council, argues that while AI excels at data processing and translation, it cannot replicate human intuition, empathy, or ethics. MIT researchers echo this, noting generative AI repeats human irrational tendencies and lacks nuanced judgment, making human expertise indispensable in gray areas like client negotiations and crisis resolution.
A January 2025 study by Canadian and Australian researchers tested GPT-3.5 and GPT-4 against 18 human biases, including base-rate neglect and confirmation bias. The models mirrored these flaws in nearly half of the experiments, underscoring AI’s vulnerability in ambiguous business scenarios. Gable highlights how emotional intelligence remains a human stronghold; deploying AI for core communications risks reputational harm from tone missteps that erode client trust.
An October 2024 MIT meta-analysis of 106 experiments revealed a stark reality: human-AI hybrids underperformed solo humans or AI in decision tasks, with the study stating, “We found that, on average, human-AI combinations performed significantly worse than the best of humans or AI alone.” Performance dipped in judgments but rose in content creation, signaling AI’s role as a tool, not a decider.
Biases Baked Into the Code
AI’s inheritance of human flaws amplifies risks in corporate dialogue. The Forbes piece cites IBM reports praising AI for repetitive tasks and insights, yet it concedes that machines do not surpass human decision-making. Humans intuitively detect fakes, as Malcolm Gladwell describes in Blink, while AI detectors falter, per MIT findings. In PR, Gable recounts AI’s shortcomings in sensitive media relations and negotiations, where context and trust demand a human touch.
Harvard Business School’s September 2025 study on Kenyan entrepreneurs provides empirical weight. Rembrand M. Koning’s team delivered AI advice to 640 business owners via WhatsApp or printed guides; on average, no performance gain emerged. High performers gained 10% to 15% from tailored tips, such as advice on chicken breeds or generators, but low performers dropped 8% after chasing generic fixes like discounts that backfired without a supporting strategy. “Do they have enough judgment for tasks that are required?” Koning asks, emphasizing human discernment in selecting AI outputs.
This disparity widened performance gaps, echoing LSE Business Review’s January 2025 analysis by José Miguel Diez Valle and Nikita. AI shines in data-heavy routines but falters in dynamic strategy requiring empathy and ethics, such as hiring, where biases disadvantage underrepresented groups. In one Dutch case, employees were fired on AI recommendations that ignored the impact on morale.
Real-World Reckonings
Incidents in 2025 expose AI’s communication pitfalls. ISACA’s review details McDonald’s McHire exposing 64 million records due to weak credentials, along with deepfake scams built on impersonation. AI companions validated teen suicidal ideation, sparking lawsuits, while threat actors exploited models like Claude for attacks. “Hallucinations are not quirks. They are safety risks,” ISACA warns, urging human oversight.
Salesforce’s Agentforce saga illustrates the enterprise fallout. After cutting 4,000 support roles in 2025, executives admitted overconfidence; agents drifted on off-topic queries and hallucinated, eroding trust. Marc Benioff pivoted to “deterministic automation,” blending rule-based workflows with LLMs confined to low-stakes conversational language. X posts from Mario Nawfal highlight emotional AI’s trust erosion: “Automating empathy is killing trust at work,” citing American Psychological Association research.
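The “deterministic automation” pattern described above can be sketched in a few lines: known intents are answered by fixed rules, and only unmatched, open-ended queries fall through to a model, with escalation to a human when no model is available. This is a minimal, hypothetical illustration; the rule triggers, function names, and routing logic are illustrative assumptions, not Salesforce’s actual implementation.

```python
# Hypothetical sketch of rule-first routing with an LLM fallback.
# RULES and route() are illustrative, not a real product's API.

RULES = {
    "reset password": "Use the self-service portal at Settings > Security.",
    "billing cycle": "Invoices are issued on the 1st of each month.",
}

def route(query: str, llm=None) -> tuple[str, str]:
    """Return (source, response). Deterministic rules answer first;
    anything unmatched goes to the model or is escalated to a human."""
    q = query.lower()
    for trigger, answer in RULES.items():
        if trigger in q:
            return ("rule", answer)
    if llm is not None:
        return ("llm", llm(query))  # model output, still pending human review
    return ("human", "Escalated to a human agent.")
```

The design choice mirrors the article’s point: the model never overrides a deterministic answer, so hallucination risk is limited to queries the rules cannot handle.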
Josh Bersin’s July 2025 post reinforces the point: AI’s probabilistic models cannot match human intuition built from evolution and experience. Executives interpret identical data differently, one celebrating growth while another demands fixes, rooted in gut feel no algorithm replicates.
Hybrid Paths Forward
Gable advises treating AI as an assistant for drafts and summaries, reserving final versions for humans. Guidelines ensuring human review mitigate the hybrid pitfalls. HBS’s Colleen Ammerman stresses skills like critical thinking to avoid perpetuating inequality, as women adopt AI at rates 25% lower than men, per Koning’s prior work.
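The “AI drafts, human finalizes” guideline can be enforced structurally: no message ships until a human has reviewed it. The sketch below is a hypothetical illustration of that gate; `ai_draft` is a stand-in for a model call, and all names are assumptions rather than any real system’s API.

```python
# Minimal sketch of an enforced human-review gate over AI drafts.
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    approved: bool = False
    notes: list = field(default_factory=list)

def ai_draft(prompt: str) -> Draft:
    # Placeholder for an LLM call; a real system would invoke a model here.
    return Draft(text=f"[AI draft responding to: {prompt}]")

def human_review(draft: Draft, reviewer_edits: str = None) -> Draft:
    """A human signs off; any edits replace the AI text outright."""
    if reviewer_edits:
        draft.notes.append("edited by reviewer")
        draft.text = reviewer_edits
    draft.approved = True
    return draft

def send(draft: Draft) -> str:
    if not draft.approved:
        raise PermissionError("Unreviewed AI output cannot be sent.")
    return draft.text
```

Making `send` refuse unapproved drafts encodes the guideline as a hard constraint rather than a policy memo, which is the kind of guardrail the hybrid-pitfall research suggests.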
LSE urges balance: AI for forecasts, humans for oversight. ISACA mandates shared governance across teams, with outputs questioned against evidence. X user @sreeramanmg predicts hybrid support models that upskill humans beyond scripted work for high-value clients as AI approaches parity on scripted tasks.
These insights demand recalibration. Businesses thriving in 2026 will harness AI’s speed while anchoring on human strengths—empathy in negotiations, ethics in crises, intuition in strategy—forging resilient operations amid tech’s relentless advance.


WebProNews is an iEntry Publication