NYC AI Chatbot Debacle Illustrates the Challenges of AI Deployment

Written by Matt Milano
    New York City is providing an example of why deploying AI can be a challenging proposition after its chatbot encouraged entrepreneurs to engage in illegal behavior.

    NYC rolled out its MyCity AI chatbot as a tool for entrepreneurs and business owners. Unfortunately, the chatbot has been giving some bad advice. In some cases, it has even advised users to take action that would be illegal.

    Despite the issues, Mayor Eric Adams is standing behind the chatbot, according to TechRadar. Mayor Adams acknowledged the issues, saying MyCity AI is “wrong in some areas, and we’ve got to fix it.”

    At the same time, Mayor Adams emphasized that issues were to be expected any time a new technology is deployed.

    “Any time you use technology, you need to put it into the real environment to iron out the kinks,” he added.

    The list of wrong, and even illegal, answers is extensive. TechRadar reports that the chatbot has said business owners could appropriate workers’ tips, landlords could discriminate based on income, and stores did not have to accept cash, despite a New York law requiring them to do so.

    NYC’s issues highlight the ongoing problems AI firms face building trust in their models. Hallucination—where AI gives false information or makes up answers—is a common problem the industry is still grappling with.

    Google CEO Sundar Pichai acknowledged the hallucination issue, saying it is “expected” and that “no one in the field has yet solved the hallucination problems. All models do have this as an issue.”

    “There is an aspect of this which we call—all of us in the field—call it a ‘black box,’” he added. “And you can’t quite tell why it said this, or why it got it wrong.”

    Unfortunately, when using AI models for mission-critical applications, such as legal advice, hallucinations can have serious consequences.

    In the meantime, as TechRadar points out, NYC has put a disclaimer on MyCity AI, saying its responses “may sometimes be inaccurate or incomplete” and should not be taken as legal or professional advice.
