New York City is providing an example of how challenging deploying AI can be after its chatbot encouraged entrepreneurs to engage in illegal behavior.
NYC rolled out its MyCity AI chatbot as a tool for entrepreneurs and business owners. Unfortunately, the chatbot has been giving some bad advice. In some cases, it has even advised users to take action that would be illegal.
Despite the issues, Mayor Eric Adams is standing behind the chatbot, according to TechRadar. Mayor Adams acknowledged the issues, saying MyCity AI is “wrong in some areas, and we’ve got to fix it.”
At the same time, Mayor Adams emphasized that issues were to be expected any time a new technology is deployed.
“Any time you use technology, you need to put it into the real environment to iron out the kinks,” he added.
The list of wrong, and in some cases illegal, answers is extensive. TechRadar reports that the chatbot has said that business owners could appropriate workers’ tips, that landlords could discriminate based on income, and that stores did not have to accept cash, despite a New York law requiring them to do so.
NYC’s issues highlight the ongoing problems AI firms face in building trust in their models. Hallucination, in which an AI gives false information or makes up answers, is a common problem the industry is still grappling with.
Google CEO Sundar Pichai acknowledged the hallucination issue, saying it is “expected” and that “no one in the field has yet solved the hallucination problems. All models do have this as an issue.”
“There is an aspect of this which we call, all of us in the field call it a ‘black box,’” he added. “And you can’t quite tell why it said this, or why it got it wrong.”
Unfortunately, when AI models are used for mission-critical applications, such as legal advice, hallucinations can have serious consequences.
In the meantime, as TechRadar points out, NYC has added a disclaimer to MyCity AI, saying its responses “may sometimes be inaccurate or incomplete” and should not be taken as legal or professional advice.