Artificial intelligence (AI) may be taking the business world by storm, but that doesn’t mean companies understand it.
A new report from Corinium and FICO indicates that some 65% of companies surveyed cannot explain how the AI they use makes decisions or predictions. That lack of understanding leaves companies open to AI being misused.
“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” said Scott Zoldi, Chief Analytics Officer at FICO. “Organizations are increasingly leveraging AI to automate key processes that – in some cases – are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and production model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”
The issue is further exacerbated by a lack of agreement on what ethical standards AI must meet. While some 55% of respondents agree that AI systems should meet basic ethical standards, 43% say they have no responsibility beyond basic regulatory compliance, even when the AI systems in question will impact people’s livelihoods.
“AI will only become more pervasive within the digital economy as enterprises integrate it at the operational level across their businesses,” said Cortnie Abercrombie, Founder and CEO, AI Truth. “Key stakeholders, such as senior decision makers, board members, customers, etc. need to have a clear understanding on how AI is being used within their business, the potential risks involved and the systems put in place to help govern and monitor it. AI developers can play a major role in helping educate key stakeholders by inviting them to the vetting process of AI models.”
The report shows how far industries have to go before AI can be trusted to handle the kinds of decisions many tech leaders are eager to thrust upon it.