AI Mistakes Doritos Bag for Gun at School, Student Handcuffed

At Kenwood High School, an AI gun detection system mistook a 16-year-old student's Doritos bag for a firearm, prompting armed police to briefly handcuff him. The false positive left the teen terrified and reluctant to return to school, and has sparked calls to review the reliability and oversight of AI in security deployments.
Written by Eric Hastings

The Incident at Kenwood High School

In a startling case highlighting the pitfalls of artificial intelligence in security systems, a 16-year-old student at Kenwood High School in Baltimore County, Maryland, found himself surrounded by armed police after an AI-powered gun detection system erroneously identified a crumpled bag of Doritos as a firearm. The episode unfolded on a Monday afternoon following football practice, when Taki Allen was casually eating chips with friends outside the school. According to reports, the AI system, designed to scan for weapons, flagged the innocuous snack bag stuffed in Allen’s pocket, triggering an immediate alert to law enforcement.

Allen described the harrowing experience in interviews, recounting how officers approached with guns drawn, ordering him to kneel and place his hands behind his back before handcuffing him. “They made me get on my knees, put my hands behind my back and cuff me,” he told local media. The mix-up was quickly resolved once police inspected the item, but not before the teen endured a terrifying ordeal that left him shaken and reluctant to return to school.

Implications for AI Reliability in Public Safety

The technology in question is provided by Omnilert, a company specializing in AI-driven gun detection for schools and other venues. Omnilert’s system analyzes camera footage in real time, aiming to identify potential threats swiftly. However, this incident underscores a critical vulnerability: false positives that can escalate harmless situations into dangerous confrontations. As detailed in a report by Dexerto, the AI mistook the shiny, crinkled surface of the Doritos bag for the metallic sheen of a gun, prompting a rapid police response involving multiple units.

Baltimore County Councilman Julian Jones has called for a thorough review of the system’s deployment, emphasizing the need to scrutinize how such errors occur and their impact on students. “We have police officers pulling up on a kid with guns drawn,” Jones stated, as reported by The Banner. This demand reflects broader concerns among educators and parents about balancing school safety with the risks of overreliance on imperfect technology.

Broader Context of AI in Security Systems

Similar mishaps have raised alarms in the industry, where AI is increasingly touted as a solution to gun violence in schools. Yet experts warn that these systems, while innovative, are prone to errors influenced by lighting, camera angles, and everyday objects. In Allen’s case, the false alarm led to an armed response that could have ended tragically, echoing other instances where AI misidentifications have caused undue panic. Coverage from WBAL-TV highlighted Allen’s fear, noting he is now “terrified” to attend classes, a sentiment that underscores the psychological toll such errors take on students.

Omnilert has defended its product, asserting that it enhances safety by providing early warnings, but the company acknowledged the error and is investigating. As per insights from Fox Baltimore, school officials are reviewing protocols to prevent future occurrences, including potential adjustments to the AI’s sensitivity thresholds.

Lessons for Future AI Deployments

For industry insiders, this event serves as a cautionary tale about the integration of AI in high-stakes environments like education. The rush to adopt such technologies amid rising school shootings must be tempered with rigorous testing and human oversight to mitigate biases and inaccuracies. Reports from Metro News point out that the fact that Allen is a Black student adds a layer of concern about potential racial bias in AI systems, though no evidence of such bias was explicitly cited in this instance.

Ultimately, as AI continues to permeate security measures, stakeholders must prioritize accuracy and ethical considerations to avoid turning tools meant for protection into sources of harm. The Doritos debacle at Kenwood High may prompt regulatory scrutiny, ensuring that future innovations safeguard rather than endanger the very communities they aim to protect.
