BALTIMORE—In a stark illustration of the pitfalls of artificial intelligence in public safety, a 16-year-old high school student was swarmed by armed police and handcuffed after an AI-powered gun detection system mistook his bag of Doritos for a firearm. The incident, which unfolded outside Kenwood High School in Baltimore County, has ignited debates about the reliability of AI surveillance tools in educational settings.
Taki Allen, a junior at the school, was sitting with friends after football practice on Monday when the system flagged what it perceived as a threat. According to reports, police arrived with guns drawn, ordering Allen to the ground. “They made me get on my knees, put my hands behind my back and cuff me. Then, they searched me and they figured out I had nothing,” Allen told WBAL-TV. Officers later discovered the ‘weapon’ was merely a discarded snack bag on the ground.
The False Alarm: A Bag of Chips Triggers Police Response
The AI system in question is part of a broader deployment in Baltimore County Public Schools, designed to detect guns through video analytics. Installed across high schools in the district, it alerts authorities to suspicious objects resembling firearms. In this case, the technology apparently mistook the crinkled, metallic sheen of a Doritos bag for a gun, prompting an immediate police response.
School officials confirmed the incident but emphasized its rarity. “This is a highly unusual situation,” a spokesperson for Baltimore County Public Schools told The Guardian. The system, believed to be from Omnilert based on local reports, uses AI to scan camera feeds in real time, aiming to prevent school shootings by identifying threats early.
Inside the Technology: How AI Gun Detection Works
AI gun detection systems like Omnilert employ machine learning models trained on vast datasets of images to recognize firearm shapes. These tools integrate with existing security cameras, analyzing footage for patterns that match known weapons. False positives remain a challenge, however, as everyday objects can mimic gun silhouettes in certain lighting or at certain angles.
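To make the mechanics concrete, here is a minimal sketch of the detection loop such products broadly follow. It is not Omnilert’s actual code: the stand-in detector, the labels, and the 0.80 alert threshold are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "gun", "phone", "chip_bag"
    confidence: float  # model score between 0 and 1
    box: tuple         # (x, y, width, height) in pixels

# Assumed value; vendors tune this to balance missed threats
# against false alarms.
ALERT_THRESHOLD = 0.80

def detect_objects(frame):
    """Stand-in for a proprietary neural-network detector.

    For illustration it returns a hardcoded result resembling the
    Baltimore incident: a shiny chip bag scored as a possible gun.
    """
    return [Detection("gun", 0.83, (412, 230, 60, 24))]

def alerts_for_frame(frame):
    """Return detections that clear the alert threshold."""
    return [d for d in detect_objects(frame)
            if d.label == "gun" and d.confidence >= ALERT_THRESHOLD]

if __name__ == "__main__":
    # A real system would pass live video frames here.
    for alert in alerts_for_frame(frame=None):
        print(f"ALERT: {alert.label} at {alert.box}, "
              f"confidence {alert.confidence:.2f}")
```

Because the loop fires on anything the model scores above the threshold, a reflective snack bag that happens to score 0.83 triggers the same alert as a real weapon, which is why what happens after the alert matters as much as the model itself.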
In a 2019 post on X, Smarter Every Day described early prototypes of such systems that used machine learning to plug into cameras and output raw data identifying potential guns. More recent deployments, as noted in a CBS Philadelphia article about ZeroEyes in New Jersey schools, involve AI that detects firearms within camera views and sends instant alerts to law enforcement.
The Incident’s Aftermath: Trauma and Calls for Review
Allen described the experience as traumatizing. “I don’t think a chip bag should be mistaken for a gun,” he said in an interview with WMAR-2 News. His mother, Towanda Allen, expressed outrage, telling BBC News that the ordeal left her son shaken and highlighted the risks of over-reliance on imperfect technology.
Baltimore County Police have not confirmed the exact role of the Doritos bag but acknowledged the AI alert prompted their response. In a statement to Gizmodo, officials noted that while the system enhances safety, human verification is crucial. The school district is reviewing the incident to refine the technology.
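What “human verification” might look like in code is sketched below, assuming a workflow in which a reviewer must confirm each AI alert before it is escalated; the function names and routing are illustrative, not the district’s actual process.

```python
from enum import Enum, auto

class Verdict(Enum):
    CONFIRMED = auto()    # reviewer sees an actual weapon
    FALSE_ALARM = auto()  # e.g. a chip bag, phone, or toy

def review_snapshot(snapshot):
    """Stand-in for a trained human reviewer examining the flagged frame."""
    return Verdict.FALSE_ALARM

def notify_dispatch(detection):
    print("Dispatching police:", detection)

def log_for_retraining(detection, snapshot):
    print("Logged false positive for retraining:", detection)

def handle_alert(detection, snapshot):
    """Escalate to dispatch only after a human confirms the threat."""
    if review_snapshot(snapshot) is Verdict.CONFIRMED:
        notify_dispatch(detection)
    else:
        log_for_retraining(detection, snapshot)

if __name__ == "__main__":
    handle_alert("gun @ (412, 230)", snapshot="frame_1742.jpg")
```

In a pipeline like this, the false positive would have ended at the review step rather than with officers drawing weapons on a student.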
Broader Context: AI in School Safety Amid Rising Gun Violence
The deployment of AI surveillance in schools reflects a growing trend amid America’s epidemic of gun violence. With more than 300 school shootings recorded since 2018, districts are increasingly turning to technological solutions. Baltimore County introduced its system to address threats proactively, similar to initiatives in other states.
For instance, Glassboro Public Schools in New Jersey became the first to implement ZeroEyes, an AI program that detects firearms via camera feeds, as reported by CBS Philadelphia last month. Eagle Eye Networks recently launched its own AI-enabled gun detection tool for schools, according to EdTech Innovation Hub.
Challenges and Criticisms: False Positives and Privacy Concerns
Critics argue that such systems, while well-intentioned, can exacerbate tensions and lead to unnecessary trauma, especially in communities of color. Posts on X, including one from Thrill Science, highlight intensifying concerns over AI surveillance in schools following the Baltimore incident, citing the false alarm triggered by the Doritos bag.
A 2023 X post from Only In Boston discussed Attleboro Public Schools’ gunshot detection system, which distinguishes the acoustic signature of gunfire from other noises. Visual AI like Baltimore’s, however, is prone to errors, as this case demonstrates. “Hallucinations FTW,” quipped the X account The Prepaid Economy, referencing the AI’s mistaken identification.
Industry Perspectives: Balancing Innovation and Accuracy
Experts in AI and security emphasize the need for improved training data to reduce false positives. Barry Norton of Milestone Systems, in a piece for Campus Safety Magazine, discussed how AI analytics can detect threats proactively but must be optimized for accuracy.
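One concrete lever, alongside better training data, is where the alert threshold is set. The sketch below uses synthetic validation scores (the 0.83 “chip bag” score is invented to echo this incident) to show how raising the threshold trades missed detections for fewer false alarms.

```python
def rates_at_threshold(scored, threshold):
    """Compute recall and false-positive rate on labeled detections.

    scored: list of (model_score, is_really_a_gun) pairs
    """
    tp = sum(1 for s, gun in scored if s >= threshold and gun)
    fp = sum(1 for s, gun in scored if s >= threshold and not gun)
    pos = sum(1 for _, gun in scored if gun)
    neg = len(scored) - pos
    return (tp / pos if pos else 0.0,
            fp / neg if neg else 0.0)

# Synthetic scores for illustration only.
val = [(0.95, True), (0.88, True), (0.83, False),  # 0.83: the "chip bag"
       (0.60, False), (0.40, False), (0.91, True)]

for t in (0.5, 0.8, 0.9):
    recall, fpr = rates_at_threshold(val, t)
    print(f"threshold={t:.1f}  recall={recall:.2f}  false-positive rate={fpr:.2f}")
```

On this toy data, raising the threshold from 0.8 to 0.9 eliminates the chip-bag alarm but also drops one real weapon, illustrating the trade-off vendors must tune.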
Entrepreneurs like Brett Adcock have invested in advanced AI hardware for school safety. In X posts from 2024, Adcock announced his company Cover, which licenses NASA technology to detect concealed weapons using AI imaging systems, generating point clouds to identify threats.
Future Implications: Refining AI for Real-World Use
As AI tools proliferate, incidents like Allen’s underscore the importance of human oversight and ethical deployment. Baltimore County schools plan to assess the system’s parameters, potentially incorporating more robust verification processes.
Meanwhile, the tech industry continues to innovate. Cover’s system, as detailed in Adcock’s X updates, aims to prevent shootings by scanning for concealed weapons at school entrances, representing a next-generation approach beyond current visual analytics.
Policy and Ethical Considerations: A Call for Regulation
Policymakers are grappling with how to regulate AI in sensitive environments like schools. Advocacy groups call for transparency in AI algorithms to prevent biases that could disproportionately affect minority students.
In the wake of this event, as reported by Dexerto, there are growing demands for independent audits of school AI systems to ensure they enhance, rather than endanger, student safety.
Conclusion: Lessons from a Snack Bag Mix-Up
The Baltimore incident serves as a cautionary tale for the rapid adoption of AI in public safety. While these technologies hold promise in averting tragedies, their flaws can lead to real harm, prompting a reevaluation of how we integrate AI into daily life.
As schools nationwide weigh the benefits against the risks, the experience of Taki Allen reminds us that technology must be tempered with humanity to truly protect those it serves.

