ESET Discovers PromptLock: AI Ransomware Proof-of-Concept from NYU

Researchers at ESET discovered PromptLock, initially seen as the first AI-powered ransomware using local generative AI to create adaptive, polymorphic scripts for cross-platform attacks. Revealed as an NYU proof-of-concept, it highlights AI's potential to enhance cyber threats, urging advanced defenses and ethical governance in AI deployment.
Written by Victoria Mossi

In the rapidly evolving world of cybersecurity, a recent discovery has sent ripples through the industry: the emergence of what was initially hailed as the first known AI-powered ransomware. Researchers at cybersecurity firm ESET uncovered a malware strain dubbed PromptLock, which leverages generative artificial intelligence to create malicious scripts on the fly. This development, detailed in an ESET blog post, highlights how AI can supercharge traditional threats, making them more adaptive and harder to detect.

PromptLock operates by running a local instance of OpenAI’s gpt-oss:20b model through the Ollama API, generating unique Lua scripts for each execution. These scripts enable cross-platform attacks on Windows, Linux, and macOS systems, scanning for valuable data, exfiltrating it, and encrypting files. Unlike conventional ransomware with static code, this AI-driven approach produces polymorphic variants, evading heuristic-based defenses that rely on pattern recognition.
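The core mechanism described above, asking a locally hosted model for attack logic instead of shipping static code, can be illustrated with a benign sketch. The request shape below matches Ollama's standard `/api/generate` endpoint, but the prompt is a harmless stand-in: PromptLock's actual prompts are not public, and this is only a minimal illustration of the local-generation pattern, not the malware's implementation.

```python
import json

# Minimal sketch of local LLM-driven script generation via Ollama's
# /api/generate endpoint. The task text is a benign placeholder; the
# point is that the request never leaves localhost.

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generation_request(task: str) -> dict:
    """Assemble the JSON payload Ollama expects for a non-streaming completion."""
    return {
        "model": "gpt-oss:20b",  # the locally hosted model named in ESET's report
        "prompt": task,          # a natural-language task description, not static code
        "stream": False,         # request a single JSON response rather than a stream
    }

payload = build_generation_request(
    "Write a Lua function that lists the files in a directory."  # benign stand-in task
)
print(json.dumps(payload, indent=2))
```

POSTing this payload to `OLLAMA_URL` would return model-generated Lua that differs from run to run, which is the source of the polymorphism: there is no fixed script body for signature-based scanners to match.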

The Mechanics of AI-Enhanced Malware and Its Implications for Detection Strategies

The innovation lies in its autonomy: the malware doesn’t phone home to a command-and-control server but generates attack logic locally, sidestepping network-based API monitoring tools. As reported in Tom’s Hardware, this could complicate endpoint security, where traditional antivirus software struggles against code that mutates with every run. Industry experts warn that such techniques lower the barrier for cybercriminals, allowing even novices to deploy sophisticated threats without deep programming knowledge.

Further analysis reveals that PromptLock was not an in-the-wild threat but a controlled experiment. A research paper on arXiv outlines how large language models can autonomously orchestrate complete ransomware campaigns, from reconnaissance to encryption, demonstrating the potential for AI agents to execute multi-stage attacks with minimal human intervention.

From Proof-of-Concept to Real-World Warnings: The NYU Connection

It turns out PromptLock originated as an academic project at New York University’s Tandon School of Engineering. As detailed in an NYU Engineering announcement, the team developed this proof-of-concept to illustrate the risks of AI in malware creation. By uploading the sample to VirusTotal for testing, they inadvertently sparked a media frenzy, with initial reports mistaking it for an active threat.

Subsequent clarifications, including another piece from Tom’s Hardware, confirmed its research origins. The code mimics typical ransomware behavior, targeting specific directories, stealing sensitive files, and demanding payment, but was designed to showcase vulnerabilities rather than cause harm. NYU researchers emphasized ethical guidelines, using it to advocate for safeguards in AI deployment.

Broader Industry Ramifications and the Push for AI Governance in Cybersecurity

This episode underscores a growing concern: as AI tools become ubiquitous, their misuse in cybercrime is inevitable. Defenders must now contend with threats that evolve in real time, prompting calls for advanced detection methods like behavioral analytics and AI-powered countermeasures. Companies like ESET are already adapting, integrating machine learning to counter these dynamic attacks.

Yet the PromptLock saga also highlights the double-edged sword of research transparency. By publicizing such projects, academics aim to preempt real threats, but doing so risks inspiring malicious actors. As the field advances, balancing innovation with security will be crucial, with regulators eyeing frameworks to govern AI in sensitive domains.

Looking Ahead: Preparing for an AI-Driven Threat Era

In conversations with industry insiders, there’s consensus that PromptLock is just the beginning. Future iterations could incorporate more advanced models, enabling autonomous decision-making in attacks. Cybersecurity firms are ramping up investments in AI defenses, while enterprises are advised to bolster data encryption and zero-trust architectures.

Ultimately, this research serves as a wake-up call, urging collaboration between tech developers, researchers, and policymakers to mitigate risks before AI-fueled malware becomes commonplace in the wild.
