The cybersecurity firm Kaspersky detected and blocked more than 142 million clicks on phishing links in Q2 2025, a 3.3% rise over the first quarter of the year. While the percentage may seem small, at this scale it represents millions of additional attacks.
Generative AI is a major factor behind the surge. Cybercriminals now use large language models and media generation tools to produce highly personalized, convincing lures with relative ease. These systems are lowering the technical barrier to phishing and raising the stakes for businesses and individuals.
How Generative AI is Automating and Scaling Phishing Campaigns
What once had to be done manually can now be automated. Generative AI platforms can create personalized emails, landing pages, audio messages, and even voice calls that convincingly match a company's tone, mimic the language of the C-suite, or manufacture urgency: classic social engineering techniques, now rendered far more effective.
For instance, an attacker can scrape social media to collect contextual information about a target, then use LLMs to craft a message referencing recent events, job changes, or internal terminology. The result is emails that look natural and get past the gut checks most users rely on.
At the same time, multilingual capability enables attackers to tailor their messages to different regions, significantly expanding their reach. Phishing kits are now equipped with AI translation and response engines, which means attackers can run their campaigns 24/7. This ability to scale and customize at speed is one of the main drivers behind the steady rise in phishing attempts reflected in the Q2 report.
AI-Powered Tactics: Deepfakes, Voice Cloning, and Synthetic Identities
Text deception is not the whole picture. Attackers are using voice synthesis and deepfake video to impersonate executives, especially in high-stakes environments such as finance or law. A single AI-cloned voicemail might be enough to make a false wire transfer request seem authentic when combined with a legitimate-looking email.
Visual deception is also advancing. Tools now exist to set up fake document portals and credential-harvesting sites that replicate login pages as pixel-perfect duplicates. Even multi-factor authentication prompts are spoofed, tricking users into handing over time-sensitive codes.
The development of synthetic biometric data, such as mimicked signatures and handwriting samples, adds another layer of concern. AI can now imitate stylized writing well enough to pass casual human inspection, and sometimes even digital verification systems.
Security Implications and the Need for Proactive Defense
The evolving nature of phishing poses a unique challenge: the human eye and instinct cannot be the last line of defense. Messages mimic real communication styles so well that instinctive checks no longer suffice, and static, rule-based filters often miss the cues.
Defending against this new wave requires a layered strategy. At the technical level, security teams should deploy AI-driven detection tools that monitor for anomalies in sender behavior, language, and access patterns. These adaptive systems go beyond keyword matching and learn to flag what doesn’t “fit” an organization’s normal communication flow.
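To make the idea concrete, here is a minimal sketch of anomaly flagging on sender behavior and language. The message format, thresholds, and urgency cues are illustrative assumptions, not any particular product's logic; real systems would learn these signals statistically rather than hard-code them.

```python
from collections import Counter

def build_baseline(messages):
    # Count how often each sender domain appears in historical mail,
    # forming a crude picture of the organization's normal traffic.
    return Counter(m["from"].split("@")[-1].lower() for m in messages)

def flag_anomalies(message, baseline, min_seen=3):
    # Flag a message whose sender domain is rarely or never seen,
    # or whose body carries urgency cues uncommon in normal flow.
    # min_seen and the cue list are arbitrary illustrative values.
    flags = []
    domain = message["from"].split("@")[-1].lower()
    if baseline[domain] < min_seen:
        flags.append(f"unfamiliar sender domain: {domain}")
    urgency_cues = {"urgent", "immediately", "wire", "gift card"}
    body = message["body"].lower()
    if any(cue in body for cue in urgency_cues):
        flags.append("urgency language")
    return flags
```

A production system would replace the keyword list with a learned language model and track many more behavioral features, but the shape is the same: compare each message against a baseline of what "normal" looks like and surface the deviations.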
At the organizational level, IT leaders must implement zero-trust systems, limit user permissions and deploy email authentication standards such as SPF, DKIM, and DMARC. To further improve brand trust and allow users to recognize legitimate communication, use verified mark certificates that can visually authenticate emails with brand logos.
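As a rough sketch of what these standards look like in practice, the records below show hypothetical DNS TXT entries for a placeholder domain (corp.example, the selector name, the mail provider, and the truncated DKIM public key are all illustrative, not real values):

```text
corp.example.                       TXT  "v=spf1 include:_spf.mailprovider.example -all"
selector1._domainkey.corp.example.  TXT  "v=DKIM1; k=rsa; p=MIIB...truncated..."
_dmarc.corp.example.                TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@corp.example"
```

SPF declares which servers may send on the domain's behalf, DKIM publishes the key that verifies message signatures, and DMARC tells receivers what to do when either check fails and where to send aggregate reports.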
Encryption adds another layer of assurance: TLS secures email in transit between servers, and end-to-end encryption ensures that even if attackers trick a user into forwarding sensitive data, the message remains unreadable to outsiders.
But technology is only half the defense. User awareness and culture are just as critical. Employee training is vital, not only to catch the older, obviously fake alerts, but to teach people to question messages that look realistic yet carry unwarranted urgency, sensitive requests, or out-of-context content.
What This Means for Individuals
Phishing has always counted on people making quick decisions. The difference now is that AI makes those lures look and sound far more convincing than the old typo-filled emails most of us learned to ignore. A message might reference something you actually posted online, use phrases your manager often says, or even arrive as a voicemail in a familiar voice.
That shift matters for everyday users. It’s no longer just about spotting broken English or suspicious links. Today, the scam may look like a password reset you were expecting or a message from a friend sharing a document. On social platforms, AI can spin up cloned profiles and chatbots that slowly build trust before slipping in a request.
The best defense here isn't complicated, but it does take discipline. If a message feels urgent or unusual, pause before reacting. Confirm through another channel: call back on the official number, open the app directly rather than clicking the link, or ask the colleague face-to-face if possible. Those small steps often make the difference between being tricked and shutting down an attack.
Generative AI hasn’t changed the fundamentals of phishing. It’s still a con that works only when someone takes the bait. What has changed is the polish. That means the burden shifts to all of us to slow down, verify, and treat “too real to doubt” messages with healthy suspicion.
Conclusion
A 3.3% increase in phishing might seem small, but it fits a larger trend of automated, AI-facilitated cybercrime. Phishing is no longer just email-based fraud; it is dynamic, cross-platform, and alarmingly convincing.
Generative AI is not just an addition to the attacker's toolkit, but a force multiplier. The same capabilities that help businesses write code, summarize documents, or improve customer service are now being used against them.
The 3.3% spike is only an early warning. The real takeaway is this: generative AI will keep raising the quality of phishing attempts, and defenders – whether companies or everyday users – will have to adapt just as quickly.