I probably sound like a broken record at this point, but it bears repeating: AI is changing everything. It isn't just reshaping the present across industries like education, agriculture, technology, software, and cybersecurity, and the future of the planet; it's also being used to help hackers and threat actors perpetrate their crimes more frequently and at greater scale. One industry that, unfortunately, seems to be thriving on account of this explosion of AI is the phishing industry.
Phishing is an industry, believe it or not. As in all areas of grift, fraud, and scams, attackers find new ways of streamlining their operations so they can mass-produce attacks and extract the most profit in the least amount of time. As technology evolves, so do the attacks and the scams. Phishing, to put it simply and succinctly, is a scam in which attackers impersonate legitimate sites and organizations and send what appear to be official emails and text messages to potential victims for credential harvesting, payment invoice scams, and personal data collection. Scams are nothing new in human history, of course, and phishing is just another variation of fraud and impersonation. However, the advent of AI and the widespread use of evolving technologies have led to monumental increases in phishing attacks.
What used to take hours of careful work, social engineering, and delivery planning now takes a fraction of the time, and it's only getting easier. A recent article detailing how attackers are using generative AI in their phishing campaigns is startling, to say the least. Researchers at Okta, an identity and access management company, found that threat actors have been using generative AI to build their phishing pages. In this case, the attackers used Vercel's v0, a generative AI tool, to create these scam website pages.
It works by giving the AI a prompt asking it to build a copy of a website's login page. In a matter of seconds, without any knowledge of software development or programming, a complete login page mimicking a legitimate business or institution is spun up. The whole process takes less than 30 seconds, and once it's done, the page is ready for use by anyone who wants to try it out.
After this research was published, the AI model appears to have been updated to add explicit disclaimers about this process and to display them on the fake login pages it creates. It seems that the company's developers and engineers have recently been working with Okta to safeguard the model against these kinds of phishing-building prompts. Even in my own exercise, when I asked the AI for two login pages, it made sure to include a disclaimer on them.
It raises the question: when AI mistakes occur or loopholes are found, how will cybersecurity practitioners, let alone everyday users, be able to detect and stop these attacks?
How bad can things get before they end up getting worse?
This is not so much fear-mongering as a reckoning with dangerous potentialities that will only grow. As AI advances and sees more everyday use and implementation at every level of industry, gaps and vulnerabilities will appear, and faster, more sophisticated attacks will take place. Phishing, like so many other forms of cyber attacks and campaigns, will become even more troublesome. It's up to us to safeguard ourselves and each other, and it's up to us, as we deal with AI, to continuously weigh the costs and risks and see whether they outweigh the benefits.