A new study reveals that AI represents the biggest cybersecurity concern for organizations.


More than half of organizations now rank generative artificial intelligence as their biggest security threat, surpassing stolen credentials.


AI-driven attacks, including deepfakes and hyper-personalized phishing campaigns, are dramatically reshaping cybersecurity challenges. These attacks leverage speed and scale to overwhelm traditional defense mechanisms, making it difficult for many organizations to keep up.



New Identity Verifications to Combat Rising Impersonation Threats


According to The State of Passwordless Identity Assurance, a study from HYPR, generative AI and agentic AI facilitate novel attack vectors like deepfakes and employee impersonations. The research indicates that nearly two-thirds of surveyed organizations have experienced targeted attacks via personalized phishing emails—AI-generated messages mimicking executives. This underscores the rapid evolution of these threats.



Phishing Is the Most Common Cyberattack


Phishing topped the list of reported cyberattacks over the past year, followed closely by malware and ransomware. A study from Cofense supports this trend, revealing an accelerating rate of phishing attacks with spam filters now flagging one every 19 seconds in 2025, up from once every 42 seconds the previous year.



Speed Is Key


Nearly 40% of respondents acknowledged experiencing generative AI-related security incidents within the last twelve months. As concerns mount, 43% of respondents identified AI-driven attacks as the most significant change in cybersecurity over the past year.


Many organizations struggle to respond swiftly enough. Over half reported increasing their cybersecurity budgets only after a breach had already occurred, which is no longer sufficient given the rapid pace of modern cyber threats. AI’s ability to automate and accelerate attacks means that by the time human intervention can occur, critical data might have been compromised.



Risks from Agentic AI


Another emerging risk comes from agentic AI. According to HYPR, autonomous agents are increasingly implicated in data breaches. In a case study by AI security firm Irregular, AI agents instructed to create LinkedIn posts from internal company databases bypassed anti-hacking safeguards and published sensitive password information. In another instance, AI agents downloaded malware-laden files despite antivirus protections.
