The world’s financial system is facing a fresh assault on its defenses from adversaries that are both increasingly skilled and increasingly stealthy, senior security officials said Monday.
Cybersecurity authorities and financial regulators around the world are raising the alarm that artificial intelligence is no longer merely a defensive tool used by defenders to stabilize the system, disrupt criminal activity and detect fraud.
Instead, they say, it has become a potent weapon in the hands of attackers, who can exploit AI’s speed and scale to inflict harm at an unprecedented pace, with banks and other financial institutions facing the greatest risk.
Fresh reports from the likes of the UK National Cyber Security Centre (NCSC) and the US FBI drive home that a fundamental shift in the threat landscape is under way.
Generative AI and large language models (LLMs) are drastically lowering the knowledge barrier, putting more convincing phishing, deepfake payment fraud and sophisticated, targeted social engineering at scale within reach of non-traditional adversaries.
These AI-driven attacks are proving devastatingly effective. Threat actors are exploiting LLMs to craft convincing, context-aware emails that bypass conventional email gateways, making spear-phishing campaigns look very much like authentic correspondence.
The spread of deepfake technology is especially alarming: attackers are turning to AI-generated audio and video that can impersonate executives or other trusted figures and deceive victims into transferring money or divulging sensitive information.
The financial industry in India, for one, has experienced a spike in deepfake identity fraud, with incidents up 550% since 2019 and expected to cost organizations an estimated INR 700 billion (US$8.3 billion) in 2024.
RBI Governor Shaktikanta Das has also warned financial institutions about the systemic risks arising from heavy reliance on AI, including greater exposure to cyber-attacks and data breaches. Opaque AI algorithms are also hard to audit, which could produce unexpected market outcomes.
In the face of this growing menace, authorities are advising banks to adopt multi-layered defenses. That means improving email security with AI-powered analysis of the context and intent of messages, expanding user training to spot AI-enhanced threats, and reinforcing identity controls with strong, phishing-resistant multi-factor authentication (MFA).
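The context-aware screening that authorities describe can be illustrated with a toy heuristic. Everything below (the function name, the phrase list, the scoring) is a hypothetical sketch for illustration, not any vendor's actual filter; production gateways rely on trained models rather than fixed rules.

```python
# Hypothetical sketch: score an email on a few context signals that
# AI-assisted filters typically weigh alongside model-based analysis.

URGENT_PHRASES = ("urgent wire transfer", "verify your account", "act immediately")

def phishing_risk_score(sender_domain: str, expected_domain: str, body: str) -> int:
    """Return a crude 0-3 risk score (illustrative rules, not a real model)."""
    score = 0
    if sender_domain.lower() != expected_domain.lower():
        score += 1  # sender domain does not match the organization's domain
    lowered = body.lower()
    if any(phrase in lowered for phrase in URGENT_PHRASES):
        score += 1  # urgency language is a classic social-engineering cue
    if "password" in lowered or "credentials" in lowered:
        score += 1  # requests for secrets raise the score further
    return score

if __name__ == "__main__":
    risky = phishing_risk_score(
        "examp1e.com", "example.com",
        "Urgent wire transfer needed - send credentials now")
    benign = phishing_risk_score(
        "example.com", "example.com",
        "Minutes from Monday's meeting attached")
    print(risky, benign)  # the spoofed, urgent message scores higher
```

The point of the layering is that no single signal decides the outcome: a lookalike domain, urgency language and a request for secrets each add weight, and only their combination pushes a message past the threshold.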
The consensus is undeniable: investing in AI-based defenses, fostering a culture of prudent skepticism among employees, and establishing strict verification checks are no longer nice-to-haves. They are must-haves for keeping the financial house secure on today’s digital battlefield.