Generative AI has become a powerful tool for criminals, and its use has grown to the point where more than 50 per cent of fraud now involves AI, according to the latest report from Feedzai, the global AI-native financial crime prevention company.
According to Feedzai's newly released report, ‘2025 AI Trends in Fraud and Financial Crime Prevention’, generative AI now powers a significant proportion of fraud, yet nine in 10 banks are also using AI to detect it. In fact, two-thirds of banks have integrated AI in the past two years alone.
The Feedzai report also reveals that 90 per cent of financial institutions are combating emerging fraud with AI-powered solutions to safeguard consumers and counter rising threats.
However, while banks are adopting AI to combat fraud, they face significant roadblocks in implementation, particularly in ensuring the technology is ethical and transparent. Criminals using AI, by contrast, focus solely on exploiting the technology for illegal gain, free of the strict ethical and regulatory frameworks that banks must adhere to.
According to the report, 44 per cent of financial professionals report that fraudulent schemes use deepfakes, while 56 per cent cite social engineering, the set of manipulative tactics fraudsters use to exploit human psychology and trick individuals into revealing sensitive information, as another significant tactic powered by AI.
“Today’s scams don’t come with typos and obvious red flags — they come with perfect grammar, realistic cloned voices, and videos of people who’ve never existed,” explained Anusha Parisutham, senior director of product and AI at Feedzai. “We’re seeing scam techniques that feel genuinely human because they’re being engineered by AI with that intention. But now, financial institutions also have to deploy advanced AI technologies to fight fire with fire to combat scams.”
Banks encountering AI challenges
Fraudsters are also utilising voice cloning, with 60 per cent of professionals recognising this as a major concern, followed by 59 per cent citing AI-powered SMS and phishing scams designed to deceive victims.
AI-driven fraud tactics, including deepfakes, social engineering, and voice cloning, often result in account takeovers and scams which, as unauthorised fraud, are generally reimbursable and harder to detect. While deepfakes alone don’t provide direct access to accounts, they play a critical role in building trust during the entrapment stage of a scam.
Despite AI’s proven benefits, data management remains a significant challenge for financial institutions. Eighty-seven per cent of banks cite data management as their biggest hurdle, with fragmented data sources and regulatory constraints slowing AI adoption, particularly among smaller institutions.
As AI becomes essential in fraud prevention, ethical considerations are paramount. The report finds that 89 per cent of banks prioritise explainability and transparency in their AI systems, demanding governance frameworks that ensure fairness, security, and accountability.
“In some ways, AI is like a car. When automakers design a car, they don’t just think about horsepower. They also consider safety features such as seatbelts, airbags, and anti-lock brakes that will keep drivers and passengers safe,” said Pedro Bizarro, Ph.D., co-founder and chief science officer of Feedzai. “The same is true for AI. Models that aren’t designed with trust at the forefront can lead to significant problems for users. By ensuring that AI decisions are transparent, robust, unbiased, secure, and tested, businesses will accelerate innovation and reinforce customer confidence.”
AI augmentation
In the face of criminals using AI, banks are also embracing it as a central line of defence. The report reveals that 90 per cent of financial institutions use AI to expedite fraud investigations and detect new tactics in real time. AI is used for scam detection (50 per cent), transaction fraud (39 per cent), and anti-money laundering (30 per cent), positioning it as the critical tool in the battle against financial crime.
Looking ahead, AI will not replace human roles but will continue to augment them. Forty-three per cent of financial professionals report increased efficiency within fraud teams, enabling experts to focus on higher-value, complex fraud cases. As AI evolves, even more capable solutions, such as behavioural analytics and real-time anomaly detection, will help institutions stay ahead of emerging threats.
By embracing specialised AI, banks are able to work harder and smarter in combating fraud, increasing both the reach and efficiency of their defences. AI has become a critical tool in enabling financial institutions to detect fraud at scale, but human oversight remains essential to ensure its responsible use. While the road to full AI adoption remains challenging, the right technology and frameworks will continue to redefine fraud prevention and bolster security for consumers and institutions alike.
Recognising these challenges, Feedzai developed its TRUST Framework to help financial institutions build AI systems that are transparent, robust, unbiased, secure, and tested.
Source: https://thefintechtimes.com/