Artificial intelligence (AI) is a powerful resource that, used correctly, can boost productivity and efficiency in companies, but it is also increasingly being exploited by fraudsters. A study by FICO Spain indicates that 76% of Spaniards have received a message, email or phone call that was actually a scam. Surprisingly, 8% ended up making the payment despite having been alerted by their bank, and only 19% informed their entity of the impersonation.
“Before, it was easy to detect a fraudulent email because it contained grammatical errors or suspicious phrases. Now, even Europol itself warns of ChatGPT’s ability to write impeccable texts that criminals use for phishing and other forms of fraud,” warns Felipe García, lawyer and partner at the Círculo Legal Madrid firm. Both this generative AI system and others capable of creating highly realistic deepfakes underscore the urgency of reinforcing corporate security through compliance, a powerful tool for preventing these attacks.
The lawyer admits that this technology represents “a real problem for companies and citizens, which requires regulation of its use and development, especially in generative AI, limiting its use.” He gives an example: “Identity theft to open accounts and move significant amounts of money is becoming more frequent, and in many cases it ends with the victim in the dock, charged with fraud. Techniques such as facial spoofing or even voice imitation are used for this. If the use of this intelligence is not curbed, fraud problems are going to increase exponentially in the coming months. The EU must react fast, as must the Member States; as always, reality is ahead of regulation, and on many occasions ahead of the State Security Forces and Bodies, which also have little room for manoeuvre in many of the cases that pass through their hands.”

The importance of compliance

The solution to these threats lies in prevention and training. García insists that “companies must review their risk policies and, if they see that fraud may grow further, make the corresponding investments to prevent this type of irregular behaviour.” Indeed, almost 5% of digital transactions carried out in Spain in the first half of the year were suspected of fraud, according to TransUnion. For García, this investment must cover both technical and human resources.
“We must have teams led by people who promote specific training plans to prevent fraud in all its forms. Systems, models or programs are of no use if there are no people behind them involving the organisation in specific training actions against fraud in all its forms. Without a doubt, a key figure stands out as the main protagonist for organisations: the Compliance Officer. Who better to tackle these challenges?”
By the end of the year, the European Council is expected to reach an agreement on the launch of the EU Artificial Intelligence Act, a pioneering regulation that will establish obligations for providers and users based on the level of risk, and which looks set to put the EU at the forefront of regulating this technology.
This regulation is, in García’s opinion, “very necessary to regulate the development and use of AI, as a leading, disruptive and very useful technology, but with undoubted harmful potential.”
The spokesperson for Círculo Legal Madrid recalls that Spain will have a specific regulator: the State Agency for the Supervision of Artificial Intelligence (Aesia), a state body based in A Coruña whose entry into operation is scheduled for before the end of 2023. “It is going to implement a sandbox to test this regulation on the companies housed in this new controlled environment,” says García, adding that “AI is not just a data protection matter, given that it affects other fundamental rights. We will have to look very carefully at its implementation and at how it is coordinated with the existing privacy regulator, the Spanish Data Protection Agency (AEPD).”