Interview

The European Union AI Act and Fintech

Hristo Iliev


Author: Hristo Iliev, PhD – Chief Data Scientist at NOTO

Hristo is a high-performance computing specialist, software developer, systems administrator, and data science enthusiast with a doctoral degree in atomic and molecular physics from the University of Sofia. He specialises in data analytics and predictive modelling at scale and in the development and performance tuning of parallel applications, and is currently the only Stack Overflow user to hold gold badges simultaneously for the two major parallel programming paradigms – OpenMP and the Message Passing Interface (MPI).

Before venturing into the startup world, Hristo worked for six years at RWTH Aachen University, where he optimised various scientific and engineering applications, taught message passing to users of the university supercomputer, and co-organised and co-edited the proceedings of the first HPC symposium of the Jülich Aachen Research Alliance. After leaving academia, he became the principal data scientist of the Dutch startup holler.live, where he developed, within the EU-funded DataPitch programme, an interactive advertising solution for IPTV as well as the company’s internal product analytics.

The world is being swept up in a tidal wave of change brought on by Artificial Intelligence (AI) and Machine Learning (ML). AI brings answers to previously intractable problems, such as computer vision or machine translation, by replacing traditional, pre-formulated logic with the autonomous extraction of patterns from huge quantities of data.

Watching transformer models such as OpenAI’s GPT-3 produce computer programs or synthesise striking pictures from short verbal descriptions is often breathtaking and may appear more like magic than computer science. However, when it comes to making critical business decisions, such as assessing client risk, rejecting high-risk transactions, or recommending medical treatment, magic should be avoided at all costs. Recognizing the significant differences between traditional software engineering and data-centric AI solutions, the potential for harm when AI is misused, and the varying enthusiasm among EU member states to adopt or even allow AI-based solutions on the market, the European Commission drafted a new regulation, the EU AI Act, which aims to become the next General Data Protection Regulation (GDPR) in terms of how it affects not only the EU market but also legislation far beyond the EU. The Act, which introduces a sophisticated ‘product safety framework’ built around a set of four risk categories, has received both praise and harsh condemnation from all factions.

What are the primary pillars of the AI Act, and how does it affect the deployment of innovative AI-based solutions in the regtech and fintech space, based on our understanding of the current draft text?

Risk-Based Classification
The landscape of AI algorithms and solutions is vast, and writing a comprehensive regulation by naming specific instances is unrealistic. Instead, the AI Act takes a risk-based approach, using the somewhat ambiguously defined impact on the wellbeing and fundamental rights of EU citizens to categorize AI solutions as low, medium, or high risk. While low-risk uses are largely left to self-regulation, medium- and high-risk uses face increasingly stringent prohibitions and compliance requirements. Some use cases are deemed to pose such an enormous risk that they are explicitly prohibited, including social scoring systems and subliminal manipulation techniques. As a commercial regulation, the legislation excludes military and pure research applications of AI, which are governed by separate legal frameworks.

The practical implementation and enforcement of the regulation, as with GDPR, is delegated to national competent authorities under the umbrella of a European Artificial Intelligence Board. Unlike with GDPR, however, those authorities are given considerable powers for direct market intervention, such as the ability to demand the suspension and/or destruction of an AI system (or model).

High-Risk Applications
Any use of AI that could have a major negative impact on the health, safety, or fundamental rights of the individual EU citizen is deemed high risk. AI-based security components of specifically regulated products are automatically classified as high-risk applications. Furthermore, an annex that forms an inseparable part of the AI legislation establishes a set of currently eight recognized high-risk usage areas; the list will be evaluated and updated on a yearly basis. At the moment, these domains are: biometric identification; the operation and management of critical infrastructure; education and vocational training; employment management and access to self-employment; access to and enjoyment of essential public and private services and benefits; law enforcement; the management of migration, asylum, and border control; and the administration of justice and democratic processes.

A thorough set of compliance criteria is put forth for high-risk AI systems. A risk management process, with procedures described in the regulation, is imposed and must be maintained throughout the application’s lifespan. The data used to train, test, and validate the models is subject to a set of standards: the datasets must be complete and free of biases and errors, which can be difficult to accomplish in practice, particularly for larger datasets, where it is nearly impossible to eliminate errors. The entire data pipeline must be documented and will be subject to robust governance mechanisms.
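As a purely illustrative example of what such data checks might involve in practice, the minimal Python sketch below (using pandas) computes a few simple completeness and bias indicators for a hypothetical training table. The column names ("approved", "gender") and the toy data are invented for the example; real conformity assessments under the Act would go much further.

import pandas as pd

def basic_dataset_checks(df: pd.DataFrame, outcome: str, protected: str) -> dict:
    # Illustrative completeness and bias indicators for a training set.
    # 'outcome' and 'protected' are hypothetical column names.
    return {
        # Completeness: share of missing values per column
        "missing_ratio": df.isna().mean().to_dict(),
        # Duplicate rows can silently skew the learned patterns
        "duplicate_rows": int(df.duplicated().sum()),
        # Crude bias indicator: outcome rate per protected group
        "outcome_rate_by_group": df.groupby(protected)[outcome].mean().to_dict(),
    }

# Toy example
df = pd.DataFrame({
    "income":   [30_000, 45_000, None, 52_000],
    "gender":   ["F", "M", "F", "M"],
    "approved": [0, 1, 0, 1],
})
print(basic_dataset_checks(df, outcome="approved", protected="gender"))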

Critiques
One of the primary criticisms of the AI Act is that it is heavily based on existing EU legislation governing the creation and placement of physical goods on the market; as a result, the roles it specifies and the responsibilities it assigns to those roles are a poor fit for how AI systems are produced and deployed in practice. The Act is seen as imposing unnecessary obligations, for example by requiring service providers who make their AI systems available to other corporations to treat the latter as the systems’ “manufacturers.” However, if a pre-trained model is misapplied or manipulated in any way by a corporate user of a third-party technology, for example by feeding it extra training data, the onus remains with the original developer. Despite the difficulties in defining such a role, it has been proposed that a category of “deployers” be added to the Act.

The inclusion of a fixed list of prohibited AI uses in the body of the regulation itself has been criticised as inflexible, because it makes the list difficult to amend quickly, for example to incorporate newly emerging harmful technologies. Furthermore, many consider the list arbitrary.

Impact on the Fintech Sector
With regard to the AI Act, the financial industry sits in a murky middle ground. While some applications of AI in finance, such as credit risk assessment, are expressly listed as high-risk uses, this is done in the context of access to essential services, such as housing and utilities, with the goal of preventing discrimination. The remaining uses are supposedly governed by the corresponding financial regulations and fall under the Act’s catch-all Article 69, which encourages the development of codes of conduct and the voluntary application of the heightened requirements to lower-risk AI systems.

A concerning amendment to the still-unfinished text has been submitted, which broadens the prohibition on social scoring systems to cover private businesses exploiting “social or economic status.” These shifts may have unanticipated and far-reaching ramifications for fintech, because AI-based fraud protection, for example, relies heavily on information indicative of the subject’s economic status. Without such a critical predictor, AI models would have to fall back on more general information, reducing their effectiveness: model sensitivity would drop, and the rate of fraud or of false positives would rise, as the toy example below illustrates.
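To make the argument concrete, here is a minimal, purely illustrative Python sketch (using scikit-learn on synthetic data, not any real fraud model) showing how removing an economic-status proxy from the feature set can collapse the sensitivity (recall) of a fraud classifier. All variable names, coefficients, and data are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic features: one proxy for economic status plus two generic signals.
economic_status = rng.normal(size=n)
generic_1 = rng.normal(size=n)
generic_2 = rng.normal(size=n)

# Fraud probability driven mostly by the economic-status proxy.
logits = 2.0 * economic_status + 0.5 * generic_1 + 0.3 * generic_2 - 3.0
fraud = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X_full = np.column_stack([economic_status, generic_1, generic_2])
X_reduced = X_full[:, 1:]  # the economic-status proxy removed

for name, X in [("with economic status", X_full), ("without economic status", X_reduced)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, fraud, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print(name, "recall:", round(recall_score(y_te, model.predict(X_te)), 3))

On this synthetic data, the model that still sees the economic-status proxy typically catches a noticeable share of the fraudulent cases at the default decision threshold, while the reduced model flags almost none of them, mirroring the loss of sensitivity described above.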

The Act also poses operational hazards. It empowers competent authorities to order the withdrawal of a model from the market, its effective destruction, or its retraining. While GDPR permits individuals to request that their data be deleted, such a request does not necessarily affect models developed with that data. The AI Act changes this: market surveillance authorities would be able to order that high-risk models be retrained without the deleted data in their training sets.
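A minimal sketch of what honouring such an order could look like operationally, assuming the training data carries a per-record subject identifier; the column names, the toy data, and the erased IDs below are hypothetical.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training table with a per-record subject identifier.
training_data = pd.DataFrame({
    "subject_id": [1, 2, 3, 4, 5, 6],
    "amount":     [120.0, 15.5, 980.0, 42.0, 310.0, 805.0],
    "is_fraud":   [0, 0, 1, 0, 0, 1],
})

# Subjects whose data must no longer be used (e.g. after an erasure request).
erased_subjects = {3}

# Retrain strictly on the remaining records, leaving the erased data out.
remaining = training_data[~training_data["subject_id"].isin(erased_subjects)]
model = LogisticRegression().fit(remaining[["amount"]], remaining["is_fraud"])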

The AI Act would help with a number of issues, including fraud prevention, discrimination reduction, and limiting surveillance capitalism. Awareness on the part of all parties involved, from the very beginning of the development process, is necessary if AI is to be responsible and trustworthy. Our collective technological choices will determine the form of our future civilization. AI impact and compliance evaluations, industry standards, technological roadmaps, and codes of conduct are vital instruments for raising this level of awareness.


About NOTO

Notolytix Ltd. was founded in 2015 by a group of fraud prevention and IT veterans from global companies like Groupon, Paysafe, and Rakuten.

NOTO is an enterprise-grade solution designed to address all financial crime threats. NOTO is a data-agnostic and uniquely flexible solution that empowers its users to efficiently combat fraud and money laundering across any vertical or industry. NOTO delivers unsurpassed ROI and truly global capabilities.

One simple integration helps companies transform their approach to fraud, compliance and risk management in any sector or vertical. 

To learn more about NOTO, visit About NOTO