An interesting proposal launched in April by the Commission relates to a regulation laying down rules on the use of Artificial Intelligence (AI). Among its stated objectives, the Commission clearly aims to ensure the safety of such systems and respect for individuals' fundamental rights. The regulation would apply to both providers and users of such systems, irrespective of where they are located, in so far as the systems affect natural persons located in the European bloc.
The regulatory framework for AI takes a risk-based approach to classifying AI systems, envisaging systems that create:
- unacceptable risk;
- high risk;
- low or minimal risk.
Depending on this classification, different requirements apply. AI systems that pose an unacceptable risk, including social scoring and manipulative systems that cause harm, would be prohibited outright. Under the proposal, the evaluation of creditworthiness or the establishment of a credit score by AI systems is listed (Annex III to the proposal) as a high-risk activity. This particular aspect will thus have a direct impact on those banks which currently use, or plan to use, such systems in their processes.
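The tiered logic described above can be sketched in code. The mapping below is purely illustrative: the tier names follow the three categories listed earlier, the example use cases (social scoring, credit scoring) come from this article, and the default tier and lookup function are assumptions for the sketch, not part of the proposal's actual legal test.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"    # prohibited, e.g. social scoring
    HIGH = "high"                    # Annex III use cases, e.g. credit scoring
    LOW_OR_MINIMAL = "low/minimal"   # no tier-specific obligations

# Illustrative keyword mapping only; the proposal defines scope in its
# articles and Annex III, not by keyword matching.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "creditworthiness evaluation": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
}

def classify(use_case: str) -> RiskTier:
    # Hypothetical helper: unknown use cases default to the lowest tier.
    return USE_CASE_TIERS.get(use_case, RiskTier.LOW_OR_MINIMAL)
```

For example, `classify("credit scoring")` returns the high-risk tier, which is what triggers the provider and user obligations discussed next.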
Where a bank is a provider of a high-risk AI system, a number of requirements must be adhered to, including a sound risk management system, appropriate logging records, detailed documentation, effective human oversight, and high levels of robustness and cybersecurity. Equally, where a bank is a user of a high-risk AI system, a number of obligations will still apply, including human oversight and monitoring of the AI system's operation.
Though the regulation is still at the proposal stage, banks are encouraged to be proactive and identify any use of such systems in their processes. Where such systems are in use, it is important that banks start planning the infrastructure needed to support these requirements. The same applies to institutions that intend to adopt such technology in the future.