AI responsibility and its ethical and legal aspects

Martin Průcha, 22. 03. 2024


As most AI technologies are developed by businesses, ethical decisions about their use currently lie primarily with those businesses. A well-known example is the dispute between two camps within OpenAI: one side pushes for faster deployment and commercialisation, while the other points to possible risks and would prefer a slower approach.

A similarly cautious approach seems to be taken by Microsoft, which has produced an entire document on AI responsibility. The company manages AI issues centrally, with executives ultimately responsible for striking a balance between innovation and ethics in AI. At the same time, it also nominates experts from engineering and field teams as ‘Responsible AI Champs’, who act as advisors on responsible AI without leaving their primary roles.

Microsoft emphasises identifying and mitigating AI risks in the early stages of development rather than relying solely on ethics compliance checks at the end of a project, and it backs this up with responsible AI standards that teams must follow.

However, we are still talking about companies whose primary interest is maximising profit, so it is no surprise that the European Union once again wants to have a say on the topic. While there is as yet no directive on legal liability for AI systems, and the AI Act touches on the topic only peripherally, the first proposals are already emerging. In the proposal for the ‘Artificial Intelligence Liability Directive’, the European Parliament takes a people-centric approach to regulating artificial intelligence (AI), the essence of which is that AI should be only a tool for improving people’s lives.

The main idea behind the emerging directive is to set out rules under which manufacturers are liable for damage caused by a defect in their product, in this case an AI system. It would build on the existing European Product Liability Directive (PLD) and would mean that, in the event of damage caused by AI, the injured party can sue the company, but only if they prove both the damage and the causal link.

The problem is that AI is already such complex software that causation can be difficult to establish. Because AI acts to some extent like a “black box”, it can be nearly impossible for victims to identify and prove the fault of the potentially responsible person or company. The European Commission therefore wants to introduce a presumption of a causal link between the AI system and the harm caused, which would ease the burden of proof for victims. The injured party would still have to substantiate this causal link, while the company would have to provide documentation of the system in question. The defendant can, of course, rebut the presumption, for example by showing that its fault could not have caused the harm.

Even so, according to a 2020 EU survey, 33% of businesses see the challenge of accepting liability for potential damage caused by an AI system as the main obstacle to the development of AI in Europe. The resistance is understandable, as the new rules would apply to damage caused by an AI system regardless of whether that system is classified as high-risk or not.

It is no wonder that the proposed directive already has its critics, ironically on both sides. BEUC, for example, criticises the proposal for placing too much of the burden on consumers to prove that the operator is at fault, arguing that victims should have more options. The App Association, on the other hand, points out that the rules are more likely to harm businesses and effectively lead to extensive liability claims and unnecessarily higher insurance costs.

Author: Oldřich Příklenk

Picture: AI

 

