Artificial intelligence and the negative consequences of the technology

The Council of Europe’s Expert Committee on Human Rights Dimensions of Automated Data Processing and Different Forms of Artificial Intelligence (“the Committee”) has published a study on liability arising from the use of digital technology and artificial intelligence (“AI”). AI is a rapidly developing area of digital technology that offers great opportunities but also raises new challenges. Piñera del Olmo highlighted some of these challenges in a previous article: https://www.pineradelolmo.com/artificial-intelligence-and-product-liability/

The Committee’s study examines the complex legal and ethical issues that arise from the use of digital technology and AI. More specifically, it focuses on threats to human rights and personal integrity, and on how liability should be allocated for risks to the human rights and fundamental freedoms protected by the European Convention on Human Rights.

The Committee concluded that the power of advanced digital technologies and systems should not be exercised without liability. In other words, human rights protection must be taken seriously in a global and connected age. Those who benefit from AI and other digital technologies must therefore also take responsibility for the technology’s negative consequences. Accordingly, each state should protect human rights by putting mechanisms in place that make it possible to hold actors responsible for the risks and damage arising from the use of advanced digital technology.

The study can be downloaded in PDF format, in English or in French, from the Council of Europe’s website.

Piñera del Olmo’s comment:

One example of AI being used as a tool is in advisory work. Piñera del Olmo sees great opportunities here for lawyers and for the rule of law. However, AI is a tool, not a panacea, and as with any tool, the person who uses it is responsible for it. For the user to be able to bear that responsibility, it is essential that the tools are transparent and that users understand them. A lack of understanding of the tools, and of the data on which they are based, leads to incomplete advice. In the case of a law firm, for example, this will naturally affect the client, who can hold the firm liable for negligent advice.

To reduce the risk of negligence when using AI as a tool, Piñera del Olmo recommends that you:

– Choose an established AI system with a small margin of error

– Follow the developer’s recommendations regarding human input and sample checks

– Keep staff trained in how the system is used and checked

To reduce the risk of clients misunderstanding the reliability of AI-generated results, Piñera del Olmo recommends that you:

– Agree with the client that AI will be used (for example, in due diligence)

– Clearly explain the limitations and risks of the AI system to the client in an accessible, educational way

If you want to know more about AI, do not hesitate to contact one of our lawyers.

Article by Lovisa Lundin

Disclaimer: The information in this article is not definitive; for up-to-date legal advice relevant to your case, please contact our lawyers.