Artificial Intelligence and Privacy

Simply mentioning “artificial intelligence” conjures up images of dangerous robots destroying the world. Fortunately, reality shows that the current uses of this technology are far more positive. In fact, the progress that some sectors, such as health or agriculture, are experiencing in this field could bring great benefits to humanity in the long term.

When we speak of artificial intelligence (AI), we often refer to simple algorithms with a very specific function previously determined by a human being. This happens, for example, in some AI tools used in finance: machines process huge amounts of information to identify patterns that allow financial predictions to be made. This is one of many examples in which AI processes information within a very narrow field of action.

However, the state of the art also allows the creation of software capable of making its own decisions, without these having been foreseen by its developers. This was the case, for example, of a project launched by a technology giant some time ago, which aimed to optimize the way chatbots interact with humans. Two chatbots were set to interact with and learn from each other. In the process, however, the chatbots concluded that human language was not the most efficient way of communicating and, before long, created a new language and started communicating in it, which was, of course, unintelligible to humans.

The possibility that machines can make their own decisions makes compliance with privacy regulations significantly more difficult. We are familiar with the use of “automated decisions” in some industries, such as the insurance sector. But let us go further and imagine an AI tool that, while performing research activities of any kind, decides it should take into account more categories of data than initially established in order to achieve better results. For example, although it initially only accessed location, sex and interests, it autonomously decides to also process data on people’s sexual orientation. This could completely change the scope of the processing activity from a privacy point of view and even invalidate the data protection measures implemented in the first place.
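One common safeguard against this kind of scope creep is to enforce an allow-list of data categories at the point where the tool reads its input, so that any attempt to widen the scope fails loudly rather than silently. A minimal sketch of the idea follows; the field names and the sample record are purely hypothetical:

```python
# Hypothetical sketch: enforcing the approved processing scope with an
# allow-list. ALLOWED_FIELDS and the sample profile are illustrative only.

ALLOWED_FIELDS = {"location", "sex", "interests"}

def minimize(record: dict) -> dict:
    """Drop any field the processing activity was not scoped for."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def request_field(field: str) -> None:
    """Refuse at runtime any attempt to widen the scope autonomously."""
    if field not in ALLOWED_FIELDS:
        raise PermissionError(f"Field '{field}' is outside the approved scope")

profile = {
    "location": "Madrid",
    "sex": "F",
    "interests": ["cycling"],
    "sexual_orientation": "not relevant",  # filtered out before processing
}
print(minimize(profile))  # only the three approved fields survive
```

With this pattern, a decision by the tool to consult a new category of data becomes an explicit, auditable event rather than a silent change to the processing activity.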

It is therefore necessary to look at the development of AI at a global level and to anticipate the implications it may entail in the future. To this end, below are the five key privacy points that should be taken into consideration in any project. Of course, comprehensive data protection compliance involves many more issues, but I believe these five points are decisive and will have a major impact on determining the validity of AI projects in the future.

  1. Privacy by design. If, when constructing a building, the architect plans the placement of an elevator from the beginning, it is easy to build the building with this element. Installing an elevator in an old building in London's city centre, however, is normally difficult and expensive. The same happens with any tool or computer application: if privacy is taken into account from the outset, any needed components can easily be included to ensure compliance. To this end, conducting a Data Protection Impact Assessment before starting the activity can play a key role.
  1. Data minimization. Limiting from the beginning the amount and type of data that can be accessed by the AI tool is fundamental, not only to comply with a basic principle of privacy, but also to ensure that the scope of the processing activity cannot be substantially varied by the tool, with the implications that this would entail.
  1. Transparency. It is essential to inform users about who will process their personal data, how and why. I do not know of any privacy regulation anywhere in the world that does not require being transparent with individuals about how their personal information is used. Transparency is a fundamental axis when dealing with personal data and, in the field of disruptive technologies, it is not only mandatory but also especially relevant at an ethical level.
  1. Lawfulness. The legitimate bases are a series of situations, defined in Article 6 of the General Data Protection Regulation (GDPR), in which the processing of personal data is permitted. One of them is obtaining the individual's consent, but there are five others that may apply in certain cases. In addition, some exceptions allow the processing of special categories of personal data (Art. 9 GDPR). Having a valid legal basis for the processing of data is fundamental and not always easy to justify in certain projects involving the use of AI technologies.
  1. Security. An organization will always be responsible for the security of the information it processes. Just as money is invested in creating innovative tools, it must also be invested in making sure they are secure. Numerous security incidents have affected the personal data of millions of people and have often resulted in significant fines. Robust security measures are therefore essential for any AI project involving personal data.
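As a minimal illustration, the five points above could be tracked per project as a simple record that is filled in before processing starts. Everything here is a hypothetical sketch, not a legal template; the field names and checks are assumptions of mine:

```python
from dataclasses import dataclass

# The six lawful bases listed in Art. 6(1) GDPR, as short labels.
ART6_BASES = {
    "consent", "contract", "legal_obligation",
    "vital_interests", "public_task", "legitimate_interests",
}

@dataclass
class ProcessingRecord:
    purpose: str
    data_categories: list     # data minimization: keep this list short and fixed
    legal_basis: str          # lawfulness: must be one of the Art. 6 bases
    privacy_notice_url: str   # transparency: where individuals are informed
    dpia_done: bool           # privacy by design: assessment before launch
    encryption_at_rest: bool  # security: one example measure among many

    def compliant_on_paper(self) -> bool:
        """A coarse pre-launch check; real compliance needs legal review."""
        return (
            self.legal_basis in ART6_BASES
            and self.dpia_done
            and self.encryption_at_rest
            and bool(self.data_categories)
        )
```

A record like this does not make a project compliant by itself, but it forces each of the five questions to be answered explicitly, and in writing, before any personal data is touched.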

Written by Jose Caballero Gutierrez
Lawyer specialized in IT, privacy and media. Associate at Promontory (UK). Previously at PwC and ECIJA. Writing about law, internet, strategy and innovation.