
AI for the police and the judiciary - no compromises when it comes to the protection of fundamental rights

Our security and judicial authorities need to be brought up to date with the latest technology. Intelligent software systems can ease the workload of the police and the judiciary and significantly improve the quality of their work. However, their use must be carefully weighed, and the strictest possible safeguards against any kind of abuse or violation of civil rights must be mandated.

Because one thing is clear to me: human and civil rights are non-negotiable - especially when government agencies use new technologies. We need a strong defensive wall within the EU against the misuse of artificial intelligence of the kind practiced in many authoritarian regimes.

How exactly the use of artificial intelligence in criminal law and by the police and judiciary will be regulated is a question we are currently addressing in the European Parliament. As the responsible Renew Europe shadow rapporteur in the Internal Market and Consumer Protection Committee, I am working to ensure that the fundamental rights of EU citizens remain protected.

In September 2020, we will vote in the European Parliament on our committee's opinion on the subject. It is particularly important to me that humans continue to make all final decisions in criminal matters. Software support can be very helpful, but it must remain under human control, and no one may be suspected or convicted solely on the basis of a machine decision. Accurate, non-discriminatory data sets are of the utmost importance. Algorithms must be accessible to and verifiable by the relevant oversight bodies, and AI used by police and law enforcement agencies should be published under an open-source license. This would not only increase data security and people's trust in the software, but also promote innovation.

Using AI for law enforcement poses a high risk to our civil rights in many areas. This applies, for example, to automatic facial recognition software, whose widespread use in public spaces I firmly reject. In this and other high-risk areas, such as the so-called scoring of citizens, we are calling on the European Commission to submit regulatory proposals that permit use only under clearly defined conditions and prohibit it entirely in all other cases.

For the protection of fundamental rights in particular, it is crucial to develop EU-wide requirements that cannot be undermined by individual member states. A closely coordinated exchange between national authorities on the use of AI in law enforcement can help and should definitely be encouraged.

In the upcoming legislative process at EU level, I will continue to advocate for the protection of civil rights and insist that data protection, non-discrimination and the protection of privacy are fully respected when the police and the judiciary use AI.
