How can we ensure fair and transparent use of artificial intelligence technologies?

Paris, France, May 2021 – On 5 May, Efus’ working group on Security and Innovation held a web conference on the use of artificial intelligence (AI) technologies, gathering online representatives of European municipalities, partners of the Cutting Crime Impact project, academics and researchers.

What safeguards against the pitfalls of surveillance technologies? 

Since the start of the web conference series of the Efus working group on Security and Innovation, we have discussed the opportunities and risks of using AI technologies in the domain of urban security and crime prevention. During sessions on predictive policing and facial recognition, we heard from experts as well as from cities about their perceptions, initiatives and challenges. AI-based technologies can offer an array of opportunities in the domain of urban security: facial recognition software can support the search for missing people and the identification and tracking of criminals, while crime prediction software can accelerate the processing and analysis of large amounts of data and help guide security authorities in their daily operations.

In this web conference we wanted to go a step further and not only weigh the ethical, legal and social implications of the use of such surveillance technologies but also discuss what safeguards exist. Luxembourgish local elected official Jana Degrott and Linda Van de Fliert from the City of Amsterdam’s Chief Technology Office discussed different uses of algorithms, the risks of discrimination, and why public control of AI needs to be facilitated [1].

Algorithms at the service of the municipality

Predictive policing and facial recognition are two of the most polarising use cases of artificial intelligence. Their legal, ethical and social implications are manifold [2]. When it comes to data selection, there is the risk that using historic crime data will produce automated decisions that reinforce discriminatory bias. The fact that algorithms are trained on massive amounts of such data makes transparency difficult – and with it the ability to understand where a decision comes from and to correct erroneous ones.
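
The mechanism behind this feedback loop is easy to demonstrate on invented data. The following Python sketch is a toy model, not any real system: two districts have identical true crime rates, but one starts with more recorded incidents because it was historically patrolled more heavily. Because patrols follow the records and the records follow the patrols, the initial skew never corrects itself.

```python
import random

random.seed(42)

# Toy simulation of a predictive-policing feedback loop (all numbers invented).
# Districts A and B have the SAME underlying crime rate, but A starts with
# more *recorded* incidents due to historically heavier patrolling.
true_crime_rate = {"A": 0.10, "B": 0.10}   # identical ground truth
recorded = {"A": 60, "B": 40}              # skewed historical records

for year in range(5):
    total = recorded["A"] + recorded["B"]
    # Patrols are allocated in proportion to past records...
    patrols = {d: round(100 * recorded[d] / total) for d in recorded}
    # ...and more patrols mean more incidents get observed and recorded.
    for d in recorded:
        recorded[d] += sum(random.random() < true_crime_rate[d]
                           for _ in range(patrols[d] * 10))
    print(f"year {year}: patrol share for district A = {patrols['A']}%")

# The initial recording skew persists year after year even though both
# districts are identical: the data reflects where police looked, not
# where crime happened.
```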

The use of surveillance cameras can impact the freedom of assembly and of association, as well as the right to non-discrimination: studies have found that error rates vary depending on gender and skin colour [3]. In addition, little research has been done on how facial recognition software performs for differently abled people.
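
This is why fairness audits evaluate such systems per demographic group rather than in aggregate. A minimal sketch of that disaggregated evaluation, using invented results and hypothetical group labels:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group label, correctly recognised?)
results = [
    ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", True), ("group_2", False), ("group_2", False),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct

# A single aggregate error rate hides the disparity...
print("overall error rate:", sum(errors.values()) / len(results))
# ...so fairness audits report the error rate separately for each group.
for group in totals:
    print(group, errors[group] / totals[group])
```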

Whether for financial reasons, a lack of need or ethical considerations, not every city will use or experiment with crime forecasting software or facial recognition. That doesn’t mean that artificial intelligence is not used in other instances. Jana Degrott pointed out that administrative decision-making may be facilitated by modern AI-based technologies. This can fast-track administrative processes and remove some sources of human error; however, there is also the risk that such AI-based systems will carry in-built biases drawn from existing biases in data collection and previous administrative work.

A human-centred approach

Linda Van de Fliert emphasised that Amsterdam focuses on a human-centred approach to new technologies. Use cases include tools that support the enforcement of rules against non-violent offences, such as an algorithm that prioritises reports of holiday rental fraud by analysing their credibility, so that the most pressing cases reach law enforcement officers first.
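
The real features and weights of Amsterdam’s tool are not public at this level of detail, so the following is only a hedged sketch of the general idea: score each report’s credibility from a few signals and work through the queue highest score first. All field names and weights below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FraudReport:
    report_id: str
    has_photo_evidence: bool       # hypothetical credibility signals;
    prior_reports_on_address: int  # the features actually used by the
    reporter_is_anonymous: bool    # city are assumptions here

def credibility_score(report: FraudReport) -> float:
    """Toy scoring rule: a higher score means a more credible report."""
    score = 0.0
    if report.has_photo_evidence:
        score += 2.0
    score += 0.5 * min(report.prior_reports_on_address, 5)
    if report.reporter_is_anonymous:
        score -= 1.0
    return score

reports = [
    FraudReport("R-1", True, 3, False),
    FraudReport("R-2", False, 0, True),
]
# Enforcement officers work through the queue highest score first.
for report in sorted(reports, key=credibility_score, reverse=True):
    print(report.report_id, credibility_score(report))
```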

Amsterdam’s crowd monitoring system Public Eye uses an algorithm to count the number of people in a given space, enabling administrative measures to prevent overcrowding or potentially dangerous crowd movements. Crowding data is also released to the public so that people can use it to plan their own movements, something of particular importance amid the ongoing health crisis. The project does not use facial recognition, but it may remain controversial because it nevertheless recognises and registers human forms.
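
The article’s description suggests a simple downstream logic: anonymous head counts per zone are compared against capacity to trigger measures. A minimal sketch of that step only, with invented zone names and thresholds that are not Public Eye’s actual configuration:

```python
# Hypothetical zone capacities in people -- not Public Eye's real settings.
CAPACITY = {"bridge_west": 120, "market_square": 400}

def crowding_level(zone: str, head_count: int) -> str:
    """Map an anonymous head count to a simple traffic-light signal."""
    ratio = head_count / CAPACITY[zone]
    if ratio < 0.6:
        return "ok"
    if ratio < 0.9:
        return "busy"
    return "overcrowded"  # would trigger administrative measures

# Counts come from the camera algorithm: no faces or identities are kept,
# only a number per zone, which can also be published to the public.
for zone, count in [("bridge_west", 115), ("market_square", 180)]:
    print(zone, count, crowding_level(zone, count))
```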

EU guidelines for trustworthy AI

A recurring criticism of AI technology is the opacity of algorithms – their “black box” character, which makes the causal relationship between input and output difficult to trace and thus to contest if needed. Without an understanding of where a decision comes from, public trust in the technology will remain limited. Linda Van de Fliert pointed out how important trust is in maintaining democratic freedoms and the rule of law. Although there are EU Ethics Guidelines for Trustworthy AI, they offer little detail on practical implementation and therefore leave cities with many open questions about best practice.

In April 2021, the European Commission published a proposal for a regulatory framework on the use of AI. This framework was conceived as a response to insufficient existing legislation and sets out rules to enhance transparency and minimise risks to fundamental rights. The document focuses on high-risk AI systems, including, among others, the use of crime forecasting software and facial recognition in urban spaces. Such high-risk uses may only be put in place if they fulfil a number of requirements, such as the use of high-quality datasets, the establishment of appropriate documentation to enhance traceability, the sharing of adequate information with the user, and the design and implementation of appropriate human oversight measures [4].

Trust and transparency

The cities of Amsterdam and Helsinki have developed instruments to foster trust and transparency. Procurement conditions for AI technology define what is meant by transparency, both technical and procedural, in order to ensure explainability for citizens. Suppliers are required to provide information on the assumptions and choices made in the development of their algorithms, and to clearly outline the measures taken to ensure the integrity of the datasets. The Algorithm Registers [5] used in Amsterdam and Helsinki are websites accessible to all citizens that explain how each algorithm works, what data it uses and what risk management plans are in place to mitigate potential discrimination.
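
What such a register entry holds can be sketched as a simple data structure. The field names below are assumptions based on the article’s description, not the actual schema used by Amsterdam or Helsinki:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmRegisterEntry:
    """Illustrative register entry; field names are assumptions,
    not the cities' actual schema."""
    name: str
    purpose: str                      # what the algorithm is used for
    data_sources: list[str]           # what data it processes
    human_oversight: str              # who reviews or overrides decisions
    non_discrimination_measures: list[str] = field(default_factory=list)
    contact: str = ""                 # where citizens can ask questions

entry = AlgorithmRegisterEntry(
    name="Holiday rental report prioritisation",
    purpose="Rank citizen reports of suspected rental fraud by credibility",
    data_sources=["citizen reports", "rental permit records"],
    human_oversight="Enforcement officers decide on every individual case",
    non_discrimination_measures=["no personal characteristics as features"],
    contact="algorithm-register@example.org",  # placeholder address
)
print(entry.name, "-", entry.purpose)
```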

Towards an operationalisation of transparency requirements

The AI registers of Amsterdam and Helsinki are one example of how to communicate with users and be transparent about potential risks. Linda Van de Fliert pointed out that the right balance must be struck between communicating with citizens and overwhelming them with information, which would create a veneer of transparency rather than real communication about the risks and opportunities of an algorithm. The format of the interface plays an important role in striking this balance: even users without technical expertise should be able to understand the reasoning behind AI-driven decisions. Representatives of cities and other local authorities must also be included in the development and innovation processes of new AI tools, sharing their on-the-ground needs and best practices for encouraging community engagement and acceptance.

Jana Degrott pointed out that, especially at the beginning of an algorithm’s development, comprehensive work must be done to ensure diversity and fair representation in datasets and algorithms, to prevent biases from becoming ingrained in these systems: “If we don’t fix the bias, we just automate the bias.” The panels and leaders responsible for developing and implementing these new AI-based technologies should be representative of the population, including vulnerable groups and minorities. She further suggested that the views of activists should be considered in the planning process, since this group is well placed to offer unique insights, particularly when it comes to public concern and public acceptance.

> Stay up to date on new developments in urban security & technology by following our working group on Efus Network


[1] Meeri Haataja, Linda van de Fliert and Pasi Rautio, White Paper on “Public AI Registers – Realising AI transparency and civic participation in government use of AI”, available from: https://algoritmeregister.amsterdam.nl/wp-content/uploads/White-Paper.pdf

[2] Additional information can be found in the CCI factsheets on predictive policing and the Efus factsheet on facial recognition, available here and here.

[3] Tom Simonite, “The Best Algorithms Struggle to Recognize Black Faces Equally”, Wired, 2019, available from: https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/

[4] European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence, April 2021, available from: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence

[5] A video that outlines how the Amsterdam AI register works can be found here: https://vimeo.com/469737917