Police are using big data to profile young people, putting them at risk of discrimination

In a recent report, Amnesty International has raised a series of human rights issues in connection with the “gang matrix” developed and run by London’s Metropolitan Police. According to the report, appearing on the database could affect the lives of 3,806 people, 80% of whom are between 12 and 24 years old.

Few specific details are available about how the matrix operates or how it is used by police. It exists, at least in part, to address the difficulty of policing gang activity across different districts. But it’s suspected that – because of government data sharing – appearing on the database will “follow” young people around, affecting their access to housing, education or work.

The Met said in a statement, “The overarching aim of the matrix is to reduce gang-related violence and prevent young lives being lost”, but added that it was working with Tottenham MP David Lammy, Amnesty International and the Information Commissioner’s Office to “help understand the approach taken”.

Discrimination through data

The first and most obvious issue is that the gang matrix appears to discriminate against ethnic minorities: 87% of the people listed on the matrix are from black, Asian and minority ethnic (BAME) backgrounds, and 78% are black. As Amnesty notes, this is clearly disproportionate: black people make up only 13% of London’s population, and 27% of those known by police to be responsible for serious youth violence.

Other concerns about discrimination have been raised in relation to police use of surveillance technologies. For example, UK research highlighted how early CCTV systems tended to focus disproportionately on minority groups, particularly young black males.

According to Amnesty’s report, there seems to be no clear process or criteria for deciding what a gang is, who is a gang member and who should ultimately be added to the matrix. Decisions are reported to be made differently across London boroughs, with little evaluation or scrutiny. Notably, The Guardian reports that 40% of young people on the matrix from the borough of Haringey had “zero” risk of causing harm.

If this is indeed the case, it directly conflicts with the human rights law prohibiting “arbitrary rights interference”. Essentially, if the state interferes with an individual’s rights, there should be a legal basis for that activity, to protect people from being treated differently based on arbitrary factors such as where they live or who reviews their case.

“Gang culture” is often confused with youth culture, and without such protections people may be targeted based on what clothes they wear, what music they listen to or even how they greet each other.


Such problems are also a feature of broader big data policing practices – for instance, when data is used to determine whether someone is at risk of reoffending. A US study by ProPublica found that an algorithmic risk assessment tool used in the criminal justice system was heavily biased against black people, while recent reports on the use of surveillance algorithms by the Los Angeles Police Department raised similar concerns.

When complex algorithms and big data sets are used to make decisions, it can be difficult to work out exactly what factors influenced that decision, and to what extent. Human biases are built into big data, from the way data are collected, down to the categories used to sort them. Biases are then replicated, amplified and reinforced – testing, transparency and a decision about whether it’s even appropriate to use an algorithm are therefore essential.
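To make the call for testing concrete, here is a minimal sketch – not from the report, using invented data and a hypothetical flag_rate helper – of one simple check: comparing how often an automated risk tool flags people from different groups, a basic measure of disparate impact.

```python
# Illustrative sketch only: invented data and a hypothetical helper.
# It compares how often a risk tool flags people from two groups –
# a basic disparate-impact check of the kind the paragraph above calls for.

def flag_rate(records, group):
    """Share of people in `group` that the tool flagged as high risk."""
    in_group = [r for r in records if r["group"] == group]
    flagged = [r for r in in_group if r["flagged"]]
    return len(flagged) / len(in_group) if in_group else 0.0

# Made-up outputs from a hypothetical risk tool.
records = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": True},
    {"group": "A", "flagged": False}, {"group": "B", "flagged": True},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": False},
]

rate_a = flag_rate(records, "A")
rate_b = flag_rate(records, "B")
print(f"Group A flag rate: {rate_a:.2f}")               # 0.67
print(f"Group B flag rate: {rate_b:.2f}")               # 0.33
print(f"Disparity ratio (A/B): {rate_a / rate_b:.2f}")  # ratios far from 1 warrant scrutiny
```

A real audit would of course go much further, examining the data the tool was trained on, the proxies it relies on and the decisions it feeds into, but even a check this simple illustrates why transparency about inputs and outputs matters.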

A lack of transparency

The third problem is the lack of transparency surrounding who is on the database, how they got there, how it affects their lives and the process for having their name removed from the matrix. Amnesty reports that being on the database may affect access to social services, education and work – this can have a huge impact on an individual’s life opportunities, especially for young people already up against the odds.

Transparency is also essential to protect against arbitrariness: if the public, researchers or other authorities don’t know what’s happening, they can’t challenge it and question whether particular forms of surveillance technology are being used appropriately – or if they should even be used at all.

Of course, it’s essential to take advantage of technological developments. If organisations such as the UN are using big data to pursue sustainable development, or if Amnesty is using technology to document human rights violations, it is appropriate that the police also assess how technology can assist their work. But it’s equally important that society does not rush to adopt new technology for its own sake: we must first understand any potential harms.

Technology should serve society, not undermine societal goals such as fairness and equality of opportunity. The gang matrix demonstrates the dangers to citizens that arise when human rights are not given due consideration. In the UK, it’s well established that policing should be by consent, and that good policing depends on good community relations.

Discrimination, arbitrariness and a lack of transparency can only lead to dissatisfaction, distrust and alienation from the state. When they occur, it’s not just human rights that are violated – the very goals the technology was deployed to serve are also undermined.

Daragh Murray, Lecturer in International Human Rights Law at Essex Law School, University of Essex and Pete Fussey, Professor of Sociology, University of Essex

This article was originally published on The Conversation. Read the original article.