Anika Gray and Urmila Pullat | Privacy, distrust and big data

Published: Sunday | June 14, 2020 | 12:16 AM

When we think of countries sharing common problems, we do not readily think of India and Jamaica. One is a populous, multifarious behemoth, home to 1.3 billion people, while the other is a tiny island, the home of reggae and barely 2.7 million people. Both are, however, former British colonies that inherited colonial systems of policing – institutions less concerned with protection and service and more with using force to control people. This policing philosophy persists in these post-colonial societies and has contributed to abuses of power, deeply entrenched corruption, and a profound distrust between the police and the populace. The introduction of big data and predictive policing technology into these toxic environments is a cause for worry. While these tools may well contribute to good policing outcomes, without adequate research and protections in place, they can also exacerbate deep-rooted issues in society.

Predictive policing methods have been in existence for some years now, and while there is not one fixed definition of what the term means, it has come to signify certain types of data-oriented or data-based methods used to identify geographical crime hotspots or locate individuals likely to commit crimes. (Think PredPol and CrimeScan in the US.)

In India, AI-based data analytics systems such as the Crime Mapping Analytics and Predictive System (CMAPS) and the Punjab Artificial Intelligence System (PAIS) have emerged as front runners in India’s move towards ‘smart policing’ and use of big data. In Jamaica, the National Identification System (NIDS) was poised to serve as the launching pad for the use of big data in policing, but the courts struck it down for inconsistency with the Constitution. Undoubtedly, these technologies might be able to improve crime-fighting, but they run the risk of helping to create police states. In 2018, the Punjab police reportedly used face-recognition technology in the PAIS – a database with more than 100,000 records – to track down an alleged gangster, who was later killed by the police under questionable circumstances.

Using big-data technology to collect and analyse private information about citizens and then predict, identify, and surveil those who are deemed potential lawbreakers raises serious concerns for privacy rights. The courts in both India and Jamaica have ruled that the right to privacy has, at minimum, three dimensions: privacy of person, informational privacy (the right to control the dissemination of one's personal information), and privacy of choice. But under what circumstances should protection of individual privacy rights trump the public interest in controlling and preventing crime? Judgments from Indian and Jamaican courts tell us that big-data policing must strictly respect the individual's right to privacy, should be limited to protecting a specific overriding public interest, and, even in those cases, must be accompanied by a robust data-protection regime.

Police in both countries also grapple with existing distrust of law enforcement – particularly in marginalised communities. These communities have legitimate reasons for their mistrust. Jamaican and Indian security forces have for years engaged in extrajudicial killings, torture, and over-policing of these communities. In Jamaica, the killing of seven unarmed young men in Braeton, the Tivoli Gardens massacre in 2010, and the most recent revelation of a ‘police death squad’ are all examples of the deadly nature of law-enforcement abuse of power. In India, in 2019, Telangana police extrajudicially killed four alleged rapists, claiming that they were trying to flee. These problems are not limited to India and Jamaica; they are examples of what happens, and can happen, in broken justice systems around the world.

Predictive policing cannot fix what is already broken. The distrust of law enforcement allows crime to flourish in marginalised communities, but the mere application of technology and data-based analytical systems cannot, without more, solve the deep institutional and cultural issues that already exist in beleaguered police forces. Moreover, algorithm design for AI-based and big-data systems can reinforce biases that could then lead to self-fulfilling criminal behaviour patterns in already marginalised communities or geographical locations. Racial and social discrimination can therefore increase under the guise of using technology to fight crime.

The use of big-data technology can build the community’s trust in the law if it is used to treat individuals fairly, within the norms of procedural justice. Fairness primarily involves allowing citizens to have a voice, as well as treating all citizens impartially and with dignity and respect. However, procedural justice cannot be the only factor to build trust. According to Yale Law School Professor Monica Bell, improving the trust between the marginalised and the police is not about addressing compliance but dealing with alienation, or ‘legal estrangement’. Her research demonstrates that many persons from marginalised communities comply with the law but still feel alienated and unprotected by the police because of procedural injustices and structural exclusion.

The use of technology to further alienation was on display in India during nationwide protests against the controversial Citizenship Amendment Act (CAA) – police forces in different states used the opportunity to film protesters and, apparently, deployed facial recognition software to identify dissenters, ‘miscreants’, and others involved in ‘anti-national’ activities in the country. Indian police used this technology to crack down on dissent and book leaders of the protests on harsh charges. The actions of the Indian police, aided by technology, can only serve to deepen distrust and widen social chasms.

The utilisation of big-data policing in environments like Jamaica and India – marked by institutional deficiencies in the police force, deep structural inequalities, and entrenched mistrust of the police – requires further and ongoing examination. It is always tempting to rely on tech-based tools to improve and aid policing and crime prevention. However, scarce monetary resources must not be allocated with abandon to big-data policing technology without a holistic and evidence-based understanding of its strengths and weaknesses in specific contexts.

- Anika Gray is a lawyer and lecturer at the Faculty of Law, The University of the West Indies, Mona. Urmila Pullat is a lawyer and researcher based in India. Both authors work on criminal justice, law, and policy issues.