Artificial intelligence will not make police body-worn cameras more effective

AI won’t fix the systemic problems of policing, but better policies and legislation might

Canadian Politics · Policing · USA Politics

A Minneapolis Police Department officer wears a body camera while responding to a call on October 25, 2019. Photo by Tony Webster/Flickr.

Earlier this month, the Ann Arbor Police Department announced it would be the first in Michigan to use artificial intelligence to analyze its officers’ body-worn camera footage. Twenty police departments across the United States already use AI technology for this purpose.

Law enforcement has used AI for years, but the technology has only recently been incorporated into police body camera programs.

In July, the Winnipeg Police Service announced plans to acquire AI software to analyze video, although the service does not yet have body cameras. However, “it is not a matter of if, but when” police officers in the city will get them, according to the chair of the Winnipeg Police Board.

Blue Line, Canada’s national law enforcement magazine, has expressed enthusiasm for the use of AI to analyze body camera footage: “More police today wear body cameras to provide ‘vision intelligence’ but without video analytics. With AI all this data can undergo intense analytics.”

Body cameras were intended to reduce use-of-force incidents and bring more accountability to policing, but the evidence to support these claims is inconsistent. Integrating AI, in its current configuration, is unlikely to improve the purported effectiveness of body-worn cameras.

The efficacy of AI analysis of body-cam footage is unknown. It is also costly: body cameras in some jurisdictions already run taxpayers millions of dollars every year, and the addition of AI analytics tools can cost as much as $250,000 more annually.

A major player in the body camera analysis industry is Chicago-based company Truleo, which launched in October 2021. Truleo claims its proprietary AI technology can detect use of force incidents and says it “screens for both professional and unprofessional officer language to enable supervisor recognition or review.”

Police officers have been using both verbal and non-verbal signals for years to warn other officers when their body-worn cameras are turned on “to prevent or halt the recording of misconduct.” “I went Hollywood” is one verbal cue used by officers for this purpose, according to a watchdog agency.

AI audio detection technology would surely miss non-verbal signals designed to conceal police misconduct. But what about its supposed ability to detect wrongdoing through spoken words? Digital tools would have to be programmed to recognize the code words used by officers before an incident happens.

Just as officers have learned to avoid the surveillance of body cameras, we can expect that some officers will do the same with AI language detection by using modified terms to signal to their colleagues that their cameras are recording. What’s more, AI analysis of body-cam footage will be imperfect because of algorithmic bias.
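To see why this kind of evasion is so easy, consider a minimal sketch of phrase-based flagging. This is an illustrative assumption, not Truleo’s actual method, which is proprietary: the phrase list, the `flag_transcript` function, and the sample transcripts below are all hypothetical. A classifier that screens transcribed speech against a fixed list of flagged phrases will catch a known code word but miss any modified variant.

```python
# Illustrative sketch only; Truleo's actual models are proprietary.
# Assumes a naive keyword-list classifier over transcribed audio,
# to show why fixed phrase lists are trivially easy to evade.

FLAGGED_PHRASES = {
    "i went hollywood",  # code word reported by a watchdog agency
}

def flag_transcript(transcript: str) -> list[str]:
    """Return any flagged phrases found in a lowercased transcript."""
    text = transcript.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in text]

# A known code word is caught...
print(flag_transcript("Heads up, I went Hollywood a minute ago."))
# -> ['i went hollywood']

# ...but a modified signal slips through undetected.
print(flag_transcript("Heads up, we're filming a movie here."))
# -> []
```

However a real system classifies language, the underlying problem is the same: it must be told in advance which words matter, and officers control which words they use.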

Who will determine exactly what words constitute professional and unprofessional officer language? How would “I went Hollywood” be classified?

Another concern is that when language-based AI detects incidents for supervisor review, they will likely be kept out of public view. In Canada, a Globe and Mail investigation found that the process of accessing information held by the government “gives institutions incentives to keep records hidden,” including police complaints.

Police disciplinary records are confidential in many US states, but even when records are made publicly available there are loopholes. Consider the city of Minneapolis, where former police officer Derek Chauvin murdered George Floyd in May 2020. Bad behaviour by officers is largely hidden from the public in Minneapolis due to a loophole that conceals misconduct by calling it “coaching” instead of “discipline.” Offending officers are “coached” after the misconduct to improve their performance, rendering any records about the allegations non-public.

Chauvin was the subject of at least 22 complaints or internal investigations during his nearly two decades with the Minneapolis Police Department. Supervisors were obviously aware of Chauvin’s use-of-force incidents, which included documentation showing he restrained numerous victims by their necks as far back as 2015. Chauvin was presumably “coached” for his misdeeds and remained on the job.

Chauvin was wearing a body camera when he murdered Floyd; the camera clearly failed to prevent a pattern of cruel brutality. We might reasonably surmise that had AI analysis been in use before Chauvin’s fatal restraint of Floyd, it would have done little to deter further transgressions.

Artificial intelligence tools cannot fix the systemic problems of policing, but better policies and legislation might. Public access to police disciplinary records must exist in all jurisdictions, with language that closes the loopholes used to conceal police misconduct from public view.

Christopher J. Schneider is Professor of Sociology at Brandon University and author of Policing and Social Media: Social Control in an Era of New Media (Lexington Books, 2016).
