By: Chaslyn Facciponti

Published on: May 4, 2023

Those who have seen the movie “I, Robot,” where the robot takes on human attributes and goes rogue, probably left that movie thinking: Wow, that was creepy; at least it’s just a movie.  But what if it wasn’t “just a movie” anymore?  What if modern technology becoming a routine part of policing in our society were an ever-looming reality?  And further, what does this mean for our Fourth Amendment rights when it is no longer humans but technology violating constitutional rights, especially for marginalized communities that already bear the brunt of over-policing?

            In Katz v. United States, Justice Harlan stressed that a violation of the Fourth Amendment occurs when government officers violate a person’s “reasonable expectation of privacy,” and that this protection extends to an individual’s property, including protection from physical intrusion via electronic means.[1]  If the touchstone of the Fourth Amendment is “reasonableness” (which means requiring a warrant before government officials search an individual or their property), then in this new age of modern technology, do we consider the use of big tech in policing to be “reasonable”?[2]  The resounding cry from communities all across the United States is, no!

            While the “RoboCop-gone-rogue” argument may seem a little far-fetched now, it’s not that far off, and new technology is emerging every day that is helping officers “do more with less.”[3]  Danielle Abril, a Washington Post technology reporter, discusses these issues at length, detailing the story of officers in California who decided to use a drone to investigate a potential hostage situation.[4]  The police were unsure whether the man inside was armed with a knife; the drone showed them through the third-floor window that he was not armed, and because there was no risk to life, they were able to successfully wait him out.[5]  While this seems like a positive outcome where the technology was used for the greater good, what about the day when it’s not?  What if the window, or say, the door to your house, is ajar, and the officers decide to fly the drone inside, crossing the threshold of the home; is this an unreasonable “search”?  Who will safeguard our communities from this?[6]  Who will make the decisions about controlling tech in policing?  If left to police chiefs to decide, one can imagine they will always want to use technology that makes their job easier, even if it violates an individual’s Fourth Amendment rights.

            The alarm bells are going off across the country, as it becomes easier and cheaper to violate our constitutional rights through the use of technology.  Even more alarming is that this technology can be (consciously or unconsciously) coded in a way that is biased against Black and Brown communities on a massive scale.[7]  The ACLU is currently bringing attention to the fact that facial recognition technology is biased, from how it is coded to how it is used.[8]  Research published through Harvard shows that facial recognition algorithms, when tested on Black women, had an error rate of up to 34%.[9]  How are they coded with a racial bias, you might ask?  Clare Garvie, author of A Forensic Without the Science, explains that because our faces contain “inherently biasing information such as demographics, expressions, and assumed behavioral traits, it may be impossible to remove the risk of bias and mistake,” and that when coders don’t use enough images to accurately capture the range of facial features across an entire demographic, the algorithm begins to assume everyone looks alike.[10]

            While these topics barely scratch the surface, communities must continue to protest the use of these technologies by law enforcement and call on Congress to put a stop to this.  We must require big tech companies to shut down these programs until they can ensure they won’t be used by law enforcement, and if they are to be used by law enforcement to track and collect data on individuals, a warrant must be required to protect our Fourth Amendment rights.

[1] See Katz v. U.S., 389 U.S. 347, 360-61 (1967) (Harlan, J., concurring) (stressing that an electronic invasion of privacy into a physical space that is meant to be private may constitute a violation of the Fourth Amendment).

[2] See Riley v. Cal., 573 U.S. 373, 393 (2014) (holding that a search of all data stored on a cell phone at a traffic stop is not the same as a brief physical search pertaining to safety concerns and thus, requires a warrant).

[3] Danielle Abril, Drones, Robots, License Plate Readers: Police Grapple with Community Concerns as They Turn to Tech for Their Jobs, Wash. Post (Mar. 9, 2022, 7:00 AM),

[4] Id.

[5] Id.

[6] See id. (highlighting that police are aware that the balance between using tech and violating expectations of privacy is a very fine line to walk).  

[7] Ashley Del Villar & Myaisha Hayes, How Face Recognition Fuels Racist Systems of Policing and Immigration – And Why Congress Must Act Now, ACLU (July 22, 2021); see also Facial Recognition Tool Led to Mistaken Arrest, Lawyer Says, AP News (Jan. 2, 2023) (discussing the story of a Black man from Georgia who was misidentified and arrested on a fugitive warrant for theft in Louisiana, a state he had never set foot in).

[8] Del Villar & Hayes, supra note 7.

[9] See Alex Najibi, Racial Discrimination in Face Recognition Technology, Sci. in the News Blog, Special Edition: Science Policy and Social Justice (Oct. 24, 2020),

[10] Clare Garvie, A Forensic Without the Science: Face Recognition in U.S. Criminal Investigations, Geo. L. Ctr. on Privacy & Tech. (Dec. 6, 2022),
