Professor Andrew Guthrie Ferguson of the University of the District of Columbia has posted Big Data and Predictive Reasonable Suspicion, forthcoming in the University of Pennsylvania Law Review.
Here is the abstract:
The Fourth Amendment [to the U.S. Constitution] requires “reasonable suspicion” to seize a suspect. As a general matter, the suspicion derives from information a police officer observes or knows. It is individualized to a particular person at a particular place. Most reasonable suspicion cases involve police confronting unknown suspects engaged in observable suspicious activities. Essentially, the reasonable suspicion doctrine is based on “small data” – discrete facts involving limited information and little knowledge about the suspect.
But what if this small data is replaced by “big data”? What if police can “know” about the suspect through new networked information sources? Or, what if predictive analytics can forecast who will be the likely troublemakers in a community? The rise of big data technology offers a challenge to the traditional paradigm of Fourth Amendment law. Now, with little effort, most unknown suspects can be “known,” as a web of information can identify and provide extensive personal data about a suspect independent of the officer’s observations. New data sources, including law enforcement databases, third-party information sources (phone records, rental records, GPS data, video surveillance data, etc.), and predictive analytics, combined with biometric or facial recognition software, mean that information about a suspect can be assembled in a few data searches. At some point, the data (independent of the observation) may become sufficiently individualized and predictive to justify the seizure of a suspect. The question this article poses is whether a Fourth Amendment stop can be predicated on the aggregation of specific, individualized, but otherwise non-criminal factors.
This article traces the consequences of the shift from a “small data” reasonable suspicion doctrine, focused on the specific, observable actions of unknown suspects, to the “big data” reality of an interconnected, information-rich world of known suspects. With more targeted information, police officers on the street will have a stronger predictive sense of the likelihood that they are observing criminal activity. This evolution, however, only hints at the promise of big data policing. The next phase will use existing predictive analytics to target suspects without any actual observation of criminal activity, relying merely on the accumulation of various data points. Unknown suspects will become known, not because of who they are but because of the data they leave behind. Using pattern-matching techniques across networked databases, individuals will be singled out of the vast flow of informational data. This new reality subverts reasonable suspicion, turning it from a source of protection against unreasonable stops into a means of justifying those same stops.
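To make the aggregation question concrete, here is a minimal, purely hypothetical Python sketch. It is not drawn from Ferguson’s article: every data point, weight, and threshold below is invented for illustration. It shows the mechanism the abstract describes, where a sum of individually non-criminal factors pulled from databases could cross a predictive “suspicion” threshold with no officer observing any criminal activity.

```python
# Hypothetical sketch (not from the article): aggregating individually
# non-criminal data points into a single predictive score. All factor
# names, weights, and the threshold are invented assumptions.

# Each factor is individually innocuous and non-criminal.
DATA_POINTS = {
    "prior_field_stops": 0.15,           # law enforcement database record
    "phone_pinged_near_scene": 0.25,     # third-party telecom data
    "rental_car_flagged_route": 0.20,    # rental / GPS records
    "face_match_to_watchlist": 0.30,     # biometric / facial recognition hit
    "predicted_hotspot_presence": 0.10,  # predictive-analytics output
}

# Invented cutoff standing in for a quantified "reasonable suspicion".
SUSPICION_THRESHOLD = 0.5

def suspicion_score(observed: set[str]) -> float:
    """Sum the weights of whichever data points the databases return."""
    return sum(w for name, w in DATA_POINTS.items() if name in observed)

def stop_justified(observed: set[str]) -> bool:
    """The article's question: does aggregation alone justify a stop?"""
    return suspicion_score(observed) >= SUSPICION_THRESHOLD

# No one observed any criminal activity; the score comes entirely from
# aggregated records the suspect left behind.
hits = {"phone_pinged_near_scene", "face_match_to_watchlist"}
print(suspicion_score(hits))  # 0.55
print(stop_justified(hits))   # True
```

The point of the toy model is the doctrinal one the abstract raises: each input is lawful and unremarkable on its own, yet the aggregate crosses the threshold, so the “suspicion” is generated by data rather than by observation.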
Filed under: Applications, Articles and papers, Policy debates, Technology developments Tagged: Andrew G. Ferguson, Big data and law, Criminal justice information systems, Criminal law information systems, Legal predictive analytics, Prediction in criminal justice, Prediction in criminal law, Prediction in criminal procedure, Prediction in law, Prediction in law enforcement, Prediction in legal informatics, Prediction in legal information systems, Prediction in legal theory, Prediction in policing, Predictive analytics in law, Predictive policing, Probability in criminal law, Probability in criminal procedure, Probability in legal information systems, Probability in legal theory, Probable cause, Quantifying criminal law concepts, Quantifying criminal procedure concepts, Quantifying reasonable suspicion, Quantitative legal prediction, Reasonable suspicion, University of Pennsylvania Law Review
via Legal Informatics Blog http://ift.tt/1fGYl6D