February 14, 2017 (Fault Lines) – Millennials get a bad rap. Baby boomers and Gen X like to stereotype them as lazy, self-obsessed and needy, even though some data show that the three generations are pretty much the same in their attitudes to work.
Talk about unfair! But even if you’re not one to believe in studies, there are Millennials who definitely break the mold. Two Duke students have figured out a way to fix traffic stops.
Vaibhav Tadepalli, a senior, and Chris Reyes, a graduate student, say their invention, called Sentinel, represents a “better way” for law enforcement officials to do their job.
They say their invention, a robot that handles traffic stops, can make interactions safer for law enforcement officials and the people they come in contact with.
A robot cop! How does it work?
When Sentinel is complete, drivers will be able to interact with an officer through a video screen that is attached to a lift that rises as high as 7 feet in the air.
“Instead of me getting out of the car, the robot drives itself. It deploys from the car and drives itself over to his window,” Reyes said.
And how do Tadepalli and Reyes think traffic stops will work when their invention hits the mean streets?
Tadepalli said the interaction is similar to Skype.
“I’m looking at the screen, and I see Chris in the car behind me,” he said. “He asks for my license and registration. I hold it up to the screen and it scans that.”
The information goes directly into the officer’s laptop. Sentinel also records video.
“It can record what’s going on. It can start populating data, from what type of make and model the car is to its license plate,” Reyes said.
The two have high hopes for Sentinel.
“We really hope that we won’t read headlines that someone has been fatally shot at a traffic stop through no fault of their own, whether it’s an officer, whether it’s a motorist. That’s really our goal,” Tadepalli said.
Outstanding. What could possibly go wrong?
The first and most basic problem is one Justice Stevens observed back in ’97 when the Supreme Court decided Maryland v. Wilson: there aren’t any data on risks to police during traffic stops. And things haven’t changed in the past twenty years, despite an effort by notorious expert witness-for-hire Bill Lewinski to invent a new set of officer best practices in 2011.
In one of his patented “studies,” the kind of thing in which he manages to misattribute the majority decision in Wilson to Justice Scalia, Lewinski arrives at two conclusions about stops by simulating an encounter where an armed driver opens fire as a cop walks up to his window. First, approaching on the passenger’s side is safer than the driver’s. Second, retreating first, then drawing a gun is safer than vice versa.
Hardly enlightening stuff, given that we don’t know how often Lewinski’s scenario happens relative to other sources of risk during a stop. (That includes things like detainees opening fire earlier or later in the encounter, initiating a chase or cops getting hit by traffic.) Worse, Lewinski’s scenario fails to address one of the core concerns of Wilson, the danger posed by passengers.
The lack of data means that while it’s possible a traffic-stop robot would be an improvement over the way police currently handle stops, it could just as easily make things worse. Tadepalli and Reyes are almost certainly flying blind. Their “rigorous” self-reported research methodology doesn’t inspire much hope that they know something we don’t.
That methodology: the inventors repeatedly consult with law enforcement officials, and they even went on a ride-along with a police officer.
The robot may in fact be wholly unnecessary, since cops already almost never die during traffic stops. According to the FBI, ten or fewer officers were killed as the result of a stop in any given year since 2011. Relative to the 26.4 million Americans who, in the 2011 Police-Public Contact Survey, said their most recent encounter with a cop took the form of a traffic stop, that’s a vanishingly small percentage.
Because nationwide efforts to collect data on people killed by police are a very recent phenomenon, it’s still tough to say how many non-cops die annually during traffic stops. However, of the 85 people the Guardian claims were killed by cops in December 2016, only six died as the result of a traffic stop, implying fewer than a hundred a year. (What’s more, purely anecdotally, it’s tough to see what a robot could’ve done to save the life of someone like Nick Hamilton.)
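The arithmetic behind the two paragraphs above can be sketched quickly. This is a back-of-envelope calculation, taking the FBI’s “ten or fewer” figure and the Guardian’s December 2016 count at face value and treating them as annual rates:

```python
# Officer risk: FBI upper bound of 10 traffic-stop deaths per year,
# against the 26.4 million stops reported in the 2011
# Police-Public Contact Survey.
officer_deaths_per_year = 10
traffic_stops_per_year = 26_400_000

officer_risk = officer_deaths_per_year / traffic_stops_per_year
print(f"Officer fatality rate per stop: {officer_risk:.8f}")
# roughly 0.00000038, i.e. about 1 in 2.6 million stops

# Civilian side: 6 of the Guardian's 85 December 2016 deaths came
# from traffic stops; annualizing that one month gives the
# "fewer than a hundred a year" figure.
annualized_civilian_deaths = 6 * 12
print(f"Implied civilian traffic-stop deaths per year: {annualized_civilian_deaths}")
```

Crude as the extrapolation is, it puts both numbers in the same frame: the risks Sentinel targets are measured in dozens of cases against tens of millions of stops.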
In other words, it’s conceivable – if totally speculative – that the Duke kids’ robot could be a better way to handle a subset of some dangerous traffic stops, themselves a minuscule subset of what’s already a very safe practice. Is that a gamble worth spending, according to Tadepalli and Reyes, $1,500 a unit on? That’s money that could otherwise go toward properly equipping cops, hiring more officers or even implementing sensible reforms, like teaching the police how to interact with the mentally ill. Hell, even margarita machines and $300-a-pair EpiPens have some guaranteed use.
And then there are the guaranteed downsides. Advanced as the concept of an extensible iPad with wheels may be, the robot has a few shortcomings compared to humans. For example, it doesn’t have a nose, meaning any cop who used it would be unable to smell marijuana after a stop for a broken brake light.
Think of all the dealers who’d get away with their crimes. Think of the K9s who wouldn’t get their exercise, the forfeitable assets left on the table, all the pretextual stops that’d come to naught. And because the robot records video, even the old “I saw the drugs lying out in plain view” chestnut would be a less reliable revenue generator. With financial incentives like these, how could the police possibly be reluctant to adopt this new tech, like they were with body cams?
Moreover, what happens when a cop using the robot thinks he sees something untoward? Is he right, or did the angle of the camera, the quality of the recording, a trick of the light, any of a hundred other factors deceive him? Is it smart to rely on a single sense instead of five? Conversely, could a bad apple use an ambiguous recording to justify a dubious search?
And what happens then? The cop will have to get out and approach the car anyway. If he’s lucky, there won’t be a car chase first. Worse, the mere act of the cop getting out of the car tells the detainee there’s a confrontation coming. At best, both parties will be keyed up and hostile, making a bad outcome that much more likely. At worst, the detainee has extra time to get out his gun.
Then there’s another, rather fundamental problem: any tool that makes cops sequester themselves in their cars and overtly treat the people they interact with as threats would undermine decades of progress. It hearkens back to the bad old days before “community policing,” back when cops only ventured out of their cruisers to ruin someone’s day. And it wouldn’t just foster distrust; it’s a boneheaded idea because it teaches cops and noncops alike to treat outlier cases as if they were representative.
Remember that by any objective measure, traffic stops are already safe. The last time we decided to spend a lot of money on preempting a vanishingly rare threat by putting people under general suspicion, we got the TSA. Is that anyone’s idea of a good way to govern?
The Duke students’ robot idea is cool. They deserve props for thinking creatively about a difficult problem. But a magic fix it ain’t. For once, the folks at PoliceOne and I are in complete agreement. Metallic cops are no replacement for the fleshy kind.