We “have to pass the bill so that you can find out what is in it,” infamously said House Speaker Nancy Pelosi (D-Calif.) about ObamaCare in 2010. In reality, having the public not know what’s in a bill until it’s too late is standard procedure. A prime example is reportedly in the recently passed $1.2 trillion, 2,700-page infrastructure bill (almost as long as 10 average books): a requirement that “kill switches” be put in all new cars.
Many Americans actually support this idea. They find appealing the prospect of being able to shut down a dangerous, fleeing criminal’s car before he can kill innocents. And this is a good. Yet a government that can save at a button’s touch can also savage at a button’s touch.
What’s more, even if you trust the state, do you trust the technology?
Motorious’ Steven Symes reported on Big Brother’s backseat driving last Wednesday:
According to an article written by former U.S. Representative Bob Barr, hidden away in the recently passed infrastructure bill … is a measure to install vehicle kill switches into every new car, truck, and SUV sold in this country. The regulation likely won’t be enforced for five years, so maybe there’s time to do something about this.
As we’ve seen both in this country and others recently, what constitutes “law-abiding” can change drastically overnight. For example, in September a car was pulled over in New Zealand and the occupants arrested when police discovered the trunk was full of Kentucky Fried Chicken meals. They were smuggling the fast food to customers in locked-down Auckland, against quarantine measures. Yet not too long before, delivering restaurant orders to people was considered a reputable, legal activity.
It gets even better: Barr points out that the bill, which has been signed into law by President Biden, states that the kill switch, which is referred to as a safety device, must “passively monitor the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired.” In other words, Big Brother will constantly be monitoring how you drive. If you do something the system has been programmed to recognize as driver impairment, your car could just shut off, which could be incredibly dangerous.
There is the possibility the kill switch program might measure your driving as impaired; then, when you try to start the car up again, the engine won’t fire up.
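For the technically inclined, here is a rough sketch of what such a “passive monitoring” algorithm might look like. To be clear, the law mandates the capability but specifies no implementation, so every signal, name, and threshold below is hypothetical illustration, not the actual system.

```python
# Hypothetical sketch of a "passive impairment monitor." Every signal,
# weight, and threshold here is invented for illustration -- the statute
# mandates the capability but prescribes no implementation.

from dataclasses import dataclass

@dataclass
class DrivingSample:
    lane_deviation: float  # meters from lane center
    steering_jerk: float   # abrupt corrections per minute
    speed_variance: float  # mph of drift from a smooth speed profile

# An arbitrary cutoff -- the core problem in one line: a drowsy-but-safe
# driver and a drunk one can produce identical numbers.
IMPAIRMENT_THRESHOLD = 0.7

def impairment_score(s: DrivingSample) -> float:
    """Collapse a few driving signals into one score between 0 and 1."""
    return (0.5 * min(s.lane_deviation / 1.5, 1.0)
            + 0.3 * min(s.steering_jerk / 10.0, 1.0)
            + 0.2 * min(s.speed_variance / 15.0, 1.0))

def monitor(s: DrivingSample, restart_attempt: bool = False) -> str:
    """Decide, with no human in the loop, whether the car may run."""
    if impairment_score(s) >= IMPAIRMENT_THRESHOLD:
        # The driver cannot see, contest, or override this decision.
        return "RESTART_BLOCKED" if restart_attempt else "ENGINE_DISABLED"
    return "OK"

# A tired driver drifting on a windy rural road trips the same wire as a
# drunk one; the algorithm cannot tell the difference:
print(monitor(DrivingSample(lane_deviation=1.2, steering_jerk=8.0,
                            speed_variance=12.0)))  # ENGINE_DISABLED
```

The particular numbers are beside the point; what matters is that some cutoff must exist, and the driver has no way to see it, challenge it, or override it.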
Moreover, Barr warned that at issue is “an ‘open’ system, or at least one with a backdoor, meaning authorized (or unauthorized) third-parties can remotely access the system’s data at any time.”
So aside from possible hackers, police or other government agencies could presumably access your car’s system and kill your engine. This raises grave privacy concerns. As Symes wonders, would a warrant even be necessary for such action?
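Barr’s “backdoor” warning is just as easy to illustrate. In the hypothetical handler below (no real vehicle protocol is depicted), the alarming part is what’s absent rather than what’s present:

```python
# Hypothetical illustration of an "open" remote-kill channel. No real
# telematics protocol is depicted; the point is what this code does NOT check.

def handle_remote_command(command: dict) -> str:
    """Process a command arriving over the car's telematics link."""
    if command.get("action") == "KILL_ENGINE":
        # Missing: any cryptographic proof of the sender's identity,
        # any check that a warrant was issued, any way for the driver
        # to refuse. Police and hackers look identical to this code.
        return "engine disabled"
    return "ignored"

# Whoever can reach the interface can issue the order:
print(handle_remote_command({"action": "KILL_ENGINE", "sender": "unknown"}))
```

Nothing here distinguishes a sheriff with a warrant from a hacker with a laptop; whoever can reach the channel can give the order.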
Even if you don’t fear tyranny, this technology poses serious practical problems. “For example, what if a driver is not drunk, but sleepy, and the car forces itself to the side of the road before the driver can find a safe place to pull over and rest?” asks Barr. “Considering that there are no realistic mechanisms to immediately challenge or stop the car from being disabled, drivers will be forced into dangerous situations without their consent or control.”
“The choice as to whether a vehicle can or cannot be driven — for vehicles built after 2026 — will rest in the hands of an algorithm over which the car’s owner or driver have neither knowledge nor control,” Barr adds.
So even under a best-case scenario, technology’s inherent fallibility should be considered. Just ponder the accidents involving self-driving cars. Then, just yesterday, I used a supermarket self-checkout aisle, as I often do when they’re free, and the machine had a “brain cramp” (not an unusual experience). “You see, this is why automation can’t completely replace humans,” I quipped to the young female employee attempting to remedy the problem. “You never malfunction, do you?”
At least, though, I was only stranded at the register for an extra 10 minutes — not by the side of a busy highway with cars whizzing by.
So while kill-switch technology would occasionally save a life by stopping the odd fleeing criminal, consider some other situations.
Scenario one: You’re traveling deep in the country on a frigid night and, either because you’re tired or due to a technology malfunction, the “central computer” kills your engine. Unable to restart your vehicle, you’re now stuck in sub-zero temperatures without heat. Presumably, the cops would rescue you, but how soon? And what if there’s no cellphone service in that area and the police weren’t otherwise notified? Hypothermia can kill.
Scenario two: You’re a woman fleeing from some blood-lusting maniac, a stranger or unstable boyfriend, and the system deems your driving “erratic” and halts your engine. It would presumably kill your pursuer’s motor, too. Yet not only might his kill switch be disabled, or his vehicle be an older model without one, but even at best you’re now left facing a dangerous and possibly armed man bent on hurting you.
Scenario three: Hackers disable your engine on a stretch of lonely road to facilitate a carjacking — or worse.
Of course, this is just more reason to be armed; only, the people giving us kill switches also want to kill the Second Amendment.
Again, some say the kill-switch idea would save some lives. But is this true on balance? Would those saved from fleeing criminals be outnumbered by those killed by the technology’s failures?
Even if the answer is no, the “It saves lives” argument is never dispositive. After all, putting a camera in every room of every home in America and feeding the video in real time into an artificial-intelligence computer — which would then dispatch police to a residence at any sign of trouble — would surely “save lives.” But is living longer, collared and on a short leash, living at all?