*This article originally appeared in Highways.today.
After billions of dollars in moonshot investments, autonomous driving is coming back to Earth. I called this article “Houston, We Have a Problem” because the automated vehicle industry is facing its Apollo 13 moment: Will we run out of oxygen or MacGyver a solution that gets automation back to Earth with products that are safe, reliable, and commercially viable?
To make this right, we have to look at where we went wrong.
Asked what he would do if he only had an hour to save the world, Einstein famously remarked that he would spend 55 minutes defining the problem, and only five minutes solving it.
Unfortunately, spurred on by a gold rush of investment, the automated mobility sensing industry has clearly not taken the time to properly define the problem.
The industry consensus is that automated driving is a cognitive problem. A Google search for “cognitive radar” generates over 40K hits, and nearly all companies working in the autonomous vehicle space have some form of “cognitive,” “intelligent,” or “smart” prominently featured in their branding materials.
Branding your company’s products as “intelligent” may boost funding, but framing the autonomous challenge as “cognitive” fails to properly define the problem. And if we’re trying to solve the wrong problem, we will fail to meet the need: safe, commercially viable, automated mobility that works under all driving conditions.
Maslow’s hammer: Why “cognitive” automation fails.
To call attention to the limiting effects of a cognitive bias, psychologist Abraham Maslow would reference the timeless expression, “If the tool you have is a hammer, everything starts looking like a nail.”
The over-reliance on a cognitive model for automated driving is evident in more than just marketing materials. It’s hard-wired into the mobility AI the industry is creating. (And, coincidentally, it’s also hard-wired into the way many of us think.)
Sound cognitive assessments require a systematic analysis of all relevant data. It makes sense, then, that cognitive-engineered AIs would require a complicated perception stack to integrate and process the mountain of data generated by high-resolution cameras, LIDAR, and traditional radar. They need to observe the entire forest to see a single tree.
Picture rounding a bend on a hiking trail and spotting a coiled snake at your feet. Did you instantly jump back?
That instinctual response could save your life. Taking the time “to think about it” could end it.
It’s no surprise that cognitive-engineered AI performs like the distracted drivers it is meant to replace: drowning in data, slow to respond to unexpected threats.
The response by the autonomous vehicle industry to its disturbing accident record? Let its bet on cognitive AI ride. Processing speed has to catch up with the marketing campaign sometime, right?
Actually, it already has.
“Cognitive” radar is already here. It’s just not the droid we’re looking for.
As research at the University of Pennsylvania has shown, the human retina transmits visual input at roughly 10 Mbps. About the broadband speed you’d expect when you check into a chain motel.
If you think that’s slow, consider the glacial pace of human cognition. While difficult to clock precisely, most experts agree that conscious thought crawls along at about 50 to 60 bits per second. A fraction of the processing speed of even a ten-year-old laptop.
The onboard supercomputers from Nvidia, Qualcomm and Intel driving today’s autonomous vehicles are orders of magnitude faster than that. So we have plenty of processing speed. Why aren’t we winning the race for safe, reliable, autonomous vehicles?
Let’s look at that snake example again.
Our survival brain boils things down to fight, flight, or freeze to allow for the instant responses necessary to successfully navigate unexpected, life-and-death situations. Instinctual and elegant heuristic responses like this require almost no processing power.
Heuristics = the culmination of evolution: established patterns that fire before cognition, near-instantly.
It’s like a line of BASIC: IF snake THEN run!
Safely navigating the world depends upon myriad situation-specific sub-routines like this that operate beneath our conscious awareness to guide our steps and keep us safe.
This guidance system has been beta-tested through millions of years of evolution. A much better model to emulate for autonomous vehicle guidance systems than the cognitive model embraced by the industry today.
Most automated vehicle stacks rely on a central computer to pull all the data in, make sense of it, and take action, like our cerebral cortex. Makes perfect sense, right? But if you touch a hot pan and wait for the conscious brain to do the right thing, you’re in for a lot of pain.
The body handles this through a reflex arc in the spinal cord. Instead of the signal traveling all the way to the brain and waiting for it to disengage from the task it’s focusing on, the spinal cord registers the threat and reflexively pulls the hand back. The brain then checks in and takes additional protective steps.
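The reflex-arc pattern above can be sketched in a few lines of code. This is a minimal illustration, not a real AV stack: the threshold, function names, and sensor fields are all assumptions made for the example. The point is the control flow: a cheap, constant-time check attached to the sensor acts first, and the slow central planner only runs when no reflex fires.

```python
from typing import Optional

REFLEX_THRESHOLD_CM = 50.0  # hypothetical trigger distance, chosen for illustration

def reflex_check(obstacle_distance_cm: float) -> Optional[str]:
    """The 'spinal cord' path: a constant-time check living next to the sensor."""
    if obstacle_distance_cm < REFLEX_THRESHOLD_CM:
        return "BRAKE"  # act first; the planner can check in afterwards
    return None

def central_planner(frame: dict) -> str:
    """The 'cortex' path: stands in for heavy sensor fusion and planning."""
    return "CRUISE"

def handle_frame(frame: dict) -> str:
    """Reflex wins if it fires; otherwise defer to slow deliberation."""
    action = reflex_check(frame["obstacle_distance_cm"])
    return action if action is not None else central_planner(frame)

print(handle_frame({"obstacle_distance_cm": 30.0}))   # reflex path fires
print(handle_frame({"obstacle_distance_cm": 500.0}))  # deliberative path runs
```

Note the design choice: the reflex path never waits on the planner, just as the hand pulls back before the brain weighs in.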
If the onboard supercomputers driving today’s automated vehicles can’t emulate the human reflexes needed for safe driving, how do we do it?
It’s actually in aerospace and defense that we’re finding the tech that gets this done. The high-speed, high-efficiency, low-cost edge processing solutions ADAS needs to finally pull into the driveway have been keeping military and commercial pilots alive for decades.
Edge Processing = a faster, more accurate sitrep.
Instead of relying on more and more resources, more and more code—like the conventional automated vehicle perception stack—military and aerospace engineers have been successfully working for years on a very different approach. They’ve been putting powerful algorithms into the simple ARM processors that run many aircraft and weapons systems.
“Old school” ARM processors are lower cost, smaller, lighter, and can be more robust (think of the guidance computer that flew the Apollo spacecraft). Critically, they can also be attached directly to sensors.
Decades spent mastering the unique limitations—and advantages—of these small ARM chips have spurred tremendous innovation. Through an obsessive focus on “timing & sizing,” these engineers have created hyper-efficient code that maximizes speed and reliability while minimizing resource consumption.
Compare this with conventional engineering: stacking on ever more code, depending on massive onboard computers, and pushing heavy computation off to the cloud. This brings us to where we are today in the automated vehicle world. Running to stand still.
Edge processing leverages lightning-fast proximal devices to respond reflexively and reduce processing load on the central navigation computer. Biomimetic insight: when humans are overwhelmed, we don’t have the “bandwidth” to make good decisions. Reducing the processing load for an autonomous vehicle’s central navigation computer helps it make better decisions, too.
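A toy sketch makes the load reduction concrete. Everything here is assumed for illustration (the sample values, threshold, and names are not from any real system): an edge node next to the sensor scans a raw stream and forwards only threshold crossings, so the central computer sees a handful of events instead of thousands of samples.

```python
# Simulated raw stream: 1,000 readings from a hypothetical proximity
# sensor, with two spikes buried near the end.
RAW_SAMPLES = [10.0] * 998 + [80.0, 85.0]
ALERT_THRESHOLD = 50.0  # assumed alert level for this sketch

def edge_filter(samples):
    """Runs next to the sensor: constant work per sample, tiny output."""
    return [s for s in samples if s > ALERT_THRESHOLD]

events = edge_filter(RAW_SAMPLES)
print(len(RAW_SAMPLES), "raw samples ->", len(events), "events for the central computer")
```

The central navigation computer now spends its cycles deciding what the two events mean, rather than sifting a thousand samples to find them.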
Our team is finding that military-grade edge processing + biomimetic engineering = real solutions at all levels of automation. Military tech plus a human touch may seem an odd marriage to some. But if we’re not using the right technology to solve problems for our fellow humans, we’re going about this all wrong.