Winding Road

Nearly eleven years ago, when I became CEO of Seeing Machines, we looked forward to this moment, believing autonomous vehicles were just around the corner. In fact, automakers predicted that cars without steering wheels would hit the market in 2023 or 2024, or that steering columns would disappear into the dashboard.

Now it’s 2022, and we’ve come to understand that while that day may still come, realizing the potential of automated driving will take longer than we originally anticipated.

Falling short of the goal can be a disheartening reality for those who’ve dedicated years to advancing this technology. However, it doesn’t mean all the work that’s been put into autonomy has gone to waste. We’re simply applying it differently, and I’d say it’s proven more relevant than ever.

Taking aim at distracted driving with automated tech

Distracted driving remains the number one killer of teenage drivers in countries like the U.S., Canada, the U.K., and Australia—and the technology originally intended to drive the vehicle can now be a powerful tool in assisting the driver.

In-cabin and driver monitoring will be critically important to improving both on-road and off-road safety. By understanding what’s going on outside the vehicle and what the driver is currently able to respond to, the system can act like a guardian angel, taking over when needed and keeping passengers out of trouble.

In-cabin monitoring presents some unique challenges, though.

When people think of different sensor modalities, they automatically think of what’s occurring outside the vehicle: what the vehicle sees around it. That’s the easier challenge to address. The outside world is square-edged and predefined, with signs at the side of the road, stop lines, and other cars of similar shape and size, all made of metal.

Interior monitoring is more difficult to execute because we humans are actually quite squishy. Think about the variation across ethnicity, age, gender, facial hair, sunglasses, hats, and, more recently, masks: we’re all very different.

Additionally, the system must understand what the driver is currently able to do. Can they respond in time to something the system sees outside the car? From there, it needs to broaden its understanding further to take in what everyone else in the vehicle is doing. How many people are in the car? How big are they, and where are they sitting? What are their arms and heads doing? Could their actions impact the driver’s ability to respond to a threat?

Working together, these interior and exterior systems will paint a remarkably complete picture, one that enables the car to understand what the driver can do, and what it has to do to help the driver when they can’t.
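To make that interplay concrete, here is a minimal sketch in Python of how interior and exterior signals might combine into a single intervention decision. Everything in it is illustrative: the names (DriverState, CabinState, choose_action), the signals, and the thresholds are hypothetical, not any particular system’s actual design.

```python
from dataclasses import dataclass
from enum import Enum


class Threat(Enum):
    NONE = 0        # nothing concerning outside the vehicle
    CAUTION = 1     # developing hazard; the driver should be warned
    IMMINENT = 2    # collision likely without action


@dataclass
class DriverState:
    eyes_on_road: bool       # from a driver-facing camera
    hands_on_wheel: bool     # from steering torque or cabin vision
    reaction_time_s: float   # estimated time to respond, given attention and posture


@dataclass
class CabinState:
    occupant_count: int
    driver_obstructed: bool  # e.g., a passenger reaching across the wheel


def choose_action(driver: DriverState, cabin: CabinState,
                  threat: Threat, time_to_collision_s: float) -> str:
    """Decide whether the vehicle warns the driver or intervenes itself."""
    if threat is Threat.NONE:
        return "monitor"
    driver_available = (
        driver.eyes_on_road
        and driver.hands_on_wheel
        and not cabin.driver_obstructed
        and driver.reaction_time_s < time_to_collision_s
    )
    if threat is Threat.IMMINENT and not driver_available:
        return "intervene"    # the guardian angel takes over
    return "alert_driver"     # the driver can still act; warn them
```

A production system would weigh far richer signals (gaze trajectories, occupant pose, road context), but the shape of the decision is the same: estimate what the driver can do, compare it to what the situation demands, and step in only when the gap is dangerous.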

Sensor redundancy isn’t enough—we need better software

Are we making meaningful progress toward an automated future? Absolutely.

When I launched Seeing Machines, it was out of the belief that cars would be the first robots humans would interact with in large numbers. It was a difficult journey: inventing a technology, creating awareness, marketing it, and building demand.

Today, computer vision, autonomy, and mobility are words we hear almost every day. What’s more, compute costs have come down, cameras have come of age, and radars are coming to the forefront. People now understand the frailty of technologies like LiDAR when exposed to the full range of elements and conditions.

Now, I work with startups like Visionary Machines, SightData, and Spartan that possess a range of capabilities, including AI, computer vision, and edge processing. I see these different modalities playing into the overall perception stack in exciting ways.

It’s a fascinating time to be involved in this space, as these technologies were so nascent just a decade ago. Just think about cameras. For most people, the only camera they own now is in their smartphone. Who would have thought that ten years ago?

Now, the question perception and safety engineers ask as they seek to leapfrog forward is this: is sensor redundancy the answer, or is it sensor fusion?

The way I see it, sensor fusion is absolutely essential. No single technology can do it all. Each has its weaknesses, whether range, environmental conditions, or lighting. And now that’s compounded by the demand for systems that not only look 360 degrees around the car but also monitor the driver at the same time.

By enhancing the software running on existing hardware, we can reduce computational load and latency. It’s how we’ll enable vehicles to bring together all the available information and make the split-second decisions that save lives.
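As a toy example of what bringing all that information together can look like in software, here is a sketch of confidence-weighted fusion of range estimates. Again, the function name, weights, and numbers are invented for illustration; real perception stacks typically use probabilistic filters (Kalman filters and their variants) over far richer state.

```python
def fuse_range_estimates(estimates: dict[str, tuple[float, float]]) -> float:
    """Fuse per-sensor range estimates into a single value.

    `estimates` maps a sensor name to (measured_range_m, confidence),
    where confidence is in [0, 1] and reflects how trustworthy that
    modality is under current conditions. The result is a
    confidence-weighted average, so a sensor that is weak right now
    (a camera in glare, a radar in clutter) contributes less.
    """
    total_weight = sum(conf for _, conf in estimates.values())
    if total_weight == 0:
        raise ValueError("no usable sensor data")
    return sum(rng * conf for rng, conf in estimates.values()) / total_weight


# Night driving: the camera degrades in low light; the radar does not.
fused = fuse_range_estimates({
    "camera": (41.0, 0.3),
    "radar":  (38.5, 0.9),
})
print(f"fused range: {fused:.1f} m")  # leans toward the radar's estimate
```

The point is not the arithmetic but the principle: each modality votes in proportion to how much it can be trusted at this moment, which is exactly what redundancy alone, with each sensor acting independently, cannot give you.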

Automated driving is a challenging but worthy pursuit

We may not be as close to automated driving as we thought we would be. However, we have made critical strides forward, and the technological advances made in its pursuit are already paying dividends on the safety front.

In some way, we’re all waking up every morning knowing we’re making the world a slightly safer place. How fortunate we are to work in this industry.


Ken Kroeger
Executive Advisory Board Member | Former CEO of Seeing Machines

Get the Spartan sensing edge.