Leveraging Computational Photography Concepts by using Software Defined Sensors

Auto Tech Outlook | Friday, September 16, 2022

As new cars ship with more sensors, engineers are borrowing ideas from computational photography to improve how those sensors see.

FREMONT, CA: Software-defined sensors take advantage of ideas from computational photography.

The shift toward smartphones as the primary device for taking pictures has hurt the traditional camera industry. Computational photography is what lets phones produce well-lit images in near darkness, or portraits with blurred backgrounds, and the same ideas are now reaching in-vehicle sensors. In a car, nearly everything is controlled by software, and individual sensors are no exception: software controls the components that make the sensor work and processes the signals it returns.

Most of the lidar attention so far has focused on medium- to long-range sensing to enable assisted and autonomous driving systems. For most, if not all, of the next wave of lidar sensors, companies can update the onboard software as long as the vehicle supports over-the-air (OTA) updates. However, a new sensor category called near-field lidar (NFL) is now taking shape. Like most other lidars, NFL sensors use near-infrared light with a wavelength of about 905 nm.

Light is sent out, and the time it takes to bounce back from an object determines how far away it is. NFL differs from most other lidars in that it has no beam steering, a lower power output, and lower resolution. It is designed to fit into applications that today use ultrasonic sensors or short-range radar, such as parking assist, curb detection, rear automatic emergency braking, and pre-crash detection.
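The time-of-flight principle described above reduces to simple arithmetic: a pulse travels out and back, so the one-way distance is half the round-trip time multiplied by the speed of light. The following is a minimal sketch of that calculation; the function name and the example timing value are illustrative, not taken from any real sensor SDK.

```python
# Speed of light in vacuum, in meters per second.
C = 299_792_458.0

def tof_distance_m(round_trip_s: float) -> float:
    """Convert a measured round-trip pulse time to a one-way distance.

    The pulse covers the distance twice (out and back), hence the
    division by two.
    """
    return C * round_trip_s / 2.0

# A return pulse arriving roughly 33 nanoseconds after emission
# corresponds to an object about 5 meters away.
distance = tof_distance_m(33.4e-9)
```

At near-field ranges the round-trip times are tens of nanoseconds, which is one reason the timing electronics, rather than raw emitter power, dominate the design of short-range lidar.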

Even though there is no beam-steering system to control, software can still adapt how the sensor works to current conditions. When the vehicle moves faster and airflow provides more cooling, the software could raise the emitter's power for longer-range detection, though still not as far as other, more expensive sensors. It could also borrow computational photography techniques from smartphones, such as stacking multiple frames to recover more detail than the sensor can capture in a single frame. Closed-loop feedback from the driver-assistance system's perception software can be used to sharpen the edges of detected objects.
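Frame stacking, the computational-photography trick mentioned above, works because averaging several noisy captures of the same scene cancels out random noise while the underlying signal stays put. The sketch below illustrates the idea on a tiny one-dimensional "frame"; the function, frame size, and noise level are all illustrative assumptions, not details of any actual NFL sensor.

```python
import random

def stack_frames(frames: list[list[float]]) -> list[float]:
    """Average a list of equal-length frames element-wise.

    Random noise shrinks roughly with the square root of the number of
    frames averaged, so the stacked frame is cleaner than any single one.
    """
    n = len(frames)
    return [sum(samples) / n for samples in zip(*frames)]

random.seed(0)
true_signal = [0.2, 0.5, 0.9, 0.4]  # the scene the sensor is observing

# Each captured frame is the true signal plus Gaussian read noise.
frames = [
    [v + random.gauss(0.0, 0.1) for v in true_signal]
    for _ in range(64)
]
stacked = stack_frames(frames)

# Worst-case error of one raw frame versus the stacked result.
err_single = max(abs(a - b) for a, b in zip(frames[0], true_signal))
err_stacked = max(abs(a - b) for a, b in zip(stacked, true_signal))
```

With 64 frames the noise standard deviation drops by a factor of about eight, which is why the stacked frame tracks the true signal far more closely than any individual capture.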

Copyright © 2022 www.autotechoutlook.com All Rights Reserved