Image Sensors Are Driving Automotive Innovation
Camera technology has improved significantly, driven first by digital cameras and then by smartphones. Automotive applications are especially challenging, but the days of a single backup camera are numbered: advanced driver-assistance systems (ADAS) and self-driving cars are significantly increasing the number of cameras spread around a single vehicle. One consumer system designed for trailers incorporates 19 cameras and uses advanced processing techniques to hide the trailer from view, exposing the surrounding traffic.
I talked with OmniVision’s Andy Hanvey, director of automotive marketing, about the automotive imaging challenges and opportunities facing designers today.
Andy Hanvey, Director of Automotive Marketing, OmniVision Technologies Inc.
Now that rearview cameras are mandatory, where else do you see cameras being deployed in vehicles today, and what new applications are being explored for future models?
Cameras are being deployed throughout the vehicle to increase driver safety and comfort, including in surround-view systems, ADAS applications, and mirror replacement.
As you mentioned, rearview cameras are already mandated, but more mandates are on the way. The next ones relate to ADAS and driver-monitoring-system (DMS) applications: a front-viewing ADAS camera will become mandatory for all vehicles in North America and the EU.
In addition, the EU is mandating driver-monitoring cameras starting in 2022, as driver distraction is one of the critical factors in accidents. Similar safety measures are underway in China, which recently announced a mandatory national standard requiring all cars to have either dashboard cameras or an event data recorder (EDR) for safety and insurance purposes.
What are some of the toughest technical challenges facing the designers of these new automotive imaging applications, for both machine vision and human viewing?
Some of the challenges in machine vision and human viewing are similar. For example, both applications require technologies that deliver high dynamic range (HDR), LED flicker mitigation (LFM), and excellent low-light performance, even as pixel sizes shrink. In addition, support for ASIL functional safety is important in both application areas.
There are also some differences. In autonomous-driving applications, for example, it’s essential that the image-sensor data be protected from hackers, so cryptography/authentication capabilities are critical.
OmniVision is able to address these system-level problems. Our image sensors and image signal processors (ISPs) feature LFM and HDR techniques optimized for autonomous applications (Fig. 1). We also provide a platform of sensors that enables faster time-to-market for autonomous solutions, which we believe is key to solving this problem.
1. Carmakers are beginning to require the combination of LFM and HDR.
For viewing applications, the ability to support the widest range of architectures is important. These architectures range from an image sensor working in conjunction with the ISP inside the camera (as two chips or one SoC) to multicamera architectures in which the ISP sits in the ECU. OmniVision’s comprehensive image-sensor and ASIC portfolio enables us to support the widest range of architectures.
Thermal performance is also critical, and it relates to the dark current of the image sensor, which we drive to minimize in all our products. This is especially important for machine-vision products that must work in low light and at high temperatures.
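To see why dark current matters so much at temperature, here’s a minimal Python sketch using a common rule of thumb (dark current roughly doubles every 6 to 8°C); the numbers are illustrative, not OmniVision specifications:

```python
# Hypothetical example: dark current roughly doubles every ~7 degC,
# a common rule of thumb for CMOS image sensors.
def dark_current_e_per_s(temp_c, i_ref=5.0, t_ref_c=25.0, doubling_c=7.0):
    """Dark current (e-/s per pixel) at temp_c, from a reference value
    i_ref measured at t_ref_c. All values here are illustrative."""
    return i_ref * 2 ** ((temp_c - t_ref_c) / doubling_c)

for t in (25, 65, 105, 125):
    rate = dark_current_e_per_s(t)
    # Accumulated dark signal over a 33-ms (30-fps) exposure:
    print(f"{t:>3} degC: {rate:10.1f} e-/s  ->  {rate * 0.033:9.1f} e-/frame")
```

At 125°C, the accumulated dark signal per frame runs into the thousands of electrons, which would swamp the few-electron signals of a genuine low-light scene; hence the emphasis on minimizing dark current.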
What are the different approaches for LFM, and how does OmniVision help automotive designers solve this problem?
The problem is complex, and a multistage approach is essential. There are different techniques to achieve this, including LOFIC, chopping, and split-pixel designs. OmniVision has tried and tested different routes, and the company is currently seeing success with the industry’s smallest split-pixel design. First, at the pixel level, you need to capture the LED pulse (within an exposure time of 11 ms). OmniVision uses the small photodiode of the split pixel to capture the LED, which works because the small photodiode is less sensitive. Other techniques require additional storage capacitors in the pixel, but those methods tend to have thermal challenges.
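The 11-ms figure follows directly from LED drive frequencies. As a rough sketch (assuming, as is common in the industry, that automotive and traffic LEDs are PWM-dimmed at 90 Hz or above), the exposure must span a full PWM period to guarantee it overlaps an “on” pulse regardless of phase:

```python
# Sketch of the flicker condition: to capture a PWM-driven LED regardless
# of phase, the exposure must cover at least one full PWM period.
def min_exposure_ms(pwm_hz: float) -> float:
    return 1000.0 / pwm_hz

print(min_exposure_ms(90))    # ~11.1 ms -> the 11-ms figure quoted above
print(min_exposure_ms(250))   # faster PWM relaxes the requirement to 4 ms
```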
However, capturing the LED is only one part of the problem. The other part is combining the data in an intelligent way to maximize both HDR and LFM. In the past, designers were forced to choose either LFM or HDR; now, with our HDR and LFM engine (HALE) combination algorithm, you can achieve both simultaneously. HALE also provides up to 25% lower power consumption than other methods while ensuring smooth operation over the automotive temperature range up to 125°C.
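To illustrate the general idea (this is a generic split-pixel merge, not OmniVision’s HALE algorithm, and all constants are hypothetical): the large photodiode has better SNR but clips in highlights, while the less-sensitive small photodiode keeps LED pulses and bright regions unclipped, so the merge shifts its weighting from one to the other near saturation:

```python
import numpy as np

# Minimal sketch of a split-pixel HDR merge (not OmniVision's HALE):
# the large photodiode (LPD) is sensitive but clips in highlights; the small
# photodiode (SPD) is ~16x less sensitive, so it keeps LED pulses and bright
# regions unclipped. Blending the two extends dynamic range while keeping LFM.
FULL_WELL = 4095          # hypothetical 12-bit readout ceiling
SENS_RATIO = 16.0         # hypothetical LPD/SPD sensitivity ratio

def merge_split_pixel(lpd, spd, knee=0.9):
    lpd = lpd.astype(np.float64)
    spd = spd.astype(np.float64) * SENS_RATIO   # rescale SPD to LPD units
    # Weight favors the LPD (better SNR) until it nears saturation, then
    # hands off smoothly to the SPD over a narrow transition band.
    w = np.clip((knee * FULL_WELL - lpd) / (0.1 * FULL_WELL), 0.0, 1.0)
    return w * lpd + (1.0 - w) * spd

lpd = np.array([100, 3900, 4095])   # LPD clips on the LED/highlight pixels
spd = np.array([6, 244, 3000])      # SPD still has headroom
print(merge_split_pixel(lpd, spd))  # highlights recovered from the SPD
```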
Why is it important to be able to capture images without flicker over the highest possible dynamic range, and how has OmniVision been able to achieve this while operating at automotive temperatures?
This relates to the brightness of LEDs in real-life situations, ranging from LED signs and headlights to turn signals on the vehicle. Having coverage over the widest range gives automotive designers more system flexibility. Operating over the full automotive temperature range is equally critical, because cameras are becoming smaller and being placed in more confined spaces, where they run hotter.
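To put that brightness span in numbers, here’s a rough illustration (the scene luminances are hypothetical, not OmniVision specs): a dark roadside object at night versus a direct view of LED headlights can differ by seven decades, and sensor dynamic range is quoted on a 20·log10 scale:

```python
import math

# Hypothetical scene luminances in cd/m^2 (illustrative only).
l_min, l_max = 0.01, 100_000.0          # dark object vs. direct LED headlight
dr_db = 20 * math.log10(l_max / l_min)  # image-sensor DR convention
print(f"{l_max / l_min:.0e}:1 scene -> ~{dr_db:.0f} dB of dynamic range")
# -> 1e+07:1, about 140 dB, and flicker mitigation must hold across all of it.
```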
In addition, the cameras must deliver excellent image quality in demanding lighting conditions, whether in sunny Arizona, winter in Oslo, or a rainy day in London. If the technology has thermal dependencies, system designers will have more compromises to make. The split-pixel technology in OmniVision’s LFM image sensors performs over the automotive temperature range of up to 125°C.
What else will designers need as they move beyond mirror replacement and camera-monitoring systems to integrate LFM into front-view ADAS and other machine-vision systems for autonomous driving?
For autonomous-driving (AD) systems, the requirements include higher resolution; higher dynamic range with simultaneous LFM; multiple color filter arrays (CFAs); and cybersecurity. These are all features that our customers have been requesting. A higher-resolution image sensor matters in an AD system because resolution increases the distance at which an object can be identified, letting the system make decisions sooner.
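A quick back-of-the-envelope example shows why (the field of view, object size, and pixels-on-target threshold below are hypothetical, not tied to any specific sensor):

```python
import math

# Sketch of why resolution buys identification distance (hypothetical values).
# With a fixed horizontal field of view, each pixel subtends FOV/width radians;
# an object is "identifiable" once it covers some minimum number of pixels.
def id_distance_m(h_pixels, fov_deg=60.0, obj_m=0.5, min_pixels=20):
    ifov_rad = math.radians(fov_deg) / h_pixels     # angle per pixel
    return obj_m / (min_pixels * ifov_rad)          # small-angle approximation

print(f"2MP-class (1920 px wide): {id_distance_m(1920):5.0f} m")
print(f"8MP-class (3840 px wide): {id_distance_m(3840):5.0f} m")
# Doubling horizontal resolution roughly doubles the identification distance.
```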
LFM technology is an area in which OmniVision has significant experience, so taking this technology into a higher-resolution sensor is a natural progression, and it addresses the requirements from customers in this application space. We hope to share more details with you and your readers later this year about our first image sensors with this added functionality.
What are the design challenges for driver state monitoring, as well as in-cabin passenger monitoring, and how does OmniVision address them?
For DMS, it comes down to four things: global shutter (GS), high near-infrared (NIR) quantum efficiency (QE) at the 940-nm wavelength, small size, and low power. At OmniVision, we have a full range of GS image sensors from VGA to 2 MP that can meet the needs of driver monitoring. Another trend is that GS image sensors are being integrated as part of an ADAS system, so having ASIL functions is very important; our 2-MP GS platform is the first to add ASIL features. Adding our Nyxel technology, with the industry’s best QE, to this product range then becomes a game changer for this application area.
2. Different imaging technologies are needed for driver monitoring and in-cabin monitoring systems.
In-cabin applications differ from DMS in that they don’t need a global-shutter image sensor; here, a rolling shutter is more than adequate. For cabin monitoring, the key is being able to provide an RGB image for viewing, an enhanced image at night, and an image for machine-vision processing (Fig. 2). Meeting these requirements takes RGB-IR technology, which applies a different CFA over the pixels to capture both RGB and IR.
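Conceptually, the processing looks something like the sketch below (CFA layouts and leakage coefficients vary by vendor; these numbers are hypothetical): the sensor delivers both color and IR planes, and because the R, G, and B filters also pass some NIR, a fraction of the IR plane is subtracted from each color channel:

```python
import numpy as np

# Minimal sketch of RGB-IR processing: the CFA replaces some green sites with
# IR sites, so one sensor yields both a color image and an IR image. The
# per-channel NIR leakage factors below are hypothetical.
IR_LEAK = {"R": 0.9, "G": 0.7, "B": 0.8}

def separate_rgb_ir(r, g, b, ir):
    """Inputs are full-resolution planes (already demosaicked). Returns an
    IR-corrected color image plus the standalone IR image for machine vision."""
    rgb = np.stack([
        np.clip(r - IR_LEAK["R"] * ir, 0, None),
        np.clip(g - IR_LEAK["G"] * ir, 0, None),
        np.clip(b - IR_LEAK["B"] * ir, 0, None),
    ], axis=-1)
    return rgb, ir

h, w = 4, 4
planes = [np.full((h, w), v) for v in (800.0, 900.0, 700.0, 300.0)]
rgb, ir = separate_rgb_ir(*planes)
print(rgb[0, 0], ir[0, 0])   # IR-corrected color pixel and raw IR value
```

The corrected RGB image serves the viewing path, while the untouched IR plane feeds night-time enhancement and machine vision.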
How are you seeing your Nyxel NIR technology being used in automotive applications? And what does it mean for designers?
Our Nyxel technology is very exciting and can be a game changer for some automotive applications. For example, adding Nyxel to global shutter sensors can bring big benefits to the DMS application space (Fig. 3). It can increase the QE from about 12% to 40% at 940 nm, which is a massive improvement. This gives the system designer much more flexibility and can reduce the power consumption of the LEDs in the system.
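The LED power saving follows directly from those QE numbers. To first order (ignoring noise floors and optics, which this sketch does), the NIR illumination needed for a given image signal scales inversely with QE:

```python
# First-order sketch using the QE figures quoted above (12% -> 40% at 940 nm).
qe_before, qe_after = 0.12, 0.40
power_ratio = qe_before / qe_after      # LED power for the same signal level
print(f"LED power drops to ~{power_ratio:.0%} of the original "
      f"(~{1 / power_ratio:.1f}x reduction).")
# -> ~30%, i.e. roughly a 3.3x cut in emitter power for the same image signal.
```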
3. Nyxel technology can significantly improve the capture of driver-monitoring images taken in NIR light outside the visible spectrum.