1) Growth of ADAS Systems: Advanced driver-assistance systems (ADAS) are the electronic systems, corresponding to L2 and lower levels of autonomous driving, designed to increase the safety of a vehicle during operation. It’s also common today for vehicles to be equipped with driver-assistance or partial-automation features, such as lane-departure warning, forward-collision warning, adaptive cruise control, and lane-change assistance. Entry-level and advanced L2 systems are expected to be present in 59% of all new vehicles by 2025⁵, and the market for ADAS systems will reach $74.9 billion by 2030⁶.

2) Rise of partial hands-free operation: L3 is the first level of autonomous driving that operates hands-free under specific conditions. Over the next few years, OEMs are planning to release vehicles with L3 systems that operate over predefined highways or locations. Using a technique known as “geofencing”⁷, the vehicle stays within defined spatial boundaries and references detailed maps of the surrounding terrain; it projects sensor data onto the maps to determine the safest route (a minimal geofence check is sketched after this list). As with L2 systems, drivers must be ready to take control of the vehicle at all times. Examples of partial hands-free operation are GM’s Super Cruise or Ford’s BlueCruise. L3 systems that feature this level of automation will begin to outpace L2 systems in the latter half of the decade.

3) More sensors and increased resolution: When sensors first appeared on the automotive scene, they were expensive – but new technology becomes more affordable over time. Today’s vehicles already use intelligent sensors for a variety of functions – monitoring oil pressure, temperature, emission and coolant levels, to name a few – and as the cost of sensors decreases, manufacturers will continue to incorporate more of them to enable new features such as full 360-degree visibility. Subsequent generations of sensors will also provide increased resolution: whereas most camera sensors today offer 1-2 megapixels, future camera sensors will provide close to 8 megapixels, yielding higher-quality images and video that improve the ability to identify objects in the environment. Operating camera, RADAR and LiDAR sensors at the same time ensures reliability through redundancy. For example, if a bright light renders a camera sensor inoperative, data from another sensor such as RADAR will ensure continuous operation. A data aggregation system collects data from all the sensors and combines it to generate a comprehensive “picture” of the vehicle’s surroundings. This operation, referred to as sensor fusion, is performed by a high-performance SoC or FPGA that processes the data and hands it off to the ADAS/AD system for interpretation and real-time decision making (a simple fusion step is sketched after this list).

4) More sophisticated AI-driven systems: As the number of autonomous vehicles operating on roads increases, so does the amount of data at our disposal. Data from the vehicles’ sensors can be uploaded to the cloud, where it can be analyzed and used through machine learning to train new, more sophisticated AI inference models. These models are then sent back to the vehicle via over-the-air (OTA) updates, creating a continuous loop of refinement (a toy version of this loop follows the list). Over time, a vehicle’s responses to real-world driving situations and “corner cases” – problems that occur outside normal operating parameters – will continue to improve.
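To make the geofencing idea from point 2 concrete, here is a minimal sketch of the kind of boundary check an L3 system might run before offering hands-free operation. The ray-casting point-in-polygon test is a standard technique; the coordinates, function names, and the simple rectangular zone are illustrative assumptions, not any OEM's actual implementation, which would rely on high-definition maps and precise localization.

```python
# Minimal geofence check: ray-casting point-in-polygon test.
# All names and coordinates below are illustrative assumptions.

def inside_geofence(lat, lon, polygon):
    """Return True if (lat, lon) lies inside the polygon of
    (lat, lon) vertices bounding the approved operating zone."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Count crossings of a ray cast from the point along the
        # latitude axis; an odd count means the point is inside.
        if (lon1 > lon) != (lon2 > lon):
            crossing_lat = lat1 + (lon - lon1) / (lon2 - lon1) * (lat2 - lat1)
            if lat < crossing_lat:
                inside = not inside
    return inside

# Hypothetical rectangle around a stretch of approved highway.
approved_zone = [(37.00, -122.10), (37.00, -121.90),
                 (37.20, -121.90), (37.20, -122.10)]

if inside_geofence(37.10, -122.00, approved_zone):
    print("L3 hands-free mode may be offered")
else:
    print("Driver must retain full control")
```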
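The sensor-fusion redundancy described in point 3 can be illustrated with a toy confidence-weighted fusion step. The data structures, confidence threshold, and range values are assumptions made for the sketch; a production fusion pipeline running on an SoC or FPGA tracks full object lists from many sensors, not a single range estimate.

```python
# Illustrative sensor-fusion step showing redundancy between camera
# and RADAR. The classes and thresholds are sketch assumptions,
# not a real ADAS interface.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Measurement:
    distance_m: Optional[float]  # estimated range to the lead object
    confidence: float            # 0.0 (unusable) .. 1.0 (fully trusted)

def fuse_range(camera: Measurement, radar: Measurement,
               min_conf: float = 0.3) -> Optional[float]:
    """Confidence-weighted fusion of range estimates. A sensor whose
    confidence falls below min_conf (e.g. a camera blinded by glare)
    is dropped, and the remaining sensor keeps the system operating."""
    usable = [m for m in (camera, radar)
              if m.distance_m is not None and m.confidence >= min_conf]
    if not usable:
        return None  # no trustworthy data; caller must degrade gracefully
    total_conf = sum(m.confidence for m in usable)
    return sum(m.distance_m * m.confidence for m in usable) / total_conf

# Normal conditions: both sensors contribute to the estimate.
print(fuse_range(Measurement(52.0, 0.9), Measurement(50.0, 0.7)))  # ~51.1

# Camera washed out by bright light: RADAR alone keeps the estimate alive.
print(fuse_range(Measurement(None, 0.0), Measurement(50.0, 0.7)))  # 50.0
```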
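Finally, the continuous refinement loop from point 4 can be shown as a toy in-process simulation: vehicles log corner cases, the cloud retrains once enough data accumulates, and each vehicle adopts the new model version over the air. The retraining trigger, version numbers, and in-process "cloud" are all placeholders; real deployments involve telemetry pipelines, training clusters, and cryptographically signed OTA packages.

```python
# Toy version of the data-driven refinement loop: log corner cases,
# "retrain" in the cloud, and push a new model version back via OTA.
# Everything here is simulated in-process for illustration.

corner_case_log = []   # events uploaded by the fleet
deployed_version = 1   # latest model version available in the cloud

def log_corner_case(description: str) -> None:
    """Vehicle side: record an event outside normal parameters."""
    corner_case_log.append(description)

def cloud_retrain() -> int:
    """Cloud side: once enough new data accumulates, 'train' a new
    model and bump its version number."""
    global deployed_version
    if len(corner_case_log) >= 3:   # arbitrary retraining trigger
        deployed_version += 1
        corner_case_log.clear()
    return deployed_version

def apply_ota_update(vehicle_version: int) -> int:
    """Vehicle side: adopt the newer model if one is available."""
    return max(vehicle_version, deployed_version)

# One turn of the loop: three logged events trigger a retrain,
# and the vehicle picks up version 2 over the air.
for event in ["glare on camera", "unmapped lane shift", "debris on road"]:
    log_corner_case(event)
cloud_retrain()
print(apply_ota_update(1))   # -> 2
```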
Storage, Memory and Power Management Are Essential for Success