
Autonomous Driving and the Modern Data Center – Why High-Performance Memory and Storage Solutions Are Essential

Over the next decade, autonomous passenger vehicles could become commonplace on public streets and highways, transforming the way we travel and conduct our daily lives. The market for these self-driving vehicles was valued at $22.22 billion in 2021 and could more than triple by 2027, reaching about $75.95 billion [1]. What's more, research firm IDTechEx reports [2] that autonomous vehicles could match or exceed safety requirements by 2024, and that self-driving vehicles could meet all mobility demands in the U.S. by 2046.

However, before autonomous passenger vehicles can become mainstream and operate independently, the artificial neural networks on which they rely must first be trained. Labeled data, and lots of it, must pass through neural network training algorithms to produce a model capable of "inference": identifying the various objects and obstacles on the road. For example, to recognize a pedestrian, the network must first process pictures or videos of individual people, so that it learns what to look for and can make decisions in real time based on that information. To achieve the high degree of accuracy essential for automotive and public safety, the datasets required to train these models are enormous, demanding substantial compute, memory and storage resources. Where are those datasets stored? Data centers.
Foundation for Autonomous Driving: High-Performance, Massive Storage

As vehicles with L2 and higher levels of autonomy operate on roads, the data collected from on-board sensors will be used not only to operate the vehicles, but also in data centers to train even more sophisticated models. Experts estimate that autonomous vehicles could generate as much as 40 TB of data per hour [3] from the cameras, sensors and other technology the vehicles use to operate. The amount of data generated depends on the sensor technology, the number of sensors used and their resolution. For example, an uncompressed 720p camera stream generates roughly 5 GB of data every minute; a higher-resolution 1080p stream generates up to 10.3 GB per minute. Each vehicle may have six or more cameras, and drivers spend about one hour driving every day [4]. For 720p cameras alone, this translates to about 1.8 TB of data generated per vehicle per day. Add in more sensors (RADAR and LiDAR for redundancy, or more cameras for better coverage) and the data generated grows accordingly. Only a small percentage of the data collected (roughly 30%) will be uploaded and used for training models. Even so, it would take a fleet of only 62,000 autonomous vehicles to generate about 1 exabyte of uploaded data per month. Keep in mind, there are 276 million vehicles [5] on the roads in the United States!

Furthermore, ADAS and AD developers are increasingly using the data collected during driving to generate synthetic video, quickly expanding the training data set, since more data leads to better models. Developers can reconstruct and alter environments (e.g., an intersection or stretch of highway) and insert vehicles, pedestrians and other objects for any number of scenarios. Synthetic video also helps guarantee that high-quality data for corner-case scenarios is included in the training set.
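The per-vehicle and fleet figures above follow from simple multiplication. A quick sketch, using the rates quoted in the text as illustrative assumptions:

```python
# Back-of-the-envelope check of the data volumes described above.
# All rates are illustrative figures from the text, not measurements.

CAMERAS_PER_VEHICLE = 6
GB_PER_MINUTE_720P = 5          # ~uncompressed 720p stream
DRIVING_MINUTES_PER_DAY = 60    # ~1 hour of driving per day
UPLOAD_FRACTION = 0.30          # share of collected data uploaded for training

# Daily raw data per vehicle, in GB
daily_gb = CAMERAS_PER_VEHICLE * GB_PER_MINUTE_720P * DRIVING_MINUTES_PER_DAY
print(f"Per vehicle per day: {daily_gb / 1000:.1f} TB")  # prints "1.8 TB"

# Monthly uploaded data for a 62,000-vehicle fleet, in TB
fleet_tb = 62_000 * (daily_gb / 1000) * UPLOAD_FRACTION * 30
print(f"Fleet upload per month: {fleet_tb / 1_000_000:.2f} EB")  # prints "1.00 EB"
```

This is why a fleet amounting to a fraction of one percent of U.S. registered vehicles is enough to push monthly uploads into exabyte territory.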
Data center storage requirements are set to increase exponentially as ADAS and AD systems move to higher-resolution sensors, more vehicles equipped with these systems are sold, and the amount of synthetically generated video continues to grow. In addition to scalability for ever-growing datasets, key SSD requirements for training include high I/O performance to quickly transfer large sets of data and low latency to minimize the time required to feed data to the CPUs and GPUs during training. To accomplish this, data center architects will need to design systems that support the latest SSD interfaces while looking ahead to products on the horizon. For example, high-performance SSDs that support PCIe 5.0 can provide a significant improvement for data centers tasked with AI training. Samsung's recently released PM1743 enterprise SSD, with a top capacity of 15.36 TB, features a sequential read speed of up to 13,000 MB/s and random read performance of 2,500K IOPS, delivering roughly 2x the performance of the previous PCIe 4.0 generation of products. Built with Samsung's advanced sixth-generation V-NAND, the PM1743 is designed to process vast quantities of data to meet the advanced requirements of high-performance servers.
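To see why sequential throughput matters, consider how long it takes merely to stream a training dataset off a single drive. The sketch below is an idealized division (real pipelines overlap I/O with compute and stripe data across many drives); the PCIe 4.0 rate is a representative figure, roughly half the PM1743's quoted speed:

```python
def hours_to_read(dataset_tb: float, seq_read_mb_s: float) -> float:
    """Idealized time to stream a dataset sequentially from one SSD."""
    dataset_mb = dataset_tb * 1_000_000
    return dataset_mb / seq_read_mb_s / 3600

# Time to make one full pass over a 100 TB training set from one drive
print(f"PCIe 5.0 @ 13,000 MB/s: {hours_to_read(100, 13_000):.1f} h")
print(f"PCIe 4.0 @  6,500 MB/s: {hours_to_read(100, 6_500):.1f} h")
```

Doubling per-drive throughput halves the ideal read time, which compounds across the many epochs and many drives involved in a real training run.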
Even as PCIe 5.0 products are being adopted, the next generation of SSDs is already in development. Earlier this year, the PCI Special Interest Group (PCI-SIG) officially released the PCIe 6.0 specification, which doubles the bandwidth of PCIe 5.0 devices. Companies building data centers will want to plan when to implement these new products to keep pace with growing AI training data sets.

Advanced Memory Computation with High-Bandwidth Memory

For much of its history, compute usage for AI training followed Moore's Law, doubling every two years. Since 2012, however, AI training compute has doubled every 3-4 months.
Source: https://openai.com/blog/ai-and-compute/

This trend is expected to continue for the foreseeable future, driven by ever-increasing parameter counts, which, in general, enable a model to make more accurate predictions. Recently, Google developed a computer vision model that contained 2 billion parameters and achieved 90.45% top-1 accuracy [6]. While this achievement set a record, computer vision systems for fully autonomous vehicles will need to achieve an even higher accuracy rate, given the reliability requirements that safety imposes, and models with more parameters will be necessary.

Training neural networks requires memory that supports high bandwidth, because the process is distributed across numerous servers running in parallel and processing extremely large training sets. This must be accomplished while reducing training time from several days to just a few hours. Beyond getting models into production faster, accelerating training also reduces costs, as less power is consumed cooling the systems. On-chip memory is the most efficient solution; however, it can be cost-prohibitive and will not scale as AI model parameter counts grow to hundreds of billions and eventually trillions.

High Bandwidth Memory (HBM) is ideally suited for AI applications. HBM employs stacks of vertically interconnected DRAM chips, supports a 1,024-bit I/O interface and implements Through-Silicon Via (TSV) technology, in which connections between dies are made through thousands of copper-filled holes functioning as wires, with alternating layers of external microbumps.
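The stack bandwidths quoted below follow directly from that 1,024-bit interface multiplied by the per-pin data rate. A minimal sketch, where the 3.2 Gb/s per-pin rate is a representative HBM2E figure rather than an official spec citation:

```python
def hbm_stack_bandwidth_gb_s(io_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack: interface width x per-pin rate, bits -> bytes."""
    return io_width_bits * pin_rate_gbps / 8

# HBM2E: 1,024-bit interface at ~3.2 Gb/s per pin
print(hbm_stack_bandwidth_gb_s(1024, 3.2))  # ~409.6 GB/s, matching the figure below
```

Because the interface width is fixed at 1,024 bits per stack, each HBM generation raises bandwidth primarily by increasing the per-pin data rate, and systems scale further by mounting multiple stacks.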
Source: Samsung

These HBM stacks are mounted on a silicon interposer alongside system controllers or AI accelerators to produce a system-in-package (SiP) solution. For a single stack, different HBM densities are achieved by varying the number of stacked DRAM dies, up to a maximum of eight. Higher system densities and bandwidth can be achieved by mounting multiple HBM stacks within the SiP. The current iteration available today is HBM2E, with a single stack supporting up to 409 GB/s of bandwidth. The next generation, HBM3, is expected to support more than 600 GB/s, giving data center servers the bandwidth needed to efficiently train increasingly complex and safer models for autonomous driving while cutting power consumption and costs.

On the Road to Accuracy

The need to collect, manage and process vast amounts of data is impacting numerous industries and innovations that leverage AI. Wherever public safety is concerned, the success of those innovations will depend on scalable, low-power memory and storage solutions that support high-performance AI training. According to Donovan Hwang, Senior Director of Marketing for Samsung, "Autonomous vehicles will have a significant impact on data center resources in the near future. As a result, data center architects will need to accommodate a continuous loop of data storage and training. HBM is an ideal solution for AI training, as it supports the high performance-per-watt ratio required, while PCIe 5 enterprise SSDs like the PM1743 provide the capacity, bandwidth and latency needed to store and transfer massive training data sets." Powered by HBM solutions and enterprise SSDs such as those from Samsung, data centers can make autonomous vehicles a truly viable and safe alternative for motorists in the near future.

Learn more about Samsung's innovative memory solutions, and how they're driving innovation in the automotive industry.
[1] This report covers the latest trends and technology advancements in the autonomous driving market.
[2] IDTechEx expects autonomous driving services will create opportunities for the underlying sensors market in the next 20 years.
[3] Morgan Stanley reports that this is equivalent to an iPhone's data use over 3,000 years.
[4] Data generation will vary widely based on the car's activity.
[5] This figure represents cars registered between 1990 and 2020.
[6] The model, called ViT-G/14, was described in a paper published on arXiv and is based on Google's recent work on Vision Transformers (ViT).