In recent years, the growth of technologies like Artificial Intelligence, Machine Learning, and Cloud Computing has led to the generation of massive amounts of data. The rise of data-driven technologies has also driven demand for more powerful computer hardware architectures. More and more cores are being integrated onto single processor chips to create processors capable of handling the demands of data-intensive applications. However, memory bandwidth and density have not kept pace with increasing CPU core counts, creating a widening gap between processor and memory performance.
The insatiable demand for memory density and bandwidth is pushing the limits of existing memory technologies. Conventional DRAM interfaces limit how far memory capacity can scale, calling for an entirely new memory interface technology. What's more, the rise of AI and Big Data has fueled the trend toward heterogeneous computing, in which multiple processors of different types work in parallel to process massive volumes of data.
In light of these trends, a next-generation interconnect technology is essential for heterogeneous computing and composable infrastructure, to enable efficient resource utilization.
What Is Compute Express Link™ (CXL)?
An open standard developed by the CXL™ Consortium, CXL is a high-speed, low-latency CPU-to-device interconnect technology built on the PCIe physical layer. CXL provides efficient connectivity between the host CPU and attached devices such as accelerators and memory expansion devices.
The CXL transaction layer is made up of three dynamically multiplexed sub-protocols that share a single link: CXL.io, CXL.cache, and CXL.mem. When a CXL device is connected to a CXL host, it is discovered, enumerated, configured, and managed through the CXL.io protocol. CXL.cache enables CXL devices to coherently access processor memory, while CXL.mem enables processors to access CXL device memory. The CXL.cache and CXL.mem protocol stacks are optimized for low latency.
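To make the division of labor between the three sub-protocols concrete, here is a minimal, purely illustrative Python sketch (not a real driver or the CXL API): a hypothetical host and device model in which each transaction is tagged with the sub-protocol that would carry it. All class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CXLDevice:
    """Hypothetical model of a CXL device with device-attached memory."""
    name: str
    memory: dict = field(default_factory=dict)  # device memory, a CXL.mem target
    configured: bool = False

@dataclass
class CXLHost:
    """Hypothetical model of a CXL host; logs which sub-protocol carries each transaction."""
    memory: dict = field(default_factory=dict)  # host memory, a CXL.cache target
    devices: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def attach(self, device: CXLDevice):
        # CXL.io: discovery, enumeration, and configuration of the device
        self.log.append(("CXL.io", f"enumerate {device.name}"))
        device.configured = True
        self.devices.append(device)

    def read_device_memory(self, device: CXLDevice, addr: int):
        # CXL.mem: the host loads from device-attached memory
        self.log.append(("CXL.mem", f"host read {addr:#x}"))
        return device.memory.get(addr)

    def device_read_host_memory(self, device: CXLDevice, addr: int):
        # CXL.cache: the device coherently reads host memory
        self.log.append(("CXL.cache", f"{device.name} read {addr:#x}"))
        return self.memory.get(addr)

host = CXLHost(memory={0x1000: 42})
expander = CXLDevice("mem-expander", memory={0x2000: 7})
host.attach(expander)                                  # carried by CXL.io
print(host.read_device_memory(expander, 0x2000))       # carried by CXL.mem
print(host.device_read_host_memory(expander, 0x1000))  # carried by CXL.cache
print([proto for proto, _ in host.log])
```

The point of the sketch is only the direction of each access: CXL.io handles setup, CXL.mem carries host-to-device-memory traffic, and CXL.cache carries device-to-host-memory traffic, all multiplexed over the same link.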