As data centers scale to support increasingly complex AI workloads, they require memory that delivers high performance without compromising power efficiency. SOCAMM2¹⁾ is Samsung’s next-generation LPDDR5X-based server module that brings LPDDR-class power efficiency into a modular form factor purpose-built for AI infrastructure. By combining high bandwidth and low power in a compact, detachable design, SOCAMM2 helps data centers increase system density while improving cooling efficiency and overall total cost of ownership (TCO).
1) Small Outline Compression Attached Memory Module
Built on LPDDR5X, SOCAMM2 delivers significantly lower memory power consumption than conventional DDR-based server memory solutions²⁾, offering more than 70% better power efficiency without compromising performance. This reduction in power consumption and heat simplifies thermal management, helping data centers maintain stable operating temperatures while easing cooling requirements — a critical consideration in high-density AI infrastructure. As a result, SOCAMM2 enables more AI servers to operate within the same power budget, improving overall energy efficiency and lowering TCO.
2) Compared with conventional DDR5-based RDIMM modules.
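To make the efficiency claim concrete, the sketch below reads "more than 70% better power efficiency" as at least 1.7× performance per watt, so the same memory workload draws roughly 1/1.7 of the power. The interpretation and the baseline wattage are illustrative assumptions, not published figures.

```python
# Hypothetical illustration of the ">70% better power efficiency" claim.
# Assumption: efficiency here means performance per watt, so equal work
# needs about 1/1.7 of the baseline power. The 100 W baseline is made up.
rdimm_memory_power_w = 100.0                      # hypothetical DDR5 RDIMM memory power per server
socamm_memory_power_w = rdimm_memory_power_w / 1.7  # same workload at 1.7x perf/W
saving_w = rdimm_memory_power_w - socamm_memory_power_w

print(round(socamm_memory_power_w, 1))  # ~58.8 W for the same work
print(round(saving_w, 1))               # ~41.2 W freed per server
```

Under this reading, each watt freed from memory can be reallocated to additional servers or accelerators within the same rack power budget.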
Beyond its power-efficiency advantages, SOCAMM2 is also engineered to deliver the high memory bandwidth required by AI accelerators. Each module provides up to 153.6 GB/s of bandwidth — up to 2.6× higher than DDR-based server memory³⁾ — helping reduce memory bottlenecks and keep GPUs more fully utilized. This additional throughput allows more AI tasks to run in parallel on the same server infrastructure, improving responsiveness for real-time and latency-sensitive inference services.
3) Compared with conventional DDR5-based RDIMM modules.
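The 153.6 GB/s figure can be sanity-checked with a back-of-the-envelope calculation. The per-pin data rate (9.6 Gbps, a common LPDDR5X top speed) and the 128-bit module bus width are assumptions inferred from the stated bandwidth, not confirmed SOCAMM2 specifications.

```python
# Back-of-the-envelope check of the per-module bandwidth figure.
# Assumptions: 9.6 Gbps per pin (typical top LPDDR5X data rate) and a
# 128-bit effective module bus width.
data_rate_gbps_per_pin = 9.6
bus_width_bits = 128

# Bandwidth in GB/s = (Gbps per pin x pins) / 8 bits per byte.
bandwidth_gb_per_s = data_rate_gbps_per_pin * bus_width_bits / 8
print(bandwidth_gb_per_s)  # 153.6
```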
SOCAMM2 is installed in a horizontal orientation, unlike conventional vertically mounted modules. This layout improves system-level space utilization around CPUs and AI accelerators, enabling more flexible heatsink sizing and placement as well as cleaner front-to-back airflow design. Combined with SOCAMM2’s low-power characteristics, this horizontal design enhances cooling efficiency and increases thermal headroom in high-density AI servers while remaining compatible with both air- and liquid-cooling systems.
SOCAMM2 adopts a detachable modular design, unlike traditional soldered LPDDR implementations. This approach allows data centers to scale memory capacity and performance over time by upgrading or replacing modules without modifying the mainboard, thereby simplifying lifecycle management and day-to-day system maintenance. By reducing the effort and disruption associated with memory service operations, SOCAMM2 helps minimize server downtime and extend the useful life of existing server platforms — bringing LPDDR-class efficiency into a compact, modular form factor designed to evolve alongside AI workloads.