
All-Flash NVMe™ Reference Architecture with Red Hat Ceph Storage 3.2

Enterprise storage infrastructure and related technologies continue to evolve year after year. As IoT, 5G, AI, and ML technologies gain attention, demand for software-defined storage (SDS) solutions built on clustered storage servers is also increasing. Ceph has emerged as a leading SDS solution for performance-intensive workloads, so the high throughput and low latency of the underlying storage devices are important factors in the overall performance of a Ceph cluster. Deploying Ceph on NVMe™ SSDs maximizes this performance gain. Samsung therefore designed Ceph clusters based on all-flash NVMe™ SSDs and conducted a range of tests to provide Ceph users with optimized configurations.

This document introduces Samsung's NVMe™ SSD Reference Architecture for delivering optimal performance with Red Hat® Ceph Storage using Samsung PM1725a NVMe™ SSDs on an x86-based storage cluster. It also provides an optimized Ceph cluster configuration and the corresponding performance benchmark results. Samsung configured five all-flash storage nodes with PM1725a NVMe™ SSDs, achieving 4 KB random read performance surpassing 2 million IOPS. The tables below summarize the throughput and latency measured in this Reference Architecture. Please click the link below to download the white paper.
4 KB Random Workload                      Write      Read
Avg. Throughput (KIOPS)                   493        2,255
Avg. 99.99th Percentile Latency (ms)      74.20      153.21
Avg. Latency (ms)                         12.97      2.85
128 KB Sequential Workload                Write      Read
Avg. Throughput (GB/s)                    18.8       51.6
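
For readers who want to approximate these workloads on their own clusters, the sketch below is a minimal fio job file using fio's rbd engine. It is not taken from the white paper; the pool name (rbd_bench), image name (bench_img), client name, queue depths, and runtime are illustrative assumptions and should be tuned to the target environment.

# Hypothetical fio job file approximating the 4 KB random and 128 KB
# sequential workloads described above, run through librbd.
# Pool, image, and client names below are placeholders, not values
# from the white paper.
[global]
ioengine=rbd
clientname=admin
pool=rbd_bench
rbdname=bench_img
time_based=1
runtime=300
group_reporting=1

[4k-randread]
rw=randread
bs=4k
iodepth=32

[4k-randwrite]
stonewall
rw=randwrite
bs=4k
iodepth=32

[128k-seqread]
stonewall
rw=read
bs=128k
iodepth=16

[128k-seqwrite]
stonewall
rw=write
bs=128k
iodepth=16

Measuring through librbd in this way exercises the full Ceph data path rather than the raw SSDs, so results depend heavily on cluster tuning, replication factor, and the number of client jobs driving the load.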