Anyone for a slice of record Pi? New landmark sees 314 trillion digits calculated as news site trounces Google Cloud - for now
Storage throughput determined success more than raw processor count
- StorageReview’s physical server calculated 314 trillion digits without a distributed cloud infrastructure
- The entire computation ran continuously for 110 days without interruption
- Energy use dropped dramatically compared with previous cluster-based Pi records
A new benchmark in large-scale numerical computation has been set with the calculation of 314 trillion digits of pi on a single on-premises system.
The run was completed by StorageReview, overtaking earlier cloud-based efforts, including Google Cloud's 100-trillion-digit calculation from 2022.
Unlike hyperscale approaches that relied on massive distributed resources, this record was achieved on one physical server using tightly controlled hardware and software choices.
Runtime and system stability
The calculation ran continuously for 110 days, which is significantly shorter than the roughly 225 days required by the previous large-scale record, even though that earlier effort produced fewer digits.
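For readers who want to see that gap in throughput terms, here is a rough back-of-the-envelope sketch in Python (it assumes the roughly 225-day effort is the 300-trillion-digit cluster record discussed later in this article, and treats all figures as approximate):

```python
# Rough digits-per-day comparison between the two record runs.
# Assumes the ~225-day effort is the 300-trillion-digit cluster record
# referenced later in this article; all figures are approximate.

new_digits, new_days = 314e12, 110
old_digits, old_days = 300e12, 225

new_rate = new_digits / new_days   # digits produced per day
old_rate = old_digits / old_days

print(f"Single-server run: {new_rate / 1e12:.2f} trillion digits per day")
print(f"Earlier record:    {old_rate / 1e12:.2f} trillion digits per day")
print(f"Roughly {new_rate / old_rate:.1f}x the daily output")
```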
The uninterrupted execution was attributed to operating system stability and limited background activity, as well as a balanced NUMA topology and careful memory and storage tuning designed to match the behavior of the y-cruncher application.
The workload was treated less like a demonstration and more like a prolonged stress test of production-grade systems.
At the center of the effort was a Dell PowerEdge R7725 system equipped with two AMD EPYC 9965 processors, providing 384 CPU cores, alongside 1.5 TB of DDR5 memory.
Storage consisted of forty 61.44 TB Micron 6550 Ion NVMe drives, delivering nearly 2.5 PB of raw capacity.
Thirty-four of those drives were allocated to y-cruncher scratch space in a JBOD layout, while the remaining drives formed a software RAID volume to protect the final output.
This configuration prioritized throughput and power efficiency over full data resiliency during computation.
The numerical workload generated substantial disk activity, including approximately 132 PB of logical reads and 112 PB of logical writes over the course of the run.
Peak logical disk usage reached about 1.43 PiB, while the largest checkpoint exceeded 774 TiB.
SSD wear metrics reported roughly 7.3 PB written per drive, totaling about 249 PB across the swap devices.
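Those totals translate into sustained storage throughput few systems sustain outside synthetic benchmarks. A quick sanity check of the quoted figures, assuming decimal petabytes and a 110-day run, lands within rounding of the numbers above:

```python
# Back-of-the-envelope check of the storage figures quoted above.
# Assumes decimal units (1 PB = 1e15 bytes) and a 110-day run.

run_seconds = 110 * 24 * 3600

logical_reads = 132e15    # bytes read by y-cruncher over the run
logical_writes = 112e15   # bytes written over the run

print(f"Average read throughput:  {logical_reads / run_seconds / 1e9:.1f} GB/s")
print(f"Average write throughput: {logical_writes / run_seconds / 1e9:.1f} GB/s")

# NAND wear: ~7.3 PB written per drive across the 34 swap drives
total_swap_writes = 7.3e15 * 34
print(f"Total writes to swap drives: {total_swap_writes / 1e15:.1f} PB")
```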
Internal benchmarks showed sequential read and write performance more than doubling compared with the earlier 202-trillion-digit platform.
For this setup, power consumption was reported at around 1,600 watts, with total energy usage of approximately 4,305 kWh, or 13.70 kWh per trillion digits calculated.
This figure is far lower than estimates for the earlier 300-trillion-digit cluster-based record, which reportedly consumed over 33,000 kWh.
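The per-digit efficiency claim follows directly from those reported totals; a short sketch makes the comparison explicit (the cluster figure is the published estimate, not an independent measurement):

```python
# Energy per trillion digits, using the reported totals for both runs.

single_server_kwh, single_server_digits = 4305, 314   # this record
cluster_kwh, cluster_digits = 33000, 300              # reported estimate

single_server_per_trn = single_server_kwh / single_server_digits
cluster_per_trn = cluster_kwh / cluster_digits

print(f"Single server:  {single_server_per_trn:.1f} kWh per trillion digits")
print(f"Cluster (est.): {cluster_per_trn:.0f} kWh per trillion digits")
print(f"Roughly a {cluster_per_trn / single_server_per_trn:.0f}x gap in energy per digit")
```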
The result suggests that, for certain workloads, carefully tuned servers and workstations can outperform cloud infrastructure in efficiency.
That assessment, however, applies narrowly to this class of computation and does not automatically extend to all scientific or commercial use cases.
