Users of the National Energy Research Scientific Computing Center (NERSC) can run AI jobs on the organization’s Perlmutter supercomputer at half price this month.
Amid a worldwide shortage of computing horsepower for AI workloads, the facility – which operates on behalf of the US Department of Energy’s Office of Science – is changing the equation.
Between September 7 and October 1, registered users will be charged half the normal rate. For example, a three-hour job running on seven GPU nodes would normally incur a charge of 21 GPU node-hours – but for the rest of September, it will be charged just 10.5 GPU node-hours.
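The accounting behind that example is straightforward: the charge is the job’s wall-clock hours multiplied by the number of GPU nodes it uses, with the 50 percent discount applied on top. The short Python sketch below works through the same numbers; the function and variable names are illustrative only and are not part of any NERSC tool.

def gpu_node_hours(hours, nodes, discount=0.0):
    """Charge for a GPU job: wall-clock hours times nodes, minus any discount."""
    return hours * nodes * (1.0 - discount)

# The article's example: a three-hour job on seven GPU nodes.
normal = gpu_node_hours(hours=3, nodes=7)                    # 21.0 GPU node-hours
discounted = gpu_node_hours(hours=3, nodes=7, discount=0.5)  # 10.5 GPU node-hours
print(normal, discounted)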
Perlmutter’s A100 GPUs
“Using your time now benefits the entire NERSC community and spreads demand more evenly throughout the year, so to encourage usage now, we are discounting all jobs run on the Perlmutter GPU nodes by 50% starting tomorrow and through the end of September,” wrote NERSC user engagement group leader Rebecca Hartman-Baker.
Hartman-Baker also pointed to additional support that NERSC will offer users during the period – for example, for those seeing poor performance who want help tuning their job scripts, or those who want to try out code but aren’t sure where to start.
Deployed in 2021, Perlmutter is an HPE Cray EX supercomputer that pairs AMD Zen 3 Epyc CPUs with Nvidia A100 Tensor Core GPUs. The first phase of deployment fitted the machine with 1,536 GPU-accelerated nodes, each combining an AMD CPU with four A100 GPUs, backed by 35PB of all-flash Lustre storage. The second phase added 3,072 CPU-only nodes, each with two AMD Epyc processors and 512GB of memory.
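For a sense of scale, the sketch below simply multiplies out the figures quoted above into system-wide totals; it assumes nothing beyond the numbers in this article.

# Back-of-the-envelope totals from the Perlmutter figures quoted above.
gpu_nodes = 1536            # phase-one GPU-accelerated nodes
gpus_per_node = 4           # A100 GPUs in each of those nodes
cpu_nodes = 3072            # phase-two CPU-only nodes
mem_per_cpu_node_gb = 512   # memory per CPU-only node

total_gpus = gpu_nodes * gpus_per_node                     # 6,144 A100 GPUs
total_cpu_mem_tb = cpu_nodes * mem_per_cpu_node_gb / 1024  # 1,536 TB across CPU nodes
print(f"{total_gpus} A100 GPUs, {total_cpu_mem_tb:.0f} TB of CPU-node memory")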
The supercomputer is largely used for nuclear fusion simulations, climate projections, and materials and biological research. The first workloads run on Perlmutter included a project to map atomic interactions – work that could lead to better batteries and biofuels.
GPU capacity for AI workloads is hard to come by, and the offer applies only to NERSC users. It was first highlighted by Microsoft high-performance computing (HPC) specialist Glenn Lockwood, who noted that NERSC could “make a killing” by backfilling idle capacity with commercial workloads.
This would be particularly true during the summer months, when many academics are away. There are, however, alternative means of renting GPUs, including Akash’s decentralized Supercloud for AI network.