In January 2021, Georgia Tech formalized the adoption of a new method of funding research cyberinfrastructure. The new model, which has the full support of Georgia Tech’s executive leadership, aims to be more sustainable and flexible. It is based on actual consumption of resources, similar to commercial cloud offerings: PIs are charged only for what they actually use rather than for a fixed capacity that may sit idle. Other advantages of this model include:
- Researchers can leverage new hardware releases rather than being restricted to hardware purchased at a specific point in time, and can tailor computing resources to fit their scientific workflows rather than being limited to the specific quantity and configuration of compute nodes originally purchased.
- Rapid provisioning, with no need to wait for a lengthy procurement cycle to complete.
- Insulation from compute node failures. Nodes in need of repair can be taken out of service while jobs proceed on other nodes in the pool, rather than reducing the capacity available to a particular user.
- A free tier that provides every PI the equivalent of 10,000 CPU-hours per month on a 192GB compute node and 1 TB of project storage at no cost.
- PACE staff monitor the time jobs wait in the queue and procure additional equipment as needed to keep wait times reasonably low.
- A similar consumption model has been used successfully at other institutions, such as the University of Washington and UCSD, and the approach is also being developed by key sponsors (e.g., NSF’s cloudbank.org).
PACE has secured the necessary campus approvals to waive the F&A overhead on PACE services, as well as on purchases from commercial cloud providers, for proposals submitted to sponsors between January 1, 2021 and June 30, 2023. While not yet formally committed, this waiver is anticipated to continue beyond that window. Complete details on the cost model and rates are published at this site.
Policies affecting the Phoenix cluster are maintained by the faculty-led PACE Advisory Committee.
PACE Services
- Phoenix cluster – The Phoenix cluster is the largest computational resource in PACE, ranked #277 on the November 2020 Top500 list (top500.org) of supercomputers worldwide. It supports a wide range of scientific disciplines.
- Hive cluster – The Hive cluster is funded by the National Science Foundation (NSF) through Major Research Instrumentation (MRI) award 1828187, “MRI: Acquisition of an HPC System for Data-Driven Discovery in Computational Astrophysics, Biology, Chemistry, and Materials Science”, and is dedicated to supporting research in accordance with the terms of that award.
- Firebird cluster – The Firebird cluster supports research involving Controlled Unclassified Information (CUI), including ITAR-regulated data and other forms of protected data.
- ICE – The Instructional Cluster Environment is an educational resource, separate from production research resources, that gives undergraduate and graduate students an opportunity to gain first-hand scientific computing experience, including HPC and GPU programming. It is configured with the same hardware and software as the Phoenix cluster to ease the transition between learning and research contexts. PACE manages both the PACE-ICE and CoC-ICE clusters.
- OSG – The Open Science Grid is a national, distributed computing partnership for data-intensive research. PACE operates resources that participate in OSG, supporting projects including LIGO, VERITAS, and CTA. PACE received NSF CC* award 1925541, “Integrating Georgia Tech into the Open Science Grid for Multi-Messenger Astrophysics”, through which it added CPU, GPU, and storage capacity to its existing OSG resources, as well as the first regional StashCache service, which benefits all OSG institutions in the Southeast region, not just Georgia Tech.
- ScienceDMZ – A “frictionless” network that uses Globus to facilitate high-speed data transfers between PACE and research cyberinfrastructure resources at collaborating institutions worldwide (see the transfer sketch following this list).
- Archival storage – A low-cost storage tier designed for long-term retention of data sets; it can be included as a key component of data management plans.
- Extended Consultation Services – For projects and complex problems specific to individual research groups, PACE offers paid Extended Consultation Services (ECS), providing dedicated technical expertise beyond the standard level of service. Each ECS request is evaluated on a case-by-case basis and accepted subject to PACE team availability.
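As a rough illustration of the ScienceDMZ workflow, the sketch below submits a Globus transfer using the Globus Python SDK (globus_sdk). It is a generic example rather than PACE’s documented procedure; the client ID, endpoint UUIDs, and file paths are hypothetical placeholders.

```python
# Minimal sketch of a Globus transfer via the Globus Python SDK.
# CLIENT_ID, endpoint UUIDs, and paths are placeholders, not PACE values.
import globus_sdk

CLIENT_ID = "your-native-app-client-id"     # hypothetical registered app
SRC_ENDPOINT = "source-endpoint-uuid"       # e.g., a PACE Globus collection
DST_ENDPOINT = "destination-endpoint-uuid"  # e.g., a collaborator's collection

# Interactive login to obtain a transfer token for this user.
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
code = input("Paste the authorization code: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Submit an asynchronous transfer; the Globus service manages retries
# and integrity checks on the user's behalf.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
task = globus_sdk.TransferData(
    source_endpoint=SRC_ENDPOINT, destination_endpoint=DST_ENDPOINT
)
task.add_item("/path/on/source/dataset.tar", "/path/on/dest/dataset.tar")
result = tc.submit_transfer(task)
print("Submitted transfer task:", result["task_id"])
```

Because the transfer runs asynchronously on the Globus service, the submitting machine can disconnect once the task is accepted; progress can be checked later through the Globus web app or API.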
What is included in a free tier account on the Phoenix cluster
All academic and research faculty (“PIs”) participating in PACE are automatically granted a base level of resources in addition to any funding they may bring. Each PI is provided 1 TB of project storage and a number of credits equivalent to 10,000 CPU-hours per month on a 192GB compute node. These credits may be applied toward any computational resources (e.g., GPUs, high-memory nodes) available within the Phoenix cluster. All PACE users also have access to the preemptable backfill queue at no cost.
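As a rough illustration of the credit arithmetic, the sketch below estimates whether a hypothetical monthly workload fits within the free tier. It assumes usage is billed in proportion to cores times wall-clock hours on the standard 192GB node; the actual credit rates and node configurations are defined in the published cost model.

```python
# Back-of-the-envelope free-tier estimate. Assumes billing proportional
# to cores x wall-clock hours on a standard 192GB node; actual credit
# rates are published on the PACE cost model site.
FREE_TIER_CPU_HOURS = 10_000  # per PI, per month

def cpu_hours(jobs: int, cores_per_job: int, hours_per_job: float) -> float:
    """CPU-hours consumed by a batch of identical jobs."""
    return jobs * cores_per_job * hours_per_job

# Hypothetical monthly workload: 50 jobs, each using 24 cores for 6 hours.
used = cpu_hours(jobs=50, cores_per_job=24, hours_per_job=6)
print(f"Estimated usage: {used:,.0f} CPU-hours "
      f"({used / FREE_TIER_CPU_HOURS:.0%} of the free tier)")
```

Here the hypothetical workload consumes 7,200 CPU-hours, about 72% of the monthly free-tier credits, leaving headroom for additional jobs.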
Click the link for further information on how to join PACE.