Some critical questions from our faculty include: 


"How can I participate?"

There are several straightforward options to consider:

1.  Immediate access to the shared resources available in the FoRCE Research Computing Environment.

This option provides faculty immediate access to the shared resources of the FoRCE cluster, including compute nodes, GPUs, and basic storage.  Policies affecting the FoRCE cluster are maintained by the Faculty Governance Committee and detailed on the policy page.

Cost: This option is provided at no additional cost to participants.

How to sign up: Submit a brief proposal online here. (Note: You must be logged into the site using your GT credentials. Log in here.)

2.  Faculty contribute nodes for shared access, augmenting the FoRCE cluster.

Faculty are encouraged to use their HPC and research computing dollars to add nodes to the FoRCE.  In all cases, priority is given to the owner of these nodes.  When unused by the owner, these nodes may be utilized by other faculty who have also contributed nodes.  Policies affecting sharing are maintained by the Faculty Governance Committee and detailed on the policy page.  By participating in this way, faculty also gain access to the FoRCE cluster and other shared clusters as time is available.  This option is good for faculty who must periodically have nodes reserved for their use, but whose workloads can otherwise be handled by a shared queue.  As an additional advantage, participants gain access to recent architectures over the longer term as the FoRCE cluster grows with the addition of new nodes.

Cost: As jobs in the shared environment may execute on other nodes, a baseline hardware configuration must be maintained.  Participants pay for the compute nodes they add to FoRCE as well as expanded storage.  (See #4)

How to sign up: Contact

3.  Faculty purchase compute nodes and expanded storage dedicated for exclusive use.

Faculty who require a cluster environment with dedicated compute nodes can purchase these nodes and still take advantage of the federated infrastructure.  These nodes are not shared with the FoRCE cluster or other shared clusters, and are available exclusively to the participant and to researchers they authorize for access. This option is good for faculty who expect to keep their nodes busy most of the time.

Cost: Participants pay for the compute nodes precisely sized to their requirements as well as expanded storage.  (See #4)

How to sign up: Contact

4.  Faculty purchase expanded storage.

All user accounts are granted a 5GB home directory quota.  For long-term storage of data sets, faculty may purchase dedicated storage to augment the existing base storage allocation by adding disk space to a project directory.  This storage is fully backed up and implemented using best-practice redundant disk arrays to provide data integrity and availability.  This option can be used independently of the computational node models above.

Cost: Project storage is provided as a dedicated portion of a shared highly expandable DDN/GPFS storage system.  Storage may be purchased in increments as small as 1TB and may be easily expanded on demand.

How to sign up: Contact

5.  Faculty request central hosting of a stand-alone, non-federated cluster.

To maximize the value of every dollar invested in HPC, we strongly encourage participation in the federated cluster model.  An existing cluster that simply needs floor space, power, cooling, and a network connection may be hosted under the PACE Federation.  All such stand-alone requests will be evaluated on a case-by-case basis to ascertain their impact on the long-term availability of hosting facilities and associated resources.

Cost: Determined on a case-by-case basis.

How to sign up: Contact to refine technical details, costs, and options.



"How much does it cost?"

PACE offers various compute options according to the table below.  We do not currently support the acquisition of compute elements based on AMD processors.  Pricing may vary over time as market conditions fluctuate.  PACE does not levy any charges beyond the equipment costs shown below; equipment costs are passed directly from vendors to faculty without markup.  Specific pricing will be provided at the time of purchase.  Participants are encouraged to seek support from PACE in choosing cost-effective and appropriate hardware for their purpose.  Pricing on Intel compute nodes and storage is current as of June 2018.


128GB Intel Compute node - $8,300

  • dual-socket, 14-core Intel E5-2680v4 "Broadwell" @ 2.5GHz (28 cores total)
  • 128 GB DDR4-2400 memory
  • FDR Infiniband card
  • port on the PACE FDR Infiniband switch
  • ethernet cabling
  • shipping, installation, and burn-in testing
  • 5-year next-business-day on-site warranty


256GB Intel Compute node - $9,700

  • same configuration as the 128GB Intel node, just more memory


512GB Intel Compute node - $12,800

  • same configuration as the 128GB Intel node, just more memory
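For rough budgeting, the configurations above can be compared on a per-core basis with a short script.  This is only an illustrative sketch; the prices and core counts are taken directly from the table above, and the script itself is not part of any PACE tooling.

```python
# Per-core cost comparison for the Intel node options above (June 2018 pricing).
# All three configurations share the same dual 14-core CPUs (28 cores total).
nodes = {
    "128GB Intel": {"price": 8300, "cores": 28},
    "256GB Intel": {"price": 9700, "cores": 28},
    "512GB Intel": {"price": 12800, "cores": 28},
}

for name, spec in nodes.items():
    per_core = spec["price"] / spec["cores"]
    print(f"{name}: ${per_core:.2f} per core")
```

Since the core count is identical across the three options, the per-core figures differ only by the cost of the additional memory.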


NOTE: PACE can no longer support consumer-grade nVidia GPUs (e.g. the GTX or Titan line).  As of June 2018, the shipping times for the P100 and V100 GPUs have equalized.  The V100 provides increased performance for a moderate increase in cost.  PACE can support either the P100 or V100, with a slight recommendation for the V100, based on price/performance.


P100 GPU node - $13,100

  • dual-socket, 4-core Intel E5-2324v4 "Broadwell" @ 2.5GHz (8 cores total)
  • 128 GB DDR4-2400 memory
  • 1x nVidia P100 GPU
  • FDR Infiniband card
  • port on the PACE FDR Infiniband switch
  • ethernet cabling
  • shipping, installation, and burn-in testing
  • 5-year next-business-day on-site warranty


dual P100 GPU node - $17,700

  • same configuration as the P100 GPU node, just with a second P100 GPU.


V100 GPU node - $14,200

  • an upgrade to the P100 GPU node, using a V100 GPU rather than the P100.
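The "moderate increase in cost" for the V100 noted above can be quantified from the node prices in this table.  A minimal sketch (the prices come from the table; the percentage is derived, not quoted from PACE):

```python
# Price delta between the single-GPU node options above (June 2018 pricing).
p100_node = 13_100  # P100 GPU node
v100_node = 14_200  # V100 GPU node (same host, upgraded GPU)

delta = v100_node - p100_node
pct = 100 * delta / p100_node
print(f"V100 premium: ${delta} ({pct:.1f}% over the P100 node)")
```

The roughly 8% node-level premium is what underlies the slight recommendation for the V100 on price/performance grounds.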


Storage - $100 / TB / year

  • provisioned from shared GPFS filesystem
  • may be easily expanded on demand
  • smallest increment is 1TB
  • multiple years may be paid up front (e.g., $300 / TB for 3 years)
  • includes nightly backups
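The storage pricing above is simple enough to capture in a small helper.  This is an illustrative sketch only; the `storage_cost` function is hypothetical, not a PACE API, and the rate and minimum increment are taken from the list above.

```python
RATE_PER_TB_YEAR = 100  # $/TB/year, from the pricing above
MIN_INCREMENT_TB = 1    # smallest purchasable increment

def storage_cost(tb, years):
    """Total up-front cost (in dollars) for `tb` terabytes prepaid for `years` years."""
    if tb < MIN_INCREMENT_TB:
        raise ValueError("smallest increment is 1 TB")
    return tb * years * RATE_PER_TB_YEAR

print(storage_cost(1, 3))   # 300 -- matches the 3-year example above
print(storage_cost(10, 5))  # 5000
```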



"What do I get in return?"

The advantage of the federated model is that everyone benefits from the Institute's up-front investment in infrastructure and support.  This applies to all participants who choose any of the first four options described above.  These benefits include:

  • lower direct costs to participants for HPC equipment purchases by leveraging shared resources
  • guidance in developing system specifications consistent with the federated architecture
  • full life-cycle procurement management
  • hosting in a professionally managed data center with a 24/7 operations staff
  • racks, power and cooling to meet high-density demands
  • installation management
  • acceptance testing in accordance with both PACE and end-user requirements
  • secure high-speed scratch storage
  • head node for login access
  • a small home directory (bulk storage is funded by the faculty member)
  • commodity Ethernet networking (Infiniband, if desired, is funded by the faculty member)
  • back-up and restore
  • queue management
  • system administration
  • software and compiler administration (loads, updates, licensing)
  • hardware fixes
  • a dedicated support team managing all aspects of the cluster
  • shared access to recent architectures

