PACE Phoenix Storage Hotfix – Sept 24th, 2024

WHAT’S HAPPENING? 

Due to recently observed lower performance in our Project storage system (coda1), we will be working with our storage vendor to apply updates to the underlying device on Tuesday, September 24th. This should not cause an outage, but it may result in decreased performance for some operations during the patch deployment. Because there is a non-zero risk of outage, we will work hand-in-hand with the vendor during this operation and monitor performance closely. Please let us know if you observe any impact to your work during that time, and we will refund jobs accordingly.

WHEN IS IT HAPPENING? 
The update process will begin on Tuesday morning, Sept 24th, 2024. 
We will send an announcement when the update is complete. 

WHY IS IT HAPPENING? 

The device vendor has recommended patches to the storage devices underlying Phoenix Project storage (coda1) to improve reliability and performance, following recently observed degradation of the metadata servers on our Lustre filesystem.

WHO IS AFFECTED? 

Phoenix users *may* experience slower performance of Phoenix Project storage during the update, and there is a low risk of outage. 

WHAT DO YOU NEED TO DO? 

Please let us know if you observe any impact to work using the Phoenix Project filesystem (coda1) during that time, and we will refund jobs accordingly.

WHO SHOULD YOU CONTACT FOR QUESTIONS? 

For any questions, please contact PACE at pace-support@oit.gatech.edu.

PACE-Wide Emergency Shutdown – Sept 3, 2024

WHAT’S HAPPENING? 

We must shut down all PACE clusters next week to make repairs in the datacenter.

The repair and cluster resumption will take up to 1 day to complete, requires shutting down all nodes in the research hall, and must be done in the next few days.  
 
This shutdown will NOT affect Globus access, login-node access, or access to any storage locations.  

WHEN IS IT HAPPENING? 

Tuesday, September 3rd, 2024, starting at 4 PM EDT. Compute nodes are expected to return to availability on the afternoon of Wednesday, September 4th.  

WHY IS IT HAPPENING? 

Databank, the physical infrastructure provider for our datacenter, detected an issue over the weekend in which multiple cooling doors reported high-temperature alerts. They traced the issue to a faulty high-temperature chiller sensor. It was temporarily bypassed to silence the repeated alerts and must now be replaced to avoid additional issues.

This outage is necessary to prevent widespread catastrophic failure of the servers in the research hall.  

WHO IS AFFECTED? 

All PACE users. Any running jobs on ALL PACE clusters (Phoenix, Hive, Firebird, ICE, and Buzzard) will be stopped at 4 PM on the afternoon of September 3rd, 2024. For Phoenix and Firebird, by default we will provide refunds for interrupted jobs on paid accounts only. Please let us know if this causes a significant loss of funds that prevents you from continuing work on your free-tier Phoenix allocation!

WHAT DO YOU NEED TO DO? 

Wait patiently; we will communicate as soon as the clusters are ready to resume work. 

WHO SHOULD YOU CONTACT FOR QUESTIONS? 

For any questions, please contact PACE at pace-support@oit.gatech.edu.

PACE Maintenance Period Aug 06-09 2024

[Update 07/31/24 02:23pm]

WHEN IS IT HAPPENING?

PACE’s next Maintenance Period starts at 6:00 AM on Tuesday, August 6th (08/06/2024) and is tentatively scheduled to conclude by 11:59 PM on Friday, August 9th (08/09/2024). An extra day is needed to accommodate the additional testing required while both RHEL7 and RHEL9 versions of our systems are in service during the Operating System migration. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard, along with their associated RHEL9 environments) as soon as maintenance work and testing are completed. We plan to focus on the largest portion of each system first, to ensure access to data and compute capabilities are restored as soon as possible.

Also, we have CANCELED the November maintenance period for 2024 and do NOT plan to have another maintenance outage until early 2025.

WHAT DO YOU NEED TO DO?   

As usual, the scheduler will hold any job whose resource request would overlap the Maintenance Period and release it after the maintenance concludes. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, and Buzzard. Please plan accordingly for the projected downtime. CEDAR storage will not be affected.
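One practical consequence: a job submitted shortly before the window can still run if its requested wall time ends before 6:00 AM on August 6th. A minimal Python sketch of that arithmetic (illustrative only, not a PACE-provided tool, and assuming your clock matches the cluster's local time):

    from datetime import datetime

    # Maintenance window start, from this announcement:
    # 6:00 AM on Tuesday, August 6th, 2024 (cluster-local time).
    maintenance_start = datetime(2024, 8, 6, 6, 0)

    # Longest wall time a job submitted right now could request and
    # still finish before the maintenance window begins.
    remaining = maintenance_start - datetime.now()
    total_minutes = max(0, int(remaining.total_seconds()) // 60)
    hours, minutes = divmod(total_minutes, 60)

    # Slurm accepts HH:MM:SS wall times, e.g. sbatch --time=48:30:00
    print(f"Longest safe --time request: {hours:02d}:{minutes:02d}:00")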

For Phoenix, we are migrating 427 nodes (~30% of the ~1,400 total nodes on Phoenix) from RHEL7 to RHEL9 in August. The new RHEL9 nodes will not be available immediately after the Maintenance Period is completed but will come online the following week (August 12th – 16th). After this migration, about 50% of the Phoenix cluster will be running RHEL9, including all but 20 GPU nodes. Given this, we strongly encourage Phoenix users who have not yet migrated their workflows to RHEL9 to do so as soon as possible.
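For workflows that must run on both operating systems during the transition, one approach is to branch on the node's OS release at runtime. A minimal, hypothetical Python sketch reading the standard /etc/os-release file (the branching messages are illustrative):

    # Hypothetical helper for branching between RHEL7 and RHEL9 software
    # stacks; /etc/os-release and its ID/VERSION_ID keys are standard.
    def rhel_major_version(path="/etc/os-release"):
        info = {}
        with open(path) as f:
            for line in f:
                key, sep, value = line.strip().partition("=")
                if sep:
                    info[key] = value.strip('"')
        if info.get("ID") == "rhel":
            return int(info.get("VERSION_ID", "0").split(".")[0])
        return None  # not a RHEL system

    major = rhel_major_version()
    if major == 9:
        print("RHEL9 node: use software built for the RHEL9 environment.")
    elif major == 7:
        print("RHEL7 node: legacy environment; plan to migrate workflows.")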

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [Phoenix and Hive] Continue migrating nodes to the RHEL 9 operating system 
    • Migrate 427 nodes to RHEL9 in Phoenix 
    • Migrate 100 nodes to RHEL9 in Hive 
  • [Phoenix, Hive, Firebird, ICE] GPU nodes will receive new versions of the NVIDIA drivers, which *may* impact locally built tools using CUDA (see the sketch after this list). 
  • [Phoenix] H100 GPU users on Phoenix should use the RHEL9 login node to avoid module environment issues.
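For the NVIDIA driver item above: after maintenance, you can confirm the driver version a node is running and re-test locally built CUDA tools against it. A minimal Python sketch using the standard nvidia-smi query flags (not a PACE-provided tool):

    import subprocess

    # Report the installed NVIDIA driver version on the current node.
    # --query-gpu and --format are standard nvidia-smi options.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print("Installed NVIDIA driver:", result.stdout.strip())
    # If a locally built tool fails with a CUDA driver/runtime version
    # mismatch after maintenance, rebuilding it against a matching CUDA
    # toolkit module is the usual remedy.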

ITEMS NOT REQUIRING USER ACTION: 

  • [all] Databank cooling loop work, which will require shutdown of all systems 
  • [all] Upgrade to RHEL 9.4 from 9.3 on all RHEL9 nodes – should not impact user-installed software 
  • [all] Research and Enterprise Hall Ethernet switch code upgrade 
  • [all] Upgrade PACE welcome emails 
  • [all] Upgrade Slurm scheduler nodes to RHEL9 
  • [CEDAR] Adding SSSD and IDmap configurations to RHEL7 nodes to allow correct group access across PACE resources 
  • [Phoenix] Updates to Lustre storage to improve stability 
    • File consistency checks across all metadata servers, appliance firmware updates, external metadata server replacement on project storage 
  • [Phoenix] Install additional InfiniBand interfaces to HGX servers 
  • [Phoenix] Migrate OOD Phoenix RHEL9 apps 
  • [Phoenix, Hive] Enable Apptainer self-service 
  • [Phoenix, Hive, ICE] Upgrade Phoenix/Hive/ICE subnet managers to RHEL9 
  • [Hive] Upgrade Hive storage for new disk replacement to take effect 
  • [ICE] Updates to Lustre scratch storage to improve stability 
    • File consistency checks and appliance firmware updates 
  • [ICE] Retire ICE enabling rules for ECE 
  • [ICE] Migrate ondemand-ice server to RHEL9 

WHY IS IT HAPPENING?

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.

WHO IS AFFECTED?  

All users across all PACE clusters.

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.

Thank you,

-The PACE Team 

[Update 07/15/24 03:36pm]

WHEN IS IT HAPPENING?  

PACE’s next Maintenance Period starts at 6:00 AM on Tuesday, August 6th (08/06/2024) and is tentatively scheduled to conclude by 11:59 PM on Friday, August 9th (08/09/2024). The additional day is needed to accommodate the extra testing required while both RHEL7 and RHEL9 versions of our systems are in service during the Operating System migration. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard, along with their associated RHEL9 environments) as soon as maintenance work and testing are completed. We plan to focus on the largest portion of each system first, to ensure access to data and compute capabilities are restored as soon as possible.  
 
Additionally, we have cancelled the November maintenance period for 2024 and do not plan to have another maintenance outage until early 2025. 

WHAT DO YOU NEED TO DO?   

As usual, the scheduler will hold any job whose resource request would overlap the Maintenance Period and release it after the maintenance concludes. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, and Buzzard. Please plan accordingly for the projected downtime. CEDAR storage will not be affected. 

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [Phoenix and Hive] Continue migrating nodes to the RHEL 9.3 operating system.  

ITEMS NOT REQUIRING USER ACTION: 

  • [all] Databank cooling loop work, which will require shutdown of all systems 
  • [CEDAR] Adding SSSD and IDmap configurations to allow correct group access across PACE resources 
  • [Phoenix] Updates to Lustre storage to improve stability 
    • File consistency checks across all metadata servers, appliance firmware updates, external metadata server replacement on /storage/coda1 

WHY IS IT HAPPENING?  

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.  

WHO IS AFFECTED?  

All users across all PACE clusters.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.  

Thank you,  

-The PACE Team 

[OUTAGE] Phoenix Project Storage

[Update 06/20/2024 04:58pm]

Dear Phoenix Users,

Summary: The Phoenix cluster is back online. The scheduler is unpaused, jobs that were put on hold have resumed, and the file system is ready for use.

Details: All the appliance components for Phoenix project storage were restarted, and file system consistency was confirmed. We’ll continue to monitor it and run additional consistency checks over the next few days.

Impact: If you were running jobs on Phoenix and using project storage, please verify that your jobs have not run into any issues. We will be issuing refunds for all impacted jobs, so please reach out to pace-support@oit.gatech.edu if you have encountered any issues.
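If you would like a quick way to review jobs from the outage window, Slurm's accounting tool can list their final states. A minimal Python sketch using standard sacct flags (adjust the start date as needed):

    import subprocess

    # List final states of your jobs since the outage began. sacct's
    # -X (one line per job), -S (start time), and -o (output fields)
    # are standard Slurm accounting options.
    result = subprocess.run(
        ["sacct", "-X", "-S", "2024-06-20",
         "-o", "JobID,JobName%20,State,ExitCode"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
    # Jobs in FAILED, CANCELLED, or NODE_FAIL states during the window
    # are the ones to re-check and report for refunds.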

Thank you for your patience,

-The PACE Team

[Update 06/20/2024 01:36 pm]

Summary: The metadata servers for Phoenix project storage (/storage/coda1) are currently down due to degraded performance.

Details: During additional testing with the storage vendor as part of investigation of the performance issues from this morning, it was necessary to bring the storage fully offline, rather than resuming service.

Impact: We have paused the scheduler for now, so you will not be able to start jobs on Phoenix. We will release the scheduler once we have verified that project storage is stable. Access to project storage (/storage/coda1) is currently interrupted; however, scratch storage (/storage/scratch1) is not affected. If you were running jobs on Phoenix and using project storage, please verify that your jobs have not run into any issues. We will be issuing refunds for all impacted jobs as usual.

Only project storage on Phoenix is affected – storage on Hive, ICE, Buzzard, and Firebird works without issues.

Thank you for your patience as we work with our storage vendor to resolve this outage. We will continue to provide updates as work continues.

Please contact us at pace-support@oit.gatech.edu with any questions.

PACE Maintenance Period (May 07 – May 10, 2024) 

[Update 05/09/24 04:25 PM]

Dear PACE users,   

Maintenance on the Phoenix, Hive, Firebird, and OSG Buzzard clusters has been completed. These clusters are back in production and ready for research; all jobs that were held by the scheduler have been released. 

The ICE cluster is still under maintenance due to the RHEL9 migration, but we expect it to be ready tomorrow. Instructors teaching summer courses will be notified when it is ready. 

The POSIX user group names on the Phoenix, Hive, Firebird, and OSG Buzzard clusters have been updated so that names now start with the “pace-” prefix. If your scripts or workflows rely on POSIX group names, they will need to be updated; otherwise, no action is required on your part. This is a step towards tighter integration of PACE systems with central IAM tools, which will lead to improvements across the board in the PACE user experience.
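For example, you can list the group names your account currently carries, and write checks that tolerate both the old and the prefixed form during the transition. A minimal Python sketch using the standard grp module (the group name "myproject" is hypothetical):

    import grp
    import os

    # GIDs are unchanged by the rename, so numeric checks keep working;
    # only name-based checks need updating. List current group names:
    for gid in os.getgroups():
        print(gid, grp.getgrgid(gid).gr_name)

    # Transitional check that accepts the old or the prefixed form.
    # "myproject" is a hypothetical group name used for illustration.
    def in_group(name):
        names = {grp.getgrgid(g).gr_name for g in os.getgroups()}
        return name in names or f"pace-{name}" in names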

Just a reminder that the next Maintenance Period will be August 6-8, 2024. 

Thank you for your patience! 

-The PACE Team 

[Update 05/07/24 06:00 AM]

PACE Maintenance Period starts now at 6:00 AM on Tuesday, 05/07/2024, and is tentatively scheduled to conclude by 11:59 PM on Friday, 05/10/2024.

[Update 05/01/24 06:37 PM]

WHEN IS IT HAPPENING?

PACE’s next Maintenance Period starts at 6:00 AM on Tuesday, May 7th, 05/07/2024, and is tentatively scheduled to conclude by 11:59 PM on Friday, May 10th, 05/10/2024. An extra day is needed to accommodate physical work done by Databank in the Coda Data Center. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard) as soon as maintenance work is complete. 

WHAT DO YOU NEED TO DO?

As usual, the scheduler will hold any job whose resource request would overlap the Maintenance Period and release it after the maintenance concludes. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, CEDAR, and Buzzard. Please plan accordingly for the projected downtime. 

WHAT IS HAPPENING?

ITEMS REQUIRING USER ACTION: 

  • [all] During the maintenance period, the PACE team will rename all POSIX user groups so that names will start with the “pace-” prefix.
    • This will NOT affect numerical GIDs, but if your scripts or workflows rely on group names, they will need to be updated.
    • If you don’t use POSIX user group names in your scripts or workflows, no action is required on your part.
    • This is a step towards tighter integration of PACE systems with central IAM tools, which will lead to improvements across the board in the PACE user experience.
    • NOTE: This item was originally planned for January but was delayed to avoid integration issues with IAM services, which have now been resolved.
  • [ICE] Migrate to the RHEL 9.3 operating system – if you need access to ICE for any summer courses, please let us know! 
    • The ICE login nodes will be updated to RHEL 9.3 as well, and this WILL create new ssh host-keys on ICE login nodes – so please expect a message that “WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!” when you (or your students) next access ICE after maintenance (a sketch for clearing the stale key follows this list). 
  • [ICE] We will be retiring 8 of the RTX6000 GPU nodes on ICE to prepare for the addition of several new L40 nodes the week after the maintenance period. 
  • [software] Sync Gaussian and VASP on RHEL7 pace-apps.
  • [software] Sync any remaining RHEL9 pace-apps for the OS migration.
  • [Phoenix, ICE] Upgrade Nvidia drivers on all HGX/DGX servers.
  • [Hive] The scratch deleter will not run in May and June but will resume in July.
  • [Phoenix] The scratch deleter will not run in May but will resume in June.
  • [ICE] The scratch deleter will run for Spring semester deletion during the week of May 13.
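For the host-key change noted above: the warning is expected and clears once the stale key is removed from your known_hosts file, e.g. with ssh-keygen -R. A minimal Python sketch (the hostname is a placeholder; use the ICE login hostname you normally connect to):

    import subprocess

    # ssh-keygen -R removes every stored key for a host from
    # ~/.ssh/known_hosts, clearing the "REMOTE HOST IDENTIFICATION
    # HAS CHANGED!" warning after a legitimate host-key change.
    ICE_LOGIN_HOST = "login-ice.example.edu"  # placeholder hostname
    subprocess.run(["ssh-keygen", "-R", ICE_LOGIN_HOST], check=True)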

ITEMS NOT REQUIRING USER ACTION: 

  • [datacenter] Databank maintenance: replace all components of the cold loop water pump that had issues a couple of maintenance periods ago.  
  • [Hive] Upgrade the underlying GPFS filesystem to version 5.1 in preparation for RHEL9.
  • [datacenter] Repairs to one InfiniBand switch and two DDN storage controllers with degraded BBUs (Battery Backup Unit).
  • [datacenter] Upgrade storage controller firmware for DDN appliances to SFA 12.4.0. 
  • [ICE] Consolidate all the ICE access entitlements into a single one, all-pace-ice-access.
  • [Hive] Upgrade Hive compute nodes to GPFS 5.1.
  • [Phoenix] Replace cables for the Phoenix storage server.
  • [Firebird] Patch Firebird storage server to 100GbE switch and reconfigure.
  • [Firebird, Hive] Deploy Slurm scheduler CLI+Feature bits on Firebird and Hive. 
  • [datacenter] Configure LDAP on the MANTA NetApp HPCNA SVM.

WHY IS IT HAPPENING?  

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.  

WHO IS AFFECTED?  

All users across all PACE clusters.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.

Thank you,  

-The PACE Team 

[Update 04/22/24 09:53 AM]

WHEN IS IT HAPPENING?  

PACE’s next Maintenance Period starts at 6:00AM on Tuesday May 7th, 05/07/2024, and is tentatively scheduled to conclude by 11:59PM on Friday May 10th, 05/10/2024. The additional day is needed to accommodate physical work carried out by Databank in the Coda datacenter. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard) as soon as maintenance work is complete. 

WHAT DO YOU NEED TO DO?   

As usual, the scheduler will hold any job whose resource request would overlap the Maintenance Period and release it after the maintenance concludes. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, CEDAR, and Buzzard. Please plan accordingly for the projected downtime. 

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [all] During the maintenance period, the PACE team will rename all POSIX user groups so that names will start with the “pace-” prefix.  
    • This will NOT affect numerical GIDs, but if your scripts or workflows rely on group names, they will need to be updated. 
    • If you don’t use POSIX user group names in your scripts or workflows, no action is required on your part. 
    • This is a step towards tighter integration of PACE systems with central IAM tools, which will lead to improvements across the board in the PACE user experience.  
    • NOTE: This item was originally planned for January, but was delayed to avoid integration issues with IAM services, which have now been resolved.
  • [ICE] Migrate to the RHEL 9.3 operating system – if you need access to ICE for any summer courses, please let us know! 
    • Note – This WILL create new ssh host-keys on ICE login nodes – so please expect a message that “WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!” when you (or your students) next access ICE after maintenance.

ITEMS NOT REQUIRING USER ACTION: 

  • [datacenter] Databank maintenance: replace all components of the cold loop water pump that had issues a couple of maintenance periods ago.  
  • [Hive] Upgrade the underlying GPFS filesystem to version 5.1 in preparation for RHEL9 
  • [datacenter] Repairs to one InfiniBand switch and two DDN storage controllers with degraded BBUs (Battery Backup Unit) 
  • [datacenter] Upgrade storage controller firmware for DDN appliances to SFA 12.4.0. 

WHY IS IT HAPPENING?  

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.  

WHO IS AFFECTED?  

All users across all PACE clusters.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.