Posts

[Update] [storage] Phoenix Project storage degraded performance

[Updated March 31, 2025 at 4:14pm]

Dear Phoenix researchers,

As the Phoenix project storage system has stabilized, we have restored login access via ssh and resumed starting jobs.

The cost of jobs that ran during the performance degradation will not count towards March usage.

The Phoenix OnDemand portal can again be used to access project and scratch space. Any user still receiving a “Proxy Error” should contact pace-support@oit.gatech.edu for an individual reset of their OnDemand session.

Globus file transfers have resumed. We have determined that transfers to/from home, scratch, and CEDAR storage were inadvertently paused, and we apologize for any confusion. Any paused transfer should have automatically resumed.

The PACE team continues to monitor the storage system for any further issues. We are working with the vendor to identify the root cause and prevent future performance degradation.

Please contact us at pace-support@oit.gatech.edu with any questions. We appreciate your patience during this unexpected outage.

Best,

The PACE Team

[Updated March 31, 2025 at 12:41pm]

Dear Phoenix Users,

To expedite troubleshooting and limit the impact of the current Phoenix project filesystem issues on currently running jobs, we have implemented the following changes:

New Logins to Phoenix Login Nodes are Paused

We have paused new login attempts to the Phoenix login nodes. Users who are currently logged in will be able to stay logged in to the system.

Phoenix Jobs Prevented from Starting

Jobs in the queue that have not yet started are being held so that they do not start during the incident. These submitted jobs will remain in the queue.

Jobs that are currently running may experience decreased performance if using project storage. We are doing our best to prioritize the successful completion of these jobs.
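
If you would like to confirm the state of your own jobs, standard Slurm commands will show which are held and which are still running (a minimal sketch; squeue is part of the Slurm installation on the cluster):

    squeue -u $USER -t PENDING   # submitted jobs currently held in the queue
    squeue -u $USER -t RUNNING   # jobs that are still executing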

Open OnDemand (OOD)

Users of Phoenix OOD can log in and interact with only their home directory. Project and scratch space are not available.

Some users of Open OnDemand may be unable to reach this service and are experiencing “Proxy Error” messages. We are investigating the root cause of this issue.

Globus File Transfer Paused for Project Space

File transfers to/from project storage on Globus have been paused. Other Globus transfers (Box, DropBox, and OneDrive cloud connectors; scratch; home; and CEDAR) will continue.
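
If you want to check whether one of your own transfers is affected, the Globus CLI can list your recent tasks and their status (a sketch assuming the globus-cli package is installed and you are logged in):

    globus task list            # recent transfers and their current status
    globus task show TASK_ID    # details for one transfer; TASK_ID is a placeholder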

The PACE team is working to diagnose the current issues with support from our filesystem vendor. We will continue to share updates as we have them and apologize for this unexpected service outage.

Best,

The PACE Team

[storage] Phoenix Project storage degraded performance

We are currently experiencing degraded performance on Phoenix Project storage. We are investigating with the vendor and will provide updates as we learn more.

Summary: Performance of Phoenix project storage is currently degraded.

Details: Two of our metadata servers (MDS) rebooted early Monday morning, March 31, and load averages are unusually high on one of them.

Impact: Researchers may experience significant slowness in read & write performance on Phoenix project storage until we are able to mitigate the issue. Conda environments located in project storage may be very slow to load (even if the python script to run is located elsewhere) or fail to activate, while attempts to view project storage files via the OnDemand web portal may time out.

Phoenix storage performance degraded

[Update 3/21/25 12:30 PM]

Following the completion of the rebuild and copyback processes on the impacted redundant storage pool, Phoenix project storage performance has returned to normal. Please contact pace-support@oit.gatech.edu if you encounter any further issues.

[Original post 3/19/25 5:00 PM]

Summary: Performance of Phoenix project storage is currently degraded.

Details: Multiple redundant disks failed yesterday and today, and storage is slowed while the redundant pool rebuilds.

Impact: Researchers may experience significant slowness in read & write performance on Phoenix project storage until the process is complete. Conda environments located in project storage may be very slow to load (even if the python script to run is located elsewhere) or fail to activate, while attempts to view project storage files via the OnDemand web portal may time out.

Please visit https://status.gatech.edu for updates and contact pace-support@oit.gatech.edu with any questions.

[Advance Notice] Planned Spring-Break (March 17-21st) Downtime

[Update 3/6/25]

Summary: All PACE compute nodes will be unavailable from 4:00 PM on Friday, March 14, through Tuesday, March 18, to repair a water leak in the Coda Datacenter Research Hall cooling system. Access to login nodes and data will remain available.  

Details: Due to a water leak discovered last month, a pump seal in the cooling system will be replaced at the start of Spring Break. A full replacement of the pump is planned for the May maintenance period, which will be extended by one day and is now planned for May 6-9, 2025, once all parts are available. Non-compute nodes in the Enterprise Hall will not be impacted by the Spring Break repair.

Impact: During the outage, it will not be possible to run any compute jobs on any PACE cluster (Phoenix, Hive, ICE, Firebird, Buzzard). Login nodes and storage systems will remain available. A reservation has been placed on all schedulers to prevent any jobs from starting if their walltime request extends past 4:00 PM on March 14; these jobs will be held until maintenance is complete.  
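
As an illustration of how the reservation interacts with walltime requests, a job can still start before the outage if its requested walltime ends before the 4:00 PM cutoff on March 14 (the times below are hypothetical):

    # Submitted at 8:00 AM on Thursday, March 13, this 24-hour request finishes
    # by 8:00 AM Friday, before the 4:00 PM cutoff, so the reservation does not hold it:
    sbatch -t 24:00:00 myjob.sbatch   # myjob.sbatch is a placeholder script
    # A 72-hour request submitted at the same time would extend past the cutoff
    # and would be held until maintenance is complete.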

Thank you for your patience as we work to restore full functionality of the cooling system. You can read this message on our blog.  

Best, 

-The PACE Team 

[Original Post 2/19/25]

Summary: A water leak has occurred in the CODA Datacenter Research Hall cooling system due to the failure of a pump seal. As a result, we are planning a two- to three-day outage (pending confirmation from Databank and the mechanical contractor) during the week of March 17th-21st, which we hope will have less impact due to Spring Break. Access to login nodes and data will remain available, as these live in a different part of the datacenter. No compute services (Phoenix, Hive, ICE, Firebird, or Buzzard compute nodes) will be available. We will follow up once the exact days of the outage are finalized. 

Details: A pump seal in the CODA research hall cooling system failed on Feb 16th. The leak is not currently impacting operation of any PACE resources. Databank is working on a full pump replacement (“flange-to-flange”) plan to address the issue. Databank is actively sourcing the pump and associated parts and coordinating with a new mechanical contractor. We currently target the pump replacement to occur during Spring Break (March 17 – 21); however, this target date could change based on supply chain constraints. The mechanical work is estimated to take one to two days (depending on whether additional damage or issues are identified during the pump replacement). Upon completion of the work, the PACE team will need one business day to conduct all necessary testing on the ~2,000 systems and release the five clusters currently hosted in the Research Hall (Phoenix, Hive, ICE/AI Makerspace, Firebird, and OSG Buzzard).  

Being able to perform the work during Spring Break represents a best-case scenario. Databank is actively monitoring the leak and the overall health of the cooling system. Should the situation deteriorate quickly or a catastrophic failure occur, Databank will coordinate emergency repair work to replace the pump seal itself using available on-site spare parts. Under this scenario, a complete pump replacement would be coordinated during the planned PACE Maintenance period in May.  
 
We are striving to keep the shutdown as short as possible. A reservation has been placed on the cluster to prevent jobs from being cancelled by the shutdown; as a result, some jobs will be held until the outage is over. 

Thank you for your patience as we work to recover from this situation.

Best, 

-The PACE Team 

[Notice] Phoenix Scheduler Account Issue

Following the Slurm upgrade during the January 2025 maintenance window, the monthly usage reset did not execute as scheduled on February 1. Consequently, reported balances were lower than expected, as the reported usage still included January utilization. Having identified the issue, the PACE team manually reset usage across all accounts at 12:00 PM on February 4. Additionally, the vendor has been notified of the bug to provide a patch before the next monthly cycle.  
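
If you would like to verify your reported usage after the reset, standard Slurm accounting tools can summarize utilization for the current month (a sketch; the account name is a placeholder for your own charge account):

    # Per-user utilization since the February reset, reported in hours:
    sreport cluster AccountUtilizationByUser start=2025-02-01 end=now accounts=gts-example1 -t hours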

Any jobs that ran from the beginning of February until now will not count towards February usage, and any overages of pre-set limits will be refunded.  

We sincerely apologize for the inconvenience.  

Thank you and have a great day! 

PACE team 

[Complete] PACE Maintenance Period – January 13th-16th 2025

WHEN IS IT HAPPENING?  

PACE’s next Maintenance Period starts at 6:00 AM on Monday, January 13th, 2025, and is tentatively scheduled to conclude by 11:59 PM on Thursday, January 16th, 2025. The additional day is needed to accommodate the extra testing required by the presence of both RHEL7 and RHEL9 versions of our systems as we migrate to the new operating system. PACE will release each cluster (Phoenix, Hive, Firebird, ICE, and Buzzard) as soon as maintenance work and testing are completed. We will prioritize ICE to support Spring courses as soon as possible; for the other clusters, we plan to focus on the largest portion of each system first (for Phoenix and Firebird, where both OSs are present) to restore access to data and compute capabilities. 

 
WHAT DO YOU NEED TO DO?   

As usual, jobs whose resource requests would overlap the Maintenance Period will be held by the scheduler until after the maintenance. During this Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, ICE, and Buzzard. Please plan accordingly for the projected downtime. CEDAR storage will not be affected. 

WHAT IS HAPPENING?   

ITEMS REQUIRING USER ACTION: 

  • [Phoenix] Continue migrating nodes to the RHEL 9 operating system; this migration will complete after the maintenance period, at which point Phoenix will be 75% on the RHEL9 OS. 
  • [Hive] COMPLETE the migration of nodes to the RHEL 9 operating system. 
  • [Phoenix and Hive] Default login behavior will change so that login-phoenix and login-hive will point to RHEL 9 login nodes rather than RHEL 7 nodes, which WILL trigger SSH host-key warnings; see the example below this list. For more information on SSH at PACE, see our documentation.
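
If you see a host-key warning after the change, removing the stale key allows SSH to accept the new RHEL 9 key on your next connection (a minimal sketch; the hostname is illustrative, so use the login host you actually connect to):

    # Remove the old RHEL 7 host key from your known_hosts file:
    ssh-keygen -R login-phoenix.pace.gatech.edu
    # Reconnect and accept the new host key when prompted:
    ssh username@login-phoenix.pace.gatech.edu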
      

ITEMS NOT REQUIRING USER ACTION: 

  • [Phoenix, Hive, Firebird, ICE] Upgrade Slurm to 24.11.10 
  • [all] DataBank will perform cooling tower cleaning requiring all machines in the research hall to be powered off 
  • [all] Upgrade border firewall hardware  
  • [Phoenix,ICE] Upgrade IB (InfiniBand) switch firmware 
  • [Phoenix,Hive] Move Globus endpoints to new network to improve performance 
  • [ICE] Enable self-service container builds 
  • [Phoenix] Upgrade all storage servers to latest version to support performance improvements, covering scratch and project (coda1) storage 
  • [Firebird] Upgrades to underlying storage servers to improve functionality 

WHY IS IT HAPPENING?  

Regular maintenance periods are necessary to reduce unplanned downtime and maintain a secure and stable system.  

WHO IS AFFECTED?  

All users across all PACE clusters.  

WHO SHOULD YOU CONTACT FOR QUESTIONS?   

Please contact PACE at pace-support@oit.gatech.edu with questions or concerns.

Thank you,  

-The PACE Team 

[Resolved] Phoenix login nodes outage on Dec 5, 2024

On the morning of December 5, 2024, the RHEL9 login nodes of the Phoenix cluster became unresponsive. The problems started at 4:37 AM, when one login node (out of two) had a memory problem; at 6:27 AM, it crashed. The other login node crashed at 9:37 AM, rendering the RHEL9 environment on Phoenix inaccessible. Both login nodes were restarted at 11:30 AM, which resolved the issue. The jobs that crashed between 4:37 and 11:30 AM have been refunded.

Message about Storage Performance, Reliability, and Future Plans for Phoenix

Executive summary 

PACE recognizes that the increasing frequency of performance issues on the storage system is causing disruptions to your research on the Phoenix cluster. We are striving to mitigate the impact of these events while taking proactive measures to improve the reliability of our systems for the future. To this end, we are introducing new storage technology, and prioritizing migration of data from the existing project storage to the new system over the next year. We are currently working towards finalizing a seamless migration plan. Once the plan is ready, around late spring, we will follow up with detailed information regarding the timeline and any potential workflow impacts. Our goal is to minimize disruption and ensure that everyone is aligned on key milestones.

There will be no changes to the unit price until the new system is fully implemented, data migration is complete, the existing system is reconfigured, and we have sufficient data usage metrics to determine any necessary price adjustments. We estimate this will be no earlier than the end of 2025. We believe this to be the fastest path towards a stable and effective storage solution that can cater to the varied storage needs of our user community. Please find more details below. 

The Phoenix cluster at PACE hosts two major storage systems – scratch and project. Scratch is the temporary file system for storing files used during job execution; it is cleaned up once a month, with files older than 60 days flagged for deletion. Project is a long-term file system holding 2.3 PB of data and about 1.8 billion files. Aside from environmental factors such as chilled water outages, these two storage systems have been the most significant contributors to downtime and degraded performance of the Phoenix cluster. Based on our analysis, storage failure or severe degradation accounts for 47% of unplanned downtime so far in calendar year 2024. Addressing this concern has been our primary focus over the past six months. This message shares our progress and our plans for the next 12 months.  

Scratch Space 

The scratch space is supported by a DDN 400NVX2 unit, which includes a mix of flash drives (NVMe) and spinning disks. The system was taken offline during August maintenance for a major software upgrade performed by the vendor to improve its stability. Furthermore, software issues and instability on this unit have required us to turn off the use of flash drives for hot pools, which has impacted performance. To address this, a second software upgrade is planned for January 2025 during our scheduled maintenance period, at which point we will configure the flash space to use Progressive File Layout (PFL), keeping small files in flash and progressively increasing the number of stripes across all devices for improved performance. 
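
For illustration, a PFL layout of this kind can be expressed with Lustre's lfs setstripe command; the extent boundaries and stripe counts below are hypothetical, not the values PACE will deploy:

    # First 1 MiB of each file on a single stripe (small files stay on flash);
    # the next tier striped wider; everything beyond striped across all OSTs (-c -1):
    lfs setstripe -E 1M -c 1 -E 64M -c 4 -E -1 -c -1 /path/to/directory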

Project Space 

Project space is supported by a DDN18K system purchased in 2020. This unit primarily uses spinning disks and is nearing its end of life. Over the past two years, we have experienced an increase in issues related to software defects and disk failures. While the system has built-in redundancies to tolerate multiple disk failures, the disk rebuild process, coupled with an increasing number of disk-intensive research jobs on Phoenix, is negatively impacting performance. 

We have adopted a two-pronged strategy to address the project storage:  

  1. Perform software and limited hardware upgrades. During the August maintenance period, the hardware subsystem supporting the metadata functionality of the Lustre filesystem was replaced with a new dedicated unit. Due to its complexity, this operation required an additional day of work. The objectives of this upgrade were to a) improve the performance of the metadata functionality and b) provide an upgrade path for future software releases. During the January 2025 maintenance period, the vendor will perform a major software upgrade to standardize on Lustre 12.4 across all the storage appliances. This will increase the stability and performance of the project storage while simplifying its management.  
  2. Consult with other research computing sites (including the Texas Advanced Computing Center) to invest in a new storage system to replace or complement our DDN 18K unit. 

Based on feedback from other research computing sites, we made an initial investment in an all-flash storage system from VAST Data. As an all-flash system with a disaggregated architecture, it should deliver significantly better uptime and performance; in particular, the VAST system does not require downtime for major code upgrades. The vendor has installed the system, and we are in the process of bringing the unit into production to host data, initially in support of improvements to the DDN18K.  

Our plan over the next 12 months includes: 

  1. Migrate all data from the current DDN18K project space to the new VAST storage space to support these improvements. Because storage outages and performance issues have affected our user community, we want to clarify the following for the migration process over the next 12 months, or as long as necessary to complete the DDN18K improvements: 
  • All storage credit balances, including the free tier, will be equivalent to the existing system. 
  • The unit rate of $5.67 per TB per month for the paid tier will remain the same for both the current project space and the new VAST storage system. 
  2. Work with our vendor to reconfigure the DDN18K appliance to efficiently use all available SSD/NVMe drive space and recreate storage pools to remove performance bottlenecks during disk failure. This will require a complete reformatting of the storage space. 
  3. Gather metrics on storage efficiencies (e.g., capacity reduction after data de-duplication and compression) and operational efficiencies so we can more accurately calculate the rate for VAST storage. 
  4. Leverage the VAST Data analytics to help users archive older data to lower-cost storage options such as CEDAR.  
  5. Publish comparative analysis of functionality, performance, and resiliency across the different storage services provided by PACE to help users decide which storage service(s) to use based on their type of research and data. 

In the long term, we expect the new VAST Data storage system to be offered as a separate service with its own storage rate. This rate will likely be higher than the current $5.67 per TB per month for the DDN18K system. However, we cannot determine the final rate until we better understand how our data usage and efficiencies are managed by the new system. 

Storage credits that have already been purchased, or are purchased during this transition, will retain their value on the DDN18K system (in terabyte months). Alternatively, they can be converted to storage credits on the VAST system at a ratio to be determined once the transition is complete. At that point, users will have the option to stay on VAST or move back to the reconfigured DDN18K, or choose a different, potentially cheaper storage option. 

Our goal is to enhance the reliability of the storage system in PACE while introducing new technologies to meet the diverse needs and budgets of the Georgia Tech research community. We aim to develop migration strategies that minimize disruptions to your workflows and to offer more storage options tailored to your requirements. To this end, we are cross-evaluating bulk versus individual group migrations, and we will be engaging with different research groups as necessary. We are committed to providing regular updates during this process. 

Thank you for your understanding and support during this project. If you have any questions or concerns, please feel free to reach out to us at pace-support@oit.gatech.edu. 

[Resolved] Firebird ASDL Outage

On Oct 30, 2024, at 9:20 PM, a drive failed on the Firebird ASDL servers (in the ZFS pool dedicated to the ASDL project). The ASDL login nodes were taken offline. Several jobs failed, and no new jobs were accepted after 10:09 AM on Oct 31. The NFS server was restarted and tested, and the ASDL nodes were back online at 12:38 PM on Oct 31.

New GPUs for Phoenix, V100s being Replaced 

[Additional Message 11/7/24]

As we prepare to remove 12 of the V100 servers from Phoenix next week in preparation for the arrival of new GPU nodes in December, we would like to inform you of another set of new GPUs available on the cluster through the embers backfill QOS.

There are 8 nodes, each with 8 L40S GPUs, providing 64 GPUs in the Phoenix RHEL9 environment; these have been available exclusively on embers (due to the ownership of this equipment) since late September.

Visit our Phoenix Slurm guide on GPU requests to learn how to request them. Be sure to include a request for the embers QOS when requesting L40S architecture, at least until the additional L40S nodes for general use become available in December on inferno. You must make the request from the RHEL9 environment. Access via Phoenix OnDemand is not yet available.
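
As a sketch of such a request (the GPU type string and resource amounts below are illustrative; confirm exact names in the Phoenix Slurm guide):

    #!/bin/bash
    #SBATCH -J l40s-test             # placeholder job name
    #SBATCH -q embers                # embers QOS is required for L40S for now
    #SBATCH --gres=gpu:L40S:1        # GPU type string is an assumption; check the guide
    #SBATCH -N 1
    #SBATCH -t 1:00:00
    nvidia-smi                       # confirm the allocated GPU

Remember to submit from the RHEL9 environment, as noted above.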

Please contact pace-support@oit.gatech.edu with any questions.

[Original Post 10/31/24]

We’re happy to announce that there will be 6 new H200 machines coming to Phoenix for general usage, with 8x NVIDIA H200 GPUs each, along with 2x L40S machines, each with 8x NVIDIA L40S GPUs. These will be available on the RHEL 9 operating system on Phoenix, which is required to support the new hardware. 

12 of the existing V100 servers will be REMOVED from the Phoenix RHEL7 environment to make room for the new L40S hardware, as they have reached end-of-life for vendor support. The overall impact will be to greatly increase both the number and power of GPUs available on Phoenix: 24 V100 GPUs will be replaced with 16 L40S and 48 H200 GPUs. 
 
This change will begin on Nov. 11th, when the V100 machines will be removed and we will begin installing the new servers, which we hope to release by December 6th.
 
The new machines will be available via both the Inferno QoS and Embers on RHEL9. Jobs using the new H200 machines will be charged at a rate of $0.673 per GPU Hour ($1.4571 for GTRI), matching the current H100 rate. The rate for the new L40S GPUs will be shared prior to their release, as we’re working through approvals.