Headnode Violation Detector Updates

Running many resource-intensive processes, or long-running ones, on the login nodes slows those nodes for all users and violates PACE policy, as it prevents others from using the cluster. We would like to make you aware of recent improvements to our headnode violation detector.

PACE may stop processes that improperly occupy the headnode, in order to restore functionality for all members of our user community. Please use compute nodes for all computational work. If you need an interactive environment, please submit an interactive job. If you are uncertain about how to use the scheduler to work on compute nodes, please contact us for assistance. We are happy to help you with your workflows on the cluster.
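For example, an interactive session on a compute node can be requested with qsub from a headnode. The queue name and resource limits below are placeholders, so please substitute the values appropriate for your cluster and allocation:

    # Request a 2-hour interactive session on one compute node
    # (queue name and resource limits are illustrative only)
    qsub -I -q hive -l nodes=1:ppn=4 -l walltime=02:00:00

Once the job starts, you will be placed in a shell on a compute node, where resource-intensive work can run without affecting other users on the headnode.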

If you run processes that overuse the headnode, we will send an email asking you to refrain from doing so. We have recently updated our violation detector to ensure that emails are sent to the proper user and to adjust the logic of the script to align it with policy.

Thank you for your efforts to ensure PACE clusters are an available resource for all.

Reboot on login-hive1 on Tuesday, December 21, at 10:00 AM

Summary: Reboot on login-hive1 on Tuesday, December 21, at 10:00 AM

What’s happening and what are we doing: As part of our preparations for the RHEL7.9 testflight environment that will be available in January, PACE will reboot the login-hive1 headnode on Tuesday, December 21, at 10:00 AM. Hive has two headnodes, and the login-hive2 headnode will not be impacted. The load balancer that automatically routes new user login-hive connections to either login-hive1 or login-hive2 has been adjusted to send all new connections to login-hive2 beginning the afternoon of December 15.

How does this impact me: If you are connected to login-hive1 at the time of the reboot, you will lose your connection to Hive, and any processes running on login-hive1 will be terminated. Running interactive jobs submitted from login-hive1 will also be disrupted. Batch jobs will not be affected. Users connected to login-hive2 will not be impacted. Users who connected to Hive prior to Wednesday afternoon may be on login-hive1 and should complete their current work or log out and back in to Hive before Tuesday. Users who ssh to login-hive.pace.gatech.edu beginning this afternoon will all be assigned to login-hive2 and will not be impacted. If you specifically ssh to login-hive1.pace.gatech.edu, then you will still reach the node that is scheduled to be rebooted and should complete your session before next Tuesday.
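If you are unsure which headnode your current session is on, the hostname command will tell you, and new connections can be made through the load-balanced address (replace the username placeholder with your own account):

    # Check which Hive headnode this session landed on
    hostname

    # New sessions through the load-balanced address will be routed to login-hive2
    ssh your-username@login-hive.pace.gatech.edu

    # Connecting directly to login-hive1 reaches the node scheduled for reboot
    ssh your-username@login-hive1.pace.gatech.edu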

What we will continue to do: PACE will monitor the Hive headnodes and ensure that login-hive1 is fully functional after reboot before re-initiating the load balancer that distributes user logins between the two headnodes.
Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

[Complete] Hive Project & Scratch Storage Cable Replacement

[Update 11/12/21 11:30 AM]

The pool rebuilding has completed on Hive GPFS storage, and normal performance has returned.

[Update 11/10/21 11:30 AM]

The cable replacement has been completed without interruption to the storage system. Rebuilding of the pools is now in progress.

[Original Post 11/9/21 5:00 PM]

Summary: Hive project & scratch storage cable replacement: potential outage and temporarily decreased performance afterward

What’s happening and what are we doing: A cable connecting one enclosure of the Hive GPFS device, which hosts project (data) and scratch storage, to one of its controllers has failed and must be replaced. The replacement will begin around 10:00 AM tomorrow (Wednesday). After the replacement, pools will need to rebuild over the course of about a day.

How does this impact me: Since there is a redundant controller, there should not be an outage during the cable replacement. However, a similar replacement in the past caused storage to become unavailable, so an outage remains a possibility. If this happens, your job may fail or run without making progress. If you have such a job, please cancel it and resubmit it once storage availability is restored.
In addition, performance will be slower than usual for a day following the repair as pools rebuild. Jobs may progress more slowly than normal. If your job runs out of wall time and is cancelled by the scheduler, please resubmit it to run again.
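If you need to cancel and resubmit an affected job, the standard Torque commands apply; the job ID and script name below are placeholders:

    # List your queued and running jobs
    qstat -u $USER

    # Cancel a job that has failed to progress or run out of wall time
    qdel 123456

    # Resubmit once storage performance has recovered
    qsub my_job.pbs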

What we will continue to do: PACE will monitor Hive GPFS storage throughout this procedure. If a loss of availability occurs, we will update you.

Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

TensorFlow update required due to identified security vulnerability

Summary: TensorFlow update required due to identified security vulnerability

What’s happening and what are we doing: A security vulnerability was discovered in TensorFlow. PACE has installed the patched version 2.6.0 of TensorFlow in our software repository, and we will retire the older versions on November 3, 2021, during our maintenance period.

How does this impact me: Both researchers who use PACE’s TensorFlow installation and those who have installed their own are impacted.

The following PACE installations will be retired:

Modules: tensorflow-gpu/2.0.0 and tensorflow-gpu/2.2.0

Virtual envs under anaconda3/2020.02: pace-tensorflow-gpu-2.2.0 and pace-tensorflow-2.2.0

Please use the tensorflow-gpu/2.6.0 module instead of the older versions identified above. If you were previously using a PACE-provided virtual env inside the anaconda3 module, please use the separate new module instead. You can find more information about using PACE’s TensorFlow installation in our documentation. You will need to update your PBS scripts to call the new module, and you may need to update your Python code to ensure compatibility with the latest version of the package.
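As a sketch of the change needed, the module load line in your PBS script should reference the new version; the surrounding directives and script name here are illustrative only, not a prescribed template:

    #PBS -N tensorflow-example
    #PBS -l nodes=1:ppn=4:gpus=1
    #PBS -l walltime=01:00:00

    cd $PBS_O_WORKDIR

    # Replace any "module load tensorflow-gpu/2.0.0" or "tensorflow-gpu/2.2.0" line with:
    module load tensorflow-gpu/2.6.0

    python my_training_script.py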

If you have created your own conda environment on PACE and installed TensorFlow in it, please create a new virtual environment and install the necessary packages. You can build this environment from the tensorflow-gpu/2.6.0 virtual environment as a base if you would like, then install other packages you need, as described in our documentation. In order to protect Georgia Tech’s cybersecurity, please discontinue use of any older environments running prior versions of TensorFlow on PACE.
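A minimal sketch of rebuilding a personal environment with the patched version, assuming the Anaconda module noted above (the environment name and Python version are placeholders):

    # Load the Anaconda module that provides conda
    module load anaconda3/2020.02

    # Create and activate a fresh environment
    # (older conda setups may require "source activate" instead)
    conda create -n my-tf-2.6 python=3.8 -y
    conda activate my-tf-2.6

    # Install the patched TensorFlow release, then any other packages you need
    pip install tensorflow==2.6.0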

What we will continue to do: We are happy to assist researchers with the transition to the new version of TensorFlow. PACE will offer support to researchers upgrading TensorFlow at our upcoming consulting sessions. The next sessions are Thursday, October 28, 10:30-12:15, and Tuesday, November 2, 2:00-3:45. Visit our training page for the full schedule and BlueJeans links.

Thank you for your prompt attention to this security update, and please accept our sincere apology for any inconvenience that this may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Hive scheduler recurring outages

[Update 11/5/21 3:15 PM]

During the November maintenance period, PACE separated Torque and Moab, the two components of the Hive scheduler. This two-server setup, mirroring the Phoenix scheduler arrangement, should improve stability of the Hive scheduler under heavy utilization. We will continue to monitor the Hive scheduler. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

[Update 10/15/21 5:15 PM]

The Hive scheduler is functioning at this time. The PACE team disabled several system utilities that may have contributed to earlier issues with the scheduler. We will continue to monitor the scheduler status and to work with our support vendor to improve stability of Hive’s scheduler. Please check this blog post for updates.

[Update 10/15/21 4:15 PM]

The Hive scheduler is again functional. The PACE team and our vendor are continuing our investigation in order to restore stability to the scheduler.

[Original Post 10/15/21 12:35 PM]

Summary: Hive scheduler recurring outages

What’s happening and what are we doing: The Hive scheduler has been experiencing intermittent outages over the past few weeks, requiring frequent restarts. At this time, the PACE team is running a diagnostic utility and will restart the scheduler shortly. We are actively investigating the outages in coordination with our scheduler vendor to restore stability to Hive’s scheduler.

How does this impact me: Hive researchers may be unable to submit or check the status of jobs, and jobs may be unable to start. You may find that the “qsub” and “qstat” commands and/or the “showq” command are not responsive. Already-running jobs will continue.

What we will continue to do: PACE will continue working to restore functionality to the Hive scheduler and coordinating with our support vendor. We will provide updates on our blog, so please check here for current status.

Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Hive Project & Scratch Storage Battery Replacement

[Update 9/23/21 3:15 PM]

The replacement batteries have reached a sufficient charge, and Hive GPFS performance has been restored. Thank you for your patience during this maintenance.

[Original Post 9/23/21 12:30 PM]

Summary: Battery replacement on Hive project & scratch storage will impact performance today.
What’s happening and what are we doing: UPS batteries on the Hive GPFS storage device, holding project (data) and scratch storage, need to be replaced. During the replacement, which will begin shortly this afternoon, storage will shift to write-through mode, and performance will be impacted. Once the new batteries are sufficiently charged, performance will return to normal.
How does this impact me: Hive project and scratch performance will be impacted until the fresh batteries have sufficiently charged, which should take approximately 3 hours. Jobs may progress more slowly than normal. If your job runs out of wall time and is cancelled by the scheduler, please resubmit it to run again.
What we will continue to do: PACE will monitor Hive GPFS storage throughout this procedure.
Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Globus maintenance downtime on September 18

Summary: Globus maintenance downtime on September 18
What’s happening and what are we doing: Globus will be undergoing maintenance worldwide on September 18, beginning at 11:00 AM and expected to last for up to 30 minutes, to complete database upgrades. Details are available on the Globus website.
How does this impact me: You will not be able to access Globus or start a transfer during this time. Any transfers in progress will be paused and will automatically resume upon completion of maintenance. This affects all Globus services, including endpoints at PACE on our Phoenix and Hive clusters, plus others you may use at other computing sites.
If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

[Resolved] Hive scheduler outage

[Update 4:40 PM 7/23/21]

After continued investigation, cleaning up the scheduler logs, and rebooting the scheduler node, we have restored the Hive scheduler to full functionality. Jobs that have been submitted and queued are now running, and there was no interruption to running jobs. New jobs submitted at this time should start as space becomes available, as usual. Thank you for your patience as we investigated this situation.

Please contact us at pace-support@oit.gatech.edu with any questions.

[Original Message 1:35 PM 7/23/21]

The Hive scheduler has been experiencing intermittent outages over the last few days while under heavy load, and jobs have been unable to start for nearly all of today (Friday). You may find that jobs you have submitted to Hive remain queued and do not start. We are actively working to restore functionality and will update you as more information becomes available. Thank you for your patience as we investigate this situation.
Please contact us at pace-support@oit.gatech.edu with any questions.

Phoenix storage issue

[Update 2:05 PM 7/22/21]

The controller reboot is complete, and we believe no disruption occurred in access to Phoenix storage. Please contact us at pace-support@oit.gatech.edu with any questions.

[Original Message 12:55 PM 7/22/21]

In coordination with our support vendor, we are working to resolve an issue with a Phoenix Lustre metadata controller, which supports both project and scratch storage.
At 1:30 PM today, we will reboot one of the controllers. We do not expect any impact to users, as the other controller is running without error at this time. Should there be any unexpected impact, we will work to restore full functionality as quickly as possible. We will provide an update when this work is complete.
Please contact us at pace-support@oit.gatech.edu with any questions.

Hive Project Storage Quota Update

In coordination with the Hive PIs, PACE has updated our quota policies for project storage on Hive, in order to facilitate easier access to available storage capacity for our users. For project storage, accessed via the “data” symbolic link in your home directory, block quotas are now shared by an entire research group, rather than being set at the user level. All users in a single PI’s storage allocation have access to the entire quota, which brings Hive in line with Phoenix’s quota arrangement. Most research groups have 50 TB of project storage on Hive, with the exception of those specifically provided with a higher allocation in the NSF grant funding the cluster. Each user maintains a limit of 2 million files within their research group’s project storage.

You can review your storage usage on Hive by running the updated “pace-quota” command on any Hive node. Quotas for home (5 GB per user) and scratch (7 TB per user) directories are unchanged. Please visit our documentation for more details about Hive storage.
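For example, from any Hive node (the exact fields reported by pace-quota are described in our documentation):

    # Project storage is reached through the "data" symbolic link in your home directory
    cd ~/data

    # Review your research group's shared block usage and your per-user file count
    pace-quota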

Please contact us at pace-support@oit.gatech.edu with any questions about using Hive.