Posts

Join us today for GT’s Virtual ARC Symposium & Poster Session @ SC21, Wednesday (11/17), 6:00pm – 8:00pm

This is a friendly reminder that the ARC Symposium and Poster Session is today from 6:00pm – 8:00pm (EST). Join us for this exciting virtual event, which will feature invited talks plus more than 20 poster presenters who will highlight GT’s efforts in research computing. Relax for the evening and engage with our community and guests; a number are joining from outside GT, including Microsoft, AMD, Columbia, and UCAR, to name a few. We hope you can join us.

Links to Join the Event:

To join the ARC Symposium invited talks session (6:00 – 7:00pm EST), please use the BlueJeans link below: https://primetime.bluejeans.com/a2m/live-event/jxzvgwub

To join the ARC Symposium poster session (7:00pm – 8:15pm EST), use the following link:
https://gtsc21.event.gatherly.io/

ARC Symposium Agenda:

5:45 PM EST – Floor Opens

6:00 PM EST – Opening Remarks and Welcome 

Prof. Srinivas Aluru, Executive Director of IDEaS

6:05 PM EST –  “Exploring the Cosmic Graveyard with LIGO and Advanced Research Computing”

Prof. Laura Cadonati, Associate Dean for Research, College of Sciences

6:25 PM EST – “Life after Moore’s Law: HPC is Dead, Long Live HPC!”

Prof. Rich Vuduc, Director of CRNCH

6:45 PM EST –  “PACE Update on Advanced Research Computing at Georgia Tech”

Pam Buffington, Interim Associate Director of Research Cyberinfrastructure, PACE/OIT, and Director of Faculty & External Engagement, Center for 21st Century Universities

7:00 PM EST – Poster Session Opens (more than 20 poster presenters!!)

8:15 PM EST – Event Closes

[Complete] Hive Project & Scratch Storage Cable Replacement

[Update 11/12/21 11:30 AM]

The pool rebuilding has completed on Hive GPFS storage, and normal performance has returned.

[Update 11/10/21 11:30 AM]

The cable replacement has been completed without interruption to the storage system. Rebuilding of the pools is now in progress.

[Original Post 11/9/21 5:00 PM]

Summary: Potential outage and subsequent temporary decrease in performance due to a cable replacement on Hive project & scratch storage

What’s happening and what are we doing: A cable connecting one enclosure of the Hive GPFS device, hosting project (data) and scratch storage, to one of its controllers has failed and needs to be replaced, beginning around 10:00 AM tomorrow (Wednesday). After the replacement, pools will need to rebuild over the course of about a day.

How does this impact me: Since there is a redundant controller, there should not be an outage during the cable replacement. However, a similar previous replacement caused storage to become unavailable, so this is a possibility. If this happens, your job may fail or run without making progress. If you have such a job, please cancel it and resubmit it once storage availability is restored.
In addition, performance will be slower than usual for a day following the repair as pools rebuild. Jobs may progress more slowly than normal. If your job runs out of wall time and is cancelled by the scheduler, please resubmit it to run again.
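As an illustration, a stalled or failed job can be cancelled from a login node and resubmitted once storage is available again; the job ID and script name below are placeholders:

    # List your jobs and their current state (Torque/Moab)
    qstat -u $USER

    # Cancel a job that has stalled or failed (replace 1234567 with your job ID)
    qdel 1234567

    # Resubmit the same job script once storage availability is restored
    qsub my_job.pbs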

What we will continue to do: PACE will monitor Hive GPFS storage throughout this procedure. In the event that a loss of availability occurs, we will update you.

Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

[Complete – PACE Maintenance Period: November 3 – 5, 2021] PACE Clusters Ready for Research!

Dear PACE researchers,

Our scheduled maintenance has completed ahead of schedule! All PACE clusters, including Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard, are ready for research. As usual, we have released all user jobs that were held by the scheduler. We appreciate everyone’s patience as we worked through these maintenance activities.

Our next maintenance period is tentatively scheduled to begin at 6:00AM on Wednesday, February 9, 2022, and conclude by 11:59PM on Friday, February 11, 2022. We have also tentatively scheduled the remaining maintenance periods for 2022 for May 11-13, August 10-12, and November 2-4.

The following tasks were part of this maintenance period:

ITEMS REQUIRING USER ACTION:

  • [Complete] TensorFlow upgrade due to security vulnerability. PACE will retire older versions of TensorFlow, and researchers should shift to using the new module. We also request that you replace any self-installed TensorFlow packages. Additional details are available on our blog.

ITEMS NOT REQUIRING USER ACTION:

  • [Complete][Datacenter] Databank will clean the water cooling tower, requiring that all PACE compute nodes be powered off.
  • [Complete][System] Operating system patch installs
  • [Complete][Storage/Phoenix] Lustre controller firmware and other upgrades
  • [Complete][Storage/Phoenix] Lustre scratch upgrade and expansion
  • [Postponed][Storage] Hive GPFS storage upgrade
  • [Complete][System] System configuration management updates
  • [Complete][System] Updates to NVIDIA drivers and libraries
  • [Complete][System] Upgrade some PACE infrastructure nodes to RHEL 7.9
  • [Complete][System] Reorder group file
  • [Complete][Headnode/ICE] Configure c-group controls on COC-ICE and PACE-ICE headnodes
  • [Complete][Scheduler/Hive] Separate Torque & Moab servers to improve scheduler reliability
  • [Complete][Network] Update Ethernet switch firmware
  • [Complete][Network] Update IP addresses of switches in BCDC

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu. You may read this message and prior updates related to this maintenance period on our blog.

Best,

-The PACE Team

 

TensorFlow update required due to identified security vulnerability

Summary: TensorFlow update required due to identified security vulnerability

What’s happening and what are we doing: A security vulnerability was discovered in TensorFlow. PACE has installed the patched version 2.6.0 of TensorFlow in our software repository, and we will retire the older versions on November 3, 2021, during our maintenance period.

How does this impact me: Both researchers who use PACE’s TensorFlow installation and those who have installed their own are impacted.

The following PACE installations will be retired:

Modules: tensorflow-gpu/2.0.0 and tensorflow-gpu/2.2.0

Virtual envs under anaconda3/2020.02: pace-tensorflow-gpu-2.2.0 and pace-tensorflow-2.2.0

Please use the tensorflow-gpu/2.6.0 module instead of the older versions identified above. If you were previously using a PACE-provided virtual env inside the anaconda3 module, please use the separate new module instead. You can find more information about using PACE’s TensorFlow installation in our documentation. You will need to update your PBS scripts to call the new module, and you may need to update your Python code to ensure compatibility with the latest version of the package.
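As a sketch only, a PBS script that previously loaded one of the retired modules might be updated along these lines; the job name, resource request, and Python script are placeholders rather than a PACE-specific template, so please follow our documentation for the exact syntax:

    #PBS -N tf-example                 # job name (placeholder)
    #PBS -l nodes=1:ppn=4:gpus=1       # illustrative resource request
    #PBS -l walltime=01:00:00

    cd $PBS_O_WORKDIR

    # Load the patched module instead of tensorflow-gpu/2.0.0 or 2.2.0
    module load tensorflow-gpu/2.6.0

    python train.py                    # your own training script (placeholder)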

If you have created your own conda environment on PACE and installed TensorFlow in it, please create a new virtual environment and install the necessary packages. If you would like, you can use the tensorflow-gpu/2.6.0 virtual environment as a base and then install the other packages you need, as described in our documentation. In order to protect Georgia Tech’s cybersecurity, please discontinue use of any older environments running prior versions of TensorFlow on PACE.
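For researchers rebuilding their own environment with conda, one possible approach is sketched below; the environment name, Python version, and extra packages are placeholders, and our documentation describes the supported workflow in detail:

    # Load the Anaconda module that provides conda
    module load anaconda3/2020.02

    # Create a fresh environment with a patched TensorFlow (2.6.0 or later)
    conda create -n my-tf-env python=3.8
    conda activate my-tf-env           # or "source activate", depending on your shell setup
    pip install "tensorflow-gpu==2.6.0"

    # Install any other packages your workflow needs
    pip install numpy pandas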

What we will continue to do: We are happy to assist researchers with the transition to the new version of TensorFlow. PACE will offer support to researchers upgrading TensorFlow at our upcoming consulting sessions. The next sessions are Thursday, October 28, 10:30-12:15, and Tuesday, November 2, 2:00-3:45. Visit our training page for the full schedule and BlueJeans links.

Thank you for your prompt attention to this security update, and please accept our sincere apology for any inconvenience that this may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Hive scheduler recurring outages

[Update 11/5/21 3:15 PM]

During the November maintenance period, PACE separated Torque and Moab, the two components of the Hive scheduler. This two-server setup, mirroring the Phoenix scheduler arrangement, should improve stability of the Hive scheduler under heavy utilization. We will continue to monitor the Hive scheduler. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

[Update 10/15/21 5:15 PM]

The Hive scheduler is functioning at this time. The PACE team disabled several system utilities that may have contributed to earlier issues with the scheduler. We will continue to monitor the scheduler status and to work with our support vendor to improve stability of Hive’s scheduler. Please check this blog post for updates.

[Update 10/15/21 4:15 PM]

The Hive scheduler is again functional. The PACE team and our vendor are continuing our investigation in order to restore stability to the scheduler.

[Original Post 10/15/21 12:35 PM]

Summary: Hive scheduler recurring outages

What’s happening and what are we doing: The Hive scheduler has been experiencing intermittent outages over the past few weeks, requiring frequent restarts. At this time, the PACE team is running a diagnostic utility and will restart the scheduler shortly. The PACE team is actively investigating the outages in coordination with our scheduler vendor to restore stability to Hive’s scheduler.

How does this impact me: Hive researchers may be unable to submit or check the status of jobs, and jobs may be unable to start. You may find that the “qsub” and “qstat” commands and/or the “showq” command are not responsive. Already-running jobs will continue.

What we will continue to do: PACE will continue working to restore functionality to the Hive scheduler and coordinating with our support vendor. We will provide updates on our blog, so please check here for current status.

Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

 

PACE’s centralized OSG service, powered with a new cluster “Buzzard”

We are happy to announce a new addition to PACE’s service portfolio to support Open Science Grid (OSG) efforts on campus and beyond. This service is kick-started by a brand-new cluster named “Buzzard”, funded by an NSF award* led by Dr. Mehmet Belgin and Semir Sarajlic of PACE, in collaboration with Drs. Laura Cadonati, Nepomuk Otte, and Ignacio Taboada of the Center for Relativistic Astrophysics (CRA).

Open Science Grid (OSG) is a unique consortium that provides shared infrastructure and services to unify access to supercomputing sites across the nation, making a vast array of High Throughput Computing (HTC) resources available to US-based researchers. OSG has been instrumental in ground-breaking scientific advancements, including but not limited to the Nobel Prize-winning gravitational wave research (LIGO).

Did you know that all GT researchers already qualify for OSG? This means you can join today and start running jobs on this vast resource at no cost. We highly encourage you to register for PACE’s next OSG orientation class, which will get you started with the basics of running on OSG. As an added resource, PACE offers documentation to get researchers quickly started with OSG.
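To give a flavor of what running on OSG looks like, here is a minimal, illustrative HTCondor submission; the file names and resource requests are placeholders, and the OSG Connect documentation covers the real details:

    # Contents of hello.sub, a minimal HTCondor submit file (illustrative values):
    #   executable     = hello.sh
    #   request_cpus   = 1
    #   request_memory = 1GB
    #   output         = hello.out
    #   error          = hello.err
    #   log            = hello.log
    #   queue 1

    # Submit from an OSG Connect login node and check the queue
    condor_submit hello.sub
    condor_q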

In addition to training and documentation, PACE offers resource integration services. More specifically, GT faculty members now have an option to acquire new resources to expand Buzzard with their own OSG projects, similar to the High Performance Computing (HPC) services that PACE had successfully offered since 2009, prior to the new cost model. As part of the NSF award, PACE has already started supporting several exceptional OSG projects, namely LIGO, IceCube, and CTA/VERITAS, and we look forward to supporting more OSG projects in the future!

If you are interested in the OSG service, please feel free to reach out to us (pace-support@oit.gatech.edu) and we’ll be happy to discuss how our new service can transform your research. 

Thank you! 

 

* This material is based upon work supported by the National Science Foundation under grant number 1925541. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. 

Announcing the PACE OSG Orientation Class

Dear PACE Researchers, 

PACE is pleased to announce the launch of the PACE Open Science Grid (OSG) Orientation class, which introduces Georgia Tech’s research community to OSG and the distributed high throughput computing resources available via OSG Connect. Join us for this virtual orientation to learn about OSG and how it may benefit your research needs.

Please see below the dates for the sessions and the registration form: 

Dates and times: October 15, 10:30am – 12:15pm
                           November 11, 1:30pm – 3:15pm

Registration: https://b.gatech.edu/3Bi4Yie

This class is based in part on work supported by the NSF CC* award 1925541: “Integrating Georgia Tech into the Open Science Grid for Multi-Messenger Astrophysics”. With this award, PACE, in collaboration with the Center for Relativistic Astrophysics, added CPU/GPU/storage resources to the existing OSG capacity, as well as the first regional StashCache service, which benefits all OSG institutions in the Southeast region, not just Georgia Tech.

This orientation is the first step in PACE’s longer-term plans to support OSG initiatives on campus. Please be on the lookout for more exciting announcements from our team in the very near future.

We look forward to seeing you at the OSG orientation.

Best,

The PACE Team

Hive Project & Scratch Storage Battery Replacement

[Update 9/23/21 3:15 PM]

The replacement batteries have reached a sufficient charge, and Hive GPFS performance has been restored. Thank you for your patience during this maintenance.

[Original Post 9/23/21 12:30 PM]

Summary: Battery replacement on Hive project & scratch storage will impact performance today.

What’s happening and what are we doing: UPS batteries on the Hive GPFS storage device, holding project (data) and scratch storage, need to be replaced. During the replacement, which will begin shortly this afternoon, storage will shift to write-through mode, and performance will be impacted. Once the new batteries are sufficiently charged, performance will return to normal.

How does this impact me: Hive project and scratch performance will be impacted until the fresh batteries have sufficiently charged, which should take approximately 3 hours. Jobs may progress more slowly than normal. If your job runs out of wall time and is cancelled by the scheduler, please resubmit it to run again.

What we will continue to do: PACE will monitor Hive GPFS storage throughout this procedure.

Please accept our sincere apology for any inconvenience that this temporary limitation may cause you. If you have any questions or concerns, please direct them to pace-support@oit.gatech.edu.

Hive and Phoenix Scheduler Configuration Change

Dear PACE Researchers, 

We would like to announce an upcoming change to the scheduler configuration on the Phoenix and Hive clusters at 9:00 AM on Thursday, September 23rd. This change should improve the scheduler performance given the large number of jobs executed by our users. 

What will PACE be doing: PACE will reduce the retention time for job-specific logs from 24 hours to 6 hours after job completion. Reducing the amount of job information the scheduler needs to process regularly should provide a more stable and faster job submission environment. Additionally, the downtime associated with scheduler restarts should decrease, as job ingestion time will be reduced accordingly.

Who does this message impact: Any user who attempts to use qstat for a job more than 6 hours after completion will be unable to do so moving forward. In addition to the scheduler job STDOUT/STDERR files, job statistics for completed jobs on Phoenix and Hive can be queried at https://pbstools-coda.pace.gatech.edu. 
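For example, full job details remain available through qstat only within that 6-hour window; if you need them later, one option (sketched here as a suggestion rather than a required workflow) is to capture them before the job exits:

    # Within 6 hours of completion, full details are still available from the scheduler
    qstat -f 1234567                         # replace with your job ID

    # Optionally, record the details from inside your PBS script before it finishes
    qstat -f $PBS_JOBID > job_${PBS_JOBID}.stats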

What PACE will continue to do: We will monitor the clusters for issues during and after the configuration change to assess any immediate impacts from the update. We will continue to assess the scheduler health to ensure a stable job submission environment. 

As always, please contact us at pace-support@oit.gatech.edu with any questions or concerns regarding this change. 

Best Regards, 
The PACE Team

[Complete] PACE Maintenance Period (November 3 – 5, 2021)

[Complete 11/5/21 3:15 PM]

Our scheduled maintenance has completed ahead of schedule! All PACE clusters, including Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard, are ready for research. As usual, we have released all user jobs that were held by the scheduler. We appreciate everyone’s patience as we worked through these maintenance activities.

Our next maintenance period is tentatively scheduled to begin at 6:00AM on Wednesday, February 9, 2022, and conclude by 11:59PM on Friday, February 11, 2022. We have also tentatively scheduled the remaining maintenance periods for 2022 for May 11-13, August 10-12, and November 2-4.

The following tasks were part of this maintenance period:
ITEMS REQUIRING USER ACTION:
• [Complete] TensorFlow upgrade due to security vulnerability. PACE will retire older versions of TensorFlow, and researchers should shift to using the new module. We also request that you replace any self-installed TensorFlow packages. Additional details are available on our blog.

ITEMS NOT REQUIRING USER ACTION:
• [Complete][Datacenter] Databank will clean the water cooling tower, requiring that all PACE compute nodes be powered off.
• [Complete][System] Operating system patch installs
• [Complete][Storage/Phoenix] Lustre controller firmware and other upgrades
• [Complete][Storage/Phoenix] Lustre scratch upgrade and expansion
• [Postponed][Storage] Hive GPFS storage upgrade
• [Complete][System] System configuration management updates
• [Complete][System] Updates to NVIDIA drivers and libraries
• [Complete][System] Upgrade some PACE infrastructure nodes to RHEL 7.9
• [Complete][System] Reorder group file
• [Complete][Headnode/ICE] Configure c-group controls on COC-ICE and PACE-ICE headnodes
• [Complete][Scheduler/Hive] Separate Torque & Moab servers to improve scheduler reliability
• [Complete][Network] Update Ethernet switch firmware
• [Complete][Network] Update IP addresses of switches in BCDC

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 

[Update 11/1/21 2:00 PM]

C-group controls will be configured on the login nodes for both COC-ICE and PACE-ICE during the maintenance period this week. This should help mitigate overuse of the login node by students running heavy computations, which has slowed the node for others.

Please use compute nodes for all computational work and avoid resource-intensive processes on the login nodes. Students who need an interactive environment are requested to submit an interactive job. Students who are uncertain about how to use the ICE schedulers to work on compute nodes should contact their course’s instructor or TA, who can help with workflows on the cluster. PACE will stop processes that overuse the login nodes in order to restore functionality for all students.
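For reference, an interactive session on a compute node can typically be requested with qsub -I; the queue name and resource request below are placeholders, so please check your course materials or the PACE documentation for the correct values:

    # Request an interactive job on a compute node (illustrative queue and limits)
    qsub -I -q coc-ice -l nodes=1:ppn=4 -l walltime=01:00:00

    # Once the session starts, you are on a compute node and can run your work there,
    # e.g. load a module and run your script
    module load anaconda3/2020.02
    python my_script.py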

Thank you for your efforts to ensure ICE clusters are an available resource for all students in participating courses.

[Reminder 10/26/21 4:30 PM]

Additional details and instructions for the TensorFlow upgrade are available in another blog post.

[Full announcement 10/20/21 10:30 AM]

As previously announced, our next PACE maintenance period is scheduled to begin at 6:00 AM on Wednesday, November 3, and end at 11:59 PM on Friday, November 5. As usual, jobs that request durations that would extend into the maintenance period will be held by the scheduler to run after maintenance is complete. During the maintenance window, access to all PACE-managed computational and storage resources will be unavailable. This includes Phoenix, Hive, Firebird, PACE-ICE, COC-ICE, and Buzzard.

Please see below for a tentative list of activities:

ITEMS REQUIRING USER ACTION:

  • TensorFlow upgrade due to security vulnerability. PACE will retire older versions of TensorFlow, and researchers should shift to using the new module. We also request that you replace any self-installed TensorFlow packages. Additional details and instructions will follow in a separate message.

ITEMS NOT REQUIRING USER ACTION:

  • [Datacenter] Databank will clean the water cooling tower, requiring that all PACE compute nodes be powered off.
  • [System] Operating system patch installs
  • [Storage/Phoenix] Lustre controller firmware and other upgrades
  • [Storage/Phoenix] Lustre scratch upgrade and expansion
  • [System] System configuration management updates
  • [System] Updates to NVIDIA drivers and libraries
  • [System] Upgrade some PACE infrastructure nodes to RHEL 7.9
  • [System] Reorder group file
  • [Headnode/COC-ICE] Configure c-group controls on COC-ICE headnode
  • [Scheduler/Hive] Separate Torque & Moab servers to improve scheduler reliability
  • [Network] Update Ethernet switch firmware
  • [Network] Update IP addresses of switches in BCDC

If you have any questions or concerns, please contact us at pace-support@oit.gatech.edu.

 

[Early announcement]

Dear PACE Users,

This is a friendly reminder that our next Maintenance Period is tentatively scheduled to begin at 6:00AM on Wednesday, 11/03/2021, and to conclude by 11:59PM on Friday, 11/05/2021. As usual, jobs whose resource requests would extend into the Maintenance Period will be held by the scheduler until after the Maintenance Period. During the Maintenance Period, access to all PACE-managed computational and storage resources will be unavailable.

As we get closer to the Maintenance Period, we will communicate the list of activities to be completed and update this blog post.

If you have any questions or concerns, please do not hesitate to contact us at pace-support@oit.gatech.edu.

Best,

The PACE Team