Posts

[Resolved] Expected Network Interruptions Due to Campus Network Maintenance – Intermittent delays or disruption to major campus IT services

[Original Post – February 18, 2019] On Sunday, Feb. 24, OIT will perform a series of data center upgrades and migrations. This service window includes intermittent delays or disruption to major campus IT services between 7 a.m. and 8 p.m. as well as occasional interruptions in wireless connectivity between 9 a.m. and 12 p.m.

During this upgrade window, intermittent service interruptions may prevent users from connecting to PACE-managed resources or may disconnect existing sessions, which can interrupt interactive jobs that rely on an active SSH connection to a given cluster. However, these upgrades will not impact running or queued batch jobs. OIT anticipates that all service upgrades and migrations will conclude by 8 p.m., after which PACE users can resume their work as usual.

For additional information and details on the services that OIT will be upgrading and migrating, please refer to the status page at https://status.gatech.edu.

PACE clusters ready for research

Our February 2019 maintenance (http://blog.pace.gatech.edu/?p=6419) completed on schedule. We have brought compute nodes online and released previously submitted jobs. Login nodes are accessible and your data are available. As usual, there are a small number of straggling nodes that we will address over the coming days.
Please let us know of any problems you notice: pace-support@oit.gatech.edu

Compute

* (COMPLETE) Vendor will replace defective components on groups of servers

Network

* (COMPLETE) Ethernet network reconfiguration

Storage

* (COMPLETE) GPFS / DDN enclosure reset

* (COMPLETE) NAS maintenance and reconfiguration

Other

* (COMPLETE) PACE VMware reconfiguration to remove out-of-support hosts

* (COMPLETE) Migration of Megatron cluster to RHEL7

[Resolved] Scheduler problem on RHEL7 Dedicated Clusters

[Resolved – February 1, 21:35] At about 5:20pm on February 1, the scheduler for the new RHEL7 dedicated clusters went down after encountering a segmentation fault. We have resolved the incident and brought the scheduler back online. Based on our assessment, this incident impacted two jobs. We advise that you review your jobs from today. Additionally, users who attempted to submit jobs between 5:20pm and 9:35pm may have experienced scheduler communication errors when running commands such as qstat and qsub.
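If you would like to review your jobs, a quick check along the lines below should be enough. This is only a sketch assuming the Torque/PBS commands mentioned above; <jobid> is a placeholder for your own job number.

    qstat -u $USER       # list your jobs and their current states
    qstat -f <jobid>     # show full details for a specific job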

We will continue to monitor the scheduler and update if needed. If you experience any further issues, please contact pace-support@oit.gatech.edu.

Thank you for your attention, and apologies for this inconvenience.

 

[Resolved] Networking (InfiniBand) problems

[Resolved, January 28] One of our main Mellanox IB switches partially went down on Sunday morning, leaving a large number of compute nodes without access to the IB interconnect. Our system engineers resolved the matter at about 9:41am, and the IB switch is back online. As far as we know, the following queues were impacted: athena-intel, atlantis, atlas-6-sunge, atlas-intel, force-6, joe-intel, joe-test, novazohar, pace-devel, swarm, and zohar. We advise that you review your jobs from this weekend as well as your currently running jobs, as this incident may have interrupted them. If your jobs failed with MPI errors or were unable to write to /scratch/ or /data/[Your_Files], please resubmit them.
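As a rough illustration of how to check, you can scan your job output and error files for the errors mentioned above and resubmit any affected job scripts. The file names below are placeholders for your own files.

    grep -li "mpi" myjob.o* myjob.e*    # list output/error files that mention MPI problems
    qsub myjob.pbs                      # resubmit the affected job script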

We will continue to monitor this switch and update if needed.  If you experience any further issues, please contact pace-support@oit.gatech.edu.

Thank you, and sorry for this inconvenience.

PACE quarterly maintenance – (Feb 15-16, 2019)

[Update – 02/11/2019] The updated task list for our scheduled quarterly maintenance includes the following:

Compute

  • (no user action needed) Vendor will replace defective components on groups of servers

Network

  • (no user action needed) Ethernet network reconfiguration

Storage

  • (no user action needed) GPFS / DDN enclosure reset
  • (no user action needed) NAS maintenance and reconfiguration

Other

  • (no user action needed) PACE VMware reconfiguration to remove out-of-support hosts

 

[Original Post – 01/18/2019] We are preparing for a shortened maintenance period beginning February 15, 2019. Unlike our regular schedule, which starts on a Thursday and takes three days, this maintenance will start on a Friday and take only two days.

As usual, jobs with long walltimes will be held by the scheduler to ensure that no active jobs will be running when systems are powered off. These jobs will be released as soon as the maintenance activities are complete.
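For example (illustrative numbers only), a job submitted on February 13 with the first walltime request below could still be running when maintenance begins on February 15, so the scheduler will hold it until maintenance ends; the second request finishes before the window, so the job can start right away.

    #PBS -l walltime=72:00:00    # overlaps the maintenance window; the job is held until maintenance completes
    #PBS -l walltime=24:00:00    # finishes before the window; the job can start normally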

In general, we’ll perform maintenance on the GPFS storage, migrate some virtual machines to new servers, perform hardware changes on one of the clusters, and finalize the migration of “/usr/local”, which is a network-attached mount point on all machines, to a more reliable storage pool.

While we are still working on finalizing the task list and details, none of these tasks are expected to require any user actions.

We’ll update this post as we have more details.

Changes to mount points (no user impact expected)

The investigation that followed the system failures that temporarily rendered the scientific repository unresponsive (http://blog.pace.gatech.edu/?p=6390) showed that some additional maintenance is required. To facilitate this maintenance, we will make a change to the mount point for /usr/local, which is network mounted and identical on all compute nodes.

Our tests indicate that this swap can be performed live, without impacting running jobs. It’s also completely transparent to users; you don’t need to change or do anything as a result.
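If you would still like to verify the mount after the change, a simple check such as the following (illustrative only) will confirm that /usr/local is mounted and responsive on a node:

    df -h /usr/local               # confirm the filesystem is mounted and reachable
    mount | grep '/usr/local'      # show which server and export currently back the mount point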

In the unlikely event of job crashes that you suspect are caused by this operation, please contact pace-support@oit.gatech.edu and we’ll be happy to assist.

Thank you,
PACE Team

[Resolved] Widespread problems impacting all PACE machines

Update (12/21, 10:15am): A correction: the problems started this morning around 8:15am, not yesterday evening as previously communicated. The systems were back online at 8:45am.

Update (12/21, 9:15am): Another incident started last night, causing the same symptoms (hanging and unavailability of the scientific repository). OIT storage engineers reverted the services on the redundant system (high-availability pair), and the storage is available again. We continue to investigate the root cause of the recurring failures experienced over the past several weeks.

Update (12/12, 6:30pm): The services have been successfully migrated to the high-availability pair, and the filesystems are once again accessible. We’ll continue to monitor the systems and take a close look at the errant components. It is still possible that some of these problems will recur, but we’ll be ready to address them if they do.

Update (12/12, 5:30pm): Unfortunately the problems seem to be coming back. We continue to work on this. Thank you for your patience.

Update (12/12, 11:30am): We identified the root cause as a configuration conflict between two devices and resolved the problem. All systems are back online and available for jobs.

Update (12/12, 10:00am): Our battle with the storage system continues. This filesystem is designed as a high-availability service with redundant components to prevent such situations, but unfortunately the second system failed to take over successfully. We are investigating the possibility that the network is the culprit. We continue to work rigorously to bring the systems back online as soon as possible.

Update (12/11, 9:00pm): The problems continue; we are working on them with support from related OIT units.

Update (12/11, 7:30pm): We mitigated the issue, but the intermittent problems may continue to recur until the root cause is addressed. We continue to work on it.

Original message:

Dear PACE Users,

At around 3:45pm on Dec 11, the fileserver that serves the shared “/usr/local” on all PACE machines started experiencing problems. This issue causes several widespread problems, including:

  • Unavailability of the PACE repository (which is in “/usr/local/pacerepov1”)
  • Crashing of newly started jobs that run applications in the PACE repository
  • Hanging of new logins

Running applications that have their executables cached in memory may continue to run without problems, but it’s very difficult to tell exactly how different applications will be impacted.
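If you want to check whether your session is affected, a quick responsiveness test against the repository path (illustrative only; the 10-second timeout is arbitrary) is:

    timeout 10 ls /usr/local/pacerepov1 > /dev/null && echo "repository responsive" || echo "repository hanging or unavailable"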

We are working to resolve these problems ASAP and will keep you updated on this post.

Brief Interruption to VPN During Urgent VPN Service Maintenance

On November 29, 2018, from 10:00pm to 11:00pm (EST), OIT will be conducting maintenance of our VPN service. During this period, users who are connected to our clusters via the VPN (anyc.vpn.gatech.edu) will be disconnected and will need to reconnect to the VPN and then to the cluster. This service maintenance will not impact any running batch jobs, but it may impact interactive jobs that are running during this period. For additional details on the maintenance, please visit: https://status.gatech.edu/incidents/9ljkjx72462x

Thank you for your attention to this urgent maintenance that OIT is conducting.

[Resolved] CoC-ICE Cluster: Multi-node job problem

[Update – November 26, 2018] We’ve identified the issue and resolved the configuration error.  Users are now able to submit multi-node jobs on the CoC-ICE cluster.

[Original Post – November 21, 2018]

We are investigating an issue in which users experience hanging jobs when they submit a multi-node job on the CoC-ICE cluster. This issue does not impact single-node jobs, and it does not affect the PACE-ICE cluster.
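For reference, by a multi-node job we mean one whose resource request spans more than one node, for example with directives like the following (illustrative values; the queue name and resource counts are placeholders):

    #PBS -q coc-ice
    #PBS -l nodes=2:ppn=4        # two nodes, four processors per node; single-node jobs (nodes=1) are unaffected
    #PBS -l walltime=00:30:00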

Thank you for your patience, and we apologize for this inconvenience while we resolve this issue.

[Resolved] ICE Clusters – Intermittent account problems

We have received multiple reports of jobs crashing after being allocated resources on the instructional clusters (CoC-ICE and PACE-ICE). We have determined that intermittent account problems are the cause of this issue, and we are working toward a solution.

Thank you for your patience, and we apologize for the inconvenience.