Posts

PACE Debugging and Profiling Workshop on 03/21/2013

Dear PACE community,

We are happy to announce the first of our Debugging and Profiling Workshops, which will take place on 03/21/2013, 1pm-5pm, in the Old Rich Building conference room (ITDC 242).

If your code is crashing, hanging, producing inaccurate results, or running unbearably slowly, you *do* want to be there. We will go over text- and GUI-based tools that are available on the PACE clusters, including gdb, valgrind, DDT, gprof, PAPI and TAU. There will be hands-on examples, so bring your laptop if you can, although it is not mandatory.
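To give you a taste of the hands-on examples, here is a minimal sketch of how a typical debugging session starts (the module name and source file are illustrative; we will use whatever is installed on the RHEL6 queues):

$ module load gcc                      # load a compiler (module name may differ)
$ gcc -g -O0 crash.c -o crash          # build with debug symbols, optimization off
$ gdb ./crash                          # step through the crash interactively
$ valgrind --leak-check=full ./crash   # hunt for memory errors and leaks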

If you bring a laptop to follow the hands-on examples, please make sure that you have:

  • An active PACE account with access to one of the RHEL6 queues
  • Access to “GTwifi”
  • A terminal client to log in (PuTTY for Windows, Terminal for Mac)
  • A text editor that you are comfortable with (Vim, Emacs, nano, …)

Don’t worry if your laptop is not configured to access the PACE clusters. I will be in the conference room half an hour early to help you prepare for the session. Just show up a bit early with your laptop, and we will take care of the rest together 🙂

Please RSVP (to mehmet.belgin@oit.gatech.edu) by 03/19/2013 and include your GT username. Your RSVP will guarantee a seat and printed copies of the course material. You will also be able to fetch an electronic copy (including all the slides and codes) at any time by running a simple command on the cluster (we will do that during the class).

Here’s the full schedule:

  • 12:30pm -> 1:00pm : (Optional) Help session to make sure your laptop is ready for the workshop
  •  1:00pm -> 2:45pm : Debugging session (gdb, valgrind, DDT)
  •  2:45pm -> 3:15pm : Break
  •  3:15pm -> 5:00pm : Profiling session (gprof, PAPI, TAU)

The location is the Old Rich Building, ATDC conference room, #242. Google knows us as “258 4th Street”. We are right across from the Clough Commons building.

We look forward to seeing you there!

Breaking news from NSF

It looks like Dr. Subra Suresh will be stepping down from his position as Director of the NSF, effective late March, to become the next president of Carnegie Mellon University.

Click here to download a copy of his letter to the NSF community: Staff Letter 2-4-13.

Interesting times are ahead for both NSF and DOE.

New and Updated Software: Portland Group Compiler and ANSYS

Two new software packages have been installed on PACE-managed systems: PGI 12.10 and ANSYS 14.5 Service Pack 1.

PGI 12.10

The Portland Group, Inc. (a.k.a. PGI) makes software compilers and tools for parallel computing. The Portland Group offers optimizing parallel Fortran 2003, C99 and C++ compilers and tools for workstations, servers and clusters running Linux, MacOS or Windows operating systems.

This version of the compiler supports the OpenACC GPU programming directives. More information can be found at The Portland Group website, and information about using this compiler with the OpenACC directives can be found at PGI Insider and OpenACC.

Usage Example

$ module load pgi/12.10
$ pgfortran example.f90
$ ./a.out
Hello World
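
Since this release adds OpenACC support, here is a minimal sketch of offloading a loop to a GPU with the directives (the source file is illustrative and not part of the PACE materials; -acc enables OpenACC and -Minfo=accel reports what the compiler accelerates). Run the resulting binary on a node with an NVIDIA GPU, e.g. through one of the GPU queues.

$ cat saxpy.f90
program saxpy
  implicit none
  integer :: i
  real :: x(100000), y(100000)
  x = 1.0
  y = 2.0
  ! offload the following loop to the GPU
  !$acc parallel loop
  do i = 1, size(x)
     y(i) = 2.0*x(i) + y(i)
  end do
  print *, y(1)
end program saxpy
$ pgfortran -acc -Minfo=accel saxpy.f90 -o saxpy
$ ./saxpy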

ANSYS 14.5 Service Pack 1

ANSYS develops, markets and supports engineering simulation software used to foresee how product designs will behave and how manufacturing processes will operate in real-world environments.

Usage Example

$ module load ansys/14.5
$ ansys145

Panasas problems, impacting all PACE clusters

The Panasas storage server started responding slowly approximately an hour ago. We use this server to host the entire software stack, as well as the “scratch” directory in your home folders.

No jobs have been killed, but you will notice significant degradation in performance. Starting new jobs and commands will also be slow, although they should run.

We are actively working with the vendor to resolve these issues and will keep you updated via this blog and the “pace-availability” email list.

Thank you for your patience.

PACE Team

Collapsing nvidiagpu and nvidia-gpu queues

PACE has several nodes with NVidia GPUs installed.
There are currently two queues (nvidiagpu and nvidia-gpu) that have GPU nodes assigned to them.
It is confusing to have two queues with the same purpose and slightly different names, so PACE will be collapsing both queues into the “nvidia-gpu” queue.
That means that the nvidiagpu queue will disappear, and the nvidia-gpu queue will have all of the resources contained by both queues.
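
If your submission scripts name the old queue, the only change needed is the queue line; a minimal sketch of a PBS script header (all values other than the queue name are illustrative):

#PBS -q nvidia-gpu            # formerly: #PBS -q nvidiagpu
#PBS -l nodes=1:ppn=1
#PBS -l walltime=1:00:00

Equivalently, jobs submitted with “qsub -q nvidiagpu myjob.pbs” should now use “qsub -q nvidia-gpu myjob.pbs”.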

Please send any questions or concerns to pace-support@oit.gatech.edu.

January 2013 quarterly maintenance is complete

Greetings!

We have completed our quarterly maintenance activities.  Head nodes are online again and available for use, queued jobs have been released, and the scheduler is awaiting new submissions.

Our RedHat 6 clusters have received system software updates.  Please keep an eye on your jobs to verify everything is operating correctly.

Our Panasas scratch storage has received another round of updates.  Preliminary testing indicates that we should have a resolution to our crashes, but the quota system is known to be broken.  As advised by Panasas, we have disabled quotas on scratch.  Please do your best to stay below the 20TB threshold.  We will be monitoring usage, and we know where you live.  🙂
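With quotas disabled, you can keep an eye on your own usage; a minimal sketch, assuming your scratch space is reachable as ~/scratch:

$ du -sh ~/scratch    # total size of everything under your scratch directory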

We have a new license server providing checkouts of the Portland Group and Intel compilers, Matlab DCS, the Allinea DDT debugger and Lumerical.  Please let us know if you have problems accessing this software.  The old server is still running and we will be monitoring it for a short while for extraneous activity.

More nodes from Joe and the FoRCE have been converted from RHEL5 to RHEL6.  If you are still using the RHEL5 side of the world, please prioritize a transition to RHEL6.  We stand ready to assist you with this transition.
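
If you are not sure which OS a node is running, a quick check (standard on RedHat systems; your output may differ):

$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)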

Finally, our new configuration system has been deployed in prototype mode.  We will use this to gather operational information and other data that will facilitate a full transition to this system in a future maintenance day.

As usual, please let us know (via email to pace-support@oit.gatech.edu) if you encounter any issues.

Happy Computing!

–Neil Bright

Symposium: Integrating Computational Science into your Undergraduate Curriculum

Clemson University (Clemson, SC) is hosting a symposium on February 11, 12, and 13.
The topic is “Integrating Computational Science into your Undergraduate Curriculum”
The workshop, symposium and training are open at no charge to all interested faculty and students who register to attend.
Financial assistance for primarily undergraduate faculty is available to cover travel costs.

See the Symposium website for the agenda and registration information.

Datacenter modifications

Tomorrow morning (January 9) at 8:30am, facilities management will be performing some work on the power distribution systems in the Rich datacenter.  None of this work involves anything that powers PACE systems; there should be zero impact on any job or computer that PACE manages.  However, since we share space in the datacenter, PACE systems could be affected in the event of a major problem.

Once again, there should be zero impact on PACE systems; no jobs or computers should be affected.

Please let us know (via email to pace-support@oit.gatech.edu) if you have any questions or concerns.

TestFlight upgraded to new 6.3 stack

Here’s our present for the holidays: a new OS stack based on RHEL 6.3, which our tests indicate delivers a performance boost across all CPU architectures. Please try your codes on TestFlight now to make sure we haven’t introduced new bugs in this stack, and report any problems you see.
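
A minimal sketch of running a quick test on the new stack (the script name is illustrative, and we assume the queue is named testflight, as in our earlier posts):

$ qsub -q testflight myjob.pbs    # submit your usual job script to TestFlight
$ qstat -u $USER                  # watch its state; report anything unexpected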

Scheduled Quarterly Maintenance on 01/15/2013

The first quarterly maintenance of 2013 will take place on 01/15. All systems will be offline for the entire day. We hope that no jobs will need to be killed, since we have been placing holds on jobs that would still be running on that day. If you submitted jobs with long walltimes (extending past 01/15), you will notice that the scheduler is holding them to protect them from being killed.

Here’s a summary of the tasks that we are planning to accomplish on the maintenance day.

* OS upgrade (6.2 to 6.3): We will upgrade the RHEL OS to version 6.3. This version offers better compatibility with our hardware, with potential performance benefits. We have been testing the existing software stack with this version to verify compatibility and do not expect any problems. We are also upgrading the testflight nodes to 6.3 (they should be online very soon), so please submit test jobs to this queue to verify that your codes will continue to run on the new system.

* Scratch storage maintenance: As most of you already know, we have been working with Panasas to resolve the ongoing crashes. Panasas has identified the cause, and the fix requires a new release of their system software. We expect to deploy a tested version on this maintenance day.

Important: The new release will be tested on a separate storage system provided by Panasas, not on our production system. Therefore, we must be prepared for the possibility of unforeseen problems that will only be triggered by production runs with actual usage patterns. In an effort to shield long-running jobs from such an event, we are placing another reservation that only allows jobs that will complete by 02/17/2013, while holding longer jobs. This way, should we need to declare an emergency downtime, we will be able to do so with minimal impact. This means jobs with more than 31 days of walltime will be held until February 17th, so please consider this while setting walltimes for your jobs (see the sketch below). This reservation is contingent upon the stability of the system, and it may be removed earlier than this date if we feel confident enough. We are sorry for this inconvenience.
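For example, a job submitted shortly after the maintenance day needs a walltime that lets it finish by 02/17/2013 to avoid being held; in a PBS script that is just the walltime resource line (the value is illustrative):

#PBS -l walltime=240:00:00    # 10 days; completes well before 02/17/2013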

* Conversion of more RHEL5 nodes to RHEL6: The majority of our users have already made the switch to RHEL6 systems. Therefore, we will migrate more of the FoRCE and Joe nodes to the corresponding RHEL6 queues. We are not getting rid of the RHEL5 queues entirely (just yet), but the number of nodes they contain will be significantly reduced. Please contact us if your jobs still depend on RHEL5, since this version will be deprecated in the near future.

* Deployment of new database-driven configuration builders (dry-run mode only): We are developing a new system to manage user accounts, queues, and many other system management tasks, with the goal of minimizing human error and maximizing efficiency. We will deploy a dry-run-only prototype of this system, which will run alongside the existing mechanisms. This will allow us to test and verify the new system against real usage scenarios to assist the development effort; it will not be used for actual management tasks.

* New license server: We will start using a new license server, since the existing server is getting old. We will migrate the existing licenses to the new server on the maintenance day. We don’t expect any difficulties, but please contact us if you notice any problems with licenses.

As always, please let us know if you have any concerns or questions at pace-support@oit.gatech.edu.