A unique opportunity to meet in an intimate setting with the grid generation, solver, and post-processing tool developers prominent in the field! Past attendees include developers of FUN3D, OpenFOAM, Overflow, Overgrid, Overture, SUGGAR++, and technical representatives from Pointwise, Intelligent Light, Celeritas, and more!
For more information, visit: http://www.2014.oversetgridsymposium.org/index.php
XSEDE 2014 is coming to town, and here’s their announcement (https://www.xsede.org/web/conference/xsede14)
Mark your calendars and join us in Atlanta for XSEDE14, July 13-18, 2014!
The annual XSEDE conference brings together the extended community of individuals interested in advancing research cyberinfrastructure and integrated digital services for the benefit of science and society. XSEDE14 will place a special emphasis on recruiting and engaging under-represented minorities, women, and students, as well as encouraging participation by people from domains of study that do not traditionally use high-performance computing. Sessions will be structured to engage people who are new to computational science and engineering, while in-depth tutorials and high-quality peer-reviewed papers will allow the most experienced researchers to gain new insights and knowledge.
Hotel and Registration deadlines extended!
The XSEDE14 Conference is shaping up to be an excellent one! We are pleased to announce that the hotel has extended our room block rate until June 27. To align with this extension, conference registration will remain at $500 for full conference participation through June 27. After June 27, the late registration fee of $600 will apply.
After consultation with members of the GT IT community (thank you specifically, Didier Contis, for raising awareness of the issue), as well as our vendor, we have identified the cause of the high rate of disk failures plaguing storage units purchased a little more than a year ago.
An update to the firmware running on the internal backplanes of the storage arrays was necessary, and performance and availability improved greatly immediately after it was applied. This backplane firmware is normally maintained by the manufacturer only and, unlike controller firmware, is not readily available to the public, which led to some additional delays before repairs.
That said, we have retained the firmware and the software used to apply it for future use, should other units have issues.
This morning (approximately between 3am and 8am), we suffered a failure in one of the physical hosts that make up our VM farm. This failure caused several head nodes to go offline, as well as one of the PACE-run license servers for software.
**********
For ALL PACE-run clusters, it would be wise to double-check your jobs in case they lost their license server before kicking off this morning, or were running during this window.
**********
The following head nodes went offline, but have returned:
cygnus-6
granulous
megatron
microcluster
mps
rozell
testflight-6
The following license server went offline, but has returned:
license-gt
In the cases of the head nodes, no jobs should have been affected nor any data lost because of nodes being offline.
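If you would like to verify your jobs yourself, a quick check along these lines may help. This is a sketch only: it assumes a Torque-style scheduler (hence `qstat`) and a FlexNet-based license daemon; the job ID and port number shown are placeholders, not values from this announcement.

```shell
# List your current jobs and their states (R = running, Q = queued).
qstat -u $USER

# Inspect a specific job's full record, including its start time,
# to see whether it overlapped the 3am-8am window (JOBID is a placeholder).
qstat -f JOBID | grep -i start_time

# If your application checks out a FlexNet license, confirm the server
# is answering again (27000 is a common default port, not confirmed here).
lmutil lmstat -c 27000@license-gt
```

If anything looks off, resubmitting the affected job after confirming the license server responds is the safest course.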
We’ve noticed an increase in a type of disk failure on some of the storage nodes that ultimately has a severe negative impact on storage performance. In particular, we observe that certain models of drives in certain manufacturing date ranges seem to be more prone to failure.
As a result, we’re looking more closely at our logs to gauge how widespread this is. Most of the older storage seems fine; the problem has tended toward some of the newer storage using both 2TB and 4TB drives. The 2TB drives are the more surprising to us, as the model line involved has generally performed as expected, with many older storage units using the same drives without having these issues.
We are also engaging our vendor to see if this is something that they are seeing elsewhere, and making sure we keep a close eye on our stock of replacements to deal with these failures.
We’ve gone ahead and replaced some disks in your storage, as the type of failure they are generating right now causes dramatic slowdowns in I/O performance for the disk arrays.
As a result of the replacements, the arrays will remain slow for roughly 5 hours while they rebuild themselves to restore the appropriate redundancy.
We’ll keep an eye on this problem, as we have recently noticed a spike in the number of these events.
Applications are now being accepted for Experiencing HPC for Undergraduates, a program designed to introduce high performance computing (HPC) research topics and techniques to undergraduate students at the sophomore level and above. The program introduces various aspects of HPC research at the SC14 Conference to increase awareness of opportunities to perform research as an undergraduate and potentially in graduate school or in a job related to HPC topics in computer science and computational science.
SC14 will be held Nov. 16-21, 2014 in New Orleans. Complete conference information can be found at: http://sc14.supercomputing.org
The Experiencing HPC for Undergraduates Program contains selected parts of the main SC Technical Program, with several additional elements. Special sessions include panels with current graduate students in HPC areas to discuss graduate school and research, and panels with senior HPC researchers from universities, government and industrial labs to discuss career opportunities in HPC fields.
Prof. Jeff Hollingsworth, co-chair of Experiencing HPC for Undergraduates, discusses the program and the need to develop the next generation of HPC professionals in an HPCwire podcast at: http://www.hpcwire.com/soundbite/toward-next-generation-hpc-professionals/
Applications must be submitted using the SC14 submission site at https://submissions.supercomputing.org/. The deadline to apply is Sunday, June 15.
ANSYS version 15 and Matlab version R2014a have been installed on PACE clusters.
To see examples of how to properly load and use the new versions, execute the following commands and follow the instructions provided.
$ module help ansys/15.0
$ module help matlab/r2014a
If you have any problems executing the examples given by “module help”, please contact pace-support@oit.gatech.edu.
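As an illustration of the typical workflow once the help text has been reviewed, a minimal sketch follows. The module names come from this announcement, but the batch-script details and the MATLAB invocation are generic examples, not PACE-specific instructions:

```shell
# Load the newly installed versions into your environment.
module load ansys/15.0
module load matlab/r2014a

# Confirm which modules are currently loaded.
module list

# Example: run MATLAB non-interactively, as you might inside a batch job.
matlab -nodisplay -nosplash -r "disp(version); exit"
```

The same `module load` lines can be placed in a job submission script so the software is available on the compute nodes at run time.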