Customer Testimonials
"With Oxford providing HPC not just to researchers within the University, but to local businesses and in collaborative projects, such as the T2K and NQIT projects, the Slurm scheduler really was the best option to ensure different service level agreements can be supported. If you look at the Top500 list of the World's fastest supercomputers, they're now starting to move to Slurm. The scheduler was specifically requested by the University to support GPUs and the heterogeneous estate of different CPUs, which the previous TORQUE scheduler couldn't, so this forms quite an important part of the overall HPC facility."
Julian Fielden, Managing Director at OCF
"In 2010, when we embarked upon our mission to port Slurm to our Cray XT and XE systems, we discovered first-hand the high quality software engineering that has gone into the creation of this product. From its very core Slurm has been designed to be extensible and flexible. Moreover, as our work progressed, we discovered the high level of technical expertise possessed by SchedMD who was very quick to respond to our questions with insightful advice, suggestions and clarifications. In the end we arrived at a solution that more than satisfied our needs. The project was so successful we have now migrated all our production science systems to Slurm, including our 20 cabinet Cray XT5 system. The ease with which we have made this transition is testament to the robustness and high quality of the product but also to the no-fuss installation and configuration procedure and the high quality documentation. We have no qualms about recommending Slurm to any facility, large or small, who wish to make the break from the various commercial options available today"
Colin McMurtrie, Head of Systems, Swiss National Supercomputing Centre
"Thank you for Slurm! It is one of the nicest pieces of free software for managing HPC clusters we have come across in a long time. Both of our Blue Genes are running Slurm and it works fantastically well. It's the most flexible, useful scheduling tool I've ever run across."
Adam Todorski, Computational Center for Nanotechnology Innovations, Rensselaer Polytechnic Institute
"Awesome! I just read the manual, set it up and it works great. I tell you, I've used Sun Grid Engine, Torque, PBS Pro and there's nothing like Slurm."
Aaron Knister, Environmental Protection Agency
"Today our largest IBM computers, BlueGene/L and Purple, ranked #1 and #3 respectively on the November 2005 Top500 list, use Slurm. This decision reduces large job launch times from tens of minutes to seconds. This effectively provides us with millions of dollars with of additional compute resources without additional cost. It also allows our computational scientists to use their time more effectively. Slurm is scalable to very large numbers of processors, another essential ingredient for use at LLNL. This means larger computer systems can be used than otherwise possible with a commensurate increase in the scale of problems that can be solved. Slurm's scalability has eliminated resource management from being a concern for computers of any foreseeable size. It is one of the best things to happen to massively parallel computing."
Dona Crawford, Associate Director, Lawrence Livermore National Laboratory
"We are extremely pleased with Slurm and strongly recommend it to others because it is mature, the developers are highly responsive and it just works."
Jeffrey M. Squyres, Pervasive Technology Labs at Indiana University
"We adopted Slurm as our resource manager over two years ago when it was at the 0.3.x release level. Since then it has become an integral and important component of our production research services. Its stability, flexibility and performance has allowed us to significantly increase the quality of experience we offer to our researchers."
Dr. Greg Wettstein, Ph.D., North Dakota State University
"SLURM is the coolest thing since the invention of UNIX... We now can control who can log into [compute nodes] or at least can control which ones to allow logging into. This will be a tremendous help for users who are developing their apps."
Dennis Gurgul, Research Computing, Partners Health Care
"SLURM is a great product that I'd recommend to anyone setting up a cluster, or looking to reduce their costs by abandoning an existing commercial resource manager."
Josh Lothian, National Center for Computational Sciences, Oak Ridge National Laboratory
"SLURM is under active development, is easy to use, works quite well, and most important to your harried author, it hasn't been a nightmare to configure or manage. (Strong praise, that.) I would range Slurm as the best of the three open source batching systems available, by rather a large margin."
Bryan O'Sullivan, PathScale
"SLURM scales perfectly to the size of MareNostrum without noticeable performance degradation; the daemons running on the compute nodes are light enough to not interfere with the applications' processes and the status reports are accurate and concise, allowing us to spot possible anomalies in a single sight."
Ernest Artiaga, Barcelona Supercomputing Center
"SLURM was a great help for us in implementing our own very concise job management system on top of it which could be tailored precisely to our needs, and which at the same time is very simple to use for our customers. In general, we are impressed with the stability, scalability, and performance of Slurm. Furthermore, Slurm is very easy to configure and use. The fact that SLURM is open-source software with a free license is also advantageous for us in terms of cost-benefit considerations."
Dr. Wilfried Juling, Director, Scientific Supercomputing Center, University of Karlsruhe
"I had missed Slurm initially when looking for software for a cluster and ended up installing Torque. When I found out about Slurm later, it took me only a couple of days to go from knowing nothing about it to having a SLURM cluster than ran better than the Torque one. I just wanted to say that your focus on more "secondary" stuff in cluster software, like security, usability and ease of getting started is *really* appreciated."
Christian Hudson, ApSTAT Technologies
"SLURM has been adopted as the parallel allocation infrastructure used in HP's premier cluster stack, XC System Software. Slurm has permitted easy scaling of parallel applications on cluster systems with thousands of processors, and has also proven itself to be highly portable and efficient between interconnects including Quadrics, QsNet, Myrinet, Infiniband and Gigabit Ethernet."
Bill Celmaster, XC Program Manager, Hewlett-Packard Company
Last modified 14 April 2015