BLUESHARK SUPERCOMPUTER (HPC)
Introduction
The Blueshark cluster at Florida Tech is an IBM iDataPlex system comprising 63 compute nodes (a total of 1,720 processor cores and 4,397GB of RAM), 11 GPU nodes, 1 storage node, and 1 head node. For more information about high-performance computing, see http://en.wikipedia.org/wiki/High-performance_computing. The Blueshark cluster was funded by a National Science Foundation Major Research Instrumentation grant.
Request Access
Faculty can request access to the Blueshark cluster by entering a support request at https://myfootprints.fit.edu.
For an entire course to get access, the course instructor must request access and provide the TRACKS accounts of all students needing access.
Guests, visitors, and students must have a faculty sponsor to get access to the cluster.
Mailing List
To join the mailing list, click Subscribe at this URL: https://lists.fit.edu/sympa/info/hpc
Hardware

The configuration of each of the 50 general compute nodes is:
- IBM System x iDataPlex dx360 M3/M4
- 2 x Hexa-Core Intel Xeon X5650 @ 2.67GHz CPUs
- 24GB of RAM
- 250GB SATA HDD
- 1 Gb Ethernet Interconnect
There are 11 GPU compute nodes with this configuration:
- Dell PowerEdge C4130
- 2 x 10-Core Intel Xeon E5-2650 @ 2.30GHz CPUs
- 131GB of RAM
- 1TB SATA HDD
- 4 x Nvidia Tesla K40m
- 1 Gb Ethernet Interconnect
- Mellanox InfiniBand Interconnect
The 13 big-memory compute nodes have this configuration:
- SuperMicro 1U servers
- 2 x 10-Core Intel Xeon E5-2650 @ 2.30GHz CPUs
- 131GB of RAM
- 120GB SATA HDD
- 1 Gb Ethernet Interconnect
The storage node configuration is:
- Dell PowerEdge R630
- Dell PowerVault MD3060
- 60 x 4TB HDD
- 240TB raw capacity
The head node configuration is:
- IBM System x iDataPlex dx360 M3
- 2 x Quad-Core Intel Xeon X5550 @ 2.67GHz CPUs
- 24GB of RAM
- LSI MegaRAID SAS Controller
- Storage Expansion Unit
- 8 x 1 TB 7200RPM SAS Hot-Swap HDD
- 10 GbE link to compute nodes via Chelsio T310 10GbE Adapter
- Redundant Hot-swap Power Supplies
Other hardware resources:
- 2 x BNT 48-port 1GbE switches with dual 10GbE
Software
The HPC software environment runs on CentOS 7 Linux.
Software Installed:
- ATLAS - Automatically Tuned Linear Algebra Software
- BLAS - Basic Linear Algebra Subprograms (a CBLAS usage sketch follows this list)
- Boost C++
- CUDA - Nvidia CUDA Programming
- DMTCP - Distributed MultiThreaded CheckPointing
- Environmental Modeling System - http://strc.comet.ucar.edu/
- Fluent
- Gaussian - http://www.gaussian.com
- GNU Compilers - C/C++/Fortran
- Java
- LAPACK - Linear Algebra Package
- MPI - Message Passing Interface - MPICH and OpenMPI (a usage sketch follows this list)
- NetCDF - Network Common Data Form
- Octave - GNU Octave
- Paraview - Data Analysis and Visualization
- PETSc - Portable, Extensible Toolkit for Scientific Computation
- Portland Group Compiler - C/C++/Fortran/MPICH
- Python
- SAGE Math - http://www.sagemath.org
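
As a quick orientation to the software stack, here is a minimal MPI "hello world" in C. This is a sketch, not a site-specific recipe: it assumes only that one of the listed MPI implementations (MPICH or OpenMPI) puts mpicc and mpirun on your PATH, and the file name hello_mpi.c is illustrative.

    /* hello_mpi.c - each MPI rank prints its rank, the job size,
     * and the node it is running on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &len);

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compile and run it with, for example:

    mpicc hello_mpi.c -o hello_mpi
    mpirun -np 12 ./hello_mpi

(Here -np 12 matches the 12 cores of a general compute node; adjust as needed, and follow whatever job-submission procedure applies on the cluster.)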
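
The linear algebra libraries can be exercised just as directly. Below is a small CBLAS sketch (the C interface that ATLAS provides on top of BLAS) multiplying two 2x2 matrices with cblas_dgemm; the header name and link flags are assumptions that vary by installation, with -lcblas -latlas being a common ATLAS pairing.

    /* dgemm_demo.c - computes C = A * B for 2x2 row-major matrices. */
    #include <stdio.h>
    #include <cblas.h>

    int main(void)
    {
        double A[4] = { 1.0, 2.0,
                        3.0, 4.0 };
        double B[4] = { 5.0, 6.0,
                        7.0, 8.0 };
        double C[4] = { 0.0 };  /* all entries zero-initialized */

        /* C = 1.0 * A * B + 0.0 * C */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

        /* Expected output: 19 22 / 43 50 */
        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

A typical build line would be gcc dgemm_demo.c -o dgemm_demo -lcblas -latlas.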