
Blueshark Supercomputer - HPC

High Performance Computing (HPC)

The Blueshark HPC Cluster at Florida Tech is an IBM iDataPlex system composed of 63 compute nodes (a total of 1,720 processor cores and 4,397GB of RAM), 11 GPU nodes, 1 storage node, and 1 head node. The Blueshark Cluster was funded by a National Science Foundation Major Research Instrumentation (MRI) grant.

An HPC cluster is a set of computers working in parallel that performs like a supercomputer for a fraction of the price. It is made up of nodes connected by a high-speed network that together carry out compute-intensive tasks, and it is connected to the outside world through a single head node.
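
Work is typically distributed across the nodes with MPI (both MPICH and OpenMPI appear in the Software list below). As a minimal sketch, assuming a working MPI installation, the following C program has each process report its rank and the node it is running on; the exact compile and launch commands (for example, mpicc and mpirun) depend on the MPI stack and scheduler configured on the cluster.

  /* Minimal MPI example: each rank reports its identity and host. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size, name_len;
      char hostname[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);                      /* start the MPI runtime     */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* this process's rank       */
      MPI_Comm_size(MPI_COMM_WORLD, &size);        /* total number of processes */
      MPI_Get_processor_name(hostname, &name_len); /* node this rank landed on  */

      printf("Rank %d of %d running on %s\n", rank, size, hostname);

      MPI_Finalize();                              /* shut down the MPI runtime */
      return 0;
  }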

To learn about High-Performance Computing or Supercomputers, see Wikipedia.

Request Access

Departmental HPC clusters are used to meet the specific needs of individual departments; their availability varies by department.

Faculty can request access to the Blueshark cluster by entering a support request.

For an entire course to get access, the course instructor must request access and provide the TRACKS accounts of everyone who needs it.

For a guest, visitor, or student to get access, a faculty sponsor is required.

Mailing List

To join the mailing list, click Subscribe.

Hardware

Front view of the Blueshark HPC cluster

The configuration of each of the 50 general compute nodes is:

  • IBM System x iDataPlex dx360 M3/M4
  • 2 x Hexa-Core Intel Xeon X5650 @ 2.67GHz CPUs
  • 24GB of RAM
  • 250GB SATA HDD
  • 1 Gb Ethernet Interconnect

There are 11 GPU compute nodes with this configuration:

  • Dell PowerEdge C4130
  • 2 x 10-core Intel Xeon E5-2650 @ 2.30GHz
  • 131GB of RAM
  • 1TB SATA HDD
  • 4 x Nvidia Tesla K40m
  • 1 Gb Ethernet Interconnect
  • Mellanox InfiniBand Interconnect

The 13 big-memory compute nodes are configured as follows:

  • SuperMicro 1u Servers
  • 2 x 10-core Intel Xeon E5-2650 @ 2.30GHz
  • 131GB of RAM
  • 120GB SATA HDD
  • 1 Gb Ethernet Interconnect

The storage node configuration is:

  • Dell PowerEdge R630
  • Dell PowerVault MD3060
  • 60 x 4TB HDD
  • 240TB raw capacity

The head node configuration is:

  • IBM System x iDataPlex dx360 M3
  • 2 x 4-core Intel Xeon X5550 @ 2.67GHz
  • 24GB of RAM
  • LSI MegaRAID SAS Controller
  • Storage Expansion Unit
  • 8 x 1 TB 7200RPM SAS Hot-Swap HDD
  • 10 GbE link to compute nodes via Chelsio T310 10GbE Adapter
  • Redundant Hot-swap Power Supplies

Other hardware resources:

  • 2 x BNT 48-port 1GbE switches with dual 10GbE

Software

The HPC software environment is built on CentOS 7 Linux.

Software Installed:

  • ATLAS - Automatically Tuned Linear Algebra Software
  • BLAS - Basic Linear Algebra Subprograms
  • Boost C++
  • CUDA - Nvidia CUDA Programming
  • DMTCP - Distributed MultiThreaded CheckPointing
  • Environmental Modeling System
  • Fluent
  • Gaussian
  • GNU Compilers - C/C++/Fortran
  • Java
  • LAPACK - Linear Algebra Package
  • MPI - Message Passing Interface - MPICH and OpenMPI
  • NetCDF - Network Common Data Form
  • Octave - GNU Octave
  • Paraview - Data Analysis and Visualization
  • PETSc - Portable, Extensible Toolkit for Scientific Computation
  • Portland Group Compiler - C/C++/Fortran/MPICH
  • Python
  • SAGE Math
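
Several of the numerical libraries above (ATLAS, BLAS, LAPACK) are normally called from C or Fortran through their standard interfaces. As an illustrative sketch only, the short C program below computes a dot product through the CBLAS interface; the cblas.h header and the link flags shown in the comment are assumptions that depend on how the libraries are installed on the cluster.

  /* Minimal CBLAS example: double-precision dot product.
   * Link flags vary by BLAS implementation, e.g. something like:
   *   gcc dot.c -o dot -lcblas -latlas
   */
  #include <stdio.h>
  #include <cblas.h>

  int main(void)
  {
      double x[3] = {1.0, 2.0, 3.0};
      double y[3] = {4.0, 5.0, 6.0};

      /* cblas_ddot computes x . y = 1*4 + 2*5 + 3*6 = 32 */
      double result = cblas_ddot(3, x, 1, y, 1);

      printf("dot product = %f\n", result);
      return 0;
  }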