AI.Panther Supercomputer - HPC

High Performance Computing (HPC)

The AI.Panther HPC Cluster at Florida Tech is an Aspen Systems cluster composed of ? compute nodes (a total of ? processor cores and ?GB RAM), ? GPU nodes, 3 storage nodes, 1 login node, and 1 head node. The AI.Panther cluster was funded by a National Science Foundation Major Research Instrumentation grant.

An HPC cluster is a set of computers working in parallel that performs like a supercomputer at a fraction of the price. The cluster is made up of nodes connected by a high-speed network that together carry out computationally intensive tasks.
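
A minimal sketch of this model is the MPI "hello world" below, written in C: each process reports its rank and the node it is running on. It assumes an MPI implementation such as MPICH or OpenMPI (both listed under Software below); the exact compiler wrapper and launcher names on AI.Panther are not confirmed here.

    /* Minimal MPI example: every process prints its rank, the total
       number of processes in the job, and the node it is running on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                   /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's ID */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* processes in the job */
        MPI_Get_processor_name(name, &name_len);  /* hostname of this node */

        printf("Process %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc hello.c -o hello and launched with, for example, mpirun -np 4 ./hello, the processes may be spread across different nodes depending on how the scheduler places them.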

To learn about High Performance Computing or Supercomputers, see Wikipedia.

Request Access

Faculty can request access to the AI.Panther cluster by submitting a support request.

For an entire course to get access, the course instructor must request access and provide all TRACKS accounts that need access.

A guest, visitor, or student must have a faculty sponsor to get access to the cluster.

Mailing List

To join the mailing list, click Subscribe.

Hardware

Head / Login Nodes

  • 4 x Intel Xeon Cascade Lake Silver 4210R, 2.4 GHz 10-Core CPUs
  • 12 x 8GB = 96GB RAM
  • 4 x 960GB Enterprise SSD

A100 PCIe GPU Nodes

  • 4 x AMD EPYC 7402P Rome, 2.8 GHz 24-Core CPUs
  • 32 x 32GB = 1024GB RAM
  • 4 x 960GB Enterprise SSD
  • 16 x NVIDIA Tesla Ampere A100, 40GB memory each, PCIe
  • 24 x A100 NVLink Bridge

A100 SXM4 GPU Nodes

  • 8 x AMD EPYC 7402 Rome, 2.8 GHz 24-Core CPUs
  • 64 x 32GB = 2048GB RAM
  • 4 x 960GB Enterprise SSD
  • 16 x NVIDIA Tesla Ampere A100, 40GB memory each, on 4 baseboards of 4 A100s connected via NVLink

High Memory Compute Nodes

  • 32 x Intel Xeon Cascade Lake Gold 6240R, 2.4GHz, 24-Core CPUs
  • 192 x 32GB = 6144GB RAM
  • 16 x 960GB Enterprise SSD

ZFS User/Home Fileserver (78TB available after overhead)

  • 2 x Intel Xeon Cascade Lake Silver 4215R, 3.20GHz, 8-Core CPUs
  • 12 x 16GB = 192GB RAM
  • 2 x 240GB Enterprise SSD
  • 8 x 14TB SAS HDD configured as RAIDZ2
  • 2 x P4800X 375GB Optane SSDs

ZFS Archive Fileserver (420TB available after overhead)

  • 2 x Intel Xeon Cascade Lake Silver 4215R, 3.20GHz, 8-Core CPUs
  • 12 x 32GB = 384GB RAM
  • 2 x 240GB Enterprise SSD
  • 36 x 16TB SATA HDD configured as four RAIDZ2 arrays
  • 2 x P4800X 375GB Optane SSDs

Network Switches

  • One externally managed HDR IB switch
  • One managed 1GbE switch
  • One 25Gb (18 port)/100Gb (4 port) Ethernet switch
  • Nodes connect to the 25GbE switch via 25Gb Ethernet and also have dual-port 1GbE with IPMI and an HDR100 InfiniBand connection. 100GbE links from the 25GbE switch are available for future expansion.

Software

The HPC software environment runs on Ubuntu Linux.

Software Installed:

  • ATLAS - Automatically Tuned Linear Algebra Software
  • BLAS - Basic Linear Algebra Subprograms (a short usage sketch follows this list)
  • Boost C++
  • CUDA - Nvidia CUDA Programming
  • DMTCP - Distributed MultiThreaded CheckPointing
  • Environmental Modeling System
  • Fluent
  • Gaussian
  • GNU Compilers - C/C++/Fortran
  • Java
  • LAPACK - Linear Algebra Package
  • MPI - Message Passing Interface - MPICH and OpenMPI
  • NetCDF - Network Common Data Form
  • Octave - GNU Octave
  • Paraview - Data Analysis and Visualization
  • PETSc - Portable, Extensible Toolkit for Scientific Computation
  • Portland Group Compiler - C/C++/Fortran/MPICH
  • Python
  • SAGE Math
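
As a hedged illustration of using one of the libraries above, the C sketch below computes a dot product through the CBLAS interface to BLAS. The header name and link flags vary by installation (for example -lcblas, -lblas, or ATLAS's libraries), so treat them as assumptions rather than the cluster's confirmed setup.

    /* Dot product of two vectors via the CBLAS interface to BLAS. */
    #include <stdio.h>
    #include <cblas.h>

    int main(void)
    {
        double x[3] = {1.0, 2.0, 3.0};
        double y[3] = {4.0, 5.0, 6.0};

        /* 1*4 + 2*5 + 3*6 = 32 */
        double dot = cblas_ddot(3, x, 1, y, 1);

        printf("x . y = %f\n", dot);
        return 0;
    }

A typical build line is gcc dot.c -o dot -lcblas; the exact library to link against depends on which BLAS implementation is installed.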