
AI.Panther Supercomputer - HPC

High Performance Computing (HPC)

The AI.Panther HPC Cluster at Florida Tech is an Aspen Systems cluster comprising 16 compute nodes (a total of 768 processor cores and 6,144 GB of RAM), 8 GPU nodes each containing 4 NVIDIA A100 GPUs, 3 storage nodes, 1 login node, and 1 head node. The AI.Panther cluster was funded by a National Science Foundation Major Research Instrumentation (MRI) grant.

An HPC cluster is a set of computers working in parallel that performs like a supercomputer at a fraction of the price. The cluster consists of nodes connected by a high-speed network that together carry out computationally intensive tasks.

To learn about High Performance Computing or Supercomputers, see Wikipedia.
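
As a concrete illustration of this model, the short MPI program below is a minimal sketch, assuming one of the MPI stacks installed on the cluster (MPICH or OpenMPI, listed under Software) and the GNU C compiler. Each process started by the MPI launcher reports its rank and the node it is running on, which is the basic pattern that parallel applications on the cluster build upon.

    /* hello_mpi.c - each MPI process reports its rank and the node it runs on. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
        MPI_Get_processor_name(host, &name_len); /* node this process landed on */

        printf("Process %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with mpirun, the same executable runs as many cooperating processes spread across the cluster's nodes; the exact launch procedure depends on the site's scheduler configuration.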

Request Access

Faculty can request access to the AI.Panther cluster by submitting a support request.

For an entire course, the instructor must request access and provide the TRACKS accounts of all students who need it.

Guests, visitors, and students must have a faculty sponsor to get access to the cluster.

Mailing List

To join the mailing list, click Subscribe.

Hardware

Head / Login Nodes

  • 2 x Intel Xeon Cascade Lake Silver 4210R, 2.4 GHz 10-Core CPUs
  • 48GB RAM
  • 2 x 960GB Enterprise SSD RAID-1

A100 PCIe GPU Nodes

  • 2 x AMD EPYC 7402P Rome, 2.8 GHz 24-Core CPUs
  • 256GB RAM
  • 1 x 960GB Enterprise SSD
  • 4 x NVIDIA Tesla Ampere A100 40GB Memory, PCIe
  • 2-slot A100 NVLink Bridge

A100 SXM4 GPU Nodes

  • 2 x AMD EPYC 7402 Rome, 2.8 GHz 24-Core CPUs
  • 512GB RAM
  • 1 x 960GB Enterprise SSD
  • 4 x NVIDIA Tesla Ampere A100 40GB Memory with NVLink

ZFS User/Home Fileserver (109TB available after overhead)

  • 2 x Intel Xeon Cascade Lake Silver 4215R, 3.20GHz, 8-Core CPUs
  • 192GB RAM
  • 2 x 240GB Enterprise SSD RAID-1
  • 8 x 14TB SAS HDD configured as RAID-Z2
  • 2 x 375GB Optane PCI-E L2ARC/ZIL

ZFS Archive Fileserver (420TB available after overhead)

  • 2 x Intel Xeon Cascade Lake Silver 4215R, 3.20GHz, 8-Core CPUs
  • 384GB RAM
  • 2 x 240GB Enterprise SSD RAID-1
  • 36 x 16TB SATA HDD configured as RAID-Z2
  • 2 x 375GB Optane PCI-E L2ARC/ZIL

Network Switches

  • One externally managed HDR IB switch
  • One managed 1GbE switch
  • One 25Gb (18 port)/100Gb (4 port) Ethernet switch
  • Nodes connect to the 25GbE switch via 25Gb Ethernet, with dual-port 1GbE (including IPMI) and an HDR100 IB connection. 100GbE links from the 25GbE switch are available for future expansion.

Software

The HPC software environment runs on Ubuntu Linux.

Software Installed:

  • ATLAS - Automatically Tuned Linear Algebra Software
  • BLAS - Basic Linear Algebra Subprograms
  • Boost C++
  • CUDA - NVIDIA CUDA Programming (see the GPU example after this list)
  • DMTCP - Distributed MultiThreaded CheckPointing
  • Environmental Modeling System
  • Fluent
  • Gaussian
  • GNU Compilers - C/C++/Fortran
  • Java
  • LAPACK - Linear Algebra Package
  • MPI - Message Passing Interface - MPICH and OpenMPI
  • NetCDF - Network Common Data Form
  • Octave - GNU Octave
  • Paraview - Data Analysis and Visualization
  • PETSc - Portable, Extensible Toolkit for Scientific Computation
  • Portland Group Compiler - C/C++/Fortran/MPICH
  • Python
  • SAGE Math
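
The CUDA toolkit listed above targets the A100 GPUs in the GPU nodes. The following minimal sketch, written against the standard CUDA runtime C API (not a site-specific tool), lists the GPUs visible on the node where it runs; on a GPU node it should report the four A100s described in the Hardware section.

    /* gpu_check.c - list the CUDA devices visible on the current node. */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable device detected\n");
            return 1;
        }
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);   /* name, memory size, etc. */
            printf("GPU %d: %s, %.1f GB memory\n",
                   i, prop.name, prop.totalGlobalMem / 1073741824.0);
        }
        return 0;
    }

It can be compiled with, for example, nvcc gpu_check.c -o gpu_check, or with gcc plus the CUDA include and cudart library paths.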