# DiRAC Extreme Scaling (Tesseract)
DiRAC Extreme Scaling (also known as Tesseract) is available to industry, commerce and academic researchers. For information on how to get access to the system, please see the DiRAC website.
The Tesseract compute service is based around an HPE SGI 8600 system with 1476 compute nodes. There are 1468 standard compute nodes, each with two 2.1 GHz, 12-core Intel Xeon (Skylake) Silver 4116 processors and 96 GB of memory. In addition, there are 8 GPU compute nodes, each with two 2.1 GHz, 12-core Intel Xeon (Skylake) Silver 4116 processors, 96 GB of memory, and 4 NVIDIA V100 (Volta) GPU accelerators connected over NVLink. All compute nodes are connected together by a single Intel Omni-Path fabric, and all nodes access the 3 PB Lustre file system. As well as the fast, parallel Lustre storage, Tesseract also provides a tiered storage solution based on zero-watt disk storage and tape storage built using HPE DMF.
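To put the node specifications above in aggregate terms, the totals below are derived purely from the figures quoted in this paragraph (1468 standard nodes plus 8 GPU nodes, each with two 12-core processors and 96 GB of memory, and four V100s per GPU node); this is an illustrative back-of-the-envelope calculation, not an official capacity statement:

```python
# Aggregate capacity derived from the per-node specifications quoted above.
CPU_NODES = 1468           # standard compute nodes
GPU_NODES = 8              # GPU compute nodes
CORES_PER_NODE = 2 * 12    # two 12-core Xeon Silver 4116 processors per node
MEM_PER_NODE_GB = 96       # memory per node
GPUS_PER_GPU_NODE = 4      # NVIDIA V100 accelerators per GPU node

total_nodes = CPU_NODES + GPU_NODES
total_cores = total_nodes * CORES_PER_NODE
total_mem_gb = total_nodes * MEM_PER_NODE_GB
total_gpus = GPU_NODES * GPUS_PER_GPU_NODE

print(total_nodes)    # 1476 compute nodes in total
print(total_cores)    # 35424 CPU cores
print(total_mem_gb)   # 141696 GB of memory (~138 TB)
print(total_gpus)     # 32 V100 GPUs
```

These totals can be useful as a sanity check when sizing jobs against the whole machine.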
This documentation covers:
- Tesseract User Guide: general information on how to use Tesseract
- Software Libraries: notes on compiling against specific libraries on Tesseract. Most libraries work as expected, so no additional notes are required; however, a small number require specific documentation.
Information on using the SAFE web interface for managing accounts and reporting on your usage of Tesseract (and DiRAC as a whole) can be found in the DiRAC SAFE Documentation.
- Connecting to Tesseract
- Data Transfer Guide
- File and Resource Management
- Application Development Environment
- Running Jobs on Tesseract
- Using the Tesseract GPU Nodes
- References and further reading