Our scientific computing service enables you to perform complex scientific calculations yourself. We offer a wide range of hardware and software infrastructure as well as methodological advice and services for mathematical modelling, simulation and optimisation, intelligent data analysis and AI methods.

What does scientific computing mean at Leipzig University?

Whether for scientific simulations to develop pharmaceutical agents, mixed-integer optimisation to improve municipal energy systems, automated handwriting recognition in ancient Near Eastern documents using neural networks, Big Data analyses or much more: fast processors, sufficient RAM and specialised hardware such as modern GPUs with dedicated AI-acceleration cores are indispensable.

All these resources are at your disposal, along with a high-performance network and several hundred terabytes of storage for source data and results.

Our offer is aimed both at researchers who need simple, intuitive web-based access to accelerated hardware at short notice and at HPC experts who expect a well-maintained and well-administered HPC environment.

Our service is rounded off by a large, constantly growing range of up-to-date software for a wide variety of application areas from diverse disciplines.

In addition to theory and experiment, the broad spectrum of methods of scientific computing opens up new approaches to scientific knowledge in all disciplines and is thus an indispensable part of everyday scientific life today.

Interactive Scientific Computing in the Web Browser

Easy access to our hardware and software resources is available via the Jupyter and RStudio web interfaces.

With these interactive user interfaces and integrated development environments, you can define your scientific problems, run them and visualise the results directly.

Ideal for evaluating new ideas or methods and preparing more complex simulations or data analyses.

The Jupyter service combines common data science and AI tools in a simple user interface.

Based on Python and a wide range of associated libraries, you can train neural networks, analyse data and prepare sophisticated simulation setups.

You can create your own reusable environments with all the tools you need; a minimal example of a typical notebook cell follows the list below.

The service offers users:

  • Initial consultation and user documentation
  • Extensive software catalogue (e.g. TensorFlow, PyTorch, OpenCV, Keras)
  • Access from within the network of Leipzig University
  • Access to GPUs to accelerate calculations
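
As an illustration only, the following sketch shows what a typical notebook cell on the Jupyter service might look like: it checks whether a GPU has been allocated and trains a small neural network on synthetic data. It assumes an environment with PyTorch installed; the environments and library versions actually on offer are listed in the user documentation.

    import torch
    import torch.nn as nn

    # Use an allocated GPU if the selected environment provides one, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("Running on:", device)

    # A tiny fully connected classifier trained on random data, purely to exercise the hardware.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(256, 64, device=device)          # synthetic input features
    y = torch.randint(0, 10, (256,), device=device)  # synthetic class labels

    for epoch in range(5):
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()
        print(f"epoch {epoch}: loss = {loss.item():.4f}")

The same pattern carries over to real data sets stored on the cluster file systems; only the data-loading step changes.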

RStudio provides a comprehensive integrated development environment for the R programming language for statistical computing. The service gives you access to our hardware resources and to the extensive software library.

The service offers users:

  • Auto-completion, automatic indentation, syntax highlighting, code folding, integrated help and information on objects in the working environment.
  • View and edit data sets
  • Combine scripts, data and other files into projects (.Rproj)
  • Version management with Git
  • Create reports from within RStudio using knitr or Sweave
  • Graphical debugger
  • Direct compilation and integration of code in C, C++ or Fortran

High-Performance Computing on the Command Line (Slurm)

Our SC infrastructure allows experts to use solutions optimised for their specific requirements on the latest hardware and software. These include Big Data workflows, the acceleration of complex, parallel simulations and data analyses, and the training of neural networks. We offer researchers in the region direct access to our Linux-based clusters and to a wide range of software; a sample job script is sketched after the list below.

The service offers users:

  • Initial consultation and user documentation to assist with getting started in high-performance computing
  • Use of a shared HPC infrastructure
  • User access from the network of Leipzig University
  • Access to scratch storage (for input and output data as well as intermediate results)
  • Access to GPUs (max. 8)
  • Access to an extensive software environment (including the Intel Compiler Suite, Matlab, Gaussian, Rosetta, TensorFlow, JupyterLab)
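
For illustration, a batch job on the Slurm-managed clusters could be described by a script along the following lines. Slurm accepts any interpreted script with a shebang line, so this sketch uses Python throughout; the partition name, resource limits and printed environment variables are placeholders, and the authoritative values and submission details are described in the SC-Knowledgebase.

    #!/usr/bin/env python3
    # The #SBATCH lines below are read by Slurm when the script is submitted with sbatch.
    # Partition name, resource requests and time limit are placeholders for illustration.
    #SBATCH --job-name=example-job
    #SBATCH --partition=clara
    #SBATCH --nodes=1
    #SBATCH --cpus-per-task=8
    #SBATCH --gres=gpu:1
    #SBATCH --mem=32G
    #SBATCH --time=02:00:00

    # Everything below runs as an ordinary Python program on the allocated compute node.
    import os
    import socket

    print("Job running on host:", socket.gethostname())
    print("Allocated CPUs:", os.environ.get("SLURM_CPUS_PER_TASK"))
    print("Visible GPUs:", os.environ.get("CUDA_VISIBLE_DEVICES"))

Such a script would be submitted with sbatch and monitored with squeue; software from the catalogue is typically made available beforehand via environment modules (module load ...).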

Software for scientific calculations

We offer our users a wide range of current scientific software.

This includes the most common simulation packages in the natural sciences, but also tools for modelling and evaluating numerical and text-based data in other disciplines. In addition, we offer a wide range of developer tools for various programming languages. The associated libraries for fully exploiting the parallel hardware architecture are of course also available.

We offer a very large selection of pre-installed software, and the range is constantly being expanded according to demand. The Scientific Computing service team is happy to receive your individual requests.

The following sample represents around 5% of the total software on offer:

  • GROMACS
  • Rosetta
  • Amber
  • Gaussian
  • LAMMPS
  • ORCA
  • Matlab
  • C/C++/Fortran compilers and associated libraries
  • CUDA
  • Python/Perl/Java/R/Lua/Go
  • TensorFlow
  • OpenCV
  • various MPI libraries

To increase flexibility and compatibility, many of the software packages are available in several different versions.

To the entire current software stack

Hardware resources overview

The Clara cluster can be used for sophisticated simulation and optimisation calculations as well as for training neural networks.

  • 31 servers with a total of 992 CPU cores, 20 TB RAM and 216 graphics cards.
  • 8 servers each contain 4 NVIDIA Tesla V100 graphics cards.
  • 23 servers each contain 8 NVIDIA RTX 2080 Ti graphics cards.

Technical data

  • 31 Gigabyte - AMD(R) Zen (14nm) based systems
    • CPU: 1x AMD(R) EPYC(R) 7551P @ 2.0GHz - Turbo 3.0GHz (32 CPU cores)
    • Memory: 512 GB
    • Local scratch space: 3.5 TB NVMe-based
    • Network: 100Gbit/s Infiniband
    • OS: CentOS 7

Nodes with single-precision GPUs

  • 23 Nodes
  • GPU: 8x Nvidia RTX 2080 Ti per node
    • 4352 CUDA cores
    • 544 Tensor cores
    • 11 GB GDDR6

Nodes with double-precision GPUs

  • 8 Nodes
  • GPU: 4x Nvidia Tesla V100 per node
    • 5120 CUDA cores
    • 640 Tensor cores
    • 32GB ECC HBM2 RAM

The Paula cluster is the ideal system for resource-intensive computations, which can be further accelerated by the available modern GPUs.

  • 12 servers with 1,536 CPU cores, 12 TB RAM, and 96 graphics cards

Technical data

  • 12 MegWare Saxonid C128 GPU nodes
  • CPU: 2x AMD EPYC 7713 @ 2.0GHz - Turbo 3.7GHz (64 CPU cores)
  • Memory: 1TB
  • Network: 100Gbit/s Infiniband
  • OS: Rocky Linux 8
  • GPU: 8x NVIDIA Tesla A30
    • 10,752 CUDA cores
    • 336 Tensor cores
    • 24 GB HBM2 RAM

The Paul cluster is suitable for classic HPC applications. A total of over 4000 CPU cores are available. The main memory comprises a total of 16 TB.

  • 32 servers with a total of 4096 CPU cores and 16 TB RAM.

Technical data

  • 32 MegWare Saxonid C128 CPU nodes
    • CPU: 2x AMD EPYC 7713 @ 2.0GHz - Turbo 3.7GHz (64 cores each)
    • Memory: 512GB
    • Network: 100Gbit/s Infiniband
    • OS: Rocky Linux 8

Special-purpose queue for long-running calculations.

  • 10 Supermicro servers with a combined total of over 320 CPU cores and 2.5 TB RAM.

Technical data

  • 10 Supermicro - AMD Zen-based systems
    • CPU: 2x AMD(R) EPYC(R) 7351 @ 2.4GHz - Turbo 2.9GHz (16 CPU cores each)
    • Memory: 256 GB
    • Network: 56Gbit/s Infiniband
    • OS: Rocky Linux 8

Sirius is designed for applications with very high main-memory requirements, e.g. those based on in-memory databases.

  • 1 server with 128 cores and 6 TB RAM is available.

Technical data

  • 1 Huawei RH8100 V3 - Intel Haswell-based system
    • CPU: 8x Intel(R) Xeon(R) CPU E7-8860 v3 @ 2.20GHz - Turbo 3.2GHz (16 CPU cores each)
    • Memory: 6 TB
    • Network: 56 Gbit/s Infiniband
    • OS: Rocky Linux 8

Access to the infrastructure for scientific calculations

As a researcher at Leipzig University, you have access to a powerful IT infrastructure.

To gain access to this infrastructure, please use our form.

Request SC Infrastructure

After you have authenticated yourself with your Leipzig University user account, you can select the systems you would like to work with. You can also provide us with information about your project context and technical parameters (e.g. required storage capacity) so that we can provide you with the best possible working environment.

When you apply for SC Infrastructure for the first time, an SC user account is automatically created for you and sent to you by e-mail. You can later use this SC account to log on to the systems provided. After your request has been reviewed, your account will be activated for the requested resources. You will also be notified of the activation by e-mail.

If you need help (e.g. with error messages), you can use our form.

Request SC Support

To view or change your personal SC user information, please use the Scientific Computing team's self-service portal.

To the self-service portal

For further information, please consult our expert documentation (SC-Knowledgebase).

To the SC-Knowledgebase