PyMAPDL on SLURM HPC clusters#

Requirements#

Using PyMAPDL in an HPC environment managed by the SLURM scheduler has certain requirements:

  • An Ansys installation must be accessible from all the compute nodes. This usually means that the Ansys installation directory is on a shared drive or directory. Your HPC cluster administrator should provide you with the path to the Ansys directory.

  • A compatible Python installation must be accessible from all the compute nodes. For compatible Python versions, see Install PyMAPDL.

Additionally, you must perform a few key steps to ensure efficient job execution and resource utilization. Subsequent topics describe these steps.

Check the Python installation#

The PyMAPDL Python package (ansys-mapdl-core) must be installed in a virtual environment that is accessible from the compute nodes.

To see where your Python distribution is installed, use this code:

user@machine:~$ which python3
/usr/bin/python3

To print the version of Python you have available, use this code:

user@machine:~$ python3 --version
Python 3.9.16

You should be aware that your machine might have other Python versions installed. To find out whether those installations are already in the PATH environment variable, you can press the Tab key to use autocomplete:

user@machine:~$ which python3[TAB]
python3             python3-intel64     python3.10-config   python3.11          python3.12          python3.8           python3.8-intel64   python3.9-config
python3-config      python3.10          python3.10-intel64  python3.11-config   python3.12-config   python3.8-config    python3.9
user@machine:~$ which python3.10
/usr/bin/python3.10

You should use a Python version that is compatible with PyMAPDL. For more information, see Install PyMAPDL.
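To check quickly whether a given interpreter meets a minimum version before creating the virtual environment, you can run a short snippet like the following. This is a minimal sketch; the minimum version used here (3.9) is an assumption for illustration, so check Install PyMAPDL for the authoritative list of supported versions:

```python
import sys

# Minimum version is an assumption for illustration; see the
# "Install PyMAPDL" documentation for the supported versions.
MIN_VERSION = (3, 9)

if sys.version_info[:2] < MIN_VERSION:
    raise SystemExit(
        f"Python {sys.version.split()[0]} is too old for PyMAPDL "
        f"(need {MIN_VERSION[0]}.{MIN_VERSION[1]}+)"
    )
print(f"Python {sys.version.split()[0]} is compatible")
```

Run it with the interpreter you plan to use (for example, python3.10 check_version.py) to confirm the choice before building the virtual environment.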

The which command returns the path where the Python executable is installed. You can use that executable to create your own Python virtual environment in a directory that is accessible from all the compute nodes. For most HPC clusters, the /home/user directory is generally available to all nodes. You can then create the virtual environment in the /home/user/.venv directory:

user@machine:~$ python3 -m venv /home/user/.venv

After activating the virtual environment, you can install PyMAPDL.

Install PyMAPDL#

To install PyMAPDL in the activated virtual environment, run the following commands:

user@machine:~$ source /home/user/.venv/bin/activate
(.venv) user@machine:~$ pip install ansys-mapdl-core
Collecting ansys-mapdl-core
Downloading ansys_mapdl_core-0.68.1-py3-none-any.whl (26.9 MB)
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 26.9/26.9 MB 37.3 MB/s eta 0:00:00
Collecting pexpect>=4.8.0
Using cached pexpect-4.9.0-py2.py3-none-any.whl (63 kB)
Collecting click>=8.1.3
...

To test whether this virtual environment is accessible from the compute nodes, run this test.sh bash script:

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

# Commands to run
echo "Testing Python!"
source /home/user/.venv/bin/activate
python -c "from ansys.mapdl import core;print(f'PyMAPDL version {core.__version__} was successfully imported.')"

Then you can run that script using this command:

user@machine:~$ srun test.sh

This command might take a minute or two to complete, depending on the amount of free resources available in the cluster. On the console, you should see this output:

Testing Python!
PyMAPDL version 0.68.1 was successfully imported.

If you see an error in the output, see Troubleshooting, especially Python virtual environment is not accessible.

Submit a PyMAPDL job#

To submit a PyMAPDL job, you must create two files:

  • Python script with the PyMAPDL code

  • Bash script that activates the virtual environment and calls the Python script

Python script: pymapdl_script.py

from ansys.mapdl.core import launch_mapdl

# Number of processors must be lower than the
# number of CPUs allocated for the job.
mapdl = launch_mapdl(nproc=10)

mapdl.prep7()
n_proc = mapdl.get_value("ACTIVE", 0, "NUMCPU")
print(f"Number of CPUs: {n_proc}")

mapdl.exit()
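The nproc value is hard-coded in the preceding script. If you want the script to adapt to the resources that SLURM actually allocated, you can read the SLURM_CPUS_ON_NODE environment variable, which SLURM sets inside a job. This is a sketch; slurm_cpu_count is a hypothetical helper for illustration, not part of PyMAPDL:

```python
import os


def slurm_cpu_count(default: int = 1) -> int:
    """Return the number of CPUs SLURM allocated on this node.

    Hypothetical helper: reads the SLURM_CPUS_ON_NODE environment
    variable, which SLURM sets inside a job, and falls back to
    ``default`` when the variable is not set (outside SLURM).
    """
    return int(os.environ.get("SLURM_CPUS_ON_NODE", default))
```

You could then call launch_mapdl(nproc=slurm_cpu_count()) so that the number of MAPDL processors matches the job allocation.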

Bash script: job.sh

#!/bin/bash
source /home/user/.venv/bin/activate
python pymapdl_script.py

To start the simulation, use this command:

user@machine:~$ srun job.sh

The bash script lets you customize the environment before running the Python script. It can perform tasks such as setting environment variables, moving to different directories, and printing messages to ensure your configuration is correct. However, this bash script is not mandatory. You can avoid the job.sh bash script if the virtual environment is activated and you pass all the environment variables to the job:

user@machine:~$ source /home/user/.venv/bin/activate
(.venv) user@machine:~$ srun --export=ALL python pymapdl_script.py

The --export=ALL argument might not be needed, depending on the cluster configuration. Note that srun options must precede the executable. Furthermore, you can omit the python call in the preceding command if you include a Python shebang (#!/usr/bin/env python3) in the first line of the pymapdl_script.py script and make it executable (chmod +x pymapdl_script.py).

user@machine:~$ source /home/user/.venv/bin/activate
(.venv) user@machine:~$ srun --export=ALL pymapdl_script.py

If you prefer to run the job in the background, you can use the sbatch command instead of the srun command. In this case, however, the bash script is required:

user@machine:~$ sbatch job.sh
Submitted batch job 1
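When submitting with sbatch, you can embed the resource requests in the bash script itself through #SBATCH directives, as in the earlier test.sh script. Here is a sketch; the job name and resource values are placeholders that you should adapt to your cluster:

```shell
#!/bin/bash
#SBATCH --job-name=pymapdl_job
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

# Activate the virtual environment and run the PyMAPDL script
source /home/user/.venv/bin/activate
python pymapdl_script.py
```

Unlike srun, sbatch reads the #SBATCH lines from the script, so the resource requests travel with the job file.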

Here is the expected output of the job:

Number of CPUs: 10.0
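The trailing .0 appears because mapdl.get_value() returns a floating-point number. If you need an integer, cast the value; a minimal sketch, with the queried value hard-coded here for illustration:

```python
# mapdl.get_value() returns a float, which is why the job output shows
# "10.0". The value is hard-coded here to stand in for the query result.
n_proc = 10.0
print(f"Number of CPUs: {int(n_proc)}")  # prints "Number of CPUs: 10"
```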

Examples#

For an example that uses a machine learning genetic algorithm in an HPC system managed by the SLURM scheduler, see Genetic algorithms and PyMAPDL.