LS-DYNA

The Ansys LS-DYNA installation on Hamilton includes a set of executables that run in different ways:

SMP (shared-memory processing) is suitable for small jobs that can run on a few CPU cores on one node.

MPP (massively parallel processing, using MPI) is suitable for medium jobs that can run on 1 node.

Hybrid is a combination of SMP and MPP - each MPP task itself uses SMP to parallelise.  It is suitable for large jobs that run on more than 1 node.

The names of the LS-DYNA executables include _smp, _mpp or _hyb to identify their type.  There are also single- and double-precision versions of each, indicated by _sp or _dp.  Full names are as in the example submission scripts for MPP and Hybrid jobs given below.  Note that the scroll bar below each section may be required to see the full width of some lines.
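If you want to check exactly which executables a given module provides, one way (a sketch only; the directory shown will depend on the module version) is to ask the module system what it adds to your PATH and then list that directory:

module show lsdyna/2023R2                                # prints the modulefile actions, including the directory prepended to PATH
ls <path_from_module_show> | grep -E '_(smp|mpp|hyb)'    # list the LS-DYNA executables by type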

The resources requested in these examples can be adjusted to suit.  When deciding on numbers of tasks and threads, it can be helpful to consider how CPU cores share memory on Hamilton.  For example, hybrid jobs may work better if each MPP task stays within one socket or NUMA region.  See the Systems page for information on Hamilton's hardware.   You may need to experiment to find an optimal arrangement for your computations.
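The sections below give MPP and Hybrid examples only.  As a rough sketch of an SMP job (the executable name lsdyna_dp_smp.e is an assumption based on the naming scheme above and should be checked against the module you load), a small shared-memory job might look like this:

#!/bin/bash

#SBATCH -p shared       # SMP jobs fit on one node, so 'shared' is usually appropriate
#SBATCH -N 1            # SMP cannot span nodes
#SBATCH -n 1            # a single task...
#SBATCH -c 4            # ...using several CPU cores via threads
#SBATCH --mem=10G       # Memory required
#SBATCH -t 0-00:15:00   # Time before termination

module purge
module load intel/2021.4 intelmpi/2021.6
module load lsdyna/2023R2

# Same MB-to-MWords conversion as in the scripts below
mem=`bc <<< "(($SLURM_MEM_PER_NODE/8)*0.5)/1"`"M"

# ncpu sets the number of SMP threads (executable name assumed - see note above)
lsdyna_dp_smp.e ncpu=${SLURM_CPUS_PER_TASK} memory=$mem I=<your_keyword_file>.k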

MPP example job script (for R2023)

#!/bin/bash 
 
#SBATCH -p shared      # Choose 'shared' for smaller jobs, 'multi' for 1 or more whole nodes, 'test' to use test node 
#SBATCH -N 1         # Number of nodes 
#SBATCH -n 2         # Number of MPI tasks, mpirun will automatically use this number below for MPP LS-DYNA 
#SBATCH -c 1         # Number of threads per MPI task (for MPP, this should = 1.
#                      There is a small speed-up with threading for MPP, not related to LS-DYNA)
# Number of MPI tasks (n) x number of threads per task (c) = number of CPUs allocated to the job.
# n*c cannot exceed the number of CPU cores available on the selected nodes
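# e.g. with the values above (n=2, c=1) the job is allocated 2 CPU cores in total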

#SBATCH --mem=50G     # Memory required per node 
#SBATCH -t 0-00:15:00 # Time before termination 
#SBATCH --mail-type=all # emails you an alert when a job starts or stops 
 
module purge 
module load intel/2021.4 intelmpi/2021.6
module load lsdyna/2023R2

 
# Memory must be converted from bytes to words for LS-DYNA. For R10 and beyond memory=memory2 
# The factor of 8 converts from MB to MWords and 0.5 is to use half of the available memory 
mem=`bc <<< "(($SLURM_MEM_PER_NODE/8)*0.5)/1"`"M"
   
 
mpirun lsdyna_dp_mpp.e memory=$mem memory2=$mem I=<your_keyword_file>.k     
 
# where lsdyna_dp_mpp.e is the name of the MPP executable (double-precision version)
# and <your_keyword_file>.k is the keyword file to be run

Hybrid example job script (R12.1)

#!/bin/bash 
  
#SBATCH -p multi     # Choose 'shared' for smaller jobs, 'multi' for 1 or more whole nodes, 'test' to use test node 
#SBATCH -N 2         # number of nodes 
#SBATCH -n 4         # Number of MPI tasks  -- minimum is 2 per node, i.e. one per socket (2 sockets per node)
#SBATCH -c 64        # Number of threads per MPI task  -- maximum is the number of cores per socket (64 for Hamilton)
#SBATCH --mem=50G     # Memory required per node 
#SBATCH -t 3-00:00:00   # Time before termination 
#SBATCH --mail-type=all  # sends emails when jobs start and stop

module purge 
module load intel/2021.4 intelmpi/2021.6
module load lsdyna/12.1.0

# Memory must be converted from bytes to words for LS-DYNA. For R10 and beyond memory=memory2
# Factor of 8 to convert to MWords, and 0.5 to use half of the available memory
mem=`bc <<< "(($SLURM_MEM_PER_NODE/8)*0.5)/1"`"M"       

mpirun -np ${SLURM_NTASKS} ls-dyna_hyb_d_R12_1_0_x64_centos78_ifort160_avx2_intelmpi-2018_sharelib ncpu=-${SLURM_CPUS_PER_TASK} memory=$mem memory2=$mem I=<your_keyword_file>.k

# where ls-dyna_hyb...sharelib is the name of the Hybrid executable
# and <your_keyword_file>.k is the keyword file to be run

Specifying memory for MPP and Hybrid jobs

LS-DYNA requires two memory specifications, 'memory' and 'memory2'.  Both are specified in units of MWords, i.e. MBytes/8.  For versions R10 and greater, they should both equal half the allocated memory per node.
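As a worked example (assuming Slurm's default of reporting SLURM_MEM_PER_NODE in MB), a request of --mem=50G in the scripts above works out as:

# --mem=50G gives SLURM_MEM_PER_NODE = 51200 (MB)
# (51200 / 8) * 0.5 = 3200 (MWords)
# so the job script sets memory=3200M and memory2=3200M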

For earlier versions, 'memory' and 'memory2' can be adjusted to improve the speed of the job.  'memory' is dedicated purely to "decomposing" the model into smaller parts to be solved separately, and therefore requires a larger amount of memory than is needed for solving each "decomposed" part ('memory2').

# For R9 and earlier, two memory values are specified.  Use memory=$mem1 and memory2=$mem2
# The example values below of 0.7 and 0.4 can be changed to achieve faster performance depending on the model
# For R10 and beyond, memory=memory2 


# Divide by 8 to convert from MB to MWords as required for LS-DYNA
# 0.7 is the fraction of memory to allocate for mem
mem=`bc <<< "(($SLURM_MEM_PER_NODE/8)*0.7)/1"`"M"


# For R9 and earlier only:
# 0.4 is the fraction of memory to allocate for mem2
mem2=`bc <<< "(($SLURM_MEM_PER_NODE/8)*0.4)/1"`"M"       
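The two values are then passed separately on the LS-DYNA command line, for example (a sketch only, reusing the MPP executable name from the R2023 script above; an R9-era installation will have a different executable name):

mpirun lsdyna_dp_mpp.e memory=$mem memory2=$mem2 I=<your_keyword_file>.k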

Other Options and Issues

There are numerous other settings and "fail-safes" that can be set if necessary or desired.  These are described in the LS-DYNA Manual Appendices O and Y (https://ftp.lstc.com/anonymous/outgoing/web/ls-dyna_manuals/R14/LS-DYNA_Manual_Volume_I_R14.pdf) and in https://www.dynamore.de/de/download/papers/forum11/entwicklerforum-2011/wang.pdf.  However, most of these can be ignored on stable computers for working LS-DYNA keyword files.

Our thanks to Anna Weatherburn for providing advice and example scripts for LS-DYNA.