Communicating MDCS jobs with SLURM do not finish correctly

We use MDCS with SLURM on a local HPC cluster, and in principle the integration of MDCS with SLURM has worked following the instructions found here. We had to apply a fix in the file communicatingJobWrapper.sh as described here.
However, communicating jobs now sometimes do not finish correctly, and I was able to track the problem down to the change above. Basically, the wrapper script hangs when trying to stop the SMPD:
$ tail Job32.log
[1]2017-03-09 10:56:19 | About to exit with code: 0
[3]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[0]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[3]2017-03-09 10:56:19 | About to exit MATLAB normally
[0]2017-03-09 10:56:19 | About to exit MATLAB normally
[3]2017-03-09 10:56:19 | About to exit with code: 0
[0]2017-03-09 10:56:19 | About to exit with code: 0
Stopping SMPD ...
srun --ntasks-per-node=1 --ntasks=3 /cm/shared/uniol/software/MATLAB/2016b/bin/mw_smpd -shutdown -phrase MATLAB -port 27223
srun: Job step creation temporarily disabled, retrying
This happens whenever a node has only a single CPU/core allocated (we use select/cons_res with CR_CPU_MEMORY). In that case the srun running in the background prevents the srun for the SMPD shutdown from allocating resources.
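For illustration, the deadlock can be reproduced outside of MDCS with a minimal batch script like the following sketch (hypothetical, assuming the same one-CPU-per-node allocation):
#!/bin/bash
#SBATCH --nodes=3
#SBATCH --ntasks=3
#SBATCH --ntasks-per-node=1

# The first step claims every CPU of the allocation and keeps running:
srun --ntasks=3 sleep 60 &

# With no free CPUs left, this second step reports
# "srun: Job step creation temporarily disabled, retrying"
# and blocks until the first step finishes:
srun --ntasks-per-node=1 --ntasks=3 hostname

wait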
I can think of only one way to resolve this problem: using OverSubscribe (which we currently have turned off). Is there another way? The JobWrapper script we use is attached.

Accepted Answer

Stefan Harfst on 10 Mar 2017
Found a solution:
Add the options --overcommit and --gres=none (the latter in case GRES use is configured in communicatingSubmitFcn.m) to the two srun commands in the communicatingJobWrapper.sh script. E.g., for the shutdown:
srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -shutdown -phrase MATLAB -port ${SMPD_PORT}
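For reference, here is a sketch of how the two patched srun calls could look in communicatingJobWrapper.sh. The launch line and the mpiexec placement are assumptions based on the typical structure of the MathWorks wrapper, not copied from the shipped script; FULL_SMPD, SMPD_PORT, and SLURM_JOB_NUM_NODES are the variables used above:
# Start one SMPD daemon per allocated node; --overcommit (and --gres=none,
# if GRESes are requested by the job) lets this step start even when all
# allocated CPUs are already busy (assumed launch form):
srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -phrase MATLAB -port ${SMPD_PORT} &

# ... mpiexec starts the MATLAB workers here ...

# Shut the daemons down again; the same flags keep this step from deadlocking:
srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -shutdown -phrase MATLAB -port ${SMPD_PORT}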
  2 Comments
Brian on 25 Aug 2022
This thread is 5 years old, but I am experiencing the same issue now that my organization's new HPC cluster uses SLURM (vs. SGE). I am running 2017b, I am unable to validate my cluster profile, and the above edits to the srun commands do not resolve the behavior.
MATLAB is not receiving a 'finished' signal even though the job goes to CG and then falls off the queue.
Thanks for any further assistance.
Stefan Harfst on 9 Sep 2022
If the jobs are completing on the cluster but MATLAB is not receiving the finished state, then I think you are facing a different problem. The problem we had was that some MATLAB jobs never terminated because the srun command to shut down the SMPD server got stuck.


More Answers (0)
