====== SLURM Commands ======

^ User Commands ^ PBS/Torque ^ Slurm ^
| Job submission | qsub <script> | sbatch <script> |
| Job submission | qsub -q express -l nodes=2:ppn=16 -l mem=64g | sbatch -p express -N 2 -c 16 --mem=64g |
| Job deletion | qdel <job_id> | scancel <job_id> |
| Job deletion | qdel ALL | scancel -u <username> |
| List jobs | qstat [-u user] | squeue [-u user] [-l for long format] |
| Job status | qstat -f <job_id> | jobinfo <job_id> |
| Job hold | qhold <job_id> | scontrol hold <job_id> |
| Job release | qrls <job_id> | scontrol release <job_id> |
| Node status | pbsnodes -l | sinfo -N -l |

^ Environment ^ PBS/Torque ^ Slurm ^
| Job ID | $PBS_JOBID | $SLURM_JOBID |
| Node list (entry per core) | $PBS_NODEFILE | $PBS_NODEFILE (still supported) |
| Slurm node list | ... | $SLURM_JOB_NODELIST (new format) |
| Submit directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR |

^ Job Specification ^ PBS/Torque ^ Slurm ^
| Script directive | #PBS | #SBATCH |
| Queue | -q | -p |
| Node count | -l nodes= | -N |
| Cores (CPUs) per node | -l ppn= | -c |
| Memory size | -l mem=16384 | --mem=16g OR --mem-per-cpu=2g |
| Wall clock limit | -l walltime= | -t |
| Standard output file | -o | -o |
| Standard error file | -e | -e |
| Combine stdout/err | -j oe | (use -o without -e; combining is the default behaviour) |
| Direct output to directory | -o | -o "directory/slurm-%j.out" |
| Event notification | -m abe | --mail-type=[BEGIN, END, FAIL, REQUEUE, or ALL] |
| Email address | -M <email> | --mail-user=<email> |
| Job name | -N <name> | --job-name=<name> |
| Job dependency | -W depend=afterok:<job_id> | --dependency=afterok:<job_id> |
| Node preference | ... | --nodelist=<nodes> AND/OR --exclude=<nodes> |
| Max jobs pool | -A [m16,m32,..,m512] | --qos=[max16jobs,max32jobs,..,max512jobs] |
| Account to charge | -W group_list=<group> | --account=<account> |
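Putting the Job Specification directives together, the following is a minimal sketch of a Slurm batch script, with the PBS/Torque equivalents shown as comments. The partition name ''express'', the job name, the email address, the output path, and the resource sizes are illustrative placeholders, not site defaults.

<code bash>
#!/bin/bash
#SBATCH -p express                    # queue/partition    (PBS: -q express)
#SBATCH -N 2                          # node count         (PBS: -l nodes=2)
#SBATCH -c 16                         # CPUs per task      (PBS: -l ppn=16, per the table's mapping)
#SBATCH --mem=64g                     # memory per node    (PBS: -l mem=64g)
#SBATCH -t 01:30:00                   # wall clock limit   (PBS: -l walltime=01:30:00)
#SBATCH -o results/slurm-%j.out      # stdout and stderr combined; %j expands to the job ID;
                                      # the directory must already exist
#SBATCH --job-name=demo               # job name           (PBS: -N demo)
#SBATCH --mail-type=END,FAIL          # event notification (PBS: -m ae)
#SBATCH --mail-user=user@example.com  # email address      (PBS: -M user@example.com)

# Environment variables provided by Slurm (PBS equivalents in the table above)
cd "$SLURM_SUBMIT_DIR"                # PBS: $PBS_O_WORKDIR
echo "Job $SLURM_JOBID running on: $SLURM_JOB_NODELIST"

srun ./my_program                     # launch the application on the allocated nodes
</code>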
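At the command line, the lifecycle commands from the first table map one-to-one. A short example session follows; the job ID ''12345'' and the script names are illustrative.

<code bash>
# Submit the script; sbatch prints the assigned job ID
$ sbatch job.sh
Submitted batch job 12345

# Monitor, hold/release, and cancel, mirroring the User Commands table
$ squeue -u $USER -l        # PBS: qstat -u $USER
$ scontrol hold 12345       # PBS: qhold 12345
$ scontrol release 12345    # PBS: qrls 12345
$ scancel 12345             # PBS: qdel 12345

# Chain a second job that starts only if the first one finishes successfully
$ sbatch --dependency=afterok:12345 postprocess.sh   # PBS: -W depend=afterok:12345
</code>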