Getting Started

To start using Param Yukti, you first need to create a user account, the details of which can be found on the account creation page. Once you have received your username and password, you can use Param Yukti as described below.

Accessing Param Yukti

Log in to Param Yukti from your Linux/Unix command line:

For internal users:

$ ssh username@hostname

For external users:

$ ssh username@hostname -p port_number

The hostname and port number can be found in the email you receive when your account is created. On login, you will be directed to one of the four login nodes. The login nodes are intended only for file manipulation and visualization; do not run programs in the background on them. Jobs should be submitted only through the job submission script, as described below.
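
For convenience, external users can store the connection details in their SSH client configuration so the port does not have to be typed each time. The entry below is only a sketch: the host alias is arbitrary, and the hostname, port, and username must be replaced with the values from your account creation email.

# ~/.ssh/config (placeholder values from your account email)
Host paramyukti
    HostName <hostname>
    Port <port_number>
    User <username>

With this entry in place, the cluster can then be reached with:

$ ssh paramyukti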

Loading the necessary modules

To compile and run jobs, you first need to load the modules required by your job (libraries, compilation environments, etc.).

To see which modules are available, type:
$ module avail

Load the necessary modules. For instance, if your application needs the Intel compiler, do:
$ module load compiler/intel/2018.2.199

To unload the module, type:
$ module unload compiler/intel/2018.2.199

To see which modules are currently loaded, type:
$ module list

To unload all loaded modules at once, type:
$ module purge
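
As an illustration, a typical sequence when setting up the environment for a build might look like the following (the Intel compiler module shown above is used as an example; load whichever modules your application actually needs):

$ module purge                             # start from a clean environment
$ module avail                             # see what is available
$ module load compiler/intel/2018.2.199    # load the required compiler
$ module list                              # confirm what is loaded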

Submitting a job

SLURM is the job submission system used on Param Yukti. Create a SLURM batch script as follows:

#!/bin/bash
#SBATCH --job-name=serial_job_test      # Job name
#SBATCH --mail-type=END,FAIL            # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=email@example.com   # Where to send mail
#SBATCH --partition=standard            # Partition to use (standard, gpu, hm)
#SBATCH --nodes=2                       # Maximum number of nodes to be allocated
#SBATCH --ntasks-per-node=12            # Maximum number of tasks on each node
#SBATCH --cpus-per-task=4               # Number of CPU cores per task
#SBATCH --mem=1gb                       # Job memory request
#SBATCH --time=00:05:00                 # Time limit hrs:min:sec
#SBATCH --output=serial_test_%j.log     # Standard output and error log

### For GPU jobs: use the gpu partition instead of standard and request GPUs ###
### by uncommenting the two lines below (change ##SBATCH to #SBATCH)         ###
##SBATCH --partition=gpu
##SBATCH --gres=gpu:2                   # Number of GPUs required

### Clear any previously loaded modules ###
module purge
### Load the modules necessary for the job ###
module load compiler/intel/2018.2.199

### Change to the working directory ###
cd <working_directory>

### Create a machine file listing the nodes allocated to this job ###
MACHINE_FILE=nodes.$SLURM_JOBID
scontrol show hostname $SLURM_JOB_NODELIST > $MACHINE_FILE

### Launch the MPI executable; replace <NP> with the number of MPI processes ###
mpiexec.hydra -machinefile $MACHINE_FILE -n <NP> <executable>

Users can also use the example script above as a starting point and modify it to suit their needs.

Once the submit script has been created, the job can be submitted as follows:
$ sbatch script.sh
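
On successful submission, sbatch prints the ID assigned to the job, which can then be passed to the monitoring commands described below. A session might look like the following (the job ID 123456 is only an example):

$ sbatch script.sh
Submitted batch job 123456
$ squeue -j 123456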

Managing jobs and monitoring resources

To check the availability and status of the compute nodes:
$ sinfo

To show the list of submitted jobs:
$ squeue

To delete a job:
$ scancel <job-id>

To get full information about your job:
$ scontrol show job <job-id>

To hold a job:
$ scontrol hold <job-id>

To release a held job:
$ scontrol release <job-id>
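
squeue also accepts filters; for example, to list only your own jobs (replace <username> with your login name):

$ squeue -u <username>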

For detailed information, refer to the Param Yukti user manual.