NCI is Australia’s pre-eminent computing facility, delivering on the critical national need for high-performance data, storage, and computing services. This blog provides a basic user guide for the NCI servers.

Account Management

All NCI users must have a validated account to use NCI resources. Accounts are created through the NCI online self-service portal. The information required to register an NCI account is an email address, name, mobile phone number, and project code.

A step-by-step registration guide can be found at the NCI Account Help Center.

Gadi User Guide

Gadi is Australia's most powerful supercomputer, a highly parallel cluster comprising more than 200,000 processor cores on ten different types of compute nodes.

To run jobs on Gadi, we need to SSH to the Gadi login server. Windows users can download MobaXterm, Xshell, or PuTTY to create SSH connections.

For example, user aaa777 would run

ssh aaa777@gadi.nci.org.au

The Folders

Each user has a project-independent home directory with a fixed storage limit of 10 GiB. Users can create folders and run jobs in their home directory. In addition, each user has access to /scratch and /g/data folders for storing data files, and $PBS_JOBFS for job-specific storage. The differences between these folders and their usage scenarios can be found in the folder structure introduction section of the NCI documentation.
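As a sketch, the layout for the example user aaa777 in project a00 (the same placeholder names used in the transfer examples in this guide) looks like this:

```shell
# Illustrative directory layout for user aaa777 in project a00
# (user and project names are the placeholders used in this guide).
HOME_DIR=/home/777/aaa777         # home: fixed 10 GiB limit, project-independent
SCRATCH_DIR=/scratch/a00/aaa777   # scratch: short-lived data for running jobs
GDATA_DIR=/g/data/a00/aaa777      # g/data: longer-term project data
# $PBS_JOBFS points to node-local job storage and is only set inside a running PBS job.
echo "$HOME_DIR $SCRATCH_DIR $GDATA_DIR"
```

Which path to use depends on the data's lifetime and the project it belongs to, as described in the NCI folder structure documentation.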

File Transfer to/from Gadi

Gadi has six designated data-mover nodes with the domain name gadi-dm.nci.org.au. We can use these nodes to transfer files to and from Gadi.

For example, aaa777 runs the following command line in the local terminal to transfer the file input.dat in the current directory to the home folder on Gadi.

scp input.dat aaa777@gadi-dm.nci.org.au:/home/777/aaa777

If the transfer is going to take a long time, there is a possibility that it could be interrupted by network instability. For that reason, it is better to start the transfer in a resumable way. For example, the following command line allows user aaa777 to download data in the folder /scratch/a00/aaa777/test_dir on Gadi onto the current directory on their local machine using rsync.

rsync -avPS aaa777@gadi-dm.nci.org.au:/scratch/a00/aaa777/test_dir ./

If the download is interrupted, run the same command again to resume the download from where it left off.

Gadi Jobs

To run compute tasks such as simulations, weather models, and sequence assemblies on Gadi, users need to submit them as jobs to queues. Job submission lets users specify the queue, duration, and resource needs of their jobs. Gadi uses PBSPro to schedule all submitted jobs and keeps nodes with different hardware in different queues; details about the hardware available in each queue are on the Gadi Queue Structure page. Users submit jobs to a specific queue to run them on the corresponding type of node.

Methods for creating Gadi jobs can be found in the Gadi PBS Jobs guide.

Once the job script has been created, submit it to Gadi using the qsub command.

For example, to submit a job defined in a submission script, run qsub with the script name on the login node.
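As a minimal sketch, assuming the submission script is named job.sh (the name is hypothetical; substitute your own script's name):

```shell
# Submit the job script to the PBS scheduler (run on a Gadi login node).
# "job.sh" is a hypothetical name; use your own script's name.
qsub job.sh
# On success, qsub prints the jobID, e.g. something like 12345678.gadi-pbs
```

The printed jobID is what you later pass to qstat to monitor the job.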


Submission Script Example

Here is an example job submission script to run a Python script, which is assumed to be located in the same folder from which you run qsub.

#PBS -l mem=190GB
#PBS -l jobfs=200GB
#PBS -q normal
#PBS -P a00
#PBS -l walltime=02:00:00
#PBS -l storage=gdata/a00+scratch/a00
#PBS -l wd

module load python3/3.7.4

python3 $PBS_NCPUS > /g/data/a00/$USER/job_logs/$PBS_JOBID.log

Job Monitoring

Once a job submission is accepted, its jobID is shown in the return message and can be used to monitor the job's status. Users are encouraged to keep monitoring their own jobs at every stage of their lifespan on Gadi.

Queue Status

To look up the status of a job in the queue, run the qstat command. For example, to look up job 12345678 in the queue, run

qstat -swx 12345678

To list all of your own jobs, run

qstat -u $USER -Esw

Other commands can be found in the Gadi help documentation.