Submit job for Fluent 13 in HPC


Upload: abhijit-kushwaha

Posted on 14-Sep-2015


DESCRIPTION

This is a step-by-step procedure for submitting jobs on HPC clusters.

TRANSCRIPT

Submit job for Fluent 13 in HPC

HPC has a 5-user, 48-core (CPU) license for ANSYS 13. This means only 5 users at a time, with a total of 48 cores shared among all 5 users, can use the ANSYS package. Windows users should connect to the HPC using an SSH client.

Steps:

1.) Ready your Fluent case and data files (.cas and .dat) using the GUI on your PC.

2.) Make the batch (journal) file for the job (for a sample, see Annexure I). Make the necessary changes:

file set-batch-opt y y y n
file start-transcript xyz123.trn
file read-case-data xyz123
solve init init-flow
solve it 100
file write-case-data xyz123_%i.gz
file stop-transcript
exit

Change the fields (file names and iteration count) as per need. If you want to use the autosave option while making the case file in Fluent, then autosave it in /home/your-hpc-id/filename. If you are reading only the case file, remove the -data option in the 3rd line. Save the above file with the .jou extension. Check the file names given in the batch file: they should be the same as the Fluent case and data file names. The name of the batch file should be the same as the one you put in your fluent-pbs-startup script.

3.) Make the pbs-script file (sample below in Annexure II). Here you decide how many cores you need to run the Fluent case and the queue in which to submit the job. Remember the institute has a license for 48-core parallel Fluent.

a) To change the no. of cores:

Each node consists of 8 processor cores. So, first change the number of nodes in the following line of the script file, choosing the number of nodes (1, 2, 3, 4, 5 or 6) based upon your requirements:

#PBS -l nodes=4:ppn=8

Then, change the number of cores in the -t option of the following lines (no. of cores = no. of nodes * 8):

cd /scratch/your-hpc-id
/opt/software/ansys_inc/v130/fluent/bin/fluent 3ddp -g -cnf=$PBS_NODEFILE -t32 -i sample-batch.jou

Here 32 = 4 * 8, 4 being the number of nodes set for the job.
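The nodes-to-cores arithmetic above can be sketched as a small shell check (a minimal sketch; NODES=4 is only an example value matching the sample script):

```shell
#!/bin/sh
# Each node provides 8 cores, so the -t value passed to fluent
# must be the node count from "#PBS -l nodes=..." times 8.
NODES=4                  # example: matches "#PBS -l nodes=4:ppn=8"
CORES=$((NODES * 8))     # value to use in the fluent line, e.g. -t32
echo "-t$CORES"          # prints "-t32"
```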

b) To change the queue for the job:

In the institute HPC, there are three queues to submit the Fluent job:

Queue     Range of cores   Max. run time
Small     8 - 32           120 hrs
Medium    32 - 96          96 hrs
Large     96 and above     72 hrs

Choose Small or Medium based upon your requirement. Since the institute has a maximum of 48 cores for Fluent, the highest queue you can use is Medium. Based upon your core requirements, make the change in this particular line:

#PBS -q medium

Upon making all the changes, save the file with the extension .dat.
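The queue choice can be expressed as a small shell helper (a sketch; pick_queue is a hypothetical function, with boundaries taken from the queue table above):

```shell
#!/bin/sh
# Map a core count to the queue name used with "#PBS -q".
# Boundaries follow the queue table: small up to 32 cores,
# medium up to 96, large beyond that.
pick_queue() {
    if [ "$1" -le 32 ]; then
        echo small
    elif [ "$1" -le 96 ]; then
        echo medium
    else
        echo large
    fi
}

pick_queue 48    # with the 48-core Fluent license this prints "medium"
```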

Procedure to run the job:

1.) Transfer all the 4 files (Fluent .cas & .dat, batch .jou & pbs-script .dat) to your scratch directory.
2.) Use the command: dos2unix name-of-your-batch-file.jou
3.) Do the same for the pbs-script file.
4.) Use the command to put the job in the queue: qsub name-of-the-pbs-script.dat

A job-id is now assigned to your case.
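The dos2unix step is needed because files written on Windows end lines with CR+LF, which confuses PBS and Fluent on Linux. If dos2unix is unavailable, tr gives the same result (a sketch; sample.jou is a hypothetical file name):

```shell
#!/bin/sh
# Create a journal file with Windows (CR+LF) line endings, then strip
# the CR bytes the way dos2unix would.
printf 'file read-case-data xyz123\r\nexit\r\n' > sample.jou
tr -d '\r' < sample.jou > sample-unix.jou   # sample-unix.jou has LF-only lines
cat sample-unix.jou
```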

To check the status of your job, enter the command: qstat -u your-hpc-id
To delete a job, type: qdel job-id
Remove unnecessary files using the rm command.

Annexure I

Sample batch files. Remember to store these files with the extension .jou.

a) For Steady Case:

file set-batch-opt y y y n
file start-transcript xyz123.trn
file read-case-data xyz123
solve init init-flow
solve it 100
file write-case-data xyz123_%i.gz
file stop-transcript
exit

(In the "solve it" line, 100 is the number of iterations.)

b) For Unsteady Case:

file set-batch-opt y y y n
file start-transcript xyz123.trn
file read-case-data xyz123
solve dual-time-iterate 3000 5e-06
file write-case-data xyz123_%i.gz
file stop-transcript
exit

(In the "solve dual-time-iterate" line, 3000 is the number of iterations and 5e-06 is the time step size.)
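As a quick sanity check on the unsteady settings above (taking 3000 as the number of time steps and 5e-06 s as the step size, per the sample), the total simulated time is their product:

```shell
#!/bin/sh
# total simulated time = no. of time steps * time step size
awk 'BEGIN { printf "%.3f\n", 3000 * 5e-06 }'   # prints 0.015 (seconds)
```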

Annexure II

Sample pbs-script file.

#PBS -l nodes=4:ppn=8
#PBS -l fluent=1
#PBS -l fluent_lic=48
#PBS -q medium
#PBS -V

#EXECUTION SEQUENCE
cd /scratch/your-hpc-id
/opt/software/ansys_inc/v130/fluent/bin/fluent 3ddp -g -cnf=$PBS_NODEFILE -t32 -i sample-batch.jou

#UNCOMMENT TO REMOVE THE UNWANTED FILES
#rm *_PBSin*
#rm *.env *.inp

#UNCOMMENT TO COMPRESS THE FILES
#find . -type f -exec gzip {} \;