Hi,
I am new to ROMS. I have installed ROMS and tested the model with the Upwelling problem in serial mode. Now I am trying to run the same example on our cluster, and I am facing a problem that our system administrator has been unable to solve.
The model compiles smoothly with mpif90, and it also runs when I give the command "mpirun -np 4 oceanM ocean_upwelling.in" interactively; it completes the full 1440 timesteps, which indicates that my LD_LIBRARY_PATH settings are fine. However, when I try to run through a PBS script, the executable is unable to load the shared library libnetcdff.so.5.
My batch script is simple:
#!/bin/bash
#$ -cwd
#####$ -j y
#$ -l qname=long_2.q
#$ -S /bin/bash
cd ~/roms/test/Upwelling
mpirun -np 4 /home2/vihang/roms/test/Upwelling/oceanM ocean_upwelling.in >& test.out
I know this is not a model-related issue, but perhaps some computer wizard can help me understand where the problem lies. My system administrator and I are both struggling with it.
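A diagnostic run I can try next (a minimal sketch, assuming the same queue and paths as above) is to have the batch job report its library path and run ldd on the executable, so I can see which shared libraries fail to resolve inside the batch environment:
#!/bin/bash
#$ -cwd
#$ -l qname=long_2.q
#$ -S /bin/bash
cd ~/roms/test/Upwelling
# Show the library search path as the batch job sees it
echo "LD_LIBRARY_PATH inside the job: $LD_LIBRARY_PATH"
# Any "not found" entry below (e.g. libnetcdff.so.5) is a library
# the batch environment cannot locate
ldd /home2/vihang/roms/test/Upwelling/oceanM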
openmp, PBS and linking of netcdf library.
- bhatt.vihang
- Posts: 11
- Joined: Thu Aug 19, 2010 12:51 pm
- Location: Indian Institute of Science
- m.hadfield
- Posts: 521
- Joined: Tue Jul 01, 2003 4:12 am
- Location: NIWA
Re: openmp, PBS and linking of netcdf library.
Well, I guess the obvious thing to check is whether LD_LIBRARY_PATH is set correctly for ROMS when it is running inside the shell script.
I haven't used PBS, but my impression is that batch scripting environments generally don't pass environment variables from the caller unless you specifically tell them to.
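A quick way to check is to print the variable from inside the job itself, e.g. by adding a line like this just before the mpirun command (a minimal sketch, using your script above):
env | grep LD_LIBRARY_PATH
Then compare what ends up in test.out with what you see in your interactive shell. If the line is empty or missing in the batch output, the scheduler is not passing the variable through.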
- bhatt.vihang
- Posts: 11
- Joined: Thu Aug 19, 2010 12:51 pm
- Location: Indian Institute of Science
Re: openmp, PBS and linking of netcdf library.
Thank you for the reply.
I have finally managed to run the code in parallel mode by using the following script:
#!/bin/bash
#$ -cwd
#####$ -j y
#$ -l qname=long_4.q
#$ -S /bin/bash
cd ~/roms/test/Upwelling
LD_LIBRARY_PATH=<paste the string you get from the command "env | grep LD_LIBRARY_PATH" in your interactive shell>
/opt/intel/openmpi/bin/mpirun -x LD_LIBRARY_PATH=$LD_LIBRARY_PATH -np 4 /home2/vihang/roms/test/Upwelling/oceanM ocean_upwelling.in >& test.out
I explored batch processing a little further. There is also an option to send all environment variables to the cluster nodes by submitting with the command "qsub -V anyscript.job".
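The same effect can be had from inside the job file itself by adding the -V directive (a sketch, assuming the Grid Engine style "#$" directives used above; check your site's qsub documentation):
#!/bin/bash
#$ -cwd
#$ -V
#$ -l qname=long_4.q
#$ -S /bin/bash
cd ~/roms/test/Upwelling
/opt/intel/openmpi/bin/mpirun -np 4 /home2/vihang/roms/test/Upwelling/oceanM ocean_upwelling.in >& test.out
With -V the submitting shell's environment, including LD_LIBRARY_PATH, is exported to the job, though mpirun's -x option shown above is still the safer choice if the ranks land on remote nodes.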
I hope this information will be useful to others facing a similar problem.