Double Periodic Boundary Conditions
Hello,
I'm using ROMS version 2.2 (I may try version 3 tomorrow, but I don't have much hope) in a double-periodic configuration. On a single processor I have no problems, using 128x128 grid points in the horizontal, but with MPI I get into trouble. I tried changing the size of the domain (+1, +2, -1 in each direction; in some cases I could not run at all) and nothing worked. I get 'noise' that depends on the tiling I use. If, for example, I set NtileI == 2 and NtileJ == 8, I get a noisy line in the center of the domain, plus two half lines at the borders and 8 'bumps' along the y-axis. With NtileI == 1, I get 16 bumps all along y, at x=0 and x=128.
The problem is that the 'noise' propagates into the domain and affects the physics of the solution. Any suggestions? Am I simply forgetting to set something properly?
thanks
Annalisa
Well Annalisa, it would be nice if you could update to ROMS 3.0. I have run many applications with MPI and am not getting the tiling lines that you seem to be getting. But if there are problems then I will spend the time to correct them.
Did you add any 'ana_*' stuff that could be causing the tiling issues?
Try the upwelling case first to see if the tiling works ok in that application.
Here it comes:
(for the wind-stress)
# elif defined QG
      idum=-27
      DO j=JstrR,JendR
        DO i=Istr,IendR
          val2=0.3_r8*(ran2(idum)-0.5_r8)
          val1=SIN(2.0_r8*pi*3.0_r8*yr(i,j)/el(ng))
          val1=val1*(SIN(2.0_r8*pi*3.0_r8*xr(i,j)/xl(ng)))
          sustr(i,j)=0.0001_r8*(val1+val2)
        END DO
      END DO
ran2 is defined in the standard Numerical Recipes format.
(for tracer initial conditions):
# elif defined QG
      DO k=1,N(ng)
        DO j=JstrR,JendR
          DO i=IstrR,IendR
!           t(i,j,k,1,itemp)=T0(ng)+22.2_r8*EXP(0.0017_r8*z_r(i,j,k))
!           t(i,j,k,1,isalt)=S0(ng)+2.36_r8*EXP(0.0024_r8*z_r(i,j,k))
            t(i,j,k,1,itemp)=T0(ng)+7.0_r8+12.0_r8*EXP(0.017_r8*z_r(i,j,k))
            t(i,j,k,1,isalt)=S0(ng)+2.0_r8*EXP(0.024_r8*z_r(i,j,k))
            t(i,j,k,2,itemp)=t(i,j,k,1,itemp)
            t(i,j,k,2,isalt)=t(i,j,k,1,isalt)
          END DO
        END DO
      END DO
Thanks once more!
Annalisa
Hi,
I'm working with Annalisa on this application, trying to configure it with MPI during the European day. And if there is a user-generated mess, I'm the one to be blamed, not her...
Just to make sure the problem is not in the kernel itself, I reloaded on the cluster the Rutgers 2.2 version with the latest correction patch (with changes from Dec. 2006). Thus, the only user-generated changes were in the files analytical.F, cppdefs.h and the *.in file. I reran the model under MPI with the same tiling as before (16 and 2, on 32 processors). All the same: a nasty stripe in the middle of the domain after 1 day and a subsequent blow-up.
Then I started switching the various options used (e.g. climatology) off and on.
And I found the problem: it was ran2 in the forcing. When val2=0.0, I see nice spatial variability in my fields according to the scales of the forcing imposed, and no nasty stripes even after 30 days.
Is it some exchange problem, or does ran2, imposed this way, somehow break the parallelization of the loop?
inga
Inga and Annalisa -
Sorry for the late response. I was on travel today.
I am not seeing anything that should be wrong. I am not sure why the random-number function is an issue. The model should only compute the sustr values once at each (i,j) point. Then the mp_exchange routines exchange the information to fill the halo regions. If the model were computing the same information several times at the same (i,j) locations, then a random-number function would cause issues because the information would not be the same each time. But the model should only compute the data once at each (i,j) location.
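Just to illustrate the pattern I mean (a bare-bones sketch in plain MPI Fortran, not the actual ROMS mp_exchange routines, with made-up names like nloc and field): each process fills its own interior points once and then swaps a single halo point with each neighbour.

      PROGRAM halo_sketch
!  Minimal 1-D periodic decomposition: compute interior values once,
!  then exchange halo cells with the left/right neighbours.
      USE mpi
      IMPLICIT NONE
      INTEGER, PARAMETER :: nloc = 8            ! interior points per rank
      INTEGER :: rank, nprocs, left, right, ierr, i
      INTEGER :: status(MPI_STATUS_SIZE)
      REAL(8) :: field(0:nloc+1)                ! 0 and nloc+1 are halo cells

      CALL MPI_Init (ierr)
      CALL MPI_Comm_rank (MPI_COMM_WORLD, rank, ierr)
      CALL MPI_Comm_size (MPI_COMM_WORLD, nprocs, ierr)
!
!  Periodic neighbours (this sketch is periodic in its single dimension).
!
      left  = MOD(rank-1+nprocs, nprocs)
      right = MOD(rank+1, nprocs)
!
!  Compute the interior values ONCE on each rank.
!
      DO i=1,nloc
        field(i)=DBLE(rank*nloc+i)
      END DO
!
!  Fill the halo cells: send my last interior point to the right
!  neighbour while receiving my left halo from the left neighbour,
!  and vice versa.
!
      CALL MPI_Sendrecv (field(nloc), 1, MPI_DOUBLE_PRECISION, right, 0,  &
     &                   field(0),    1, MPI_DOUBLE_PRECISION, left,  0,  &
     &                   MPI_COMM_WORLD, status, ierr)
      CALL MPI_Sendrecv (field(1),      1, MPI_DOUBLE_PRECISION, left, 1, &
     &                   field(nloc+1), 1, MPI_DOUBLE_PRECISION, right, 1,&
     &                   MPI_COMM_WORLD, status, ierr)

      IF (rank.eq.0) WRITE (*,*) 'halo values on rank 0:',                &
     &                           field(0), field(nloc+1)
      CALL MPI_Finalize (ierr)
      END PROGRAM halo_sketch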
I will ask around to see if anyone else sees anything.
Anyone else want to chime in ??
arango (Site Admin, DMCS, Rutgers University):
You cannot use this type of random number routine in parallel. Your code above is completely wrong in parallel. By the way, random numbers in parallel are extremely tricky; see Numerical Recipes in Fortran 90, which has a full chapter explaining the parallel difficulties of random numbers. You cannot use ran2.
If you check ROMS version 3.0, you will find how the random numbers are done in parallel. See routine white_noise.F, which uses gasdev.F, ran_state.F, and ran1.F.
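As a rough illustration of the difficulty (using a trivial stand-in generator, not ran2 and not the white_noise.F machinery): a sequential generator carries a single state, so when every tile re-initializes the seed and draws over its own loop range, the value that lands at a given grid index depends on the tiling.

      PROGRAM rng_tiling_sketch
!  Compare one pass over 8 points (one tile) against two passes over
!  4 points each (two tiles), both starting from the same seed, as
!  happens when each MPI process sets idum itself.  The values at the
!  same index differ, so the 'random' forcing depends on the tiling.
      IMPLICIT NONE
      INTEGER, PARAMETER :: n = 8
      INTEGER :: i, seed
      REAL(8) :: serial(n), tiled(n)

      seed=-27                                  ! one tile: single sequence
      DO i=1,n
        serial(i)=lcg(seed)
      END DO

      seed=-27                                  ! tile 1: first half
      DO i=1,n/2
        tiled(i)=lcg(seed)
      END DO
      seed=-27                                  ! tile 2: second half
      DO i=n/2+1,n
        tiled(i)=lcg(seed)
      END DO

      DO i=1,n
        WRITE (*,'(i3,2f10.6)') i, serial(i), tiled(i)
      END DO

      CONTAINS

      FUNCTION lcg (state) RESULT (u)
!  Tiny stand-in for a sequential generator: advance STATE, return [0,1).
      INTEGER, INTENT(inout) :: state
      REAL(8) :: u
      INTEGER(8) :: s
      s=MOD(48271_8*INT(ABS(state),8)+1_8, 2147483647_8)
      state=INT(s)
      u=DBLE(s)/2147483647.0_8
      END FUNCTION lcg
      END PROGRAM rng_tiling_sketch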
Thank you very much. Motivated by your tip, I remembered that roms-2.2 must also have a parallel-safe random generator, because it has a vertical-random-walk option for floats. So instead of employing the module from roms-3.0, I found nrng and urng in utility.F of roms-2.2. It's working perfectly - no more MPI stripes in the solution.
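For anyone who hits the same thing: here is a minimal sketch of one portable way to get a tiling-independent random field (this is not the nrng/urng interface from utility.F, just an illustration of the idea). The master process generates the whole perturbation field once and broadcasts it, so every tile reads the same value at the same global point.

      PROGRAM bcast_random_field
!  Sketch: build one global random perturbation on the master rank and
!  broadcast it, so the field does not depend on the tiling.  The array
!  name val2 and the sizes Lm, Mm are just illustrative.
      USE mpi
      IMPLICIT NONE
      INTEGER, PARAMETER :: Lm = 128, Mm = 128
      INTEGER :: rank, ierr
      REAL(8) :: val2(Lm,Mm)

      CALL MPI_Init (ierr)
      CALL MPI_Comm_rank (MPI_COMM_WORLD, rank, ierr)

      IF (rank.eq.0) THEN
        CALL RANDOM_SEED ()                     ! any serial generator here
        CALL RANDOM_NUMBER (val2)
        val2=0.3_8*(val2-0.5_8)                 ! perturbation in [-0.15,0.15]
      END IF
!
!  After the broadcast every rank holds the same field; each tile can
!  then use val2 at its global (i,j) indices when building sustr.
!
      CALL MPI_Bcast (val2, Lm*Mm, MPI_DOUBLE_PRECISION, 0,               &
     &                MPI_COMM_WORLD, ierr)

      CALL MPI_Finalize (ierr)
      END PROGRAM bcast_random_field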
inga