problem with Lagrangian floats

Bug reports, workarounds and fixes

Moderators: arango, robertson

seashellingolds
Posts: 39
Joined: Wed Jun 25, 2008 2:49 am
Location: Georgia Institute of Technology

problem with Lagrangian floats

#1 Unread post by seashellingolds »

Hi
Has this problem ever been met or reported?

Description :
Lagrangian floats get lost at the periodic boundary

Details :
I placed floats uniformly over a square domain with periodic boundary conditions.
After a few time steps, some floats close to the boundary disappeared, and more were lost as the run went on. This happens only when I use multiple CPUs with MPI; multiple CPUs with OpenMP, or a single CPU with MPI, work fine.

Thanks

kate
Posts: 4091
Joined: Wed Jul 02, 2003 5:29 pm
Location: CFOS/UAF, USA

Re: problem with Lagrangian floats

#2 Unread post by kate »

You may be the first to have tried this.

arango
Site Admin
Posts: 1367
Joined: Wed Feb 26, 2003 4:41 pm
Location: DMCS, Rutgers University
Contact:

Re: problem with Lagrangian floats

#3 Unread post by arango »

Actually, we have tested periodic boundary conditions with floats in 2D and 3D. Check the floats test case FLT_TEST. It is an east-west channel with a square island in the middle. Notice that we have different input scripts for the 2D and 3D applications: ocean_flt_test2d.in, floats_flt_test2d.in, ocean_flt_test3d.h, and floats_flt_test3d.in. This application demonstrates the periodicity of the floats.

seashellingolds
Posts: 39
Joined: Wed Jun 25, 2008 2:49 am
Location: Georgia Institute of Technology

Re: problem with Lagrangian floats

#4 Unread post by seashellingolds »

Thanks for the replies

Tonight is so hot it woke me up -_-! So I started thinking about this problem.
I suspect the problem comes from the doubly periodic boundary conditions.
Assume a 2x2 partitioned domain, and a particle that starts in the northwest tile and moves to the southeast tile in the next step.
The code handles the EW and NS periodicity sequentially. In the EW pass, only the x coordinate of this float is wrapped, while its y coordinate is still outside the domain. Therefore, after mp_collect broadcasts its position to all nodes, no tile accepts it, and it is reset to 0.

I have not tested other configurations, but I guess the problem can show up anywhere both DISTRIBUTE and PERIODIC are defined.

A quick and direct fix, I think, is to treat the doubly periodic and singly periodic cases separately when determining float status.
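The corner-crossing failure described above can be sketched in a few lines of Python. This is an illustration only, not ROMS code: the names `Lx`, `Ly`, `wrap_ew_only`, `wrap_both`, and `in_domain` are hypothetical, and the real logic lives in the ROMS floats routines.

```python
# Sketch of the suspected bug: in a doubly periodic domain, a float that
# crosses a corner exits through the EW and NS boundaries at the same time.
# If tile ownership is decided after only the EW wrap has been applied,
# the float's y coordinate is still outside the domain and no tile claims it.
# All names here are illustrative, not ROMS identifiers.

Lx, Ly = 100.0, 100.0  # periodic domain extents

def wrap_ew_only(x, y):
    # Buggy ordering: only the east-west periodicity has been applied
    # when ownership is checked.
    return x % Lx, y

def wrap_both(x, y):
    # Fix: apply both periodic wraps before checking tile ownership.
    return x % Lx, y % Ly

def in_domain(x, y):
    # Would any tile accept a float at this position?
    return 0.0 <= x < Lx and 0.0 <= y < Ly

# A float just past the northwest corner, headed for the southeast tile:
x, y = -1.0, 101.0
print(in_domain(*wrap_ew_only(x, y)))  # False -> float "lost", reset to 0
print(in_domain(*wrap_both(x, y)))     # True  -> float correctly retained
```

With only the EW wrap, the float lands at (99.0, 101.0), outside every tile; wrapping both coordinates first gives (99.0, 1.0), which the southeast tile accepts.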

Thanks again for taking care of this.

Yisen

seashellingolds
Posts: 39
Joined: Wed Jun 25, 2008 2:49 am
Location: Georgia Institute of Technology

Re: problem with Lagrangian floats

#5 Unread post by seashellingolds »

Hi All

I just ran a test with both the original code and my corrected code. I believe the root cause is as I described above.
I am not sure my modification is the most optimal, but it is direct.

Moreover, as in the thread I posted in the ROMS Discussion a few days ago,
viewtopic.php?f=14&t=1915
there is another problem, with the float output:
the float positions are not synchronized with the time (ocean_time); there is a one-step offset.

Actually, this problem may be trivial; a one-step offset can probably be neglected in most cases (?).
I simply took the FLOATS part out of output.F, put it in a new file, and call this subroutine after the float calculation in main2d/3d.F. I tested this together with the fix above. Again, this modification may not be optimal.
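The one-step offset can be illustrated with a toy time loop. This is not ROMS code (the predictor/corrector indexing in ROMS is more subtle, as discussed below in this thread); it only shows, under simplified assumptions, how the placement of the output call relative to the float update shifts the written positions by one step.

```python
# Toy illustration of a one-step offset between the written time stamp and
# the particle position, depending on where output is called in the loop.
# Not ROMS code; all names are illustrative.

def run(write_after_float_step, nsteps=3):
    t, x = 0, 0.0       # model time and a single float's position
    records = []        # (time, position) pairs "written to the file"
    for _ in range(nsteps):
        t += 1                      # advance model time (think ocean_time)
        if not write_after_float_step:
            records.append((t, x))  # x is still the previous step's position
        x += 1.0                    # float advection for this step
        if write_after_float_step:
            records.append((t, x))  # position matches the written time
    return records

print(run(False))  # [(1, 0.0), (2, 1.0), (3, 2.0)]  <- one-step lag
print(run(True))   # [(1, 1.0), (2, 2.0), (3, 3.0)]  <- synchronized
```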

I would really appreciate any professional suggestions

Merci !!

Y.

arango
Site Admin
Posts: 1367
Joined: Wed Feb 26, 2003 4:41 pm
Location: DMCS, Rutgers University
Contact:

Re: problem with Lagrangian floats

#6 Unread post by arango »

First, it does not make sense to activate floats in a doubly periodic application. We have tested the floats with a single periodic boundary condition in parallel. It is possible that there are some issues with the distributed-memory exchanges in doubly periodic applications. I will look into this in a simple test case without floats when I get some free time; it is now on my to-do list. I don't know if the problem is in mp_collect, since an mpi_allreduce call is used there. If there is a problem, it is located somewhere else.

We only use doubly periodic boundary conditions to convert ROMS into a one-dimensional vertical model. The grid is usually 6x6 points with a lot of vertical levels (see BIO_TOY or SED_TOY). We always run this in serial because it does not make sense to run such a small grid in parallel :!:

You need to be careful with subroutine output. The writing of the NetCDF files is correct. ROMS uses a predictor/corrector scheme for both the state variables and the floats, so there are a lot of time indices at play here. The output variables are delayed by one time step because of parallel synchronization issues, but we write the correct index to the NetCDF files. If you change this, you will get the wrong time level. Notice that the barotropic engine of ROMS (step2d) is, in effect, called for an auxiliary time step (nfast+1) to finalize all the time-averaging filtering. I do not recommend changing this strategy in the ROMS kernel because you will get into a lot of trouble :!:

kate
Posts: 4091
Joined: Wed Jul 02, 2003 5:29 pm
Location: CFOS/UAF, USA

Re: problem with Lagrangian floats

#7 Unread post by kate »

I once created a doubly periodic domain with non-trivial extent: a square domain with a meddy in the middle. It was wonderful for debugging the periodic options back in the SMS-ROMS days. One cool feature is that you should be able to move the meddy laterally without changing the solution at all. It would make a lot of sense to include floats in that example.
