forrtl: severe (151): allocatable array is already allocated

Report or discuss software problems and other woes

Moderators: arango, robertson

c.drinkorn
Posts: 110
Joined: Thu Mar 08, 2018 2:47 am
Location: German Research Centre for Geosciences

forrtl: severe (151): allocatable array is already allocated

#1 Unread post by c.drinkorn »

Hello everyone,

While trying to run ROMS in parallel mode with analytical initial conditions, I receive the following error output in debugging mode (an excerpt from the whole error file):

Code: Select all

forrtl: severe (151): allocatable array is already allocated
Image              PC                Routine            Line        Source
oceanG             000000000220D767  Unknown               Unknown  Unknown
oceanG             000000000217A2AB  mod_param_mp_init         626  mod_param.f90
oceanG             0000000000508C55  read_phypar_              215  read_phypar.f90
oceanG             00000000004675EE  inp_par_                   93  inp_par.f90
oceanG             0000000000420113  ocean_control_mod          86  ocean_control.f90
oceanG             000000000041FBCA  MAIN__                     95  master.f90
oceanG             000000000041F9DE  Unknown               Unknown  Unknown
libc-2.12.so       00002AC42B4BCD1D  __libc_start_main     Unknown  Unknown
oceanG             000000000041F8E9  Unknown               Unknown  Unknown
I investigated the respective lines in the source files and found that the issue concerns the IOBOUNDS type, which sets the variables for the NetCDF grid file. In mod_param the variable is initialized and then passed on to the other routines in the files listed above, so the problem seems to lie in mod_param. However, this module only uses mod_kinds, and I can't find any other resource that could cause the error. In mod_param, IOBOUNDS is correctly defined and later allocated in the subroutine initialize_param.
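For context, the pattern in mod_param looks roughly like this (my paraphrase, not the literal source; T_IOBOUNDS stands for the derived type):

Code: Select all

! Paraphrased sketch of the structure described above.
TYPE (T_IOBOUNDS), allocatable :: IOBOUNDS(:)   ! module variable in mod_param
...
! inside subroutine initialize_param:
allocate ( IOBOUNDS(Ngrids) )   ! raises severe (151) if already allocated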
I can't seem to find the reason for the error...


Re: forrtl: severe (151): allocatable array is already alloc

#2 Unread post by c.drinkorn »

After setting up my model once again, I have re-encountered this problem. :roll:
No suggestions on how to solve this?

kate
Posts: 4091
Joined: Wed Jul 02, 2003 5:29 pm
Location: CFOS/UAF, USA

Re: forrtl: severe (151): allocatable array is already alloc

#3 Unread post by kate »

initialize_param is called after NtileJ is read from the input file. You don't happen to have more than one line with NtileJ set, do you?


Re: forrtl: severe (151): allocatable array is already alloc

#4 Unread post by c.drinkorn »

Thanks for your reply, Kate.
No, there is no double entry for NtileJ.
What I did now is wrap the allocation of IOBOUNDS inside an IF condition, too. I guess this is a nasty workaround, and there surely is an issue somewhere in my model setup, but it solves the problem for now. Let's see how the simulation performs...
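For reference, the wrap looks like this (a sketch of my change; names as in mod_param):

Code: Select all

! Workaround: only allocate if a previous call has not already done so.
IF (.not.allocated(IOBOUNDS)) THEN
  allocate ( IOBOUNDS(Ngrids) )
END IF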


Re: forrtl: severe (151): allocatable array is already alloc

#5 Unread post by c.drinkorn »

The recent repository update underlines even more that the IF-condition wrap is only a workaround and shouldn't be necessary. In fact, I did not encounter the problem when using another ocean.in. I tried hard to figure out which setting causes the duplicate allocation, but I was not successful. Which settings could be responsible? Is it related to the Lcycle switch? Or Nrrec?
Thanks for any hints! :wink:

jcwarner
Posts: 1200
Joined: Wed Dec 31, 2003 6:16 pm
Location: USGS, USA

Re: forrtl: severe (151): allocatable array is already alloc

#6 Unread post by jcwarner »

Can you post the ocean.in that causes the trouble?
Is there a way I can download it? Don't just copy it into this widget.
-j

arango
Site Admin
Posts: 1367
Joined: Wed Feb 26, 2003 4:41 pm
Location: DMCS, Rutgers University

Re: forrtl: severe (151): allocatable array is already alloc

#7 Unread post by arango »

It is a weird error if you didn't repeat the NtileJ parameter in ocean.in, which triggers the allocation of several modules. In yesterday's update, I put safeguards in place so this can never happen. I don't know what to make of your case. I don't think that a corrupted ocean.in should trigger processing the same assignment twice. Sometimes unseen characters are incorporated when editing a file. Did you edit the file on a non-UNIX computer? What kind of text editor do you use?


Re: forrtl: severe (151): allocatable array is already alloc

#8 Unread post by jcwarner »

Yeah, that is what I was going to look for. TABS = bad.
Sometimes an errant character can get stuck in the file and cause an issue.
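If you want to hunt for them, something like this (GNU grep; -P enables the \t escape) lists every line containing a tab:

Code: Select all

grep -nP '\t' ocean.in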


Re: forrtl: severe (151): allocatable array is already alloc

#9 Unread post by c.drinkorn »

Oh yes, tabs! I have encountered them in the .in file already. :D I use Vi on a UNIX machine. And tabs before the forcing file links, for example, are a bad idea! :twisted:
However, so far I couldn't confirm them as the reason for the double allocation. At the moment I am still busy getting the forcing fields into the desired structure (still receiving get_fld errors, but they are solvable), so I can't look into how the model setup actually performs. Once I am there, I will probably know if something is odd when running with the IF-wrapper as a workaround, and maybe even what caused it. I'll report back! :idea:


Re: forrtl: severe (151): allocatable array is already alloc

#10 Unread post by c.drinkorn »

Hey all,
after your hints about double entries in the .in file, I finally looked really, really closely once again and, alas, I found an entire block of entries at the very end of my file (after the Glossary and coupling section). My guess as to how it ended up there: an accidental mouse-wheel click while scrolling (Linux), which pasted in a block I must have copied earlier. The double-allocation error is solved. :)

However, I have been struggling with another problem for quite some time now. It's this error from the regrid routine:

Code: Select all

 REGRID - input gridded data does not contain model grid:

          Gridded:  LonMin = -179.2500 LonMax =  180.0000
                    LatMin =  -90.0000 LatMax =   90.0000
          Model:    LonMin = -179.9848 LonMax =  179.9680
                    LatMin =   45.0081 LatMax =   89.9205
 Found Error: 04   Line: 254      Source: ROMS/Utility/get_2dfld.F

 GET_2DFLD   - error while reading variable: sustr   at TIME index =       1
 Found Error: 04   Line: 139      Source: ROMS/Nonlinear/get_data.F
 Found Error: 04   Line: 772      Source: ROMS/Nonlinear/initial.F
 Found Error: 04   Line: 188      Source: ROMS/Drivers/nl_ocean.h 
This problem has been discussed frequently here in the forum. Still, no solution has worked for me so far. Some facts about my files:
I am using the respective Matlab package to generate the forcing files, so they come with the "spherical" integer variable, and it is set to the value 1. The grid file I am using, however, was not created by me but comes from a setup I am partially recycling for my project. Checking the spherical variable gives this for the grid file:

Code: Select all

    char spherical ;
      spherical:long_name = "Grid type logical switch" ;
      spherical:option_F = "Cartesian" ;
      spherical:option_T = "spherical" ;

  data:
    spherical = "T" ;
Also, in the header there is a grid mapping variable:

Code: Select all

	int grid_mapping ;
		grid_mapping:long_name = "grid mapping" ;
		grid_mapping:grid_mapping_name = "polar_stereographic" ;
		grid_mapping:ellipsoid = "sphere" ;
		grid_mapping:earth_radius = 6371000. ;
		grid_mapping:latitude_of_projection_origin = 90. ;
		grid_mapping:straight_vertical_longitude_from_pole = 58. ;
		grid_mapping:standard_parallel = 60. ;
		grid_mapping:false_easting = 4180000. ;
		grid_mapping:false_northing = 2570000. ;
		grid_mapping:dx = "20000" ;
		grid_mapping:proj4 = "+proj=stere +R=6371000.0 +lat_0=90 +lat_ts=60.0 +x_0=4180000.0 +y_0=2570000.0 +lon_0=58.0" ;
I suspect that my grid file and forcing files are not compatible and that this is why the spherical attribute is not recognized by ROMS. I tried to modify the Matlab script so as to make the spherical variable a char with the values T and F, and set it to T, but this did not solve the problem. Setting the grid file to spherical=1 didn't help either (nor did adding the attributes flag_meanings and flag_values). Finally, I even attempted to set the value of "spherical" in mod_scalars directly to true, but again with no positive result.
I fully understand that ROMS cannot accept forcing files that do not completely overlap the model domain. Still, I don't know how to fix this. I am very grateful for any hints! :idea:


Re: forrtl: severe (151): allocatable array is already alloc

#11 Unread post by arango »

Okay, that will do it. I put safeguards into the code that processes the standard input file ocean.in; I made a couple of updates for this recently.

It looks like your grid is not for a regional application but is a global grid. Is that correct?

If that's the case, you have a problem with the design of your grid, or with your understanding of it. In a regional grid we usually use a longitude range from -180 to 180, where negative values are west longitudes and positive values are east longitudes (degree_east units attribute). In a global grid we usually use a longitude range from 0 to 360 degrees, or any modulus of 360, mod(x,360). In cases like this it is wiser to add 360 to the values in the range -180:180. The error occurs in routine regrid because it is not smart enough to figure out whether you have enough data to interpolate to your grid; this is on purpose, to make users aware of global applications when providing external data to ROMS. The regrid option is triggered automatically when input data is not of the same size as the ROMS grid variable, so the data must be interpolated horizontally.
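For example, something like this when preparing the input longitudes (just a sketch; lon stands for whatever array holds them):

Code: Select all

! Sketch: shift the -180:180 convention to 0:360 so that a global
! dataset fully covers a grid crossing the dateline ("lon" is a
! placeholder for the longitude array of the input data).
WHERE (lon < 0.0) lon = lon + 360.0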

As you can see, your problem is not related to the simple spherical flag.

If it is your first application in ROMS, I suggest that you gain experience with a smaller regional grid, so you can transfer all that knowledge when building and setting up your global application.


Re: forrtl: severe (151): allocatable array is already alloc

#12 Unread post by kate »

The user has an Arctic domain. I added a GLOBAL_PERIODIC flag to ROMS to handle just this case, interpolating across the dateline. I've attached the patch file.
Attachments
peri_diff.txt
Patch file - apply with "patch -p1 < peri_diff.txt"


Re: forrtl: severe (151): allocatable array is already alloc

#13 Unread post by c.drinkorn »

Dear Kate,

million thanks for the patch! I needed to adjust my Matlab processing a bit so that my lon/lat variables are 1-D. In the Matlab package one needs to change the metadata (in roms_metadata.m) to just one variable dimension entry and, in the main program, comment out the repmat step. Otherwise the get_varcoord routine slips into the else branch of the n_dims.eq.1 condition, which causes an error in netcdf_get_fvar.
I also liked your added NUL-termination safeguard for the coordinates attribute value. :)


Re: forrtl: severe (151): allocatable array is already alloc

#14 Unread post by c.drinkorn »

arango wrote:If it is your first application in ROMS, I suggest that you gain experience with a smaller regional grid, so you can transfer all that knowledge when building and setting up your global application.
Dear Hernan,
you are right, I am an absolute beginner. :oops:
But thanks to the comprehensive documentation and, last but not least, this forum, I have already come a long way from scratch. 8) I am still enjoying ROMS very much! :D


Re: forrtl: severe (151): allocatable array is already alloc

#15 Unread post by c.drinkorn »

I realized lately that GLOBAL_PERIODIC gives a seam at lon 180/-180 if the coordinates are not ordered from -180 to 180 but instead run from 0 up to 180 and then from -180 to -0 (which is what d_ecmwf2roms.m produces). Therefore, I added two lines to the script:
In the section where lat and lon are read, right after the modification of the lon range, I added

Code: Select all

% sort the longitudes into ascending order, keeping the permutation
[ROMS_lon,ind_lon] = sort(ROMS_lon);
and further down in the loop where the fields are read and written

Code: Select all

% reorder the field with the same permutation along the longitude dimension
fieldfinal = fieldfinal(ind_lon,:);
This results in a globally continuous field when using GLOBAL_PERIODIC. :)


Re: forrtl: severe (151): allocatable array is already alloc

#16 Unread post by c.drinkorn »

kate wrote:The user has an Arctic domain. I added a GLOBAL_PERIODIC flag to ROMS to handle just this case, interpolating across the dateline. I've attached the patch file.
Hi Kate,

I'm sorry to bring this up again, but I just wanted to switch to parallel I/O and got a compile error about too many subscripts for the variable "wrk". I assume it has to be "Awrk", since "wrk" is defined as 1-D?

Code: Select all

            DO j=Jstr,Jend
              wrk(0,j) = wrk(Iend,j)
              wrk(Iend+1,j) = wrk(1,j)
            END DO
            cff = 0
            DO i=Istr,Iend
              cff = cff + wrk(i,Jend)
            END DO
            cff = cff/Jend
            DO i=0,Iend+1
              wrk(i,Jend+1) = cff
            END DO
            Npts=(Ilen+2)*(Jlen+1)
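With Awrk substituted, the block reads (my change; I am assuming Awrk is the 2-D scratch array already declared in that routine):

Code: Select all

            DO j=Jstr,Jend
              Awrk(0,j) = Awrk(Iend,j)
              Awrk(Iend+1,j) = Awrk(1,j)
            END DO
            cff = 0
            DO i=Istr,Iend
              cff = cff + Awrk(i,Jend)
            END DO
            cff = cff/Jend
            DO i=0,Iend+1
              Awrk(i,Jend+1) = cff
            END DO
            Npts=(Ilen+2)*(Jlen+1)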


However, when I tried this, compiling works fine and the model runs for approx. 2 minutes until I get this error:

Code: Select all

*** glibc detected *** /mnt/lustre01/scratch/b/b380636/Arctic20km/obcatmforceriversinisedbulk/./romsG: corrupted double-linked list: 0x0000000003e5a610 ***
... and further down:

Code: Select all

/mnt/lustre01/scratch/b/b380636/Arctic20km/obcatmforceriversinisedbulk/./romsG(for_dealloc_allocatable+0x19d)[0xc691cd]
I read somewhere else that you split parallel in and out. Can you please explain the reasons to me?


Re: forrtl: severe (151): allocatable array is already alloc

#17 Unread post by kate »

c.drinkorn wrote:the model runs for approx. 2 minutes until I get this error:

Code: Select all

*** glibc detected *** /mnt/lustre01/scratch/b/b380636/Arctic20km/obcatmforceriversinisedbulk/./romsG: corrupted double-linked list: 0x0000000003e5a610 ***
... and further down:

Code: Select all

/mnt/lustre01/scratch/b/b380636/Arctic20km/obcatmforceriversinisedbulk/./romsG(for_dealloc_allocatable+0x19d)[0xc691cd]
I read somewhere else that you split parallel in and out. Can you please explain the reasons to me?
The parallel-in vs. parallel-out split is about file formats. I was too lazy to change all my input files to HDF5; if they are separate flags, you can test parallel output without having parallel input. In practice, I'm still running serial and letting the model compress the output instead. Note: this is not the best use of model time either; it is best done as a post-processing step using nccopy.
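For example (filenames are placeholders; -d sets the deflate level, -s adds byte shuffling):

Code: Select all

nccopy -d 5 -s ocean_his.nc ocean_his_deflated.nc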

Sorry I don't know how to solve your problem.


Re: forrtl: severe (151): allocatable array is already alloc

#18 Unread post by c.drinkorn »

Thank you for your explanation, Kate.
The error actually vanished with the recent updates, so it might have just been a bug...
I still don't use parallel I/O, because I keep receiving an error about the scratch arrays in the global-periodic patch. However, I don't mind; so far serial I/O is fine for me. :)
The serial I/O part of the patch works like a charm, so thank you so much for this one again!
