How to improve modelled salinity and salinity initial condition
We are trying to model a system characterized by tides of medium strength (spring tidal range ~3.2 m over water depths of around 10 to 20 m) and intermittent freshwater runoff. The freshwater runoff over the modelled period was low, only about 1 to 10 m3/s, yet in-situ observations show that such low runoff was already enough to trigger strong stratification near some lateral freshwater sources (e.g. Henderson Creek, indicated on the attached map). The aim is to capture the subtidal flows well enough that the model results can be used for tracking debris/plastic migration in the system and estimating dispersal. However, we have run into some problems, since the model performance for salinity is not ideal (5 key figures are attached at the end of this post; the OceanModelDiscussion_SOS.pptx file includes details of the validation of the modelled velocity). The upstream salinity is overestimated while the downstream salinity is slightly underestimated, and the along-channel salinity gradient at spring tide seems to be underestimated. Given this, the modelled baroclinic motion is not yet up to our expectations, and the associated particle tracking results do not compare well with some field data (in fact, the particle tracking results seem to have deteriorated compared to previous freshwater-free runs).
We have tried including extra freshwater sources in the model, but this brought only marginal improvements. We also tried varying bottom friction and horizontal diffusivity, and again nothing seemed to work (using diffusivity together with wetting-and-drying also causes other problems and even blow-ups, so some ad-hoc "engineering" tricks were used in those cases). We also checked whether the horizontal pressure gradient error (HPGE) was contaminating the subtidal motions we care about; it was found to be less than 1 cm/s in our high-resolution region of interest (not surprising, since the grid has a low rx1 number of 4.5).
Now we suspect that the culprit is the initial condition for salinity. A complicating factor is the scarcity of measurements. To make life even harder, the whole system is represented by only one or two grid cells in some global models, so it may not be meaningful to use them for initializing the model or for nudging (the run only covers 40 to 50 days; this is not a model spanning multiple seasons or years). At present the initial condition is a guesstimate based on some publications and our limited shipboard ADCP data (along-channel transects etc.).
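(For concreteness, the guesstimate is essentially a 1-D along-channel interpolation mapped onto the grid; a minimal sketch, where the station values, file names, and array shapes are hypothetical, not our actual setup:)

import numpy as np

# Hypothetical along-channel stations: distance from the mouth (km) and
# depth-averaged salinity pooled from publications and our transects.
dist_obs = np.array([0.0, 5.0, 12.0, 18.0, 24.0])
salt_obs = np.array([34.5, 33.0, 30.5, 27.0, 25.0])

# Hypothetical precomputed along-channel distance of every rho-point.
dist_rho = np.load('dist_rho.npy')            # 2-D, shape (eta, xi)

# 1-D interpolation along the channel; np.interp holds the end values
# flat, so salinity stays near 25 landward of the last station.
salt2d = np.interp(dist_rho, dist_obs, salt_obs)

# Start vertically uniform and let stratification spin up.
N = 20                                        # hypothetical s-levels
salt_init = np.repeat(salt2d[None, :, :], N, axis=0)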
In a data-scarce scenario, what is the best practice for coming up with an initial salinity field? If the initial condition is unlikely to be the troublemaker, what are other possible candidates? Any advice and suggestions are highly appreciated, and thanks in advance.
Attached figures:
i. Map of the study area. Hob. = Hobsonville, SB = Stanley Bay; these are the two locations where the modelled salinity is compared against our measurements of surface salinity.
The three major freshwater sources (accounting for about 60% to 80% of the total runoff) are labelled in blue text.
ii. Initial salinity. These creeks are small and not very accessible to surveys, so we don't really know where the salinity actually drops to zero; based on publications and our experience, we see salinity around 25 even at the upstream limit our boat could reach (which is also the landward limit of the nautical charts). The river_salt variable in the river forcing file starts with these non-zero values and is then gradually ramped down to zero over 24 hours.
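(For reference, writing such a ramp is only a few lines; a minimal sketch, with a hypothetical file name and time base, and assuming river_salt has the usual (river_time, s_rho, river) shape:)

import numpy as np
from netCDF4 import Dataset

nc = Dataset('river_forcing.nc', 'r+')     # hypothetical file name
t = nc.variables['river_time'][:]          # assumed days from t0
salt0 = 25.0                               # observed value at the sources
ramp = np.clip(1.0 - t / 1.0, 0.0, 1.0)    # 1 -> 0 over the first day
# river_salt(river_time, s_rho, river): same ramp for every source
# and vertical level.
nc.variables['river_salt'][:] = salt0 * ramp[:, None, None]
nc.close()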
The model boundary is far away from this region of interest; a gradient boundary condition is used for tracers there. Temperature is held almost constant (no atmospheric fluxes, river water temperature set equal to the initial temperature, etc.).
iii. Model validation at an upstream location, Hobsonville (black dots: model; red curve: measurement). Systematic overestimate.
iv. Model validation at a downstream location, Stanley Bay. Underestimate of downstream salinity. However, the model may have captured the timing and amplitude of the fluctuations, since the de-meaned versions compare well (see below; some very low measured values and the linear downward trend of the red curve might be due to biofouling of our CTD; after 30 April, the measured values at SB become very low and look suspicious, so they are not used for model validation).
v. Modelled along-stream transect (spring tide). The modelled salt intrusion length appears to be longer, and the modelled stratification is stronger. The along-stream depth-averaged salinity is also shown below, where an underestimate of the salinity gradient can be noticed.
Attachments:
- OceanModelDiscussion_SOS.pptx (1.57 MiB)
Re: How to improve modelled salinity and salinity initial condition
some quick looks:
-how long did you run this for? looks like the salt is getting better as time goes on.
-in the attached ppt, the vel in Charcoal Bay does not get as strong (peaky) as the observations. So the salt will not get advected as far at those peaks, and that is exactly what you see in the salt time series.
- how well are the tides (water levels) simulated?
-j
Re: How to improve modelled salinity and salinity initial condition
Hi John,
Thanks for your questions regarding the model. Below are my answers to them.
-how long did you run this for? looks like the salt is getting better as time goes on.
The model was run for 50 days in total. The validation shown here covers Day 34 to Day 39, 6 days in total. The CTD was cleaned on Day 34, so the measurements over this period are of very high quality. We usually do bi-weekly cleaning, yet biofouling that month happened at a very high rate, so from Day 40 onwards the downstream measurements were hardly usable and the upstream measurements also looked a bit suspicious; we decided not to use them, which makes it harder to know whether the results really do improve as the simulation runs longer. Based on the along-stream transect comparison for the neap tide (Day 46 of the model), which overestimates the along-stream gradient with a relative error similar to that shown in this post, I think such an improvement is not very likely.
-in the attached ppt, the vel in Charcoal bay does not get as strong (peaky) as the observations. So the salt will not get advected as far at those peaks, and that is exactly what you see in the salt time series.
We have been battling this problem for a long time. For the spring-tide condition (the last two days of the validation period), the maximum observed ebb flow at CB is as high as 0.9 m/s while the maximum flood only reaches 0.6 m/s; the underestimate is more severe for the ebb. In the model, the maximum ebb only reaches around 0.65 m/s, and the flood is underestimated at 0.45 m/s. We have tried various ways to fix this: using different Zob, using a Zob that is lower on the shoals, improving the bathymetry interpolation there; nothing worked. The bathymetry there is very steep, with a gradient of about 3 in 100 and a scarp that drops 4 to 5 meters over about 100 meters (a nautical chart is attached).
- how well are the tides (water levels) simulated?
Much better than the velocity, so I did not attach figures. At SB and CB, comparing the modelled zeta against the measured pressure, the NSE is as high as around 0.94, and the maximum relative error is less than 10%.
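(For reference, the NSE quoted here is the usual Nash-Sutcliffe efficiency; a minimal sketch, assuming co-located, synchronous model and observed series:)

import numpy as np

def nse(model, obs):
    # Nash-Sutcliffe efficiency: 1 is perfect; 0 means the model is no
    # better than simply predicting the observed mean.
    return 1.0 - np.sum((model - obs)**2) / np.sum((obs - obs.mean())**2)

# e.g. nse(zeta_model_cb, zeta_obs_cb) -> ~0.94 at Charcoal Bay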
Kind regards,
Gaoyang
Re: How to improve modelled salinity and salinity initial condition
i think you need more volume to be filled on the flood, and then this will allow increased flows on the ebb.
I was hoping to see more overtides in that water level time series (ie quicker rise to peak water level and longer tail to lower water level). do you see that further into the estuary?
i see you said something about wet/dry giving you problems. should more of the gridded area be allowed to get inundated?
Re: How to improve modelled salinity and salinity initial condition
Thanks for the advice. My reasoning is that the zeta at CB and SB suggests there could be a 5 to 10% underestimate of the tidal volume (assuming the bathymetry is correct). However, the bathymetry has always been a headache; for example, the nautical chart is not up to date for the upstream regions (some measurements were taken back in the 1960s, and since then there has been some dredging and natural deposition we do not know much about). The bathymetry itself can be a source of volume uncertainty. So we replaced the chart with recent LiDAR data wherever possible (many upstream creeks were never surveyed), and this slightly improved the results; yet we are still not sure how well the tidal flats are represented in the model, or whether the inundation is well captured.
What we have done is set zeta = 0 at the High High Water mark, so grid cells below it are marked as water, and everything between it and the lowest-astronomical-tide level can undergo wetting and drying.
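(A minimal sketch of the bookkeeping implied here, using the standard ROMS wet/dry criterion that a cell is wet when total depth exceeds DCRIT; the DCRIT and LAT numbers below are hypothetical:)

import numpy as np

Dcrit = 0.10              # ROMS DCRIT (m); hypothetical value
zeta_hhw = 0.0            # datum: zeta = 0 at the High High Water mark
zeta_lat = -1.8           # LAT relative to that datum; hypothetical value
h = np.load('h_rho.npy')  # hypothetical: grid depths below the datum

# ROMS marks a cell wet when total water depth h + zeta exceeds DCRIT.
wet_at_hhw = (h + zeta_hhw) > Dcrit    # everything below HHW starts wet
wet_at_lat = (h + zeta_lat) > Dcrit    # these cells never dry out
intertidal = wet_at_hhw & ~wet_at_lat  # cells that wet and dry
print(f"intertidal fraction of the grid: {intertidal.mean():.2%}")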
The overtides are not obvious (we do not see large tidal asymmetry in zeta); many examples covering the whole area can be found in this council report (pages 22 to 24): http://www.aucklandcity.govt.nz/council ... 008037.pdf; the most relevant example there is attached below, at a location a bit upstream of our CB site.
Many thanks for the prompt reply,
Gaoyang
Re: How to improve modelled salinity and salinity initial condition
what bottom stress are you using? UV_LOGDRAG?
what is your Zob?
Re: How to improve modelled salinity and salinity initial condition
I have calculated the tidal prism volume: at the maximum of the spring tide (also when the underestimate of zeta is most severe), it is 159 million m3 versus the 177 million m3 reported elsewhere, i.e. a 10% underestimate, consistent with zeta. This could be partly due to an underestimate of the storage area; there are some shallower creeks (some of them are inundated and store volume only during spring tides) that are too narrow to be resolved, so after interpolation they become marshes shallower than they actually are. I am not sure how much they contribute to the underestimate of the prism, though.
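(For reference, the prism number comes from integrating the zeta range over the wetted area; a minimal sketch with hypothetical file names; it ignores wetting/drying, so it is only a rough check:)

import numpy as np
from netCDF4 import Dataset

grd = Dataset('grid.nc')                      # hypothetical file name
area = 1.0 / (grd.variables['pm'][:] * grd.variables['pn'][:])
mask = grd.variables['mask_rho'][:]

his = Dataset('history.nc')                   # hypothetical file name
zeta = his.variables['zeta'][:]               # (time, eta_rho, xi_rho)

# Prism ~ volume between high and low water over the spring-tide days;
# dry cells are not excluded here, so this is only a rough check.
z_hi = zeta.max(axis=0)
z_lo = zeta.min(axis=0)
prism = np.sum((z_hi - z_lo) * area * mask)
print(f"tidal prism: {prism / 1e6:.0f} million m^3")   # cf. 159 vs 177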
I am using UV_LOGDRAG, with a Zob of 0.00144. I have tried smaller numbers, and tried varying it between shoals and channels, but at best only minimal improvements resulted from these changes. The model is highly sensitive to the bathymetry (which we have also tweaked somewhat), yet not very responsive to Zob.
Re: How to improve modelled salinity and salinity initial condition
Gaoyang,
First, let me say thank you for such an informative and interesting post. You show detailed plots of model and data, succinctly describe your model set-up, and ask directed questions so that we might offer an informed response. Other ROMS Forum readers take note!
That said, I have few concrete suggestions. At first I thought the transition from over- to under-estimates of surface salinity might be accounted for by an incorrect along-estuary salinity gradient or compression of a salt wedge, but I could not get the logic of that to work out. Besides, your transect comparison looks very good. In truth, what you show would be declared a great success by many modelers, so don't be too hard on yourself here. You don't have super high resolution in these upper tributaries of the Waitematā, so this might be as good as you can reasonably expect.
In our experience with the upper reaches of Delaware Bay in a marginally resolved ROMS configuration (4 cells across estuary) it can take a very very long time to spin up. You might consider some experiments with extreme initial conditions ... everywhere 0 and everywhere 33 ... and see how long it takes for the two to approach each other. That could be useful guidance.
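(A minimal sketch of how such a convergence test could be tracked; file names are hypothetical, and a careful version would weight by the true s-coordinate cell volumes:)

import numpy as np
from netCDF4 import Dataset

def volume_mean_salt(his_file):
    # Crude domain-mean salinity per output record (unweighted; a
    # careful version would use the s-coordinate cell volumes).
    nc = Dataset(his_file)
    salt = nc.variables['salt'][:]        # (time, s_rho, eta, xi)
    return salt.mean(axis=(1, 2, 3))

s_fresh = volume_mean_salt('run_S0_his.nc')   # initialized everywhere 0
s_salty = volume_mean_salt('run_S33_his.nc')  # initialized everywhere 33
# Spin-up timescale ~ first record where the two runs agree to 0.5 psu.
rec = np.argmax(np.abs(s_salty - s_fresh) < 0.5)
print(f"runs converge at output record {rec}")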
Other suggestions I have are:
(1) alter the vertical profile river_Vshape of your inflow sources to place the freshwater all (or mostly) in the upper layers, so that you suppress the vertical mixing of the "dam burst" inflow that occurs when it is uniformly distributed (this is a severe test of all advection algorithms; see the sketch after this list)
(2) put individual passive tracers into the river sources to get a sense of where the fresh water from each respective source is going; this might point to a bias in the inflow discharges of individual sources.
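(A rough sketch of what (1) and (2) could look like when written into a river forcing file; the file name is hypothetical, and it assumes the file already declares river_Vshape and one river_dye_XX variable per source, i.e. T_PASSIVE with an inert tracer for each river:)

import numpy as np
from netCDF4 import Dataset

nc = Dataset('river_forcing.nc', 'r+')        # hypothetical file name
N = nc.dimensions['s_rho'].size
nriver = nc.dimensions['river'].size

# (1) river_Vshape(s_rho, river): per-level weights, summing to 1 for
# each source. Put all the inflow in the top quarter of the column.
w = np.zeros(N)
w[-(N // 4):] = 1.0                           # s_rho increases upward
w /= w.sum()
nc.variables['river_Vshape'][:] = np.tile(w[:, None], (1, nriver))

# (2) one inert dye per source (river_dye_01 = 1 only at source 0, etc.)
# so each source's freshwater can be tracked separately downstream.
for k in range(nriver):
    dye = np.zeros(nriver)
    dye[k] = 1.0
    nc.variables[f'river_dye_{k + 1:02d}'][:] = dye[None, None, :]
nc.close()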
Don't underestimate the potential of small tributaries and groundwater inflows to be unaccounted for sources of buoyancy.
- John.
John Wilkin: DMCS Rutgers University
71 Dudley Rd, New Brunswick, NJ 08901-8521, USA. ph: 609-630-0559 jwilkin@rutgers.edu
Re: How to improve modelled salinity and salinity initial condition
Kia ora John,
Thanks for the suggestions and for sharing your experience with the Delaware Bay model. I am not very familiar with setting up long time-span models yet, so this might be a dumb question: if we now aim to use more extreme initial conditions (so that we can estimate the time needed for spin-up), then over such a long, multi-month run, won't the atmospheric forcing become a crucial part of the model? And in that case, would variational data assimilation or nudging be necessary to constrain the model? Or, since the focus at the moment is just on estimating the spin-up timescale, could we leave these ingredients out for now? Thanks for any advice here.
We will try altering the Vshape (this sounds like a very promising solution to me) and the passive-tracer method, and will post an update later.
Kind Regards,
Gaoyang
Re: How to improve modelled salinity and salinity initial condition
I was taking a brief break and now I'm back at the model. I ran into a weird problem when adding passive tracers to the model; I have found a work-around, but I have no idea why things were going wrong. The log file (unfortunately, the version storing the error message has been overwritten, though I have located the code causing the trouble and can replicate the error message as shown below) said that:
--- InqVar of the netCDF library cannot locate a variable in my nc file storing swflux (surface water flux, E-P); however, the name of this variable was shown as blank. In other words, the model was trying to access a "nameless" variable in a forcing file...
--- the call stack can be traced back to line 670 of initial.h. I looked at it, and the code structure there (see the figure below; my header file is also attached) can be summarised as:
1. the model checks whether 4D-Var has been defined, or otherwise whether either TLM or ADJOINT is not defined. If any of these is satisfied, the following code is executed. In my case, I don't have 4D-Var, and neither TLM nor ADJOINT is defined;
2. so the model executes the code up to line 670 (because #if !defined CORRELATION is satisfied in my case), where it throws the error described above (the model tries to find a "nameless" variable in my file!). I have switched on ANA_SPFLUX and ANA_BPFLUX, so I reckon the model is trying to find a 4D-Var-related variable in my input, yet I definitely do not have it there! I can't quite figure out why the error involves a "nameless" variable. Should we change #if !defined CORRELATION to #if (!defined CORRELATION && something)?
3. So I tweaked the code a bit to bypass this part (the code comment says the section causing my trouble "is essential in iterative algorithms"; it does not say it is inessential in other applications, but I have assumed so...). The code then runs without throwing any error.
Furthermore, is there a way to give swflux analytically in the model? I have not found an ana flag to do this. I know that rainfall and evaporation can be handled through bulk_flux, yet that seems overkill if I just want zero swflux at this stage (and creating a swflux variable that is uniformly zero is also a bit inconvenient; see the sketch below).
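(For what it's worth, the uniformly-zero file I am trying to avoid is only a few lines to create; a minimal sketch with hypothetical grid dimensions and time base:)

import numpy as np
from netCDF4 import Dataset

# A uniformly-zero E-P forcing file (hypothetical grid size/time base).
L, M = 482, 362                                # xi_rho, eta_rho
nc = Dataset('swflux_zero.nc', 'w')
nc.createDimension('xi_rho', L)
nc.createDimension('eta_rho', M)
nc.createDimension('swf_time', 2)
t = nc.createVariable('swf_time', 'f8', ('swf_time',))
t.units = 'days since 2021-03-01'              # hypothetical origin
t[:] = [0.0, 1000.0]                           # bracket the whole run
v = nc.createVariable('swflux', 'f4', ('swf_time', 'eta_rho', 'xi_rho'))
v[:] = 0.0                                     # E-P = 0 everywhere
nc.close()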
Any help will be appreciated. Thanks in advance.
Gaoyang
Attachments:
- waitemata.h (2.11 KiB)
Re: How to improve modelled salinity and salinity initial condition
The question here is why ROMS is looking for 'swflux' at all when you have ANA_STFLUX, which should set the surface salinity/freshwater flux in ana_stflux.h.
You might have some conflicting or contradicting CPP defs.
Rather than looking at the .F code as you were, go into the build directory and look at the .f90. There all the CPP defs have been applied, and you may find code active that you did not expect. Then trace back to see which CPP options led to that code you did not intend.
I notice you have CPP directives for advection scheme options (MPDATA etc.), which means you have a rather old version of the code. This might be a good time to update, since we have made changes to the I/O processing of netCDF files (introducing varinfo.yaml instead of varinfo.dat); going forward it becomes difficult to offer help when you are working with a version we aren't using anymore.
John Wilkin: DMCS Rutgers University
71 Dudley Rd, New Brunswick, NJ 08901-8521, USA. ph: 609-630-0559 jwilkin@rutgers.edu
Re: How to improve modelled salinity and salinity initial condition
I was a bit surprised by ROMS looking for swflux as well, yet I just pretended that nothing was wrong and that the E-P flux is "different" from the surface tracer flux (e.g. stflux is the flux of the tracer per se, while E-P takes dilution into account, etc.).
1. As for the MPDATA flag in the .h file, it is a bit messy there: I actually specify the advection algorithms in the *.in file. We updated the code a few months ago (it is now Build no. 1098, as seen in the attached log file), and after the update I forgot to update my .h file, so these are legacies from previous versions; I have now deleted them from the header.
2. I have looked at the Build_roms folder, and indeed r4dvar.f90 is present there, containing only two lines:

MODULE r4dvar_mod
END MODULE r4dvar_mod
An "empty file". Other similar *.f90 files related to adjoint models and 4d-variation can also be found there. Yet my h file apparently does not include anything associated with the adj model or the 4d-var algorithm. The cpp flags printed out in my log file (attached here) do not indicate so, either. I have tried to compile with T_PASSIVE and ANA_PASSIVE in the header file undefined, yet the Build_roms folder still contains the 4dvar and r4dvar related *.f90 files (though "empty"). However, previously, before I added passive tracers to the model (while salinity flag is on), the model was also asking for the swflux file, yet it did not report any bug such as looking for a "nameless" input in swflux, etc.
Any hint and help will be appreciated, thanks.
Gaoyang
Attachments:
- LogFile.txt (23.14 KiB)
Re: How to improve modelled salinity and salinity initial condition
As you have discovered, when whole chunks of code are not active (such as 4dvar in your example) the relevant subroutines simply have MODULE ... END MODULE and no executable code. We do this because otherwise the BUILD script would have to have logic that modified the list of routines to be compiled into libraries and the final executable. The logic for that is tortuous.
We would have to duplicate the logic in the CPP/FORTRAN configuration in the build shell script before compilation begins, which just invites bugs. We used to have an elaborate make-depend process for this but it is easier to have one set of logic ... in the CPP and FORTRAN ... to handle this by creating "empty" f90 files that are passed to the compiler. When compiling in parallel (with the -j option) these trivial subroutines are handled very quickly.
All this said, I see that what you are missing is #define ANA_SSFLUX. That is why ROMS is looking for "swflux": the default is to read from a netCDF file if no other instructions are given for the surface salinity boundary condition.
John Wilkin: DMCS Rutgers University
71 Dudley Rd, New Brunswick, NJ 08901-8521, USA. ph: 609-630-0559 jwilkin@rutgers.edu
Re: How to improve modelled salinity and salinity initial condition
Aha, I can find ssflux in globaldefs.h but not in the Functionals folder. Not seeing it in the Functionals folder is why I always missed it when preparing the header file. Thanks for pointing this out!
So the code has not really compiled anything involving 4D-Var etc. Then it leaves me quite confused how the bug I mentioned in my 2 July post occurred: the code ran into trouble at line 670 of initial.h, which I believe is related to the 4D-Var CPP flags (unless I was interpreting the code in a horribly wrong way). For now, I have just commented out that whole section in initial.h; the model then compiles and runs smoothly with the T_PASSIVE and water-age flags switched on, and I am treating this as a "fix" (the modelled whereabouts of the dyes and the water age look reasonable, though we do see high retention of freshwater in some creeks, for which we just don't have data and whose main channels are too narrow to model).
Hopefully we can soon conclude this part of the model validation (we can spend at most one more month on this).
Re: How to improve modelled salinity and salinity initial condition
An update:
1. We are now happy with our model results. The reason it didn't work was a (silly) mistake on my part. We were using a synthesised initial field based on some VERY OLD references (1986ish) and three or four transects measured over the past few years. These provided a good starting point, and we used them to test whether the grid quality was good enough to tolerate a high discharge rate. Then, when we finished those tests and switched to the actual data from 2021, I simply forgot that it was an exceptionally dry year and that the salinity near the eastern boundary of the Waitematā could be as high as 35.7 in late March. We do have a long-term CTD installed there measuring surface salinity, but by late March biofouling had made the data barely usable, and HYCOM data don't really cover these smaller bays. In the end, we re-designed the initial field based on HYCOM data in the outer ocean and some reliable measurements inside the harbour.
2. There was a dry-wet transition during late March and early May; the rainfall-evaporation balance throughout the model domain played a very big role in it. Combining ERA5 data with some local meteorological gauges further improved our results.
Figure 1. Upstream Surface Salinity (Hobsonville) (black dots: model, red curve: CTD)
Figure 2. Downstream Surface Salinity (Stanley Bay)
There is still some underestimation in both the value and the magnitude of variation, yet we are OK with this --- it could be due to the relatively short ramp-up period and inherent errors in our very simplified initial and boundary data.
Figure 3. Depth-averaged Along-stream Salinity Transect (spring tide)
Figure 4. Depth-averaged Along-stream Salinity Transect (neap tide)
Not so good... but given the data we have and the model complexity we are willing to accept at this stage, I reckon it is still fine.
3. Remaining issues: the model consistently predicts a higher level of stratification throughout the system. Neglected wave mixing/boat wakes, errors in the field measurements themselves, etc., could all be contributing factors.
4. Dye experiment: I did this experiment before carrying out the fixes described in 1 and 2. It has not helped a lot in model development, though; maybe because I have not yet figured out how to interpret the results properly. Still, an interesting thing I noticed is that over a period of 40 days the mean water age at the outer boundary of the Waitematā seems to asymptote to a plateau of ~22 days. I excluded dye concentrations <1e-06 from the analysis; when the concentration at the eastern boundary of the harbour first reached this threshold, the recorded mean age was 12 days. This definitely tells me something about the water and tracer exchange between the harbour and the gulf (something we cannot validate with our existing datasets), but at the moment it has not proven very useful in our model development.
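(For reference, the mean-age numbers above follow the usual age-concentration construction, age = age-concentration / concentration; a minimal sketch, with hypothetical file and tracer names for the dye/age pair:)

import numpy as np
from netCDF4 import Dataset

nc = Dataset('history.nc')             # hypothetical file name
c = nc.variables['dye_01'][:]          # dye concentration
alpha = nc.variables['dye_02'][:]      # paired age concentration

# Mean water age a = alpha / c, meaningful only where enough dye is
# present; exclude c < 1e-6 exactly as in the analysis above.
valid = c > 1e-6
age = np.full(c.shape, np.nan)
age[valid] = alpha[valid] / c[valid]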