fix the calculation of the derivatives of VAPPARS

the values of the scaling factor were correct, but the derivatives were
not, because the maximum of the oil saturation and some small constant
was taken. If the maximum is the constant, the derivative of the
scaling factor w.r.t. the oil saturation is zero, but this was
ignored. (Note: this is not the first time this issue has occurred
in opm-simulators; e.g. the old saturation functions implementation
also fell victim to this mistake, so it is quite possible that
this is not the last remaining piece of code which is affected.)
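To illustrate the fix described above, here is a minimal sketch of a VAPPARS-style scaling factor with a hand-written derivative. All names (`vapScalingFactor`, `So`, `SoMax`, `a`, `eps`) are illustrative and not the actual opm-simulators identifiers; the functional form is assumed for demonstration only.

```cpp
#include <algorithm>
#include <cmath>

// Sketch: a power-law scaling factor of a cut-off oil saturation,
// together with its manually computed derivative.
struct FactorAndDeriv { double value; double dValue_dSo; };

FactorAndDeriv vapScalingFactor(double So, double SoMax, double a,
                                double eps = 1e-8)
{
    const double SoCut = std::max(So, eps);
    const double value = std::pow(SoCut / SoMax, a);

    // The bug: the derivative was always taken from the power rule,
    //   d/dSo (SoCut/SoMax)^a = a/SoMax * (SoCut/SoMax)^(a - 1),
    // even when max() picked the constant. In that branch SoCut does
    // not depend on So at all, so the correct derivative is zero.
    double dValue_dSo = 0.0;
    if (So > eps)
        dValue_dSo = a / SoMax * std::pow(SoCut / SoMax, a - 1.0);

    return { value, dValue_dSo };
}
```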
If it is a recurring theme I guess time is ripe for a test!
> If it is a recurring theme I guess time is ripe for a test!
I agree on the goal, but that's plenty of work -- it may even be impossible -- because you would have to inspect *every* piece of code which calculates the derivatives manually. (There is currently a lot of such legacy code.)
IMO, the technically sound alternative is to replace that legacy code by something which uses ADBs properly. (or something that uses the DenseAD stuff, which is probably easier in most cases.)
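To illustrate why proper AD sidesteps this whole class of bug, here is a minimal forward-mode AD sketch in the spirit of DenseAD. This is a simplified toy with a single derivative, not the real `Evaluation` API from opm-material; all names are assumptions for illustration.

```cpp
#include <cmath>

// Toy forward-mode AD scalar: function value plus the derivative
// w.r.t. one primary variable.
struct Eval {
    double val;  // function value
    double der;  // derivative w.r.t. the primary variable
};

Eval operator/(Eval a, double c) { return { a.val / c, a.der / c }; }

// pow() with a constant exponent; the chain rule is applied once, here.
Eval pow(Eval a, double e)
{ return { std::pow(a.val, e), e * std::pow(a.val, e - 1.0) * a.der }; }

// max() against a constant: if the constant wins, the result no longer
// depends on the primary variable, so its derivative is zero -- exactly
// the case the manual derivative code got wrong.
Eval max(Eval a, double c)
{ return (a.val > c) ? a : Eval{ c, 0.0 }; }

// The scaling factor is written once; the derivative comes for free
// and is consistent in both branches of the cutoff.
Eval vapScalingFactor(Eval So, double SoMax, double a, double eps = 1e-8)
{ return pow(max(So, eps) / SoMax, a); }
```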
> I agree on the goal ....
Yes -- of course I understand that adding a test is a massive, maybe unrealistic task. But that is kind of my point: we should strive hard to make the code *testable* from the outset; making the code testable as an afterthought is a monumental task.
The code looks good, but as always I'll ask: Have you run a thorough check with the SPE cases and Norne? Did the results change at all? If so, how much? Especially: did they change enough to cause different timestepping?
Having said that, would not simple AD usage (with the Evaluation class) solve this efficiently and correctly?
> Have you run a thorough check with the SPE cases and Norne?
yes: for the SPEs, VAPPARS is not used IIRC, and for Norne the results were identical, but performance was slightly worse due to deteriorated convergence for some timesteps. I've been told that @osae was investigating this; is there any news on that front, @osae?
> Having said that, would not simple AD usage (with the Evaluation class) solve this efficiently and correctly?
yes it would. (that's the reason why I stumbled over this in the first place, btw.)
Any news on this?
I'll ping this again... Ping!
@atgeirr What is missing? Do you want me to test this and discuss it with @osae?
It was said earlier that it was being investigated, has it been tested? If so, and successful, we should merge this.
As @andlaus also pointed out, this PR affects the convergence. The derivative will be discontinuous at the cutoff. For vap >= 1 this shouldn't be a big problem (but then you wouldn't need a cutoff anyway). For vap < 1 the derivative is singular at 0, and you do need the cutoff. The problem with the current approach is that it sets the derivative to 0 even for vap < 1, when it should go to infinity. From Newton's perspective this could cause problems. Maybe removing the cutoff and instead limiting the derivatives would be a more robust approach. Is this a problem in other parts of the code as well?
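For concreteness, here is one way to read the "no cutoff, limit the derivatives instead" suggestion, sketched for a generic power law f(S) = S^a with a < 1, whose derivative a*S^(a-1) blows up as S approaches 0. This is an illustrative sketch, not code from the PR; `powerLawCapped` and `derivCap` are hypothetical names.

```cpp
#include <algorithm>
#include <cmath>

struct FD { double value; double deriv; };

// Evaluate f(S) = S^a exactly (no cutoff on the value), but clamp the
// reported derivative to derivCap so it stays finite near S = 0.
FD powerLawCapped(double S, double a, double derivCap)
{
    const double value = std::pow(S, a);
    const double deriv = (S > 0.0) ? a * std::pow(S, a - 1.0)
                                   : derivCap;  // f' -> inf at S = 0
    return { value, std::min(deriv, derivCap) };
}
```

Note the trade-off discussed below: the capped derivative is finite and keeps the Newton update bounded, but near S = 0 it is no longer the exact derivative of the evaluated function.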
As of now I don't see any point in merging this other than being consistent with ebos.
> As of now I don't see any point in merging this other than being consistent with ebos.
yeah, that was my main motivation. Another was to ask around if somebody could think of an alternative approach which does improve convergence compared to the existing one.
It sounds to me like the approach with no cutoff but limiting the derivatives may be worth a try.
> It sounds to me like the approach with no cutoff but limiting the derivatives may be worth a try.
I guess that would be relatively easy to implement in the DenseAD context, *but*: if only the derivatives are changed, they are wrong in the sense that they are inconsistent with the evaluated function, and the Newton-Raphson method thus loses its quadratic convergence property. Maybe it's better to use a "smooth" function instead of a simple cut-off? (That's just a wild idea -- I haven't yet spent any brain cycles on how this could be done or whether it is a good idea in the first place.)
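To make the "smooth function instead of a cut-off" idea concrete, one candidate is a numerically stable log-sum-exp blend of max(S, eps), which is C^1 everywhere, so the derivative stays exactly consistent with the evaluated function. This is only a sketch of one possible smoothing, not code from the PR; the sharpness parameter `k` and all names are assumptions, and 1/k would have to be chosen sensibly relative to the cutoff.

```cpp
#include <algorithm>
#include <cmath>

// Smooth approximation of max(S, eps); larger k => sharper corner.
// Written via max/min and log1p to avoid overflow of exp(k*S).
double softMax(double S, double eps, double k = 1e3)
{
    const double hi = std::max(S, eps);
    const double lo = std::min(S, eps);
    return hi + std::log1p(std::exp(-k * (hi - lo))) / k;
}

// Exact derivative of softMax w.r.t. S: a logistic sigmoid in S - eps.
// Consistent with softMax by construction, so Newton sees a true Jacobian.
double softMaxDeriv(double S, double eps, double k = 1e3)
{
    if (S >= eps)
        return 1.0 / (1.0 + std::exp(-k * (S - eps)));
    const double e = std::exp(k * (S - eps));  // <= 1, no overflow
    return e / (1.0 + e);
}
```

Away from the corner, softMax(S, eps) agrees with max(S, eps) to within about log(2)/k, and the derivative transitions smoothly from 0 to 1 instead of jumping.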
> Newton-Raphson method thus loses its quadratic convergence property
I don't think the method has quadratic convergence close to the singular location anyway, so this would not hurt (more) I think. I have no proof for this claim however.
In any case: there seems to be no consensus on merging this, so should we close it and make an issue of it instead, to remind ourselves that there is work to be done on this?
I agree: let's close it for now. (mental note to myself: keep this in mind when comparing `flow` and `flow_ebos`.)