[POC] flow_ebos: simulator using localized linearization and dense AD
I've been repeatedly asked to do this, and it took a lot of work, but here it finally is: ladies and gentlemen, let me present to you
The Frankenstein Pig
(This is work done in collaboration with @dr-robertk. Be aware that this PR is not yet fully polished, so test it well before merging.)
The name of the PR comes from the fact that the new simulator is analogous to a pig with wings, i.e., it is a hybrid between ebos (which linearizes the mass balance equations) and opm-autodiff (which does the rest).
The differences between the results of the two simulators for Norne are quite tiny (the screenshot below is quite representative; also note that the deviations get quite a bit smaller when comparing to the flow master version from yesterday):
- so far it has only been tested for SPE1, SPE9 and Norne. The results seem to be nearly identical for the SPEs and match very well for Norne
- MPI parallelism has not been tested. (ebos supports MPI parallelism, but it is rather unlikely that `flow_ebos` will work out of the box.)
- the new simulator has ebos/eWoms as a hard dependency (because IMO re-inventing that wheel would not have led to better code and would have been _way_ more work)
Amazingly, this new animal not only flies, but it also seems to move about as fast as the non-transmogrified animal does:
- for SPE1, `flow` seems to be slightly faster. (but given that `flow_ebos` only takes about 1.4 seconds on my machine, this is irrelevant.)
- for SPE9, both simulators are approximately equally non-performant: 27.5 seconds for `flow_ebos` and 27.0 for `flow` on my machine
- there's some additional performance to be squeezed out of this because the stand-alone version of ebos only needs about 15 seconds for SPE9 on my machine
- for Norne, flow_ebos is about 60% faster when only looking at the time required for the linearization of the mass balance equations (2.36 seconds vs. 3.48 seconds for the first 10 linearizations)
- the total number of Newton iterations is lower for flow_ebos than for flow (1568 for flow_ebos vs. 1812 for flow), and it is about 20% faster where wall time is concerned (about 1826 seconds for `flow_ebos` and 2174 seconds for `flow`). I've spent quite a bit of time analyzing this, and besides finding some bugs in opm-simulators and some missing features in opm-material and ebos, I arrived at the following conclusions:
- the MINPV changes which have been merged earlier today seem to have a slightly negative effect on performance
- flow_ebos and flow fail for about the same number of time steps, but the linear solver sometimes makes a small difference (though it does not uniformly favor one simulator)
- in `flow_ebos`, converting the results to ADBs takes about 20 to 30% of the time required for the actual work
- both simulators are really unstable w.r.t. numerical noise. If the tolerance of the linear solver is reduced, things become a bit better (to do so, pass e.g. `linear_solver_reduction=1e-4 linear_solver_maxiter=500`), but then the linear solver totally dominates the overall CPU time
- IMO the most important next step is to integrate a well model which can work directly on `Evaluation` objects as inputs. This would immediately make the linearization of the mass balance equations 10% to 15% faster.
- getting rid of the ADB object detour to the linear solver. This would also accelerate the linearization by about 10%
- making sure MPI works
- making sure that simulations can be properly restarted (like MPI, this may work out of the box, but I have not tried)
- testing on additional cases
if you want to try out this PR, be sure to also merge OPM/opm-common#162 or else you'll observe spectacular build system fireworks ;)
I am not qualified to say much about this - but it sure looks impressive :fireworks:
**Warning - some guessing follows:**
If ewoms is now getting closer to center stage, I *really* think we should have the Travis build testing properly in place. I did try a bit at some stage, but my Dune knowledge really was not sufficient and I backed out. I also had the feeling that ewoms *requires* Dune 2.4, whereas the rest of OPM manages with Dune 2.3. Is this the time to officially bump the Dune version requirement to 2.4?
Thanks for the flowers. Please be aware that I consider this PR more like a "dojo" for playing around with alternative linearization approaches, i.e., I do not propose to merge it in its current form.
If this turns out to be the way to go, I'm also open to moving the parts of ewoms needed for something like `flow_ebos` into opm-simulators. (Or the other way around; I would very much appreciate input on this from people who know both codebases.)
> If ewoms now is getting closer to center stage I really think we should have the Travis build testing properly in place.
I agree. The problem is that I don't know how to beat Travis into shape :/
> I also had the feeling the ewoms requires dune 2.4 - whereas the rest of opm manages with dune 2.3.
At the moment I try to keep it compatible with Dune 2.3, though this has not seen much testing recently (mainly because the Ubuntu 16.04 Dune packages are 2.4). If you encounter a snag because of Dune 2.3, please open an issue. (Or if you encounter any other build issues.)
> Is this the time to officially bump the dune version requirements to 2.4?
jenkins test this opm-common=162 please
(Maybe this works; probably not, because OPM/opm-common#162 introduces a new dependency for opm-simulators.)
Not until we build on vanilla 16.04 and can drop 14.04 support, please. At least not until I have investigated how well 2.4 backports.
ewoms builds fine against 2.3, you just need to add dune-localfunctions
jenkins build this opm-common=162 please
Great work! I will start testing it immediately.
jenkins build this opm-common=162 please
Some small status update: with @totto82's new well model, Norne with the current master takes about 20% longer on my machine than with the simulator which this branch adds (`flow`: 1349.8 seconds; `flow_ebos`: 1109.7 seconds)