
Tuesday, April 03, 2007

How to generate convenience

To add some context to the preceding post on matching the dynamics of a Logistic curve, I will deconstruct, step by step, the math behind the governing Verhulst equation. Although the equation is deceptively simple in both form and final result, you will quickly see how a "fudge" factor gets added to a well-understood and arguably acceptable 2nd-order differential equation just so we end up with a convenient closed-form expression. This convenience of result essentially robs the Peak Oil community of a deeper understanding of the fundamental oil depletion dynamics. I consider that a shame: not only have we wasted many hours fitting to an empirical curve, we also never gave the alternatives a chance -- something that in this age of computing we should never condone.
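As a preview of that computing point: no alternative needs a closed-form expression, because any candidate rate equation takes only a few lines to integrate numerically. Here is a minimal sketch along those lines (the constants and the linear-damped alternative are illustrative assumptions, not fitted values):

    import numpy as np
    from scipy.integrate import solve_ivp

    a, K = 0.1, 1000.0        # growth constant and carrying capacity (illustrative)
    b = 0.004                 # damping constant for the alternative (illustrative)

    # Verhulst/Logistic: dD/dt = a*D*(1 - D/K), with D = cumulative discovery
    def verhulst(t, y):
        D = y[0]
        return [a * D * (1.0 - D / K)]

    # Linear-damped alternative: d2D/dt2 = a*(dD/dt) - b*D
    def linear_damped(t, y):
        D, R = y              # cumulative discovery and its rate
        return [R, a * R - b * D]

    t = np.linspace(0.0, 150.0, 1500)
    log_sol = solve_ivp(verhulst, (0.0, 150.0), [1.0], t_eval=t)
    alt_sol = solve_ivp(linear_damped, (0.0, 150.0), [1.0, 0.1], t_eval=t)

    # Rate profiles: the logistic one is symmetric about its peak; the
    # alternative is not (and is only physical while the rate stays > 0).
    rate_logistic = a * log_sol.y[0] * (1.0 - log_sol.y[0] / K)
    rate_alt = np.maximum(alt_sol.y[1], 0.0)

From there, comparing either rate profile against the discovery data is an ordinary curve-fitting exercise; the closed form buys nothing.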

Premise
If we consider the discovery rate dynamics in terms of a proportional growth model, we can easily derive a 2nd-order differential equation whereby the damping term gets supplied by an accumulated discovery term. The latter term signifies a maximum discovery (or carrying) capacity that serves to eventually limit growth.
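In symbols, one plausible way to write this (taking D(t) as cumulative discovery and a, b as positive constants -- notation assumed for illustration, since the equation is described here in words):

$$\frac{d^2D}{dt^2} = a\,\frac{dD}{dt} - b\,D$$

The rate dD/dt grows in proportion to itself until the accumulated discovery D builds the damping term up enough to choke off the growth.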

Now, if we refactor the Logistic/Verhulst equation into the same 2nd-order form, it appears very similar, apart from a conspicuous non-linear damping term (the rightmost term in the equation below).
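To see where that term comes from, start with the Verhulst equation for cumulative discovery, dD/dt = aD(1 - D/K), and differentiate once with respect to time:

$$\frac{d^2D}{dt^2} = a\,\frac{dD}{dt} - \frac{2a}{K}\,D\,\frac{dD}{dt}$$

Where the premise equation damps with a term proportional to D alone, the refactored Verhulst equation damps with the product of D and dD/dt -- a non-linear damping term.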

That non-linear term makes absolutely no sense in any realistic setting. The counter-argument rests on a "you can't have your cake and eat it too" finding. On the one hand, we assume an exponential growth rate based on the amount of instantaneous discoveries made. But on the other hand, believers in the Logistic model immediately turn around and modulate the proportional growth characteristics with what amounts to a non-linear "fudge" factor. This just happens to slow the contrived exponential growth with another contrived feedback term. Given the potentially chaotic nature of most non-linear phenomena, we should feel lucky that we get a concise result at all. And to top it off, the fudge factor leads to a shape that becomes symmetric on both sides of the peak, since it modulates the proportional growth equally around dD/dt=0, with an equal and opposite sign. As the Church Lady would say, "How convenient!" Yet we all know that the downside regime has to have a different characteristic than the upside (see the post on cubic growth for the explanation, and for why the exponential growth law may not prove adequate in the first place).
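The symmetry shows up directly in the closed-form solution. The Verhulst equation above integrates to

$$D(t) = \frac{K}{1 + e^{-a(t - t_0)}}, \qquad \frac{dD}{dt} = \frac{aK}{4}\,\mathrm{sech}^2\!\left(\frac{a(t - t_0)}{2}\right)$$

and sech^2 is an even function of (t - t_0): the rise to the peak exactly mirrors the decline after it.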

Unfortunately, this "deep" analysis gets completely lost on the users of the Logistic curve. They simply like the fact that the easily solvable final result looks simple and gives them some convenience. Even though they have no idea of the underlying fundamentals, they remain happy as clams -- and leave me, and people like R2 and monkeygrinder, as angry as squids.


4 Comments:

Anonymous said...

Did you ever get an empirical lognormal distribution for the discovery field sizes?

9:16 AM  
@whut said...

That doesn't matter. Field sizes only add statistical noise on top of the curve.

Besides, the log-normal only comes about from real geology and has nothing to do with the dynamics of discovery.


I can kind of see where you are coming from, considering the old adage that "you find big things first", but then you have to ask the question, "first relative to what?". So I have yet to be convinced that finding a big field will have any more than a subtle effect on the profile of the curve, even though it will have a significant effect on the real discovery data. Remember, the model is always smooth because it deals with probability, while the data is noisy because the big fields can swamp yearly discoveries.

Try a little mind experiment. Say that you came up with a discovery model that only included small fields, and then you plotted the number of discoveries over time. Then say you did the same with medium fields and big fields. Next, considering the huge surface area of the earth, would you think the model would change much between the regimes? The analogy would be how long it would take you to find a needle versus a nail versus a bolt in a haystack. Then add to that, and say you were looking for one of 10,000 needles vs 100 nails vs 1 bolt. The point is that you will run into any one of these, and the average find will likely be a weighted average of each one of these possibilities -- and that is what goes into the profile.
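A quick numerical version of that mind experiment, with made-up counts and sizes purely to illustrate the weighted-average point:

    import random

    # Three size classes: many needles, some nails, one bolt
    # (counts and sizes invented for illustration).
    sizes  = {"needle": 1.0, "nail": 100.0, "bolt": 10000.0}
    counts = {"needle": 10000, "nail": 100, "bolt": 1}

    # Pool every individual find and sample discoveries at random.
    population = [sizes[k] for k, n in counts.items() for _ in range(n)]
    draws = [random.choice(population) for _ in range(100000)]

    # The average find converges to the count-weighted mean of the
    # classes; a single big find swamps a yearly total (noise), but
    # the smooth profile only ever sees this weighted average.
    print(sum(draws) / len(draws))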

10:47 AM  
Anonymous said...

Actually, I buy the stochastic discovery, so big fields don't have to be found first. Economics would probably put them into development first, but that is another issue.

I'm more interested in understanding what the distribution of discovery has been. Secondly, it would be interesting to know how much has been explored vs. how much is left unexplored. From that you could generate a potential field discovery distribution which could constrain the future discovery rate.

Your shock model is the freshest thing to come along in a while. I am here via TOD, just so you know.

11:41 AM  
@whut said...

The problem is that there is little discovery data prior to 1900. Otherwise, look at this post for how we can infer discovery data during the 1800s.

8:50 PM  



"Like strange bulldogs sniffing each other's butts, you could sense wariness from both sides"