[[ Check out my Wordpress blog Context/Earth for environmental and energy topics tied together in a semantic web framework ]]

Monday, October 29, 2007

Discover Redux

When I developed the dispersive discovery model earlier this year, I lacked direct evidence for the time-invariant evolution of the cumulative growth component. The derivation basically followed two stages: (1) a stochastic spatial sampling that generated a cumulative growth curve, and (2) an empirical observation of how sampling size evolves with time, with the best fit assuming power-law growth. So with the obvious data readily available and actually staring me in the face for some time, from none other than Mr. Hubbert himself (hat tip to Mr. McManus), I believe this partial result further substantiates the validity of the model. In effect, the stage-1 part of the derivation benefits from a "show your work" objective evaluation, which strengthens the confidence level of the final result. Lacking a better analogy, I would similarly feel queasy trying to explain why rain regularly occurs if I could not simultaneously demonstrate the role of evaporation in the weather cycle. And so it goes with the oil discovery life-cycle, and arguably any other complex behavior.

The basic parts of the derivation that we can substantiate involve the L-bar calculation in the figure below (originally from):

The key terms include lambda, which indicates cumulative footage, and the L-bar, which denotes an average cross-section for discovery at that cumulative footage. This represents Stage-1 of the calculation -- which I had never verified with data before -- while the last lines labeled "Linear Growth" and "Parabolic Growth" provide examples of modeling the Stage-2 temporal evolution.
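As a rough paraphrase of that figure, writing h for the cumulative footage that lambda represents there and folding the cross-section terms into the constants c and k used in the histogram fit further down the page (the symbols in the figure differ, so treat this as a sketch rather than a transcription):

  Stage 1 (cumulative swept footage):  D(h) = c * h * (1 - exp(-k/h))
  Stage 2 (temporal evolution):        h(t) = a*t     for linear growth
                                       h(t) = a*t^2   for parabolic growth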

Since the results come out naturally in terms of cumulative discovery, it helps to integrate Hubbert's yearly discovery curves. So the figure below shows the cumulative fit,

while the original numbers came from this data set:


I did a least-squares fit to the curve that I eyeballed from the previous post, and the discovery asymptote increased from my estimated 175 to 177. I've found that generally accepted values for this USA discovery URR range up to 195 billion barrels in the 30 years since Hubbert published this data, which in my opinion indicates that the model has potential for good predictive power.
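For anyone who wants to reproduce that kind of fit, here is a minimal sketch using scipy. The data arrays are hypothetical placeholders (I have not transcribed the digitized points here), and the fitting function is the cumulative form of the dispersive expression that appears in the post below, whose asymptote works out to c*k:

  import numpy as np
  from scipy.optimize import curve_fit

  def cumulative_dispersive(h, c, k):
      # cumulative discovery (Gb) vs cumulative exploratory footage h (feet)
      # D(h) = c*h*(1 - exp(-k/h)); asymptote -> c*k as h -> infinity
      return c * h * (1.0 - np.exp(-k / h))

  # hypothetical placeholder points; substitute the digitized cumulative data
  h_data = np.array([0.2e9, 0.5e9, 1.0e9, 1.5e9, 2.0e9])   # feet drilled
  D_data = np.array([48.0, 94.0, 126.0, 140.0, 148.0])     # Gb discovered

  p0 = [250e-9, 0.7e9]   # starting guess: c in Gb per foot, k in feet
  (c_fit, k_fit), _ = curve_fit(cumulative_dispersive, h_data, D_data, p0=p0)
  print("fitted URR estimate (c*k): %.0f Gb" % (c_fit * k_fit))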

So at a subjective level, you can see that the cumulative plot really shows the model's strengths, both from the perspective of the generally good fit for a 2-parameter model (asymptotic value + cross-section efficiency of discovery), and in terms of the creeping reserve growth, which does not flatten out as quickly as the exponential does. This slow apparent reserve growth matches empirical reality remarkably well. In contrast, the quality of Hubbert's exponential fit appears way off when plotted in the cumulative discovery profile, only crossing at a few points and reaching an asymptote well before the dispersive model does.


Just like we were taught in school: provide a hypothesis and then try to verify it with data. Unlike the wing-nuts who believe that school only serves to Indoctrinate U. (seriously, click on the link if you want to read my review of one of the worst documentaries in recent memory)

Tuesday, October 23, 2007

Blogorithm

For those wishing to seek out non-replenishable energy news, The Oil Drum surpassed PeakOil.com in quality of content long ago. In particular, TOD remains the only practical place to hash out arguments and develop what I call blogorithms and other models to predict evolving energy usage (shoot me for pinching this term from a fundie anti-science web site). I can't say that TOD has become the cat's pajamas of energy discussion, but you typically know you have a good thing going when someone starts attacking your approach.

As a case in point consider this commentary, A Terrifying Prospect, courtesy of TOD.
These factors have led to criticism of the modelling methods of peak oil theorists. Cambridge Energy Research Associates, a US-based energy consultancy, is damning in its assessment, saying that peak oil theory is garbage. Highly-respected (???) energy economist Michael Lynch has described peak oil theorists as practising pseudo-science and claims that: "The quantitative models used by peak oil theorists would earn a university student in elementary statistics a failing grade".

Lynch also questions the quality of peak oil research, noting that nearly all of it has been published on the internet rather than in peer-reviewed journals.
I will call this quote of Lynch's a keeper, and use it as a yardstick for how far we have progressed. I know for certain that Lynch means PeakOil.com or TOD when he refers to the "quality" of internet-only research, recalling several on-line discussions I had with Mr. Lynch himself at PO. And with even more certitude, I assert that Lynch exhibits pure projection in accusing us of practicing pseudo-science. He, not we, lacks peer review in his published work; I dare anyone to find an article of his that goes beyond rhetorical flourishes. Failing grade, my foot -- the pseudo-statistician's shoe would fit much better on Lynch's foot.

The article goes on to say:
The dearth of peer-reviewed scientific work from the peak oil theorists is frustrating to the layperson trying to form an opinion on the subject - particularly when reputable organisations such as the US Department of Energy don't predict peak oil until after 2030. For some readers, this frustration will be compounded by Professor Heinberg's books, which, although packed with fascinating information, are astonishingly light on references.

His focus on generalist publications means that - surprisingly - his work has never appeared in peer-reviewed academic journals. Professor Heinberg explains the minimalist referencing in his books by saying that: "I didn't have rafts of graduate students to go out and look things up for me."
I don't find this argument convincing. For one, you need peers to do peer reviews, and the reality speaks to the dearth of any peers ready to defend a contrarian or even a supporting position in the scholarly world. For this narrow, arcane topic of oil depletion, everything points to a coalescing of knowledge and a universal truth within arm's reach, and we shouldn't have to sell ourselves short. Remember that the oil companies and their funded research have no desire to prove a limited supply of fossil fuels, so only independent researchers (amateurs and former oil industry types) and perhaps a few daring academics remain to pursue the math and statistics. In essence, and as a prerequisite, first you need to care, which the USGS by contrast doesn't seem to, having sponsored some awful research.

And as for the lack of references, you really need some quality work to reference. We all know the basic citations such as Hubbert and others, but some of the new techniques that TOD'ers employ really leapfrog the historical knowledge base. Just recently, you could find researchers sifting through Google Earth visuals to count oil rigs, an approach which I contend has no historical analogy. And one of my favorite references remains a mid-70's editorial from Fishing Facts magazine, which essentially encapsulated Hubbert's idea for the layman. Furthermore, when it comes to developing a good blogorithm, I hope I don't sound too arrogant in saying that Isaac Newton didn't need a heck of a lot of references as he engaged in a paradigm shift in thought. Perhaps in a few years, as this stuff shakes out, yes, we will build up a good citation index, but at this early stage, web-based research remains at the "apple falling off a tree" stage.

In keeping with this spirit, Jerry McManus contributed an interesting bit of data to a TOD discussion started by Luis de Sousa. He dug this chart out of an old Hubbert article from some library stacks, showing a histogram of USA discoveries plotted against cumulative drilled footage.

I like this curve because I think it substantiates the dispersive growth model of oil discovery that I posted a few months ago, both here and to TOD.

The curve that Jerry found in Hubbert's work has the characteristic shape of a cumulative dispersive swept region in which we remove the time-dependent growth term, retaining the strictly linear mapping needed for the histogram.


For the solution, we get:
  dD/dh = c * (1 - (1 + k/h) * exp(-k/h))
where h denotes the cumulative depth.

I did a quickie overlay with a scaled dispersive profile, which shows the same general shape.


The k term has significance in terms of an effective URR, as I described in the dispersive discovery model post; integrating the expression above, the cumulative discovery asymptotically approaches c*k. I eyeballed the scaling as k=0.7e9 and c=250, so I get 175 billion barrels instead of the 172 that Hubbert got.
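As a quick sanity check on that claim (a sketch, not the original script), numerically integrating the per-foot discovery rate with those eyeballed constants should come out close to the c*k asymptote:

  import numpy as np

  c = 250.0    # barrels discovered per foot of exploratory drilling (eyeballed)
  k = 0.7e9    # characteristic cumulative footage in feet (eyeballed)

  h = np.linspace(1e6, 1e12, 2_000_000)                 # cumulative footage grid
  dDdh = c * (1.0 - (1.0 + k / h) * np.exp(-k / h))     # per-foot discovery rate
  total = np.trapz(dDdh, h)                             # barrels

  print("URR from integration: %.1f billion barrels" % (total / 1e9))
  print("URR from c*k:         %.1f billion barrels" % (c * k / 1e9))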

But here comes the weird part: those same charts show up in that obscure Fishing Facts article dated 1976, where the magazine's editor decided to adapt the figures from a Senate committee hearing that Hubbert was invited to testify at.
Fig. 5 Average discoveries of crude oil per foot for each 100 million feet of exploratory drilling in the U.S. 48 states and adjacent continental shelves. Adapted by Howard L. Baumann of Fishing Facts Magazine from Hubbert 1971 report to U.S. Senate Committee. "U.S. Energy Resources, A Review As Of 1972." Part I, A Background Paper prepared at the request of Henry M. Jackson, Chairman: Committee on Interior and Insular Affairs, United States Senate, June 1974.

Fig.6 Estimation of ultimate total crude oil production for the U.S. 48 states and adjacent continental shelves; by comparing actual discovery rates of crude oil per foot of exploratory drilling against the cumulative total footage of exploratory drilling. A comparison is also shown with the U.S. Geol. Survey (Zapp Hypothesis) estimate.
Like I said, this stuff is within arm's reach and has, in fact, been staring us in the face for years.

Monday, October 15, 2007

Mini vs. Big

A local yokel Twin Cities columnist has regular conniptions and hissy fits over what she considers deranged bicyclists. This curiously comes after the Minneapolis Parks department had opened up the grand rounds for a September bike tour. Taking cues from what places like Manhattan have regularly scheduled over the years, somebody evidently thought the biking citizenry should get rewarded with a special treat of cordoned-off roads. I checked the event out and noticed a huge turnout of bikers; in certain places I found it hard to cross the peloton, and found it a touch unnerving to ride in, mainly due to the huge disparities in the riders' speeds.

Trying to understand her obsession with bicyclists, I think the StarTribune columnist, Katherine Kersten, has tried to frame and conflate other recent Critical Mass events with the sanctioned ride. And another local assbag blogger thinks it has something to do with prepping "greens" for bad behavior when the RNC comes into town next year. I guess what better way to practice intimidating conservatives than a bunch of bicyclists roaming the streets?

As an antidote to that attitude, I enjoyed reading David Byrne's recent blog entry concerning a biking multimedia event held in NYC. Byrne links to another report on the event, where we learn from famed columnist Calvin Trillin:
"I have been riding a bike in Manhattan for 40 years, and I have yet to shift gears."
All I can say is, Katherine Kersten ain't no Calvin Trillin.

Go out and ride, take over the planet.


Update: Needing someplace to refer to her as a snaggled-tooth witch, I spotted this comment on another anti-biking tirade that Kersten penned:
Oh great, you gas-guzzling troll, now all the reactionaries that read this crap are going to run me down when I’m trying to bike to work in the morning… not like you have to worry about that since you ride to work on a broomstick.

Tuesday, October 09, 2007

Black Swan

I've noticed a bit of a buzz surrounding the book "The Black Swan: The Impact of the Highly Improbable" by Nassim Nicholas Taleb. I haven't read it, but I think I understand the premise -- which asserts that straight probability and statistical analyses break down outside of a closed-world assumption. In other words, when one considers that something odd can happen outside the scope of your well-understood reality, the infinite possibilities out there can bring about some surprising new eventualities. The classic case that Taleb brings up concerns the previously undiscovered Australian black swan, which European naturalists had predicted would never occur, as the genetic probability of only white swans occurring amounted to 100%. But of course, the new species of black swan turned the old paradigm on its ear, and the conventional math wisdom proved pointless.

Due to the popularity of The Black Swan book, a certain species of cornucopian tends to think the swan allegory portends optimism for our oil future. I first saw it here on TOD.

I assert that instead of a Black Swan representing new discoveries of oil, we have here a Black Passenger Pigeon. As I understand it, because of a large population of swans, you could get mutations leading to the evolution of a black species. I would consider this rare but mathematically possible. But the possibility that a black passenger pigeon would suddenly appear, as the overall population gets decimated and eventually goes extinct, seems vanishingly small. And this has nothing to do with birds being a renewable resource, but rather with the fact that we have pretty much scoped out every hiding area on the earth. See the Dispersive Discovery model to back this up. (Note that the original TOD commenter said "couldn't follow I'm afraid" to my counter-allegory.)

By the same token, the idea that large oil reservoirs get discovered first presents an optical illusion of sorts, if not another inverse Black Swan. The unlikely possibility of a huge new find has less to do with intuition than with the fact that we have probed much of the potential volume. And large finds occur at the peak of the dispersively swept volume.

Looking at some of the Amazon reviews for The Black Swan, I have to agree with the author on the misapplication of the Normal distribution for many situations.
The whole point is that traditional stat, econ, finance techniques are mostly around the first moment (mean) but the distributions in finance tend to be non-normal and it's the risk that we should pay more attention to. That's a point few people would disagree with. What the author may not have known is that there are stat techniques out there that handle all the issues mentioned - while it's true that there's a lot of room for improvement, it's misleading to say that this is an area ignored by the academics and practitioners.
According to other reviews, the author disses math and religion in equal amounts -- something that displeases such a wide range of people must have some rhetorical substance and probably makes for a worthwhile read (his earlier book "Fooled by Randomness" looks interesting as well).



As I type this, I have the radio on with Sam Seder subbing for Mike Malloy and talking to Michael Klare, the author of several oil depletion books, including "Blood and Oil". Sam skewered Freddie Thompson and his presidential debate assertion that we have plenty of oil left.









Monday, October 01, 2007

Global Update of Dispersive Discovery + Oil Shock Model

Jean Laherrere of ASPO France last year presented a paper entitled "Uncertainty on data and forecasts". A TOD commenter grabbed the following figures from pp. 58 and 59:


I finally put two and two together and realized that the NGL portion of the data really had little to do with typical crude oil discoveries, which only occasionally coincide with natural gas findings. Khebab has duly noted this, as he always references the oil shock model with the caption "Crude Oil + NGL". Taking the hint, I refit the shock model to better represent the lower peak of crude-only production data. This essentially scales back the peak by about 10%, as shown in the second figure above.

So I restarted with the assumption that the discoveries comprised only crude oil, and that any NGL would come from separate natural gas discoveries. This meant that I could use the same discovery model on the discovery data, but needed to reduce the overcompensation on extraction rate to remove the "phantom" NGL production that had crept into the oil shock production profile. This will essentially defer the peak because of the decreased extractive force on the discovered reserves.

I fit the discovery plot by Laherrere to the dispersive discovery model with a cumulative limit of 2800 GB and a cubic-quadratic rate of 0.01. This gives the blue line in the following figure.
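For illustration, here is how I would sketch that curve in a few lines. Note that my reading of the "cubic-quadratic rate of 0.01" as a sixth-power growth term u(t) = (0.01*t)^6 on a dimensionless sampled depth, and of the 2800 GB limit as the cumulative asymptote, is an assumption on my part, so treat the parameterization as illustrative rather than a transcription of the actual fit:

  import numpy as np

  URR = 2800.0      # GB, assumed cumulative limit (asymptote)
  a, n = 0.01, 6    # assumed growth rate and power for the "cubic-quadratic" term

  t = np.arange(1, 201)                      # years from an assumed drilling origin
  u = (a * t) ** n                           # dimensionless sampled depth (assumed)
  D = URR * u * (1.0 - np.exp(-1.0 / u))     # cumulative dispersive discovery, GB
  yearly = np.diff(D, prepend=0.0)           # yearly discovery curve, GB/yr

  print("peak discovery at t =", int(t[np.argmax(yearly)]), "years after the assumed origin")
  print("cumulative at t = 200: %.0f GB" % D[-1])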

For the oil shock production model, I used {fallow,construction,maturation} rates of {0.167,0.125,0.1} to establish the stochastic latency between discovery and production. I tuned to match the shocks via the following extraction rate profile:

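To make the mechanics concrete, here is a minimal sketch of the shock model machinery as I understand it from the earlier posts: the discovery input passes through three first-order exponential lags (fallow, construction, maturation at the rates above), and the matured reserves are then drawn down by a time-varying extraction rate. The triangular discovery input and the flat extraction rate below are placeholders for illustration only; the actual inputs are the fitted dispersive discovery curve and the shock profile in the figure above:

  import numpy as np

  dt = 1.0                                      # one-year time steps
  lag_rates = [0.167, 0.125, 0.1]               # fallow, construction, maturation

  def first_order_lag(inflow, rate):
      # d(stock)/dt = inflow - rate*stock; the outflow rate*stock feeds the next stage
      stock, outflow = 0.0, np.zeros_like(inflow)
      for i, x in enumerate(inflow):
          stock += (x - rate * stock) * dt
          outflow[i] = rate * stock
      return outflow

  years = np.arange(1900, 2101)
  # hypothetical triangular discovery input (GB/yr); the real input is the
  # dispersive discovery curve fitted above
  discoveries = np.interp(years, [1900, 1965, 2100], [0.0, 55.0, 5.0])

  flow = discoveries
  for r in lag_rates:                           # discovery -> fallow -> construction -> maturation
      flow = first_order_lag(flow, r)

  # flat placeholder extraction rate; the actual shock profile with its dips and
  # jumps is what the figure above shows
  extraction = np.full(years.shape, 0.04)
  reserves, production = 0.0, np.zeros_like(flow)
  for i, x in enumerate(flow):
      reserves += (x - extraction[i] * reserves) * dt
      production[i] = extraction[i] * reserves

  print("modeled production peak year:", years[np.argmax(production)])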
As a bottom line, this estimate fits in between the original oil shock profile that I produced a couple of years ago and the more recent oil shock model that used the perhaps more optimistic Shell discovery data from earlier this year. I now have confidence that the Shell discovery data, which Khebab had crucially observed carried the cryptic small-print scale "boe" (i.e. barrels of oil equivalent), should probably feed the total Crude Oil + NGL production profile as a whole. Thus, we have the following set of models, which I alternately take blame for (the original mismatched model) and now dare to take credit for (the latter two).

Original Model (peak=2003) < No NGL (peak=2008) < Shell data of BOE (peak=2010)

I still find it endlessly fascinating how the peak positions of the models do not show the huge sensitivity that one would expect given the large differences in the underlying URR. When it comes down to it, shifts of a few years don't mean much in the greater scheme of things. However, how we conserve and transition on the backside will make all the difference in the world.