Tuesday, November 26, 2013

The Signal and the Noise

Some time ago Zhiyong pointed me to Nate Silver's book "The Signal and the Noise", which is about the science of prediction. I'm now reading it again, and in fact I go back to it regularly, because it is, I believe, essential (and sobering) reading for anyone engaged in the modeling business.

One of the key things I took away from this book is the human tendency to psychologically "anchor" on any scenario we have put a lot of time and effort into constructing. This is a real danger for anyone who has toiled through the process of building a 3D model for a basin, where much data gathering, data entry, mulling over input parameters, resolving of IT issues, etc. is often necessary to reach the point where the model can be executed. This typically takes days to weeks in my experience. Then, depending on the size of the model, it might take hours to days to complete a single run. At the end of this process one is well and truly "anchored" on the particular scenario chosen in setting the model up. It is very hard, and very time consuming, to then go back and test alternative scenarios. Recent software and hardware improvements have made this easier, but it is still a difficult thing to do psychologically.

In training courses I am fond of a particular analogy for the model building process which involves some pictures of bridges. The idea is that any model is a framework of physical law that we use to connect the known (data, analogs, etc.) to the unknown (our target play or prospect). We need both good data and a good framework to build a good bridge and so get to the other side safely. In a way, the model algorithms encode prior knowledge (in the Bayesian sense) about how petroleum systems work in general, hopefully preventing us from over-fitting our data and indulging in the wildest of our fantasies.

While reading the book last week an old favourite movie appeared late one night on the TV: David Lean's classic "The Bridge on the River Kwai", with Alec Guinness in the lead role. For those who don't know this movie, the story describes a group of English prisoners in WWII Thailand being forced by the Japanese to build a rail bridge over the River Kwai. The martinet colonel (Guinness) keeps his soldiers alive, in the face of appalling mistreatment, by giving them the focus of building the bridge. At the end of the movie the bridge is complete and the first Japanese train is about to cross it. The British plan, all along, was to blow up the bridge with hidden charges just as the train crosses. However, when it comes down to it Guinness cannot bring himself to blow the bridge and tries to prevent it. He has become "anchored" to the bridge through the pain and toil of building it and is unable to see the "bigger picture" of hampering the Japanese war effort.

Happy Modeling

Monday, November 25, 2013

Using fluid inclusions to infer the presence of a paleo hydrocarbon column

Analysis of diagenetic fluid inclusions, whether by optical or chemical means, is a technique for learning something about a petroleum system. Its main utility is for old wells where no fluid samples are available and where all that remains may be some old cuttings in a store.

One of the best known methods is "fluid inclusion stratigraphy (FIS)" provided by Fluid Inclusion Technologies (FIT) in Tulsa, USA. FIS is one of many possible fluid inclusion analysis methods which can perhaps be grouped together under the term "Microshows".

A common question asked of fluid inclusion data is "Was there ever a hydrocarbon accumulation in my (now water-wet) reservoir?" In other words, "Is there a paleocolumn?" The government research body in Australia (CSIRO) uses the "grains with oil inclusions (GOI)" method, which involves counting the number of grains with oil inclusions visible through a microscope under UV light and expressing the result as a percentage (%GOI). CSIRO suggests a threshold of 3.5% GOI as the minimum consistent with a paleocolumn. From calibration studies in one province my company set a similar threshold for the FIS paraffin response some years ago.

Any comparison between optical and chemical indicators of fluid inclusion "strength" is difficult. For both practical and theoretical reasons we would not expect a simple relationship between the two. For one thing, an optical method such as %GOI counts the frequency of grains with visible oil inclusions, whereas FIS measures the concentration of volatile hydrocarbons (and other species) released by crushing a bulk sample. Obviously, a sample with a few big inclusions would give a smaller %GOI and a larger FIS signal compared with one that has many small inclusions. There are several other reasons why the measures are not equivalent. However, we would at least hope that they would give the same answer to the question of paleocolumn presence or absence.

The figure below shows the results of a comparison for 49 samples from 9 wells. In all cases where GOI indicated a paleocolumn, FIS agreed. Similarly, in most cases where GOI indicated no paleocolumn, FIS agreed. However, there were a few samples where FIS indicated a paleocolumn and GOI did not. From the location of these samples (all from one well) and the character of the signal, it is likely that this is a paleo gas-condensate zone and hence optically detectable oil inclusions are rare. This comparison is for one particular province and I cannot warrant that it would work out this way for other geological circumstances.

So what does all this mean for evaluation of a petroleum system? Clearly, if we found that the target reservoirs in dry holes had high fluid inclusion abundance, we might conclude that trap breach rather than lack of charge was the reason for failure. If the abundance is high enough, we could carry out further analysis to permit oil-source or gas-source correlation to confirm activity of the prognosed source rock or the presence of a previously unrecognised one. We would hope to be able to do this on simple extracts of the cuttings (rather than on the tiny amounts in fluid inclusions), but this isn't always feasible due to loss during storage or severe contamination with drilling mud.

Tuesday, August 13, 2013

How to Calculate "Organic Porosity" for a Shale?

Here is a simple formula for calculating "organic porosity" formed as a result of converting kerogen to petroleum. 

Organic porosity (% rock volume) = TR (fraction) × HI (mg/g TOC) × TOC (wt %) × 2.5 / 1.2 / 1150,

where TR is the transformation ratio (the fraction of the labile kerogen that has already been converted to petroleum), HI is the hydrogen index of the kerogen when it was immature, and TOC is the original TOC. The constant 2.5 is the rock density in g/cc, 1.2 is the kerogen density in g/cc, and 1150 is the equivalent HI of pure hydrocarbons (mg/g).

For a source rock with 5% initial TOC and an HI of 600, the organic porosity generated is 2.7% rock volume at TR = 0.5 and 5.4% at TR = 1 (full transformation). TR is usually calculated using a kinetic model. Original HI and TOC, before the source rock matured, can be obtained from an immature part of the source rock, typically up dip. Various methods have been proposed to estimate original TOC and HI from mature samples, but they are typically not reliable due to the assumptions made. That will be a subject for another post...
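As a sketch, the formula can be coded up directly. The function name and default densities below are my own choices (not from any standard library); the example reproduces the numbers quoted above:

```python
def organic_porosity(tr, hi0, toc0,
                     rho_rock=2.5, rho_kerogen=1.2, hi_hc=1150.0):
    """Organic porosity (% rock volume) created by kerogen conversion.

    tr   : transformation ratio (fraction, 0 to 1)
    hi0  : original (immature) hydrogen index, mg HC / g TOC
    toc0 : original TOC, weight %
    """
    return tr * hi0 * toc0 * rho_rock / rho_kerogen / hi_hc

# Worked example from the text: 5% initial TOC, HI = 600
print(round(organic_porosity(0.5, 600, 5), 1))  # 2.7 (% rock volume at TR = 0.5)
print(round(organic_porosity(1.0, 600, 5), 1))  # 5.4 (% rock volume at TR = 1)
```

Note that TR, HI, and TOC must all be the original (pre-maturity) values for the arithmetic to be consistent.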

The organic porosity may or may not actually be preserved in a shale. Studies have shown that source rocks with higher clay content (the KCF shale, the Shahejie shale of Bohai Bay) may not have preserved such porosity. This is perhaps because the ductile clay continues to allow compaction of the rock during petroleum generation. Most of the North American producing shales, such as the Eagle Ford and Barnett, have very low clay content, and some show significant early cementation, which may have prevented further compaction and helped preserve the organic porosity formed during hydrocarbon generation.

The organic porosity differs from normal inorganic porosity in that it is likely petroleum wet. This means high (near 100%) petroleum saturation is expected, which helps retain petroleum in the source rock.


Sunday, January 13, 2013

Why we should NOT use percentages for migration losses

Migration loss is one of the least constrained parameters in petroleum system analysis and modeling. When you ask a geologist how much of the expelled hydrocarbon volume could have been lost before reaching the traps, the answer is typically a wide range of percentages. I have heard numbers as low as 20% and as high as 98%. Here I want to argue that this is not a very useful way to look at migration loss, and perhaps even the wrong way.

In the following hypothetical case we have 4 prospects in an area, each with exactly the same geology (fetch area, geometry, migration distance, complexity of the carrier beds, ...), with the only exception that the estimated expulsion volumes in the fetch areas differ, as shown below, because of source rock variability.

Say we just drilled b and found 500 mmbls of oil in place, and we estimate 1 billion barrels were expelled in its fetch area. Therefore 500 mmbls (or 50% in percentage terms) were lost in the rock volume between the source and the reservoir. What would you then predict the volume of oil charge to be at prospect c, where 500 mmbls were expelled? Is it going to be 250 mmbls, since we would lose 50%, or zero, since we would lose 500 mmbls because the two have the same geology?

When I ask this question in my training classes, most of my students (mostly explorers) agree it should be zero charge at c, but a few take a bit more convincing. If the volume of rock the hydrocarbons migrated through is exactly the same, the lost volume (as residual saturation, and as what is trapped in micro or macro traps along the way) required before hydrocarbons reach the trap should be the same, i.e. 500 mmbls. For those who still insist on a percentage, I then ask them what would happen if the source rock expels only one barrel.

The implication is really important: in this case, half of the traps (2 of the 4) would not receive charge if we assume the same migration loss volume. But if we used a percentage for efficiency, we would have predicted that all traps have oil. Using percentages makes prospects associated with poor source strength look better, and vice versa, other things being equal.
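The contrast between the two assumptions can be sketched in a few lines of Python. Only the 1000 mmbls expelled at b (and the 500 mmbls found there) come from the example above; the other expelled volumes are hypothetical numbers I have made up for illustration:

```python
# Hypothetical expelled volumes (mmbls) per fetch area; only b's 1000
# comes from the example in the text, the rest are illustrative.
expelled = {"a": 1500, "b": 1000, "c": 500, "d": 400}

loss_volume = 500    # calibrated at b: 1000 expelled - 500 found in place
loss_fraction = 0.5  # the same calibration expressed as a percentage

for name in sorted(expelled):
    v = expelled[name]
    charge_vol = max(v - loss_volume, 0)  # fixed-volume loss model
    charge_pct = v * (1 - loss_fraction)  # percentage loss model
    print(f"{name}: volume model {charge_vol} mmbls, "
          f"percentage model {charge_pct:.0f} mmbls")
```

With these numbers the volume model predicts zero charge at c and d, while the percentage model cheerfully predicts 250 and 200 mmbls there, which is exactly the disagreement described above.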

Since it is very difficult, or impossible, to estimate migration losses, we tend to spend very little time considering them. Based on the above analysis, I recommend running scenarios with different migration loss volumes per unit fetch area to compare prospects. For a given loss volume, a certain number of prospects would not receive charge, and more prospects fail to receive charge as the assumed loss volume is increased. Prospects that tolerate a wide range of migration loss scenarios are safer bets than those that are highly sensitive to the assumed loss. This may be an effective way to include migration loss in prospect ranking, even if we cannot accurately quantify this huge uncertainty.
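The recommended scenario screening can be sketched as a simple sweep. The expelled volumes and the set of loss scenarios below are hypothetical, chosen only to show the pattern of prospects dropping out as the assumed loss grows:

```python
# Hypothetical expelled volumes per fetch area (mmbls); illustrative only.
expelled = [1500, 1000, 500, 400]

# Sweep a range of assumed migration-loss volumes and count how many
# prospects would still receive charge under each scenario.
for loss in (100, 300, 500, 800, 1200):
    charged = sum(1 for v in expelled if v > loss)
    print(f"loss = {loss:4d} mmbls -> {charged} of {len(expelled)} prospects charged")
```

A prospect that stays charged across the whole sweep is robust to the migration loss assumption; one that drops out at small loss volumes is not, and that ranking survives even though the true loss remains unknown.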