“A Prank, A Cigarette and a Gun”


An article about the murder of Meredith Kercher. I’ve agreed not to say much, but if you’re interested in this case, this is the most important read of all. The truth of what happened is in here.

Click below:

The So-called “Best Fit Report”

by Sigrun M. Van Houten

What is Best Fit Analysis? A quick intro

Students of statistics and stochastic analysis will recognize the term “Best Fit” as a statistical construct used to determine the most probable curve through a scatter plot. The clandestine services work in the same manner when they seek the most probable narrative of events and information about that narrative is limited or inaccessible: one assesses the information that is available and constructs a probabilistic scatter plot. Once done, it is possible to draw a “line” that best fits those data points. Here, the data points making up the scatter plot are individual facts or pieces of evidence, each with the probability it carries as an inference toward a larger narrative. And the “line” drawn as a best fit is the most likely narrative of events.
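For readers who want the statistical picture concretely, here is a minimal sketch of the literal version of the idea, a least-squares line fit; the data and parameters are invented purely for illustration:

```python
# A literal "best fit": given noisy (x, y) observations, find the line
# minimizing total squared error. The BFA analogy: evidence items play
# the role of the points, and the narrative plays the role of the line.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 25)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)  # noisy observations

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares best-fit line
print(f"best fit: y = {slope:.2f}x + {intercept:.2f}")
```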

This procedure has parallels in normative notions well understood in western law for some time. Namely, it deals with probative value, which, though perhaps not used in the strict legal sense, is used here as a catch-all to describe each data point. Each data point reflects the probability of a given piece of evidence. But what do we mean by “probability of a given piece of evidence”? In Best Fit Analysis (BFA) we begin by constructing a hypothesized narrative. When applied to criminology, the hypothesized narrative usually presents itself fairly easily, since it is almost always the “guilt narrative” for a given suspect or suspects in a crime. In this short introduction I will show how BFA can be used in criminology. The advantage there is that rather than sorting through innumerable hypotheses, as is common in the clandestine services, we usually have a hypothesis handed to us by an accusation or charge. We can then use BFA to test whether that narrative is the most likely one and, with perturbations of the same, identify alternative narratives more likely to be correct.

Norms of western law are dated in some cases, and some have not been updated for a long, long time. One of those areas is probative value. Typically, Courts of western nations presume that a piece of evidence has “probative value” if it points to a desired inference (which may be a guilt narrative or some more specific component thereof). I’m not an attorney, so I can’t categorically state whether the concept of a “desired inference” refers to an overall guilt narrative or simply to the odds that the evidence points to a guilt narrative. But I can say that in practice it is almost always used in the sense of an overarching narrative or reality.

A case in point is a famous one in which a man was accused of murdering his wife during a divorce. It turned out that his brother had actually committed the crime. But once the brother was convicted, an attempt was made to convict the husband on the accusation that he had “contracted” with his brother to commit the crime and end his divorce case favorably. In the second trial of the husband, the evidence was almost entirely circumstantial, and the jury relied heavily on an increase in phone activity between the husband and his brother leading up to the murder. Normally the brothers had not spoken on the phone often, and there was a clear and sudden increase in the frequency of calls. The jury interpreted this as collusion and convicted the husband of murder. Thus, when brought to Court, the desired inference of the testimony and phone records was that collusion existed. This is a piece of evidence being used to point to a guilt narrative. The problem, however, was that it was never shown why collusion should be a more likely inference than simple distress over a divorce. It is not unusual for parties in a divorce to reach out to family and suddenly increase their level of communication at such a time. In other words, on the face of it, one inference was just as likely as the other.

What legal scholars would say is that this is a reductionist argument and fails because it does not take into account the larger “body of evidence”. Unfortunately, this is mathematically illiterate and inconsistent with the proper application of probability. This is because it takes a black and white view of “reduction” and applies it incorrectly, resulting in a circularity condition. The correct answer is that

… One takes a body of evidence and reduces it to a degree sufficient to eliminate circularity and no further.

In other words, it is not all or nothing. In fact, this kind of absolutist understanding of “reductionist argumentation” is precisely what led to the results of the Salem Witch Trials. In those cases, probative value was ascribed based on a pre-existing hypothesis or collection of assumptions; essentially a cooked recipe for enabling confirmation bias either for or against guilt.

To explain what we mean: in the case of the phone calls between brothers, one cannot use a hypothesized narrative (the inference itself) to support the desired inference. That is circularity. But one also cannot reduce the evidence to such a degree that the body of evidence in toto is not weighed on the merit of its parts. From the perspective of probability theory, this means we must first determine, as an isolated question, whether the probability that the phone calls were for the purpose of collusion is greater than the probability that they were due to emotional distress. And it must be something we can reasonably well know and measure. While we may never attach precise numerical values to these things, it must at least be an accessible concept. Once we’ve looked at the odds of each of the two possible inferences, we can ask which is more likely. Unless the odds that the calls were for the purpose of collusion exceed the odds that they were for emotional support, there can be no probative value (in the sense we are using that term here).

The reason for the “isolation” is that we cannot determine the aforesaid odds by using the inference, or the larger narrative, to support those odds, because the narrative is the hypothesis itself. Having said that, once we have done this, if we can show that the odds are greater that the calls between brothers were for the purpose of collusion, even if the difference in probability between the two inferences is very small, the phone calls can then be used to assess the likelihood of the guilt narrative by considering them in the context of the body of knowledge. In other words, if we could associate numbers with this analysis as a convenience for illustration: if we have 10 pieces of evidence each bearing, say, only a 5% probability difference favoring the guilt narrative, it might be possible nonetheless to show that the guilt narrative is the most likely narrative. We consider all evidence, each piece with its own net odds, in order to frame the odds of the guilt narrative. And we are therefore using reduction only to the extent that it excludes circularity, and no more. Both the number of evidentiary items and the odds of each matter. If we had 3 pieces of evidence each bearing a net probability of 90% favoring a guilt narrative, it might be just as strong as 10 pieces bearing a net probability difference of only 5%. And it is these odds that must be left to the jury, as this is not a mathematical or strictly legal exercise but an exercise in conscience and odds.
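To make the trade-off between the number of items and the strength of each concrete, here is a toy calculation under a naive independence assumption. This is my own illustration, not a formal part of BFA; each item’s probability is how strongly it points to the narrative rather than its alternative, with 0.5 meaning zero probative value:

```python
import math

def narrative_odds(evidence_probs, prior=0.5):
    """Combine independent evidence items into a posterior probability
    for a narrative. Toy model only: real evidence is rarely independent."""
    log_odds = math.log(prior / (1 - prior))
    for p in evidence_probs:
        log_odds += math.log(p / (1 - p))  # each item's likelihood ratio
    odds = math.exp(log_odds)
    return odds / (1 + odds)  # back to a probability

# Ten weak items (55% each, a 5-point edge) vs. three strong ones (90%):
print(narrative_odds([0.55] * 10))  # ~0.88
print(narrative_odds([0.90] * 3))   # ~0.999
```

Under this toy model both bundles push the narrative well past the prior, which is the point above: count and strength trade off against each other, and neither alone settles the matter.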

Sadly, it is routine practice in western Courts to employ probative value in such a manner as to establish in the jury’s thinking a circularity condition, whereby the larger narrative of guilt or innocence is used to substantiate the probative value of individual pieces of evidence. The way to control this is for the understanding of probative value to be modernized, and for Judges to be required to rule inadmissible any evidence that either does not point in any direction (net odds of 0%) or points in a different direction than the desired inference. This is a judgment call that can only be left to the Judge, since leaving it in the hands of the jury effects prejudice by its very existence. While there is lip service to treating probative value as we’ve described, it appears to almost never be followed in practice, and most laws and Court regulations permit Judges to use their “discretion” in this matter (which, in practice, amounts to accepting evidence with zero probative value). Standards are needed to constrain the degree of discretion seen in today’s Courts and to render Judges’ rulings on probative value more consistent and reliable. One way to do this is to treat evidence as it is treated under BFA.

While many groups that lobby and advocate against wrongful conviction cite all sorts of reasons for it, tragically they seem to be missing the larger point: these underlying, structural and systemic issues surrounding probative value are the true, fundamental cause of wrongful conviction. For without proper filtering of evidence, things like prosecutorial misconduct, bad lab work, etc. find their way to the jury. It is inevitable. But the minute you mention “structural” or “systemic” problems, everyone scatters like scared chickens. No one wants to address the need for major overhauls. But no real improvement in justice will come until that happens.

Thus, with BFA in the clandestine context, we take a large data dump of everything we have. Teams go through the evidence to eliminate whatever can be shown on its face to be false. Then we examine each piece of evidence for provenance and authenticity, again only on what can be shown on its face. I’m condensing this process considerably, but that is the essence of the first stage. We then examine each piece in relation to all advanced hypotheses and assign odds to each. Once done, we look at the entire body of evidence in the last stage to determine which of the narratives (hypotheses) requires the fewest assumptions to make it logically consistent. The one with the fewest assumptions is the Best Fit. If we were to graph it, we would see a line running through a scatter plot of probabilistic evidence. That line represents the most likely narrative. On that graph, assumptions appear as “naked fact” and are “dots” to be avoided.
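No formal algorithm is given above, so the following is only my own schematic of that staged pipeline; the evidence dicts, the explains callbacks and the assumption-counting are all invented for illustration:

```python
def best_fit(evidence, hypotheses):
    """Pick the hypothesis needing the fewest extra assumptions.
    evidence: dicts pre-screened for facial falsity and authenticity.
    hypotheses: (name, explains) pairs, where explains(item) says whether
    the narrative accounts for the item without an extra assumption."""
    vetted = [e for e in evidence
              if e["authentic"] and not e["facially_false"]]
    scored = []
    for name, explains in hypotheses:
        assumptions = sum(1 for e in vetted if not explains(e))
        scored.append((assumptions, name))
    return min(scored)  # (fewest assumptions, best-fit narrative)

# Tiny worked call with made-up items:
evidence = [
    {"id": "phone_records", "authentic": True, "facially_false": False},
    {"id": "alibi", "authentic": True, "facially_false": False},
]
hypotheses = [
    ("guilt", lambda e: e["id"] == "phone_records"),
    ("innocence", lambda e: True),
]
print(best_fit(evidence, hypotheses))  # -> (0, 'innocence')
```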

To see a good example of how BFA is employed, you can see my work on the Lizzie Andrew Borden, Jon Benet Ramsey, Darlie Routier and Meredith Kercher cases. That this method is remarkably more effective than what we see in police investigations and Courts is well known to those who have used the technique for at least three decades now. But it has remained somewhat under the radar of the general public because of its origins. My hope is that through public awareness this method can be applied to criminology and jurisprudence, resulting in far greater accuracy in determining what actually occurred during criminal acts, especially in Capital cases where the added overhead is well worth it.

~ svh

Notice: I am told that Mozilla users may see a copy of this report with sections of text missing. I recommend that Mozilla users download the pdf and view it on their desktop. – kk


Big Sis modeling in … an oilfield. Our second tour of Bakken with our dad.

I’ve written on the petroleum issue before, and the more I learn, the worse it looks. Yes, there is good news about unconventional oil, like tight oil. And there are all kinds of alternative energies out there, too. But the choices for humanity are narrowing to the point that open, frank discussion about what is going on is desperately needed.

First, I should mention something about the developments here in the States regarding new sources of oil that have everyone in the industry so excited. These sources are unconventional oil, and there are two types. First, there is what is called shale oil, AKA “tight oil”. This is really just conventional petroleum that happens to be found inside shale rock. The only reason it hasn’t already been exploited is that the drilling techniques needed to get to it are more complicated than a simple vertical bore shaft drilled in one spot, where the petroleum is “loose” enough to flow into the pipe on its own. With tight oil, not only do you have to drill horizontally to exploit the roughly horizontal orientation of the shale rock, you have to “encourage” the oil to move into the bore pipe because it is tightly bound to the shale. This is done by a process that has come to be known as “fracking”, in which explosive charges placed in the bore pipe perforate the casing and water is pumped in to crack the shale, creating small paths through which oil can flow into the bore pipe.

Second, we have what is known as oil shale. Read that carefully: I just swapped the order of the words to get a new beast. And that’s why people get confused over these two types of resource; their names are about as similar as one can get. But oil shale is a totally different thing. Here, the “oil” is not simply conventional oil tightly trapped in a rock. It has not been fully developed by nature and has not reached the final stage of its conversion into petroleum. It is in an intermediate stage between fossil and true oil. This fluid is called “kerogen”. The problem with kerogen is that completing its transition to petroleum, which you must do to make it a viable fuel source, requires considerable heating. If we just look at the energy equation, it means that we are putting more energy into the production of oil from oil shale than we get out (with current techniques). And what many economists and optimists don’t seem to realize is that this is a physics problem, not just an economic one. In other words, there is no technology or economic model that will change this. Kerogen, in the form we know it, will never be economically viable in and of itself.

So-called “Peak Oil” tells us that, because petroleum is a finite resource, it must exhaust at some point in the future. Like so many academic statements, this is incontrovertible, but as is often the case, practical reality does not so easily admit of a simple application of a general principle to a specific problem. Such is the case with Peak Oil. People promoting this theory are overgeneralizing to a specific set of circumstances and reaching erroneous conclusions. I’m going to try to sort out this mess here and explain what Peak Oil means for humanity in realistic, probable terms. First, I noted that the energy one puts into extracting petroleum must obviously not equal or exceed the energy one can extract from the recovered petroleum itself. Otherwise, there isn’t much point in extracting it. But from this point forward the popular discussion of Peak Oil diverges into Wonderland. The crux of the problem, from what I can see, is that those who understand the geology and science of petroleum don’t understand economics, and those who understand economics don’t understand the fundamentals of science. Add to that the inherently opaque nature of the petroleum industry and its methods, and it is no wonder there is immense confusion over this topic. Okay, so why is “Peak Oil” an “overgeneralization” of the human energy consumption problem? First, we need to point out that the idea that something is finite, and that one’s ability to extract it in situ will likely follow a bell curve in which the rate of recovery rises and then falls, is an incredibly general proposition. And it’s that phrase, rate of recovery, that we need to understand better.

All finite things will tend to exhibit bell curve, or normalized, behavior; that is, one’s extraction of them in situ (limiting the generality to resource exploitation for this discussion) will likely get faster in the beginning, then slow down as the resource depletes. But global Peak Oil is just one application of this broad generalization. Notice that an oil well, if all else remains the same, will also tend to extract petroleum at normalized rates, increasing sharply in the beginning and tapering as its reach into a reservoir diminishes. This has nothing to do with global peak oil. Likewise, a reservoir will, all else being equal, tend to follow a normalized pattern of extraction rates. This also has nothing to do with global peak oil. And please notice the qualifier “all else being equal”. Let me explain. The rate at which an oil well can extract oil from a reservoir, assuming the supply from the reservoir remains essentially constant (it’s really big), depends on numerous factors. The depth, diameter and bore length of the bore hole all affect that value. The fatter the pipe, the faster you can get petroleum out. Depth can affect pressure, which will affect how fast you can pump it out. Indeed, even your pumping equipment can affect those rates. But things like the permeability of the rock also matter. I should point out that oil doesn’t usually sit in the ground in pools. Rather, it is “locked up” in the pores of rocks. Different rocks allow it to escape at different rates. Shale, for example, doesn’t give it up easily. So that, too, affects the rate of recovery. Thus, the reach an oil well has into a reservoir is a time-dependent function that is highly localized and dependent on all the factors mentioned. It may be possible to drill another well nearby, but importantly, no less than some minimum distance away, to increase the flow rate. That minimum viable distance is determined by those same factors. Finally, for any given well, as the pressure begins to drop due to the peaking of that single well (not necessarily the entire reservoir), one can increase the internal pressure, forcing petroleum out faster, by boosting it with water. If that isn’t enough, you can inject gases under pressure to increase the flow rate.
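For the curious, the “bell curve” being gestured at here is usually modeled as the derivative of a logistic function (the Hubbert curve). A minimal sketch, with parameters invented purely for illustration:

```python
# Hubbert-style extraction curve: cumulative extraction Q follows a
# logistic toward the ultimately recoverable resource (URR), so the
# production rate P = dQ/dt rises, peaks, and falls like a bell.
import numpy as np

URR = 200.0   # ultimately recoverable resource, arbitrary units
r = 0.08      # growth rate, arbitrary
t = np.arange(0, 120)
Q = URR / (1 + np.exp(-r * (t - 60)))   # cumulative extraction over time
P = np.gradient(Q, t)                    # production rate

print(f"peak rate {P.max():.2f} at t = {t[P.argmax()]}")
```

The point of the surrounding discussion is that this shape is only guaranteed “all else being equal”; investment, well spacing and pressurization all reshape the curve in practice.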

In other words, the rate at which a single well delivers petroleum product is highly dependent on capital investment in the well. And producers have to decide how much they want to invest based on market conditions and the overall performance of their recovery operations. Thus, the so-called “bell curve” becomes a joke. One can artificially shape this curve however one wants, depending on all the factors mentioned, because at the oil-well level the supply is halted only as a time-dependent function of the presence of oil locally around the well bore. What this means is that you can drain the region around the bore hole, but over a very long time the rest of the reservoir will push oil back into that region and refill it. So that, too, can be seen as a production rate variable. The reader should be able to see clearly now that the “peaking” of an oil rig is totally dependent on numerous variables, only one of which is the presence or availability of oil locally around the bore hole. Thus, simply yanking production rate figures for a well and suggesting that it or its reservoir has hit a fundamental peak of capacity based on those numbers is absurd. You cannot know that unless you have access to all the data and variables I’ve mentioned, and only then can you analyze the well and understand whether an observed peaking is due to some natural, finite barrier or rather to the particulars of the well design and operation.

We can extend this discussion in scale and apply similar logic to the reservoir itself. We cannot know if a reservoir is reaching a true, finite and natural peak unless we know about each of those wells and, importantly, what percentage of the viable acreage is actually covered by wells. So, in the same way, one cannot pluck data from a reservoir and conclude anything from it.

At the global level the same limitation applies. We need to know the true facts about each reservoir in order to reach any conclusions about:

  1. Actual, existing production capacity globally
  2. Total, defined reserves remaining

But can’t we see that if global well spudding is increasing and peak production has occurred in various countries, it must be occurring in the near term globally? Yes … unless we consider the powerful impact economics has on all this. The United States reached a peak around 1970 and its domestic production declined thereafter (until recently, as shale oil has pushed production up considerably). But what we don’t know is why. Was it because the actual recoverable oil had diminished to something below one-half its original amount? Or was it because the investments necessary to continue producing the fields in the States were considered economically unsound given the global prices for petroleum at the time? Did petroleum companies simply forego water and gas pressurization, increased drilling into existing reservoirs, etc. because it was cheaper to buy overseas? Did environmental regulation drive this? There is reason to believe other factors were in fact at play, because domestic production in the United States has risen again even if we control for shale oil production. And much of that is occurring from existing fields. But there’s more. Various agencies tasked with estimating reserves continually come up with reserve figures much, much higher than peak oil advocates claim. USGS and IEA, while they don’t agree on all the numbers, clearly state that conventional oil reserves in the United States are over 20 billion barrels. Where did that come from? It comes from the same fields that have always been producing petroleum in the United States. But for whatever economic reason, the additional investments in those wells simply have not been made. That is changing now. If the United States were to continue consuming at its present rate, and if that 20 billion barrels were the only source of oil for consumption in the States, it would last about 3 years. But since Canada supplies about ¼ of U.S. consumption and shale oil is providing an ever increasing portion (quickly approaching ¼), that number is likely closer to 10 years.

Numbers for shale oil are about 20 years; that is, if all oil were drawn from those fields, it would last about 20 years. Combined with the remaining conventional oil, that is 30 years, at least (and assuming Canada disappeared), of petroleum supply. But Canada’s reserves are yet larger, and their consumption is an order of magnitude lower than that of the United States (as is their population). Thus, realistically, the U.S./Canada partnership, which is unlikely to be broken, will easily put the U.S. supply beyond 50 years. And that assumes the middle east and everything else just vanishes. If we plug that back in, it’s even longer. Let’s be clear: regardless of what’s going on around the globe, the U.S. and Canada are not going to trade their own oil away if it means their own consumption must drop. Nor would any other nation. Shale oil production in the United States is climbing meteorically, to about 4 million barrels a day in 2013. This is unheard of, since less than 5 years ago it was virtually zero.
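A quick back-of-envelope check of these lifetime figures. The 20-billion-barrel reserve number is from the agencies cited above; the consumption figure of roughly 19 million barrels per day is my own 2013-era round number for illustration:

```python
US_DAILY_BBL = 19e6    # assumed U.S. consumption, barrels/day (2013-era)
RESERVE_BBL = 20e9     # conventional reserve figure cited above

# If the reserve had to carry all U.S. consumption:
print(RESERVE_BBL / (US_DAILY_BBL * 365))            # ~2.9 years

# Lifetime scales inversely with the share of demand the reserve carries.
# As Canadian imports and tight oil cover growing fractions, the share
# left to conventional reserves shrinks and the lifetime stretches:
for share in (1.0, 0.5, 0.3):
    print(share, RESERVE_BBL / (US_DAILY_BBL * share * 365))
```

At a 30% share the figure lands near a decade, which is roughly where the “closer to 10 years” estimate above sits.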

The more challenging oil shale, that is, kerogen-bearing rock, is a U.S. reserve so large it is hard to calculate or predict where it might end. Needless to say, we have about 50 years to develop it and get it online. It seems likely that this goal will be achieved, but I’ll discuss its challenges more later.

Okay, so is the problem solved? Can we all go home now? Not hardly. The same nuances mentioned earlier that better inform our discussion of peak oil also inform our understanding of the current petroleum situation, including the shale oil and oil shale options. Thus far, we’ve spoken only of production rates of petroleum. But here is the real, fundamental problem with petroleum: when it was first discovered and used on a wide commercial basis, beginning about 1905, it was so easy to obtain that in terms of energy it cost us only about 1 barrel of crude in power generation to draw and collect 100 barrels of crude for sale in the marketplace. Some speak of this relation as the Energy Returned On Energy Invested, or EROEI, ratio. I alluded to it above. It begins by noticing that if a fuel source is to be viable, then we cannot expend more energy to get it than the energy it provides to us. In the case where those energies are equal, EROEI = 1. In the event that we consume more energy to get petroleum than the recovered petroleum provides, EROEI < 1. This is unsustainable too. Therefore, for petroleum, or any fuel, to be viable it must have an EROEI > 1. Having cleared that up, some confusion over how physics and economics overlap on this matter has gushed out on the internet and elsewhere like water over Niagara Falls. Why? If we recall, around 1905 the EROEI must have been 100, since for every 100 barrels of crude we could sell, we expended 1 barrel’s worth of energy to get it out of the ground. The problem is that since that time the EROEI has dropped precipitously, by about one order of magnitude. Thus, the global average EROEI is about 10 nowadays. But what this implies is what seems to be confusing people. Some think that if the EROEI gets any closer to 1 we’re doomed. Some have even said that you need an EROEI of 3 or 4 to make petroleum economically viable. This is not true, and it is based on certain assumptions that need not be true either. In order to be not only economically viable but economically explosive in its market power, the EROEI simply needs to be greater than 1. That’s all. Let me explain.
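In symbols (my notation, not anything standardized above):

```latex
\mathrm{EROEI} = \frac{E_{\text{returned}}}{E_{\text{invested}}},
\qquad
E_{\text{net}} = E_{\text{returned}} - E_{\text{invested}}
             = E_{\text{returned}}\left(1 - \frac{1}{\mathrm{EROEI}}\right)
```

So a fuel is a net energy source exactly when EROEI > 1. That is the only hard physical floor; anything stricter is an economic assumption.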

There is this thing called “economies of scale”. To explain its relevance here, consider the following thought experiment. Suppose we discover a massive petroleum reserve in Colorado that contains some 2 trillion barrels of recoverable “oil”. At current U.S. consumption rates, if every drop of petroleum consumed in the U.S. were pulled from that one field, it would last 275 years. Ah, but you say, that reserve is kerogen. Kerogen is the play I referred to above, where I pointed out that we had about 50 years to figure out a way to economically utilize it. This is because the other oil, so-called “tight oil”, or shale oil, will run out by then. But the big, big problem with kerogen is that lots of energy is needed to make petroleum out of it. Current retorts (heaters for heating kerogen) run at about 400 C and have an EROEI of about 3 to 4. Of course, this is first generation technology, but for the sake of discussion, let’s assume it is 3. For demonstration, we assume that the current, conventional EROEI on oil is about 10. How could kerogen possibly be cost effective? Economies of scale. Great, problem solved? Nope. Let me finish.

Let’s assume, for the sake of discussion, that we have an infrastructure that can begin producing petroleum at incredibly high rates. How is this? Kerogen is located only about 500 meters down and can be mined directly. This means there are no “pressure curves” or constraints on how much can be removed how fast. It’s simply a matter of having sufficient resources to do the work. But more importantly, these rates can be achieved because, as one increases the rate of recovery, you are not fighting against a finite maximum lode (effectively), and the economies of scale work because it is one field, not several fields separated over great geographic distances. Thus, as petroleum flows out at rates far exceeding what was possible before, the price of that petroleum drops. And it keeps dropping as the market is flooded with petroleum. Imagine that before this operation commences oil costs 1 dollar a barrel (to make the math simpler). Let us say I have 100 dollars to spend on energy. So, I purchase 100 dollars worth of energy. But it took 10 dollars worth of energy to get the oil I’m using as energy. So, my net return is 90 barrels of crude. Now, suppose after operations commence 100 dollars buys 1000 barrels of crude. This means that I can net 900 barrels of crude for the same 100 dollars. My energy has gone up dramatically but my economic cost is constant. Of course, our EROEI is lower now, so we have to adjust and recalculate. Say 100 dollars buys 200 barrels of crude at an EROEI of 3. Thus, for the same economic cost I have roughly doubled my energy, and I have done so in the same amount of time because, by economy of scale, I can obtain that petroleum twice as fast as before. And I can achieve that production rate because I do not have to worry about running out for quite a while.
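A hedged restatement of that arithmetic as a formula. The prices are the hypothetical ones from the thought experiment; the function nets out the energy spent on recovery, whereas the barrel counts above can be read as gross:

```python
def net_barrels(budget, price_per_bbl, eroei):
    """Net energy (in barrels) a fixed budget buys at a given price
    and EROEI. Toy model with the thought experiment's numbers."""
    gross = budget / price_per_bbl     # barrels the budget buys
    return gross * (1 - 1 / eroei)     # less the barrels burned to get them

print(net_barrels(100, 1.00, 10))  # 90.0  conventional oil at $1/bbl
print(net_barrels(100, 0.10, 10))  # 900.0 same EROEI, market flooded
print(net_barrels(100, 0.50, 3))   # ~133  kerogen: worse EROEI, but the
                                   #       scale-driven price drop still wins
```

The design point: price and EROEI enter the net-energy-per-dollar figure together, so a worse EROEI can be overwhelmed by a sufficiently large, scale-driven price drop.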

So, as with peak oil, simply blurting out EROEI doesn’t explain everything. You have to take all variables into account. Okay, will we finally get to the bad news? Yes, we are now ready to see the deeper problem and the key point so many are tragically missing. I somewhat glossed over economies of scale and production rates for kerogen and assumed that we actually had the ability to ramp up to that. In other words, we have to be able to invest in that massive infrastructure in Colorado to start this voracious beast up. Do we have what we need? Well, we have the petroleum in shale oil. But is that really all that matters? Of course not. We will not be able to reach that kind of production rate in kerogen to petroleum with excesses of tight oil alone. And this is where it gets interesting.

Those that study economics and petroleum often point out that the strength of an economy is largely dictated by the per capita energy per unit time that a country or region achieves. Energy per unit time is power, and it is measured in watts. So what they are saying is that the strength of an economy ultimately falls back on per capita power consumption. This is why climate change is so controversial overseas. Other countries know this, and they see attempts by western, industrialized nations to limit CO2 emissions as nothing more than curbing per capita power consumption, thus derailing economies. For the western world, the association between per capita power consumption and CO2 is not nearly as strong, so it does not affect them as badly. But for countries still burning lots of coal and for countries without efficient cars and trucks, such cutbacks in CO2 would have drastic effects on their ability to industrialize. For our discussion, though, what matters is what this figure does not capture. To explain, another example is in order. Consider a farmer living in the 1700s in North America. He plows fields using a mule and bottom plow. The per capita power consumption for the farmer is, say, x. Now, a farmer in North America in 2013 performs the same task using a very small, diesel powered tractor with plows, harrows and the like. In this case, the per capita power consumption is considerably higher, and we’ll denote it y. Notice that over the years the transition from x to y is gradual, as each new technology and piece of equipment increases the power available to the operator. But why, exactly, does this seem to be correlated with overall quality of life? Why are better health, education and so on so common as power consumption increases? The reason lies in the definitions of energy and power. In physics, “useful work” or “work done” in an “environment” refers to the effect, or result, of applying energy to a defined “environment”. When we apply energy to an environment, we are dumping energy into that environment in some controlled, intelligent manner. In the farmer example, the “environment” is the soil, or the Earth itself, which we transform intelligently into something favorable to plant growth. This takes lots of energy. In fact, the mule and plow ultimately expend exactly the same total energy as the tractor does. The difference, however, is how fast it happens. The tractor does it orders of magnitude faster. In other words, it is the power of the tractor over the mule that makes the difference. Thus, we can fundamentally improve our lives by intelligently applying power to satisfy a human need with speed, giving us the time to engage in other worthy tasks. We can use that time for leisure, education or other work-related tasks. In the end, the quality of life improves.
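The physics underneath the farmer example is just the definition of power (notation mine):

```latex
P = \frac{E}{\Delta t}
```

The mule and the tractor expend the same total energy E to turn the same soil, but the tractor’s Δt is orders of magnitude smaller, so its power P is orders of magnitude larger, and the time saved is where the quality-of-life gain enters.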

Thus per capita power consumption is key to the advancement of humanity, period. We have no time to march in protest of an unjust ruler, no time to educate our children, no time to do other useful work such as plow our neighbor’s garden, or anything else, if we are captive to spending most of the hours of our lives slowly expending all the energy necessary for our survival. Power is freedom.

But power provides other improvements to quality of life indirectly as well. We can afford to have clean, running water because we had the power to dig enough water wells, we have the power to run the factories that make our pharmaceutical medicines which alleviate suffering, we have the power to build massive buildings called schools and universities in which our capacity to learn is enhanced, and on and on.

So, in the petroleum discussion, when we speak of “ramping up” to a new way of obtaining petroleum which requires more upfront energy than the old forms of petroleum, we are talking about a per capita power consumption problem. The shale oil can solve that for us. But what this discussion is missing is a key ingredient in this ramp up. Once again, we have to be careful not to overgeneralize. Generally speaking, this power statement is correct. But in reality, we have to consider something else. And that something else is the “environment” we just discussed. We can usually just ignore it because

The rate at which we can access the environment is assumed to be infinite, or, at minimum, proportionately greater than the rate at which we are expending energy into it.

This will not always be the case. Let me explain. In the case of the tractor, of course we have access to the ground, because that is what we’re farming, and there is no obvious constraint on how fast we can “get to it”. But what if we change the example a little? Suppose we now think of a factory that takes aluminum mined from the Earth and smelts it, producing ingots of aluminum that can then be shipped to buyers who may use it to build products that society needs. Well, the mines that extract the aluminum can only do so with finite speed. And if that resource is finite, and especially if it is rare or constrained in volume, the rate at which we can recover it is indeed constrained. Now, if I am a buyer of ingots and I make car parts out of them, the rate at which I can make cars no longer depends solely on the power available on my factory assembly line. Now I have to consider how fast I can get ingots into the factory. This is a special case, and we can see that generally it is not actually an issue. But to understand that, we have to increase our altitude over the forest to see the full lay of the land. Ultimately, all matter is energy. We should, in principle, be able to generate raw materials from energy alone if the technology for it exists. As a practical matter, however, we can’t do that. We depend on raw materials, aluminum being but one example, which come from the periodic table of the elements and from the plants, animals and minerals of the Earth. But the most constrained of all is the periodic table. As it turns out, petroleum is not our only problem and, not surprisingly, the crisis of the elements is of a very similar nature. It isn’t really that we are “running out”; it’s that the rate at which we can access them is slowing down while consumption goes up. And that’s the problem with petroleum, too. We have plenty of reserves, but our ability to access them fast enough is what is getting scary. Unfortunately for us, raw materials, and rare earth metals especially, are hard to find on the Earth’s surface. Almost all of the rare elements are deep within the Earth, much too far down to be accessible. Thus, our supply chain is constrained. This is why plastics have become so popular over the last three or four decades. In fact, some 90% of all manufacturing in the world now depends in some way on petroleum, ironically, because the raw materials we used to use are drying up. And the rate at which we can recycle them is not nearly fast enough.
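The constraint being described can be stated as a simple bottleneck model. The framing and the smelter-flavored numbers below are my own, purely for illustration:

```python
def output_rate(power_watts, energy_per_kg, feed_rate_kg_s):
    """Throughput is capped by the slower of (a) what the power budget
    can process and (b) how fast raw material arrives."""
    power_limited = power_watts / energy_per_kg  # kg/s the machines could do
    return min(power_limited, feed_rate_kg_s)    # supply can bind first

# Illustrative numbers only: 50 MW plant, ~5e7 J/kg process energy,
# but ore/ingot supply arriving at only 0.5 kg/s.
print(output_rate(power_watts=50e6, energy_per_kg=5e7, feed_rate_kg_s=0.5))
# -> 0.5: plenty of power, yet material supply is the binding constraint
```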

So, the very same problem of production rates in petroleum exists for the elements, and we have not yet discussed the world outside the United States. I have deliberately focused on the U.S. and Canada for a reason. The global situation beyond is dire. Why? Because, even if the petroleum production rate problem in the United States is solved, as I’ve suggested it will be,

It will be frustratingly constrained in its usefulness if dramatic improvements in the rate of production of elements of the periodic table are not found rapidly.

And that’s just the U.S. and Canada. The situation in the rest of the world is far, far worse. There is only one place where such elements in such large quantities can be found and exploited rapidly. And it is not in the ground; it is up, up in the sky. Near Earth asteroids are the only viable, natural source that can fuel the infrastructure creation necessary to drive the kerogen production needed. Put more fundamentally: if we don’t find a solution in the staged development of shale oil, then kerogen, coupled with massive increases in natural resources that increases in power consumption can take advantage of, humanity will die a slow, savage and brutal death.

What???

What we really need here to express this economic situation is a new figure of merit that combines per capita power consumption with the rate at which we can access the raw materials being manipulated by a given power source. We cannot perform a meaningful study of this issue without it. Thus, for now, I will call it Q, and it shall be defined on a per operator basis (analogous to per capita, but based on a per operator figure, a technologically determined value). I define it as the product of a given power consumption and the mass of raw material, in kilograms, operated on by the power source per second. Q would be calculated for each element, mineral or defined material it operates on, using a subscript. So, for aluminum it would be:

Q_Al

And for any particular, defined economic enterprise, I will take the mean of such Q over the collection of materials involved and denote it:

Q_m

Now, the Q_m for the kerogen-to-crude conversion (retorting) must be greater than some minimum value that is actuarially sound and economically viable. For a sufficient value we can expect economic prosperity, and for some lesser value we can expect a threshold of survival for humanity. That threshold is determined by the pre-existing Q_c (Q based not on an operator but on a true per capita basis) and the maximum range of variance an economy can withstand before becoming chaotic and unstable (meaning, before civilized society breaks down). So, what do we mean by death and destruction? Well, here’s the bad news.
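As best I can render that definition in symbols (the notation is mine):

```latex
Q_X = P_{\text{op}} \cdot \dot{m}_X
\qquad\text{and}\qquad
Q_m = \frac{1}{N} \sum_{X=1}^{N} Q_X
```

Here P_op is the power consumption per operator, ṁ_X is the mass of material X worked per second, and the mean runs over the N materials of the enterprise; Q_c is the same construction taken per capita rather than per operator.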

The problem we are facing has two facets.

We are seeing a reduction in global “production” rates for both energy and matter.

As populations increase, this will get worse. Only Canada and the United States appear to be in a position to respond, with the favorable geology and sufficient capital, technology and effort to compensate for dramatic losses in conventional oil production rates: if you are pumping water and gases into oil wells now to boost production, the drop-off after peak won’t be a smooth curve but will look more like a cliff. And now we can see why the Peak Oil concerns are real, but for the wrong reasons. The problem is that though the oil is there, it is costing more and more to get it out, and the raw materials (capital) needed to invest in ever more expensive recovery (economies of scale) are not forthcoming. The “cliff” is economic, not physical. Thus, even in the few countries where reserves are still quite large, economies of scale do not appear to be working, precisely because of a lack of raw materials (capital) and, to some degree, energy. The divergent state of affairs between North America and everywhere else is due to several factors:

  1. Whatever the cause, conventional petroleum production rates are declining or are requiring greater and greater investment to keep up with prior production rates. This could be because of fundamental peaking or it could be because nominal investments needed to improve production rates have simply not been initiated until now.
  2. Tight oil is the only petroleum product that has been shown to be economically viable and that is not affected by the problem in 1.
  3. North America has by far the most favorable geology for shale, which is why it has been possible to start up tight oil production in Canada and the U.S.
  4. North America has the strongest economy for fueling the up-front, very high investment costs that a new infrastructure in tight oil will require.
  5. The U.S. and Canada have been studying the local shale geology for over 30 years and have developed a sufficient knowledge to utilize it, to a degree far surpassing what has been done anywhere else.
  6. North America has more advanced drilling technology for this purpose than any other locale can call upon or utilize.
  7. Despite the massive consumption in the United States, Canada and the U.S. appear to be at or near energy independence now, which means that instabilities around the globe will not likely have a negative impact on tight oil production as a result of its economic shock (at least not directly).

The biggest question for the United States is this: what are you going to do about raw materials? The good fortune found in tight oil will avail nothing if the United States doesn’t also dramatically increase the rate at which it can “produce” raw materials, particularly elements of the periodic table. The only way to do this is to create a crewed space flight infrastructure whose purpose is to collect these materials from asteroids, where they appear in amounts astronomically greater than anything found on Earth. If the United States fails to do this, it and Canada will go the way of the rest of humanity. To explain: it may survive the tight oil period, but the problem will present itself when the switch to kerogen is attempted in some 30 or more years. And it would take 30 years to develop such a space flight infrastructure. There is no room for gaps. Because of kerogen’s poor EROEI, it will absolutely depend on higher production rates of raw materials; i.e., increased flow of capital.

Of course, at some point alternative energy will have to be developed and the entire primary mover infrastructure will have to be updated. That is really the end goal. But this is no small task. It will cost trillions and will take decades to convert humanity over to a fully electric infrastructure. That is one of the key requirements for comprehensive conversion to alternative energies. And alack, we do not have the raw materials on Earth to build enough batteries for all of it. Thus, once again, the asteroids loom as our only hope. When and if we achieve an energy infrastructure that does not include fossil fuels we will have taken a key step in our development. At that point, for the first time, humanity will be progressing using the fundamental physical principles common throughout the universe and not specific to Earth. It will be a seminal transition.

What does this mean? I had written a few paragraphs on that question but, realizing how depressing it all is, I leave it at this. USG needs to start developing this space infrastructure yesterday and they need to keep hammering away at kerogen. I hope I’m wrong about this.

- kk


An image of the surface of Titan

As you may know, I try to keep my ear to the ground on matters of crewed space flight. I want to share with my readers a major development, a paradigm shift, going on right now in space transportation. At the close of the fifties, the United States and the Soviet Union were competing with each other to fly higher, faster and farther than any before them. Yuri Gagarin lit the candle when he became the first human being to orbit Earth. But the less well known fact about space flight lies in its irony: lifting objects into low Earth orbit (LEO) was a technological barrier for humanity that was briefly overcome only by sheer bravado and brute force, in a way that would never be economical. Rockets were constructed that staged their fuel on the way up, dropped their airframes in pieces like a disintegrating totem pole and reached over 17,000 mph just to place a few hundred pounds into orbit. The truth is, no one really had the technology needed to do this economically. But the US and the USSR each convinced the other of one thing: while it may be a silly waste of money to do such a thing, both of them could place a relatively lightweight nuclear warhead into orbit and pose a threat to the other. Once they proved it, everybody went home. Space flight for the last 50 years has been a stunt that only governments could afford and that only mutually assured destruction could inspire. Unless someone could find a way to reuse these craft, especially the most expensive components, flying to LEO by throwing away all of your hardware on every flight would never make sense in the hard reality of economic necessity.

But reusing these machines was a technological leap beyond Sputnik and Apollo … a big leap. And for that reason space flight floundered for decades. And yes, we’ve all heard what a big waste the Space Shuttle was. But I want to offer a counterpoint that history gives us in 20/20 hindsight. The two most expensive components of rockets, by far, dominated all discussion of space flight. Since we couldn’t overcome those two problems, mass (yes, weight) was the dominating factor in every discussion. From deep space probes to space stations, to the shuttle and to the moon and Mars, weight was the big nasty sea dragon that ensured that talk of frequent missions was hopeless. You can’t go to Mars with a pocketknife and a Bunsen burner, as I like to say, but many proposed it anyway. The reality was that limitations on weight, borne of extremely constraining technological limits that in turn drove the economics, ensured we’d get nowhere. Everything hinged on making LEO economical and loosening the maddening mass restriction that has bedeviled the human space enterprise for some fifty years now. I am happy to report that one of the two technological hurdles needed to overcome this limitation has been cleared, and the second is being aggressively run to ground.

The first problem is that rocket engines are powerful; so powerful, in fact, that they are something like 20 times more powerful than jet engines by either weight or volume. The temperatures at which they operate are in the 6000 degree F range, with combustion pressures over 1000 psi. Jet engines come nowhere near being able to handle this. And you need rockets because, frankly, we don’t have any other way to get to LEO. Air breathing hybrids are, contrary to popular myth, decades away (kind of like fusion power), because we still don’t know how to combust a fuel/air mixture over a wide range of speeds in a single scramjet design, for example. The only foreseeable technology is rocket engines. But there’s the rub. We can’t just throw them away on every flight, because they are, along with the airframe itself, by far the most expensive components of the rocket. And that is what I meant by technological barriers: we threw these expensive things away not just because we couldn’t carry their weight into orbit, but because our technology was too primitive to build a rocket that didn’t destroy itself after a single flight. Specifically, the key component is called a turbo-pump, and in reality rockets are just pipes and wires built around the turbo-pump. The turbo-pump is the key technology and the really expensive part of the machine. And we didn’t know how to build a turbo-pump that you could keep firing over and over, like driving your car to work every day. Previously, we had to throw them out after every drive, like throwing out your car engine every time you go to the grocery store. This would never be economically sustainable. And it took us nearly 50 years to solve this problem. And that’s where the Space Shuttle comes in.

The Space Shuttle was supposed to be reusable, but as we all know, it never really was. But the one component of the Shuttle that gets little attention is the set of high pressure turbo-pumps of its RS-25 engines. Over nearly 30 years of operating the Shuttle, and because it was supposed to be reusable, NASA kept tinkering away at the turbo-pumps; flying them, studying them and enhancing them. It took billions of dollars, years of time and thousands of people-hours. But over those years, engineers at NASA finally began making headway on the engine that was supposed to be reusable, had in practice been a throwaway, but was gradually becoming a true, reusable, high powered rocket engine. They figured out how to reduce the temperature and pressure a bit, and “Block I” was born. Then they figured out how to handle the massive cavitation and “out of round” motion of the turbo shaft at 37,000 rpm and 85,000 horsepower, calling it “Block II”. They shot-peened the turbine blades to resist hydrogen wear and used silicon nitride for the ball bearings, something that took months of jig testing and all sorts of workarounds to resolve the heavy wear on bearings and surfaces that had been forcing them to either throw the engines out after one flight or overhaul them after every flight. The advantages of this new engine (10 flights before overhaul, an incredible advance) were so impressive that NASA set about capitalizing on what it had learned, with plans for a replacement engine that would go 100 flights before overhaul (this was to be the RS-83 and RS-84). In other words, for the first time they really knew how to build a truly reusable engine. Of course, by the time Block II came out and they had started on the new engine, the Shuttle was about to retire and the new engine program was cancelled. But NASA is more or less open source, and its work and findings spread around the research community. People learned the lessons so hard-earned by NASA. People started building turbo-pumps with silicon nitride bearings, used new computer models developed by the NASA Shuttle team for damping (another huge problem) and generally incorporated every lesson they could from NASA’s experience. Numerous papers have been written on this, and aeronautical engineering professors write almost verbatim from NASA documents extolling the lessons learned. Around 2012, SpaceX tested a totally new turbo-pump. Its overall thrust is a dramatic increase over its predecessor’s. The turbo-pumps can be re-fired and the rocket is reusable. Something big has happened. There are, of course, narcissists in our capitalist world who will never acknowledge credit where credit is due, but yes, all this came from NASA, and that is clear and unambiguous.

SpaceX is, for now, also the only company looking at reusing the airframe, the other truly expensive component, and is the first to make real, tangible progress in that direction. But interestingly, that problem is intertwined with the turbo-pump problem, for the simplest technological solution is just to fly the booster right back to the launch pad when it’s done lifting your cargo. This was unthinkable before reusable turbo-pumps were perfected (it requires three separate firings in one flight; most turbo-pumps will literally blow up if you try to shut them down gracefully, much less start them back up). It won’t be long before the rest of the industry jumps on this bandwagon and does the same. In fact, we think they have, but cannot confirm that it is coming from NASA research directly. Everyone is now fascinated with reusable rocket engines and airframes. The engineering streets are abuzz with talk. To see why: of a 50 million dollar launch, only about 200,000 dollars goes to fuel. The turbo-pump suite costs from 10 to 40 million (and today those pumps are just thrown away). When the books are settled, it costs at least about 3500 dollars per pound to take cargo to LEO. But if you reuse the rocket and its turbo-pumps, that cost will plummet to less than 100 dollars a pound. In the next 10 years, the boundaries of space are about to explode with activity. There are more resources up there that are easy to recover and bring to Earth than anything imaginable from past experience. It will be like the ’49 Gold Rush squared. And all this talk about “lightweight” and “mass restrictions” will sound rather quaint.
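Plugging the quoted figures into a rough per-pound calculation. The payload mass is my assumption, chosen so the $3500/lb figure comes out, and the reusable per-flight cost is likewise illustrative:

```python
LAUNCH_COST = 50e6   # dollars per expendable flight (quoted above)
FUEL_COST = 200e3    # dollars of that for fuel (quoted above)
PAYLOAD_LB = 14286   # assumed payload, roughly 50e6 / 3500

print(LAUNCH_COST / PAYLOAD_LB)  # ~3500 $/lb when hardware is discarded

# If the engines and airframe fly again, per-flight cost trends toward
# fuel plus refurbishment; at an assumed $1M all-in per flight:
print(1e6 / PAYLOAD_LB)          # ~70 $/lb, the sub-$100 regime
```

The exact refurbishment cost is unknowable from the figures above, but the two bounds show why reuse changes the economics by more than an order of magnitude.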

And btw, this is why NASA is now focusing on a new Space Launch System (that Mondo rocket that looks awfully like the Saturn V), which is for deep space flight. They already know the LEO problem is solved and they are leaving it to private industry. That’s what all the confusion and hoopla in the political world, vis-à-vis space flight, is all about; people are realizing that a paradigm shift is occurring. Bring it on!

Neil, I will wink at the moon for you Sunday night.

kk

Hi all,

The truth is stranger than fiction. I’m going to make a point and I’m going to use one of the wildest conspiracy theories out there to make it: the idea that UFOs exist and that they are spying on us. Indeed, that they are spying on nuclear weapons facilities around the globe as if a preface to global invasion. Good stuff, but let’s read between the lines and I think the truth might be in there somewhere. Something is going on, but it’s not what some think. Call me a skeptic, but first we need to explain what that word means, which takes me right to my point.

First, the term “skeptic” is confusing in the UFO research arena. Apparently, those that hold the orthodox view that UFOs do not exist seem to view themselves as “skeptics”. This was a bit confusing to me at first but it’s only a matter of semantics. Or is it? I think it is more accurate to call those that challenge the orthodox opinion to be skeptics, not the reverse. For the view that UFOs are real is a heterodox opinion and, by definition, skeptical of the orthodox view.

Having said this, most attempts by the orthodoxy to refute UFO evidence seem to revolve around a style that is frustrating to read and research: most attempts to “debunk” seem to be tomes of Ignoratio elenchi (basically, making arguments that are cleverly irrelevant to the presenting claim). And that itself gives the appearance of dishonesty. Whether it is honest or not, those that make these arguments should consider this in their analysis, because it is a source of considerable suspicion for most readers. One of the common tools used in this approach, among many others, is the tendency to over-emphasize things like witness credibility, particularly the credibility of the researcher themselves. For if the researcher can be found to be dishonest or misleading (or just crazy), then it is assumed that everything they say or claim is false. Ironically, this is the same approach used in Western law, and it has been shown in numerous psychological studies to be fallacious. This is because there are two types of “believers”: those who are charlatans and do not in fact believe, and those who engage in wishful thinking but still believe the over-arching premise. And for most researchers who reach heterodox conclusions, the latter is more common. It’s the age-old fallacy of throwing the baby out with the bath water without realizing that while some parts of a person’s research may be fallacious or faulty, that does not, by itself, imply that all of it is. The astute researcher has to know how to sort this out.

But what many orthodox researchers do, once they find any fault or error in an analysis, is engage in an argument of Ignoratio elenchi by focusing only on matters of credibility, not the actual claims being made. Thus they spend volumes discussing factors tangential to the overall story that, by themselves, have nothing to do with the truth or falsity of the over-arching claim. That is not to say that credibility has no place: if one finds, for example, that the so-called “Majestic 12” documents originated with U.S. Air Force counter-intelligence agents (Special Agent Richard Doty, to be precise), then it can safely be assumed that any document referencing Majestic 12 is probably a fabrication. That is a credibility assessment. But what makes it germane, and different from broad, context-void assessments of credibility, is that it is contextually relevant. On the other hand, suggesting that Robert Hastings, who researches odd events at nuclear weapons facilities, deliberately fudged facts about one incident at one location, say, in 1967 or 1968, does not by itself provide sufficient grounds to “debunk” evidence that exists for the same kind of phenomenon at another site in 1975, whether that evidence originated with Hastings or someone else. Critical analysis just isn’t that simple, and explanations that try to do this are catch-penny arguments that appeal to human biases and prejudices, namely people’s natural tendency to disbelieve anything that comes from a source that has been dishonest at some point in the past. This is the same reason why a known and admitted prostitute on the witness stand is not guaranteed to lie in answer to any and all questions posed to her. It just isn’t that simple. And what we are doing here has nothing to do with trying an individual; we are seeking confirmation of facts and assertions that may or may not be independent of that witness.

Two excellent examples of deceptive “debunking” are the Air Force “Case Closed” report of 1995 and the internet article by James Carlson found at http://www.realityuncovered.net/blog/2011/12/by-their-works-shall-ye-know-them-part-1/. In the Air Force’s case, an attempt to rebut the skeptical claims about the official narrative of the 1947 Roswell incident was proffered in 1994 and 1995, when the Air Force changed its official story from weather balloons to something called Project Mogul. This tells us, implicitly but loudly, that the Air Force was engaged in counter-intelligence when it lied about the weather balloons. Setting aside for the moment the obvious and germane questions of credibility this raises about the Mogul claim, the Air Force spent almost all of its report talking about Project Mogul. It was basically a history lesson about Mogul. But that is not relevant to the skeptical claim. And sure enough, the explanation was riddled with problems: crash dummies invoked for 1947 that didn’t exist until 1952, the Ramey memo which any modern computer user can clearly see references “victims of the wreck”, and so on. Thus, given that we know disinformation was the source of the weather balloon explanation, and given the obvious application of Ignoratio elenchi in its 1995 Mogul diatribe, it is no wonder the American public doesn’t believe it. That the USG can’t see this is astonishing, but it reinforces the view that they are out of touch with the public.

In a similar way, Carlson makes an elaborate argument that the 1967 and 1968 phenomena at one missile base were, at best, the wishful thinking of a skeptic named Robert Hastings. Once the “gotcha” was in place, it was assumed by character assassination that all the other events must be of a similar nature. It was an exercise in Ignoratio elenchi.

And this is why these kinds of analyses are a turn-off for most readers. Most readers see them as a personal attack on a person rather than an honest pursuit of truth. That Establishment figures in government, who do the same thing, have apparently not noticed this is astonishing, but it shows how out of touch they are with everyday people. If there were ever a sign of elitism sticking out like a sore thumb, this is it. Thus, in order to examine the Hastings research, we need to examine each case of purported tampering at each base on each date and ask only the questions of merit:

1.)  Who actually witnessed the visible phenomenon? Are their names known to us? What is the chain through which this information reaches us now? What have they said? What written or electronic data is available to corroborate it? Where is it? Can we see it? Is it clear and convincing?

2.)  What witnesses can report on the radar data? Have they also been identified? What is the chain through which this information reaches us now? Do records of these radar sweeps exist? Can we see them? Is it clear and convincing that solid objects were present?

3.)  What failure mode, if any, was witnessed, and who witnessed it? Have they been identified? What is the chain through which this information reaches us now? Did anything actually fail? What was the nature of the failure? What records exist to corroborate this failure? Is it clear and convincing that a failure mode without prosaic explanation occurred? (Classified aspects of these systems’ operation can still be protected by a careful review of how the systems are explained; we don’t need to know how they work.)

4.)  Does the movement of the objects, if established as above, correlate well between visual sightings and radar tracks? Does it appear to be intelligently directed, in response to or anticipation of human behavior? Time, location and altitude are critical here.

We don’t need diatribes and tomes of personal attacks, tangential information and digressions of Ignoratio elenchi to resolve this. We need data.

The global public is becoming more and more sophisticated in their understanding of geopolitics and disinformation. They more easily recognize it and its common attendants, such as catch-penny reasoning, Ignoratio elenchi, the role of greed and money and the extremes to which power corrupts. It’s time for USG to catch up.

Virtually every government “investigation” and every “debunker” out there has done nothing to address the four questions that could be equally applied, in different form, to just about any “conspiracy theory”. And the field is chock-full of charlatans and fairy tales that can be easily discounted with a modicum of background research into the provenance of documents and the nature of the claims by simply applying the questions above, even if they cannot be fully answered. There is truth between the lies.

I would submit that USG should rethink the way it approaches conspiracy theories by avoiding catch-penny silliness and responding to them directly and in a hyper-focused manner, releasing data specific to the claims of merit. The tactic they’ve been using since at least 1947 is itself becoming a national security issue because of the distrust in government it has caused. And Popular Mechanics commentary (which most anyone with two brain cells to put together knows to be USG shilling) isn’t needed in the replies. Just the relevant data. They need to do this with the Kennedy assassination, 9/11, UFOs and anything else of popular lore. Sadly, I’m afraid their hubris is too inflated now to ever do that, but for my part, I’ve illuminated the path. Listen to me now, hear me later.

- kk

P.S. Watch how disinformation works. Everybody is focused on Edward Snowden. But is that really the story? How about the fact that his revelations have confirmed that you are being watched, Orwell-style? All the way down to your Safeway discount card application and purchasing data. Yep, that’s right. Go read what he actually handed over to the journalist who reported it. Google is your friend.

As some of you might have heard, a suspected Chachapoya site was discovered east of the Andes in the western Amazon jungle back in 2007. I don’t know if it’s going to be excavated or not. The site has been called “Huaca La Penitenciaría”, or Penitentiary Ruins, so named because it looks like a fortress of stone. It is buried in deep jungle growth at an elevation of about 6,000 feet. This means that archaeologists now have to consider the possibility that the Chachapoya culture’s eastern boundary extended into the Amazon. Anyway, numerous structures have now been verified in the Amazon jungle, and it is clear that this “jungle” wasn’t always so. Human beings have been “terraforming” it for centuries, creating a very rich, black soil on top of the acidic rain forest soil.

chachapoya_Penitentiary_Ruin

Huaca La Penitenciaría

The “black soil” (terra preta) was a mystery until recent years. Now, it has been learned, these people had an ingenious system of settled agriculture in which they enriched poor soil with charcoal and other materials by building dikes, water channels and artificial lakes. Then they harvested fish in those lakes. Pretty clever. Archaeologists now believe the Amazon area was host to a population as high as 5 million, with “urbanity” ratings exceeding those of ancient Rome. They essentially built an artificial archipelago. This civilization, about which we still know so little, extended from sea to shining sea, all the way across the Amazon. The finding of the “penitentiary” means that we now know heavy stonework was employed in the Amazon. This is a sea change in thinking, as it means we can expect to see more of it.

soil_Comparison_Amazon_001

Amazonian terra preta and regular Amazonian soil.

I decided to grab some of the satellite data on where this “black soil” has been found and do some searching in Google satellite images. I was hoping others could help me identify some new prospects. I’ll be writing a big piece on anthropology and archaeology about all of this and more pretty soon. But for now, I was just wondering if anyone knew about these places and what might be there (such as modern constructions).
You’ll notice a lot of circles and berms. These are the types of shapes seen before, called geoglyphs, which have now been found to represent raised earth mounds. Anyway, peruse and enjoy. The locations are in the file names, so click on a file and read the filename to find the spot on Google.

possible_Yet_Another_Geometrically_Regular_Field_With_Cube_At_Decimal_12.332623_South_68.87198_West_001
possible_Geometrically_Regular_Field_With_Xs_And_Ls_At_Decimal_12.306922_South_68.891402_West_001
possible_Geometrically_Regular_Field_With_Rectangular_Objects_And_Xs_And_Ls_At_Decimal_12.306922_South_68.891402_West_001
possible_Artificial_Structure_Zoom_001

Here, what looks like a pyramid or platform structure with a couple of trees growing on top. Notice the regular geometric seams in the rock.

possible_Artificial_Object_Field_Shaped_Like_Arrow_Head_At_12.364966_South_68.867673_West_002

This is very strange. The picture is taken from an angle, but when corrected, this raised earth appears to take on the shape of an arrowhead. The pyramid/platform/whatever is at the bottom right. Oddly, the arrowhead points on an azimuth almost directly at Puma Punku (within about 2.5 degrees of due north on the reverse azimuth), which is not far away. The figures (reverse azimuth) are:

Distance: 467.1 km
Initial bearing: 357°29′53″
Final bearing: 357°32′42″
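For readers who want to check these figures, here is a minimal Python sketch using the standard spherical haversine and initial-bearing formulas. The arrowhead coordinates come from the file names above; the Puma Punku coordinates (roughly 16.56 S, 68.68 W) are my own approximation, so the output only approximately matches the quoted values.

```python
import math

# Great-circle distance (haversine) and initial bearing between two points.
def distance_and_bearing(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * radius_km * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360
    return dist, bearing

# Arrowhead (from the file name) -> Puma Punku (approximate coordinates).
d, b = distance_and_bearing(-12.364966, -68.867673, -16.56, -68.68)
print(round(d, 1))                # ~467 km, matching the quoted distance
print(round(b, 1))                # ~177.5 deg, i.e. ~2.5 deg off due south
print(round((b + 180) % 360, 1))  # reverse azimuth ~357.5 deg
```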

possible_Artificial_Object_At_12.364966_South_68.867673_West_002
possible_Another_Geometrically_Regular_Field_With_StoneWork_At_Decimal_12.336082_South_68.869891_West_001
possible_Another_Geometrically_Regular_Field_Lower_Portion_At_Decimal_12.336082_South_68.869891_West_001

A geometric object lying in the same field as the platform below.

possible_And_Another_Geometrically_Regular_Field_Zoom_At_Decimal_12.337193_South_68.846632_West_001

Compare this to the layout and overhead at Tiwanaku

pumuPunka_Overhead

And the layout scheme at Puma Punku:

puma_Punka_Layout
possible_And_Another_Geometrically_Regular_Field_At_Decimal_12.337193_South_68.846632_West_001

Here is what looks like a stone platform like the one at Tiwanaku.

Compare this to the Chachapoya Penitentiary layout:

chechapoya_At_6000_Feet_Peru

You might have noticed something odd. What are those cauldron- or casket-looking structures on the roof? Who knows. But check this out:

possible_Penitentiary_Analog_At_Decimal_12.363672_South_68.846252_West_001

Unfortunately, the image quality degrades inline, but you can click on the image, then click on “full size” to get a better view. This is just east of the arrowhead, about a mile or so. Notice the odd roof structures. This site is very close to Puma Punku, and I will have much to say about masonry in my article on this subject. The masonry problem may have been solved in the Amazon jungle.

And much, much more. I’ve found stuff like this all over the Amazon. Archaeologists are now saying that there may be hundreds or even thousands of earthworks in the Amazon that people have practically lived on top of for centuries without noticing until now.

-kk

Never interrupt someone doing what you said couldn’t be done

~ Amelia Earhart

amelia_Mary_Earhart_Age_16

Amelia Earhart, 16 years old

Wreck of the Earhart Electra

by Lloyd Manley

So, eat, drink and be merry, for tomorrow you may be dead. Seize the day you know it never ends, just look behind you. You are but a man.

~Bastard Fairies

Notice: 2010 images of the Electra at 980 feet below sea level are in the Appendix; you can view the full-size versions by clicking on an image, then clicking “original image size” on the left. These images came from a video this author obtained, and that video is the basis of the debris field analysis discussed. TIGHAR has still not released the 2012 video that shows the same objects.

If you’ve paid attention to the news you might have noticed that some kind of anomaly was found on a remote island in the Pacific that some think is Amelia Earhart’s missing plane. Now, it’s not as if this hasn’t been looked into for a long time by a lot of people. The difference this time, however, is that the debris found is a bit more than just a “sonar anomaly”. And, oddly, it’s not where it should be.

And now, it is time for the news story you have not yet heard.

You see, there is a reason why the plane is not where it should be. I am going to advance a hypothesis that will be controversial, if my own experience researching this over the years is any indication, and it has nothing to do with Saipan, Irene Bolam, UFOs, castaways or other such nonsense. No, the reason why Earhart died is sadly more mundane and familiar to all of us than we’d like to think. It’s about dark secrets, shame, intellectual insecurity and, yes, sexism and politics.

What one finds when examining the evidence in the Earhart mystery is that a culture of partiality has existed since the very day she disappeared, almost as if frozen in time, one that continues to seek solutions to the mystery based on assumptions that have more to do with preconceived notions and a pre-existing agenda than any genuine desire to resolve what happened. Many go to extraordinary lengths to find fault with Earhart herself and to point the finger of blame for the accident solely at the pilot. In fact, many will flatly claim that the accident was solely Earhart’s fault, with no contribution of error from anyone or anything else. As we shall see, a more mature and sobered examination suggests this is probably not the case.

John P. Riley wrote a piece in the journal Naval History called “Earhart Mystery: Old Mystery, New Hypothesis” (August 2000) which identified several mistakes by the USCG that we will also bring to light here, mistakes routinely ignored or denied by researchers. In it, Riley notes major errors on the part of USCG Commander Warner K. Thompson who, he suggests, fouled up, panicked, and then covered up the mistakes when it was apparent Earhart was lost. When we examine the radio log evidence we will discuss some of those same observations regarding the instructions Earhart gave and the USCG’s failure to follow them, a point strenuously and pathologically ignored or denied even to this day. Riley notes considerable evidence showing that the USCG knew it had botched the job badly and not only concealed records of the event (not available again until 1988, through a Freedom of Information request) but fabricated and altered facts surrounding the tragedy. Frankly, it is no wonder the Earhart mystery has endured 76 years, considering the extreme partiality that still exists today, much of it a direct descendant of the Thompson disinformation campaign that began the very day Earhart went missing (Thompson was the only radio listener who seemed to believe Earhart sounded “frantic” over the radio). Riley was not only amazed at what was and wasn’t done, but outright bewildered by some of the USCG’s actions. This author had the same reaction, particularly with regard to the behavior in the radio room aboard the Itasca after about 6 a.m. that morning. And we shall see shortly just how obvious the USCG errors are to an impartial reader.

What is perhaps more tragic is the realization that this very bias may be the reason the “mystery” has endured so long. For the solution appears to be a somewhat elementary deduction from facts that have been known from the beginning but never before examined, due to this distraction. This work examines that deduction and explains what actually happened to Amelia Earhart and Frederick Noonan. By simply taking the known facts for what they are and doing some basic math, one wonders whether others haven’t already done these calculations and known for some time where the plane is actually located. We will let the reader ponder that question.

I have communicated over a period of several years with numerous persons involved in this case, some of them with quite familiar names, and have collected several gigabytes of data related to the incident. From that data, and from my experience dealing with individuals who research the Earhart mystery, I am going to inventory a set of causes for the deaths of Amelia Earhart, pilot, and Frederick Noonan, navigator. The causes are many, but they include the ineluctable conclusion that from about 7 hours and 20 minutes into the World Flight until near its end at about 20 hours, the navigator of this plane was impaired. And I am also going to show that there has been, ever since that time, a “silent conspiracy” in aviation circles to suppress the true facts of the Earhart mystery and conceal this from the public. The reader might already know that Frederick Noonan (FN) was a world-renowned aviation navigator, probably the best of his time, who worked Pacific routes for Pan Am, and that Pan Am fired him for alcoholism. To be clear, this is not what led me to think FN might have been impaired; it is just one conjecture as to how he might have been impaired. In reality it turns out to be just one more extraneous fact that happened to follow the elephant in the living room. And as a hard-core skeptic with a decided disdain for conspiracy and anti-social rant, it took quite a bit of evidence to bring me to write this piece.

The International Group for Historic Aircraft Recovery (TIGHAR, pron. “tiger”) was a non-profit created in the late eighties by an aviation researcher named Richard (Ric) Gillespie. This group quickly developed the then-shocking hypothesis that Earhart might not have simply crashed and sunk into the water, as had been believed even within the first few hours of her loss. The latter view is championed by a group following the work of Elgen Long, an explorer and record-breaking pilot himself. Thus TIGHAR and Elgen Long (and his late wife, Marie Long) hold the keys to the two main schools of thought in the Earhart mystery.

With all his flaws of reasoning, Mr. Gillespie still managed to build a fairly successful following and research plan, with numerous expeditions to the island over several years to recover “artifacts” which he believed belonged to Earhart (AE) and FN. The basic hypothesis developed and was refined over the years, eventually maturing into the current view: that AE performed a successful emergency, powered landing, with minimal damage, on a unique section of reef on the island (unique because it was the smoothest and flattest section of the entire reef). Gardner, now called Nikumaroro, is a small atoll with a circumference of some six miles. It has a storied past, with all manner of human habitation coming and going over the years. Peoples of all races and creeds have lived there at one time or another.

After numerous expeditions, around 2010 a philanthropist named Tim Mellon donated about $1 million in stock to TIGHAR to fund the 2012 “Niku” expedition in hopes of finding some evidence of AE’s presence there. What happened after that was a train wreck. The plane was clearly found, but the public was not told because, as Mr. Gillespie put it, he could “only see a bunch of coral” (communication between author and inside source 1) where Tim Mellon saw an airplane in the various video footage clips taken from rovers sent deep over the side of the reef at Gardner. A skeptical public wisely took the position (later on, when they heard about all this) of the official TIGHAR statement since, firstly, this entire discussion was occurring in private the whole time, all the way up to 2013, and no one knew the ROV video even existed. Gradually, Mr. Mellon learned about this video, as even he had not been told all the particulars, and he became disillusioned with Mr. Gillespie. Mr. Mellon came to believe, I think sincerely, that Mr. Gillespie was engaged in a fraud to conceal the existence of this plane from the public. As a result, as is often the case in legal wranglings, Mr. Gillespie was compelled to release more and more information from both the 2010 expedition and the follow-on 2012 expedition to avoid the accusation that he had something to hide. The seemingly inexplicable motive for concealing the video was that Mr. Gillespie wanted to “keep the gig going” (inside source 2). In other words, it was claimed, Mr. Gillespie was withholding the find in order to gin up public interest in the perpetual funding of his expeditions, each of which would release a fraction of new information to keep the funding coming in.

But what is important for our purposes here is not whether Mr. Gillespie really knew the aircraft had been there all along, but whether the plane was actually there and is there now. This entire dispute came to broader public attention when, in early 2013, Mr. Mellon filed a civil suit against Mr. Gillespie and TIGHAR alleging fraud. At that time, not coincidentally, information began to flow from TIGHAR. It hit the airwaves for public consumption as a release of sonar data and other dive-related finds, but none of this was the find Mr. Mellon was talking about. As of this writing, that is still true. The true scope of what is below the reef simply has not been reported by any news agency yet; at least not in a way that identifies it as a find distinct from the other sonar surveys conducted further up the slope in the same debris path as Mr. Mellon’s find.

TIGHAR had released some of this information on its own website, but only in fragmentary, low-resolution form. This is a critical point. Because of coral reef growth, high-resolution imagery is absolutely essential for the layperson to make heads or tails of what they are seeing. Without it, the Electra might as well be Sasquatch. It was only this author’s interest in the Earhart case that compelled him to dig deeper. Mr. Mellon, a contributor to the forum discussion on the TIGHAR site, attempted to make his case there. But this was difficult to do with the poor-quality imagery. Mr. Mellon had his own high-quality copies, but he couldn’t show them because of the legalities of the case. Only when the viewer can see the dramatic difference, not just in the intrinsic quality of the low- vs. high-resolution videos but in how that affects the ability to separate artificial objects from natural ones, does one appreciate the veracity of Mr. Mellon’s claims.

On July 2, 2013, 76 years after the loss of AE and FN, this author obtained the high-resolution copies of the video in which Mr. Mellon claimed that aircraft wreckage could be identified. The presence of the remains of a Lockheed 10E Electra, modified per Earhart, was clear and convincing.

 AEOS024

Amelia Earhart, 1930

A Word about Pareidolia

Pareidolia refers to our brains’ tendency to see order in chaos. This is real. If one sees what appears to be an artificial object in a natural setting, it is probably natural, all else being equal. But there are two ways we can test whether a sighting is pareidolia or not:

1.)   If we take a viewing angle that just covers the object of interest and examine it, it may be possible to see symmetry or other features that make it seem artificial. Based on that alone, the claim is vulnerable to challenge as pareidolia. But if we now zoom out to the surrounding area and include other objects of interest in the same scene, the pareidolia hypothesis predicts that the symmetry and order of the scene should decrease, not increase. When the order of the scene increases, it is highly likely that you are seeing an artificial object.
2.)   We can also do something that Mr. Tim Mellon did quite well on the TIGHAR discussion board, and it tends to be the most conclusive way to eliminate false positives arising from pareidolia. You can view an object of interest and note its apparent order. If you come back later, after a definite time lapse, nature will tend to disrupt the symmetries and order over time if the cause is purely natural. But if the order in the scene increases or remains the same, it is highly likely you are looking at an artificial object. If these were purely natural objects, we simply would not see such features persist over time as nature shifted and moved them.

But if it is artificial, the correlation with other material at the site and other facts we know about the Electra makes it almost certain this is the subject aircraft. TIGHAR probably should have had experts who understand this: they could have consulted veterans of the intelligence community (NSA and NRO) who do this for a living and who have even written software exploiting exactly these two points to correctly identify artificial objects on the ground from orbit in highly cluttered and chaotic environments. We have included still shots in the Appendix so you can view at least a fraction of what’s down there. It is encrusted in growing coral, but you can clearly see what it is.

The two prevailing theories

Elgen Long is an aviator with considerable experience, both in commercial flight (flying Boeing 747s for some time) and in experimental or “record” flights. He is familiar with the navigational techniques used by FN and has written on the subject, including a seminal work entitled “Amelia Earhart: The Mystery Solved”, in which he lays out in formal fashion the oldest of all theories: that the plane simply ran out of gas and crashed into the Pacific. In a phone interview with him some time ago I found him to be a very sobered, clear and rational thinker who obviously had the mettle to think through the Earhart mystery very carefully. But Long does something peculiar, and I’ll explain later why it is peculiar: he uses the navigational techniques of that time to show why FN should have been:

  1. very close to Howland at the time the plane ditched, and
  2. able to place the aircraft in an area of uncertainty not much larger than 30 x 80 statute miles.

Based on this theory, he confidently predicted the plane would ultimately be found in that area of uncertainty. Years later, in the 2009 time frame, expeditions were sent to this area to perform surveys of the sea floor using technology sufficient not only to find an airplane of that size but to positively exclude it if it were not there. The entities doing these searches were Nauticos and the Waitt Institute. And alack, they positively excluded the Electra from this location. And now we get to the reason why this prediction was so peculiar. As we shall see with the navigation techniques, the area of uncertainty Long calculated was in fact what we would expect assuming a competent navigator had done his work the night before. To understand this, we need to know how the navigation technique worked, in detail, because what we need to verify is that the technique itself is unassailable. To do this, we first note that they began their last flight from Lae, New Guinea at 10 a.m. local time, which happened to conveniently correspond with 00:00 at the prime meridian. As the day wore on they flew through the night and did not see sunrise until about two hours before their expected arrival at Howland Island. The entire flight was to be about 20 hours. And we shall see that Long’s area of uncertainty assumed that a certain minimum number of celestial observations were correctly made during the night.

takeoff_Lae_NG

Last known still image of AE and FN departing Lae, New Guinea, 02 July, 1937

At that time, the most accurate way to determine one’s position began with a group of mathematicians and astronomers sitting down to calculate where several celestial objects would appear in the sky at every second of every day of the year, as seen from a reference position called the prime meridian (at Greenwich). The prime meridian is a longitude. These predictions were published in what is called an almanac. The trigonometry of the situation is such that if you made that very same measurement yourself of an object overhead, you could subtract the angle you measured (navigators call this an “altitude”) from the predicted angle for the prime meridian, at the same time and date as entered in the almanac. If you draw a diagram on paper you can see that these “angles” are the same angle. The result of that subtraction (technically, a signed addition) is your longitude at the moment and place you made the measurement. The tool for measuring it is called a sextant (which measures this “angle” by comparing the object’s position to the visible horizon) or an octant (which does the same but substitutes a leveled bubble reference for the horizon). It is important to understand that, all else being equal, this method only gives you a longitude. In reality, because the objects don’t rise and set exactly at the equator, the paths they take through the sky are offset from east and west, and the “longitude” you get is in fact a “tilted” vertical line that is approximately parallel to a longitude (and perpendicular to the object’s motion in the sky). In more extreme cases, this tilt produces a line with a large latitude error, making the object not a “preferred” one for this purpose.

Having said that, if an object passes overhead along a path perpendicular to longitude lines as the Earth rotates, you can use that one object by itself to determine latitude. There aren’t many objects like that, but some are positioned just right for the purpose. In the almanac FN carried, there were 11 stars that could be used to fix latitude and about 57 that could be used to fix longitude. How many of these objects you actually have depends on the mechanics of the particular flight: whether the path of any of the 11 latitude objects crosses the sky at the required right angle depends on the geometry of that night. The list is further truncated by weather: if you can’t see it, you can’t use it. To get an exact position, as in latitude and longitude, one has to use either two or more of the 57 longitude objects or one of the 11 latitude objects, effectively truncating the 57 to about 29 usable combinations. Furthermore, about 15 of the 57 objects would have been difficult to see from the near-equator southern hemisphere at that time, meaning there were really more like 42/2 = 21 object pairs available for fixes. Finally, finding the appropriate star can itself truncate the list, since doing so often requires identifying a constellation before an individual star can be identified. FN in fact made just such an error on the Oakland to Honolulu flight, when he incorrectly identified a star.

When you are able to measure both latitude and longitude using this general method, it is called a “fix”. When you can only get longitude, it can be worked into what is called a “line of position”, which is what AE reported by radio that they had “near” Howland. Common sense suggests that if this is what she reported (just before ditching), she hardly would have said it if she actually had a full fix.
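To make the idea concrete: because the Earth rotates 15 degrees per hour, the almanac comparison described above amounts to measuring a time lag. Here is a toy Python sketch of just that arithmetic; it is not real sight reduction (corrections for refraction, dip, declination and so on are omitted), and the sample numbers are invented for illustration.

```python
DEG_PER_HOUR = 360.0 / 24.0  # the Earth rotates 15 degrees per hour

def longitude_from_timing(greenwich_event_gmt, observed_event_gmt):
    """Longitude, in degrees west, from the lag between when the almanac
    says an event (e.g. a body reaching a given altitude) occurs at the
    prime meridian and when the navigator actually observes it."""
    return (observed_event_gmt - greenwich_event_gmt) * DEG_PER_HOUR

# Invented example: the almanac predicts the event at 12.0 GMT on the
# prime meridian; the navigator observes it at 23.8 GMT. He is therefore
# about 177 degrees west -- roughly Howland's longitude (~176.6 W).
print(longitude_from_timing(12.0, 23.8))  # 177.0
```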

Obviously, they did not know their latitude and had good information only on their longitude, within a margin of error.

This point will be crucial in the conclusion section, so we make special note of it here. Even this greater certainty in longitude ended up involving two values, not one; but both had far smaller margins of error than latitude. This knowledge of longitude, with an appropriate error margin, would have been calculated by measuring the angular separation between the sun and the horizontal reference within a bubble sextant once the sun cleared 6 degrees above the horizon. But it could also have been done by performing a similar procedure on a celestial body during the night, just before sunrise. Thus, if this message were heard, searchers could only look along the “line of position” (LOP), because they didn’t know where she was on that line. In other words, the line was almost parallel to a longitude line, so it represents a one-dimensional zone of uncertainty whose largest component is latitude. The final truncating factor in the choice of celestial bodies was the aircraft itself. The navigator had a small window through a hatch he could take sights through, but that was all. The plane had to be maneuvered to get shots. This required cooperation between navigator and pilot and set limits, since this airplane could not fly in just any orientation. That also means the pilot would have been well aware of any fumbling or difficulty encountered by the navigator during this time, a point that will come up later. Additionally, we should note from here on that when we refer to latitudes and longitudes in the context of measurement, the measurement error is assumed in the statement. In other words,

when we speak of a longitude or latitude in that context it is to be understood, unless stated otherwise, that we are speaking of the mean longitude or latitude of the error margin for that value.

 waitt_And_Nauticos_Exclusion_Area

In one of the most comprehensive searches in history, Nauticos and the Waitt Institute positively excluded the Lockheed 10E from any of the areas shown above. This is precisely the uncertainty area postulated by Elgen Long. The search was possible due to advances in sea floor surveying realized only in recent years. It now means that anyone postulating a “northern solution” must contend with the same latitude error that a “southern solution” must. The sets of theories falling into the two categories are chiral reflections of each other.

A cruder way to determine change in position is called dead reckoning and, as a practical matter, must be used in conjunction with celestial navigation. Dead reckoning is exactly what it sounds like: you measure your change in position from a known position by measuring your speed and direction (velocity vector) over that transit. Ground speed can be tricky because the air isn’t stationary. So you measure “drift”, or wind speed, and estimate ground speed by combining that with throttle information, another area of cooperation between pilot and navigator. This method has been found to be accurate to about 10% of the transit length. In other words, if a crosswind “pushes” you one way, you can expect an estimate for deflection in that direction within an error range of up to 10% of the transit length. This means that if the transit from a known position is short, the deflection error from crosswinds will be fairly small.

By combining a LOP with a dead-reckoned “run” you can expect some loss of information in latitude but very little in longitude, because the celestial fix has mostly eliminated the longitude error. The error margin in latitude will depend, again, on the transit length between this point and the most recent prior fix. This is called “advancing the LOP”. So if FN shot the sun, for example, only two hours before expecting to reach Howland, his longitude would still be pretty accurate, even with some dead-reckoning error margin built in. But his latitude would depend on prior fixes or his most recent known position. Thus FN would “advance” his LOP taken two hours earlier to overlay Howland Island; if that leg were only 200 miles from the point where he measured the sun, the added error margin in longitude would be at most 20 miles (10% of 200 miles). On the other hand, if FN had not gotten a full fix in something like 13 hours, his latitude error margin could be as high as 170 miles. And again, what we hear obviously and clearly in the radio transmissions is that the last true fix was some 13 hours prior. There would be no reason for AE to report a LOP and fail to report her position at any other time if she knew more about her location. Common sense tells us that, for whatever reason, FN got only that one fix 13 hours before.

This obvious fact has been avoided and danced around for 76 years because, in this emotionally driven subject, everyone is doing their level best to avoid any possibility of assigning blame to FN, whether anyone was trying to or not. Could there be a subliminal reason for this? Did those who are intimately familiar with Noonan’s trade know from the beginning what the author learned recently? Another clue, though one we don’t need to rely on to figure this out, is that AE had a signature sign-off on radio for which she was well known. It was, simply, “everything okay”. This very message was received with that last known good fix just mentioned. What is peculiar is that the habit did not persist on this flight, making her behavior anomalous barring some other explanation. In other words, AE never repeated those words, even though this was commonplace on all her other flights and, as noted, a signature line of hers. While this does little to prove a point to a skeptic, it is one of the most compelling pieces of evidence to the author. Something was going on in the plane that AE was not advertising. It needn’t have been catastrophic at first, but it was at least ominous.
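The error arithmetic above is simple enough to sketch directly. The 10% figure is the rule of thumb quoted in the text; the ~130 mph ground speed in the 13-hour example is my own assumption for illustration.

```python
DR_ERROR_FRACTION = 0.10  # dead reckoning: error ~10% of transit length

def dr_error_miles(transit_miles):
    return DR_ERROR_FRACTION * transit_miles

# Advancing a sun-line LOP shot 200 miles before Howland adds only a
# small longitude error:
print(dr_error_miles(200))        # 20.0 miles

# But with no full fix for ~13 hours at an assumed ~130 mph ground speed,
# the latitude error margin balloons:
print(dr_error_miles(13 * 130))   # 169.0 miles -- on the order of 170
```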

This author posed the question on internet forums to assess how well “researchers” and “experts” truly appreciated the limitations of celestial navigation in aircraft. Due to the short flight times, the limited visibility from within the airframe and the considerations of weather, the number of choices isn’t as wide as many assume. The table below lists the celestial objects FN had at his disposal on this flight (the numbers indicate brightness; lower numbers are brighter). Anything above 0 will be hard to see at all in moderately foul weather.

Object       Brightness   “200 miles out”   “100 miles out”
Deneb            1.3           yes               yes
Vega             0.1           yes               yes
Fomalhaut        1.3           yes               yes
Achernar         0.6           yes               yes
Venus           -3.7           yes               yes
Jupiter         -2.8           yes               yes
Moon           -12.6           yes               yes
Saturn          -0.24          yes               yes
Aldebaran        1.1           no                yes

The table shows the total set of celestial objects available to FN for navigation on that night. As we can see, it isn’t very many. A few other objects were available at other times, but note that there is no indication from the pilot that any observation occurred, even though she had a history of frequent position reports (see the Honolulu-Oakland flight as one example). Having noted the difficulty, an unimpaired, competent navigator should have been able to get a full fix with this many celestial objects, at least at these positions. The Waitt Institute obtained similar results to these.
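For reference, the truncation arithmetic from the almanac discussion earlier reduces to a trivial tally (counts as given in the text):

```python
longitude_stars = 57   # almanac objects usable for longitude
hard_to_see = 15       # hard to see from the near-equator southern sky
usable = longitude_stars - hard_to_see  # 42 remain
print(usable // 2)     # 21 object pairs available for full fixes
```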

Thus in this particular flight, it is evident, if we take the known facts at face value, that the only fixes obtained during the entire flight were at takeoff (prima facie) and once again about 7 hours in. That’s it. There were no fixes afterward. Of course, the cop-out over the years has been, rather than possibly put FN on the hot seat, to simply dismiss AE as an imbecile who didn’t understand that her location might be one of the most important transmissions she could possibly make; and we already know that she would report position when the information was known to her, because she did so. If we take the facts for what they are telling us, we don’t get a position report beyond what we got because there weren’t any to give. And the simplest explanation truly is usually the correct one. Furthermore, that AE didn’t blab about problems with the navigator or with positions might have something to do with radio discipline and the fact that she was covering for him. But when your bias is against the person from the start, these kinds of things never occur to you. Thus, the question is: why? Shouldn’t this have been easy? Given the heavy truncation of options from the almanac, this author is not convinced it would have been easy if cloud cover existed, even in patches. Even so, the author must admit that surely more fixes than this could have been obtained. When the most likely explanation is clearly not possible (the Electra being found northwest of Howland), the less likely has to be considered. FN was eminently qualified to take the measurements required, and even with clouds and limited visibility he would have found the challenge insignificant.

 800px-Amelia_Earhart

AE and FN standing next to the port after hatch of the Lockheed 10E mod per Earhart

A quick note here: the author will assume that the simplest explanation sufficient to explain what happened is most likely the correct one. Therefore, we are going to take the evidence at face value unless there is some compelling reason to question it.

The Waitt report on the last flight of AE and FN reached the same conclusion as this author in at least one respect. They wrote:

“Of thirteen position reports made by Amelia Earhart from Lae-Howland, only two included a latitude and longitude position, and one of those is potentially in error in time and/or location. This is unusual given Fred Noonan’s experience with making detailed position reports on South Pacific proving flights with Pan Am in 1935.

Before joining the World Flight, Fred wrote about the importance of complete position reports, including latitude and longitude, air and ground speeds, wind direction and speed, and outside air temperature, in a post-flight report following one of these trips.”

The “unusual” report is the 200- and 100-mile report, which we will discuss in what follows.

Enter the Circus of “Experts”

How could the splash-and-sank “experts” be wrong? Because they are not infallible and have sought for 76 years, in most cases subconsciously, to isolate Earhart as the sole cause of the failed flight; they are partial to a predetermined conclusion. More specifically, the conventional wisdom breaks down when they assume, but do not verify, that valid celestial fixes were taken in the night. What we will actually verify shortly is that fixes were in fact taken, but they were invalid. On the night of the flight from Lae, New Guinea to Howland, the United States Coast Guard (USCG) cutter Itasca was positioned offshore of Howland to provide services for Earhart’s approach and landing. This and other assets had been ordered by the President to provide assistance, especially on this Pacific portion of the journey. One of the services Itasca offered was to broadcast weather reports by radio to Earhart during her transit. Messages were sent that included weather reports from Howland, one of which occurred at 1623 mission time. Of course, at 1623 AE was a very long way from Howland, and the weather being sent was taken at Howland, so it won’t tell us much about the weather where AE was located. But one minute later, at 1624, AE transmitted “partly cloudy”. One will often hear the claim that the weather in this transmission at 1624 mission time indicates FN must have taken a fix, since the report suggested at least moderate weather. This is fallacious and, as we’ll see, we have good reason to believe that, for whatever reason, celestial fixes were not successful during the night in any case. Without this erroneous assumption, which produces a type I error (partly cloudy skies do not, by themselves, imply good fixes), we cannot pin down an area of uncertainty as small as Elgen Long would like. Thus conclusions of merit cannot a priori be had regarding the number and quality of celestial fixes, even though such conclusions have been a staple of Earhart research for 76 years. Self-professed “experts” are quite vocal on the subject of Earhart, even when they couldn’t possibly know what they are talking about, as in the example just given. And that example is but one. Something more definitive is needed to show that celestial fixes were either not being taken or were not being taken correctly. We could spend a career on the mass of material generated by this cottage industry of hate-based historical-figure bashing, but the author will at least briefly address some of it as it directly affects the conversation.

The claim that longitudinal error margins were very small relative to latitude error margins must deal with objections to the idea that the plane ended up anywhere other than the immediate vicinity of Howland Island. This objection in turn is based on the belief that there would be no reason to “search” to the south (or anywhere else) if one were really so close to the target and knew it. This is a non sequitur. Nobody said they were looking for Gardner; thus one need not posit that there was any “reason” to search to the south other than ignorance of position itself.

 distinguished_Flying_Cross

The pilot was the first woman ever awarded the Distinguished Flying Cross, for being the first woman to cross the Atlantic Ocean solo and nonstop. She would later become the first human being to transit the Pacific solo, in 1935. But if you ask most AE loss “experts”, she means little to nothing in aviation history. Contrary to the fantasies of second-rate, risk-averse pilots, she was not an “incompetent pilot”; she was a risk-prone pilot who broke records. Her service and sacrifice had an enormous impact on those who followed, for generations, especially women. Sadly, it is her accomplishments and what she meant to aviation history that have been swept under the rug because of the “mystery” of her loss. The “foolish” antics and “ditzy” goof-ups attributed to her over the years were precursors to the epic voyages of Yuri Gagarin, Neil Armstrong, Viking, Venera, Voyager and many to come. It’s time for this aviator to be remembered for the big picture and the impact she, regardless of sex, had on human discovery. And that is why the truth is finally being told.

AE was a risk-taking explorer, not a bad pilot, as has been claimed over the last 76 years ad nauseam. There is a difference between the two. Some people take risks others never would. That doesn’t make them bad pilots; it depends on the cause. Neil Armstrong performed a mind-blowing landing in a spacecraft under conditions he would never have accepted with fare-paying passengers on a flight from New York to Seattle to see the grandkids. It is patently absurd to hold AE’s judgement of safety to the standard of a pilot flying regular commercial routes. Neil Armstrong also crashed many times, while other pilots doing the same would never have gotten off the ground to start with. To show this difference, there are innumerable reports of pure, raw flying skill that are conveniently forgotten by the naysayers. One of them is the event in central Asia which Long, to his credit, faithfully recorded and noted as exemplary. (What this author admires about Long is that he was a risk-taker also: in addition to flying 747s for passengers, he flew record-breaking flights. He gets it.) In that takeoff AE judged time and distance extremely well with an overloaded aircraft and almost touched the treetops with her landing gear after lifting off from a muddy runway in the middle of monsoon season. All the men there told her it couldn’t be done and not to try it. She did it anyway. And the Chater Report tells us that when AE took off from Lae, New Guinea on the final flight, she was hardly impaired or incompetent:

“The take-off was hair-raising as after taking every yard of the 1000 yard runway from the north west end of the aerodrome towards the sea, the aircraft had not left the ground 50 yards from the end of the runway. When it (NR16020) did leave it sank away but was by this time over the sea. It continued to sink to about five or six feet above the water and had not climbed to more than 100 feet before it disappeared from sight. In spite of this however, it was obvious that the aircraft was well handled and pilots of Guinea Airways who have flown Lockheed aircraft were loud in their praise of the take-off with such an overload.”

How is it that an “incompetent” pilot was able to judge time and distance so well that it impressed pilots experienced with Lockheed aircraft, and to do this magic not once but twice in less than two months? Was Chater lying? Why?

What is the moral of this digression? The moral of this digression is the moral of this story: Amelia Earhart’s contributions to aviation and history are being smothered by an over-emphasis on her loss, and the conjecture that her loss was due to poor flying skill. It’s time to set the record straight.

For his part, Captain Frederick Noonan was one of the most skilled and talented celestial navigators of all time. His contributions to aviation cannot be overstated and should not be forgotten. And speaking of partiality and finger-pointing, citing the mistakes of others in this tragedy must be done only if the mistake has a causal relation to the outcome and can be shown by evidence to be more likely true than false. Noonan’s reputation as a navigator was unimpeachable, and everyone knew he was, on the face of it, the right choice for the job of the World Flight. Having said that, he was a fallible human being like anyone. While it is impossible to enumerate every manner in which a person can be “incapacitated”, as that covers a lot of territory, there are pedestrian explanations that cannot be ignored either. More importantly, for an agency such as the NTSB to investigate a crash, find that a pilot made an error, but then fail to acknowledge that fact harms aviation by keeping the real causes of the crash from the public. Only by separating an aviator’s considerable successes from their mistakes can one both appreciate their contributions to aviation and learn from their mistakes at the same time. That is the approach taken here.

Thus, we are going to provide some background on old theories and conjecture about the state of mind of the navigator, particularly the rumors of alcoholism, while pointing out that such a condition is but one conjectural explanation for cognitive impairment and need not be the only one. We do not examine the cause of impairment itself, only the question of impairment, whether due to the exhaustion of such a long journey, physical maladies and illnesses contracted along the way (for which evidence does exist) or other factors. Indeed, the impairment we will show was almost certainly causally involved in this accident could have emerged from a combination of factors, which is the view we assume until evidence proves otherwise. The rumors about alcoholism grew over the years to an echo that begged further inquiry.

In a 1972 taped interview with Pan American executive John Leslie (PAA archives, Richter Library, Miami), Victor Wright, a crew member who flew with Noonan on Clipper flights, said of Noonan: “he drank himself out of a job … It got to Noonan by way of drink. We had no one to do the navigating except Noonan. Harry Canaday then took over and navigated on the way back.”

In other words, when FN was navigating a plane in which survival depended, ostensibly and potentially, solely on him, he was intoxicated. So badly, in fact, that an amateur had to navigate them back.

Yet another report of this habit was provided on the PBS program “American Experience”. The author Gore Vidal was interviewed regarding Noonan’s drinking habits and stated: “Well, just the night before the final flight, she reported in and they had a code phrase, ‘personnel problems,’ which meant Noonan was back drinking. And my father said, ‘Just stop it right now and come home,’ and G.P. agreed and said, ‘Come back, abort the flight, forget it, come home.’ And then she said, ‘Oh, no,’ and she said, ‘I think it’ll be all right,’ something like that. So you may put that down to invincible optimism or it may have been huge pessimism.”

Yet again, another witness clearly stated that FN had a drinking habit, likely indulged against better judgement. TIGHAR was given a copy of a letter alleged to have been written from Honolulu by Russ Brines to another journalist on 3 August 1937. Oddly, TIGHAR waxes defensive at any claim of alcoholism vis-à-vis FN, yet sees no issue with routinely slandering and defaming AE. In their eyes, AE has not contributed anything meaningful to society and is not deserving of the same respect; sentiments that sound decidedly partial and ideological to this author. Brines wrote:

“My cynical theory, and that of many other lads around here, is that Noonan found Lae a much too interesting town for anyone’s good. Privately, and only between us, I know Fred; know that he is — or was — one of the country’s best aerial navigators and one of its most accomplished six-bottle men, having cut his teeth on the foresail stays of an old square-rigged ship. Nursing such a developed thirst, he probably went for broke in Lae which, as you know, is an old-fashioned pioneer town with airplanes instead of covered wagons to cater to the gold rush. Therefore, if this is true, the chances are that Amelia had him poured into the plane and decided to do the navigating herself. Well, she can’t — couldn’t — navigate for sour apples. … ”

Though it isn’t clear how Brines knew what AE’s navigating skills were, clearly they were not comparable to FN’s, provided he was sober.

Finally, Earhart herself reported in writing that Noonan had been drunk on the World Flight at a stopover in Calcutta, India. Another report from Earhart repeated this claim, but this time at Lae, NG. While apologists with varying agendas have tried vehemently to suppress this information, the fact is that one cannot claim there is no evidence of this habit. There are just too many reports of it. Earhart stated that FN “had been drinking [on the world flight] …” and that she “didn’t know where he had been getting it”. Was AE being untruthful? If so, why? It will shortly be clear why positions weren’t reported, and also why “personnel problems” would not be conveyed by radio mid-air: such matters were communicated confidentially, and if “personnel problems” was in fact a code, that makes the point clear. Speaking of which, if it were not such a code, why would AE say something so cryptic in the first place? Clearly it was a code, and we can debate what it meant, but it was a code.

It is likely that AE would not have known about this particular episode, had it occurred, until after the point of no return, which is the point at which there is insufficient fuel to return to Lae. Depending on who you ask, this would have occurred anywhere from 10 hours into the flight up to about local midnight, a concept we’ll examine shortly. The differences in estimates are based on assumptions about headwinds and tailwinds, which are subjective as regards a pilot’s rationale for a safe return. There is no way a public figure like AE was going to publicize such an episode, should one have occurred, by transmitting it in the open. That is why we got so little information about the errors that were about to compound astronomically for the crew. We would not make this claim without a much stronger argument independent of the anecdotal reports, and we will see that argument in the section “Denouement”. It is this PR angle that suggests the impairment was in fact alcohol-related, but at this point that is purely conjecture.

lockheed_10E_Profile_View_002

Fuel Consumption, Part One

On this flight, the issue of fuel consumption has consumed tomes of material. The bottom line is that no one can definitively state how far the plane flew. To get a first-principles understanding of how much fuel they were burning, we can at least look at how AE was trained to fly for maximum range; we can safely presume AE was intelligent enough to seek the longest-duration burn possible. What was she taught, and what procedure did she follow? While it is not the case today, at that time most pilots were taught that for maximum range the wing of the airplane must be kept in the attitude that gives maximum lift and minimum drag. This meant there was an optimum speed at which to fly the airplane to keep the wing in that attitude. We know that Lockheed told AE to fly the airplane at 150 mph true airspeed (indicated airspeed, what you read on the dial, corrected for air density) for maximum range, regardless of winds. She was further taught that if the headwinds were too strong, she should postpone the flight. Our first hint that she was in fact following her training is found there, for she did in fact postpone flights due to headwinds. If headwinds were 30 mph, her ground speed was, therefore, 120 mph. According to Long, this is about the best estimate we can get for the headwinds. This is a critical point, because AE had established a strong pattern in her career (something NTSB investigators often look at very closely) of following the recommendations of experts very closely. In other words, what Lockheed recommended with regard to handling the aircraft was probably what she did, and we know that from her own past behavior. This is why Elgen Long gets the large fuel burn he estimates, and we think he is mostly correct.
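The ground-speed arithmetic is trivial but worth writing down. A sketch, using the Lockheed guidance and the 30 mph headwind estimate attributed to Long above; the ~2,556 statute mile figure for the Lae-Howland leg is the commonly cited distance, used here only for illustration.

```python
TRUE_AIRSPEED_MPH = 150.0   # Lockheed's hold-this-speed guidance
LAE_HOWLAND_MILES = 2556.0  # commonly cited leg length (my assumption)

def ground_speed(headwind_mph):
    return TRUE_AIRSPEED_MPH - headwind_mph

gs = ground_speed(30.0)
print(gs)                                # 120.0 mph over the ground
print(round(LAE_HOWLAND_MILES / gs, 1))  # ~21.3 hours at that speed
```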

The Waitt Institute provides some detail on the Long theory of fuel consumption with the Fuel Analyzer failure scenario. This is a theory cooked up by Long to make the gas burn faster, but no evidence exists that such a failure actually occurred on this flight. In any case, assuming the failure of this device (which automatically adjusts engine rpm toward optimum fuel efficiency), the Waitt Institute suggested that engine run time would expire by 9 a.m. As far as this author knows, this is the most conservative estimate yet on fuel burn. We shall see in what follows that the Electra ditched at approximately 9:45 a.m., suggesting the Fuel Analyzer worked for at least some portion of the journey. It is on the issue of the Fuel Analyzer that this author parts ways with Long: the device did in fact work properly, and that is why exhaustion occurred about one hour later than Long assumes.

Internet “experts”, and also Elgen Long, claim that AE used more fuel than she had on board and ditched before she found Howland Island because she flew faster than planned to compensate for strong headwinds, as just explained. Long wrote, “The stronger the head wind the faster the plane must fly for maximum range.” The basis for this, Long wrote, is that for every headwind component there is a recommended speed for maximum range. However, increased speed means increased fuel consumption. His aerodynamics is correct, but his assumption about AE is dead wrong: AE did not know this; she knew what Lockheed told her, and she used the best information available at the time. Long finds a burn rate of about 55 gph average. This author finds about 50 gph average. Given the information Long had, this author finds his conclusions remarkably close to what he now thinks was the actual value (to be shown in what follows).
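A back-of-envelope endurance comparison for the two average burn rates just named; the 1,080-gallon usable load is the figure used later in this article, treated here as an assumption rather than a measured quantity:

```python
# A back-of-envelope endurance comparison for the two average burn rates
# named above (Long's ~55 gph versus this author's ~50 gph). The
# 1,080-gallon figure is the usable load used later in this article;
# treat it as an assumption, not a measured quantity.

USABLE_FUEL_GAL = 1080

for gph in (55, 50):
    hours = USABLE_FUEL_GAL / gph
    print(f"{gph} gph -> {hours:.1f} hours of powered flight")
# 55 gph -> 19.6 hours; 50 gph -> 21.6 hours
```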

Whenever there is uncertainty about a variable, such as fuel burn and thus powered run time, it helps if we can bracket the problem. Using the most conservative fuel estimates to steer clear of controversy, we can accept Long’s lower range estimate (though I do not think we can merely assume a failure or fault in the fuel system that reduces powered run time; that is a recent innovation in the Long theory, and a case of special pleading in the absence of direct evidence). There is little controversy, if any, known to this author regarding the time at which AE reached the advanced sun line at Howland. Loosely stated, this is the point in space at which the plane reached the longitude of Howland (not quite equal, because the line runs north and south at an angle to the meridians, but close). With some provisos to be elucidated below, if we start with Long’s theory we see that AE reached this line when she said she did; so, once again, we are taking the evidence at face value and accepting it for what it is. This occurred at 7:42 a.m. with the following message from AE:

19:12 7:42A.M. Earhart:” KHAQQ CALLING ITASCA WE MUST BE ON YOU BUT CANNOT SEE YOU BUT GAS IS RUNNING LOW BEEN UNABLE TO REACH YOU BY RADIO WE ARE FLYING AT ALTITUDE 1000 FEET.”

The proviso is that AE was within a reasonable distance of Howland. We shall see later that she was not. AE notes that gas is running low, and one log (there were two logs being kept in the radio room of the Itasca during this time) records that she said there was only 30 minutes remaining. However, it was typical of aviators to refer to being “empty” upon reaching their reserve, which was typically 20%. If that were the case, she would have had about 4 hours flying time beyond that report. In either case, the simplest explanation is what the radio logs are about to tell us next anyway:

20:14 8:44 A.M. Earhart: “WE ARE ON THE LINE OF POSITION 157-337, WILL REPEAT THIS MESSAGE ON 6210 KCS. WAIT, LISTENING ON 6210 KCS. WE ARE RUNNING NORTH AND SOUTH.” Itasca log: “On 3105 - volume S-5.”

In other words, the plane was still airborne at 8:44 a.m., so clearly they had more than 30 minutes of fuel remaining. Taking these words at face value, we have no way of knowing what she meant by 30 minutes of fuel remaining, if she said that at all. Again, we take the most conservative estimate this implies and use it to determine whether or not AE and FN could have reached Gardner. We should note, however, that AE’s transmission schedule was 15 and 45 minutes after the hour, so her next transmission after the last was due at 9:15 a.m., and it was never received. We must therefore conclude that the plane could have flown up to that time; we’ll examine even longer times later. But for now we have to examine the radio situation in more detail, for it resolves much of the chaos and confusion over just how far AE could have flown (one of those pseudo-mysteries of the Earhart case).
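A sketch of the two readings of “gas is running low” discussed above; the 1,080-gallon load and 50 gph burn are this article’s figures, assumed here for illustration:

```python
# A sketch of the two readings of "gas is running low" discussed above. If
# "low" meant the customary ~20% reserve, the remaining endurance was far
# more than 30 minutes. The 1,080-gallon load and 50 gph burn are this
# article's figures, assumed here for illustration.

USABLE_FUEL_GAL = 1080
BURN_GPH = 50

reserve_gal = 0.20 * USABLE_FUEL_GAL
reserve_hours = reserve_gal / BURN_GPH
print(f"20% reserve = {reserve_gal:.0f} gal = {reserve_hours:.1f} hours at {BURN_GPH} gph")
# -> 216 gal = 4.3 hours, consistent with the ~4 hours quoted above
```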

 electra_Basic_Deck_Plan

We start by noting that AE could have flown at least 9:15 − 7:42 = 1 hour and 33 minutes beyond the point at which the sun line was intercepted. What is really needed at this point is a way to resolve this with less speculation and more certainty. And we can. The Itasca radio logs that this author was able to collect all end after AE’s last transmission; we have no data on what happened at 9:15 and 9:45. But given the pattern we’ve seen, it is quite reasonable to expect that Itasca was transmitting over AE anyway; indeed it was likely (see below). The radio data therefore only helps pin down the nearest pass to Howland, and does little to tell us how long they were airborne after turning south. The signal strength data tells us that they were probably not much more than 100 miles from Howland when the strongest signal was received, about 7:42 a.m. And at this point we digress back to the issue of the failure of the USCG to properly communicate with AE. Earhart researchers like to create an aura of “complexity” about these matters that is, to put it bluntly, obfuscation by fiction. There is nothing complicated about the fact that the USCG did not do what they were told, and this was a major contributing factor in the deaths of AE and FN.

No, it really isn’t that complicated.

AE, in writing and before the flight, told the USCG what frequencies to use. She told them what schedule to use for transmission and reception. She told them what time zone to use. She told them what her backup landing site was (Gardner); she told them to use voice only (not Morse code); she told them to use high frequency direction finding, not low; and on and on. And the Commander in Chief of the Armed Forces ordered the USCG and USN to support her mission. In other words, to do what they were told. And we know that the USCG had this information and could act on it, because they did act on it in the first hours of the flight. Only in the critical later period of the flight did the USCG inexplicably decide to honor none of these requests.

For starters, let’s clarify when AE said she would be transmitting and receiving. She wrote that she would transmit at:

15 and 45 minutes after the hour [transmit]

And that she would listen, or receive, at:

00 and 30 minutes after the hour [receive]
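To make this schedule concrete, here is a minimal sketch that classifies a minute-of-hour against AE’s written schedule, useful for tagging the log entries that follow; the two-minute tolerance window is an assumption for illustration only:

```python
# A minimal sketch of AE's stated schedule, useful for tagging the log
# entries that follow. The transmit/listen minutes are from the article;
# the two-minute tolerance is an assumption for illustration only.

def schedule_role(minute: int, tolerance: int = 2) -> str:
    """Classify a minute-of-hour against AE's written radio schedule."""
    if any(abs(minute - m) <= tolerance for m in (15, 45)):
        return "AE transmitting"
    if any(abs(minute - m) <= tolerance for m in (0, 30, 60)):
        return "AE listening (ground station should transmit voice now)"
    return "off schedule"

for hhmm in ("7:00", "7:18", "7:43", "7:58", "8:15"):
    minute = int(hhmm.split(":")[1])
    print(hhmm, "->", schedule_role(minute))
```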

Not surprisingly, the matter of radio direction finding has also been obfuscated and distorted. It is important to make clear that two forms of radio direction finding were possible: 1.) radio operators on the ground could take bearings on a signal from AE, and 2.) AE could take bearings on a signal transmitted from the ground. The DF loop antenna on the aircraft was for taking bearings on signals emanating from the ground at 500 KCS. AE had sent a message indicating that she intended to transmit so that ground operators could take a bearing on her at high frequency; not the reverse. In other words, AE was not expecting to take bearings on a ground signal at high frequency (well above 500 KCS). Here again, the failures of the USCG were beyond regrettable and clearly negligent, as the request to transmit at 500 KCS so that AE could take bearings on them was totally ignored. They never transmitted on that frequency (not until 7:30 that morning, as the logs show); the logs show them listening on 500 KCS and never transmitting. In other words, the very homing signal at 500 KCS that AE was relying on to constantly and continuously provide navigational error correction never came. It was as if Itasca had packed up and sailed away. They might as well have, for one cannot take a bearing on something that does not transmit. When this is clarified, the question of USCG negligence takes on an entirely new dimension, and Commander Thompson’s panic was justified. AE’s ability to do high frequency radio direction finding applied only to her transmissions to ground operators, and was a backup. And as we noted, the USN botched that by not supplying sufficient shore power.

And radios in 1937 were not transceivers. They consisted of a separate transmitter and a separate receiver; two different boxes entirely. In those days it was normal practice for a radio operator to transmit, switch over to the receiver, and wait, say, thirty minutes for the response.

The full reasoning for this is lengthy, but given the disinformation peddled for 76 years it’s worth the digression (see http://ameliaearhartcontroversy.com/lady-be-good/ for a more complete explanation).

On her first around-the-world attempt, between Oakland and Honolulu, the generator went out several hours before she reached her destination. It was determined upon landing that the current limiter fuse had blown because the amperage being drawn on board the Electra was too high. The current limiter fuse is in the engine compartment, so it is impossible to repair in flight. The remedy was for her not to run the receiver while she was transmitting.

Today when we turn on a radio it comes on almost instantly, which wasn’t the case in 1937. The radios were all the old tube type and required several minutes to warm up before they could receive or transmit. That is why she set fixed times to transmit and receive: so the radios would be warmed up and operational when they were needed. It is hard to believe that the USCG couldn’t understand this basic principle, but that is how they acted.

Another urban legend is that AE didn’t “key long enough to get a bearing on her”, another slice of drivel that has been rolling on for 76 years now. On her first attempt AE had held the transmit key down for long periods so Pan American stations could get a “fix” on her and, as a result, had blown the current limiter. AE would thus have to wait until she was very close to Howland Island to transmit for long periods, because if she blew the current limiter fuse again she would have nothing but battery power to run on. That meant AE would lose any chance of direction finder assistance when she got close to Howland. Nor could AE lower the landing gear if the battery went dead while the generator wasn’t charging: if AE had located Howland Island without electrical power she would have been forced to land with the gear up.

And the reader is now only getting a glimmer of how bad the distortion of events surrounding this loss really is. It gets much, much worse.

So, what AE was doing was transmitting at the scheduled times she gave them (15 and 45 minutes after the hour). Then she would switch over to her receiver and wait for their answer exactly thirty minutes later. Oops. First of all, the military was not using her time zone as she had told them to, which did not by itself cause much confusion; the failure to maintain radio discipline in the radio room certainly did. For, after about 7 a.m., they stopped transmitting at the times AE was listening. That’s right: they transmitted nothing. The Itasca was even blasting over her transmissions with thousands of watts on her exact frequency on several occasions; and on every such occasion they were transmitting when they were not supposed to. These over-transmissions occurred because, radios being new then, the military had already transitioned to real-time communication: in many cases they were trying to respond to her transmissions immediately, not 30 minutes later. To complete the train wreck motif, they also decided to ignore her request to use voice only and always responded to her in Morse code. This makes even the early and late transmissions from Itasca effectively non-transmissions: neither she nor FN knew Morse code, and most of civilian aviation wasn’t using it anymore anyway. Oh yes, and only the USN had a high frequency direction finder. Only the Secretary of the Interior realized, by direct consultation with the First Lady, who knew AE, that the Coast Guard had no way to do direction finding as AE required but didn’t want the USN as “competition”. So the Secretary told the Navy to put a shore party on Howland to run a high frequency direction finder. What did they do? They didn’t bring a shore generator, which meant batteries. Which meant the batteries were burned up hours before AE was even close enough to do direction finding. So much for locating her from Howland.

intersection_Gardner_001
The first intercept, or putative “Howland Island”, based on an invalid celestial fix. See discussion that follows.

As we can see, there is no mystery or complexity behind why two-way radio communication couldn’t be established. But in the psychology of Earhart research, all blame must be channeled toward the pilot, for everything. Maybe people just find it hard to believe that the USCG could make such ridiculously negligent errors. But they obviously did, as their own logs prove and as John Riley already noted in 2000. Let’s take a tour:
GMT / Howland
18:30 7:00A.M. Itasca transmission: correct time
Up to this point, Itasca is on schedule and doing as AE asked in terms of the timing of transmissions. But the errors begin after 7 a.m. (did the radio room have a watch change at this time? Watches typically change at this hour).
18:48 7:18A.M. Itasca transmission: wrong time
19:00 7:30A.M. Itasca transmission: right time, but requested a response by Morse, as they were told not to do.
19:12 7:42A.M. Earhart transmission: wrong time, but real-time request
19:13 7:43A.M. Itasca transmission: wrong time, running over AE
19:17 7:47A.M. Itasca transmission: wrong time
19:19 7:49A.M. Itasca transmission: wrong time
19:28 7:58A.M. Earhart transmission: late for the quarter hour, but significant: her request for a real-time response was heard, evidence that Itasca was not doing as asked in all the other exchanges. See discussion in part two of fuel consumption.
19:29 7:59A.M. Itasca transmission: correct time and AE replies
This is a big clue. Earhart didn’t hear them because they weren’t transmitting when they were told to.
19:30 8:00A.M. Earhart transmission: wrong time but in response real-time to first two-way comm.
19:35 8:05A.M. Itasca transmission: wrong time
19:36 8:06A.M. Itasca transmission: wrong time
19:37 8:07A.M. Itasca transmission: wrong time
19:42 8:12 A.M. Itasca transmission: wrong time, and the time given within the message was also wrong
19:43 8:13 A.M. Itasca transmission: wrong time
19:45 8:15 A.M. Itasca transmission: wrong time
19:48 8:18 A.M. Itasca transmission: wrong time
Itasca log: “no answer.”
20:03 8:33 A.M. Itasca transmission: wrong time

This is a total USCG disaster.

Forget about the transmissions that were late or early. What is the real significance of this radio log? The real significance is not in what Itasca transmitted but in what they did not transmit: voice communication at the times Earhart was switched over to her receiver. Her schedule was not a mere convenience or convention; unless she had physically switched over to the receiver, anything transmitted outside her listening windows fell on a receiver that was not even on.

 Nikumaroro_Atoll

Fuel consumption Part Two

We mentioned earlier the lack of any definite information regarding fuel expenditure. But we should also note that the fuel tanks on the Electra used powered pumps to draw fuel to the engines, and these pumps could not fully empty the tanks. For that reason the Electra was equipped with at least one manual pump, and because the tanks were large, a significant amount of fuel unreachable by the powered pumps could be recovered by manual pumping. Thus, once again, we see how easily assumptions about fuel expenditure can be skewed when we don’t really know what we’re talking about. We now see why the radio won’t suffice as a constraint, and we are about to see why the strongest signals received don’t help much either. After getting all the way to the sun line, AE, frustrated by now, breaks her own convention for the first time and transmits almost on top of Itasca’s time (though she had to realize by then that Itasca wasn’t following their schedule anyway):

19:28 7:58A.M. Earhart:” WE ARE CIRCLING BUT CANNOT HEAR YOU GO AHEAD ON 7500 EITHER NOW OR ON THE SCHEDULE TIME ON HALF HOUR.” Itasca log:” Volume S-5.”

Notice the signal strength is S-5 (on a scale of 1 to 5, 5 being the strongest). This scale is somewhat subjective and likely exaggerated in a tense situation such as this, but it’s close. She transmits at some point just before 7:58 a.m. (or perhaps at 7:58 a.m.). This is not her scheduled transmit time of 15 and 45 after the hour; it is just short of her listening time of 00 and 30 after the hour. Now, notice what Itasca does:

19:29 7:59A.M. Itasca log:” AAAAAAAAAAAA (on 7500). Go ahead on 3105.”

This is the only time real-time communication actually occurred. But alack, they reply in … Morse code again. Now notice that Earhart, realizing this is still her listen time, has turned immediately to her receiver and then back to her transmitter … all in real time:

19:30 8:00A.M. Earhart:” KHAQQ CALLING ITASCA WE RECEIVED YOUR SIGNALS BUT UNABLE TO GET A MINIMUM PLEASE TAKE BEARING ON US AND ANSWER 3105 WITH VOICE.” [ AE sent long dashes on 3105 for five seconds or so. This was the only direct reply that Itasca received from KHAQQ. It was later believed that she turned away at this point.]

ditching_Location_001

Where the Electra ended up. See discussion that follows. The mathematical possibilities uniquely converge here.

Surely the pilot would know if the navigator had taken her off course, right? Not really. The reality is that the pilot trusts the navigator to guide them to within radio range of the destination, and whatever distance she was from Itasca, it was flown at the direction of the navigator. The Electra was equipped with motion gyros and an autopilot, described by none other than AE herself:

“… directional gyros, the Bendix direction finder and various radio equipment. In the center of the instrument board is the Sperry Gyro Pilot”.

This rules out the pilot’s failure to manually maintain a compass bearing as a possible contributing factor. The pilot would simply set the autopilot to whatever heading the navigator called for. We note, however, the curious maintenance log reported along with the Chater Report, which indicates something peculiar. While it would be conjectural to suppose a fault in this equipment, it is tantalizing nonetheless, as it would have a direct bearing on over-compensating southward during DF flight. The record stated that:

“Sperry Gyro Horizon (Lateral and fore and aft level) removed, cleaned, oiled and replaced, as this reported showing machine in right wing low position when actually horizontal [this author’s emphasis]”.

Did this actually fix the problem? We may never know. Did AE assume the instrument to be in error when it had actually been fixed and corrected? There are multiple ways of looking at this, but all of them are tantalizing if a large drift to the south is considered. Having said that, the celestial navigation errors are more than sufficient to misguide the aircraft, as we show in what follows.

But all this raises the question: what else did the pilot herself have to say about this case that others have simply ignored?

She also wrote that,

“Fred Noonan has been unable, because of radio difficulties, to set his chronometers. Any lack of knowledge of their fastness and slowness would defeat the accuracy of celestial navigation. Howland is such a small spot in the Pacific that every aid to locating it must be available [this author’s emphasis]. Fred and I have.. repacked the plane eliminating everything unessential. We have even discarded as much personal property as we can decently get along without. . all Fred has is a small tin case which he picked up in Africa. I notice it still rattled, so it cannot be packed very full.”

In other words, the rumor, and it is admittedly a rumor, that their delay at Lae was due to FN’s being intoxicated may simply have been covered by the parallel chronometer problem. But we also know that the radio problems preventing the chronometer adjustment were solved before they left, so that is not a factor. And at this very same time AE was writing that FN had been drinking again and that she didn’t know where he was getting it. Thus, rumors that he was unable to get on the plane do not necessarily refer to the attempted-takeoff film that shows him ably hopping aboard; and at least in that attempt he does appear to be relatively sober. Finally, the oft-repeated and increasingly nauseating claim that AE was negligent for removing the trailing antenna is nonsense. The trailing antenna had to be reeled out manually from the rear after takeoff and manually reeled back in on landing, and a switch in the aft compartment was required to send power to it from the cockpit. Approaching Howland, AE could not afford to rely on FN being in the aft compartment, nor could she afford to have to coordinate with him to operate the radio during such a busy radio period if he were back there. It was, as a practical matter, useless for Howland, and could just as easily have caused problems for AE sending and receiving on the other antennae.

And,

“My “flying laboratory” became equipped with.. a Sperry Gyro-Pilot, an automatic device which actually flies the ship unaided. There is a Bendix radio direction finder which points the way to any selected broadcasting station within its range. There is the finest two way voice and code Western Electric communication equipment… The plane is a two motor all metal monoplane, with retractable landing gear. Its normal cruising speed is 180 miles an hour and top speed in excess of 200. With the special gasoline tanks that have been installed in the fuselage, capable of carrying 1,150 gallons, it has a cruising radius in excess of 4,000 miles. With a full load the ship weighs about 15,000 pounds. It is powered with two Wasp “H” engines, developing 1100 horsepower.”

And if you think AE hadn’t had problems with alcoholics on other flights, read this account by George Putnam, published in Soaring Wings:

(AE) told me those days at Trepassey taxed her spirit more than any experience she’d ever faced… She considered asking us to replace Stultz with another pilot (because of his drinking.) .. (However) she knew Stultz could fly the Friendship as no one else could… She simply got hold of her pilot and all but dragged him to the plane. he wasn’t in good shape.. Long afterward she told me the first few minutes of the next hour seemed to her the most dangerous minutes of her life – certainly the most dangerous of the flight…. AE knelt in the cabin, or wedged between the gas tanks, anxiously studied (Stultz) and “those little spots of red in the center of his cheeks” which never seemed to pale…. In the cabin she found a bottle smuggled aboard. Her instinct was to cast it through the trap door in the bottom of the fuselage. But.. what (if Stultz) should come and get it? .. as it turned out he never wanted that bottle, and in the end AE dropped it silently into the Irish Sea.

Many of the details described in these passages deal proximately with inconvenient truths that many “experts” in the field have claimed to be “mysteries”. There is no mystery.

gardner_Atoll

As already discussed, much conjecture exists over the question of radio propagation, and some of the most accepted limits place an 80 nm restriction on how far the Electra was from the Itasca at the 7:42 and 8:44 transmission times. Of course, this is conjecture used to support the more restrictive Long hypothesis. We suspect that this number is, realistically, probably a bit larger. The difficulty in knowing it with precision is that there are too many variables to take into account, each with its own assumptions built in. At the end of the day there must be some limit on how far one can transmit with 50 source watts at a given frequency (which affects range) that is realistic in most conditions; but alack, that very statement also depends on many other factors. The biggest factor, the one we wish to point out here, is the one that everyone has ignored until now and one that makes any sophisticated radio analysis of this problem inconclusive.

This factor, which is just one of many, is based on the receiver’s capability. The Voyager spacecraft also transmits at 50 source watts. But it is also over one billion miles away … literally. This is why discussions of radio propagation and “expert” analysis of the same constitute ignorance and pathological science. We cannot know the exact atmospheric, electric, electromagnetic, moisture, Itasca amplifier, Itasca antenna configuration and other factors that determined the perceived signal strengths in the radio room of the Itasca. And those strengths are, after all, perceived. The only reason we can even reliably communicate with the Voyager spacecraft is that the 50 watts at which it transmits is mostly spent overcoming uncertainties in the Earth’s atmosphere: most of the propagation through empty space can be achieved with very low power, with great reliability and consistency. The problem is the atmosphere, and that is why Voyager’s transmission power is uprated to 50 W. In other words, there are times when the Voyager return signal is almost entirely attenuated by the atmosphere alone and barely gets through, and other times when it comes through orders of magnitude more strongly. You cannot make hard and fast predictions about radio waves in atmospheric conditions that were neither controlled nor recorded. Though Voyager relies on much larger and more sophisticated receivers (very much so), it illustrates why knowing exactly what kind of conditions existed does matter. Thus, claiming 80 nm as some absolute limit is absurd.

But we also know from the record that AE’s radios performed conspicuously better on previous flights than the Long’ers claim, and this should raise some suspicion; for example, radio signals were heard clearly and strongly at 200 nm. The likely reason, as with the navigation matter, is that “experts” delight in obfuscation. First of all, the altitude of transmission and reception matters, since it determines the direct wave propagation behavior of radio signals. The direct wave is the strongest component of a radio signal, and where there is a “line of sight” between transmitter and receiver, direct wave propagation holds sway. At only 1000 feet of altitude this distance is something like 40 nm, but as the aircraft climbs it changes dramatically. If we let ψ be the path length along the ground from a point directly below the plane to a land receiver, θ be the plane’s altitude, and φ the path length of the direct wave, then φ² = ψ² + θ². This is another reason why we dismiss radio “certainty” as conjecture: we cannot know the altitude of the plane at all stages of the flight. It is, however, reasonable to allow that the Electra did climb at some point during the radio communication frustration, and the loud S-5 signals may simply be an artifact of this change. We simply don’t know.
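A sketch of the line-of-sight geometry just described; the 1.23 × sqrt(height in feet) nautical-mile figure is the standard 4/3-earth radio-horizon approximation rather than a number from this article, and the 100 ft ship antenna height is likewise an assumption for illustration:

```python
import math

# A sketch of the direct-wave geometry discussed above. The
# 1.23 * sqrt(height_ft) nautical-mile figure is the standard 4/3-earth
# radio-horizon approximation, not a number from this article; the 100 ft
# ship antenna height is likewise an assumption for illustration.

def radio_horizon_nm(height_ft: float) -> float:
    return 1.23 * math.sqrt(height_ft)

def max_direct_wave_nm(tx_height_ft: float, rx_height_ft: float) -> float:
    # Direct-wave coverage is roughly the sum of the two horizons.
    return radio_horizon_nm(tx_height_ft) + radio_horizon_nm(rx_height_ft)

print(f"{radio_horizon_nm(1000):.0f} nm")         # ~39 nm at 1,000 ft, the 'something like 40 nm' above
print(f"{max_direct_wave_nm(1000, 100):.0f} nm")  # ~51 nm with a 100 ft ship antenna (assumed)
print(f"{radio_horizon_nm(10000):.0f} nm")        # ~123 nm if the Electra climbed to 10,000 ft
```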

Having said this, clearly there must be some limit to what can reasonably be expected? Of course there is, but it is likely to be more liberal than 80 nm would suggest. If only we knew the exact distance of the airplane at the earlier transmissions we might be able to fix the distance at 7:42 stochastically; unfortunately, that is part of what we are trying to answer. Thus it is entirely possible that, for whatever reason, on that day and under those particular conditions, a transmission at twice that distance with a similar perceived strength is conceivable. We don’t need to fix that value, only to note that such a modest change in assumptions means a full 160 nm could better characterize the reality. This is 184 statute miles. But there is another factor that might need adjusting.

It has been noted that a navigator would not likely fly directly at their target, as this would only reduce the odds of finding it. Rather, they would use an offset. For numerous reasons, a navigator attempting to thread Howland would likely offset to the north and fly south to intercept. We shall see that whether or not AE flew an offset is immaterial to the conclusion.

radio_Strength_Plots

While one can engage in much conjecture over radio signals in any given instance, the chart above shows the nominal range of values one can expect, with signal strength (on an “S” scale) plotted against nautical miles. But this is only a nominal range, and it is not entirely improbable to hear signals at S-5 from as far as 250 nm. Contrast this with TIGHAR’s analysis of a strong signal at 300 sm and you can see what we mean by “speculation”. The fact is, unless someone can reproduce those conditions exactly, we may never know for sure whether a plane within 300 sm of Howland could communicate reliably in this manner.

Denouement

The step to proving impairment beyond any reasonable doubt begins with the radio transmissions which, in the first case, indicated a distance of 200 miles from Howland, and in the latter, a distance of 100 miles from Howland. The two transmissions were separated in time by about 30 minutes. There have been considerable mental gymnastics performed around this fact, but we will take the transmissions for what they say.

The two transmissions received aboard Itasca were:

17:45 6:15A.M. Earhart: “ABOUT TWO HUNDRED MILES OUT. APPROXIMATELY. WHISTLING NOW.” Itasca log: “Volume S-3”

[note: Sunrise at Howland Island was recorded in the log at 6:15 A.M.]

Clearly, the sun would rise locally at the aircraft’s position between this point in time and just a few minutes later:

18:15 6:45A.M. Earhart:” PLEASE TAKE BEARING ON US AND REPORT IN HALF HOUR I WILL MAKE NOISE IN MICROPHONE- ABOUT 100 MILES OUT.” Itasca log indicated S-4

But this would indicate a ground speed of over 200 mph. Obviously this must be reconciled, as the distance covered appears to have been not 100 miles but closer to 55 miles. While the 200-mile figure may indeed have been approximately correct, that kind of approximation will not suffice for finding Howland, and the later “100 miles out” appears to be the anticipated navigational correction that smooths out the approximation. Thus, attributing this to an “estimate” does not explain away the problem. Under the circumstances, something must have changed to cause this change in estimate, for it would have been obvious to AE that these numbers did not add up. Curiously, the very celestial sights that could have been taken just before sunrise should have better fixed the longitude and prevented this error. What this lopsided dual reporting shows is that a significant correction occurred, and it was most likely due to the difference between estimates made with celestial aids and estimates made without. In other words, it suggests, though does not by itself prove, that the first estimate was based on a less accurate knowledge of longitude, while the second report reflects a considerably better knowledge of longitude.

To reconcile this difference under DR we would have to assume a path-length error of about 45 to 70 miles up to the 200 mile report. The only known fix achieved on that flight came only about 7 hours in:

07:20 5:20P.M. Earhart to Lae: “…POSITION LATITUDE: 4 DEGREES 33.5′ SOUTH, LONGITUDE: 159 DEGREES 07′ EAST.”

This is 4°33.5′ S, 159°07′ E.

Howland Island is located at 0°48.24′ N, 176°36.59′ W.

At the equator, 200 miles west of this is:

200 / 69.172 = 2.8913 deg

And 100 miles west is:

100 / 69.172 = 1.4456 deg

Therefore:

2.8913 deg => 3°29′13″; 3°29′13″ + 176°36′59″ = 180°06′12″

1.4456 deg => 1°44′56″; 1°44′56″ + 176°36′59″ = 178°08′59″

=> Point A = 200 miles west of Howland: 0°48.24′ N, 180°06′12″ W

=> Point B = 100 miles west of Howland: 0°48.24′ N, 178°08′59″ W

In other words, the 200-miles-out estimate given by AE was an estimate based on the fact that the navigator believed he was crossing the international date line at that time. We therefore know what they believed to be their exact longitude: 180 degrees. Thus the actual error between the “200 mile” report and the 100 mile report was, more precisely:

180°06′12″ W − 180° = 6′12″, so the “200 mile” report was actually made about 196 miles from Howland; 196 miles to 100 miles in thirty minutes gives an apparent ground speed of:

96 × 2 = 192 mph

while the actual ground speed was about 120 mph (see the first part of the fuel consumption section), so the distance actually covered was:

(120 / 192) × 96 = 60 sm

This is peculiar, since it is very close (about 59 nm) to the error one gets by failing to switch local dates correctly in the nautical almanac when crossing the international date line. Depending on when and how one fails to do this, it can produce an error either east or west of the intended target (this transit required two date switches for correct referencing in the almanac). If this were combined with a DR drift to the south generating a roughly equal but opposite offset, it could be extremely confusing to an impaired mind. However, we note that the error actually encountered appears to have been the reverse; namely, that the 200-miles-out report was based on a measurement of an LOP correct to within tolerance, while the 100 mile report represents the point at which the navigator abandoned the LOP in favor of exact celestial fixes made previously that night. We shall see why shortly, and we shall see that this constituted a third, sentinel error.
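Reproducing the reconciliation above with the article’s own figures (a believed distance of about 196 miles at the “200 mile” report, 100 miles thirty minutes later, and an actual ground speed of about 120 mph):

```python
# Reproducing the reconciliation above with the article's own numbers: a
# believed distance of about 196 miles at the "200 mile" report, 100 miles
# thirty minutes later, and an actual ground speed of about 120 mph.

believed_start_sm = 196
believed_end_sm = 100
elapsed_hr = 0.5

apparent_speed = (believed_start_sm - believed_end_sm) / elapsed_hr
print(f"Apparent ground speed: {apparent_speed:.0f} mph")      # ~192 mph

actual_speed = 120
covered = (actual_speed / apparent_speed) * (believed_start_sm - believed_end_sm)
print(f"Distance actually covered: {covered:.0f} sm")          # ~60 sm
```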

Liz Smith posted her own theory on the internet regarding a date line error, called the “Date Line Theory”, which first clued this author in to such a possibility. The problems with her theory, in this author’s estimation, were that she mistakenly reversed the resulting error, did not recognize the latitude error, and proposed that the date error occurred later, by failing to revert the date back to 02 July, 1937, after FN had already correctly advanced it to 03 July, 1937 at local midnight. Consequently, Smith’s calculations as to the position of the Electra were off by a few hundred miles.

To justify this claim about a dating error, we need to return to the navigational concept of a local midnight, which occurred about 5 hours before the “200 mile” report and about 5 hours flight time before the international date line. At local midnight, FN did not advance his date, as he must for the almanac entries to make sense. In other words, FN did not forget to set his date back to July 2 after crossing the international date line; he forgot to set his date to July 3 at local midnight. And FN also failed to notice the obvious southern deviation in the course heading he subsequently ordered for the pilot based on these fixes.

Measuring the position of a star and applying a referent from exactly one day in the future biases the result to create the appearance that the observer is about 360/365.25 deg (59.14 nm) further west than they truly are. Conversely, measuring the position of a star and applying a referent from exactly one day in the past biases the result to create the appearance that the observer is about 360/365.25 deg (59.14 nm) further east than they truly are. 59.14 nm = 68 sm.
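The one-day almanac bias can be checked directly; a minimal sketch, taking 1 degree of equatorial longitude as 60 nm and 1 nm as about 1.15 sm:

```python
# The one-day almanac bias stated above: the celestial referents shift by
# about 360/365.25 degrees per day, so consulting the wrong day's page
# displaces a fix by that much in longitude (1 degree = 60 nm at the
# equator, and 1 nm = about 1.15 sm).

daily_shift_deg = 360 / 365.25
bias_nm = daily_shift_deg * 60
print(f"{daily_shift_deg:.4f} deg/day -> {bias_nm:.2f} nm")  # 0.9856 deg -> 59.14 nm
print(f"{bias_nm * 1.15078:.0f} sm")                         # ~68 sm, as quoted above
```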

In most cases (to be explained), if FN were to obtain a fix from two or more celestial observations by applying a referent from exactly one day in the future, the result would be biased to make the crew appear south of where they truly were. Conversely, and in most cases (to be explained), a referent from exactly one day in the past would bias the result to make the crew appear north of where they truly were.

If you ask most navigators, they might not realize this, because they know that the declination of a star doesn’t change appreciably in one day. What that objection overlooks is that the equatorial plane, at any given time of year except at the spring and fall equinoxes, is tilted with respect to the orbital plane. This means that errors generated by referencing the wrong day create the same type of error in latitude as they do in longitude, just typically not as large (it goes by the sine of the seasonal tilt). The fact that this detail is commonly overlooked in the profession is itself a clue that FN might not have been aware of a southern excursion resulting from the longitude error, something that would tend to mitigate against the conclusion that he was impaired. Unfortunately, on July 2, 1937 the tilt was nearly maximal, about 23 degrees, producing the maximal latitudinal effect. At 23 degrees, the stars in the background move as the Earth orbits the sun not parallel to the spin of Earth but offset 23 degrees from that spin. Thus the error manifests not only in longitude; it manifests in latitude as the sine of that angle. We will examine this relationship later.

One objection this author has heard is that the local dates wouldn’t have been used, and only GMT dates are relevant.

Here we find yet another area where even modern-day navigators can be tricked, and the confusion this has caused in Earhart research is unbelievable. The international date line is notorious for doing this. But to explain it, we need to explain what made this flight different from 99% of the Pacific flights that had occurred up to that time, and from most that occur today:

1.)   The Lae to Howland leg of the World Flight involved a transit from Lae to Howland all within one GMT day

2.)   The Lae to Howland leg of the World Flight involved a transit that occurred overnight, meaning that a local midnight was passed

3.)   The Lae to Howland leg of the World Flight involved a transit that crossed the international date line

4.)   The Lae to Howland leg of the World Flight involved a transit that was west to east.

To explain how misleading all this can be, we start by imagining an aircraft on this flight somewhere between the point of local midnight and the international date line. When the plane left the airport west of the point of local midnight it was July 2, 1937. So, after a few hours of flight, the plane is east of local midnight and west of the international date line. Now, imagine you live on an imaginary island directly below the aircraft, and you decide on this early morning to get up, go outside with your sextant, and measure your location. When you open your almanac, to what page must you turn? Obviously, you must consult referents from July 3, 1937, because that is the date about which you are inquiring. What other date would you use? Why? If the answer is that our islander should revert to a GMT date, then reverting to 1 July, 1937, if it fits the data we are about to show, cannot be discounted either. And notice: you are west of the international date line, so there is no need to revert back to 2 July, 1937. The correct date, for you, is July 3, 1937. This is why the matter is so confusing to navigators, for at first blush it would seem that all one needs to do is refer to GMT dates, but that is not the case. Local dates do matter and have a subtle impact. Therefore, during the period you are within the region bounded by the local midnight and the international date line, using any date other than July 3, 1937 will yield invalid celestial fixes. And this is now obvious.

FN’s deliberate alignment of his takeoff time to 00:00 GMT on 2 July, 1937 for the entire trip is a smoking gun: it shows that he did in fact employ the July 2, 1937 referent throughout, and it supports the 200-to-100-mile report analysis.

In the case of this flight, the period between local midnight and the international date line contained the error of referencing almanac data exactly one day in the past. Thus, FN was guiding AE too far south. What we mean by “in most cases” is that, because they were in the southern hemisphere in winter, most of the stars they could see would be in the southern sky, and this effect would become more pronounced after local midnight. Unless we know exactly which stars were shot we cannot know the exact latitudinal error, but a latitudinal error was introduced placing the aircraft south of where it should have been (we can make estimates sufficient for our purposes here, but geodesic mathematics and actual identification of the celestial bodies used would be required to compute at or below instrument tolerance). In other words, by not advancing his local date by one day, FN guided the aircraft as though it were further east than it actually was (the 100 mile report), directing it to the south of Howland Island. The effect was catastrophic.

To be clear about what this evidence shows: first, here is what should have happened:

The date of July 2, 1937 at takeoff should have been advanced at local midnight to July 3, 1937. This occurred before reaching the international date line. Once the international date line was crossed, the date should have reverted back to 02 July, 1937.

What actually happened:

The date began as 02 July, 1937, but at local midnight it was not advanced to 03 July, 1937 as it should have been. Upon crossing the international date line the date was neither advanced nor regressed, and it remained at 02 July, 1937, which was by then the correct date. The errors generated between local midnight and the international date line persisted, however, as invalid prior fixes taken during the night.
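A sketch of this date bookkeeping, contrasting the correct sequence with the sequence this article reconstructs; the segment names are illustrative only:

```python
# A sketch of the almanac-date bookkeeping described above, contrasting the
# correct sequence with the sequence this article reconstructs. Segment
# names are illustrative only.

CORRECT = {
    "takeoff to local midnight":   "02 July 1937",
    "local midnight to date line": "03 July 1937",  # advance at local midnight
    "date line to Howland":        "02 July 1937",  # revert at the date line
}

USED = {seg: "02 July 1937" for seg in CORRECT}      # per this reconstruction, never advanced

for seg in CORRECT:
    status = "OK" if CORRECT[seg] == USED[seg] else "INVALID FIXES"
    print(f"{seg:30s} correct={CORRECT[seg]}  used={USED[seg]}  {status}")
# Only the middle segment disagrees, which is exactly where the invalid
# celestial fixes were taken.
```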

When the sun line was established, the error in longitude was “cured”, but the latitude was not. It is important to understand that when we say “cured” we mean cured as far as we are concerned: there was no such cure for FN, as he had no way to know which measurements were correct (unless he went back and found his errors). Thus, when they reached their “we must be on you but cannot see you” report, they were south-southwest of Howland. At this point, and even at the 100 mile report, any navigator could easily and rightly have been confused, faced with an incongruity of data, in this case inexplicably ignored, until it failed him and Howland did not appear. But the situation is worse still. Recall that by failing to advance his date to July 3 after local midnight, FN had incorrectly guided the plane far to the south of where it should have been. His failure to notice this required course change after local midnight is an indication of an anomalous error, for it should have seemed a conspicuous event indeed. In any case, when he reached the 200 mile report he was likely basing it on a single LOP of the moon or another object, and it was incongruent with his previous celestial fixes. He was thus faced with an impossible choice. Just as with a sun line, this would “cure” any error in longitude caused by using the wrong date, but it did not cure latitude. And because he had already changed latitude by the time he reached his putative “Howland”, he had effectively already decided, by the flight path taken, to discard any range of likely latitude values on that line that could have been derived from DR drift error. In other words, FN was left with multiple possible areas of uncertainty and, unless he could find his error, would have no choice but to search all of them, each on the same mean latitude (the one to which he was already committed) but on multiple, distinct mean longitudes.

This constitutes “kludging” a latitude error margin from a celestial fix onto a sun line, which is an unrelated construct. But the bias for trusting longitude over latitude was simple: the margin of error on longitude was far narrower than the margin of error on latitude. The conclusion is ineluctable, and the discovery of it is a testament to just how clever this navigator was; it was the right answer, but we have to assume a clever navigator if this occurred. The pertinent question now is: is there any way to discern the latitude on which they were flying? Yes, there is. Recall that by celestial fixes he would end up 59.14 nm short of the Howland longitude and believe it to be the Howland longitude. On the other hand, if he adopted the Howland sun line he would have to intercept it at the point where it intersected his latitude to the south. We can now see the significance of the 100-miles-out report: this was no ordinary point, but a deliberate point of measure for FN, who was about to pull a clever stunt. If the sun line longitude were correct, FN knew that he was 159.14 nm out from the Howland longitude. If the longitude of the celestial fix were correct, he knew he was 100 nm from the Howland longitude. Therefore, the first area of uncertainty to search would be 59.14 nm west of the true Howland longitude. And because this was derived from a celestial observation, the tolerance would be about 10% of the measured value: 200 − 59.14 nm, or about 140 miles, times 0.10, yielding a margin of 14 miles (and because the line is advanced, an additional 200 nm × 10%, or 20 nm, is required). He therefore needed a 14 mile grid search at this point, with an additional “advanced” DR margin. But if the sun line longitude were the correct longitude, he would need to establish an area of uncertainty not 59.14 nm west of the Howland longitude, but on the actual Howland longitude. If he performed iterative searches every 59.14 nm beyond that to the east, as we will momentarily explain, he would eventually reach a point 2 × 59.14 nm east of the actual Howland longitude. A simple trigonometric relation yields the following approximate position for this area of uncertainty:

-4.68,-174.28 [provisional]

This is about 15 miles due east of Gardner Island. We will refine this value in what follows. In order to intercept a line parallel to the advanced, instantaneous sun line at Howland, this is how far he would have had to travel east to reach it. Apparently, the erroneous celestial fixes of the previous evening had directed the Electra far to the south of Howland and FN never realized it at the time; FN apparently believed he was flying to Howland. FN would have been wise to perform two grid searches, the first turning up nothing and adding about 28 miles to their fuel burn. They could then have proceeded east for about 59.14 nm and performed another grid search. He could continue this iteratively until he found by how many days his dating error was off, if for whatever reason (such as the inability to remember how many days off his calculation had been the night before) he was not able to figure that out directly. At the final iteration, before completing it, their fuel would have been exhausted. Of course, limited fuel permitted iterations in only one direction, but in this case they could be done to the east. This should not have worried FN, since all iterations to the west, if he were on the correct latitude, would already have been overflown. Thus, evidence of iterative searching is evidence of prior impairment. And the search itself had nothing to do with Gardner: they were looking for Howland. Fuel exhaustion occurred at approximately 9:45 that morning. After 8:45 a.m., at the distance they were flying, it is unlikely Itasca could have heard them reliably, because the distance between the Electra and the Itasca was then growing at its most rapid rate (the latitude they were flying was parallel to the Itasca’s latitude and away from the ship).

To understand why this is not a simple conjecture based on the two position reports of 200 and 100 miles out, we need to look at this from FN’s point of view; or, stated more accurately, in light of what we know today that he did not. As far as FN knew, he was already flying on the correct latitude to Howland. The only question in his mind was: which longitude was it? If he chose to consider the possibility, and we have no way of knowing whether he did, that the first area of uncertainty was wrong (the advanced, celestially fixed area of uncertainty), then he had no idea what his latitude was and he was hopelessly lost. He could only hope that his celestially fixed latitude was correct. Even if he tried to reconstruct a DR error margin for latitude on the sun line, he was hopelessly too far south or north (he didn’t know which) to do anything about it. And even reconstructing the DR estimate would depend on his memory of events during the celestial fixes; particularly, on recalling or making sense of the large southern excursion he had directed for the pilot hours earlier.

To further eliminate any possibility of conjecture or error, we can finally test our result against the southern deflection inhered in the transit by the tilt of Earth with respect to its orbit. We had previously stated that local midnight occurred around 700 miles out of Howland, and this is where we expect the error was introduced. At this time, FN would have erroneously applied the date of 2 July, 1937 instead of 3 July, 1937. Taking the angle as 23.5 degrees, since this was close to the winter solstice, we get:

700 × sin(23.5°) = 279.12 nm

This is exactly what we postulated based on FN’s 100 mile report; that is, we postulated that FN would find himself at -4.68, -174.28 at the end of his grid searching (crudely estimated, this adds about 55 nm to the transit length versus the correct path to Howland). In fact, we can now generate a more exact figure and compare it to the triangle just discussed:

0°48.24′ N = 48.24/60 = 0.8040 deg; (0.8040 + 3.9157) × 59.14 = 279.12 nm

We have the latitude AE and FN so desperately needed, and it is decimal 3.9157 South.
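The displacement can be checked two independent ways, using only numbers stated above:

```python
import math

# Checking the southern displacement two independent ways, using only
# numbers stated above: (1) the ~700 nm leg from local midnight deflected
# by the ~23.5 degree solstice tilt, and (2) the latitude gap between
# Howland (0 deg 48.24 min N) and the derived 3.9157 deg S line at the
# article's 59.14 nm per degree.

deflection_nm = 700 * math.sin(math.radians(23.5))
print(f"Deflection: {deflection_nm:.2f} nm")        # ~279.1 nm

howland_lat_deg = 48.24 / 60                         # 0.8040 deg N
derived_lat_deg = 3.9157                             # deg S
gap_nm = (howland_lat_deg + derived_lat_deg) * 59.14
print(f"Latitude gap: {gap_nm:.2f} nm")              # ~279.1 nm, the same figure
```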

Because it was (virtually) the winter solstice, the sun line 157-337 takes an angle of 23 degrees from the meridian. In the very early morning hours of July 5, 1937 a radio message was received at Wailupe, HI, an advanced USN radio listening post; reportedly spread over a period of one hour, coded in poor Morse, broken and barely readable:

TWO EIGHT ONE NORTH HOWLAND CALL KHAQQ BEYOND NORTH DON’T HOLD WITH US MUCH LONGER ABOVE WATER SHUT OFF

Their distance from Howland Island at the intercept, as well as at a potential ditching, was right at this position or very nearby; that is, 279.12 nm south of Howland. Unfortunately, the particulars of how this message was received are apparently lost to history, so we do not know how this one message was spread out over one hour, where sentences begin and end, or whether words and phrases are missing. Tellingly, a few other messages such as this one were direction-sourced to, not surprisingly, this same stretch of water just north of Gardner Island. Was the attempt to code the phrase (as a broken transmission),

“… it is 281 nm north to Howland …” or

“ … we are 281 nm north of Howland”?

It had been reported that the Morse appeared to have been prepared before transmission, the same manner in which AE and FN were reported to use Morse given their limited skill. Could they have believed they were north? We may never know, and the issue of post-loss transmissions should be re-investigated in light of these findings. In particular, the United States Navy should be tasked to provide any and all available archival information regarding the “281” message.

Letting ψ be the sun line path length from Howland Island to the intercept, θ be the “100 miles out” of 159.14 nm, and φ be the distance from the putative “Howland” to the real Howland, we get:

ψ² = φ² + θ²

ψ = (279.12² + 159.14²)^(1/2) = 321.3 nm, and φ is recovered using the angle α between ψ and φ:

φ = ψ · cos(α)

With α = 23 degrees, we get 295 nm (vice 279.12 nm), well within the tolerance of the method and of the planar coordinates used to approximate a small spherical region. It is important to take stock of what has just happened here. We have just unequivocally proven that the error hypothesized would place the Electra at these coordinates; that is,

decimal 3.9157 S, decimal 177.6098 W     [“we must be on you but cannot see you”]

And, for the reasons given, we can be almost certain now that this error in fact occurred.

The given coordinate represents the point of intercept; that is, the point where AE transmitted the famous message, “we must be on you but cannot see you”. At this place a small grid search over a 14 by 14 nm area, perhaps larger to account for DR error, was conducted with an obviously negative result, and the crew then flew east in multiples (to be explained) of 59.14 nm to begin their second grid search.

We can check this value by performing the trigonometric calculation:

((279.14 / cos(23°))² − 279.14²)^(1/2) = 118.49 nm ≈ 2 × 59.14 nm; that is, a multiple with x = 2. In other words, the margin of error in longitude east of the actual Howland longitude was 118 nm. To the west that margin is 59.14 + 100 nm, or 159.14 nm. Combined, that is 118 + 159.14 = 277.14 nm, or about 279.12 nm. To visualize how this geometry was necessitated by what FN presumably knew and what he could not have known, note that given two incongruent longitude means, the resulting error margin is 59.14 nm west and x × 59.14 nm east of the actual Howland longitude (which, for FN, was the Howland advanced sun line longitude). Where did x come from? If FN found that neither the celestial nor the sun line longitude appeared to be correct after searching those areas (one meridian west of the real Howland longitude and one on it), then the only possible solution was that the dating error was more than one day off. The problem for FN, however, is that he could not afford to fly in the other direction, as fuel was limited. He could only hope that his dating error was a multiple of one day, placing his target further and further east. What this irrefragably demonstrates is that FN realized he had made a dating error but could not remember, or could not resolve, exactly what the error was. This fact points uniquely to impairment, and the only option remaining was to search areas of “uncertainty” at successive intervals of 59.14 nm eastward. Upon the second iteration, that is, x = 2, the fuel would necessarily be exhausted. Additionally, if he was not aware of the southern excursion, because he did not appreciate that a dating error would also incur a latitude error, he would have been supremely confident of his latitude, rejected his sun line longitude, and proceeded eastward.
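The right-triangle bookkeeping can be verified numerically with the article’s figures:

```python
import math

# Verifying the right-triangle bookkeeping above with the article's
# numbers: phi = 279.14 nm south of Howland, alpha = 23 degrees between
# the 157-337 sun line and the meridian.

phi = 279.14
alpha = math.radians(23)

psi = phi / math.cos(alpha)                    # length along the sun line
east_margin = math.sqrt(psi**2 - phi**2)
print(f"psi = {psi:.1f} nm")                   # ~303.3 nm
print(f"east margin = {east_margin:.1f} nm")   # ~118.5 nm
print(f"x = {east_margin / 59.14:.2f}")        # ~2, i.e. two day-error intervals

west_margin = 59.14 + 100
print(f"total spread = {east_margin + west_margin:.1f} nm")   # ~277.6 nm, near 279.12
```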

What this shows is that it is highly improbable that the 200-to-100-mile discrepancy arose from any culprit other than the date error described, and that this error almost certainly generated the flight paths described.

This flight path yields an area of uncertainty with x = 2 at:

decimal 3.9157 S, decimal 173.91 W                 

[centroid of fourth area of uncertainty and area of ditching]

Notice that when we speak of a “fourth” area of uncertainty, we mean x + 2 = 4, because x counts the areas of uncertainty east of the Howland longitude, and there are only 2 west of those. How do we know the value of x, and consequently that the navigator was impaired? Fuel consumption. Any value a further multiple of 59.14 nm away from the maximum-probability fuel burn grows less likely in that proportion (the available fuel lets us see FN’s choices, which in turn demonstrate confusion over what dates were used). The time of ditching can be estimated by noting that the southern excursion added about 55 nm to the transit vice the nominal Howland route. Additionally, flying further east added another 59.14 + 118.5 nm. We can add 28 nm for each area of uncertainty, giving a total additional mileage since the “we must be on you but cannot see you” report at 7:42 of:

59.14 + 118.5 + 28 + 28 ≈ 233 nm

At a ground speed of 130 mph, this yields a flight time of:

1.797 hours. Adding at least 15 minutes or so for search grids gives roughly 2.0 to 2.1 hours; that is, 7:42 + ~2.1 hours => about 9:49 a.m. if the second area of uncertainty was fully searched. If not, we can estimate a ditching time of around 9:30 to 9:45 a.m.
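A quick replay of that timing (a sketch; the figures are the ones in the text, and the quarter-hour search allowance is the text's own rough estimate):

    # Time aloft after the 7:42 a.m. "must be on you" report, per the text.
    extra_nm = 59.14 + 118.5 + 28 + 28             # post-report mileage from the text
    cruise_h = extra_nm / 130                      # ~1.797 h at 130 mph ground speed
    total_h = cruise_h + 15 / 60                   # plus ~15 minutes for the grid searches
    print(round(cruise_h, 3), round(total_h, 2))   # 1.797, ~2.05 h; 7:42 + ~2.1 h ~ 9:45-9:49 a.m.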

This suggests that the Fuel Analyzer was working properly and that the Electra was burning fuel, on average, at about:

59.14 + 118.5 + 28 + 28 + 55 = 288.64 nm

=> 2556 + 288.64 = 2844.64 nm full burn

=>  2844.64 / 1080 = 2.63 mpg

=> 1080 / (19:12 + 2:07) = 1080 / 21:19 = 1080 / 21.3166 hours = 50.66 gph
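Here is the same bookkeeping in a few lines (a sketch using only the figures quoted above: 130 mph ground speed, 1080 gallons of fuel, the 2556 nm nominal route, and take-off at 00:00 GMT):

    # Average fuel burn implied by the reconstructed track.
    extra_nm = 59.14 + 118.5 + 28 + 28 + 55      # all excursions beyond the nominal route
    full_burn_nm = 2556 + extra_nm               # ~2844.64 nm flown on the full fuel load
    print(full_burn_nm / 1080)                   # ~2.63 nm per gallon

    elapsed_h = (19 + 12 / 60) + (2 + 7 / 60)    # 19:12 at the last report + ~2:07 more aloft
    print(1080 / elapsed_h)                      # ~50.66 gallons per hour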

We could conjecture any number of theories as to what FN was thinking as he tried to rationalize the crazy situation in which he found himself. One further observation: though he had no choice but to accept his flown latitude as gospel, he could also have reasoned that this was acceptable because the only contradictory measure he got was in longitude. After all, the latitude was based on a celestial fix, which should have been the most reliable figure. Either way, he was bound to the latitude, whatever it was, and even if he realized a dating error of some kind, he apparently did not take the latitude change into account.

What is fascinating about this discovery is that the high fuel expenditures en route that Long needed to explain may actually have been due to the southern excursion after local midnight, which could explain why they had reached the vicinity of Gardner when they transmitted "on you but cannot see you". In other words, our initial suspicion that this was special pleading appears to be correct, and the Fuel Analyzer probably worked. Had it not, it would have failed only after the point of no return anyway.

The following chart shows the inescapable geometry that the 200 to 100 mile error imposes:

the_Invalid_Navigation_002

Not drawn to scale; the angle between H and J is exaggerated for illustration

A – B = E – F = 59.14 nautical miles

Between A and C the correct local date was 03 July, 1937

All fixes taken between A and C assumed a date of 02 July, 1937

A is local midnight

B is the invalid local midnight

C is the international date line

D is 100 miles from Howland’s longitude, correctly measured by a single object observation

E is 59.14 nautical miles west of Howland’s longitude

G is about 2*59.14 = 118 nautical miles east of Howland’s longitude

H is the flight path implied by an “uncorrected”, invalid celestial fix

I is the intended flight path

J is the actual flight path, inhered as a correction to H

K is 279.14 nautical miles south of Howland

A Calamitous Trail of Errors

Up to this point we’ve noted two good reasons why a navigator might employ more than one area of uncertainty in a search strategy. But given the unorthodox nature of such an approach, is there another, more compelling reason to suggest that a multiple area of uncertainty strategy would have been employed? To answer this, we summarize the first two reasons:

1.)  If FN were confident of his latitude, as his faulty celestial "fix" would have implied to him, then the only explanation for a negative search result would be that the longitude was wrong. And FN quite likely suspected a dating error; therefore, it is not unreasonable to expect that he would try different longitudes based on the different dates he might have used the night before.

2.)  Since FN had already flown out his latitude, he really had little else he could do: he didn't have the fuel to search along a latitude and, furthermore, couldn't search to the west, again due to fuel limitations. Thus a progression eastward through iterative areas of uncertainty, in increments of about 59.14 nm, was the only choice he really had, other than giving up entirely.

But there is a third, more compelling reason why FN might choose to press eastward upon negative results in previous iterations of searches. If a date error can be recognized as a possible culprit, then so, too, can the fact that it more likely than not was either one day advanced or one day regressed. From FN's perspective, therefore, the initial intercept position they reached (when they transmitted "we must be on you but cannot see you") represents the "left chiral" aspect of a dating error; that is, it represents what happens if FN had failed to advance his date properly. An error falling into the "right chiral" class manifests if he inadvertently regressed his date: a one-day regression implies an area of uncertainty 59.14 nm east of the known Howland longitude. Thus the axis of chirality in the errors entertained is the Howland longitude itself, which is also a potential area of uncertainty that would presumably be searched. By searching all three areas of uncertainty, FN, from his perspective, has covered all possible error scenarios that involve a date shift of only one day.

And it all hinges on his invalid assumption that he was actually flying on the correct, true Howland latitude, which he was not, and which he did not realize because the subtlety of the error he enjoined was such that he was "tricked" by a latitudinal deflection he could never have imagined would present. No one put a warning in the almanacs to tell navigators that if you make this one obscure error, you will also get an error in latitude that would not be obvious from all your navigational experience hitherto.
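To make the progression concrete, here is a minimal sketch of that search logic (my own illustration, not anything from the flight record; it assumes only the 59.14 nm-per-day step used throughout):

    # Candidate search longitudes, as nm offsets from the (advanced sun line)
    # Howland longitude, for whole-day dating errors.
    NM_PER_DAY = (360 / 365.25) * 60           # ~59.14 nm per day of date error

    def search_centers(max_days_east=2):
        """One step west covers a date advanced too far (the 'left chiral' case);
        the Howland longitude itself and successive steps east cover regressed
        dates. Fuel rules out searching any further west."""
        west = [-NM_PER_DAY]                   # searched first, at the intercept
        east = [k * NM_PER_DAY for k in range(0, max_days_east + 1)]
        return west + east

    print([round(c, 2) for c in search_centers()])   # [-59.14, 0.0, 59.14, 118.28]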

Thus, when AE completed the search at the third area of uncertainty east of the Howland longitude, it isn’t hard to imagine terror setting in. For at that point it would be clear that the odds of finding Howland had just dropped sharply. This third search would have completed just before the 8:44 a.m. transmission and is consistent with reports that the transmission sounded more stressed and labored than any before it. Now we know why. Moreover, it further amplifies the suspicion that the navigator had in fact been impaired the night before. For if FN was searching iteratively through left and right chiral error modes he was truly confused, even though we can see it was the only logical choice by the time they reached that point. He apparently had no idea what dates he had used.

What we see in this accident is a cascade of multiple errors that led to the loss. The first error: FN used the wrong date after local midnight by referencing almanac values for July 2, 1937. Such an error could be written off as a rare but possible mistake (FN had flown international date line flights many times). However, two more critical errors were made. The second was the deflection of the aircraft to the south, an overt, easily recognizable change that the navigator apparently missed and that did not alert him to a mistake; this could be written off as an even more unlikely but still possible error. The third: LOPs later taken from celestial bodies, the moon (possibly) and then the sun, indicated that a prior error had been made, yet this new information failed to alert the navigator that something must be wrong with the previous measurements and that the incorrect date and course deviation were the cause. This third error indicates something sentinel and salient: the failure to recognize the earlier errors at this point could only be due to a lack of memory of prior events, as the new LOPs were a blatant alarm telling the navigator that his prior measurements were in error. Had memory served, finding the error would have been trivial for a navigator of his ability. That he did not is clearly and convincingly a sign of memory loss for the period from local midnight to the international date line. All three errors taken together, then, are clearly and convincingly due to impairment of the navigator. And this is all predicated merely on taking the AE position reports for what they are, with no special pleading or tea-leaf reading. Finally, the error of missing the declination trap would not likely have been due to impairment alone, as this trap is apparently not common knowledge amongst navigators in the first place.

Conclusion

My conclusion is that the Commander in Chief of the Armed Forces ordered the USCG to assist Earhart in this flight; that FN was a competent and able navigator; that AE was a high risk but competent and able pilot who had been awarded the Distinguished Flying Cross; that the Lockheed 10E was a well-designed aircraft for its intended use; that the aircraft in question functioned nominally; that this case does not involve malfeasance; and that this case does not involve any wartime activity or any hostile actions by a foreign government.

This accident was most likely due to:

  1. Navigator's failure to consult almanac referents for 03 July (local time) between approximately 700 nm west of the Howland longitude and the international date line. The navigator likely consulted almanac values for 02 July (local time) for the entire flight. It does not matter how they were consulted: from his notes, before the flight, during the flight, or directly from an almanac.
  2. Navigator's failure to notice the conspicuous but anomalous southern excursion inhered into the flight plan.
  3. Navigator's failure to recognize that the seasonal tilt of Earth (even if practicably assumed constant), combined with the orbital motion of Earth and the use of an incorrect date, will create a significant error in the declination measure of stars.
  4. Navigator’s failure to notice the first two errors when lines of position taken later clearly exposed them.
  5. USCG failure to follow radio instructions and consequent failure to establish two-way communication. Inexplicably, these errors began only at about 5:30 a.m. Howland time and continued until the loss. That is, the USCG had been following those very same instructions prior to that point, for a period of about 16 hours.

Disposition of crew

The Lockheed 10E, as modified for Earhart, contained numerous fuel cells. Based on a careful examination of the debris field, the crew most likely survived the ditching and managed to act to slow or retard the taking on of water within the fuel cells. This was apparently sufficient to maintain positive buoyancy for at least 16 hours. There is evidence of deck flooding in the fuselage and blood in the water. The crew probably lacked sufficient incentive to exit the aircraft; additionally, flooding control may have required constant attention inside the aircraft. There is inconclusive evidence of the inhalation of nitrogen by both crew members, possibly done in an effort to avoid the effects of drowning. They probably survived less than 48 hours after ditching. In any case, both most likely expired prior to reaching the reef.

40a (11)

The dog and pony show shown to the public, released only after significant pressure was placed on TIGHAR to release information. The object sits on a cliff above the main debris field and appears to be a wing, engine and propeller lying flat on the surface. Notice the characteristic propeller bending often seen in water landings.

Appendix A

Radio logs

The first time listed is GMT, which is also time elapsed in the flight (since the Electra took off at exactly 00:00, GMT equals elapsed time). The second time listed is local time: Lae time for the Lae entries and, from the Itasca entries onward, local Itasca (Howland) time.

00:00 10:00A.M. Take-off from Lae

05:00 3:00P.M. Earhart to Lae: "AT 10,000 FT. BUT REDUCING ALTITUDE BECAUSE OF BANKS OF CUMULUS CLOUD."

07:00 5:00P.M. Earhart to Lae: "AT 7,000 FT. AND MAKING 150 MPH."

07:20 5:20P.M. Earhart to Lae: "…POSITION LATITUDE: 4 DEGREES 33.5′ SOUTH, LONGITUDE: 159 DEGREES 07′ EAST. EVERYTHING OKAY."

08:00 6:00P.M. Earhart to Lae: "ON COURSE FOR HOWLAND ISLAND AT 12,000 FT." [Received by Harry Balfour]

10:00 8:00P.M. Earhart overheard by Nauru radio: "…A SHIP IN SIGHT AHEAD…"

12:42 1:12A.M. Itasca to San Francisco: "HAVE NOT HEARD EARHART SIGNALS UP TO THIS TIME BUT SEE NO CAUSE FOR CONCERN AS PLANE IS STILL 1000 MILES AWAY…"

13:45 2:15A.M. Itasca log: "NOTHING ON 3105."

14:00 2:30A.M. Itasca log: "ITASCA TO EARHART ON PHONE 3105."

14:15 2:45A.M. Itasca log: "HEARD EARHART PLANE ON 3105 BUT UNREADABLE THROUGH STATIC." Leo Bellarts (RO) reported hearing Amelia's low monotone saying: "…CLOUDY AND OVERCAST…" Others heard her but could not make out the words.

14:30 3:00A.M. Itasca log: "SENT WEATHER TO KHAQQ."

14:45 3:15A.M. Itasca log: "NOTHING HEARD FROM EARHART."

15:00 3:30A.M. Itasca log: "SENT WEATHER. KHAQQ FROM ITASCA: WHAT IS YOUR POSITION? WHEN DO YOU EXPECT TO REACH HOWLAND? ITASCA HAS HEARD YOUR PHONE ON KEY. ACKNOWLEDGE THIS MESSAGE ON NEXT SCHEDULE." (Weather at Howland)

15:15 3:45A.M. Earhart: "ITASCA FROM EARHART. OVERCAST…WILL LISTEN ON HOUR AND HALF HOUR ON 3105."

15:30 4:00A.M. Itasca log: "BROADCAST WEATHER ON PHONE 3105…AND KEY 3105." Also transmitted: "WHAT IS YOUR POSITION? WHEN DO YOU EXPECT TO ARRIVE HOWLAND? WE ARE RECEIVING YOUR SIGNALS PLEASE ACKNOWLEDGE THIS MESSAGE ON YOUR NEXT SCHEDULE."

15:45 4:15A.M. Itasca log: "EARHART UNHEARD ON 3105 THIS TIME."

16:00 4:30A.M. Itasca log: "REPEATED PREVIOUS TRANSMISSION."

16:23 4:53A.M. Itasca log: "SENT WEATHER/ CODE/ PHONE/ 3105 KCS."

16:24 4:54A.M. Earhart: "…PARTLY CLOUDY…" Itasca log: "Volume S-1."

16:25 4:55A.M. Itasca log: "Earhart broke in on phone - unreadable."

16:30 5:00A.M. Itasca log: "Sent weather. 'What is your position?' etc."

16:43 5:13A.M. Itasca log: "Earhart signals unheard on 3105."

17:00 5:30A.M. Itasca log: "Sent weather. 'What is your position?'"

17:15 5:45A.M. Itasca log: "No hear during [scheduled time]."

17:30 6:00A.M. Itasca log: "Sent weather data."

17:44 6:14A.M. Earhart: "[Want] BEARING ON 3105/ ON HOUR/ WILL WHISTLE IN MIC."

17:45 6:15A.M. Earhart: "ABOUT TWO HUNDRED MILES OUT. APPROXIMATELY. WHISTLING NOW." Itasca log: "Volume S-3." [Note: sunrise was recorded in the log at 6:15 A.M.]

18:00 6:30A.M. Itasca log: "Sent weather and asked position."

18:12 6:42A.M. Itasca log: "KHAQQ came on air with fairly clear signals calling Itasca."

18:15 6:45A.M. Earhart: "PLEASE TAKE BEARING ON US AND REPORT IN HALF HOUR I WILL MAKE NOISE IN MICROPHONE - ABOUT 100 MILES OUT." Itasca log: "Earhart signal strength 4 but on air so briefly bearings impossible."

18:30 7:00A.M. Itasca log: "Sent weather, maintained contact on 500 kcs for homing."

18:48 7:18A.M. Itasca log: "Cannot take bearing on 3105 very good/ please send on 500 or do you wish to take bearing on us/ go ahead please." Itasca log: "No answer."

19:00 7:30A.M. Itasca log: "Please acknowledge our signals on key please." Itasca log: "Unanswered."

19:12 7:42A.M. Earhart: "KHAQQ CALLING ITASCA WE MUST BE ON YOU BUT CANNOT SEE YOU BUT GAS IS RUNNING LOW BEEN UNABLE TO REACH YOU BY RADIO WE ARE FLYING AT ALTITUDE 1000 FEET." Itasca log: "Other log reads - Earhart says running out of gas only half hour left [verified by other witnesses]/ can't hear us at all/ we hear her and are sending on 3105 and 500 same time constantly and listening in for her frequently." [Note: Itasca had two logs in the radio room.]

19:13 7:43A.M. Itasca log: "Received your message signal strength 5 (sent AAA's etc. on 500 and 3105). Go ahead." [Note that considering the 50-watt power of Amelia's transmitter, the strength of these signals (strength 5 is the maximum possible) indicates that she must have been very close to Itasca at this point. According to the commanding officer, "It was also the time we expected her to arrive."]

19:17 7:47A.M. Itasca log: "Received your message signal strength 5." Itasca log: "Sent AAA's on 3105."

19:19 7:49A.M. Itasca log: "Your message okay please acknowledge with phone on 3105." Itasca log: "Keyed AAA's."

19:28 7:58A.M. Earhart: "WE ARE CIRCLING BUT CANNOT HEAR YOU GO AHEAD ON 7500 EITHER NOW OR ON THE SCHEDULE TIME ON HALF HOUR." Itasca log: "Volume S-5."

19:29 7:59A.M. Itasca log: "AAAAAAAAAAAA (on 7500). Go ahead on 3105."

19:30 8:00A.M. Earhart: "KHAQQ CALLING ITASCA WE RECEIVED YOUR SIGNALS BUT UNABLE TO GET A MINIMUM PLEASE TAKE BEARING ON US AND ANSWER 3105 WITH VOICE." [AE sent long dashes on 3105 for five seconds or so. This was the only direct reply that Itasca received from KHAQQ. It was later believed that she turned away at this point.]

19:35 8:05A.M. Itasca log: "Your signals received okay we are unable to take a bearing it is impractical to take a bearing on 3105 on your voice/ how do you get that?/ go ahead."

19:36 8:06A.M. Itasca log: "Go ahead on 3105 or 500 kilocycles." Itasca log: "Itasca sending on 7500 as her only acknowledgement was for signals on 7500."

19:37 8:07A.M. Itasca log: "Go ahead."

19:42 8:12A.M. Itasca log: "Itasca to Earhart. Did you get transmission on 7500 kcs/ go ahead on 500 kcs so that we may take a bearing on you/ it is impossible to take a bearing on 3105 kilocycles/ please acknowledge."

19:43 8:13A.M. Itasca log: "Repeated above message on 7500."

19:45 8:15A.M. Itasca log: "Do you hear my signals on 7500 kcs or 3105 please acknowledge receipt on 3105/ go ahead." Itasca log: "Sent on 3105 and repeated on 7500."

19:48 8:18A.M. Itasca log: "Will you please acknowledge our signals on 7500 or 3105/ go ahead with 3105." Itasca log: "No answer."

20:03 8:33A.M. Itasca log: "Will you please come in and answer on 3105/ we are transmitting constantly on 7500 kcs and we do not hear you on 3105/ please answer on 3105/ go ahead." Itasca log: "This unanswered."

20:04 8:34A.M. Itasca log: "Answer on 3105 kcs with phone/ how are signals coming in/ go ahead."

20:14 8:44A.M. Earhart: "WE ARE ON THE LINE OF POSITION 157-337, WILL REPEAT THIS MESSAGE ON 6210 KCS. WAIT, LISTENING ON 6210 KCS. WE ARE RUNNING NORTH AND SOUTH."

[Advancing the time from 1928 GMT to 2013 GMT produces a line of position of 333-153 degrees true. This means that FN's 337-157 degrees true LOP was based on a sunrise fix.]

Itasca log: "On 3105 - volume S-5."

20:17 8:47A.M. Itasca log: "We heard you okay on 3105 kcs. Please stay on 3105 do not hear you on 6210 maintain QSO on 3105." Itasca log: "This broadcast by voice on 3105 and by key on 7500. Nothing was heard on 3105 or 6210."

This ends the radio log between the USCGC Itasca and Earhart.

Appendix B

Aircraft debris field Gallery

Images of human remains are not included here but are present in the video. Generally described, the images appear to show a male and a female lying directly beside each other to the right of the port after hatch, apparently in the after compartment. There appears to have been considerable activity in this area revolving around the fuel tanks. Some viewers believe that nitrogen bottles and some kind of breathing bag accompany both sets of remains. The remains are mostly intact, with at least one right hand and a corresponding thumb appearing to be dismembered; at the time of separation this hand would have been either tightening or loosening a gas cap on a fuel tank. There are signs of an effort to a) extract residual fuel from the tanks and b) influence the buoyancy of the fuel tanks and airframe. The center of gravity of the aircraft was forward of the center of buoyancy and would therefore have produced a "tail-up" flotation attitude, consistent with the manner in which the remains were discovered.

The images below might not be that dramatic if found off Cape Cod. But in this case, no plane remotely resembling the age, type and description of the Lockheed 10E has ever crashed within hundreds of miles of Gardner Island. Thus, finding a debris field with any aircraft components at all is alarming in the Earhart search context. Finding such a field with components well correlated with the Electra constitutes cause for action by anyone aware of its historical import, or by anyone who recognizes the presence of human remains requiring proper burial. When an artificial object has been in this kind of water for this long, a confounding problem is the growth of coral around the wreckage. This has made the wreckage scene harder for the layperson to identify, and it took this author some time to distinguish, at least on a gross level, between coral and artificial material. Having said that, with sufficient image resolution the presence of the Lockheed 10E in these images is clear and convincing.

probable_After_Landing_Gear_And_Fuselage_And_Footwear_L10E_001

Close-up of what appears to be the after landing gear and terminus of the fuselage, which correlates with the L10E. Notice the transition between the coral growth on the tire and the smooth surface underneath, indicative of an artificial material. The central wheel mechanics can be vaguely discerned through considerable coral growth and detritus. To the left of the tire can be seen what appears to be a very old fashion of hiking footwear, positioned behind the tie-down line as well. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_After_Landing_Gear_And_Fuselage_L10E_001

The same section from an angle forward of the previous; it appears to be the after landing gear and terminus of the fuselage, which correlates with the L10E. Notice the transition between the coral growth on the tire and the smooth surface underneath, indicative of an artificial material. The central wheel mechanics can be vaguely discerned through considerable coral growth and detritus. Note further the distinctive after-fuselage taper, and the appearance of footwear at the very end of the after fuselage. Pareidolia would suggest that as we take the same picture from different angles these "anomalies" should disappear. In this case, however, the "anomalies" only become clearer (see previous). This is true for both the tire and the footwear. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

 probable_Cockpit_Area_DF_Antenna_And_Hatch_Arms_L10E_001

This is believed to be the cockpit area, filled with coral and soil. Notice the same pattern here, in which coral has clearly grown on the object but where what appears to be a smooth, tubular, elemental object can be seen to the right. It has the distinctive geometry of the DF antenna that was mounted on top of the cockpit. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_Cockpit_Top_Hatch_Assembly_With_Swing_Arms_L10E_002

This is the same "cockpit" area, where we see what appear to be tubular swing arms, also with metal or aluminum showing. Another piece correlates with the cockpit hatch or pilot's seat on the 10E. Here we can see (more clearly in other photos) the DF antenna and its elemental, tubular shape. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_Cockpit_Top_Hatch_Assembly_With_Swing_Arms_L10E_001

This is another view of the same "cockpit" area, where we see what appear to be tubular swing arms, also with metal or aluminum showing. This structure correlates with the cockpit hatch on the 10E. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_Direction_Finding_Loop_Antenna_L10E_002

Here we see a distinctive, clearly artificial structure, also displaying metallic properties. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_Direction_Finding_Loop_Antenna_L10E_001

The same object from another angle. Notice the smooth, metallic-looking, cylindrical object to the right. This also appears to be artificial and could be the fuselage section just aft of the cockpit. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_VNotch_Of_High_Frequency_Antenna_Wire_Insulation_Disintegrated_L10E_001

What appears at first blush to be a rope; lying over it in a very thin layer is soil and possibly coral material. This is the tie-down rope for the airplane, which has a characteristic grommet on one end. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_VNotch_Of_High_Frequency_Antenna_Wire_Insulation_Disintegrated_L10E_002

Another close-up of the tie-down line used to secure the aircraft to the ground. Recovered from Richard Gillespie, Executive Director of TIGHAR, with no restriction on usage. Copied under Fair Use.

probable_Port_After_Hatch_Inverted_L10E_001

We again see what appears to be a metallic surface covered in coral. The smooth areas appear to be steel or aluminum. At the top edge we see a short section of exposed metal with a groove such as that found where gaskets are employed. This correlates with the after cargo hatch door, which is inverted and driven into the soil. Crossing in front is the tie-down line. Human remains can be discerned just to the right of this picture frame. The proximity of both sets of remains, among other factors, gives the appearance of some activity involving the gas tanks and this port hatch.

Appendix C

A fast introduction to celestial navigation

 ae_003

Courage is the price that life exacts for granting peace.

The soul that knows it not, knows no release from little things;
Knows not the livid loneliness of fear,
Nor mountain heights where bitter joy can hear the sound of wings.

How can life grant us boon of living; compensate
For dull gray ugliness and pregnant hate
Unless we dare the soul's dominion?
Each time we make a choice, we pay
With courage to behold the restless day,
And count it fair.

~ Amelia Earhart 1927

The End … and The Beginning of Amelia’s Wonderful Story

Disclaimer: The pictures and information provided are of limited extent, are not expected to adversely affect commerce in their regard, and are of immense historic and academic import, thereby retaining considerable public value and justifying a plea of Fair Use.

I’ve been asked this question before so I decided to tackle it. Are we alone in the Universe? What are the odds? Can we even know that? The answer is yes, yes we can. And I am going to show you how. But first, I’m going to divert to define carefully what I mean by “life”. Then it gets weird … really weird.

Mars Opportunity deorbit debris

I'm going to use the term "advanced" life to mean any life form that possesses an arbitrary command of energy and power. By an "arbitrary" command I mean life that dispenses energy into its environment, at given power rates, in amounts exceeding what is possible by its own metabolic or life-sustaining function alone. And by "life" I mean any natural phenomenon that self-replicates with a fidelity of greater than 10% per iteration. The 10% is a bit arbitrary, so we can substitute any variable we like, call it α, to denote that value, since it won't have any bearing on the basic premise of our argument. Thus, life, which we'll denote as ψ, is defined as:

ψ = Any natural process that self-replicates at any non-zero rate with a fidelity of α per iteration or greater.

Notice that it does not depend on any a priori knowledge or principles of biology. It is a fundamental definition. It is, in effect, a statement grounded in physics and math alone. Now, this leads us to a refinement of our other definitions as well.

An "advanced" life case is any life ψ whose causal train releases more energy into its environment, measured over its duration of existence as such, than the energy released as a sole consequence of the energy expended to self-replicate, as by definition.

Thus, there is some total energy Eψ, over the duration of a case's existence as ψ, required to self-replicate. It is important to note that this energy includes all energy expenditures consequent to a design to self-replicate, not just the minimum required. In other words, if a microbe expends x amount of energy swimming 5 miles in an ocean, this is considered a design expenditure because, from its inherent design, this expenditure follows. Energy expended as a self-replication strategy that is not "hard-wired" into that design, however, does not follow from the design. That proportion of energy, should it exist, expended outside any identifiable, hard-coded design distinguishes ψ into two categories, "advanced" and "primitive". Thus, an additional energy, Eµ, is released into the environment of and by an "advanced" life case. We can therefore define "primitive" life as satisfying the relation:

Et = Eψ

Where Et is the total energy released into the environment by the causal train to which ψ is entrained, from the point in time at which ψ arises to the terminus of its duration as such. And advanced life can be defined as any ψ satisfying:

Et = Eψ + Eµ

So, we can now operationally define what we mean by “advanced” life.
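As a toy illustration of these definitions, here is a minimal sketch (my own construction, not the author's; the tolerance parameter is an assumption, since measured energies are never exactly equal):

    # Classify a life case per the definitions above: "primitive" when the total
    # energy released (E_t) is fully accounted for by replication (E_psi);
    # "advanced" when a surplus E_mu = E_t - E_psi remains.
    def classify(E_t: float, E_psi: float, tol: float = 1e-9) -> str:
        E_mu = E_t - E_psi
        return "advanced" if E_mu > tol else "primitive"

    print(classify(E_t=1.0, E_psi=1.0))   # primitive: Et = Epsi
    print(classify(E_t=5.0, E_psi=1.0))   # advanced:  Et = Epsi + Emu, with Emu = 4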

Water vapor clouds over Tharsis. Olympus Mons is at upper left.

We posit that a very large, robust experiment has already been conducted to determine whether advanced life exists in this universe, within some space-time limitation. To explain this, we first imagine a region of space and time extending spherically around Earth, beginning about 1000 miles above Earth and extending to 30,000 miles above Earth. This is mostly empty space. But the point is that for the roughly 100 years that various forms of energetic capture and analysis have existed on Earth (radio receivers, for example), we know that no exobiological creature, no ψ, has existed in that region (assuming, for discussion, that this is how long we have been "listening"). Thus, there exists a "light cone" of discovery that excludes the presence of exobiological life in the context provided. This is not a certainty, but exclusion is highly probable, since any release of Eµ in that context is unlikely to have occurred without detection. And the greater the value of Eµ, the greater our confidence can be; this is like saying the more "advanced" the hypothesized life is, the more confident we can be that it is excluded.

Now, if we extend that sphere out to, say, a distance of 20 light minutes, the light cone becomes a 20-minute light cone. This just means that, because energy propagation is limited to the speed of light, we can exclude advanced life from the surface of the sphere as of at least 20 minutes prior to our measurement. If we extend that sphere to 100 light years, we see the problem: we only began measuring 100 years ago. Therefore, unless we had in fact detected a "signal" (almost certainly a very loud, noisy one at that) on day one of our observations, we can still exclude the surface of this sphere, but we cannot exclude the space beyond it, regardless of how sophisticated our equipment may be. There are two subtle things going on here:

1.)    Distance to exclusion has increased in this example

2.)    Time to exclusion has increased in this example

Therefore, this is a strong case for arguing that assessments of the odds of advanced life existing in the universe are limited by this fundamental relation. We can extend the sphere arbitrarily far into space, but in doing so we necessarily force our understanding of "now" back into the past, making it impossible to state any probability regarding the "true now" of that extended region. Thus, we can speak of a probability per unit volume of space per unit time as a fundamental limit on any other construction of odds, given the following axiom:

  1. We exist … right here and right now.

Corollary:

  1. We have existed at some time right here (Earth’s reference frame) in the last 10 billion years.

The Voyager space probe just before departure in the late 70s. It is now the most distant human-made object and is just now entering interstellar space. It will likely last over 100 million years before the erosion of interstellar particles dissolves it completely.

Ceteris paribus, for any arbitrarily chosen volume of space equal to our exclusion sphere s, the probability of an advanced life existing there is ½. Of course, all is not equal, but it is a fair way to approximate a probability defined only in terms of what we can know. But again, this only covers a time interval of 100 years; the odds are reduced when we factor in the available time over which we can evaluate this. And notice that, once again, we make no biological assumptions about evolution or the time it takes for advanced life to present. Thus we are being generous in terms of what can occur in some biological sense, and we are examining this only in terms of the limits of nature alone. We will be conservative inasmuch as we will only consider the last 100 years as the time that human beings have existed as an advanced life, at least in terms of this life's ability to propagate.

We’ll pick a gross estimate for a time interval over which stars have existed, say, 10 billion years. Then,

100 / (10 × 10^9) = 1 × 10^-8

The volume of a sphere with radius 100 LY is:

V = 2.81 × 10^6 LY^3

The Milky Way Galaxy has a radius Rmw of about 50,000 LY. Its thickness is, on average, about 1000 LY. Therefore, the area of the Milky Way Galaxy disk is:

314,159 LY^2

And its volume is:

314,159 LY^2 × 1000 LY = 3.14 × 10^8 LY^3

Therefore, the ratio of the volumes of the Milky Way Galaxy and our 100 LY sphere is:

3.14 × 10^8 / 2.81 × 10^6 = 112, yielding a probability adjustment of:

1 × 10^-8 × 112 = 1.12 × 10^-6. But the probability of any "like" space containing life is only ½, based solely on what we know. Therefore, the probability is:

0.5 × 1.12 × 10^-6 = 5.6 × 10^-7

That is 1 in 1.79 million against life at least as advanced as "us" having existed in the Milky Way Galaxy in the last 100 years. Ouch. We can adjust these odds to take things like empty space into account, but if you do, you'll see that the odds only get worse, not to mention that you have to start assuming things. But let's not be totally banal. Might it be better to ask: has an advanced life existed in the Milky Way since some time in the past recent enough to strongly suggest, ipso facto, that it still exists now? We have to ask this question because the way we framed the odds did not take the longevity of an advanced civilization into account (at least not beyond the 100-year window). Let's do that now, and give them some credit within the bounds of what we already know. Again, it isn't a certainty, because we cannot assume we are "equal" to some other, arbitrary, advanced folks out there. But, to be reasonable, let's give them 10,000 years before they destroy themselves. That means that our odds change:

100 × 5.6 × 10^-7 = 5.6 × 10^-5; still far from likely. That's 1 in 17,857 against there being more than one advanced life in the Milky Way.

Now, here's the "skeawy pot". What about ancient ruins of folks that just didn't work out for Mother Nature? Very different numbers indeed. Let's assume that we won't likely find archaeological remains more than, say, 100 million years old. Just for kicks. Then:

1,000,000 × 5.6 × 10^-7 = 0.56, or about 1 in 2. We have a winner. If we consider other galaxies like Andromeda, we're certain to find them … lots of them. Now, you see where I'm going with this? The pattern of research we're seeing on Mars, where it looks more and more likely that "primitive" life once existed (a slightly different bunch of assumptions, but still) while the odds of finding it in our day and age are slim, is a pattern we could have easily predicted using basic logic. We're not likely to find ET, but we damn well might find lots of their space probes, old cities and other remnants. Witchy, huh?
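To keep the whole chain of figures in one place, here is a short sketch that simply replays the post's own numbers (including its sphere and galaxy volumes as given above, rather than recomputing them):

    # Replaying the post's arithmetic, using its own figures throughout.
    time_fraction = 100 / 10e9                # 100 years of listening / 10 billion years
    sphere_vol = 2.81e6                       # LY^3, the 100 LY exclusion sphere (as given)
    galaxy_vol = 3.14e8                       # LY^3, 314,159 LY^2 disk x 1000 LY thickness
    ratio = galaxy_vol / sphere_vol           # ~112

    p_now = 0.5 * time_fraction * ratio       # ~5.6e-7, i.e. 1 in ~1.79 million
    print(p_now, 1 / p_now)

    p_10k_year_civ = p_now * (10_000 / 100)   # credit civilizations 10,000 years of life
    print(p_10k_year_civ, 1 / p_10k_year_civ) # ~5.6e-5, i.e. 1 in ~17,857

    p_ruins = p_now * (100e6 / 100)           # ruins detectable for 100 million years
    print(p_ruins)                            # ~0.56, about 1 in 2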

Conclusion

Looking at what we now know about Mars (long story, but microbial life was there), a curious pattern seems to be present. From one star system to the next, beginning with the nearest and going outward, the probability of finding "primitive" life appears similar to the probability of finding "advanced" life as we proceed outward from one galaxy to the next.

Cheers

- kk
