Copyright 2015

by A. Van Andle

Most of you who know me will be wondering why someone whose specialty is astronautical engineering would be diving into anthropology. Well, SETI (the Search for Extraterrestrial Intelligence) involves some key questions about how, should we ever make contact with technological agency beyond Earth, we would identify it as such. Part of figuring that out is figuring out how the only case example we have originated. Where, when and how it originated can be instructive when we confront the more general questions about technological agency in the universe. I am keenly interested in this question. My approach is admittedly mercenary; I am (hopefully) tracking technological agency back to its source. And I think I’ve discovered that it begins as much with social behavior as it does with explicit technologies. And while I’ve hardly answered all my questions (and have created more), I think I’ve provided a good starting point for where to look next.

For those who insist on taxonomy, what you are about to read could be formally framed (but isn’t) as a decoupled assimilation model of human evolution, where “decoupled” means that the genes giving rise to characteristically human traits emerged in multiple geographic locations, not all of which this article identifies, and are thus decoupled from each other in geography and time. It is basically the assimilation model with multiple “founts” and considerable admixture.

Before we say a word, some key points should be stressed about anthropogenesis (the study of human origins). As it stands, there is insufficient direct evidence to reach conclusions about how modern humans emerged, from where they emerged or from “whom” they emerged. Any such conclusion will likely lead to Type I and Type II errors and will not provide us with a complete picture of what happened. The reason for this is that as we go further back in time, prior to the Holocene (about 10,000 years before present, or 10 kybp):

  1. The archaeological record becomes more ambiguous, partly because time erodes evidence and partly because technological agency had less measurable impact on the environment.
  2. The best evidence, fossilized human remains, currently represents only a tiny fraction of what existed and occurred at the time, and even the evidence we do have carries selection bias.

In other words, the doors are wide open for hypotheses based on what we do know, but to suggest that a theory is ready for the press is specious and fraught with gotchas. That may not be what I wanted to learn, but it is the logical, evidential and objective reality. What I’m going to present to the reader is an analysis of inferential knowledge gained over many years, much of it very recent, which does not in all cases depend on direct physical evidence. This is necessary to expand our thinking on what is possible, but it is not a theory and I’m not even sure it is a hypothesis; it’s just a suggestion for strategies to recover direct evidence in a way that is less prone to selection bias. Until someone can provide direct evidence, not undermined by selection bias, that represents a sufficient statistical sampling of the past, and unless I am factually corrected (which I may be), I present the following narrative as the most probable one based on the information available today.

Anthropology bears an uneasy similarity to astronomy; to some degree both are observational and, unlike physics, do not readily admit of controlled experiments. This means that we have to choose what we observe, and it is poor choices that have the most potential to confound, delay and frustrate enlightenment. In an uncertain environment the key strategy should be to minimize selection and confirmation bias in the assessment of the direct evidence in our possession, so that we make sound choices about how we observe whatever nature has to offer. But this objective must be balanced against the realities of human limitations in the present. A case in point is the ongoing controversy within anthropology over the so-called “peopling” of the Americas. As it stands, the best evidence available supports a human presence in the Americas beginning at Clovis time; beyond that, uncertainty increases sharply. Some anthropologists and other professionals in the field may wish to address the inevitable confirmation bias that quickly oozes into this void by holding themselves to account first to “moral” or ethical principles and only secondarily to principles of scientific objectivity. If I had ethical objections to participating in the design and construction of nuclear weapons, I’d say I had a position worth defending. But in anthropology this ethical calculus is much ado about nothing, and I will try to explain why in this work. The first clue is mathematical literacy: our relatedness to, say, a four-year-old boy who lived in central Siberia 24,000 years ago decays exponentially with base 2 per generation. The enlightened perspective seeks to educate others of this fact rather than accepting the barbarism that metastasizes from ignoring it. As for the “controversies” in anthropology, people need to calm down and realize that it is their own confirmation bias that creates these issues in the first place.
This is called enlightenment, and I, for one, operate on the ethical principle of Illumination Everywhere, not illumination for a privileged few. My flummoxed state when assaulted by these “controversies” is arrested only when I appreciate how deeply ensconced mathematical illiteracy, ignorance and superstition truly are in human society. Hopefully this work will convey how and why.
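The base-2 decay of relatedness mentioned above can be made concrete with a short sketch. This is illustrative only; the function names and the 25-year generation interval are my own assumptions, not anything from the text, and the calculation ignores pedigree collapse (the fact that real family trees share ancestors).

```python
# Illustrative sketch (assumed names and parameters): the expected fraction
# of ancestry traced to any single ancestor n generations back halves each
# generation (2**-n), while the number of ancestor positions doubles (2**n).
# Pedigree collapse is ignored for simplicity.

def expected_shared_fraction(generations: int) -> float:
    """Expected ancestry fraction from one ancestor n generations back."""
    return 2.0 ** -generations

def ancestral_slots(generations: int) -> int:
    """Number of ancestor positions in the pedigree n generations back."""
    return 2 ** generations

# A hypothetical 24,000-year separation at an assumed ~25 years/generation:
generations = 24_000 // 25              # 960 generations
print(ancestral_slots(10))              # 1024 positions just 10 generations back
print(expected_shared_fraction(4))      # 0.0625 for a great-great-grandparent
```

Even at 10 generations the individual contribution is already below a tenth of a percent, which is the quantitative point behind the "exponential decay of base 2" remark.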

So when we are presented with sites such as Pedra Furada, we have to ask whether the narrative we are invited to believe appears realistic; that is, what are the odds that the narrative it paints is true? For nature sometimes diabolically but warmly invites us to believe a narrative we are likely to believe even when she knows it is false. At the level of the Pedra Furada site alone, there do indeed appear to be unrealistic aspects, but you wouldn’t know that unless you looked very closely at the site itself. And the same can be said of other sites like it. So, the detective attempts to assess the realism of the narrative he or she is invited to believe, and not just at the level of a single site; he or she is also tasked with assessing the odds at the macro level. Given what we do know, does the narrative we are invited to believe appear realistic? I am going to argue that some of the assumptions made thus far, in light of growing new evidence, are beginning to lack realism. I am going to show that the narrative suggesting, for example, that Homo erectus’ diffusion around the globe was constrained is probably no longer realistic. That does not mean there is direct evidence that Homo erectus (HE) did in fact diffuse globally. The purpose here is to provide insight into our choices for observation, and how we can better balance limited resources and time against the desire for a more realistic set of observational choices.

So, anthropologists, you should get your colleagues to go to north-eastern Siberia and start doing some archaeological digs to see if they can find the stop signs telling all those Homo erectus to stop and stay out of the relative paradise just below that frozen Siberian tundra they were living, and too often dying, on (you know, the MPT-era one that was a gigantic, dry lowland bigger than Beringia itself). And since they liked to follow game so they could eat, you should also see if your colleagues can find out what stopped all the wild game from doing the same … oh, wait, the game didn’t stop, did it? We know virtually all those Siberian mammals crossed back then … except our mysterious Homo erectus. How strange! There are many examples; a 700,000-year-old wild horse has been found in the modern-day Yukon, for instance. I am obviously being facetious, but I will attempt to show that, while we know Homo erectus began as a “tropical” primate, the assumption that it remained strictly so over time is probably no longer realistic. There would have been over 800,000 years of local adaptation time for HE before reaching Beringia, and diffusion would continue to push the species to further extremes of climate (and we see no practical limit to how far HE spread). It is understandable if one is nonetheless skeptical of HE surviving such a climate, but I submit that once you know the whole story you will change your mind. We shall see shortly that nature has tendered to our delight an invitation to self-deception, one that invites us to be entirely unrealistic about what our own data is showing us.

Then there is that nasty lice problem. Here are the facts. There are three known types of human louse, categorized as Types A, B and C. Type A is a body louse, while Types B and C are strictly head lice. The Type A body louse actually depends on clothing: the lice live and lay their eggs there. Types B and C do not depend on clothing to survive. The evolutionary divergence time between the body and head louse, based on DNA analysis, is estimated at 1.2 million to 700,000 ybp. Yes, you read that correctly. A population boom (not a divergence) in the Type A body louse has been noted in the DNA evidence at around ~100 kybp, and this is often cited as the coincident time of emergence of Homo sapiens sapiens (hss). In other words, the world has been misled on this point, because this is not the time Type A first appeared; it is merely the point in time when its population increased sharply. It actually diverged about 1.2 million to 700,000 ybp. Keep in mind that if our ancestors began wearing clothes 1.2 million ybp, some time would be required for the louse to speciate into Type A, so we’d expect a slightly more recent divergence than 1.2 million ybp in that example. The point is that, though not certain, realistically speaking, something seminal occurred around 1.2 million ybp that provided the selection pressure leading to human louse speciation. And the most obvious candidate is that HE began wearing foreign objects, probably fur from animals. It was not 100,000 ybp as we have been invited to believe. The confusion originated when the studies showing a population increase in Type A lice were published and the results were repeated in the public domain as support for the “Out of Africa” theory, inasmuch as they supported a large population increase. This later got contracted to saying that it represented the time hss speciated from its ancestor.
And let’s forget for the moment the lack of realism in the idea of a fur-covered primate living in, say, Africa, wearing clothes. Studies showing the date of divergence of Type A and Type B came out later (Reed et al., 2004), and only then was it confirmed that the 100,000 ybp date couldn’t explain the origins of hss. There is nothing controversial in these basic facts; they just show how confirmation bias can cause us to want to fill in blanks when we don’t have sufficient data to justify it.

The Type A body louse is global in extent, and Peruvian mummies about 1,000 years old were found to have it. The Type B head louse is found only in Europe, the Americas (not just North America) and Australia, and it was isolated in New World mummies dating back to 10,000 ybp. The Type C head louse is found only in Ethiopia and Nepal. Thus, it is presumed from this data that the Type B head louse did not exist in the Old World, and that its presence in Europe results from its being brought back during the colonial period. The presence of Type B in Australia and of Type C in Nepal and Ethiopia we address later. So, the distinguishing feature of this data is that in Holocene times the Type B louse was extinct everywhere except the New World and Australia, and that it does not rely on clothing. Furthermore, this evidence rather suggests that clothing was being worn around 1.2 mybp, as the Type A louse depended on clothing to survive. Realistically speaking, and given the climate, clothing was not likely being worn in Australia 1.2 mybp, which suggests that this event is more directly associated with an extremely cold environment somehow associated with the Americas. That is the more realistic premise.

Many have postulated, with no direct evidence I could find, that the Type B human louse is associated with Homo erectus. This makes sense, since HE possibly had considerable body hair and would not “need” the Type A louse. But keep in mind that this is just an assumption. And we can see how one of the symptoms of snorting pixie dust is memory loss: when we have an ideological confirmation bias we tend to cite data selectively and forget to mention the obvious indication that human ancestors were clothed 1.2 mybp. This does not fit with hss speciation at roughly 100 kybp. Nor does it mean that HE had lost his and her body hair; only that garments of some kind, however crude, were likely being placed over the body. It could have taken a long time indeed for the hair to disappear. This is in fact strong evidence of a selection pressure having to do with extremes of cold. It points away from Ethiopia, Nepal or Australia as the source bottleneck (when someone who has natural fur puts on a fur coat, you know it is cold, and that is itself suggestive of a bottleneck).

Now that we have the facts cleared up, the presenting question is: what is the most likely scenario to explain the presence of the Type B louse in the Americas but not in the Old World? In light of the nature of biological contamination, this is a conundrum. One researcher oddly suggested in a paper in the Journal of Pixie Dust Tripping Diaries (I like being facetious, so forgive me) that hss, sometime after “leaving Africa”, met HE in “Asia” and picked up Type B lice from him or her, then proceeded to the Americas and infected everyone there. Let’s examine this logic. It requires we examine two scenarios, since neither was explicitly given. First, let us assume this was just one lonesome hss. Then this one lonesome hss managed to infect a series of his or her relatives over numerous generations (thousands of years for this kind of diffusion), all of whom happened to enter the Americas without exception, and who then infected all of the Native Americans there. Not one of those descendants remained in Asia to leave a trace of Type B there. First, it wasn’t likely just one hss, but even if it was, this is not realistic at all. The second scenario is that it was more than one hss infecting others. Then the problem is worse, for now we have thousands infected with Type B over a large number of generations, each and every one of whom entered the Americas without notable exception. It wasn’t my intention to be overly critical, but I must call a spade a spade: this is not realistic. One internet sleuth had the interesting idea that perhaps modern humans arrived in America much earlier and simply passed the Type A louse to Homo erectus … in America. This is close, but it doesn’t provide any particular reason for suspecting that Homo erectus would be in the Americas. More importantly, it also makes the tacit but unfounded assumption that the B louse was truly isolated to the Americas, without providing an isolating mechanism (beyond just cold and distance).
We need a more realistic explanation for how Type B was isolated to the Americas.

The most likely explanation, in general terms, is that:

The Type A louse speciated from the Type B louse in a state of duplex bottleneck, meaning that its host was bottlenecked from some other host of the same species in both directions and on at least two sides, and that when this bottleneck opened, gene flow (and consequently the gene flow of the louse) proceeded in a favored direction (toward the Americas), thereby converting a duplex bottleneck to a simplex bottleneck. As a second condition, the original source population in the Old World went extinct, removing the habitat for the B louse in the Old World (we can remove the extinction assumption with an alternate narrative taken up later).

Not surprisingly, this is awfully similar to how assured duplex decontamination has to work for humans visiting other planets; otherwise, as Apollo showed, life has a tendency to forward-contaminate with extreme ease. Specifically, in the scenario considered, if the overall land area occupied by the Type A host is very small (think Beringia) relative to the overall land area occupied by the Type B host (think lower North America), the Type A host population density will likely be higher at its end of the corridor, while the population density of the Type B host will likely be much lower at its corridor entry. The result is a difference in diffusion rates whose magnitude is a function of population densities at the corridor’s endpoints, biasing diffusion by the Type A host toward the Type B host. If the corridor closes periodically, this can establish a simplex bottleneck. Notice that this also requires a corridor of some non-negligible length.

It is important to realize that the populations of that time had no sense or knowledge of geography, so they had no notion of the direction in which they were diffusing, or that they were diffusing into a constrained or narrow geography, or a harsher climate. This applies both to HE diffusing northeast toward the Bering Strait from Asia and to diffusion into and out of the corridor in North America. Barring some kind of barrier, the diffusion will be more or less random (though it might be influenced within those constraints by local vegetation, game, etc.). Thus, the population isn’t “migrating” or “walking” anywhere; it is simply diffusing outward. In such a scenario, a population will diffuse out and away from the southern exit of the corridor and into North America, generally moving away from the corridor’s entrance, so the density at the southern opening will more than likely be very low. But in Beringia the population would tend to swell at the corridor’s entrance due to the geographic constraints of both ice and sea. This biases diffusion strongly from north to south, and if the corridor closes before the Type B host can diffuse all the way back to the northern exit, they will likely either be forced back south or be wiped out by climatic change. So, while it is often presented as a “migration”, there is in reality no purpose or plan to the diffusion as regards geography. Rather, a population will be motivated far more by local considerations than anything else, and it will take many generations before major climatic change becomes noticeable. No particular individual would “plan” to move into a harsher climate; populations just do so over generations, by chance. And under prehistoric conditions it is difficult to imagine any nomadic population having much understanding of its environment beyond the provincial. But this does imply the requirement for a rather long corridor, and we must assume that at least some sub-populations were nomadic.
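The claim that purposeless, locally random movement still produces a net directional flow can be sketched with a toy random walk. This is a minimal illustration under assumed parameters (walker count, step count, a hard barrier at the northern end standing in for ice and sea behind Beringia); it is not a demographic model, just a demonstration that a barrier alone biases diffusion.

```python
import random

# Minimal sketch (assumed parameters): each walker takes locally unbiased
# +/-1 steps along a north(0)-to-south axis, but a barrier at position 0
# (ice/sea behind Beringia) blocks northward movement past it. No step is
# "planned" southward, yet the population drifts south over time.

def diffuse(n_walkers=2000, n_steps=500, seed=42):
    rng = random.Random(seed)
    positions = [0] * n_walkers                 # everyone starts at the barrier
    for _ in range(n_steps):
        for i in range(n_walkers):
            step = rng.choice((-1, 1))          # locally unbiased movement
            positions[i] = max(0, positions[i] + step)  # barrier reflects north steps
    return positions

pos = diffuse()
mean_pos = sum(pos) / len(pos)
print(mean_pos > 5)     # net southward drift despite unbiased individual steps
```

The mean position ends well south of the barrier even though every individual step is a coin flip, which is the sense in which the text says diffusion is biased "from north to south" without any migration plan.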

But at the end of the day, the ultimate factor maintaining the simplex bottleneck, even when the ice sheets were diminished or gone, was the simple fact that Beringians had clothing, as well as other adaptations for surviving that climate, while the Type B hosts did not. And when HE first came to Beringia, we recall, the conditions in Asia were warmer than they would ever be again (getting into Beringia pre-MPT was easier than getting out later; we’ll explain what the MPT was momentarily).

I have seen many hypotheses on this question, but all of them share the same fundamental flaw in reasoning: you can’t have a durable “trade” or “swap” of A and B lice between hosts without contaminating both in a similarly durable fashion, unless there exists some bias toward simplex transmission. It is unlikely, for example, that a host carrying the Type A louse would come to America and give the Type A louse to, say, the HE host in America, and yet the HE in America would fail to back-contaminate the source population with the Type B louse. Barring other assumptions, if the gate is open one way, it is open both ways. This is a problem of biological contamination, and it can be contained naturally only through a bottleneck with a chiral extinction and simplex transmission (we will also discuss a scenario where extinction is not required). When the bottleneck opens, back-contamination is prevented naturally because diffusion from the Type A louse population to the Type B louse population is simplex; something is driving the diffusion of the Type A louse population toward the Type B louse population and preventing the reverse. It is, in effect, a simplex bottleneck.

As for Australia, Ethiopia and Nepal, then, it seems likely that their cases will be explained by a simplex bottleneck (or lack of extinction, to be examined) or by a timing issue dealing with archaic variants (also to be examined). The difference, however, is that these bottlenecks are not as large geographically as whatever isolated the Type B louse in America, and they do not have correlates with the date of 1.2 mybp (we need a large geographical area to satisfy the corridor population density requirement). Other proxies of hominid migrations include typhus, which, according to genetic studies, appears to have migrated from the Americas to the Old World prior to the colonial period. This should be proof by itself that American gene flow was moving into the Old World in ancient times. Homo sapiens, as a product of these events, likely emerged in Asia approximately 250 kybp and diffused to its most recent habitat, Africa, about 200 kybp (to be supported).

I think what we’re going to find when the dust settles is that anthropogenesis has been defined in the past by confirmation bias, though no one is going to call it that. What I mean is that empirical evidence follows the discoveries, and the discoveries follow patterns of modern human agency; namely, humans in recent times have expended far more resources and effort on the continent of Africa than anywhere else vis-à-vis hominids generally (much of which is a necessary evil). This is not a trivial concern, because the entire enterprise of human origins leans squarely on direct evidence, and direct evidence only comes to us as nature provides it and to the extent and manner that we query it. Thus, to form a more viable hypothesis of what might have happened, and to “cure” this defect of confirmation bias that has vexed us to no end, we can only use inferential evidence to propose strategies for locating direct evidence pertinent to the question at hand. For reasons that will become clear later, I invite the reader to consider some particular types of confirmation bias relevant here:

  1. Ideological confirmation bias (aquatic ape theory and Out of Africa)
  2. Ethnic confirmation bias (derived of ignorance or shallow understanding of genetic change over time)
  3. Religious confirmation bias (anything we can conjure that undermines the notion of a non-superstitious explanation of what is).

Some key results of this confirmation bias relevant to our immediate discussion are the assumptions of “geographic coupling” (“coupling” is used in a slightly different sense when I speak of “decoupled assimilation”), in which some assume that modern-day populations are actually associated geographically with ancient populations; a prima facie unrealistic assumption. While there may well be some geographic coupling, the point is that it tends to be exaggerated because it is convenient for some to do so. Ideological bias comes in many forms, but the aquatic ape theory seems riddled with 20th-century feminist tropes and extremism that doesn’t particularly help us understand what really happened. On the other side is a kind of latent (usually unintentional) bias of a misogynistic flavor that tends to discount any logic that centralizes or elevates the role of the female hominid. You can already see the silliness in all this, but I note it because it is a real phenomenon that has created a train wreck of misguided assumptions and conclusions. Then, of course, there are the religious biases, which are important in their own right but which can usually be identified a little more easily, as the key theme is always to undercut the fundamental premise of change over time, something so fundamental that it is hard to miss in its various guises. Ethnic confirmation bias is probably the most insidious of all, but it can also present as a hybrid of ideological bias: if you have a particular ideological view about ethnic issues (characteristically contemporary in their guise, of course), your interpretations of evidence will tend to favor an ethnicity (a concept which is itself fallacious) you feel or believe is unjustly or unfairly portrayed in other contexts. This is what I meant earlier by how one’s own confirmation bias fallaciously creates the analogue of the “ethical quandary” of the scientist working on nuclear weapons.
On the other hand, if you prefer to promote the perceived status of an ethnicity (typically your own) you interpret the evidence in some other, opposing way. It isn’t that people are stupid, rather, it is that they are allowing emotionalism and superstition to toxify their thinking to the point that their reasoning looks incredibly stupid. It’s like talking to a bunch of infants who have no capacity for reason, realism or the tiniest measure of precocious thinking on the matter. Now, there’s not a thing wrong with pixie dust. I’m just saying don’t drink and drive; don’t snort the dust while doing science. I just don’t want your pixie dust “science” to drive me off of a cliff.

About Haplogroups

I debated whether or not to include this section because, in reality, it is just an extension of the fallacy that intragroup diversity must imply antiquity. But some may look at haplogroups and think that they somehow support the assumption of antiquity. They do not, because the reasoning is in fact of the same class of error. Out of Africa and the phylogenetic tree of human haplogroups are two sides of the same correlation-versus-causality coin.

The narrative that will be forming here is simple and parsimonious, partly because it can be explained without requiring a Ph.D. in molecular biology. But some basic concepts need to be conveyed first. The easiest way to explain haplogroups is to start with an example. When a person has blue eyes, biologists refer to this trait as a “phenotype”. This is the observable, physical trait the organism possesses. The genes that collectively code for this trait are called a “genotype”. So, a given phenotype has an associated genotype. Because of the way meiosis works, some genotypes are linked to others in the sense that, for example, fair skin, red hair and freckles, each as a distinct genotype, tend to be inherited together. That tendency is merely a probability, not a certainty. So, there is a tendency beyond chance alone for a child who inherits red hair from a parent (both parents, actually) to also inherit fair skin and freckles. Genotypes identified by this kind of probabilistic linkage can be called haplotypes. And it is possible in population genetics to find a class or group of haplotypes that tend to come together in populations of individuals, and which can be used to identify a population by ancestry; this is called a haplogroup. But in order to maintain this distinction for a group of individuals in such a way that we can track them through generations and still discern them as a contiguous group, biologists measure an individual’s “affinity” to a haplogroup through either a strictly matrilineal or strictly patrilineal line. This is because genes passed this way never recombine, and recombination tends to obscure this relationship over generations. The cost of doing this, however, is that considerable genetic information is lost.
For example, if we track genes inherited only from father to son, it means that as we go back into the ancestry of an individual we are tracking the genes passed to a son from a father, who received them from his father, and so on. Twenty or so generations back, this means that we have a single father at the root, even though the individual we started from has on the order of a million ancestor positions at the nth generational separation (the count grows as an exponential function with base 2). But for the sake of figuring out to what haplogroup he belongs, it is sufficient. The same is true for the matrilineal line: the daughter has a mother, who had a mother, who had a mother, and so on.
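The information loss in uniparental tracking can be quantified in a few lines. This is a sketch under the same simplifying assumption as before (no pedigree collapse); the function name is mine, not standard terminology.

```python
# Sketch of the information loss in uniparental tracking: the pedigree holds
# 2**n ancestor positions at generation n, yet a strictly patrilineal (Y) or
# matrilineal (mtDNA) line passes through exactly one of them per generation.
# Pedigree collapse is ignored, so 2**n overstates distinct real ancestors.

def pedigree_positions(n: int) -> int:
    """Ancestor slots at generation n back (ignoring pedigree collapse)."""
    return 2 ** n

n = 20
traced = n                                 # one father (or mother) per generation
print(pedigree_positions(n))               # 1048576 positions at generation 20
print(traced / pedigree_positions(n))      # the uniparental line's tiny share
```

At 20 generations the traced line touches 20 ancestors out of roughly a million positions, which is the sense in which "considerable genetic information is lost" while still sufficing to assign a haplogroup.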

When we examine haplogroups through the female line it is called mitochondrial DNA haplogroup analysis, designated the mtDNA haplogroup; through the male line it is called Y-chromosome DNA haplogroup analysis, designated the Y-chromosome haplogroup. These haplogroups will be different because they represent genes at different physical locations (loci), and therefore different chunks of DNA and different groups of genotypes. Where the confounds can enter is when we try to construct what is called a phylogenetic tree, which purports to assign structure to all the haplogroups in some ordered way. Since recombination does not occur with these genes, we can use the passage of these haplotypes from one generation to the next as a “molecular clock”, because the only change they endure is random mutation (which tends to have a predictable rate of occurrence given some known features). What is not so clear is the temporal directionality of this “clock” (which is, in fact, a statement of causality). To explain: when 90% of a population, say a haplogroup, has blue eyes and, over a million years, a portion of that haplogroup’s eye color diverges to, say, 90% brown eyes, we say that the haplogroup diverged into another haplogroup, called a derived haplogroup. The problem, however, is that if the genetic information we’re examining comes only from the brown-eyed kin, how do we know that a blue-eyed haplogroup ever existed? We can find out by taking a contemporaneous sample (a sample from the same time period, such as today) from two or more persons in two or more haplogroups. Let one haplogroup be designated ζ1 and another ζ2. These are distinct haplogroups we can recognize and differentiate from each other because of a common set of traits, such as eye color. Either could be a mitochondrial or a Y-chromosome haplogroup. And we seek their clade, or parent haplogroup, ζc.

So, under DNA analysis we can overlay these two genetic patterns (of ζ1 and ζ2) using computational techniques (algorithms) to see where the mutations accumulated over time match each other, and the point where the mutations no longer match. This establishes chirality and nothing more, and that is the problem. By chirality (in the most general sense) I mean that we see that there is a point, called the divergence, where the members of the two haplogroups diverged from a state of interbreeding to a state of breeding isolation from each other. But it does not, by itself, tell us what the temporal order is. Obviously, if we are dealing with a modern-day sample, the temporal ordering of the most recent haplogroup must be backward in time, but it becomes foggy when we pass the first point of divergence in reverse time. What has happened is that a rather simple issue has been thoroughly obfuscated in the thick fog of computational analysis that few understand or will take the time to fully understand and thereby think critically about what it really means. To explain this it helps to use an example. Suppose we analyze not two modern samples, but two ancient samples, say, each 20,000 years old. Now, suppose we see two haplogroups and, as before, we overlay ζm and ζn and seek ζc. And we notice that we can establish the chirality of interbreeding because we can see that some of the mutations on each match point for point, but then some of them don’t. Knowing nothing else, we are left with two possible, physical possibilities. Either ζm and ζn diverged at some point in the past and interbreeding between them stopped or, ζm and ζn merged at some point in the past and they collectively carried each other’s mutations in ζc. In the former case, causality orders the temporal tree forward, and in the latter case, causality orders the temporal tree in reverse. So, in the former case ζc becomes the ancestor of ζm and ζn but in the latter case, ζm and ζn become the ancestors of ζc. 
There is nothing, not a thing, in this example to tell us which is which. Only by evaluating overall intragroup diversities and making assumptions, such as the assumption that intragroup diversity implies antiquity, can we order the tree as in the former case. On the other hand, if admixture were the cause of the divergence, the latter would apply, and the phylogenetic tree can be reversed, flowing in exactly the opposite direction, if we do not establish causality correctly. This is a much more subtle example of how correlation does not imply causality, but it is the same class of error. Other potential confounds include overall population size over time and inbreeding over time.
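The two readings can be made concrete with a small sketch (the mutation labels are invented for illustration; this is not real genomic data). The overlay operation described above amounts to splitting two ordered mutation lists into a shared segment and two private segments:

```python
# Sketch of the haplogroup overlay: the same comparison output is
# consistent with EITHER a split or a merge. All labels are invented.

def shared_and_private(zeta_m, zeta_n):
    """Split two ordered mutation lists into the segment they share
    and the segments private to each (the 'divergence' point)."""
    shared = []
    for a, b in zip(zeta_m, zeta_n):
        if a != b:
            break
        shared.append(a)
    k = len(shared)
    return shared, zeta_m[k:], zeta_n[k:]

# Two hypothetical 20,000-year-old samples:
zeta_m = ["m1", "m2", "m3", "m4", "m5"]
zeta_n = ["m1", "m2", "m3", "x1", "x2"]

shared, priv_m, priv_n = shared_and_private(zeta_m, zeta_n)

# Reading 1 (split): an ancestor zeta_c carried `shared`; m and n later
# accumulated `priv_m` and `priv_n` in isolation.
# Reading 2 (merge): m and n existed first; an admixed zeta_c carries
# `shared` because the two pools were combined.
print(shared)           # mutations common to both samples
print(priv_m, priv_n)   # mutations unique to each sample
```

Both the split story and the merge story produce exactly this same output, which is the point: the overlay alone cannot order the tree.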

We shall see in what follows that the phylogenetic tree of human Y-chromosome and mtDNA haplogroups has likely been turned on its head and is closer to the reverse of what actually happened; all because the researchers putting it together are systematically confusing correlation and causality. In order to sort out causality one has to enumerate all plausible causes, then ask which of those is the more likely. That is, at least, a first step to getting it right. This is why I keep talking about bias in research; my point is to emphasize that all these people cannot actually "be that stupid". They aren't; they are human beings subject to the foibles of human bias, in this case most commonly confirmation bias. Indeed, my conversations and debates with professionals in this field indicate to me that they are quite intelligent and very well educated. But it isn't always a lack of smarts or education that leads to error. When some combination of admixture, population size and inbreeding over time is taken as causally more salient than antiquity, the correct ordering of the phylogenetic tree is not obvious at first glance, even if we can tell it is mostly the reverse of what has been assumed. The narrative I will be building will allow us to identify the basal haplogroups nonetheless, and that pattern will be stunning when juxtaposed with that narrative, to the point that it is hard to imagine any narrative significantly different from what is suggested (we will see a surprisingly strong geographic relationship to the Y-chromosome flow over time).

Various proponents have put forth arguments against reversing the phylogeny (the phylogenetic tree) on all manner of reasoning. But the fundamental fallacy in all of them is that they neglect to appreciate just how much diversity and divergence can be affected by admixture. As an example, one claim is that the Y-chromosome A haplogroup is the most basal of all (it appears mostly in modern-day African populations) because the A haplogroup is most closely related to Pan troglodytes (the chimpanzee). But this merely repeats the causality vs. correlation fallacy, because it neglects to consider that a later-admixed species such as HE might itself be more closely related to Pan troglodytes, particularly the HE sub-species that remained in the same habitat as Pan troglodytes. Then, by admixing with it, the A haplogroup appears more basal. It's the same fallacious reasoning wearing different clothes each time it is repeated. If a small population of Beringian descendants, say, that had already admixed with HE in Asia then admixed with HE in Africa (in the still distant past), we certainly would see the A haplogroup as most basal, even though it really isn't. From my own reading, the trend in anthropology now seems to be moving toward the Assimilation model to explain human origins (but with an origin in Africa still assumed), because these subtleties about admixture, not easily grasped on the surface, are beginning to become clearer now that genetic analysis of Neandertal and Denisova genes has proven that hss is an admixed species (yes, it has been proven). It's forcing people to entertain the full breadth of what this could mean given what we now know.

In the final analysis, hereditary trees such as those constructed for human haplogroups are, as it turns out, now widely acknowledged amongst molecular biologists as not being useful for representing real populations in the past. This cannot be overstated, as many are apparently still not aware of it. It came about after decades of mounting evidence showing that evolution is considerably more complex than previously thought and that things like HGT (horizontal gene transfer) play a much bigger role than previously believed. One means of horizontal gene transfer is hybridization, or admixture, where, if we look at two distantly related species, we find chunks of DNA that are distantly related but also chunks that are closely related. This is because of hybridization or admixture events in the past in which genetic information was passed between species or sub-species. It was thought that it would be a simple matter to isolate which parts were the result of HGT and which parts were "more true" to both, but it has turned out not to be that simple. Even for humans, previously thought to be relatively free of this issue, the problem is too large to solve with existing methods. "Trees of life" are continually worked on and published, but it is understood that this process continues in hopes of improving the method "in place"; there is no real expectation that current "versions" of such trees will reveal the reality of human ancestral relations. For these reasons, it is all the more prudent to be skeptical of using a phylogenetic tree to support "Out of Africa" or any other proposition that relies on it. We will see many indications throughout this work that the assumption about the antiquity of modern-day African genealogies is probably incorrect, consistent with this finding regarding taxonomy. 
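A toy illustration of why hybridization defeats a single tree (the sequences are invented for illustration): compute divergence in windows along two sequences, one of which carries an introgressed chunk. Some windows look distantly related, one looks closely related, so no single tree fits the whole sequence.

```python
# Sliding-window divergence over two invented sequences. The middle
# chunk of seq_b is an "introgressed" segment identical to seq_a.

def window_divergence(seq_a, seq_b, win):
    """Fraction of mismatched sites per non-overlapping window."""
    out = []
    for i in range(0, len(seq_a) - win + 1, win):
        a, b = seq_a[i:i + win], seq_b[i:i + win]
        out.append(sum(x != y for x, y in zip(a, b)) / win)
    return out

# Hypothetical species A, and species B carrying a horizontally
# transferred chunk identical to A in the middle window:
seq_a = "ACGTACGTAC" * 3
seq_b = "TTGTACGAAC" + "ACGTACGTAC" + "TCGTTCGTGC"

divs = window_divergence(seq_a, seq_b, 10)
print(divs)  # → [0.3, 0.0, 0.3]: mosaic, not tree-like, relatedness
```

No single branch length can summarize a genome like this; the flanking windows say "distant" while the middle window says "identical".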
For now, we just point out that it is established that some 20% of European ancestry carries Neandertal genes and that Neandertal were in Europe for hundreds of thousands of years (individuals may carry only 3-4% Neandertal genes, but what is relevant for population analysis is the Neandertal representation in the overall population). This being the case, if hss left Africa only some 70,000 years ago, and if admixture were not skewing the result, it is the European ancestry that should be more diverse on account of admixture (they won the diversity lottery when they interbred with a sub-species isolated from Africa for hundreds of thousands of years). But that is not what "Out of Africa" invites … no, requires us to believe. This problem, exemplified well by the example given, is endemic throughout the human phylogeny ("Bias in Estimators of Archaic Admixture", Rogers et al., 2014). Contradictions like this will continue to get worse until "Out of Africa", and, we'll argue, "Out of Anything", is abandoned.
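The individual-vs-population distinction can be sketched numerically (all numbers here are toy assumptions, not real genomic data): each individual carries only a small fraction of archaic segments, but different individuals carry different segments, so the population collectively retains far more of the archaic genome than any one person does.

```python
# Toy model: the archaic genome as 1000 chunks; each person carries
# a random ~3.5% of them. The union across a population is much larger
# than any individual's share. Parameters are invented for illustration.
import random

random.seed(1)
GENOME_SEGMENTS = 1000
PER_PERSON = 35            # ~3.5% of segments per individual
POPULATION = 200

carried_by_population = set()
for _ in range(POPULATION):
    person = random.sample(range(GENOME_SEGMENTS), PER_PERSON)
    carried_by_population.update(person)

individual_fraction = PER_PERSON / GENOME_SEGMENTS
population_fraction = len(carried_by_population) / GENOME_SEGMENTS
print(individual_fraction)   # small per-person share
print(population_fraction)   # far larger collective share
```

The toy numbers exaggerate the effect, but the direction is the point: quoting the per-individual percentage and the population-level representation as if they were the same statistic is a category error.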

The significance of Mal’ta boy’s still smoldering gun

In south central Siberia near Lake Baikal, in modern-day Russia, the remains of a 4-year-old child were found and dated to about 24,000 ybp. Autosomal DNA was extracted, with paternal haplogroup R1b and maternal haplogroup U. A DNA study published in 2013 clearly and unambiguously shows that this boy carried about 25% of the genes that appear in Native American populations, and that these were derived from Native Americans, not the reverse ("Upper Palaeolithic Siberian genome reveals dual ancestry of Native Americans", Raghavan et al., 2013). There can no longer be a realistic model suggesting that pre-history Native Americans are derived solely from the Old World. This has massive implications that have not yet been fully absorbed by academia. The implications are massive because the boy was found over 4000 miles west of the Bering land bridge, before Native Americans are supposed to have arrived in North America, across an expanse of Siberia in which there is virtually no basal relation between the ancient people who lived there in east Asia and modern-day Native Americans; further suggesting the boy's kin's arrival in Asia is very ancient, because it means his kin diffused to where he was found from the Americas. One should not confuse populations here: modern-day east Asians do have some genetic affinity with Native Americans, but it is the ancient east Asian population contemporaneous with Mal'ta boy that is relevant. And the key conundrum is that modern-day east Asians have no genetic affinity to the boy's kin. Think about what this means: if the boy's kin did indeed diffuse toward the Americas, there should be some genetic affinity remaining in the modern-day east Asian population, even if it were admixed with other populations later. On the other hand, if Native Americans diffused west to the Lake Baikal area, modern-day east Asians would, and actually do, have genetic affinity with them. That region to the boy's east was over 4000 miles of vast territory. 
Diffusion rates over a territory like that yield very ancient latencies. Combined with the genetic inference of the Raghavan study showing Mal'ta boy's genes deriving from Native Americans, the conclusion is undeniable. Thus, in this case, we can say that something is conspicuously unrealistic about the currently assumed Native American migration. Given that yet another set of remains was found not far away with the same DNA results, this is high quality evidence. Known as Afontova Gora, this second exemplar was dated to about 17,000 ybp. Lake Baikal is just to the east of a region associated with a modern-day population with the only known, solid affinity to Native Americans. These are the Ket and Altai peoples, who live just west of Lake Baikal in or near the Yenisei River basin, a region broadly bordered on the south by the northwest border of modern-day Mongolia. Retroviral infections (Neel, 1994) suggest that the actual locus of this related population may be just to the south, in modern-day Mongolia. The find is significant because the boy's DNA was also of Western European affinity (and it was inferred he had brown eyes, brown hair and freckled skin), because modern Native American DNA contains about 25% of his DNA and, moreover, because the contemporaneous east Asian population spanning the 4000-plus miles between Mal'ta boy and Beringia has no detectable relation to him. Expounding on the enigma: even allowing for introgression, it is improbable that no detectable genetic trace of Mal'ta boy would be present in the modern-day east Asian population if his descendants had in fact migrated in that direction. Thus, we ask, if the Bering Strait were traversed, which direction is more likely? A better way of framing this might be: if the East Asian ancestors were not there, then who was? Was it Mal'ta boy's peeps or was it "Native" Americans? Mal'ta boy's maternal DNA was haplogroup U, which does not appear in the Americas. 
In other words, on first impression, it appears that Mal’ta boy was part of a migration that left the Americas, picked up U ancestry somewhere in ancient mid or East Asia, and ultimately ended up in Western Europe (the idea that U was excluded from the Americas, after over 4000 miles of migration through East Asia without U being picked up due to some kind of absolute, time durable introgression is also not realistic, given what we now know about where this boy got his genes from). Ignoring the recent and clearly damning evidence, we can blithely argue for a bottleneck in which the modern-day North American haplotypes were derived of U, but this is a Pandora’s box one might not wish to open, as I’ll discuss later. In any case, large scale genetics analysis has consistently shown a salient feature of heredity that continues to serve as point of confusion and misrepresentation when reported:
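For a sense of the latencies involved, here is a back-of-envelope demic-diffusion estimate (the front speeds are assumptions chosen for illustration; ~1 km/yr is the order often quoted for pre-modern demic expansions, and the slower rate stands in for non-expanding, drifting bands):

```python
# Rough latency for population diffusion across ~4000 miles.
# Speeds are illustrative assumptions, not measured values.

MILES_TO_KM = 1.609
distance_km = 4000 * MILES_TO_KM   # Mal'ta to Beringia, roughly

for speed_km_per_yr in (1.0, 0.2):
    latency_years = distance_km / speed_km_per_yr
    print(speed_km_per_yr, round(latency_years))
```

Even at the fast end this is thousands of years, and at the slow end tens of thousands, which is the sense in which diffusion over such a territory implies very ancient latencies.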

  1. Modern-day East Asian populations have more Native American ancestry than modern West Eurasians but, and this is the key point,
  2. Modern East Asian populations have less Native American ancestry than the West Eurasians of 24,000 ybp had.

Because people tend to falsely attribute modern DNA statistics to ancient populations, the first fact is often imputed as the second fact. This is a false attribution. And Native American admixture shows up globally, all over Asia, Europe and Africa. But African admixture shows up nowhere outside Africa. I would say it couldn’t be more obvious what is going on here, but actually it can, as the rest of this discussion will show.

Let us muse on, then. In Montana, USA, the fossilized remains of a Native American child were found, dated to Clovis time and called Anzick. This specimen revealed that, alack, the conventional view that there was an east Asian "migration" into the Americas just prior to Clovis time is wrong. For Anzick should then show an overall genetic affinity to east Asians some 10,000 years closer than his affinity to modern-day Native American descendants. But he doesn't. He looks like a true Native American. In fact, his affinity is closer to Mal'ta boy! The affinity to east Asians suggests Anzick is an ancestor of them, on account of a recent diffusion from his stock to modern-day east Asia (not the reverse).

The pattern set forth by Mal'ta boy is evidenced yet again in another DNA sample, taken from a location about halfway between modern-day Moscow and the Black Sea called Markina Gora and dated to about 37,000 ybp. One of the reasons the direction of migration is becoming clearer in the latest studies is that labs are now testing the full genetic makeup, the autosomal DNA, which includes both the paternal and maternal lines. This allows much greater visibility into migration directions and patterns. In any case, the analysis of this specimen's DNA was reported in "Genomic Structure in Europeans Dating Back at Least 36,200 Years", Seguin-Orlando et al., 2014, in the journal Science. It was found that he had a greater representation of Neandertal DNA than modern populations, that he was closely related to Mal'ta boy, that he too was unrelated to modern-day east Asians, and that he had strong genetic affinity with modern-day Native Americans. Importantly, his genetic makeup was basal to modern Europeans. It's time to get real: the pattern is one of migration from North America to the Old World, not the reverse. This confounds the "Out of Africa" theory; if that theory were realistic, we wouldn't be seeing this 37,000 years ago just a stone's throw from the Levant. Rather, we should be seeing a pattern of migration to the east. Moreover, the genetic evidence taken as a whole suggests these samples represent a large population in that region and cannot be explained as an anomaly from the Americas. But even if it were an anomaly, we note that this is a sample some 20,000 years before humans are supposed to have entered the Americas, located over 7000 miles away, diffusing in the wrong direction, with a population behind them spanning 7000 miles that picked up nary a base pair from him. And the problem is, they have the Native American genes with them. 
As with Mal’ta boy, there is more interpretation of this data we could do, but in the interest of sticking only to what the DNA evidence explicitly supports, we’ll stop there.

But realistically speaking, the consensus view totally shatters if we wish to extract all that is available from these genetic exemplars, for closer analysis also shows that the more parsimonious explanation of the overall genetic picture is that Mal'ta boy's kin migrated to the approximate area where his remains were found from northern North America (to be exact) over 30,000 ybp, and that modern-day east Asian populations descend from a second wave that migrated from the same place some time later (actually from both southern Asia and North America). In other words, the ethnic tapestry is such because, as the simplest interpretation would have already suggested, the east Asians were following the boy's kin into the Old World, demonstrating a pattern of migration and diffusion from the Americas to the Old World, not the reverse, and all long before the "first peoples" supposedly came to the Americas. But that's not all. The mutations that define the U haplogroup clades and sub-clades show that our boy is 20% Denisova. We'll discuss what this means later, but for now, we just note that it fits a simpler, more parsimonious explanation of the big picture. A similar pattern, suggesting that the B haplogroup derives from the Americas, is apparent in a DNA sample dating back ~40,000 years taken from China, called Tianyuan, but these results are not as clear. The entire model of migrations vis-à-vis Beringia as currently held is no longer realistic. And this date of 30,000 ybp is merely a realistic minimum, which should force us to reconsider how hss emerged from Africa only 70,000 ybp, diffused as if being chased by saber-toothed tigers literally half-way 'round the world, blasted through the arctic, settled in North America, then blasted their way half-way back to Africa (again by diffusion, not running) in just 40,000 years. Clearly, "Out of Africa" isn't realistic either. 
Studies raising similar suspicions about the inadequacy of the "Out of Africa" model, particularly regarding the fallacy of intragroup diversity as a measure of antiquity, are numerous (Relethford, 2001; Harding, 1997; Harding et al., 1997, 2000; Yu et al., 2001; Templeton, 2002). That's probably why these are called "models" and not "theories" at this point.

As for the academic consensus, we’ve only just started and it’s about to get real up in here.

In this view, Beringia and, to some extent, North America look more like a melting pot between Old and New World where lots of interbreeding could have occurred. The same is true of the Levant and the Sinai, where the only entry point into a vast and possibly heavily populated continent of potential admixture existed. A key oversight in interpreting genetic intragroup diversity as indicative of antiquity is that archaeological evidence showing a time progression of hss out of Africa may be correct, yet say nothing of the larger time sequence of events: these sites may merely be indicative of migrations out of Africa after an hss archaic progenitor from outside Africa "generated" them. It's the stratigraphic depth that is key, and it fuels confirmation bias. There is no a priori reason to assume that these depths are the only depths that exist. Notice that the problem extends even to cases where we assume geographic coupling could be a factor. Geographic coupling is more salient where there are natural barriers, so east Asian populations may have changed (like the U haplogroup appearing and then being removed), whereas the absence of U in the Americas seems less likely to have come about by such an extinguishing.

This would also explain why the X2 (X2a and X2g) haplogroup of the northeastern, modern-day U.S. appears there as well as in western Europe; Mal'ta boy's peeps brought it, even though Mal'ta boy himself didn't have it. It came from the Americas to Western Europe the long way (not across the Atlantic, although that option is also possible; we'll see later that X2 is likely a more archaic – read older – genetic sequence originating from Beringia that survives in the eastern US today because this area was more isolated from Old World admixture coming back into America much later). The evidence regarding X2 is solid, taken from the ancient (pre-colonial) remains of a Native American at or near modern-day Hopewell, USA. The particular variant of X is distinct, and others have in the past conflated it with a more common variant, thereby misunderstanding its significance.

While natural sea transport, and even some deliberate migration across the sea is feasible, it is improbable that migration volume rates were high enough to establish meaningful gene flow across continents in these time periods. Therefore, Beringia is the key diffusion path node we should focus upon. Having said that, some gene flow over longer periods of time of much more gradual and smaller impact, could have followed the “Magellan” natural route from Chile to Papua New Guinea, offering some gene flow in that direction only (flotsam is pulled from the coast of Chile beginning just beyond its breakwater, north toward the equator, then west, dumping out in the vicinity of Papua New Guinea). And the same is true for the natural trade currents from the northeastern parts of North America which dump out at modern-day Ireland and northern Spain. These are the only two natural sea currents that could transport the full distance from New to Old World. Curiously, genetic representation of Denisova genes in modern populations is most concentrated in Papua New Guinea and Neandertal in Western Europe. It is a conspicuous “coincidence” that both of these concentrations would be found exactly at the only two trade current termination points from the Americas (in sea transport by trade current, the transit can be accomplished without power of oar or sail or explicit navigation – the challenge is surviving 2 to 6 weeks at sea). Were the Neandertal and Denisova just cousins of hss, all of whom originated in the Americas? We’ll get to that.
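As a rough consistency check of the transit-time claim (the distance and drift speed here are approximate assumptions of mine, not measured figures): a drifting craft carried passively at the mean speed of a major current, no oar or sail.

```python
# Back-of-envelope passive-drift transit time. Numbers are assumed
# for illustration: ~6500 km along-current from Florida to Ireland,
# ~2 m/s mean drift while riding the core of the current.

def transit_days(distance_km, drift_speed_m_per_s):
    km_per_day = drift_speed_m_per_s * 3.6 * 24  # m/s -> km/day
    return distance_km / km_per_day

atlantic_km = 6500
days = transit_days(atlantic_km, 2.0)
print(round(days))   # on the order of five weeks
```

At slower drift speeds near the margins of a current the transit lengthens proportionally, which is why the 2-to-6-week figure is best read as a survival window for the favorable case rather than a guarantee.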

The South Pacific Current, at its nearest landfall, begins as the Humboldt or Peru Current, which is most aggressive and closest to shore near modern-day La Serena, Chile. Fishermen in this area could easily be swept into this current if they found themselves beyond the breakwaters. It dumps out preferentially near modern-day Lae, Papua New Guinea, but also branches toward south-eastern Australia. This kind of analysis is more damning than most appreciate; you can think of it as similar to a two-body problem and how patches intersect. Basically, if you know the initial conditions, you really only need to know the gravitational forces involved to trace a trajectory to its conclusion. You can constrain which currents are viable for sea transport by appreciating the initial conditions (close to the breakwater – which is like a patch intersection – rapid and high potential). And this leaves open a question about Australia, which we'll examine shortly. To the east of the Americas, the closest aggressive current to shore nears modern-day Miami, USA, and proceeds northeast until it reaches modern-day northern Spain at a tangential clip (making landfall less probable), then turns back northwest toward modern-day Ireland, where a landfall is more likely. These are the only two currents "departing" the Americas that come close enough to land to ensnare a primitive seafarer (likely fishermen just offshore) with sufficient current to draw them into a preferred course with a relatively short transit time. It is important to note that when we speak of HE "moving" across regions of the globe, we are in reality talking about a very slow diffusion of the population that occurs without any particular direction or purpose in mind. Thus, when a geographic passage narrows, even if it is not closed, gene flow can be reduced. 
If HE had been present in both North and South America for any length of time, we'd expect the Isthmus of Panama to cause some genetic divergence between those two populations. In fact, it has been found that even birds show divergence on account of the narrowing of land at the Isthmus, so this is expected. Given the discussion of currents and the genetic distributions of Denisova and Neandertal, the inference that Neandertal is correlated with North America and Denisova with South America is undeniably present. But what about hss? The question of hss is where the term "decoupled" comes into the Assimilation model: hss most likely evolved to its current form by global admixture after an intermediate sub-species or species evolved as HE and then later as Beringian. As it turns out, the Isthmus isn't the only gene-flow-regulating barrier to consider. While the Rocky Mountains could be considered a candidate, their altitude is low enough, and the range broken enough, that given what we've seen with HE this wouldn't likely have an appreciable effect on HE gene flow. But the same can't be said for the Andes mountains to the south. To see this, it is important to remember that, again, HE "traveling" is misleading: it is really about diffusion, and the Andes provide a substantial wall to diffusion that the Rockies do not. Thus, regardless of which population is correlated with which, we expect three distinct genetic drifts to occur in the Americas given sufficient time: one associated with the geography north of the Isthmus, one associated with the geography southeast of the Isthmus, and one associated with the geography southwest of the Isthmus; the latter two being distinguished by the Andes mountain range.
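The three-way expectation can be sketched with a minimal Wright-Fisher drift simulation (all parameters are invented; this only illustrates that partially isolated regions drift apart over time, not any actual American population history):

```python
# Minimal Wright-Fisher drift: three isolated regions start from the
# same allele frequency and wander apart. Parameters are invented.
import random

random.seed(7)

def drift(freq, pop_size, generations):
    """Resample one allele's frequency each generation (2N copies)."""
    for _ in range(generations):
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

start = 0.5
# One run per region: north of the Isthmus, southeast of it, southwest of it.
regions = [drift(start, pop_size=200, generations=300) for _ in range(3)]
spread = max(regions) - min(regions)
print(regions, spread)   # three region frequencies and their spread
```

With small populations and many generations the frequencies typically scatter widely; the barriers (Isthmus, Andes) play the role of setting migration between the runs to zero.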

Applying parsimony to the question of Australia or the Amazon (as a possible fount of hss), we note that by placing the fount of hss at Beringia we are in effect saying that HE did not require sea-faring skills per se because:

  1. The entire transit sequence was natural sea transport
  2. The amount of gene flow that followed as a result was merely a “trickle” over a long period of time insufficient by itself to create a lasting, founding population.
  3. The primary gene flow “pipeline” at Beringia was in a more realistic place – where land masses meet.

In the case of Australia we must assume that the Wallace line was breached not merely by natural sea transport but by some deliberate maritime effort and that, once accomplished, it was of sufficient scale to create a founding population in Australia. Therefore, the Americas is the more parsimonious narrative. Moreover, if Neandertal, Denisova and hss are derived of a common ancestor, it is unclear how this ancestor might have generated three distinct lines (which required some degree of genetic drift or geographically defined admixture). And the only route of diffusion that could provide sufficient gene flow for a founding population for all three at that time was Beringia. The proviso to add to this is that the word “fount” is misleading. It might be better described as the “last fount” since it is apparent that HE had already developed quite a few characters consistent with modern human behavior, not the least of which was the hands necessary to engage generalized tool use. Ironically, the origin of that takes us yet deeper into antiquity and is not addressed here.

But the question of generalized tool use is critical. For this would be the first known instance in which natural selection operated on a character via a generalized mechanism. Normally, natural selection operates on a specific trait or cluster of traits. The key distinction is that nature has defined those traits, whatever the overall complex, along with a subset, however large, of the modes of environmental manipulation they permit. But with hands that allow a generalized set of characters to present through both the design and use of tools, natural selection was operating directly and forcefully on cognition itself, the abstract engine that drives the generalized behaviors. When the mechanical operation of hands and their causal output is not fully generalized, selection pressures continue to operate on "hard-wired" behaviors associated with a special class of mechanical hand motions and causal outputs. No other species has the mechanical ability to manipulate the environment in an essentially fully general way. That general ability is the key to understanding the evolution of cognition, and it likely emerged long before HE reached the Americas. But it was in Beringia that this general capacity was likely expanded upon by key selection pressures not present in the Old World. We'll examine that later. The key takeaway is that natural selection was, for the first time, operating on a body plan that permitted a direct cognitive connection to physics itself, one more general than the mediating body plan. We arrived at this body plan by chance, driven by special selection pressures unrelated to the general selection pressures we're discussing.

To be specific, we are saying that natural selection was, for the first time, operating via a body plan that depended on an abstracted connection to a potentially infinite number of causes and effects. In each case the effect redounds to a change in the odds of an organism perpetuating its genes, and it can only be fully characterized by appealing directly to relationships between the presenting selection pressure and matter and energy as output. The only ingredient missing to fully generalize this was how these effects impacted the odds of a population perpetuating its genes, which was most likely not fully exploited (used) until HE reached some other environment. We will be looking for how this worked there (Beringia) momentarily, but it required a comprehension of causality between organisms, not just between organism and natural environment, even if that "comprehension" were illusory (the question of merit would be: is its product predictive, not necessarily is the "comprehension" a real phenomenon). The first skill-set "learned" was physical; the second, social (which ultimately redounded to a physical outcome affecting the probability of perpetuating genes). It most likely occurred on multiple continents. So, forget about "out of" anything. It isn't realistic. The trick is to see if an alternative presents itself that comports with what has been learned from molecular biology. If you ever find alien life somewhere and want to know whether it is of technological agency (or "intelligent"), you might look to see if it has a body plan that includes a mechanical setup permitting the organism to experience a general relationship to nature itself; that is, to physics. Equivalently, you seek a body plan that enables general selection pressure in addition to special selection pressure. Only then do you have selection pressure operating via mind (abstract thought) on the probability of biological information perpetuating through time. 
This answers the long-standing question of why humans “needed” to evolve larger brains. It wasn’t that they “needed” to, it was that they by dumb luck got a body plan that allowed it, and the perpetuation of biological information was dramatically enhanced when an organism used the mind and not just the body, and was therefore positively selected. This is key to understanding and evaluating the presence of technological agency in the universe.

In order to create an operational definition of what we mean, we need to define a term, which I denominate a "general industry". We begin by normalizing the magnitudes of mass and energy such that m=1 and E=1. Then, within an allowed range of periodic elements and forms of energy, we say that an industry of matter and energy is general when its initial state and final state may take on any directions when represented by a rank 2, order 3 tensor of the system's degrees of freedom. I don't hide behind "credentials" and "consensus", and anyone who truly understands what they're talking about can explain something like this without telling the reader to go take a bunch of courses in differential topology and Lie algebra. So, let's put this in English.
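One possible symbolic rendering of that definition (my own sketch of the idea; the tensor notation below is an interpretive assumption, not a standard construction):

```latex
% A sketch, assuming one reading of the definition above.
% Normalize the magnitudes of mass and energy:
\[
  m = 1, \qquad E = 1 .
\]
% Represent the system's degrees of freedom as a rank-2, order-3 tensor:
\[
  S_{ij}, \qquad i, j \in \{1, 2, 3\} .
\]
% An industry is a transformation between an initial and a final state,
\[
  S^{\mathrm{initial}}_{ij} \;\longmapsto\; S^{\mathrm{final}}_{ij},
\]
% and it is \emph{general} when this map is unconstrained in direction:
% within the allowed range of elements and energy forms, no orientation
% of $S^{\mathrm{final}}$ is excluded by the body plan.
```

The point of the normalization is that only the directions (the configuration of matter and energy), not their magnitudes, enter the definition of generality.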

Consider the following thought experiment. We begin with an imaginary Homo species, call it Homo x. We next imagine an individual Homo x, call him Beavis. Beavis has full range of motion of his shoulder/arm/hand system, so, mechanically speaking, he can create a trivial tool with the same single, sweeping motion, regardless of the shape or morphology of the two rocks. But let us say Beavis is a freak in that he is the first of his kind to have this strange, unlimited-degree-of-freedom motion. He's not terribly bright, but he can learn. Growing up, he likes to play with rocks. He picks them up and then decides one day to strike one with another and watch it break. Fire, Fire, Fire. In fact, he enjoys it so much, he does it thousands of times. On occasion he notices that he strikes a break in one of the rocks that leads to something useful: he finds that sometimes he creates one that is rather good at cutting the hides of animals. He recognized it because his dad and all his ancestors did the same thing, but they did it by finding rocks in situ that had a similar shape (what anthropologists might later call a "geofact"). So, realizing he had just struck a rock that looks just like the in situ rocks he had been conditioned to search for so strenuously must have been a startling surprise. No one had actually broken a rock like this before. Hmm. So, he wonders, could I do it again? He tries, but no luck. He fails because each time he tries, the rocks he holds have a different morphology. He isn't smart enough to make this connection yet. But he is a learner! Eventually he does it again! But how? Then he keeps on doing it, breaking thousands and thousands of rocks over the years. Finally he develops an intuition, a subconscious "feel" for the rocks. This would not be possible without a full range of motion in his shoulder/arm/hand system, because each time he strikes a rock, the rock is of a different shape. 
Thus, each time he strikes a rock, he needs a full range of motion to “test out” the effects on different shapes. If a chimpanzee tried this, he would never develop a learned feel for how to adjust the orientation of shoulder/arm/hand to differently shaped rocks. That’s because the chimpanzee cannot simply adjust his shoulders, arms and hands in response to a change in the shape of a rock; a broad, random sample of rocks in situ requires a correspondingly broad range of shoulder/arm/hand responses to get the full experience. Beavis begins to realize, probably subconsciously and without requiring higher cognition, that the shape of the rocks is important, and that when the shape changes, he needs to change the orientation of the hand holding the rock and the movement of the wrist of the accelerated hand. Again, he couldn’t have learned this if the shoulder/arm/hand system had only a limited range of motion, for the results would never be consistent from one try to the next and there would be no pattern template from which to learn. In tensor-talk, we say that the direction in which optimization increases most rapidly is indeterminate (its partial derivatives don’t exist). But it’s important to understand that this is not just true in a rock-splitting example; it is generally true for any trivial tool made. This has nothing to do with brains yet; it’s all mechanics and learning by rote repetition at this point. He draws the maximum from experience: the maximum number of successful tool breaks for every stone broken. But he needed the ability to manipulate mass and energy across the full range of degrees of freedom to do this. Once he figures this out, he and his kin experience a dramatic increase in survival rates, because they can much more readily and frequently skin animals that before had been inedible, while everyone else is, say, dying because their food source is drying up.
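The rote-learning loop just described (strike, observe, remember what worked) can be sketched as a toy simulation. Everything here is invented for illustration: a rock’s “shape” is reduced to a single angle and success to a tolerance band. The point is only that a learner with a full range of motion can eventually cover the whole space of shapes by repetition alone, while a learner restricted to a narrow band of motions cannot, no matter how many rocks it breaks.

```python
import math
import random

def learn_to_knap(range_of_motion, trials=20000, tol=0.15, bins=32, seed=1):
    """Rote learning of a strike 'feel': map rock shape -> strike angle.

    A rock's shape is idealized as an angle in [0, 2*pi); a strike
    succeeds when it lands within `tol` radians of that angle.  The
    learner can only swing within [0, range_of_motion) and memorizes,
    per shape bin, any strike that ever worked (repetition, no insight).
    """
    rng = random.Random(seed)
    memory = {}  # shape bin -> a strike angle that once produced a useful flake
    for _ in range(trials):
        rock = rng.uniform(0, 2 * math.pi)        # every rock is shaped differently
        b = int(bins * rock / (2 * math.pi))
        strike = memory.get(b)
        if strike is None:
            strike = rng.uniform(0, range_of_motion)  # no feel yet: flail at random
        if abs(strike - rock) < tol:              # a useful flake came off
            memory[b] = strike                    # remember the feel for this shape
    # Evaluate the learned feel on fresh rocks.
    hits = 0
    for _ in range(5000):
        rock = rng.uniform(0, 2 * math.pi)
        s = memory.get(int(bins * rock / (2 * math.pi)))
        hits += int(s is not None and abs(s - rock) < tol)
    return hits / 5000

beavis = learn_to_knap(range_of_motion=2 * math.pi)   # full degrees of freedom
chimp = learn_to_knap(range_of_motion=math.pi / 8)    # narrow range of motion
```

With these toy numbers, the full-range learner’s success rate ends up far above the restricted learner’s, purely because repetition can only build a template where the body can follow the shape.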
Thus, the selection pressure of food shortage in Beavis’ case is no longer operating on a pre-defined behavior or physical trait (one that would remain constant in each life circumstance or instance); rather, it is operating on a general set of behaviors that can change from one life instance to the next. How is this? Because Beavis can make a tool that can be used generally, in the sense that:

  1. It can be used again in the future, depending on need, for the same purpose or for any other purpose for which it is suited.
  2. He can make many of these and share them with his kin, who will also be more reproductively successful.

So, the degree of reproductive success this confers upon Beavis and kin depends on how a tool is used, case by case, in an individual’s lifetime, not on the fixed, inherited traits that lend skill and aptitude in splitting rocks (which also shows this body plan must arise from special selection pressures only; it must be dumb luck for it to happen). But obviously a selection pressure can’t operate on that directly. Rather, it operates on the population’s capacity for abstract thought, which is what predominantly governs the making and improvement of the tools and how those tools get used in any given instance, all of which, in turn, redounds to reproductive success. In this way, the selection pressure has transformed from the more common form, which I call special selection, to this form, which I’ve denominated general selection. And none of this would have happened without Beavis’ new body plan.

So, general selection has a two-pronged effect. It tends to accelerate cognitive evolution favorable to an understanding of physics (how the tools are made, which forms the basis for every other intellectual enterprise) and cognitive evolution favorable to social behavior (how the tools are used, which forms the basis of emotional IQ, language, social structures, etc.). Where general selection is absent and only special selection operates, these two things can be mimicked, but the mimics are fundamentally different in kind, because they are hard-wired and operated upon by special selection. So, yes, humans are special in that sense, but both forms of selection are still purely natural.

About Beringia

Okay, so what is this MPT (Mid-Pleistocene Transition) thing, and what exactly are we suggesting was going on about that time? The Mid-Pleistocene Transition occurred about 1.2 mybp and marked the beginning of a general cooling trend around the globe that continues today (curiously, the same time as the hss genetic bottleneck, which we’ll see is not so curious after all). Before the MPT, glaciations ran on cycles of about 41,000 years; after it, they expanded to cycles of about 100,000 years. During the 41,000-year cycles the overall climate was generally warmer, and modern-day eastern Siberia was warmer as well. The MPT thus marks a climate boundary in which the period before was much warmer than the period after (known from the species implicated in biogenic opal production). Nonetheless, episodic glaciations still occurred on 41,000-year cycles, and Beringia was dried and inundated numerous times before the MPT. If we think of populations randomly diffusing over time, we’d expect that HE, being a tropical primate and all, wouldn’t go too far north even in warmer times. But what the record shows is that HE was in fact in some pretty cool places, such as modern-day Georgia, during a similarly warm period. Thus, we submit it is unrealistic to think HE didn’t also diffuse into northeastern Siberia when the overall climate was much warmer (when populations diffuse, it isn’t a decision, it’s just a consequence of reproductive success). Yes, their numbers might have been smaller, as it was probably still colder than Georgia, but we should nonetheless expect some diffusion into that area. As the MPT approached, things got cooler.
But notice that by then we are 600,000 years of evolution beyond Dmanisi, Georgia, and some cold-weather local adaptation is to be expected on account of HE’s tendency to diffuse over the entire Old World. Notice also that as it got colder, the sea levels dropped (the first glaciation). As the sea levels dropped, “new land” to the east of Siberia opened up, and we know the climatic conditions there were more favorable than those on the Asian mainland. So, logically, HE would have diffused into that area as well. This is a kind of trap. As HE diffuses into Beringia, HE becomes trapped by the cycles of sea level change over time. Some 41,000 years later, as the cycle alternates, HE now sees his and her land being inundated, driving diffusion that spreads back both to Asia and to modern-day Alaska. This is the “killing time”, when natural selection is pumped pretty hard. But notice something. The MPT has passed, and now the warmest point in each glacial cycle is colder than the corresponding point in cycles before the MPT. The MPT has trapped HE in a vicious cycle of “killing times” from which he and she cannot escape. Only by acquiring greater cold-weather adaptations can HE escape this cycle. The nature of the Pleistocene is that this pattern intensifies with time, so that not only does HE have to evolve (change) to escape the trap, HE must evolve at a rate that exceeds the overall rate of global cooling. Eventually he and she do so, and the path of “least resistance” in this scenario is the ice-free corridor that opens up, not in Asia, but in North America. Thus, escape by diffusion will almost certainly lead to greater gene flow toward North America vice Asia.

To see this overall pattern, picture the cold and warm cycles as an oscillation in climate that looks the same from one cycle to the next; but when graphed over numerous cycles, the overall tendency is for each cycle’s lowest temperature to get lower, and for each cycle’s highest temperature to get lower as well. In other words, temperature decreases non-monotonically: it rises and falls within every cycle, but the envelope of the oscillation trends steadily downward.
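A minimal numerical sketch of that envelope (the trend, amplitude and period below are made-up values, chosen only to mimic a 41,000-year cycle riding on a slow cooling):

```python
import math

def toy_temperature(t_kyr, trend=-0.004, amplitude=3.0, period=41.0):
    """Illustrative temperature anomaly (deg C): a steady cooling trend
    with a glacial-cycle oscillation superimposed.  All numbers invented."""
    return trend * t_kyr + amplitude * math.sin(2 * math.pi * t_kyr / period)

# One sample per thousand years, grouped into 41-kyr cycles.
cycle_extremes = []
for c in range(6):
    vals = [toy_temperature(t) for t in range(c * 41, (c + 1) * 41)]
    cycle_extremes.append((max(vals), min(vals)))

maxima = [hi for hi, lo in cycle_extremes]
minima = [lo for hi, lo in cycle_extremes]
# Within any one cycle the curve rises and falls, yet each cycle's
# warmest point is cooler than the previous cycle's warmest, and each
# cycle's coldest point is colder than the previous cycle's coldest.
```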

The earliest secure dating of HE or hss presence in Beringia comes from archaeological evidence of lithic industry and human remains dating to 32,000 ybp on the Siberian side, at the northern mouth of the Yana river. The problem with this site, however, is that it is some 600 miles away from Beringia, yet it has been adopted as somehow representative of it (technically Beringia extends this distance, but here I define Beringia more provincially as the Bering Sea shelf, technically known as the Bering land bridge). The reality is that no pre-Holocene archaeological site in Asia or modern-day Alaska has been identified close enough to Beringia proper to provide meaningful insight (some would like to conscript the Yana river site to help explain the “refugia” hypothesis infra, but it doesn’t really work). What it does show, however, is that anthropologists have underestimated the extent of hss diffusion in extremely cold climates, as this site is at about 71 degrees North latitude and 32,000 years old.

The most significant topographical, and to some degree climatological, barrier to diffusion east or west in eastern Siberia is the Verkhoyansk Mountain Range. This formation, thus far, delineates archaeological sites on the west that are over 20,000 years old from sites on the east that are much younger. But this shouldn’t be surprising; the Verkhoyansk was glaciated from 25,000 to 14,000 ybp, so any finds of technological agency predating “Clovis” time to the east of that range would need to go deeper than 25,000 ybp. And that is not easy to do in that climate today. There appears to have been a “land” bridge connecting the continents of Asia and North America from about 60,000 ybp to about 11,000 ybp. Periods of land bridge availability prior to about 125,000 ybp have not been studied in detail but, given the Pleistocene episodic glaciation cycle, it is likely this pattern continued into the past, at least back to the Mid-Pleistocene Transition. This is significant because there has been virtually no interest in researching primate diffusion in this area prior to about 100,000 ybp, since the working assumption has been that no Homo erectus could have been anywhere near there. The earliest excavated sign of any Homo adapting to an environment of this class in Asia is probably the site at the northern mouth of the Yana river (east of the northern mouth of the Lena River), which is dated to 32,000 ybp, lies at 71 degrees North (above Beringia), and is an hss site, which we discuss later. The North American ice corridor leading to the plains of the modern-day USA likely opened 25,000 ybp, and might have supported a diffusion in that direction if sufficient wetlands with mollusks existed (there was little else to survive on until about 14,000 ybp, too late to support diffusion to a relatively securely dated site in South America called Monte Verde).
It has recently been discovered that mollusks may have been a central feature of the life of Homo erectus, evidenced by the first signs of abstract expression: the rendering of imagery on shells, dated to roughly 430,000–540,000 ybp, at modern-day Trinil, Java (“Homo erectus at Trinil on Java used shells for tool production and engraving”, Nature, Joordens et al., 2014). The suggestion is that if this is an early representation of abstract expression inhered in nature, that is, the first recorded thought, then perhaps mollusks were a profound and important source of food for Homo erectus (it also suggests that general selection pressures were already at work on HE, which is why we must totally rethink HE’s ability to locally adapt).

As the narrative progresses, we will see more and more reason why we should be looking at an extremely cold environment, and we already see why a simplex bottleneck to North America may be involved (Type B lice). There are other places to consider, such as northern Siberia on the Arctic coast. While diffusion along the frozen Arctic coastline might be relatively simpler, diffusion into the hinterland of Siberia might be harder, establishing a simplex bottleneck to North America. This would also explain why the east Asian population is the odd one out. But it requires steeper assumptions, such as the idea that HE could survive at 71 degrees latitude and that some refugium was available to them after the Mid-Pleistocene Transition began to cool things down to where they are now. If we can find a refugium near the Siberian Arctic Ocean boundary, I think the odds would be better.

Click image to enlarge. A key area of interest for the recovery of fossil remains is the south-facing side of the Aleutian island chain, which was part of Beringia and which would have had the mildest environment during glaciations. Wetlands for mollusks were likely abundant. Modern-day Seattle, USA is at 48.5 degrees North, and the southernmost Aleutian rise is at 51 degrees North. Dmanisi, Georgia, where 5 HE fossils dating to 1.8 mybp were found, is at 41 degrees North and 1,015 m altitude, vice about −70 m for the pre-MPT Bering shelf. The outline in the Bering Sea denotes the extent of dry land during glacial maximums.

The lithic industry in Beringia provides yet another occasion for the consensus view to suggest that something “rapid and unprecedented” must have happened, which sounds like special pleading when it is done too often (see the later discussion about a rapid diffusion to South America, rapid Asian diffusion, rapid Asian evolutionary change, etc.). What we find is that there is a stark difference between the lithic industries of the lower Americas and those of Beringia and east Asia, and the consensus view thus far has been to attribute this to “rapid innovation” in lithic technology. Given the pattern of “rapid” things going on in the “peopling” of the Americas, it is more realistic and likely that:

A different lithic industry existed in Beringia at very low stratigraphic levels. Arguments that the lack of evidence for earlier occupation in Beringia reflects reality are better explained by three facts: there has been less deliberate excavation of Beringia on account of effort and costs (selection bias); the very sites most likely to be oldest are now beneath the sea; and, notably, the stratigraphy of Beringia is notoriously complex and unreliable. Everything we have on the table tells us that Clovis culture likely had archaic antecedents on the Bering shelf, and, vexingly, we can’t get to them. Less ancient antecedents are likely strewn in deeper, bad stratigraphy not just in the modern-day Yukon Territory but all the way down to the USA plains. To further confuse the matter, Clovis likely originated in South America from this plains antecedent, which is quite deep and will likely carry diagnostic criteria different from Clovis. Occupation at the approximate longitude of the modern-day Great Sioux Nation (Oglala Lakota) should be sought at much deeper levels, gradually getting deeper as you go north into modern-day Montana, USA, and Alberta and Saskatchewan, Canada. Some diffusion east of the Rocky Mountain barrier to the Mississippi river, biasing north toward the Great Lakes Region, should be expected.

It has been well known but less discussed that the Asian populations of today are too genetically distinct to be linked to North Americans today (discussed previously; this affinity issue has more to do with ancient east Asian DNA). In other words, the migration hypothesis, which involves Asians migrating into the Americas, is not sustainable based on what we know about genetics. While one can simply assume it happened and then calculate the number of migrations and the times they occurred (which has been done), this doesn’t resolve the problem of overall genetic affinity. There’s some fancy dancing in this ballroom: geneticists calculate Native American “migrations into North America” by testing against modern-day Asian genes, then assume a “migration” whose direction is conspicuously unrealistic, nigh impossible. Since it has been learned that humans might have occupied Beringia for some 15,000 years before diffusing into North America, some have quickly snapped this “refugia” hypothesis up as the explanation (Tamm et al., 2007). It has been supported by direct evidence as well, showing that a population occupying the geography of Beringia will likely show genetic indications of exposure to selection pressures consistent with bottleneck and refuge (“Analysis of Mitochondrial DNA Diversity in the Aleuts of the Commander Islands and Its Implications for the Genetic History of Beringia”, Derbeneva et al., 2002). But this is a kind of Trojan horse. Once you suggest a recent instance of this, then show that Beringia does in fact produce a bottleneck on Homo species, you invite entirely new interpretations of how the migrations over time occurred. Apparently, once again, this subtlety, like so many others, has not dawned on the proponents yet. In other words, it really is special pleading, justified on the basis of direct evidence that we cannot obtain, to say that this happened only once. When you show that it can happen once, you show it can happen many times.
It probably did. Once we take the “refugia” hypothesis as given, it becomes unrealistic to assume it occurred only once, at some magical point in history, when the very context of the proposition is cyclic.

And this takes us back to the Sahul, that region beyond the famous Wallace Line. Homo and elephants crossed the Wallace Line (narrowest passage about 20 miles of open ocean) at least 500,000 years ago. Why couldn’t they cross the Bering Strait? All manner of mammals were crossing, both within and outside the ice age. The idea that only Homo erectus was excluded is not realistic. It has not gone unnoticed in this analysis that HE was a “tropical” species, but given the time frames we are talking about, assuming a lack of local adaptation over time is not realistic, and we know other mammals were doing just that. I will now define, for my purposes, the term “signal”:

A signal is any observed propagation, via any combination of the strong, weak, electromagnetic or gravitational forces of nature, which may or may not be the product of technological agency. For the anthropologist we are usually talking about visible light, but it could be anything within this definition. So, it could also include, for example, the decay products of any mass, a radio signal, gravitational waves or charged particles. Because “signal analysis” is a lengthy topic in its own right, I’m going to save it for the follow-up treatment to this article, where I discuss the evolution of social behavior.

And if the hypothesis advanced of general selection is correct, we should always anticipate that:

Wherever and whenever an organism exhibits a mechanical plan supporting general selection, and in the absence of other factors, it should be expected that abstract mental faculties will increase in proportion to the pressures present as an integral over time; a static cognitive faculty is not the expectation. Ergo, validation of the presence of technological agency should begin with an examination of the body plan to determine if it is capable of general industry. Secondary to that, a putative artefact can be examined for the same purpose by its signal: if general industry is shown necessary and sufficient for the putative artefact’s observed state, the putative artefact was in contact with technological agency.

HE had this mechanical plan some 2 million years ago. Thus local adaptation on Earth by mental faculties is to be expected, and things such as social skills, clothing and the use of fire should be expected. Once socialized it will likely evince an exponential pattern. Because the socialization aspect is the least understood aspect of technological agency, I will not engage it in this particular treatment, but will examine it in a follow-up. If we observe “stone tools” of putative HE origin, we can, in the simplest case, examine the tool by its signal.

Of note is that elephants are known to swim at least 50 km in open ocean, and humans and HE could certainly hitch a ride (some humans do this today). This is something to keep in mind, as other mammals might also serve that purpose, including those in Beringia (Big Diomede island can be seen from the Bering Strait shoreline). The real challenges to crossing are not the water or even the cool climate but, rather, the large mountains of ice that showed up after 1.2 million ybp. More recent evidence has suggested that Beringia was in many ways a kind of oasis, in that glacial extent into this region was limited even while glaciation outside it was considerable. It consisted mostly of prairie, with woodland near rivers and freshwater. When glaciations retreated (there were several), gene transfer was likely from west to east, as the encroachment of the sea was from west to east. This could be the region where roughly 18,500 “persons” were “trapped” for an extended period in which selection pressures could have driven traits like those we see in Neandertal (UV changes, fair skin, light hair, etc.) and Denisova (large body frame). This would have been the first glaciation, which began around 1.2 million ybp. Afterward, the glacial cycles would have been similar but less extreme, as the climate in Beringia was most likely warmest before 1.2 million ybp. Beringia likely contained considerable wetlands near the ancient coastal areas, which might have provided a food source in mollusks (mollusk species are most diverse in North America, tend to appear in wetlands at low altitude, tend to appear in areas where surface friction is high, such as by rocky surfaces, and generally match this environment well). Mollusks are quite common in Beringia today and were in the past. Wetlands are a salient feature of this area as well. Mastodons were in Beringia as late as 125,000 ybp.
What is particularly noteworthy, and something apparently not noticed even by the “aquatic ape” proponents, is that the human dive reflex, which reduces heart rate and metabolism to conserve oxygen when submerged, is especially strong in cold water, more so even than in most other aquatic species. Please read that again, because it suggests that if you’re looking for aquatic adaptation in humans, you might start in particularly cold waters, like those of Beringia.


Click image to enlarge. I decided to find the diffusion route of highest probable gene flow by minimizing on altitude and latitude. The result was a surprise. Homo erectus never needed to do half as much as assumed to reach Beringia. Upon the first glaciation Homo erectus could have walked and lived on relatively flat lands, rising only slightly to cross over what is now a flat valley across Kamchatka to the Pacific side. He was never more than 10 degrees above the latitude of Dmanisi. But it also explains the biasing of the bottleneck to preclude flow back to Asia. On the Asian side at glacial minima this fortuitous route was closed by the sea and the Beringians had to diffuse over mountain ranges at higher latitudes at the Bering Strait to penetrate Siberia. At glacial maxima notice the narrow shore line on the eastern shores of Siberia; this was covered in glaciers and ice ran to the sea. On the North American side during glaciation modern-day Alaska, USA had an ice free corridor going directly to the USA midwest. When the glaciers melted Beringia flooded gradually with a shoreline moving from west to east, forcing the Beringians to the North American side, which had ice-free regions in the southwestern and central portions of modern-day, mainland Alaska, USA. Thus the bottleneck on the Asian side was continuously duplex while the bottleneck on the North American side alternated between duplex and simplex according to glacial cycle.

A quick note about population size: the data we have suggest that both Neandertal and Denisova populations were highly inbred, commonly engaging in close inbreeding (parents and offspring producing offspring). While some studies note the population truncation at 1.2 million ybp, some assume that this means the population was also widely distributed across the globe. But this assumption is mysterious, since there is no knowledge as yet of who the hss ancestor of 1.2 mybp actually was. I had for some time suspected strong inbreeding as an explanation for the emergence of human language, but that subject is so subjective I won’t get into it here. I will just suggest that human language likely emerged as a result of intensifying reciprocal altruism in Beringia, in which ethics of trust were first formulated between kin, primarily as a means to prepare and plan for the deadly effects of climate. As Beringians, Denisova and Neandertal expanded across the globe, this cultural behavior likely expanded with them, until the practice of inbreeding gradually transformed into interbreeding.

Recent studies have shown that in the most recent habitation of Beringia, as we suspect for the much earlier HE habitation, the people lingered there for a while; in the most recent case, about 15,000 years. Then they spread, some into Asia, but mostly into North America (consistent with our point about gene flow following the retreating shoreline in Beringia, as well as the ice corridor). But once again, interpretations are made that are non sequiturs. It is “believed” that the relatively uniform distribution of genetic information from North to South America is due to a rapid diffusion after this last bottleneck. But this does not necessarily follow, as it could also be due to the fact that a population was already present in the Americas. Another recently bolstered finding is that the massive die-off in the Americas most likely occurred due to a pathogen brought by the arrivals from Beringia. A pathogen could likewise have wiped out most of the “native” population already in the Americas once other pathogen-carrying hominids from Asia re-entered them.

Sedimentation rates on the Bering Sea floor appear to average about 3 cm every thousand years, suggesting a very approximate depth of 36 meters for sediments 1.2 million years old (the rate may not be constant, and some cores go much deeper at that age). Few cores have been drilled this deep, and most appear to tap out at around 300,000 ybp (about 9 meters). Thus, drilling depths of 50 meters or more would need to be anticipated to ensure a good look back to, and even before, 1.2 million ybp. Deep cores (to 700 m below the floor) have revealed that carbonate deposit layers become progressively thicker and more frequent as you go deeper.
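The depth figures quoted above are simple proportionality; a sketch (the constant rate is of course an idealization, as noted above):

```python
RATE_CM_PER_KYR = 3.0  # quoted average Bering Sea sedimentation rate

def sediment_depth_m(age_kyr, rate_cm_per_kyr=RATE_CM_PER_KYR):
    """Depth in meters of sediments of a given age, assuming a constant rate."""
    return age_kyr * rate_cm_per_kyr / 100.0

depth_at_mpt = sediment_depth_m(1200)  # 1.2 million years -> 36 m
depth_at_300k = sediment_depth_m(300)  # where most cores tap out -> 9 m
```

This reproduces the ~36 m and ~9 m figures and shows why a 50 m core gives a comfortable margin past the MPT.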

It is believed that about 1.2 million years ago there was a population bottleneck of hss ancestors, with a total population of about 18,500 (there was another, around 70,000 ybp, of about 10,000 persons). This, 1.2 mybp, is precisely when the Mid-Pleistocene Transition began and global climate began to cool. Since no global extinction events occurred at that time, climate is the likely reason for the bottleneck. The only location on Earth where HE could have been bottlenecked by climate, given that HE was a still-adapting tropical primate, was Beringia (no other extreme-latitude refugium has been identified). And it fits, because the eastern end of Siberia was considerably warmer than today just prior to this. At this time, Homo erectus was essentially the same size as hss (maybe slightly smaller), and this would be a favorable adaptation to a cooler climate. Absent a global extinction event, the bottleneck was most likely produced by a physical barrier that separated one population of Homo erectus from another. Given the wide dispersal of Homo erectus, including having crossed the Wallace line, whatever it was must have been a very solid barrier (rivers, mountains, etc. seem to have had little impact on HE), with an interior territory large enough to sustain HE through this period of bottlenecked genetic change. The most obvious candidate is Beringia. This comports with the view we’ve been building; namely, that HE entered the Americas sometime before 1.2 million years ago, that the arrival was likely somewhat close to 1.2 mybp and arduous (befitting Beringia), given the small population numbers, and that the occupation likely endured for a long time. Thus, this reasoning also points to the Americas.

While Beringia may have been much warmer before the first post-MPT glaciation set in 1.2 mya than it is today, what about the water? We know that HE or hss crossed the Wallace line to reach Australia, and HE also likely crossed open ocean to reach islands like Flores. The width separating land in the Bering Strait is no greater than about 25 miles (Big Diomede). One of the things overlooked in anthropogenesis has been this very issue. What no one has apparently realized is that prior to 1.2 mya, crossing the Bering Strait, indeed surviving in that area, would have been far easier than it is now. So the challenge of “making the passage” is a completely different one when we backdate it to this time. And once again, it is unrealistic to think that HE could make it over open ocean beyond the Wallace Line but not likewise cross the Bering Strait. That just doesn’t add up. Granted, the number of HE that crossed might be low, considering how far north this is and considering the difficulty of the journey, but 18,500 reproducing “persons” is reasonable over 200,000 years’ time. And that is about what we’re looking at. After that, the Bering Strait would begin to resemble more what it looks like today: forbidding and cold even in the best seasons. This is probably the founding population of humanity that geneticists identified as the 18,500 persons isolated from all others 1.2 mya. Something happened in the Americas that did not occur in the Old World, and that is what made us human.
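As a sanity check on “reasonable over 200,000 years”: the founding-band size and generation length below are my own assumptions, not data, but they show how small the required growth rate actually is.

```python
# How fast must a small founding band grow to reach the ~18,500-person
# bottleneck population in ~200,000 years of Beringian isolation?
N0 = 50            # assumed founding band (hypothetical)
N_TARGET = 18_500  # bottleneck estimate quoted in the text
YEARS = 200_000
GEN_YEARS = 25     # assumed generation length (hypothetical)

generations = YEARS // GEN_YEARS  # 8,000 generations
# Solve N_TARGET = N0 * (1 + r)**generations for the per-generation rate r.
r = (N_TARGET / N0) ** (1.0 / generations) - 1.0
```

The rate r comes out well under a tenth of one percent per generation; a population can be barely scraping by and still hit these numbers, which is the point.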

About Asia

If the remains found at Dmanisi, Georgia appear to represent an early form of HE, it is probably because they are. Five intact HE skulls, aged 1.8 mybp, were found; they showed a wide variation in skull shapes, yet all were still within the range of a single species. It was a stark reality check on confirmation bias: found separately, such skulls would likely have been classified as different species. Viewed together, they are obviously the same species, just with normal variation. What it reveals is that confirmation bias has skewed prior assumptions. The find was reported in the October 2013 issue of Science. It suggests that Homo erectus was apparently the only hominid, or one of just a very few, and that the innumerable “species” found in Africa were likely artifacts of confirmation bias in analyzing fossil skulls. This is the narrative that recruits the fewest assumptions and is therefore more likely. Homo erectus was likely an omnivore that ate the stomach contents of its prey as well as other hominids, including Homo erectus.

Thus, the narrative that is forming is one where HE emerged at an unknown location (but probably west of the Bering Strait) and arrived in the Americas probably around 1.3 million years ago. After being isolated, three variants of HE descendants emerged: hss, Denisovan and Neandertal. Each likely emigrated from the Americas: to Europe, across the Pacific and, in greatest volume, across Beringia. It was likely Denisova to the Pacific, Neandertal to Europe and hss to northern Asia. From 1.4 to 2 million years ago the topography of Beringia was being shaped by Pre-Illinoian glaciation. Numerous HE local adaptations likely occurred throughout Asia, which is the simplest explanation for the variety of fossils found there. As in Africa, confirmation bias is likely to appear wherever one has a predisposition, for whatever reason, to classify fossilized remains as a unique (non-HE) species. The reality is probably that there was greater intra-species diversity in HE than there is in hss, and this seems to be overlooked too often.

Fair skin: another mystery wrapped in the enigma, but likely an Asian phenomenon

The subject of the emergence of fair skin, as one might expect, entails ethnic confirmation bias that runs high in the literature and becomes fairly easy to see for the pixie dust it is. Nonetheless, we need to examine it in the context of the larger enigma. We begin by describing the adaptation so that we can better understand from whence it might have come. Here are the facts.

The balance of evidence suggests that fair skin involves changes in the DNA related to the production of melanin and a substance called urocanic acid. Both are very strong UVB blockers, which is maladaptive in climates where UVB is low, because UVB is needed to synthesize Vitamin D. It could also be maladaptive in dry environments, as urocanic acid has been associated with moisture retention; a fair-skinned hominid would thus be more prone to dehydration and dry skin. But this offers yet another moment to take stock of what is realistic versus what isn’t. If two phenotypes have changed, both of which relate to UVB exposure, water and Vitamin D, the simplest and most realistic conclusion is that the population was exposed to an environment where UVB was consistently (averaged over a lifespan) very high or very low, where water was available in nominal amounts, and where natural sources of Vitamin D were limited. This can happen anywhere two conditions present in tandem: extreme latitude (which creates winter days that are very short, such as 4 hours or less) and a lack of dietary variety, such as can occur when the climate is extreme. But we seek an area where water is available, either through a humid climate or through freshwater sources.
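The claim about very short winter days at extreme latitude is easy to check with the standard sunrise/sunset hour-angle formula. The sketch below is purely illustrative (the function name is mine, and atmospheric refraction is ignored):

```python
import math

def day_length_hours(lat_deg, decl_deg=-23.44):
    """Approximate daylight duration (sunrise to sunset) in hours at a
    given latitude, using the standard hour-angle formula.  decl_deg is
    the solar declination; -23.44 degrees is the winter solstice."""
    lat = math.radians(lat_deg)
    decl = math.radians(decl_deg)
    x = -math.tan(lat) * math.tan(decl)
    if x >= 1.0:    # polar night: the sun never rises
        return 0.0
    if x <= -1.0:   # midnight sun: the sun never sets
        return 24.0
    h0 = math.degrees(math.acos(x))  # hour angle at sunrise/sunset
    return 2.0 * h0 / 15.0           # 15 degrees of hour angle per hour

# Latitudes in the rough range of central Beringia
for lat in (55, 60, 62, 65, 67):
    print(f"{lat} N: {day_length_hours(lat):.1f} h daylight at winter solstice")
```

At around 62 degrees north the formula gives roughly 4.7 hours of daylight at the winter solstice, and it drops below 3 hours by 65 degrees, so a “4 hours or less” winter day does indeed correspond to subarctic latitudes of the kind discussed here.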

As regards Vitamin D: Vitamin D is in fact a class of hormones (secosteroids) responsible for more efficient intestinal uptake of calcium, iron, magnesium, phosphate and zinc. Therefore, if we wish to understand the selection pressures involved, we need the full picture; namely, what is affected by increases or decreases in these hormones. On one end, wherever the environment is deficient in direct UVB, calcium, iron, magnesium, phosphate and zinc, there could be positive selection pressure for these changes (fair skin). On the other end, wherever problems known to be associated with Vitamin D deficiency are a burden, there could likewise be positive selection for fair skin. This includes selection for stronger bone (or higher resistance to the softening and weakening of bone seen in rickets), cognitive enhancement, higher resistance to infectious disease, moderation of severe depression and lowering of all-cause mortality rates (especially in the elderly, a strange connection).

Presumably, where the environment involves deficiency on the former end and demands on the latter end, the selection pressure for fair skin would be strongest. Both disease and depression would be particular challenges at extreme latitude. This doesn’t imply that all these factors were relevant, only that at least one of them likely played a role. And in some cases they may be only proxies of other changes. For example, if fair skin does in fact accompany some degree of cognitive enhancement, it does not mean that fair-skinned persons are smarter. It only means that the selection pressure for higher intelligence (which we already suspect was present at Beringia) was there, so that someone having fair skin today need not have also inherited the “higher intelligence”. These are just proxies that tell us what kind of environment the fair-skinned hominid was most likely exposed to. One of the more likely contenders is the pathogen connection. Note that Beringia was a locked bottleneck of limited geographic extent with a population growing for some 800,000 years with little relief. Due to this salient fact, my own suspicion is that pathogens were the primary selection pressure behind fair skin. We’ve noted the significance of pathogens in North America and how this might have affected the population there as well. The pathogens introduced were perhaps not coming from Asia but more likely from Beringia, which is beginning to look more and more like a pathogen incubator. Fair skin appears to confer significant advantage in this regard (Irvine & McLean 2006). But we haven’t yet formed an argument that fair skin is more likely to have something to do with Beringia than with somewhere else. We take that up infra.

As regards moisture retention, it is important to realize that dehydration is a function of two variables: atmospheric aridity and access to freshwater. In Beringia the air was indeed dry, but freshwater was plentiful (wetlands abounded), so the likely selection pressure interaction involved Vitamin D, via reduction of melanin and urocanic acid in the skin.

Finally, it has been noted that fair skin, at least in Europe, appeared very recently (7000 years ago, the same date obtained for the Wendover site, discussed later). But as with other forms of selection bias, this doesn’t tell us where it first appeared (the study was based on a sub-population increase, not an origin with n=1); in other words, its appearance in Europe could reflect admixture with a different population about 7000 years ago. As we’ll see later, this could indicate the approximate time that Beringian descendants on the Atlantic seaboard of the modern-day USA and Canada (most likely modern-day Florida, where the trade current was strongest) first arrived in northern Spain and Ireland in sufficient number to introduce this change into the European population. That originating population would itself have been, by that late date, an admixture of Beringian descent and populations that had recently diffused into North America from Asia. Only after that would the demography of Europe begin to reflect what we see today. But the real disappointment in these findings dating fair-skin genotypes to as recently as 7000 ybp is that skin pigmentation variation has also been confirmed to be controlled by over 100 genes. In other words, this is hardly a complete picture of what is going on.

Taking account of these facts, we must ask some hard questions. Recorded history reveals that the complexion of Europeans was as it is presently as far back as 2000 years ago, or at least that this was a common complexion in the region. Therefore, we are looking at a window as short as 5000 to 8000 years for the genes associated with fair skin to have spread throughout Europe. But the problem is more pronounced than that: fair skin appears not only in Europe but pretty much around the world, in an approximately monotonically increasing pattern as latitude approaches 90 degrees North. There is another, less discussed pattern: complexion grows lighter as we move west from North America, and the anchoring of this gradient in North America suggests access to Vitamin D in seafood in a climate where land sources of Vitamin D are scarce. Moreover, though over 100 genes are involved, the “center of mass” of color variation seems to revolve around a specific type of albinism, suggesting the possibility that fair skin originated with an albino population (“Single nucleotide polymorphisms in the MATP gene are associated with normal human pigmentation variation”, Graf et al, 2005). Albinism in hss may be the result of ancient hybridization with a species of evolved albinism. This is speculative for now, but it comports with our view of an Arctic period of bottlenecked evolution in which melanin would simply be maladaptive at any concentration. What we are suggesting is that the Beringian species may have evolved normal albinism, such that pigmentation variation didn’t exist, and that when they hybridized with HE the result was an “abnormal” presentation of albinism in hss that was at first common but grew rarer with succeeding generations of hybrids.
There are several lines of evidence hinting at this possibility; namely, that albinism is associated with as yet unidentified pigmented reticular cells in human bone (the issue of bone, Vitamin D and malnutrition is salient in arctic climates), with slowed blood coagulation (an adaptation in an extremely cold environment to permit clotting when it would otherwise clot too fast due to sharp temperature changes) and with the immune system, which we’ve already seen is another salient feature of arctic adaptation (“Albinism Associated with Hemorrhagic Diathesis and Unusual Pigmented Reticular Cells in the Bone Marrow: Report of Two Cases with Histochemical Studies”, Hermansky et al, 1959, and “Linking Albinism and Immunity: The Secrets of Secretory Lysosomes”, Stinchcombe et al, 2004). In other words, the broad variation in color from brown to the constellation of colors seen today may be the result of hybrid evolution over time to “correct” the albino contribution from an earlier progenitor ancestor. That albinism can be related to extremes of climate is suggested by its convergent appearance in different populations (“Genetic analysis of cavefish reveals molecular convergence in the evolution of albinism”, Protas et al, 2006). It is widely known that albinism is associated with vision problems as well (our interest being in the albino-related issue of photophobia), and this further implicates vision as a key genotypic distinction between hss and the Beringian stock. Such associated complications are well known amongst hybridization specialists as phenotypic outliers, or complements: given a hybrid and the ancestor of one sex, they are used to identify the phenotypic characters of the ancestor of the other sex.
Thus, we can expect that Beringian DNA, when found, will show considerable adaptive changes involving immunity, bone, brain development and possibly cranial size (larger than hss), an absence of melanin or “color”, considerable adaptations of vision (possibly larger eyes for arctic low-light conditions), etc. At this point, again, this is all conjectural, but if any of these traits should be confirmed, it would strengthen the case for researching the others.

As the ratio of Beringian to HE ancestry decreases, the probability of hss albinism complications should increase if our proposition is correct (the proposition being that hss albinism represents a “defective” variant of the Beringian albinism). Therefore, wherever we see greater pigmentation in hss, we should see a greater incidence of albinism, all else being equal. This is a key clue pointing to an albino ancestry of hss as a hybridized species. In speaking of Beringian versus HE ancestry, however, we should note that this ratio is nearly uniform throughout hss today, and that whatever overall difference exists is likely smaller than the differences within those groups themselves. The validity of making hybridization inferences about humans vis-à-vis albinism would seem to be undercut if the same complications of albinism also exist in other species in nature, as albinism appears in numerous species; otherwise, it is strengthened. But this won’t settle the issue: even if the argument seems undercut (other animals do have vision, brain cell and blood coagulation complications, for example), the “albinism” we speak of in a Beringian stock is a unique condition not properly called albinism or fair skin, so it is difficult to assess whether hss albinism appears on account of the same mechanisms that give rise to it in other species, or whether there is an additional factor, such as being a hybrid of Beringian and HE. The picture is further complicated by the fact that, as stated, albinism is a convergent phenomenon, so it is entirely possible that several species have this history because of an earlier exposure of a sub-population to extreme cold whose genes persisted in the phylogeny, possibly long before many of these species existed as such.
And there are in fact hints in the literature and in statistics that humans suffer a broader range of complications from albinism than other animals, that it arises more frequently with inbreeding, and that it does not follow a strictly Mendelian pattern.
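The claimed link between albinism incidence and inbreeding can be illustrated with textbook Hardy-Weinberg arithmetic for a recessive condition. This is only a toy calculation under standard population-genetics assumptions, not a model of the Beringian hypothesis; the function names are mine, and the 1-in-3000 example is a figure of the order cited later for East Africa:

```python
# Textbook Hardy-Weinberg arithmetic for a recessive condition,
# used here only as an illustrative sketch.

def recessive_incidence(q, F=0.0):
    """Incidence of a recessive condition given allele frequency q and
    inbreeding coefficient F (F=0 corresponds to random mating)."""
    return q * q + F * q * (1.0 - q)

def allele_freq_from_incidence(incidence):
    """Invert the random-mating case: q = sqrt(incidence)."""
    return incidence ** 0.5

# If a population shows albinism in ~1 of 3000 births under random
# mating, the implied recessive allele frequency is about 1.8%:
q = allele_freq_from_incidence(1.0 / 3000.0)
print(f"implied allele frequency: {q:.3f}")

# First-cousin mating (F = 1/16) raises the incidence noticeably,
# consistent with the observation that albinism tracks inbreeding:
print(f"random mating: 1 in {1.0 / recessive_incidence(q):.0f}")
print(f"F = 1/16:      1 in {1.0 / recessive_incidence(q, 1.0 / 16.0):.0f}")
```

Under these assumptions, an inbreeding coefficient of 1/16 roughly quadruples the incidence, which is why elevated albinism rates alone cannot distinguish between a reservoir of ancient “albino” ancestry and simple consanguinity; the text’s non-Mendelian hints are what would separate the two.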

This finally brings us full circle on the topic of color in hss, and as you might have guessed, I had a mercenary but premeditated purpose. It is one more reason to suspect a lack of realism in the Out of Africa model. The highest incidence of albinism in contemporary hss is in East Africa (about 1 in 3000, and 1 in 1000 in some local areas). But this is supposed to be the proximate fount of hss, and is host to the most diverse modern-day gene pool on Earth. The strongest selection pressure operating against albinism is a tropical climate, exactly what we see there. And this is empirically demonstrated: death rates of albinos are highest in this very environment, generally related to UV exposure and other climate-related complications. Thus, it is hard to understand why hss would originate there, then diffuse into the rest of the world carrying all those recessive albino genes, when apparently no nook or cranny in the world offers stronger selection pressure against albinism than the supposed homeland itself. This is not realistic, and this was the primary reason for digressing on the albino topic: to show that we rather have reason to think that admixture, from a very different geography that only finally converged on East Africa recently (200,000 ybp), is the reason the genes are concentrated there, as previously discussed.

As we move west, dietary scarcity increases as access to land decreases. This is likely the Rosetta stone for understanding the appearance of this complexion in human populations. Because of Arctic ocean ice and the lack of southern excursions of the coastline such as those seen in North American topography, access to Vitamin D likely decreases as we move from Beringia to Scandinavia. But the Arctic coastline, as mentioned previously, does in fact provide a much easier “highway” of diffusion west, and provides admixture isolation from the populations to its south. Thus we ask: could a middle eastern gene alone explain the observed global distribution of the fair complexion? It is possible, but not probable. It would be unrealistic to think that a gene originating in the middle east only 10,000 years ago not only swept the globe east to west circumferentially, but did so in a gradient running counter to its origin. The authors of the study that found the recent gene in the middle east (reported in Nature) acknowledged that, at best, this gene accounted for perhaps one-third of the complexion seen in selected European sub-populations. Something doesn’t add up. And we’ve asked before, when we first considered Beringia, whether the “refugia” could have been on the Arctic coast, which, in a similar way to Beringia, might provide a simplex bottleneck to the Americas. We softly rejected that hypothesis in favor of Beringia because the environment was too challenging for a species transitioning from HE to some locally adaptive variant. But what about a variant in much more recent times? Thus a far more realistic narrative presents itself:

As we approach the end of the Pleistocene, and as the population at Beringia admixed toward a stock more closely resembling hss, a relatively unadmixed Beringian stock likely diffused west on account of improved survivability. They continued their diffusion with little or no admixture from the south, west beyond the Ural Mountain Range, beyond the region of modern-day Murmansk, Russia, and around the northern extreme of Scandinavia, finally moving inland along fjords and there admixing with populations that had diffused from West Eurasia previously. Fossils as well as preserved DNA will likely be found on the Siberian-Arctic continental shelf continuously along this approximate latitude. This contribution to the complexion of Europeans was likely supplemented by a lighter diffusion from North America to the areas of modern-day northern Spain and Ireland. This gradient is consistent with the one observed, and likely accounts for the majority of the fair-complexion characters normally associated with the “whitening” of Europe. The remains found near the mouth of the Yana River, dated to about 32,000 ybp, are likely an admixed result of this earlier diffusion, and a careful analysis of this autosomal DNA may yield further clues. The point of entry inland should be sought at whatever latitude sea ice is expected to be absent year-round, and may provide a unique opportunity to find such fossils at stratigraphic layers in the 50,000 to 500,000 ybp range. The genes identified in the middle east are likely themselves the result of an adaptation originating further east, as Beringian stock admixed there at lower latitudes.

About North America

It was believed in the days of “Clovis-first” that there was an ice-free corridor from Beringia to the North American interior which would have provided a diffusion route before the end of the Last Glacial Maximum (LGM). The problem with this theory was that considerable evidence mounted to show that no such corridor existed and that the route had indeed been blocked. Archaeologists then set about trying to reconcile North American dates that preceded Clovis by suggesting a route from Beringia to modern-day Oregon, USA, or that vicinity, as the northernmost extreme of a viable founders’ homeland. But the idea that early humans diffused over a distance in excess of 1000 miles is unrealistic on its face; it requires not simply human travel to that area, but significant gene flow from Beringia to the founders’ new homeland. The more parsimonious explanation is that humans arrived before the LGM began. And this remains parsimonious with or without direct evidence of their presence in North America at that time, if confirmation bias has precluded searches for that direct evidence in the past. And it certainly has, for such a search requires deliberate excavation below Clovis stratigraphic layers, something researchers have only recently begun to do, with some timidity. But if we accept a date prior to the LGM, then we also open the door to HE arriving yet earlier. This is the door no one wants to open, but the body of evidence clearly shows it should be. And prior to 1.2 million ybp it gets yet harder to deny the HE presence when we consider that Beringia and the entire region to its southwest and southeast was much warmer than it is today. Realistically speaking, it begins to sound more and more like special pleading to argue that HE just, as we stated, stopped in Asia for no explicable reason.
In any case, recent data suggests that the full closure of the ice sheet does in fact preclude a Clovis date for arrival, but it also shows that full closure is a somewhat rare event with glaciations and that the closure during the LGM was one such rare event.

Over time, this suggests a potential bias in gene flow from Beringia toward the Americas rather than Asia.

This begins to paint a picture of a dynamic bottleneck in Beringia potentially lasting hundreds of thousands of years, first occupied by HE around 1.4 mybp. Over the ensuing millennia, HE in Beringia most likely supplied gene flow through a bottleneck that would swing from full to partial depending on the glaciation cycle. And with each loosening of the bottleneck, gene flow went in both directions, but with greater intensity toward the Americas, at least at first. Much later this might have changed as HE evolved a greater capacity to live and survive in colder climates (HE began as a tropical primate, and where HE came from is a fascinating curiosity; the region extending from Africa through the Levant to the larger Caucasus area is probable). We would propose, then, that HE in Beringia was a local adaptation of HE into a colder-weather primate. Thus it is likely that the Americas bias eventually eroded, and episodic gene flow occurred in both directions more or less equally. This has massive implications for the emergence of hss.

Only Beringia realistically explains both the high inter-group genetic diversity and the high intra-group language diversity of Native Americans (yet another vexing puzzle for anthropogenesis). Language is known to be associated with human dentition; language groups are often correlated with dentition features. But with the right conditions this need not be the case. If language first emerged in Beringia then we have newfound, parsimonious explanatory power that is hard to achieve otherwise. If gene flow progressed from Beringia into Asia and America, then language likely followed. If the Old World experienced gene flow from Beringia through admixture over time, then language diversity would expand enormously, since dentition diversity was likely high in the native Old World populations being admixed (these Old World “people” would likely be locally adapted HE). But the intra-group genetic diversity of the HE in the Americas would be quite low. If, as seems likely, Beringia remained more of a barrier to diffusion from Old World to New than from New World to Old (on account of the ice sheet corridor), and if the Old World HE population were much larger (as seems natural), then small numbers of highly dentition-diverse populations diffusing from Old World to New would tend to dramatically increase language diversity in the Americas, with a less punctuated increase in genetic intra-group diversity. Most of the gene flow into the Americas would continue to be dominated by a Beringian population of very low genetic intra-group diversity while, simultaneously, Beringia and all of the Americas took on greater and greater language diversity from the Old World.

What we know about the Last Common Ancestor (LCA) of Neandertal and hss

As it turns out, genetic studies have shed some light on the LCA of Neandertal and hss. This is important because, according to the inferences we’ve assembled regarding Beringia, the LCA would not be any fossil known today. Researchers used DNA to construct the dental morphology expected of the LCA from the dental morphologies of known hominids, to see if a corresponding dental morphology could be found among other hominids; the idea being that the LCA should be identifiable as one of the known hominids of the early to mid Pleistocene from which both Neandertal and hss descended. The results were negative. In other words, all known fossils were a negative match. Because dental morphology is one of the least ambiguous characters for identifying distinct genetic change, this is considered a powerful argument that whoever hss and Neandertal descended from, their secrets went with them to the grave, so far. But it also tells us something else.

Obviously, this demonstrates that whatever the true narrative, a fossil consistent with the LCA, which must have existed, has never been found. Thus, to argue that the inferences made regarding Beringia can’t stand because no direct fossil evidence of “Beringian” hominids exists is an invalid objection. For whatever reason, the LCA is missing from the fossil record, and this supports my contention that the reason is confirmation bias. If we know it existed but it has not been found, this demonstrates how incomplete the overall fossil record really is, and there can be any number of reasons for that. Most of them likely revolve around geography and stratigraphy: if not a single representative fossil of an entire species has been found, this is most likely because the region where the fossils were deposited is difficult to access and/or its stratigraphy is deeper than what has been deliberately searched in a given geography. It also comports with the general observation that, like the LCA, HE might have remained unfound in the Americas for no more complicated reason than that no one has bothered to dig that deep. But there is more. The study also found that the speciation event leading to Neandertal and hss occurred about 1 million years ago. This is difficult to comprehend under the Out of Africa model but fits quite readily with a Beringian origin, as a period of about 200,000 years is what we would expect as a sufficient delay for Neandertal to appear in North America. That places the LCA at 1.2 million years before present, with an intermediary speciation stage of about 200,000 years. And this converges well with what we know about blood types.

Neandertal were blood type O (though there is a small chance they were genotype AO or BO). Blood type B arose around 3.5 mybp, but genetic analysis shows that blood type O emerged 1.15 mybp, around the time of the Beringian bottleneck and the MPT. Genetic evidence suggests type O more likely appeared before the split between Neandertal and hss, thus emerging in the Beringian population. This agrees with the LCA divergence time.

But is there any direct evidence that the “migration” in fact went in the opposite direction from what has previously been supposed? In other words, is there any direct evidence of admixture into Africa from an “archaic” variant of Homo? Yes, there is (“Genetic evidence for archaic admixture in Africa”, PNAS, Hammer et al, 2011). Genetic studies of modern-day African populations have shown that at least 2% of their DNA came from at least one admixture event about 35,000 ybp with an archaic introgressor which itself branched from the parent ancestral line about 750,000 ybp; and we see in that data another branch at about 400,000 ybp, both conservative estimates. This suggests the plausible scenario that an introgression occurred from the ancestor into a forebear of this modern-day sample around 750,000 ybp, and again within that line of descent around 400,000 ybp. It is important to keep in mind that this introgression, if from a Beringian hominid, most likely occurred in far east Asia, and the presence of the descendants in Africa is a consequence of the diffusion of their ancestors from the far east back to Africa over the subsequent 750,000 years. So we have an introgressor that succeeded in inserting considerable genetic information into what is supposed to be the modern human … 650,000 years before the modern human supposedly left Africa. There are any number of ways to interpret this, but the most realistic and simplest explanation is that this was not an isolated incident (we see in only one sample that it happened twice in that line), and that the traits associated with modern humans diffused into Africa after having originated elsewhere.

About realism and odds

Statements are often made with a certainty and conviction that the data just doesn’t justify. At this point, we cannot say much at all about human origins. Here we are making mere inferences about where to find more direct evidence, regardless of where it leads. But when we speak of the origins of HE it gets even murkier. HE is found almost “everywhere”, globally speaking. The earliest remains in Africa appear to be around 1.8 mybp. Claims in Asia have them dating back to 2 mybp (which is, of course, contested). In Java they are 1.4 mybp. I’ve heard HE was even hanging out in Europe for a time.

Homo erectus is securely dated as early as 1.4 million years ago in Indonesia (Java). Given that this is an island to the south of the mainland, it is realistic to expect that HE was in the general area of northeast Siberia before the cold snap set in at 1.2 million years ago (prior to 1.2 mybp, Earth was generally much warmer than it is today, and Beringia would never again be as favorable for passage). In other words, this also fits with the above in that the dates converge on a relatively modest introduction of HE into North America just before it became too cold for HE to survive in that region, forcing a population of perhaps about 18,500 in Beringia south into lower latitudes in North America. One of the characteristic features of HE is that this primate was tropical and would find survival easiest in humid, warm environments. HE has been associated with the eating of shellfish or mollusks in SE Asia, namely a species belonging to the larger family Unionidae, which appears in SE Asia but is most diverse in North America (surprise). Another surprise is that the decomposition of these mollusks releases considerable calcium carbonate, which retards fossilization of other in situ materials. This could have an impact on the fossilization of HE remains. Their natural habitat is wetlands. In any case, HE was found to be eating this seafood in SE Asia using a clever drill-and-separate technique. This has been implicated in some variants of the aquatic ape theory to explain the marine characters hss display. It is interesting because it is proposed that HE and hss ventured to the base of trees to obtain these shellfish, then dived underwater briefly to obtain more. Thus the marine characters are associated with rapid dives in which the organism makes the best use of its time underwater and does not spend very long under or in the water.
This is precisely the kind of challenging environment seen not just in Beringia and, say, the Amazon, but throughout them, over hundreds of miles. Fossils of HE could be found through deliberate, deep excavations in places such as the Pacific Northwest (Alaska and Canada) as well as the Atacama, both sites possibly yielding useful genetic information. But such searches in the Americas would need to be quite deliberate, as such fossils will not just “pop out of the ground” as they conveniently can in Africa.

But Asia is still a no-man’s land and HE is still not well adapted to it. The corridor between what would later be called the Laurentide and Cordilleran ice sheets has also opened, so HE can now diffuse into modern-day Alaska and Canada. Thus, once HE breaks this vicious cycle by sufficiently rapid genetic evolution, he and she are able to establish a founding population somewhere in what is today the U.S. Midwest, likely not far east of the Rocky Mountain front range extending from Montana to New Mexico, and marked on this approximate longitude by the present-day Sioux Nation. This most likely occurred around 1 million ybp. Thus, the modern-day Native American population was likely here since that time, which is before hss even existed; and this shows why it’s not even clear what we mean by that when we go that far back. Archaeologists should dig much deeper in the region extending from the Rocky Mountain front range to the longitude of Illinois in the modern-day USA. In any case, as a light Asian gene flow finally made its way through Beringia and back to North America (probably about 900,000 ybp), natural diffusion likely left some areas less “touched” by this admixture, namely modern-day eastern USA and Canada and the areas west of the Andes, resulting in what would become known as the Neandertal and Denisova subspecies, respectively, by about 800,000 ybp. As time went on, this “barrier” at Beringia would become less so, and more and more admixture of the original Beringian population with the rest of the HE world would occur, until finally the entire global population was admixed and hss was born. Ancient DNA consistent with Neandertal signatures should be sought along the U.S. eastern seaboard, especially the northeast, and the same is true of the Denisova signature in modern-day Peru and Chile, from the coasts inland to the tops of the Andes.
Fossils and DNA, if the layers are deep enough, could be found in or near the Atacama as well as in modern-day Alaska.

As a brief aside, genetic studies of Neandertal DNA have shown that the Neandertal carried a favored gene associated with nicotine addiction. Natural selection favors genes only where there is a selection pressure to do so, and in the absence of nicotine this wouldn’t be possible. It could be argued that some other agent had a similar biochemical (dopaminergic) effect; however, the simpler explanation is that this was a response to the presence of nicotine in the environment for an extended period of time. According to botanists, nicotine was not present in the Old World until some time after the colonial period. The simplest explanation is that Neandertal ancestry includes some exposure to nicotine, which implies a physical presence in the Americas for a considerable amount of time.

The hang-up for professional researchers is always direct evidence (this is unremarkable). But as we’ve noted, this is a chicken-and-egg problem, and that is the problem we’re trying to address here. So, we offer some conjecture about Beringia that could help identify where and how this direct evidence might show up. Perhaps it was the isolation and the potentially large population (eventually) feeding off of mollusks (not necessarily large game) in Beringia that led to a greater need for social skills: both because of the high population density (and limited resources that had to be shared) and because of the need for more punctuated reciprocal altruism on account of the climate. Perhaps this led to strong selection pressure with very high lethality for HE during the first phases following the MPT. Body size of HE likely increased dramatically to adjust to this climatological extreme, and diving for mollusks may have introduced aquatic features (this has been suggested for the East Indies as well). But in that climate, some way to warm the body immediately before and after dives could have led to an innovation in garments, which in turn led to a loss of bodily hair (that loss also driven by the aquatic pressures). Therefore, for direct evidence, we seek intermediate fossils and DNA in Beringia, eastern Siberia and the Americas which show signs of adaptation: increased body size, increased braincase, more aquatic features, loss of hair (shown in genetic studies, though possibly skewed to more recent dates by later admixture of this feature), UV adaptation of the skin and/or related organs, increased socialization skill (punctuated, more sophisticated modes of reciprocal altruism) and an increase in organized industry (needed for group survival in a confined, deadly climate where the threat is more abstract). While it may be tempting for some (as we’ve discussed) to associate this with modern populations, don’t.
This is a derivative species of HE that preceded hss and is not “us”. It is archaic to hss. The premise here is that this archaic species interbred with HE globally to generate an admixed species, Homo sapiens sapiens. How thorough this admixture was can be seen by noting that while Native Americans today may, as a population, be genetically more similar to this archaic species (in terms of inter-group diversity), traits such as fair skin, freckles and red hair may also be distant remnants of this species. The “Denisova effect” in Asia complements this picture. Other traits that are not outwardly visible, or that don’t show up as high inter-group genetic diversity, are still present in all populations of hss today. It is, in a sense, a surreal illustration of just how related all of humanity really is, and of how the known genetic bottleneck of 1.2 mybp manifests in us now.

In this model, we can now imagine a gradient of admixture across geographies that is parsimonious and makes sense. If Old World admixture in the New World proceeded as a random geographic distribution, we’d expect admixture to be least in areas that are geographically confined with respect to the Old World; precisely the areas of modern-day eastern USA and Canada and the areas west of the Andes. Here we see past traces of the Beringia stock in Neandertal and Denisova traits, respectively. As the Beringia stock diffused into South America a very long time ago, further local adaptation likely occurred, leading to the phenotypic, visible characters noted in the Denisova. And when we try to transfer the “refugia” model to the Amazon, we see a far less parsimonious narrative with less explanatory power. This is why Beringia appears more and more like the most likely narrative for the archaic origin of “characteristically” human traits. And that picture gets worse if we substitute the East Indies or Australia as some kind of “refugia”. The gene flow only works when we geographically position the fount at Beringia. In other words, Beringia is uniquely positioned to parsimoniously explain numerous seeming contradictions and problems that confound the current understanding of anthropogenesis.

Thus we now begin to converge on the unique candidate for a simplex bottleneck with chiral extinction (or archaic provenance of head lice):

  1. It was in a location that, once the species in question was adapted enough to overcome it, allowed greater diversity in local adaptation outside America; this, by limited back migration, permitted a greater variety of dentition and generated high diversity in language within a population of low genetic diversity. This is only possible via Beringia.
  2. Genetics studies of Mal’ta boy and others have revealed that significant diffusion likely occurred through the area of Beringia, primarily from East to West, and that Old World admixture has a significant component coming from the Americas prior to 30,000 ybp, opening up the Beringia Pandora’s box as wide as HE adaptation permitted.
  3. In order to achieve simplex diffusion of a population we need a condition whereby the selection pressures in play decrease in strength from source to destination, and where there is a “trap” that prevents escape geographically from this condition. This alters random diffusion in that those that diffuse toward the destination survive in larger numbers, thus biasing diffusion in that direction. If the length of that diffusion over a geographic region is long enough, and if that region is episodically rendered totally uninhabitable frequently enough, it will force diffusion in one direction only; toward the destination. It requires a totally uninhabitable boundary to constrain and define this condition. This is exactly what the ice-free corridor, running from Beringia to modern-day Montana, USA does. There is no other condition on the globe meeting these criteria, and this uniquely identifies the location as Beringia.
  4. Because of the ice-free corridor, this bottleneck was closed to the west but partially open to the east and was therefore simplex. This occurred at Beringia.
  5. It consisted of a west chiral extinction (or isolation of an archaic head louse), the Old World being to its west and the Americas to its east; that geography uniquely consistent with Beringia.
  6. The isolation of the Type B human head louse to the Americas in the specific pattern discussed has the most parsimonious explanation via a simplex bottleneck centered on Beringia, in which the head louse speciated, probably on account of the presence of clothing, into a body louse 1.2 million years ago.
  7. The conditions leading to a “trap and capture” in setting up a chiral duplex and simplex bottleneck require a climate change that included decreasing average temperature following a non-monotonic, cyclic function between two geographic regions that otherwise act as a significant physical barrier to diffusion, that place uniquely identified globally as Beringia 1.2 mybp (Mid-Pleistocene Transition).
  8. By definition, it was a genetic bottleneck, determined from DNA analysis of considerable population reduction, with the only appropriate candidates being 1.2 mybp and 70,000 ybp, the former occurring at Beringia.
  9. It was a cold environment that selected for the use of garments far earlier than would be expected in any climate other than that seen in Siberia, Antarctica, the Arctic or Beringia; and this occurred around 1.2 mybp. If HE was smart enough to drill anatomically correct holes in mollusk shells (which they did), they were smart enough to wrap a mastodon fur around their butt if the incentive was conspicuous enough after diving in cold water for … mollusk shells. This ties the aquatic adaptation to cold water in a simple, elegant solution. While too lengthy to get into here, the aquatic adaptations to cold water are so extensive in humans that it is unrealistic to imagine such a strong selection pressure did not exist in the past. If mollusks were their primary form of sustenance, this provides the motive for the deep-diving behavior. At that time and place, it may have been the only sustaining food available.
  10. Genetics studies have likewise shown that the LCA of Neandertal and hss was a distinctly separate species 1 to 1.2 million years ago; that the LCA is absent from the known fossil record (required if in Beringia); and that no other known hominid is a viable candidate.
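The biased, one-way (“simplex”) diffusion described in point 3 can be sketched as a toy simulation. This is purely illustrative: the corridor length, mortality gradient, wipe interval and population numbers below are invented for the sketch, not drawn from any data.

```python
import random

def simplex_diffusion(steps=200, pop=500, length=50, seed=1):
    """Toy 1-D random walk with a mortality gradient (survival odds
    improve toward the 'destination' end, index length-1) and an
    episodic wipe-out of the 'source' quarter (the uninhabitable trap)."""
    rng = random.Random(seed)
    walkers = [length // 2] * pop          # start mid-corridor
    for t in range(steps):
        survivors = []
        for x in walkers:
            x = max(0, min(length - 1, x + rng.choice((-1, 1))))
            # per-step survival rises linearly from 0.98 (source) to 1.0 (destination)
            if rng.random() < 0.98 + 0.02 * x / (length - 1):
                survivors.append(x)
        if t % 50 == 49:                   # episodic uninhabitability near the source
            survivors = [x for x in survivors if x > length // 4]
        walkers = survivors
    return walkers
```

Running this, the surviving walkers cluster toward the destination end even though each individual step is unbiased; reversing the gradient reverses the clustering, which is the sense in which such diffusion is “simplex”.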

These observations, taken together, uniquely identify the geographical area of this bottleneck as Beringia in the interval (1.25m ybp, 800k ybp). As amazing as it is to imagine HE getting there and surviving, it probably happened.

There are three conclusions I draw from all this that are realistic and far more probable:

  1. Homo erectus probably did arrive in the Americas a very long time ago; the absence of fossils here remains unexplained but, nonetheless, it almost certainly happened. Given what we know of Homo erectus migrations and the geology of change at Beringia, it is not realistic to imagine a scenario where Homo erectus simply stopped there.
  2. Migrations through Beringia, at minimum, involved considerable gene flow out of America and into the Old World, evidenced by the roughly 25% of Native American genes carried by Mal’ta boy, whose stock was almost certainly migrating west, not east.
  3. The sudden and dramatic increase in intragroup diversity of hss outside Africa, presumably created by admixture of hss with Neandertal (the modern-day European intrapopulation Neandertal contribution is over 20%), disproves the idea that the higher diversity of modern African populations is caused by antiquity. Neandertal were apparently reacting to selection pressures outside Africa for over 400,000 years, with over 20% of hss genes outside Africa reflecting that added diversity, making the “Out of Africa” time latency of 75,000 years miniscule by comparison. It doesn’t add up. There must be another reason, or reasons, for the higher diversity of modern-day African populations.


Suggestions for stratigraphic consideration of fossils of an unknown hominid now identified as Early and Late Beringian.

This map was created by minimizing on latitude and altitude from an initial condition, with adjustments for rivers and fresh water systems. From the map we can see that there are four areas of particular archaeological interest where admixture was highly probable:

  1. The Great Lakes area of North America, extending from modern-day New York around the north side of these lakes and west, to modern-day Minnesota. This region continues up the coast, along modern-day northeastern USA and southeastern Canada.
  2. Far northeast Asia near the Kamchatka peninsula, modern-day Russia.
  3. A region in central-north Siberia extending south from the mouth of the Yenisei River, along that river, for a few hundred miles.
  4. A mostly contiguous border extending over northern modern-day west Russia, bisecting modern-day Finland and running along the coasts of modern-day Sweden, following this coastline to modern-day Molde, Norway.

DNA analysis and fossil remains in these areas could hold exceptional promise for further discovery. The motif taking shape is one in which the earliest diffusion was a relic of an extinction of the Beringian stock that had spread east and west along the arctic coasts, admixing and leaving their genes in the record only at key admixture locations as mentioned. Then later, though possibly simultaneously, an east-west diffusion pattern also existed, admixing Beringian stock via a westward diffusion into Asia and a southward diffusion through central modern-day Canada. The extraordinary survival skills of the Early Beringian stock that allowed for this arctic-latitude diffusion likely emerged from Beringia on the interval [1.2m ybp, 1m ybp], with many of the core and key “human” characters emerging at that place and time as well. The key to understanding this strange pattern is that while Beringian stock likely developed the ability to survive in extreme climates, they still required a food source, which could only be had from the sea through thin ice. This channeled the early diffusion along arctic coasts and kept the Beringian stock mostly offshore. As the Earth cooled, and as arctic ice thickened and remained year-round, those skills were probably not sufficient for survival, and Beringian stock at these latitudes likely perished.

Human beings have been emigrating since the dawn of the species, from every continent and every place. But the “Out of Africa” narrative invites us to believe that the first Homo sapiens diffused or migrated from the African continent. This is an executive summary of a much longer analysis I’ve done, and it presents only the most salient inferential clues suggesting that the former is no longer a realistic position to take, and further supporting Beringia as one of multiple “founts” of Homo sapiens. The same class of evidence for the “Out of Africa” hypothesis, as well as for other contenders, is considerably weaker, hinging primarily on naïve assumptions about genetic intra-group diversity and on archaeological finds that speak to only a subset of elapsed time (and therefore do not speak to events prior). While hss likely did diffuse from Africa through the Levant, they likely diffused into Africa long before that.

And it is important to distinguish what is being claimed. The “Out of Africa” model does not merely claim that fossils exist there, that Homo sapiens once lived there, that Homo sapiens “left” Africa at some point, or that there is sound archaeological evidence to support a movement of human beings into the Levant and Asia. The claim is much stronger: it proposes that Homo sapiens originated there, with all the phenotypic diversity that defines it, and that latter claim is what I suggest is no longer realistic. Indeed, I’ve positioned the current situation as suggesting that it is unrealistic to entertain any notion of an “out of anywhere” hypothesis, because the nuances of the evolution of Homo sapiens are too subtle for such a naïve treatment. While the details remain elusive, at least two things have emerged as likely originating in two very different geographies: tool use and social skills, both of which were crucial to making us who we are, in precisely the sense we mean when we ask “what makes us human”.
The geography implicated in tool use cannot be pinned down with existing evidence, but it was likely not Beringia. Tool use likely emerged 1.2 to 3 million years ago, likely in a tropical climate, and likely in an area where selection pressure for the development of hands was strong, though not necessarily for tool use initially. Human hands are uniquely different from those of non-human primates today, but they clearly show a phylogenetic affinity.

A final note about Old World admixture

While some admixture of the much older HE of the Old World could have happened with the Beringian stock migrating west, it likely did not happen in large numbers because the B lice did not remain in the Old World. But we cannot ignore one other possibility. It is possible that the B lice had not yet evolved in the Old World, and that they actually originated from another, older strand of lice in the New World. Could this have been the Type C lice found in Ethiopia and Nepal? Thus, is the narrative we seek one in which HE first arrived in the Americas carrying the Type C lice? It seems likely that it was indeed either Type C or a close, more archaic cousin. In other words, admixture of HE and the Beringian stock in the Old World could have occurred if this were the case, and HE need not have gone extinct before that. We can say this because once again we see bottlenecking and isolation effects in the presence of the Type C lice (though the Ethiopian case is less clear). In Nepal, altitude could have isolated Type C. In Australia, Type B lice could have arrived from more recent Denisova “mini” migrations and remained simply because Type B was isolated there on account of the Wallace line. This is a weak, hypothetical claim right now because we don’t have supporting inferential evidence, but I suspect it is correct. Therefore, modern Homo sapiens was an admixed result of archaic HE and Beringian stock, both in the Old World and the New (in the Old World it was more of an episodic sweep; in the New, a longer, more gradual process of repeated admixture episodes). Later migrations from Asia into the Americas likely added more admixture along the less geographically constrained regions of the Americas (indicating its recency, and generating the Neandertal and Denisova cousins we can detect today).
Therefore, we can expect to find admixture of Beringian stock with HE beginning roughly as early as 500,000 ybp, with the pace accelerating after diffusion southwest from northeastern Siberia into the modern-day Mongolia region and the Caucasus. The Caucasus would then be reached, and more intense admixing would begin, approximately 250,000 ybp. Africa was likely the last continent subjected to admixture, due to the geographic funnel that is the Levant; modern-day African populations therefore exhibit the highest intra-group genetic diversity (they carry genes from all the preceding populations, going back even to the Americas), while the successive admixing migrations rendered their inter-group diversity lowest.

And now … The Denouement

If we trace the haplogroups back in time under a loose (but not full) assumption of geographic coupling, such that ancient populations are at least correlated by continent for some amount of time, and then reverse the phylogenetic tree, we get something interesting. The Beringian male haplogroup will likely be found to have derived from an HE founder haplogroup, and that Beringian Y-chromosome haplogroup will likely be one not seen before. I will designate it β*, and it is more likely than not that the hss male haplogroup R diverged from β* in North America and Q in South America. That is, we have:

β* → R (North America)

β* → Q (South America)

An intermediate step is equally possible, which most likely involved:

β* → R (North America)

R → Q (South America)

With R and Q being closely related in either case. This was likely the first divergence, and each of R and Q may have been basal to the hss R and Q seen later, making this first divergence something “Beringian” and not fully hss. It will likely be found that Neandertal hss derived from an R ancestry in the modern-day eastern USA and Canada, and that Denisovan likely derived from a Q ancestry in the region confined by the Pacific Ocean and the Andes mountains in South America. If we look at a map of Y-chromosome population frequencies globally, we see a surge in both R and in divergent haplogroups of Q in exactly the locations identified as associated with trade currents (and Asia, which we take up next). Haplogroups A, B, C and D diverged in the Americas from R and Q, most likely just from R in North America, during the several thousand years of introgression going on in the Old World. Eventually, but well before east Asia was re-populated by the modern-day east Asian population, backflow into the Americas added to this mix, with possible “refugia” contributing to the divergence into A, B, C and D (in other words, the modern-day non-X Native American haplogroups originated by admixing of R with backflow from Asia). The full admixture did not complete and render X extinct before the present.
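The hypothesized topology above (the intermediate-step variant) can be written down explicitly. The structure below merely encodes the conjecture stated in the text, using "beta*" for β*; it is an illustration of the proposed branching, not an established phylogeny.

```python
# Conjectured Y-chromosome topology from the text (intermediate-step
# variant): beta* -> R (North America), R -> Q (South America), with
# A, B, C and D later diverging from R. Purely illustrative.
TREE = {
    "beta*": ["R"],
    "R": ["Q", "A", "B", "C", "D"],
}

def lineage(tree, root, target, path=()):
    """Depth-first search returning the descent path from root to target,
    or None if target is not a descendant of root."""
    path = path + (root,)
    if root == target:
        return list(path)
    for child in tree.get(root, ()):
        found = lineage(tree, child, target, path)
        if found:
            return found
    return None
```

Under this encoding, `lineage(TREE, "beta*", "Q")` returns `["beta*", "R", "Q"]`: Q descends from β* only through R, which is exactly what distinguishes the intermediate-step scenario from the direct-divergence one.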

But the majority of gene flow, as stated, most likely passed west of Beringia into Asia, very roughly around 250,000 ybp. As for who the modern east Asian population in Siberia could be, that question appears to have been answered, and there may be little mystery as to why this diffusion from Beringia to the west left no discernible genetic information: studies indicate that modern-day east Asians came from south Asia recently, and this has been widely known amongst researchers in Asia for some time (“Y-Chromosome Evidence for a Northward Migration of Modern Humans into Eastern Asia during the Last Ice Age”, Su et al, 1999). But this “modern” migration occurred well before the supposed migration into the Americas, essentially disproving any diffusion into the Americas from Siberia in that time frame (~50,000 to 5,000 ybp, with haplogroups H6 and H8) and suggesting that a diffusion west did occur before that. If any such easterly diffusion occurred ~50,000 to 5,000 ybp, it came from Beringia very recently (and, in an entirely different context, that being HE, much earlier from Siberia). And this has been confirmed from another direction: it has been found that a migration from Asia into the Americas at the required times (~30,000 ybp) was essentially impossible (“Western Eurasian ancestry in modern Siberians based on mitogenomic data”, Derenko et al, 2014). Read that again, please. Thus the more realistic narrative of a west-to-east migration into Beringia becomes yet clearer: realistically, if a population diffused through Siberia to the east much earlier (~1.4 mybp), there would indeed be no trace of “Native American” haplogroups or genes in that area today. Genetic drift would have erased any trace of it, and what was left would be admixed with H6 and H8 much later. So, let’s break this down.
First, it is the more realistic and plausible scenario that there was an HE diffusion from west to east through Siberia to Beringia about 1.4 mybp. The process of local adaptation to colder climates may have taken 400 to 600 thousand years. About 250,000 ybp, the westerly bottleneck from Beringia was finally durably pierced, and another diffusion from Beringia to the west occurred through Siberia, to the Lake Baikal area and beyond, which gave rise to Mal’ta boy. After some genetic drift, whatever genes were left behind in eastern Siberia were admixed with advancing hss from southern Asia around 50,000 ybp. Note that the only way to make realistic sense of what happened here is if Homo erectus entered Beringia and, ultimately, the Americas.
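The point that drift would erase any trace of an early, thin gene flow can be illustrated with a toy Wright-Fisher simulation. The population size, starting frequency and generation count below are invented for the sketch and not calibrated to any real population.

```python
import random

def fraction_lost(p0=0.02, N=50, gens=500, trials=100, seed=7):
    """Fraction of trials in which a low-frequency neutral allele is
    lost to drift in a Wright-Fisher population of N diploids
    (2N gene copies resampled binomially each generation)."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(trials):
        p = p0
        for _ in range(gens):
            k = sum(rng.random() < p for _ in range(2 * N))  # binomial draw
            p = k / (2 * N)
            if p == 0.0:      # allele lost
                lost += 1
                break
            if p == 1.0:      # allele fixed (rare for small p0)
                break
    return lost / trials
```

A neutral allele starting at 2% frequency is lost in the vast majority of runs, consistent with the classical result that a neutral allele's eventual loss probability is about 1 − p0.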

Globally, we see a distribution of the R haplogroup and its derivatives into the areas of modern-day west Russia, the Caucasus and Anatolia. This is most likely the heaviest concentration of Beringian genetic flow outside Beringia, if we exclude the much earlier flow into North America. Because the diffusion of a population across west Siberia to the areas indicated would take a considerable amount of time, and little Beringian gene flow is seen in east Asian populations (at least from this diffusion itself), the diffusion of modern-day east Asians into that area must be somewhat ancient. This places the primary and heaviest diffusion from Beringia into Asia as completing and reaching the general longitude of Lake Baikal no later than 75,000 ybp, providing sufficient time for gene flow and diffusion over east Asia from other locations to generate the modern-day population seen there. What we see in these male haplogroups and their diffusion is an introgressive sweep into a locally adapted HE population, with little sign of HE matrilineal DNA admixture in the Beringian-descended males.

The female haplogroups are the real tell. In these we see something a little different. The divergence of female haplogroups appears to “leap”: it originates at Beringia with haplogroups D4h3 and X2a, deriving mostly from areas not far removed geographically from Beringia itself, then much later mysteriously diverges only in disparate global locations. As with the Y-chromosome β*, we expect a Beringian mtDNA haplogroup α*:

α* → D4h3 (West coast North America)

α* → X2a (midwest North America)

This global dispersion is likely a reflection of the introgression already cited; that is, the female gene flow was initially constrained to earlier Beringian stock and only later began to take on diversity from locally adapted (and likely by then admixed) HE populations all over the globe. This is a telling pattern because it is a pattern of introgression whose locus of origin points to Beringia (the early male haplogroups appear most concentrated in the Old World diffusion sites identified above, but the early female haplogroups appear concentrated in areas of North America, with increasing concentration as you approach Beringia, then only much later diverging suddenly all over the globe). The most realistic and probable conclusion from this overall narrative is that hss completed its transition from HE in a gradual process of gene flow and admixture that proceeded from the region of Beringia with an intermediate species, admixing with locally adapted HE by diffusion south into North America, through South America, west across the Pacific and east across the Atlantic, and, in a large move, west into Asia; this entire process being required to generate what we now identify as hss.

The final demonstration of explanatory power in this parsimonious narrative is a simple explanation, at last, for the X2a haplogroup that has puzzled anthropologists and others for so long. Just like the male haplogroups R and Q in the Americas, the X female haplogroup represents the Beringian “progenitor” in the local HE population, occurring worlds apart because of introgression; leaving its trace only when Beringian descendants reached West Eurasia, East Eurasia, Anatolia and the Caucasus, then likewise in western Europe when what was left of X in America made its way there by trade currents, and only after the Beringian-descended females began admixing with locally admixed HE. Thus we see that, for example, the Solutrean hypothesis has it exactly backward: Native North Americans arrived in Europe, consistent with trade currents rather than sailing against them, in modern-day Ireland and northern Spain. That the X remnant remained so long in the Americas is due to the isolation of the eastern seaboard of modern-day USA and Canada from the rest of the continent. Thus we can solve the mystery of how X traveled the world without also taking other haplogroups with it (a problem in either direction, never explained until now): by the time X admixing began, the male haplogroups had already diverged into local sub-clades, and there was no other female haplogroup; it was just X. This introgression feature of behavior likely continued for a very long time, perhaps extending as well to the Neandertal and Denisova, until any remaining trace of HE was extinct. It had nothing, nada, to do with “Out of Africa” or out of anything.

The D4h3 haplogroup has been positioned as belonging to a “canoe” or “sea” people of the western coasts of the Americas. This is probably false. The confusion arose from a study (“Distinctive Paleo-Indian Migration Routes from Beringia Marked by Two Rare mtDNA Haplogroups”, Perego et al, 2009) that used only a handful of data points to extrapolate assumptions of quite literally continental proportions, and in which the data itself suggested the “seafaring” interpretation was probably not true. Putting aside for the moment that the mtDNA molecular-clock “reading” appears contrived, based on assumptions about migration dates (using migration dates to prove migration dates), what the data in fact revealed was an Americas-wide distribution of the modern-day D4h3 haplogroup samples with a conspicuous pattern:

The modern-day samples, and only the modern-day samples, were all on mainland American locations where an island or islands were just off shore.

And a pattern of ancient D4h3 haplogroup samples with an equally conspicuous but very different pattern:

The ancient samples, and only the ancient samples, were all located all over the American mainland with insufficient numbers to establish a statistically significant bias for shore or inland habitation. Having said that, they stretched from modern-day Juneau, Alaska to southern Chile, and from modern-day California to Illinois (Hopewell).

This was then interpreted (and reported by the media) as scientific evidence that the Americas were “colonized” partly by a wave of D4h3 carriers moving along the American coasts from Beringia. It was then further deduced that these were “sea people”. This logic is invalid. As stated regarding statistical significance, the conclusion about a bias for coastal habitation by the ancient carriers of D4h3 is not supported by the sample size. The statement therefore rests solely on two, maybe three modern-day samples. There is a more realistic interpretation that doesn’t run afoul of statistical significance and invalid logic:

It is rather more likely that the carriers of the D4h3 haplogroup were much more populous in the Americas than they are today, that they lived in all or most of the Americas, and that another population entered the area later and admixed with them. The sea is an excellent natural barrier to admixture, which is why their signature still appears today conspicuously close to the very islands their ancestors likely inhabited (Catalina, Galapagos and Falkland). Those particular carriers were no more “sea people” than the carriers at Hopewell.
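The sample-size objection can be made concrete with an exact binomial calculation. The "no coastal bias" null of p = 0.5 is my own illustrative choice, not a figure from the study.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Exact one-sided binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Even if all three modern-day samples were coastal, under a no-bias
# null (p = 0.5) that outcome has probability 0.5**3 = 0.125, far
# above any conventional significance threshold.
print(p_at_least(3, 3))  # prints 0.125
```

In other words, three-for-three coastal samples cannot distinguish a coastal bias from chance, which is the sense in which the "sea people" inference outruns its data.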


Image courtesy of Austin Whittal.

All of the samples were from the Americas except one, which was in modern-day China. Because it is only one sample, it is impossible to reach any conclusions, or even best odds, about what this might mean. It may well be that diffusion occurred along the west coast, indeed it is likely, but this is also likely ancillary to the story of human origins or the “peopling of the Americas”. To be fair, other studies do in fact support the coastal theory, were it not for the fact that the samples still seem to cluster around islands. Thus the “migration” by “sea-farers” still seems improbable, and the simpler explanation is that, as stated, this is a residual population that endured admixture isolation. The D4h3a haplogroup has been found in one Nahua person, one Tarahumara person and one Mazahua person, all of the mainland area adjoining the modern-day Catalina islands (Journal of Human Genetics, Mizuno et al, 2014). While coastal “trickle” migrations are entirely possible, even likely, this is not the droid you’re looking for. Not only do the ancient inland DNA samples suggest that conclusion, but the slower gene flow by sea suggests it as well. When we examine the distribution of D4h3 throughout South America, we see a tendency for the percentage of carriers to increase as we move south. In one sample, over 20% of carriers were found inland, in modern-day Argentina, which faces the Atlantic (“An Alternative Model for the Early Peopling of Southern South America Revealed by Analyses of Three Mitochondrial DNA Haplogroups”, Saint Pierre et al, 2012). An astonishing 20% D4h3 was found in living native persons in Patagonia (“Analysis of admixture and genetic structure of two Native American groups of Southern Argentinean Patagonia”, Sala et al, 2014, Molecular Biology Reports), further suggesting this was a case of seaward diffusion of a new population.
This is because Tierra del Fuego is easily accessible from the mainland, while islands such as the Galapagos lie hundreds of miles offshore. The pattern in Patagonia (and the increasing frequency as we move south) has more to do with climate and distance from Beringia, and suggests that the D4h3 people were not sea-farers per se but simply a dominant population in the past. This, combined with the inland ancient DNA, makes the conclusion far more realistic.

Frustration in the accumulated direct evidence

I tend to be uber-skeptical of some physical evidence, and here’s why: if you suppose you have found an archaeological site, then for it to be accepted as such it should be clear and convincing that technological agency was present and that the dates for it are secure. I can take a walk in the woods just about anywhere, find something I’d like to attribute to technological agency … and be wrong. It is easy to do. The problem reminds me of the challenge photography presents as physical evidence: show me a grainy photo of a beach from a mile away and it is easy to mistakenly assume a person is standing there when they are not. I see people do this all the time, and usually when I dismiss it I turn out to be right. In the same way, for an archaeological site to bear clear and convincing indications of the past presence of technological agency, I need to see clearly and convincingly that the effect I am witnessing is not due to some physical, geologic or biological activity (without agency). As for geology, I suspect natural causes more strongly when the context involves kinetic energy, such as at the foot of a waterfall; any energetic environment can mimic technological agency. And in any context where number and chance are salient, thereby inflating probability, false positives for technological agency are likely. Thus, most of the “lithic industry” found in places like Africa, which most accept, I reject as conjecture unless there is a concentration of other causal effects due to technological agency in the same place (rather than just one), which anthropologists call context. And it’s just common sense: one need only ask, if they were there, would it be clear and convincing that technological agency, and nothing else, is the cause of the effect observed?
And the dating of the effects must be by some method where it is clear and convincing that the method is appropriate for the purpose, and where several dates have been taken of several effects at the site, all of which agree within a sensible standard deviation. Unfortunately, as we go back in time, the signal of technological agency we seek gets weaker; the noise of nature tends to drown it out. Thus I tend to reject most “lithic industry” with dates before about 30,000 ybp unless there is very extensive context to back it up. The best evidence is always the remains of the hominid itself, actual or fossilized. This is why it is hard to assess the veracity of a site without considerable detail (Pedra Furada is an example of a site that looks interesting on first impression but then looks conspicuously natural upon closer examination of effects dated prior to Clovis). Thus, we need fossils, and I’m going to list some of the evidence today not as evidence for a claim or hypothesis, but as a suggestion of where fossils might be found in the future.

Obviously, the key to testing this hypothesis lies in fossils, which simply do not seem to be present in the Americas. Or do they? This introduces a set of lengthy controversies that are not worth going off on tangents about too much, but I’ll speak to some of them. The site in South America known as Monte Verde, now dated at 12,500 ybp, also has a lesser-known artifact (a spear point) with hemoglobin on it that tests at 33 kybp. This 33 kybp find was part of a second, nearby excavation that is incomplete but presumably ongoing. At the first site there were indications of a medicinal tea still used by the locals, indicating that a culture existed there that made use of local materials and, with sufficient trial and error, had found them medicinal. This likely pushes the 12,500 ybp date back to something closer to 13,000 ybp for the minimum age of first significant occupation of the area (herbal traditions are necessarily provincial and take a very long time to develop). While footprints are the stuff of novels, it is a fascinating fact that a child’s footprint was found inside one of the huts at the first site, near a hearth. And these folks obviously were lacking good health care insurance and personal hygiene education, on account of the fecal matter covering the floor of the hut. But hey, it worked for them.

But, as we’ll see is the pattern, because this is a mere drop in the bucket when it comes to the evidence locker, the 33 kybp evidence is not often referenced (and that work is ongoing). Another site, in Brazil near Pedra Furada, shows dates (for human lithic activity) going back as far as 50 kybp. The problem with this site is a bit more obvious: the shaping of stone for the presumed purpose of technological agency does indeed have a confusing context, where naturally occurring stones from a cliff fall lie alongside the purported artifacts. Art from that area also appears questionable in terms of dating. Therefore, more for reasons of suspicion, this site tends to get less attention. Another site, far more problematic for conventional dating assumptions, is near Hueyatlaco, Mexico. This site is crucial because it has been tested so many times now that it appears to be legitimate. It had lots of effects one can observe. But it was also one of the ugliest scenes in archaeology disputes I’ve read about, so I’ve decided to rely on a professional anthropologist for his seemingly objective opinion on what that dispute was really about (there is a fair degree of political intrigue behind the story, but it is not terribly relevant): David Meltzer has written (in First Peoples in a New World: Colonizing Ice Age America) that the key issue at Hueyatlaco was not so much the dates (showing human lithic activity back to a date probably prior to 250 kybp) as it was conflicting dates. But curiously, he does note that once again we see mollusks involved in a site that would presumably be HE-related. This pattern has been a ubiquitous feature of our tracking of HE (mollusks were generally widespread in North America). In any case, dates for some tools clearly of technological agency were found associated with the remains of a mastodon.
So, that date does seem solid, and it is hard to dispute that this must mean technological agency was present about 150 miles southwest of modern-day Mexico City, near Hueyatlaco, in pre-Clovis time for sure. But he notes (and this is follow-up research done long after the original research began in the 1960s and was completed in 1973, when the controversy was well-known) that dates of those mollusks showed Holocene dates (very recent). The problem here, as Meltzer notes, is that when different methods are used and are not well understood, mistakes can happen. First, you cannot use 14C dating on a mollusk that old (when the date is unknown, the method must capture the extrema you are seeking to prove or disprove). So the fact that we have a conflicting date suggests the 14C date is more likely the wrong one. Meltzer doesn’t say this, but I discount this “conflict” on those grounds. Because ash layers were involved in the stratigraphy, dating techniques new at the time (fission track and Uranium dating) were used to assess the contextual age of material, applied directly to the ash. If something is found just below that layer, dates showing an age comparable to it are likely sound. In any case, other conflicting dates were found when vertebrate remains were tested, showing upper Pleistocene or Holocene dates, while diatoms showed dates of around 80,000 ybp (the diatoms were dated on the basis of species extinctions and were therefore independent of the physical dating methods). It is unclear from Meltzer’s commentary whether the vertebrates were dated using physical methods or species extinction patterns. Meltzer goes on to mention the political intrigue, then points out that the stratigraphy was also confusing: this site was hard to date because the ash appeared to be of an unusual makeup, with all manner of pitfalls in dating (even the manner of deposition is still being debated).
Argon/Argon dating yielded confusing results as well, at one point suggesting everything might be younger than 40,000 ybp, but further results muddied that picture. My own suspicion is that the dates of 250,000 ybp are real, but given the peculiar geology there I will hold out for the possibility that they are not. The presence of technological agency in the context of mastodon remains only suggests an Upper Pleistocene date that could be younger than 40,000 ybp. And the “footprint” found there is where my skepticism meter really goes off the chart, as I do not think inadvertent engravings are generally reliable (not for tracking technological agency – a footprint is not a full skeleton). Efforts continue to secure a firmer date for the formation.

Diring Yuriakh is a somewhat controversial hominid site located at the confluence of Diring Yuriakh creek and the Lena River, about 140 km south of Yakutsk in Siberia. It is believed to have been occupied around 300,000 years ago and consists of an Oldowan-like pebble assemblage. It is about 700 miles northeast of Lake Baikal in the direction of the Bering Strait. Unfortunately, it consists of only one layer, which makes its veracity hard to assess (having several layers with successive, monotonic dates is a powerful argument for the veracity of the dating). Most of the controversy lies in the dating. The dates were obtained using thermoluminescence dating in an environment where the lighting conditions would likely misstate the correct age. In fact, reversed polarity of ferrous material suggests that the site is in fact older than 750,000 years, and the Oldowan-style assemblage is consistent with assemblages of that time frame or earlier seen in Africa for HE. The original excavator estimated the age at over 1.8 mybp based on the Oldowan comparison; estimates based on geology place the age at older than 1.6 million ybp. What makes the site worth mentioning is that a “tomb” was found adjacent to many of these presumptive stone instruments. This gives us two classes of “effect” to examine, but it is thin.

Another site is Windover, Florida, USA, where we have a Holocene site (~7,000 ybp) but one that is clearly divergent in culture and industry from anything ever seen before in the Americas. Because it looks suspiciously Scottish, it appears that ideological confirmation bias has been applied to squelch ethnic confirmation bias. I’ve never been a fan of fighting fire with fire, because usually it just makes the fire bigger, hotter and more obvious. And this site is definitely a signature of technological agency that is securely dated. I reference it because it could show behavioral (cultural) diversity in the Americas consistent with what I’d expect (and not “arriving” from Europe).

The site at Calico, California, USA is a confusing mess of objects, apparently the product of technological agency, dating back to 200,000 ybp. The salient observation amidst this controversy is that, while the debate centers on whether the objects are natural or not, if the same objects had been found in Africa they’d likely be accepted as the product of technological agency (and they match HE lithic activity in Africa, though this doesn’t say much given its simplicity). But again, we are still looking at a drop in the bucket in terms of direct evidence. This site reminds me a lot of the “grainy photo” problem, where all we have is the photo. Bad Feng Shui. We are left trying to interpret ancient tools of only two or three types, all of which are ancient enough to be vague in the first place. Without more context to tell us, “hey, I’m here”, it’s hard to say what it means. Photo “experts” will tell us it is real, but photo experts are often wrong, too.

There are a few others of note, but it ultimately comes down to the volume of evidence and realism: we don’t have enough direct, compelling evidence to place technological agency in the Americas before about 20,000 ybp (Topper site, South Carolina, USA), even if we’re willing to be a little risky. But we are edging eerily close to that critical mass, and the key stumbling block for anthropologists appears to be their desire for diagnostic[1] utility: anthropologists like to see patterns from one site to the next, and when they do, they derive “diagnostic” criteria to help identify later discoveries. But what happens if, for whatever reason, the patterns don’t exist but the people did? My own answer is that HE fossils need to be found in the Americas at multiple sites, preferably with some DNA, and this will force a sea change in thinking on the matter. Work in Alaska, USA and the Atacama could be conducted with a dual purpose: one, to find useful archaeological data in Upper Pleistocene time, and two, to dig deeper while there in search of fossils. Hueyatlaco could be a good place to search as well, since a pattern has already been seen there; it would also provide opportunities to refine the dating.

This inferential “proto” hypothesis was made possible only by discoveries made in the recent years, to which I am greatly indebted, but which may be unpopular for some on account of what we’ve discussed, specifically:

  1. In the matter of post-dating the Native American presence in the Americas on account of cultural confirmation bias.
  2. In the matter of assembling a narrative of migrations from the African continent on account of ideological confirmation bias.
  3. In the matter of neglecting salient aquatic features of hss on account of male gender confirmation bias.
  4. In the matter of assigning special significance to modern-day populations in the true narrative on account of ethnic confirmation bias.

Identifying these biases is something a child could do; it is obvious. The key indicator of their presence is a purported narrative or hypothesis containing a salient and consistent pattern that supports the biases above, precisely because those biases are human ideations that nature has no reason to abide. That’s the sobriety test we should all apply when attempting to ascertain whether a proponent is snorting pixie dust. If you like the ideation, that is your pixie dust; snort it if you wish, but be aware there is a time to be sober and a time to be high.

For those of you who rightly insist on direct evidence, I agree and for my part, I will put this analysis on the shelf for now. When HE fossils begin showing up in the Americas I’ll dust it off, read it again, and say, “wow, what an amazing story”.

And that’s my two cents.

[1] The concept of “diagnostics” sounds an awful lot like structured confirmation bias: because the sites found are necessarily a tiny minority of what existed, one’s view of what was going on is biased, something historians also don’t like to acknowledge. To then use that to “qualify” a new site as legitimate is, in effect, a kind of structured confirmation bias. You are trying to “qualify” a new find when in reality you don’t know what you’re talking about; you operate from the position of bias your anecdotal experience has given you (and YES, it IS anecdotal, because you don’t have a full statistical picture of what actually EXISTED). I’m not sure of the solution to that potential problem, but I do see its potential for confusing the issue. At minimum, diagnostics should be applied with caution; to wit, they should only be constructed where a particular pattern’s cause across sites is known and understood with confidence and is not due to some other causal factor you can’t observe. This is why Calico’s results would be accepted in Africa but endlessly challenged in North America. It isn’t just that direct evidence in agreement hasn’t been found for Calico.


Hi all,

Today, 14 October 2014, we read of a report by WHO which states that by 1 December we could see as many as 10,000 deaths from Ebola per week. Oddly, the mainstream press finds this newsworthy, despite the fact that this very same mathematical statement was made by WHO and CDC weeks ago, when the same numbers were provided in a different form. We are told that as of today there are 4,500 dead and 9,000 infected. Going off the average reported error factor of about 2, this means there are more likely 9,000 dead and 18,000 infected as of now. There is a disturbing problem going on here which demands comment in the harshest terms. Magical thinking is undermining the ability not just of the public, but of the authorities, to comprehend this disaster, on account of three salient deficits now evidenced incontrovertibly by statements and actions such as the one just noted:

  1. Mathematical illiteracy
  2. Inability to assess the assumptions of the math
  3. Lack of realism; or inability to reduce the general to the specific

What about log base 2 of 18000 can some people not understand? And where people do get the significance of exponential growth, what about the core assumptions is not clear? The only assumptions behind this math are pretty dry:

  1. Nature will not do something incredibly improbable in the next 12 months to favorably alter the math.
  2. There is no infrastructure on Earth that can locate, extract and “isolate” 70% of a population of 36,000 in sub-Saharan Africa scattered in three different jurisdictions over uncounted thousands of square miles of African city and jungle. (I’ll explain that number shortly). And certainly 4000 Marines can’t do it.

WHO and CDC are counting on the magical thinking of isolating 70% of the infected population, based on the idea that the reproduction rate of the virus could be rendered linear under those conditions. But implicit in this thinking is the requirement that all those persons be identified and extracted from the general population, and then isolated not merely by housing and feeding them, but by bio-containment isolation. And if we have 18,000 infected now – give or take a few thousand if it pleases you; it doesn’t much matter – then no infrastructure having a viable mathematical impact on this situation exists now, and none will likely exist within the doubling time of the viral spread. Therefore, let’s not be silly and assume 2 × 18,000 = 36,000. It certainly isn’t going to happen with 2,000 beds provided by the U.S. military.

I’ve been trying to explain this now for over two months (in various forms) and am convinced no one comprehends a thing I’m saying.

Let me be clear, the assumptions given above are highly likely to hold and the logarithmic relation given will dominate the outcome. What part of that sentence is confusing? Do I really need to work this logarithm to demonstrate what highly likely means? Try this:

log2(7×10^9) – log2(36,000) ≈ 17.5 doublings, or about 17.5 months at one doubling per 4 weeks

I’m assuming that the doubling is every 4 weeks, which is generous, and I’m assuming the world’s population is at least 7 billion. Yes, the doubling rate will increase as the total number of infected increases, but I’ll ignore that for the moment.
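For the skeptics, the arithmetic can be checked in a few lines. This is only a sketch of the back-of-the-envelope math above; the 36,000 seed and the 4-week doubling time are the assumptions already stated, not data.

```python
# Back-of-the-envelope check of the doubling math above.
# Assumptions (from the text, not data): 36,000 infected now,
# one doubling every 4 weeks, world population at least 7 billion.
import math

seed = 36_000
world = 7_000_000_000

# Number of doublings needed for the seed population to reach everyone
doublings = math.log2(world) - math.log2(seed)

print(round(doublings, 1))  # about 17.6 doublings; at roughly one per month, ~17.5 months
```

The point is not the decimal places; it is that any plausible seed or doubling time leaves the answer in the same neighborhood of a year and a half.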

We have about 18 months before there won’t be anyone left to discuss this. The mortality rate WHO just released is another failure to reduce the general to the specific: when people start dying in large numbers, the death rate will approach 90% or greater, consequent to the sheer anarchy and chaos that will result from the viral mortality of 70%, not to mention the lack of food for anyone remaining.

Let me be clear one more time:

This reduces the problem to two options, of which we can do one or both: we can develop a real bang-up vaccine really, really fast, and/or we can isolate populations with force. Of course, the West can relax, since its superior health care systems might prevent the assumed “seeding” numbers in the thousands, like we have now in West Africa. In that case, the West can expect to be alone when the dust settles … and to be a little hungry. I hope they’re happy with that. But no, there is no need to panic in the West, because clusters will indeed likely be snuffed out. But I hope we can see why …

That doesn’t much *&^% matter.

And speaking of the West, CDC’s recommendations to hospitals are a circus of failures to reduce the general to the specific; a process otherwise known as deduction. “Meticulous” guidelines cannot be followed stochastically in a general hospital environment when those hospitals are using BSL-2. Someone in USG with a brain needs to implement BSL-4 in regional hospitals … right now.

I think nature is about to “inform” us as to just how dumb we really are and future observers will quite likely regard many actions already taken or omitted as criminal negligence of the highest order.

Over the past couple of years I’ve been trying to get the message out that religion and political ideology are vehicles of misplaced emotion that undermine IQ and are squeezing humanity into destruction in a death spiral of ignorance and superstition. I’m afraid my message won’t be heard until billions die.

– kk

“A Prank, A Cigarette and a Gun”


An article about the murder of Meredith Kercher. I’ve agreed not to say much but if you’re interested in this case this is the most important read of all. The truth of what happened is in here.

Click below:

The So-called “Best Fit Report”

by Sigrun M. Van Houten

What is Best Fit Analysis, a quick intro

Students of statistics and stochastic analysis will recognize the term “Best Fit” as a statistical construct used to determine a most probable graph on a scatter plot. In the same manner, when the clandestine services seek to resolve a most probable narrative of events where information about that narrative is limited or inaccessible, one can assess the information that is available and construct a probabilistic scatter plot. Once that is done, it is possible to graph a “line” representing a best fit to those data points. In this case, the data points making up the scatter plot are individual facts or pieces of evidence, together with the probability each possesses as inference to a larger narrative; the “line” drawn as a best fit is the most likely narrative of events.
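As a purely statistical illustration of what “best fit” means on a scatter plot (this is the textbook least-squares construct being borrowed as an analogy, not the BFA procedure itself), a minimal sketch with synthetic data:

```python
# Ordinary least-squares "best fit" line through a noisy scatter plot.
# Purely illustrative: synthetic points scattered around a known line
# y = 2x + 1, then recovered from the data alone.
import random

random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = sample covariance(x, y) / sample variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
      / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"best fit: y = {slope:.2f}x + {intercept:.2f}")  # close to y = 2x + 1
```

In BFA the “points” are evidence items with probabilities rather than coordinates, but the goal is the same: the single line (narrative) that best accounts for all the points at once.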

This procedure has parallels in normative notions well understood in western law for some time. Namely, it deals with probative value, which, though perhaps not used in the strict legal sense, is used here as a catch-all to describe each data point. Each data point reflects the probability of a given piece of evidence. But what do we mean by “probability of a given piece of evidence”? In Best Fit Analysis (BFA) we begin by constructing a hypothesized narrative. When applied to criminology, the hypothesized narrative usually presents itself fairly easily, since it is almost always the “guilt narrative” for a given suspect or suspects in a crime. In this short introduction to BFA, I will show how it can be used in criminology. The advantage in criminology is that, rather than having to sort through innumerable hypotheses as is common in the clandestine services, we usually have a hypothesis presented to us in the form of an accusation or charge. We can then use BFA to test that narrative to see if it is the most likely one; and with perturbations of the same, we can likely identify alternative narratives more likely to be correct.

Norms of western law are dated in some cases, and some have not been updated for a long, long time. One of those areas apparently is the area of probative value. Typically in Courts of western nations it is presumed that a piece of evidence has “probative value” if it points to a desired inference (which may be a guilt narrative or some more specific component thereof). I’m not an attorney so I can’t categorically state that the concept of a “desired inference” really refers to an overall guilt narrative or simply the odds that the evidence points to a guilt narrative. But what I can say is that in practice it almost always is used in the sense of an overarching narrative or reality.

A case in point is a famous one in which a man was accused of murdering his wife during a divorce. It turned out that his brother had actually committed the crime. But once the brother was convicted, an attempt was made to convict the husband on the accusation that he had “contracted” with his brother to commit the crime and end his divorce case favorably. In the second trial, of the husband, the evidence was almost entirely circumstantial, and the jury relied heavily on an increase in phone activity between the husband and his brother leading up to the murder. Normally the brothers had not spoken on the phone often, and there was a clear, sudden increase in the frequency of calls. The jury interpreted this as collusion and convicted the husband of murder. Thus, when brought to Court, the desired inference of the testimony and phone records was that collusion existed: a piece of evidence being used to point to a guilt narrative. The problem, however, was that it was never shown why collusion should be a more likely inference than simple distress over a divorce. It is not unusual for parties in a divorce to reach out to family and suddenly increase their level of communication at such a time. In other words, on the face of it, one inference was just as likely as the other.

What legal scholars would say is that this is a reductionist argument and fails because it does not take into account the larger “body of evidence”. Unfortunately, this is mathematically illiterate and inconsistent with the proper application of probability. This is because it takes a black and white view of “reduction” and applies it incorrectly, resulting in a circularity condition. The correct answer is that

… One takes a body of evidence and reduces it to a degree sufficient to eliminate circularity and no further.

In other words, it is not all or nothing. In fact, this kind of absolutist understanding of “reductionist argumentation” is precisely what led to the results of the Salem Witch Trials. In those cases, probative value was ascribed based on a pre-existing hypothesis or collection of assumptions; essentially a cooked recipe for enabling confirmation bias either for or against guilt.

To explain what we mean: in the case of the phone calls between brothers, one cannot use a hypothesized narrative (the inference itself) to support the desired inference. This is circularity. But one also cannot reduce the evidence to such a degree that the body of evidence in toto is not weighed upon the merit of its parts. From the perspective of probability theory, this means we must first determine whether, as an isolated construct, the probability that the phone calls were for the purpose of collusion is greater than the probability that the calls were due to emotional distress. And it must be something we can reasonably well know and measure; while we may never attach precise numerical values to these things, it must at least be an accessible concept. Once we’ve looked at the odds of each of the two possible inferences, we can ask which is more likely. Unless the odds that the calls were for the purpose of collusion are greater than the odds that the calls were for the purpose of emotional support, there can be no probative value (in the sense we are using that term here).

The reason for the “isolation” is that we cannot determine the aforesaid odds by using the inference, or the larger narrative, to support those odds, because the narrative is the hypothesis itself. Having said that, once we have done this, if we can show that the odds are greater that the calls between brothers were for the purpose of collusion – even if the difference in probability between the two inferences is very small – the phone calls can then be used to assess the likelihood of the guilt narrative by considering them in the context of the body of knowledge. In other words, if we could associate numbers with this analysis for illustration: if we have 10 pieces of evidence each bearing only, perhaps, a 5% probability difference favoring the guilt narrative, it might be possible nonetheless to show that the guilt narrative is the most likely narrative. We consider all evidence, each piece with its own net odds, in order to frame the odds of the guilt narrative; we are therefore using reduction only to the extent that it excludes circularity, and no more. Both the number of evidentiary items and the odds of each matter: 3 pieces of evidence each bearing a net probability of 90% favoring a guilt narrative might be just as strong as 10 pieces bearing a net probability of only 5%. And it is these odds that must be left to the jury, as this is not a mathematical or strictly legal exercise but an exercise in conscience and odds.
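The weighing just described can be formalized with likelihood ratios. This is my own sketch of one standard way to combine independent evidence items; mapping the text’s informal “net probability” figures onto likelihood ratios is an assumption, not part of BFA as stated.

```python
# Combining independent evidence items via likelihood ratios:
# posterior odds = prior odds x product over items of
#   P(evidence | narrative) / P(evidence | alternative).
from math import prod

def posterior_odds(prior_odds, likelihood_ratios):
    """Bayes-factor update of odds on a narrative (hypothetical helper)."""
    return prior_odds * prod(likelihood_ratios)

# Ten weak items, each only slightly favoring the guilt narrative
weak = posterior_odds(1.0, [1.05] * 10)

# Three strong items, each 90% vs 10% (likelihood ratio of 9)
strong = posterior_odds(1.0, [9.0] * 3)

print(weak, strong)  # roughly 1.63 versus 729.0
```

Under this particular mapping the three strong items in fact dominate the ten weak ones; whether two such bodies of evidence are “just as strong” depends entirely on how informal “net probability” is translated into a likelihood ratio, which is one reason the final weighing belongs with the jury.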

Sadly, it is routine practice in western Courts to employ probative value in such a manner as to establish in the jury’s thinking a circularity condition, whereby the larger narrative of guilt or innocence is used to substantiate the probative value of individual pieces of evidence. The way to control this is for the understanding of probative value to change and modernize, and to require Judges to rule inadmissible any evidence that either does not point in any direction (net odds of 0%) or points in a different direction than the desired inference. This is a judgment call that can only be left to the Judge, since leaving it in the hands of the jury effects prejudice by its very existence. While there seems to be lip service to treating probative value as we’ve described, it appears to almost never be followed in practice, and most laws and Court regulations permit Judges to use their “discretion” in this matter (which, in practice, amounts to accepting evidence with zero probative value). Standards are needed to constrain the degree of discretion seen in today’s Courts and to render Judges’ rulings on probative value more consistent and reliable. One way to do this is to treat evidence as it is treated under BFA.

While many groups that lobby and advocate against wrongful conviction cite all sorts of reasons for wrongful convictions, tragically they seem to be missing the larger point which is that these underlying, structural and systemic issues surrounding probative value are the true, fundamental cause of wrongful conviction. For without proper filtering of evidence, things like prosecutorial misconduct, bad lab work, etc. find their ways to the jury. It is inevitable. But the minute you mention “structural” or “systemic” problems everyone runs like scared chickens. No one wants to address the need for major overhauls. But any real improvement in justice won’t come until that happens.

Thus, with BFA in the clandestine context, we take a large data dump of everything we have. Teams go through the evidence to eliminate that which can be shown on its face to be false. Then we examine each piece of evidence for provenance and authenticity, again only on what can be shown on its face. (I’m condensing this process considerably, but that is the essence of the first stage.) We then examine each piece in relation to all advanced hypotheses and assign odds to each. Once done, we look at the entire body of evidence in the last stage to determine which of the narratives (hypotheses) requires the fewest assumptions to make it logically consistent. The one with the fewest assumptions is the Best Fit. If we were to graph it, we would see a line running through a scatter plot of probabilistic evidence; that line represents the most likely narrative. On that graph, assumptions appear as “naked fact” and are “dots” to be avoided.
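The last stage described above, picking the narrative that needs the fewest unsupported assumptions, can be sketched in miniature. All narrative names and assumption counts here are hypothetical placeholders; in practice the counts come from analyst review, not code.

```python
# Final BFA stage as described in the text: among candidate narratives,
# choose the one requiring the fewest "naked" assumptions to remain
# logically consistent. Names and counts are hypothetical.
candidate_narratives = {
    "guilt narrative": 5,        # assumptions needed for consistency
    "alternative narrative": 2,
    "second alternative": 8,
}

best_fit = min(candidate_narratives, key=candidate_narratives.get)
print(best_fit)  # alternative narrative
```

The selection rule is just a minimization; the hard work is upstream, in assigning honest odds and assumption counts to each piece of evidence.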

For good examples of how BFA is employed, see my work on the Lizzie Andrew Borden, JonBenét Ramsey, Darlie Routier and Meredith Kercher cases. That this method is remarkably more effective than what we see in police investigations and Courts is well known to those who have used this technique for at least three decades now. But it has been somewhat under the radar of the general public because of its origins. My hope is that through public awareness this method can be applied to criminology and jurisprudence, resulting in a far greater accuracy rate in determining what actually occurs during criminal acts, especially in capital cases where the added overhead is well worth it.

~ svh



Big Sis modeling in … an oilfield. Our second tour of Bakken with our dad.

I’ve written on the petroleum issue before, and the more I learn, the worse it seems to look. Yes, there is good news about unconventional oil, like tight oil. And there are all kinds of alternative energies out there, too. But the choices for humanity are beginning to narrow to a point where open, frank discussion about what is going on is desperately needed.

First, I should mention something about the developments here in the States regarding new sources of oil that have everyone in the industry so excited. These sources are unconventional oil, and there are two types. First, there is what is called shale oil, AKA “tight oil”. This is really just conventional petroleum that happens to be found inside shale rock. The only reason it hasn’t already been exploited is that the drilling techniques needed to get to it are more complicated than a simple, vertical bore shaft drilled in one spot, into which the petroleum is “loose” enough to flow on its own. With tight oil, not only do you have to drill horizontally to exploit the roughly horizontal orientation of the shale rock, you have to “encourage” the oil to move into the bore pipe because it is tightly bound to the shale. This is done by a process that has come to be known as “fracking”, in which explosive charges are placed in the bore pipe to perforate the casing and water is pumped in under pressure to crack the shale, creating small paths through which oil can flow into the bore pipe.

Second, we have what is known as oil shale. Read that carefully; I just swapped the order of the words to get a new beast. And that’s why people get confused over these two types of resource: their names are about as similar as one can get. But oil shale is a totally different thing. Here, the “oil” is not simply conventional oil tightly trapped in a rock; it has not been fully developed by nature and has not reached the final stage of its conversion into petroleum. It is in an intermediate stage between fossil and true oil. This material is called “kerogen”. The problem with kerogen is that completing its transition to petroleum, which you must do to make it a viable fuel source, requires considerable heating.
If we just look at the energy equation it means that we are putting more energy into the production of oil from oil shale than we get out (with current techniques). And what many economists and optimists don’t seem to realize is that this problem is a physics problem, not just an economic one. In other words, there is no technology or economic model that will change this. Kerogen, in the form we know it, will never be economically viable in and of itself.

So-called “Peak Oil” tells us that, because petroleum is a finite resource, it must exhaust at some point in the future. Like so many academic statements, this is incontrovertible, but as is often the case, practical reality does not so easily admit of a simple application of a general principle to a specific problem. Such is the case with Peak Oil. People promoting this theory are effectively overgeneralizing to a specific set of circumstances and reaching erroneous conclusions. I’m going to try to sort out this mess here and explain what Peak Oil means for humanity in realistic, probable terms.

First, I noted that the energy one puts into extracting petroleum must obviously not equal or exceed the energy one can extract from the recovered petroleum itself. Otherwise, there isn’t much point in extracting it. But from this point forward the popular discussion of Peak Oil diverges into Wonderland. The crux of the problem, from what I can see, is that those who understand the geology and science of petroleum don’t understand economics, and those who understand economics don’t understand the fundamentals of science. Add to that the inherently opaque nature of the petroleum industry and its methods, and it is no wonder that there is immense confusion over this topic. Okay, so why is “Peak Oil” an “overgeneralization” of the human energy consumption problem? First, we need to point out that the idea that something is finite, and that one’s ability to extract it in situ will likely follow a bell curve in which the rate of recovery rises and then falls, is an incredibly general proposition. And it’s that phrase, rate of recovery, that we need to understand better.

All finite things will tend to exhibit bell curve, or normalized, behavior; that is, one’s extraction of them in situ (limiting the generality to resource exploitation for this discussion) will likely get faster in the beginning, then slow down as the resource depletes. But global Peak Oil is just one application of this broad generalization. Notice that an oil well, if all else remains the same, will also tend to extract petroleum at normalized rates, increasing sharply in the beginning and tapering as its reach into a reservoir diminishes. This has nothing to do with global peak oil. Likewise, a reservoir will, all else being equal, tend to follow a normalized pattern of extraction rates. This also has nothing to do with global peak oil. And please notice the qualifier “all else being equal”. Let me explain. The rate at which an oil well can extract oil from a reservoir, assuming the supply from the reservoir remains essentially constant (it’s really big), depends on numerous factors. The depth, diameter and bore length of the bore hole all affect that value. The fatter the pipe, the faster you can get petroleum out. Depth can affect pressure, which will affect how fast you can pump it out. Indeed, even your pumping equipment can affect those rates. But things like the permeability of the rock also matter. I should point out that oil doesn’t usually sit in the ground in pools. Rather, it is “locked up” in the pores of rocks. Different rocks allow it to escape at different rates. Shale, for example, doesn’t give it up easily. So that, too, affects the rate of recovery. Thus, the reach an oil well has into a reservoir is a time-dependent function that is highly localized and dependent on all the factors mentioned. It may therefore be possible to drill another well nearby, but importantly, no less than some minimum distance away, to increase the flow rate. That minimum viable distance is determined by those same factors.
Finally, for any given well, as the pressure begins to drop due to the peaking of that single well, not necessarily the entire reservoir, one can increase the internal pressure, forcing petroleum out faster, by boosting it with water. If that isn’t enough, you can inject gases under pressure to increase the flow rate.

In other words, the rate at which a single well delivers petroleum product is highly dependent on capital investment in the well. And producers have to consider how much they want to invest based on market conditions and the overall performance of their recovery operations. Thus, the so-called “bell curve” becomes a joke. One can artificially shape this curve however one wants, depending on all the factors mentioned, because at the oil well level the supply is halted only as a time-dependent function of the presence of oil locally around the well bore. What this means is that you can drain the region around the bore hole, but over a very long time the rest of the reservoir will push oil back into that region and refill it. So that, too, can be seen as a production rate variable. The reader should be able to see clearly now that the “peaking” of an oil well is totally dependent on numerous variables, only one of which is the presence or availability of oil locally around the bore hole. Thus, simply yanking production rate figures for a well out of context and suggesting that it or its reservoir has hit a fundamental peak of capacity based on those numbers is absurd. You cannot know that unless you have access to all the data and variables I’ve mentioned, and only then can you analyze the well and understand whether an observed peaking is due to some natural, finite barrier or is rather due to the particulars of the well’s design and operation.

We can extend this discussion in scale and apply similar logic to the reservoir itself. We cannot know if a reservoir is reaching a true, finite and natural peak unless we know about each of those wells and, importantly, what percentage of the acreage from which a well is viable is actually covered by a well. So, in the same way, one cannot pluck data from a reservoir and conclude anything from that.

At the global level the same limitation applies. We need to know the true facts about each reservoir in order to reach any conclusions about:

  1. Actual, existing production capacity globally
  2. Total, defined reserves remaining

But can’t we see that if global well spudding is increasing and peak production in various countries has occurred, a global peak must be occurring in the near term? Yes … unless we consider the powerful impact economics has on all this. The United States reached a peak around 1970 and its domestic production declined thereafter (until recently, as shale oil has pushed production up considerably). But what we don’t know is why. Was it because the actual recoverable oil had diminished to something below one-half its original amount? Or was it because the investments necessary to continue producing the fields in the States were considered economically unsound given the global prices for petroleum at the time? Did petroleum companies simply forego water and gas pressurization, increased drilling into existing reservoirs, etc. because it was cheaper to buy overseas? Did environmental regulation drive this? There is reason to believe that other factors were in fact at play, because domestic production in the United States has risen again even if we control for shale oil production. And much of that is occurring from existing fields. But there’s more. Various agencies tasked with estimating reserves continually come up with reserve figures much, much higher than peak oil advocates claim. The USGS and IEA, while they don’t agree on all the numbers, clearly state that conventional oil reserves in the United States are over 20 billion barrels. Where did that come from? It comes from the same fields that have always been producing petroleum in the United States. But for whatever economic reason, the additional investments in those wells simply have not been made. That is changing now. If the United States were to continue consuming at its present rate, and if that 20 billion barrels were the only source of oil for consumption in the States, it would last about 3 years. But since Canada supplies about ¼ of U.S. consumption, and shale oil is providing an ever-increasing portion (quickly approaching ¼), that number is likely closer to 10 years.
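The arithmetic behind the 3-year figure is simple division; in this sketch the consumption rate (roughly 19 million barrels per day of total U.S. consumption in this period) is my own assumed round number:

```python
def years_of_supply(reserves_bbl, daily_consumption_bbl):
    """Years a reserve lasts at a fixed consumption rate, ignoring any growth."""
    return reserves_bbl / (daily_consumption_bbl * 365)

# ~20 billion barrels of stated conventional reserves vs. an assumed
# ~19 million barrels/day of total U.S. consumption
print(round(years_of_supply(20e9, 19e6), 1))  # about 3 years
```

The same function gives the longer horizons in the text once Canadian imports and tight oil are netted out of the consumption figure.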

Numbers for shale oil are about 20 years; that is, if all oil were drawn from those fields, it would last about 20 years. Combined with the remaining conventional oil, that is at least 30 years of petroleum supply (even assuming Canada disappeared). But Canada’s reserves are yet larger, and its consumption is an order of magnitude lower than that of the United States (as is its population). Thus, realistically, the U.S./Canada partnership, which is unlikely to be broken, will easily put the U.S. supply beyond 50 years. And that assumes the Middle East and everything else just vanishes. If we plug that back in, it’s even longer. Let’s be clear: regardless of what’s going on around the globe, the U.S. and Canada are not going to trade their own oil away if it means their own consumption must drop. Nor would any other nation. Shale oil production in the United States is climbing meteorically, to about 4 million barrels a day in 2013. This is unheard of, since less than 5 years ago it was virtually zero.

The more challenging resource, oil shale (that is, kerogen-bearing rock), constitutes a U.S. reserve so large it is hard to calculate or predict where it might end. Needless to say, we have about 50 years to develop it and get it online. It seems likely that this goal will be achieved, but I’ll discuss its challenges more later.

Okay, so is the problem solved? Can we all go home now? Not hardly. The same nuances mentioned earlier that better inform our discussion of peak oil also inform our understanding of the current petroleum situation, including the shale oil and oil shale options. Thus far, we’ve spoken only of production rates of petroleum. But here is the real, fundamental problem with petroleum: when it was first discovered and used on a wide commercial basis, beginning about 1905, it was so easy to obtain that in terms of energy it only cost us about 1 barrel of crude in power generation to draw and collect 100 barrels of crude for sale in the marketplace. Some speak of this relation as the Energy Returned On Energy Invested, or EROEI, ratio. I alluded to it above. It begins by noticing that if a fuel source is to be viable, then we cannot expend more energy to get it than the energy it provides to us. In the case where those energies are equal, EROEI = 1. In the event that we consume more energy to get petroleum than the recovered petroleum provides, EROEI < 1. This is unsustainable also. Therefore, for petroleum, or any fuel, to be viable it must have an EROEI > 1. Having cleared that up, some confusion over how physics and economics overlap on this matter has gushed out on the internet and elsewhere like water over Niagara Falls. Why? If we recall, around 1905 the EROEI must have been 100, since for every 100 barrels of crude we could sell, we expended 1 barrel’s worth of energy to get it out of the ground. The problem is that since that time the EROEI has dropped precipitously, by about one order of magnitude. Thus, the global average EROEI is about 10 nowadays. But what this implies is what seems to be confusing people. Some think that if the EROEI gets any closer to 1, we’re doomed. Some have even said that you need an EROEI of 3 or 4 to make petroleum economically viable. This is not true, and it is based on certain assumptions that need not be true either.
In order to be not only economically viable but economically explosive in its market power, the EROEI simply needs to be greater than 1. That’s all. Let me explain.
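The claim that EROEI > 1 is the only hard physical requirement can be restated as a net-energy fraction; this sketch simply encodes the definitions above and is not specific to any real field:

```python
def net_energy_fraction(eroei):
    """Fraction of recovered energy left as surplus after paying extraction costs."""
    return 1 - 1 / eroei

# EROEI = 100 (circa 1905): 99% of recovered energy is surplus.
# EROEI = 10 (rough global average today, per the text): still 90% surplus.
# Any EROEI above 1 leaves some surplus; at exactly 1 there is none.
for e in (100, 10, 3):
    print(e, net_energy_fraction(e))
```

Note how slowly the surplus fraction degrades: the fall from 100 to 10 costs only nine percentage points of surplus, which is why the physics alone does not doom a lower-EROEI source.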

There is this thing called “economies of scale”. To explain its relevance here, consider the following thought experiment. Suppose we discover a massive petroleum reserve in Colorado that contains some 2 trillion barrels of recoverable “oil”. At current U.S. consumption rates, if every drop of petroleum consumed in the U.S. were pulled from that one field, it would last 275 years. Ah, but you say, that reserve is kerogen. Kerogen is the play I referred to above, where I pointed out that we had about 50 years to figure out a way to economically utilize it. This is because the other oil, so-called “tight oil”, or shale oil, will run out by then. But the big, big problem with kerogen is that lots of energy is needed to make petroleum out of it. Current retorts (heaters for heating kerogen) run at about 400 °C and have an EROEI of about 3 to 4. Of course, this is first-generation technology, but for the sake of discussion, let’s assume it is 3. For demonstration, we assume that the current, conventional EROEI on oil is about 10. How could kerogen possibly be cost effective? Economies of scale. Great, problem solved? Nope. Let me finish.

Let’s assume, for the sake of discussion, that we have an infrastructure that can begin producing petroleum at incredibly high rates. How is this possible? Kerogen is located only about 500 meters down and can be mined directly. This means there are no “pressure curves” or constraints on how much can be removed how fast. It’s simply a matter of having sufficient resources to do the work. But more importantly, these rates can be achieved because, as one increases the rate of recovery, you are not fighting against a finite maximum lode (effectively), and the economies of scale work because it is one field, not several fields separated over great geographic distances. Thus, as petroleum flows out at rates far exceeding what was possible before, the price of that petroleum drops. And it keeps dropping as the market is flooded with petroleum. Imagine that before this operation commences oil costs 1 dollar a barrel (to make the math simpler). Let us say I have 100 dollars to spend on energy. So, I purchase 100 dollars’ worth of energy. But it took 10 dollars’ worth of energy to get the oil I’m using as energy. So, my net return is 90 barrels of crude. Now, suppose after operations commence 100 dollars buys 1000 barrels of crude. This means that I can net 900 barrels of crude for the same 100 dollars. My energy has gone up dramatically but my economic cost is constant. Of course, our EROEI is lower now, so we have to adjust and recalculate. Suppose 100 dollars nets 200 barrels of crude with an EROEI of 3. Then, for the same economic cost, I have roughly doubled my energy, and I have done so in the same amount of time because, by economies of scale, I can obtain that petroleum twice as fast as before. And I can achieve that production rate because I do not have to worry about running out for quite a while.
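The dollar arithmetic of this thought experiment follows directly from the same net-energy idea; the prices below are purely illustrative, as in the text:

```python
def net_barrels(dollars, price_per_bbl, eroei):
    """Surplus barrels your money buys once the extraction energy is paid for."""
    gross = dollars / price_per_bbl
    return gross * (1 - 1 / eroei)

# Before the field opens: $1/bbl and EROEI = 10, so $100 nets ~90 barrels.
print(net_barrels(100, 1.0, 10))
# After the market floods: if the price falls far enough that $100 buys
# 300 gross barrels, even at EROEI = 3 the net is ~200 barrels, roughly
# doubling the energy obtained for the same economic cost.
print(net_barrels(100, 100 / 300, 3))
```

The point of the sketch is that a lower EROEI can be more than offset by a price collapse driven by production volume, which is the economies-of-scale argument in miniature.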

So, as with peak oil, simply blurting out EROEI doesn’t explain everything. You have to take all variables into account. Okay, will we finally get to the bad news? Yes, we are now ready to see the deeper problem and the key point so many are tragically missing. I somewhat glossed over economies of scale and production rates for kerogen and assumed that we actually had the ability to ramp up to that. In other words, we have to be able to invest in that massive infrastructure in Colorado to start this voracious beast up. Do we have what we need? Well, we have the petroleum in shale oil. But is that really all that matters? Of course not. We will not be able to reach that kind of production rate in kerogen to petroleum with excesses of tight oil alone. And this is where it gets interesting.

Those that study economics and petroleum often point out that the strength of an economy is largely dictated by the per capita energy per unit time that a country or region achieves. Energy per unit time is power, and it is measured in watts. So, what they are saying is that the strength of an economy ultimately falls back to per capita power consumption. This is why climate change is so controversial overseas. Other countries know this, and they see attempts by western, industrialized nations to limit CO2 emissions as nothing more than curbing per capita power consumption, thus derailing economies. For the western world, the association between per capita power consumption and CO2 is not nearly as strong, so it does not affect them as badly. But for countries still burning lots of coal and for countries without efficient cars and trucks, such cutbacks in CO2 would have drastic effects on their ability to industrialize.

But for our discussion, per capita power consumption is important for what it does not capture. To explain this, another example is in order. Consider a farmer living in the 1700s in North America. They plow fields using a mule and bottom plow. The per capita power consumption for the farmer is, say, x. Now, a farmer in North America in 2013 performs the same task using a very small, diesel-powered tractor with plows, harrows and the like. In this case, the per capita power consumption is considerably higher, and we’ll denote it y. Notice that over the years the transition from x to y is gradual, as each new technology and piece of equipment increases the power available to the operator. But why, exactly, does this seem to be correlated with overall quality of life? Why are better health, education and so on so common as power consumption increases? The reason lies in the definition of energy and power.
In physics, the term “useful work” or “work done” in an “environment” refers to the effect, or result, of applying energy to a defined “environment”. Thus, when we apply energy to an environment we are dumping energy into that environment in some controlled, intelligent manner. In the case of the farmer example, the “environment” is the soil, or the Earth itself, which we transform intelligently into something favorable to plant growth. This takes lots of energy. In fact, the mule and plow ultimately expend essentially the same total energy as the tractor does. The difference, however, is how fast it happens. The tractor does it orders of magnitude faster. In other words, it is the power of the tractor over the mule that makes the difference. Thus, we can fundamentally improve our lives by intelligently applying power to satisfy a human need with speed, giving us the time to engage in other worthy tasks. We can use that time for leisure, education or other work-related tasks. In the end, the quality of life improves.
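The mule-versus-tractor point (same total work, very different power and therefore time) can be put in numbers. Both power figures below are rough assumptions chosen only for illustration:

```python
field_energy_j = 3.6e9       # hypothetical total work to plow a field, in joules
mule_power_w = 500           # assumed sustained output of a mule and plow
tractor_power_w = 30_000     # assumed drawbar power of a small diesel tractor

def hours_to_finish(power_w):
    """Time = energy / power; the total work done is identical either way."""
    return field_energy_j / power_w / 3600

print(hours_to_finish(mule_power_w))     # 2000.0 hours
print(hours_to_finish(tractor_power_w))  # about 33 hours
```

The energy bill is the same in both cases; the tractor simply pays it dozens of times faster, and the difference is recovered as free hours for the operator.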

Thus, per capita power consumption is key to the advancement of humanity, period. We have no time to march in protest of an unjust ruler, no time to educate our children, no time to do other useful work such as plowing our neighbor’s garden, or anything else, if we are captive to spending most of the hours of our lives slowly expending all the energy necessary for our survival. Power is freedom.

But power provides other improvements to quality of life indirectly as well. We can afford to have clean, running water because we had the power to dig enough water wells, we have the power to run the factories that make our pharmaceutical medicines which alleviate suffering, we have the power to build massive buildings called schools and universities in which our capacity to learn is enhanced, and on and on.

So, in the petroleum discussion, when we speak of “ramping up” to a new way of obtaining petroleum which requires more upfront energy than the old forms of petroleum, we are talking about a per capita power consumption problem. The shale oil can solve that for us. But what this discussion is missing is a key ingredient in this ramp up. Once again, we have to be careful not to overgeneralize. Generally speaking, this power statement is correct. But in reality, we have to consider something else. And that something else is the “environment” we just discussed. We can usually just ignore it because

The rate at which we can access the environment is assumed to be infinite, or, at minimum, proportionately greater than the rate at which we are expending energy into it.

This will not always be the case. Let me explain. In the case of the tractor, of course we have access to the ground, because that is what we’re farming, and there is no obvious constraint on how fast we can “get to it”. But what if we change the example a little? Suppose we now think of a factory that takes aluminum ore mined from the Earth and smelts it, producing ingots of aluminum that can then be shipped to buyers who may use them to build products that society needs. Well, the mines that extract the ore can only do so at finite speed. And if that resource is finite, and especially if it is rare or constrained in volume, the rate at which we can recover it is indeed constrained. Now, if I am a buyer of ingots and I make car parts out of them, the rate at which I can make cars no longer depends solely on the power consumption available on my factory assembly line. Now I have to consider how fast I can get ingots into the factory. This is a special case, and we can see that generally this is not actually an issue. But to understand that, we have to increase our altitude yet more over the forest to see the full lay of the land. Ultimately, all matter is energy. We should, in principle, be able to generate raw materials from energy alone if the technology for it existed. As a practical matter, however, we can’t do that. We depend on raw materials, aluminum being but one example, which come from the periodic table of the elements and the plants, animals and minerals of the Earth. But the most constrained of all of them is the periodic table. As it turns out, petroleum is not our only problem and, not surprisingly, the crisis of the elements is of a very similar nature. It isn’t really that we are “running out”; it’s that the rate at which we can access them is slowing down while consumption goes up. And that’s the problem with petroleum, too. We have plenty of reserves, but our ability to access them fast enough is what is getting scary.
Unfortunately for us, raw materials, and rare-earth metals especially, are hard to find on the Earth’s surface. Almost all of the rare elements are deep within the Earth, much too far down to be accessible. Thus, our supply chain is constrained. This is why plastics have become so popular over the last three or four decades. In fact, some 90% of all manufacturing in the world now depends in some way on petroleum, ironically, because the raw materials we used to use are drying up. And the rate at which we can recycle them is not nearly fast enough.

So, the very same problem of production rates in petroleum exists for the elements, and what we have not discussed is the world outside the United States. I have deliberately focused on the U.S. and Canada for a reason. The global situation beyond them is dire. Why? Because, even if we solve the petroleum production rate problem in the United States, as I’ve suggested it will be,

It will be frustratingly constrained in its usefulness if dramatic improvements in the rate of production of elements of the periodic table are not found rapidly.

And that’s just the U.S. and Canada. The situation in the rest of the world is far, far worse. There is only one place where such elements in such large quantities can be found and exploited rapidly. And it is not in the ground; it is up, up in the sky. Near-Earth asteroids are the only viable, natural source that can fuel the infrastructure creation necessary to drive the kerogen production needed. Put more fundamentally: if we don’t find a solution in the staged development of shale oil and then kerogen, coupled with massive increases in the natural resources that rising power consumption can take advantage of, humanity will die a slow, savage and brutal death.


What we really need here to express this economic situation is a new figure of merit that combines per capita power consumption with the rate at which we can access the raw materials being manipulated by any given power source. We cannot perform a meaningful study of this issue without it. For now, I will call it Q, and it shall be defined on a per-operator basis (analogous to per capita, but using the number of operators a technology requires, a technologically determined value). I shall define it as the product of a given power consumption and the mass of raw material, in kilograms, operated on by the power source per second. Q would be calculated for each element, mineral or defined material it operates on, using a subscript. So, for aluminum it would be:

Q_Al = P × (dm_Al/dt)    [watts times kilograms of aluminum processed per second]

And for any particular, defined economic enterprise, I will take the collection of such materials to be a mean of all such Q and denote it:

Q_m = (1/n) × (Q_1 + Q_2 + … + Q_n)
Now, the Q_m for the kerogen-to-crude conversion (retorting) must be greater than some minimum value that is actuarially sound and economically viable. For a sufficient value we can expect economic prosperity, and for some lesser value we can expect a threshold of survival for humanity. That threshold is determined by the pre-existing Q_c (Q based not on an operator but on a true per capita basis) and the maximum range of variance an economy can withstand before becoming chaotic and unstable (meaning, before civilized society breaks down). So, what do we mean by death and destruction? Well, here’s the bad news.
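As a sketch of how Q might be computed in practice, following the verbal definition of the product of power consumption and raw-material mass rate; the per-operator figures here are invented purely for illustration:

```python
def q(power_w, mass_rate_kg_per_s):
    """Q: per-operator power consumption times the raw-material
    mass rate (kg/s) that the power source operates on."""
    return power_w * mass_rate_kg_per_s

# Invented example: a 1 MW-per-operator line processing 0.5 kg/s of aluminum,
# and a 2 MW-per-operator line processing 1.5 kg/s of iron.
q_al = q(1.0e6, 0.5)
q_fe = q(2.0e6, 1.5)

# The enterprise-wide figure Q_m is the mean over the materials it handles.
q_m = (q_al + q_fe) / 2
print(q_al, q_fe, q_m)
```

A high Q can come from either abundant power or a fast material supply chain; the figure of merit falls if either one is starved, which is exactly the coupling the text is arguing for.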

The problem we are facing is a double-faceted one.

We are seeing a reduction in global “production” rates for both energy and matter.

As populations increase, this will get worse. Only Canada and the United States appear to be in a position to respond, with the favorable geology and sufficient capital, technology and effort to compensate for dramatic losses in conventional oil production rates: if you are pumping water and gases into oil wells now to boost production, the drop-off after peak won’t be a smooth curve but will look more like a cliff. And now we can see why the Peak Oil concerns are real, but for the wrong reasons. The problem is that though the oil is there, it is costing more and more to get it out, and the raw materials (capital) needed to invest in ever more expensive recovery, i.e. economies of scale, are not forthcoming. The “cliff” is economic, not physical. Thus, even in the few countries where reserves are still quite large, economies of scale do not appear to be working, precisely because of a lack of raw materials (capital) and, to some degree, energy. The divergent state of affairs between North America and everywhere else is due to several factors:

  1. Whatever the cause, conventional petroleum production rates are declining or are requiring greater and greater investment to keep up with prior production rates. This could be because of fundamental peaking, or because the nominal investments needed to improve production rates simply were not made until now.
  2. Tight oil is the only petroleum product that has been shown to be economically viable and that is not affected by the problem in (1).
  3. North America has by far the most favorable geology for shale, which is why it has been possible to start up tight oil production in Canada and the U.S.
  4. North America has the strongest economy for funding the up-front, very high investment costs that a new tight oil infrastructure will require.
  5. The U.S. and Canada have been studying their local shale geology for over 30 years and have developed sufficient knowledge to utilize it, to a degree far surpassing what has been done anywhere else.
  6. North America has more advanced drilling technology for this purpose than any other locale can call upon or utilize.
  7. Despite the massive consumption in the United States, Canada and the U.S. appear to be at or near energy independence now, which means that instabilities around the globe will not likely have a negative impact on tight oil production through economic shock (at least not directly).

The biggest question for the United States is this: what are you going to do about raw materials? The good fortune found in tight oil will avail nothing if the United States doesn’t also dramatically increase the rate at which it can “produce” raw materials, particularly elements of the periodic table. The only way to do this is to create a crewed space flight infrastructure whose purpose is to collect these materials from asteroids, where they appear in amounts astronomically greater than anything found on Earth. If the United States fails to do this, it and Canada will go the way of the rest of humanity. To explain: it may survive the tight oil period, but the problem will present itself when the switch to kerogen is attempted in some 30 or more years. And it would take 30 years to develop such a space flight infrastructure. There is no room for gaps. Because of kerogen’s poor EROEI, it will absolutely depend on higher production rates of raw materials, i.e. an increased flow of capital.

Of course, at some point alternative energy will have to be developed and the entire prime mover infrastructure will have to be updated. That is really the end goal. But this is no small task: it will cost trillions and take decades to convert humanity over to a fully electric infrastructure, which is one of the key requirements for comprehensive conversion to alternative energies. And alack, we do not have the raw materials on Earth to build enough batteries for all of it. Thus, once again, the asteroids loom as our only hope. When and if we achieve an energy infrastructure that does not depend on fossil fuels, we will have taken a key step in our development. At that point, for the first time, humanity will be progressing on the fundamental physical principles common throughout the universe rather than those specific to Earth. It will be a seminal transition.

What does this mean? I had written a few paragraphs on that question but, realizing how depressing it all is, I’ll leave it at this: USG needs to start developing this space infrastructure yesterday, and it needs to keep hammering away at kerogen. I hope I’m wrong about this.

– kk


An image of the surface of Titan

As you may know, I try to keep my ear to the ground on matters of crewed space flight. I wanted to share with my readers a major development, a paradigm shift, going on right now in space transportation. At the close of the fifties the United States and the Soviet Union were competing with each other to fly higher, faster and farther than any before them. Yuri Gagarin lit the candle when he became the first human being to orbit Earth. But the lesser-known irony of space flight is this: lifting objects into low Earth orbit (LEO) was a technological barrier for humanity that was briefly overcome only by sheer bravado and brute force, in a way that would never be economical. Rockets were constructed that staged their fuel on the way up, dropped their airframes in pieces like a disintegrating totem pole and reached over 17,000 mph just to place a few hundred pounds into orbit. The truth is, no one really had the technology needed to do this economically. But the US and the USSR each convinced the other of one thing: while it might be a silly waste of money, both of them could place a relatively lightweight nuclear warhead into orbit and pose a threat to the other. Once they proved it, everybody went home. Space flight for the last 50 years has been a stunt that only governments could afford and that only mutually assured destruction could inspire. Unless someone could find a way to reuse these craft, especially the most expensive components, flying to LEO by throwing away all of your hardware on every flight would never make sense in the hard reality of economic necessity.

But reusing these machines was a technological leap beyond Sputnik and Apollo … a big leap. And for that reason space flight floundered for decades. And yes, we’ve all heard what a big waste the Space Shuttle was. But I want to offer a counterpoint that history gives us in 20/20 hindsight. The cost of the two most expensive components of a rocket, the engines and the airframe, dominated all discussion of space flight. Because we couldn’t overcome those two problems, mass, yes weight, became the deciding factor in every mission study. From deep space probes to space stations, from the shuttle to the moon and Mars, weight was the big nasty sea dragon that ensured talk of frequent missions was hopeless. You can’t go to Mars with a pocketknife and a Bunsen burner, as I like to say, but many proposed it anyway. The reality was that weight limits, born of technological limitations that in turn drove the economics, ensured we’d get nowhere. Everything hinged on making LEO economical and loosening the maddening mass restriction that has bedeviled the human space enterprise for some fifty years now. I am happy to report that one of the two technological hurdles needed to overcome this limitation has been cleared and the second is being aggressively run to ground.

The first problem is that rocket engines are powerful; so powerful, in fact, that they are something like 20 times more powerful than jet engines by either weight or volume. They operate at temperatures in the 6000 degree F range with combustion pressures over 1000 psi. Jet engines come nowhere near being able to handle this. And you need rockets because, frankly, we don’t have any other way to get to LEO. Air-breathing hybrids are, contrary to popular myth, decades away (kind of like fusion power), because we still don’t know how to combust a fuel/air mixture over a wide range of speeds in a single scramjet design, for example. The only foreseeable technology is rocket engines. But there’s the rub. We can’t just throw them away on every flight because they are, along with the airframe itself, by far the most expensive components of the rocket. And that is what I meant by technological barriers: we threw these expensive things away not just because we couldn’t carry their weight into orbit, but because our technology was too primitive to build a rocket that didn’t destroy itself after a single flight. Specifically, the key component is the turbo-pump; in reality, rockets are just pipes and wires built around the turbo-pump. The turbo-pump is the key technology and the really expensive part of the machine. And we didn’t know how to build a turbo-pump that you could keep firing over and over, like driving your car to work every day. Previously, we had to throw them out after every drive, like throwing out your car engine every time you go to the grocery store. That would never be economically sustainable, and it took us nearly 50 years to solve the problem. And that’s where the Space Shuttle comes in.

The Space Shuttle was supposed to be reusable, but as we all know, it never really was. The one component of the Shuttle that gets little attention, though, is the set of high pressure turbo-pumps of its RS-25 engine. Over nearly 30 years of operating the Shuttle, and because it was supposed to be reusable, NASA kept tinkering away at the turbo-pumps; flying them, studying them and enhancing them. It took billions of dollars, years of time and countless people-hours. But over those years engineers at NASA finally began making headway on an engine that was supposed to be reusable, that began as a near total throwaway, but that was gradually becoming a true, reusable, high powered rocket engine. They figured out how to reduce the temperature and pressure a bit, and “Block I” was born. Then they figured out how to handle the massive cavitation and “out of round” motion of the turbo shaft at 37,000 rpm and 85,000 horsepower, calling it “Block II”. They shot-peened the turbine blades to resist hydrogen embrittlement and used silicon nitride ball bearings, something that took months of jig testing and all sorts of workarounds to resolve the heavy wear on bearings and surfaces that had been forcing them to either throw the engines out after one flight or overhaul them after every flight. The advantages of this new engine (10 flights between overhauls, an incredible advance) were so impressive that NASA set about capitalizing on what it had learned, with plans for a replacement engine rated at 100 flights between overhauls (these were to be the RS-83 and RS-84). In other words, for the first time they really knew how to build a truly reusable engine. Of course, by the time Block II came out and work on the new engine had started, the Shuttle was about to retire and the new engine program was cancelled. But NASA is more or less open source, and their work and findings spread around the research community. People learned the lessons so hard-earned by NASA.
People started building turbo-pumps with silicon nitride bearings, used new computer models developed by the NASA Shuttle team for damping (another huge problem) and generally incorporated every lesson they could from NASA’s experience. Numerous papers have been written on this, and aeronautical engineering professors write almost verbatim from NASA documents extolling the lessons learned. Around 2012 SpaceX tested a totally new turbo-pump. Its overall thrust is a dramatic increase over its predecessor’s. The turbo-pumps can be re-fired and the rocket is reusable. Something big has happened. There are, of course, narcissists in our capitalist world who will never give credit where credit is due, but yes, all this came from NASA, and that is clear and unambiguous.

SpaceX is, for now, also the only company looking at reusing the airframe, the other truly expensive component, and is the first to make real, tangible progress in that direction. Interestingly, that problem is intertwined with the turbo-pump problem, for the simplest technological solution is just to fly the booster right back to the launch pad when it’s done lifting your cargo. This was unthinkable before reusable turbo-pumps were perfected (it requires three separate firings in one flight, and most turbo-pumps will literally blow up if you try to shut them down gracefully, much less start them back up). It won’t be long before the rest of the industry jumps on this bandwagon and does the same. In fact, we think they already have, though we cannot confirm that it comes from NASA research directly. Everyone is now fascinated with reusable rocket engines and airframes. The engineering streets are abuzz with talk. To see why: of a 50 million dollar launch, only about 200,000 dollars goes to fuel. The turbo-pump suite costs 10 to 40 million dollars (and today it is just thrown away). When the books are settled, it costs at least 3500 dollars per pound to take cargo to LEO. But if you reuse the rocket and its turbo-pumps, that cost will plummet to less than 100 dollars a pound. In the next 10 years the boundaries of space are about to explode with activity. There are more resources easy to recover and bring to Earth than anything imaginable from past experience. It will be like the ’49 Gold Rush squared. And all this talk about “lightweight” and “mass restrictions” will sound rather quaint.
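The arithmetic here is easy to sketch. A minimal back-of-the-envelope calculation follows; note that the 14,000 lb payload and the 1 million dollar refurbishment cost are my own illustrative assumptions (chosen to be consistent with the roughly $3500/lb figure above), not numbers from any launch provider:

```python
# Rough cost-per-pound to LEO, expendable vs. reusable.
# All figures are illustrative assumptions, not provider data.

LAUNCH_PRICE = 50_000_000   # total price of an expendable launch, USD
FUEL_COST = 200_000         # propellant cost per flight, USD
PAYLOAD_LB = 14_000         # assumed payload to LEO, lb
REFURB_COST = 1_000_000     # assumed per-flight refurbishment of a reused booster, USD

# Expendable: the entire vehicle is consumed on every flight.
expendable = LAUNCH_PRICE / PAYLOAD_LB

# Reusable: hardware amortizes over many flights, so the marginal
# flight cost is roughly fuel plus refurbishment.
reusable = (FUEL_COST + REFURB_COST) / PAYLOAD_LB

print(f"expendable: ${expendable:,.0f}/lb")  # ~$3,571/lb
print(f"reusable:   ${reusable:,.0f}/lb")    # ~$86/lb
```

The point of the sketch is that fuel is a rounding error; once the hardware stops being consumed on every flight, the per-pound cost collapses by well over an order of magnitude.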

And btw, this is why NASA is now focusing on a new Space Launch System (that mondo rocket that looks awfully like the Saturn V), which is meant for deep space flight. They already know the LEO problem is solved and are leaving it to private industry. That’s what all the confusion and hoopla in the political world, vis-à-vis space flight, is about; people are realizing that a paradigm shift is occurring. Bring it on!

Neil, I will wink at the moon for you Sunday night.


Hi all,

The truth is stranger than fiction. I’m going to make a point, and I’m going to use one of the wildest conspiracy theories out there to make it: the idea that UFOs exist and that they are spying on us; indeed, that they are spying on nuclear weapons facilities around the globe as if in preface to a global invasion. Good stuff, but let’s read between the lines; I think the truth might be in there somewhere. Something is going on, but it’s not what some think. Call me a skeptic, but first we need to explain what that word means, which takes me right to my point.

First, the term “skeptic” is confusing in the UFO research arena. Those who hold the orthodox view that UFOs do not exist seem to regard themselves as “skeptics”. This was a bit confusing to me at first, but it’s only a matter of semantics. Or is it? I think it is more accurate to call those who challenge the orthodox opinion skeptics, not the reverse. For the view that UFOs are real is a heterodox opinion and, by definition, skeptical of the orthodox view.

Having said this, most attempts by the orthodoxy to refute UFO evidence revolve around a style that is frustrating to read and research: most attempts to “debunk” are tomes of Ignoratio elenchi (basically, arguments that are cleverly irrelevant to the presenting claim). That by itself gives the appearance of dishonesty. Whether it is honest or not, those who make these arguments should consider this, because it is a source of considerable suspicion for most readers. One of the common tools in this approach, among many others, is the tendency to over-emphasize witness credibility, particularly the credibility of the researcher themselves. For if the researcher can be shown to be dishonest or misleading (or just crazy), then it is assumed that everything they say or claim is false. Ironically, this is the same approach used in Western law, and it has been shown in numerous psychological studies to be fallacious. This is because there are two types of “believers”: those who are charlatans and do not in fact believe, and those who engage in wishful thinking but still believe the over-arching premise. Among researchers who reach heterodox conclusions, the latter is more common. It’s the age-old fallacy of throwing the baby out with the bath water: while some parts of a person’s research may be fallacious or faulty, that does not, by itself, imply that all of it is. The astute researcher has to know how to sort this out.

But what many orthodox researchers do, once they find any fault or error in an analysis, is engage in an argument of Ignoratio elenchi by focusing only on matters of credibility, not the actual claims being made. Thus they spend volumes discussing factors tangential to the overall story that, by themselves, have nothing to do with the truth or falsity of the over-arching claim. That is not to say credibility has no place: if one finds, for example, that the so-called “Majestic 12” documents originated with U.S. Air Force counter-intelligence agents (Special Agent Richard Doty, to be precise), then it can safely be assumed that any document referencing Majestic 12 is probably a fabrication. That is a credibility assessment. But what makes it germane, and different from broad, context-void assessments of credibility, is that it is contextually relevant. On the other hand, suggesting that Robert Hastings, who researches odd events at nuclear weapons facilities, deliberately fudged facts about one incident at one location, say in 1967 or 1968, does not by itself provide sufficient evidence to “debunk” evidence for the same kind of phenomenon at another site in 1975, whether that evidence originated with Hastings or someone else. Critical analysis just isn’t that simple, and explanations that pretend otherwise are catch-penny arguments that appeal to human bias and prejudice, namely people’s natural tendency to disbelieve anything that comes from a source that has been dishonest at some point in the past. It is for the same reason that a known and admitted prostitute on the witness stand is not presumed to lie in answer to every question posed to her. It just isn’t that simple. And what we are doing here has nothing to do with trying an individual; we are seeking confirmation of facts and assertions that may or may not be independent of that witness.

Two excellent examples of deceptive “debunking” are the Air Force “Case Closed” report of 1995 and an internet article by James Carlson. In the case of the Air Force, an attempt to debunk the skeptical claims about the official narrative of the Roswell incident of 1947 was proffered in 1994 and 1995, when the Air Force changed its official story from weather balloons to something called Project Mogul. This tells us, implicitly but loudly, that the Air Force was engaged in counter-intelligence when it lied about the weather balloons. Forgetting for the moment the obvious and germane questions of credibility this raises about the Mogul claim, the Air Force spent almost all of its report talking about Project Mogul. It was basically a history lesson about Mogul. But that is really not relevant to the skeptical claim. And sure enough, the explanation was riddled with problems: it invoked crash dummies in 1947 that didn’t exist until 1952; there is the Ramey memo, which any modern computer user can plainly see references “victims of the wreck”; and so on. Thus, given that we know disinformation was the source of the weather balloon explanation, and given the obvious application of Ignoratio elenchi in the 1995 Mogul diatribe, it is no wonder the American public doesn’t believe it. That USG can’t see this is astonishing, but it reinforces the view that they are out of touch with the public.

In a similar way, Carlson makes an elaborate argument that the 1967 and 1968 phenomena at one missile base were, at best, the wishful thinking of a skeptic named Robert Hastings. Once that “gotcha” was in place, it was assumed, by character assassination, that all the other events must be of a similar nature. It was an exercise in Ignoratio elenchi.

And this is why these kinds of analyses are a turn-off for most readers. Most readers see them as a personal attack rather than an honest pursuit of truth. That Establishment figures in government, who do the same thing, have apparently not noticed this is astonishing, but it shows how out of touch they are with everyday people. If there were ever a sign of elitism sticking out like a sore thumb, this is it. Thus, in order to examine the Hastings research, we need to examine each case of purported tampering at each base on each date and ask only the questions of merit:

1.)  Who actually witnessed the visible phenomenon? Are their names known to us? What is the chain through which this information reaches us now? What have they said? What written or electronic data is available to corroborate it? Where is it? Can we see it? Is it clear and convincing?

2.)  What witnesses can report on the radar data? Have they also been identified? What is the chain through which this information reaches us now? Do records of these radar sweeps exist? Can we see them? Is it clear and convincing that solid objects were present?

3.)  What failure mode, if any, was witnessed and who witnessed this? Have they been identified? What is the chain through which this information reaches us now? Did anything actually fail? What was the nature of the failure? What records exist to corroborate this failure? Is it clear and convincing that a failure mode without prosaic explanation occurred? (classified aspects of their operation can still be protected by a careful review of how the systems are explained – we don’t need to know how they work).

4.)  Does the movement of objects, if established as above, correlate well between visual sightings and radar tracks? Does it appear to be intelligently directed as a response or anticipation of human behavior? Time, location and altitude are critical here.

We don’t need diatribes and tomes of personal attacks, tangential information and digressions of Ignoratio elenchi to resolve this. We need data.

The global public is becoming more and more sophisticated in their understanding of geopolitics and disinformation. They more easily recognize it and its common attendants, such as catch-penny reasoning, Ignoratio elenchi, the role of greed and money and the extremes to which power corrupts. It’s time for USG to catch up.

Virtually every government “investigation” and every “debunker” out there has done nothing to address the four questions that could be equally applied, in different form, to just about any “conspiracy theory”. And the field is chock-full of charlatans and fairy tales that can be easily discounted with a modicum of background research into the provenance of documents and the nature of the claims by simply applying the questions above, even if they cannot be fully answered. There is truth between the lies.

I would submit that USG should rethink the way it approaches conspiracy theories: avoid the catch-penny silliness and respond to them directly and in a hyper-focused manner by releasing data specific to the claims of merit. The tactic they’ve been using since at least 1947 is itself becoming a national security issue because of the distrust in government it has caused. And Popular Mechanics commentary, which most anyone with two brain cells to put together knows to be a USG shill, isn’t needed in their replies. Just the relevant data. They need to do this with the Kennedy assassination, 9/11, UFOs and anything else of popular lore. Sadly, I’m afraid their hubris is too inflated now to ever do that, but for my part, I’ve illuminated the path. Listen to me now, hear me later.

– kk

P.S. Watch how disinformation works. Everybody is focused on Edward Snowden. But is that really the story? How about the fact that his revelations have confirmed that you are being watched, Orwell-style? All the way down to your Safeway discount card application and purchasing data. Yep, that’s right. Go read what he actually handed over to the journalist who reported it. Google is your friend.

As some of you might have heard, a site suspected of being Chachapoya was discovered east of the Andes in the western Amazon jungle back in 2007. I don’t know if it’s going to be excavated or not. The site has been called “Huaca La Penitenciaría”, or Penitentiary Ruins, so named because it looks like a fortress of stone. It is buried in deep jungle growth at an elevation of about 6000 feet. This means archaeologists now have to consider that the Chachapoya’s eastern boundary extended into the Amazon. Anyway, numerous structures have now been verified in the Amazon jungle, and it is clear that this “jungle” wasn’t always so. Human beings have been “terraforming” it for centuries, creating a very rich, black soil on top of the acidic rain forest soil.


Huaca La Penitenciaría

The “black soil” (terra preta) was a mystery until recent years. Now, it has been learned, these people had an ingenious system of settled agriculture in which they enriched poor soil with charcoal and other materials, building dikes, water channels and artificial lakes, then harvesting fish in those lakes. Pretty clever. Archaeologists now believe the Amazon was host to a population as high as 5 million, with “urbanity” ratings exceeding that of ancient Rome. They essentially built an artificial archipelago. This civilization, about which we still know so little, extended from sea to shining sea, all the way across the Amazon. The finding of the “penitentiary” means we now know that heavy stonework was employed in the Amazon. This is a sea-change in thinking, as it means we can expect to see more of it.

soil_Comparison_Amazon_001
Amazonian terra preta and regular Amazonian soil.

I decided to grab some of the satellite data on where this “black soil” has been found and do some google searching in satellite images. I was hoping others could help me identify some new prospects. I’ll be writing a big piece on anthropology and archaeology about all of this and more pretty soon. But for now, I was just wondering if anyone knew about these places and what might be here (such as modern constructions).
You’ll notice a lot of circles and berms. These are the types of shapes seen before, called geoglyphs, which it has now been found represent raised earth mounds. Anyway, peruse and enjoy. The locations are in the file name, so click on the file and get the filename to find them on google.


Here, what looks like a pyramid or platform structure with a couple of trees growing on top. Notice the regular geometric seams in the rock.
possible_Artificial_Object_Field_Shaped_Like_Arrow_Head_At_12.364966_South_68.867673_West_002

This is very strange. The picture is taken from an angle, but when corrected, this raised earth appears to take on the shape of an arrowhead. The pyramid/platform/whatever is at the bottom right. Oddly, the arrowhead points on an azimuth almost directly at Puma Punku (within about 2.5 degrees), not far away. The figures are (reverse azimuth):

Distance: 467.1 km
Initial bearing: 357°29′53″
Final bearing: 357°32′42″
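Anyone who wants to check figures like these can do so with the standard haversine and forward-azimuth formulas. Here is a minimal Python sketch; the Puma Punku coordinates are my own approximation from public maps, not from any survey:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) and initial bearing (degrees) on a spherical Earth."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine distance
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing (forward azimuth), measured clockwise from north
    y = math.sin(dlmb) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
    bearing = math.degrees(math.atan2(y, x)) % 360
    return dist, bearing

# Arrowhead site (coordinates from the image filename) to approximate Puma Punku
d, b = distance_and_bearing(-12.364966, -68.867673, -16.561, -68.680)
print(f"{d:.1f} km, bearing {b:.1f} deg")
```

This gives roughly 467 km on a heading just a couple of degrees shy of due south, whose reverse azimuth (bearing plus 180 degrees) matches the 357° figure quoted above.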

possible_Artificial_Object_At_12.364966_South_68.867673_West_002
possible_Another_Geometrically_Regular_Field_With_StoneWork_At_Decimal_12.336082_South_68.869891_West_001
possible_Another_Geometrically_Regular_Field_Lower_Portion_At_Decimal_12.336082_South_68.869891_West_001
A geometric object lying in the same field as the platform below.
possible_And_Another_Geometrically_Regular_Field_Zoom_At_Decimal_12.337193_South_68.846632_West_001

Compare this to the layout and overhead at Tiwanaku:

pumuPunka_Overhead
And the layout scheme at Puma Punku:

puma_Punka_Layout
possible_And_Another_Geometrically_Regular_Field_At_Decimal_12.337193_South_68.846632_West_001
Here is what looks like a stone platform like the one at Tiwanaku.

Compare this to the Chachapoya Penitentiary layout:


You might have noticed something odd. What are those cauldron-like, casket-looking structures on the roof? Who knows. But check this out:


Unfortunately, the image quality degrades inline, but you can click on the image, then click on full size, to get a better view. This is just east of the arrowhead, about a mile or so. Notice the odd roof structures. This site is very close to Puma Punku, and I will have much to say about masonry in my article on this subject. The masonry problem may have been solved in the Amazon jungle.

And much, much more. I’ve found things like this all over the Amazon. Archaeologists are now saying there may be hundreds or even thousands of earthworks in the Amazon that people have been living practically on top of for centuries without noticing until now.

