Hi all,

I’ve added more detail to the framework, outlining its necessary provisions for implementation in a slightly more tangible form. The ideas derive from what is known as “general federalism”, but those tenets are limited as much as possible at this high level.

Perhaps the most important first step in a top-down re-evaluation of global governance is to identify the sine qua non of the global or regional environment necessary for a successful and durable global rule of law to exist. Most of that will hinge on normative beliefs, customs and practices within a given society. This is almost, though not quite, the same as stating that a cultural environment conducive to global rule of law must exist first. And this, I will argue, is in fact the key impediment to effective multilateralism. Humanity must grow up, change and dispense with beliefs and behaviors that, while they may have an antecedent in our biological and social past, can no longer enjoy scientific support for their continued usefulness.

One of humanity’s greatest foibles is our tendency to inject emotion into intellectual inquiry, and the tendency this has to marginalize and exclude reason. Many today blame this on religion or some other bogeyman. Certainly, religion provides a breeding ground for uncontrolled emotion. But the truth is that a more fundamental and universal cause is at work: misplaced emotion. All of the points outlined below deal directly with this issue and provide a way for humanity to address serious, global issues rationally. What follows is an executive summary of what this author has been working on for several years now; a full treatment and justification can be found in later works to be shared.

The most fundamental changes needed can be summarized below:

Matter and Energy; an evolutionary step in our understanding of economic theory such that we delineate the most fundamental factors affecting economies. The foundation of an economy lies in how we manage matter and energy. Economic “theory” merely rides on top of this fundamental, limiting fact. For any beings of technological agency, consumption of matter and energy is likely the gravest long-term threat to survival. Yes, this is a universal claim. Today we call this sustainability, but sustainability at so fundamental a level finds a nexus in economic prosperity as well. They are the same thing; most people just don’t realize it yet. The prevailing myth in our time has us believe that it is a one-way street: prosperity depends on sustainability. The truth is that each depends on the other. So, wherever there is technological agency, consumption of matter and energy will increase with time. Therefore, long-term planning should focus on increasing access to matter and energy. Currently, this access is sharply limited because we do not have the means to create an actuarially sound space transportation infrastructure. This author’s primary area of interest and effort lies in work intended to render all existing space flight technology obsolete and make this an economic reality. We will see more about this in the next 2 to 4 years as this author matures his current work, now being done outside the public radar. It will be known later in the form of a non-profit named the Organization for Space Enterprise. The reason why space is the focus is lengthy, but our current approach of trying to hold fast to an existing form of matter (such as petroleum) or to transition to a new form of matter (periodic elements used in solar panels, for example) is not scalable. It will ultimately destroy humanity (by a gradual starvation of matter and energy), and the only long-term solution is to source matter and energy in quantities vastly larger than what is available on Earth alone. Because of the time frames involved, this effort must begin now. This will require nimble, systemic change in the underpinnings of the free market. A clever solution is an optimization that “does no damage” to the existing system but affords more directed use of matter and energy, and this author has a proposal. Whatever this author does, USG would be well-advised to invest heavily in the development of the means and methods (not all of which involve new technologies) required to render space flight economically viable and actuarially sound.

  1. Systemic change, at the level of fundamental law, must be constructed to provide both representation and participation in decisions regarding how matter and energy, at its initial source, will be tasked within a free market.
  2. This change cannot undermine the principles of free market economics because it must “do no harm” to systems of demonstrated past performance. Therefore, the scope of this input should be limited to the incentives the public en masse is willing to provide to the private sector to encourage the survey, extraction and refinement of matter and energy on Earth and elsewhere. And such incentive should be constrained by fundamental law only to matter and energy at its source (survey, extraction and refinement; SER) with any additional powers explicitly denied. This I’ve denominated the “Public Trust” which establishes all matter and energy as public property legally owned by an irrevocable trust. This element is advised but not essential. The key concern is that no government entity should be legally entitled to ownership of matter and energy used by the private sector. The public owns it collectively by legal Trust, but the private sector is legally entitled to use it. Ownership does not transfer from private to public for existing matter and energy, but new finds are absorbed into public ownership with legal protections for private entities that seek to utilize and market it.
  3. Considerations of sustainability in this scheme should be addressed separately in Statute by direct representation and participation. The fundamental factors of merit should be codified as a balance of immediate prosperity and long-term impact (on nature and on future accessibility to matter and energy).
  4. The Courts of a general federation should give no weight to a party’s inference unless the evidence submitted in its support bears substantial probative force, established by procedures consistent with the scientific method.

Social Justice; the evolutionary step in our normative understanding of social justice. We need to transform the public understanding of social justice to inhere the notion that social justice should be blind to personality and demographic and should rather focus on behaviors of merit and those that lack merit. The old saying that violence begets violence extends likewise to the notion that emotional extremism begets emotional extremism. Almost all notions of social justice today rely on emotional domination of the issues and feed off of ancient and barbaric fears that do nothing but generate a vicious cycle of repeated but varying “causes” through history. The result is that throughout history we see a pattern of social justice never materializing generally throughout society, with one cause giving rise to another in a multi-regional and multi-temporal cycle that has been going on for at least 1000 years. This is difficult to see in our immediate present because these patterns take considerable time to cycle and may occur in disparate geographies. At the base of this cycle we see the exclusion of reason in discourse on account of the emotion so naturally abundant in matters of social justice. While emotion has a legitimate place and time, if humanity is to prosper, we must learn how to separate emotion from issues requiring reason to solve. Due to vested interests in the current norm of emotionally-driven understandings of social justice, this is a grave threat to the future of humanity. This will require nimble, systemic change advanced mostly through cultural efforts.

  1. It should be established as a matter of fundamental law that any and all sumptuary law that cannot sustain scientific scrutiny shall not be law or equity within the jurisdiction of the Federation.
  2. It should be established that any Statute or equity in the Federation which can reasonably be expected to influence a matter of social justice, however broad, shall be applied uniformly and predictably to all persons without regard to personality or demographic, else it shall not stand as law or equity in the Federation. This provision would extend to enforcement as well. Ironically, this issue is solved by simply restating a key premise of rule of law itself: uniformity and predictability.

The Political Class and public distrust: Lack of participation, and therefore of some semblance of control, whether a good thing or not, evokes fear. Fear undermines trust. The solution is to reduce the scope and scale of the political class such that representation and participation of the public are dramatically enhanced. Direct democracies simply do not work; therefore, a totally new understanding of how to merge a political class with a more direct form of participation is urgently needed. This author has a proposal. The future of neo-liberal idealism is the evolution beyond representation alone and more into the area of direct participation. A clever means of rendering that participation competent via a political class is key to this solution, involving an Assembly (analogous to a jury) and a Federation Civil Corps of citizens. As organic power decentralizes via technological agency, the duty to participate will quickly transform from nuisance to demand. The key is not to view this as an elimination of the political class, but as a “force multiplier” of the same, permitting the political class to take on a more focused role centering on providing competence to govern. Additional mechanisms within the participatory role of the public are needed to dilute incompetence and enhance representation. This will require nimble, systemic change.

  1. The analogy given here to western law and courts is somewhat sloppy. In the case of an Assembly, their role is the consideration of statute, not equity. Equity should belong solely to the courts of the Federation.
  2. Competence is provided by a panel of elected officials (a political class) analogous to a panel of judges with the privilege of writing statute, making first motion to vote and other provisions too lengthy to get into here.
  3. Statute is “algorithm friendly” allowing votes of very large numbers of persons randomly called to duty by a double-blind process to occur in seconds.
  4. Negotiation, resolution and consultation for making statute is performed by a Federation Civil Corps, consisting of lawyers, economists and other experts. It shall be a strictly merit-based system. Their duty is to inform and educate the Assembly and provide communication and consultation capacity between the elected officials and the Assembly.
  5. Assemblies are called every 12 months, consisting of a unique selection of Citizens of the Federation at large. It could be either voluntary or a lawful duty (I suggest that it be a lawful duty).
  6. Numbers of Assembly members are sufficient to allow representation of one-tenth of all Citizens of the Federation once every 100 years.
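
As a quick arithmetic illustration of provision 6 (the population figure below is assumed purely for illustration and is not part of the proposal), the annual Assembly size follows directly from the one-tenth-per-century rule:

```python
# If one-tenth of all Citizens are to serve once every 100 years, each annual
# Assembly must seat population * 0.1 / 100 members.
def annual_assembly_size(population, fraction=0.10, horizon_years=100):
    return population * fraction / horizon_years

# Example: a federation of ~330 million citizens (an assumed figure)
print(f"{annual_assembly_size(330_000_000):,.0f} members per annual Assembly")
```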

Organic power structure: Organic power structures in any society of technological agency will tend to decentralize over time, and organic power in the future will more likely exist as a force of change coming from the populace en masse. The very meaning of “organic power structure” is shifting beneath our feet, and victory will go to those that see it. It is important to warn future generations that this is a consequence of technological change itself and not an objective or goal. We must prepare for this, and it is a key reason for the need to re-frame our normative understanding of social justice (but it must be done for all matters of social justice in order to ensure that a durable norm is the product). Class differences cannot be resolved if justice for one means injustice for another, regardless of the relative differences. This author has a solution that will ensure justice for all, which includes a mechanism that does not rely on schemes of income redistribution or the denial of social reward already accrued through lawful and legitimate means. This transition will occur over many generations no matter what is done, but this author’s solution provides a structured way to do this without injustice and general social collapse, and under a durable framework of governance. The key finding here is that organic power is evolving into something never seen before: throughout all of human history organic power has derived from relatively large differences in wealth, but it is now, for the first time, evidencing a pattern of change toward a balance between power derived from wealth and power derived from technological agency. To remain durable, a responsible government must take these forces of influence into account. This will require nimble, systemic change:

  1. This historical evolution is accommodated in full by the process outlined regarding participatory governance.
  2. It should be a matter of substantive fundamental law that no person may be dispossessed of property without due process of law, and that such dispossession shall not be ponderable by any court of the Federation unless eminent domain for the greater good is well established and fair market value compensation is afforded.
  3. It should be a matter of substantive fundamental law that the right to private property shall not be infringed.
  4. It should be a matter of substantive fundamental law that the right to seek redress for violations of substantive fundamental law shall not be infringed; however, lobbying of the Federation by any entity for any other reason shall be unlawful. This is a key provision of durability and an accounting for a new kind of organic power and should not be overlooked.

Implementation: A General Federation must be extremely flexible over time such that it can begin as a non-profit, then promote to accession by a nation-state. Then, over time, it must include other nation-states, limited in pace so that states are included only where the norms cited herein are sufficiently strong to support it. An alliance of states that do not possess these norms will not be durable or effective; this is the primary reason why multilateralism has failed. Currently, the only candidates that exist are the United States, Israel, Germany and the UK, and those states will require much preparatory work in nurturing a healthy set of norms as listed here before that can happen. Currently, the United States is number one on the runway, despite its relatively poor advancement in matters of social justice. Additional mechanisms have been devised to also allow scaled accession of developing nations. But it should not be forgotten that while normative practices are necessary, codification in explicit rule of law must come alongside them. Schemes that deny the central necessity of codified, transparent rule of law gathered by consensus will fail. This is the second cause of the failure of multilateralism. Disaggregated states and other schemes that pretend to operate “from the back door” are not durable in time. We don’t need more politicians or technocrats; they are not a solution to the problem, and in the near future they are likely to be the problem. And that is because, wherever the scope of the political class expands, the fear increases. In a future world of ever advancing technological agency, failure to better balance competence with participation will be disastrous. The public must be enlisted to fulfill this balance and give agency a voice. To be clear, this identifies a much larger, longer-term threat that encompasses (but includes much more than) what we today call terrorism, the canonical and most extreme example of this failure mode.

  1. This can be achieved in fundamental law by the inclusion of a provision for a “National Codicil”, too lengthy to describe here.
  2. A National Codicil reduces burdens on member states to allow sunshine provisions for the ramping up of certain Federation sovereignties over a renewable period of fifty years.
  3. It should begin with the United States as its sole member such that the normative values of that institution may be inhered sufficiently before it accedes to a cosmopolitan or pluralistic stage. It does not require any change to U.S. relations with other organizations such as the UN or NATO. That the U.S. be the first state is crucial. The U.S. could begin this process solely pro forma with essentially no commitment to change U.S. policy at the outset, but its long-term effect on inhering these values worldwide would be enormous. It would be the first tangible example of the idealistic values of global-ready, neo-liberal western democracy and would quickly have countries begging to join. This would put USG in the driver’s seat as far as ensuring those values are present beforehand. It would also give USG a chance to introduce this to the U.S. public and give them, and supporters of the cause, time to digest it and increase support for it. It would also give USG an opportunity to experiment with and tweak the Assembly concept. The answer to global governance is simple; we just need to DO it.

Who can implement this: Such an effort can only be achieved if spearheaded by the leadership of a nation most “advanced” in these norms and whose relative economic power can sustain it. The United States still isn’t there, but it is humanity’s best hope. It’s time to get novel and advance the state of affairs in the management of human society. The clock is ticking. Listen to me now, hear me later.

  1. The system propounded is a Hamiltonian, federal system; that is, wherever statute is enacted for the one State, then for all uniformly. It is a system of subsidiarity. It is a system with a strong executive and which regards economics as within the compass of the social contract. It is a system consisting of four distinct branches: legal, economic, executive and judicial. It is a system contrived to balance the powers of those branches, and to balance the interests of the greater good and the individual. It is a system whereby equity is solely applied to inform the rule of law by the color of the instance, not to violate it. The executive and judicial powers are filled by a political class. The legal and economic powers are filled by a political class and their respective Assemblies. Supreme Justices are appointed by the political class for life.

The future will involve many quick pivots as asynchronous events seem to drive us in all directions at once. Multilateralism demands the kind of decisive action only a durable force can provide. A strong federal executive and its lot, constrained by the idealistic, normative values that tame it, is where it’s at. This has been evidenced most recently in the crises with ISIL and Ebola. One week it was ISIL, the next week it was Ebola. No one invented that. It’s our future; get used to it.

A final, related note: where are Russia, the PRC, the DPRK et al. when Ebola threatens millions of human lives? Yes, they are offering assistance, but no one has acted as assertively as the United States. This is a time-tested pattern. From Fort Sumter to Leyte Gulf, from Ia Drang to Somalia, America has repeatedly shed blood for a greater good. Now, 3000 servicemembers are about to risk their lives once again, and thousands of tons of hardware are moving into harm’s way. It tells us that idealistic normative values coupled with clever fundamental law are the forces of idealism and assertiveness humanity needs. The lack of response elsewhere is not because of a lack of ability. Russia has a fine navy. PRC has a massive army. Criticize the United States if you wish (and there is a time and place for that), but it is a cheap shot that merely jeopardizes humanity’s future. It’s time to get real.

- kk

“A Prank, A Cigarette and A Gun”

[Image: Amanda Knox awaits murder verdict]

An article about the murder of Meredith Kercher. I’ve agreed not to say much, but if you’re interested in this case, this is the most important read of all. The truth of what happened is in here.

Click below:

The So-called “Best Fit Report”

by Sigrun M. Van Houten

What is Best Fit Analysis? A quick intro

Students of statistics and stochastic analysis will recognize the term “Best Fit” as a statistical construct used to determine the most probable graph through a scatter plot. In the same manner, when the clandestine services seek to resolve a most probable narrative of events and information about that narrative is limited or inaccessible, one can assess the information that is available to construct a probabilistic scatter plot. Once done, it is then possible to graph a “line” that represents a best fit to those data points. In this case, the data points making up the scatter plot are individual facts or pieces of evidence and the associated probability they possess as inference to a larger narrative. And the “line” drawn as a best fit is the most likely narrative of events.
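
As a concrete illustration of the statistical sense of “best fit” referenced here (a minimal sketch with invented data, included only to fix ideas), an ordinary least-squares line through noisy points looks like this:

```python
import numpy as np

# Invented noisy data points standing in for the "scatter plot"
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.size)  # a true line plus noise

# Ordinary least squares: the line minimizing squared vertical distances
slope, intercept = np.polyfit(x, y, deg=1)
print(f"best-fit line: y = {slope:.2f} * x + {intercept:.2f}")
```

In BFA the “points” are evidence items weighted by probability rather than numeric coordinates, but the underlying idea of choosing the curve (narrative) that best accommodates the points is the same.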

This procedure has parallels in normative notions well understood in western law for some time. Namely, it deals with probative value, which, though perhaps not used in the strict sense employed in law, is used here as a catch-all to describe each data point. Each data point reflects the probability of a given piece of evidence. But what do we mean by “probability of a given piece of evidence”? In Best Fit analysis (BFA) we begin by constructing a hypothesized narrative. When applied to criminology, the hypothesized narrative usually presents itself fairly easily since it is almost always the “guilt narrative” for a given suspect or suspects in a crime. In this short introduction to BFA, I will show how it can be used in criminology. The advantage in criminology is that, rather than having to sort through innumerable hypotheses as is common in the clandestine services, we usually have a hypothesis presented to us on account of an accusation or charge. We can then use BFA to test the narrative to see if it is the most likely narrative. With perturbations of the same, we can likely identify alternative narratives more likely to be correct.

Some norms of western law are dated and have not been updated for a very long time. One of those areas, apparently, is the area of probative value. Typically, in Courts of western nations it is presumed that a piece of evidence has “probative value” if it points to a desired inference (which may be a guilt narrative or some more specific component thereof). I’m not an attorney, so I can’t categorically state whether the concept of a “desired inference” really refers to an overall guilt narrative or simply to the odds that the evidence points to a guilt narrative. But what I can say is that in practice it almost always is used in the sense of an overarching narrative or reality.

A case in point is a famous one in which a man was accused of murdering his wife during a divorce. It turned out that his brother had actually committed the crime. But once his brother was convicted, an attempt was made to convict the husband of the crime by the accusation that he “contracted” with the brother to commit the crime and end his divorce case favorably. In the second trial of the husband, the evidence was almost entirely circumstantial and the jury relied heavily on an increase in phone activity between the husband and his brother leading up to the murder. Normally, the brothers had not spoken on the phone often and there was a clear and obvious sudden increase in the frequency of calls. The jury interpreted this as collusion and convicted the husband of murder. Thus, when brought to Court, the desired inference of testimony and records of phone calls was that collusion existed. This is a piece of evidence being used to point to a guilt narrative. The problem, however, was that it was never shown why it should be more likely to have inferred collusion than simply distress over a divorce. It is not unusual for parties in a divorce to reach out to family and suddenly increase their level of communication at such a time. In other words, and on the face of it, one inference was just as likely as the other.

What legal scholars would say is that this is a reductionist argument and fails because it does not take into account the larger “body of evidence”. Unfortunately, this is mathematically illiterate and inconsistent with the proper application of probability. This is because it takes a black and white view of “reduction” and applies it incorrectly, resulting in a circularity condition. The correct answer is that

… One takes a body of evidence and reduces it to a degree sufficient to eliminate circularity and no further.

In other words, it is not all or nothing. In fact, this kind of absolutist understanding of “reductionist argumentation” is precisely what led to the results of the Salem Witch Trials. In those cases, probative value was ascribed based on a pre-existing hypothesis or collection of assumptions; essentially a cooked recipe for enabling confirmation bias either for or against guilt.

To explain what we mean, in the case of the phone calls between brothers, one cannot use a hypothesized narrative (the inference itself) to support the desired inference. This is circularity. But one also cannot reduce the evidence to such a degree that the body of evidence in toto is not weighed upon the merit of its parts. From the perspective of probability theory, this means that we must first determine, as an isolated question, whether the probability that the phone calls between the brothers were for the purpose of collusion is greater than the probability that the calls were due to emotional distress. And it must be something we can reasonably well know and measure. While we can never apply precise numerical values to these things, it must at least be an accessible concept. Once we’ve looked at the odds of each of the two possible inferences we can then ask which is more likely. Unless the probability that the calls were for the purpose of collusion is greater than the probability that the calls were for the purpose of emotional support, there can be no probative value (in the sense we are using that term here).

The reason for the “isolation” is that we cannot determine the aforesaid odds by using the inference, or the larger narrative, to support those odds, because it is the narrative that is the hypothesis itself. Having said that, once we have done this, if we can show that the odds are greater that the calls between brothers were for the purpose of collusion, even if that difference of probability between the two inferences is very small, the phone calls can then be used to assess the likelihood of the guilt narrative by considering it in the context of the body of evidence. In other words, if we could associate numbers with this analysis as a convenience for illustration, if we have 10 pieces of evidence bearing only, perhaps, a 5% probability difference favoring the guilt narrative, it might be possible nonetheless to show that the guilt narrative is the most likely narrative. In other words, we consider all evidence, each piece with its own net odds, in order to frame the odds of the guilt narrative. And we are therefore using reduction only to the extent that it excludes circularity, and no more. And both the number of evidentiary items and the odds of each matter. If we had 3 pieces of evidence each bearing a net probability of 90% favoring a guilt narrative, it might be just as strong as 10 pieces bearing a net probability of only 5%. And it is these odds that must be left to the jury, as it is not a mathematical or strictly legal exercise but an exercise in conscience and odds.
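
The arithmetic being gestured at can be made concrete with likelihood ratios. The sketch below is illustrative only: it assumes each piece of evidence can be assigned a probability under the guilt narrative and under the best innocent alternative, treats the pieces as independent (a strong assumption), and reads the “5%” and “90%” figures above in one hypothetical way.

```python
from math import prod

def combined_likelihood_ratio(pieces):
    """pieces: (P(evidence | guilt narrative), P(evidence | innocent alternative)).
    Per-piece ratios multiply, assuming the pieces are independent."""
    return prod(p_guilt / p_alt for p_guilt, p_alt in pieces)

# Ten weak pieces: reading a "5% difference" as 0.525 vs 0.475 (hypothetical)
weak = [(0.525, 0.475)] * 10
# Three strong pieces: reading "90% net" as 0.95 vs 0.05 (hypothetical)
strong = [(0.95, 0.05)] * 3

print(f"one weak piece:      {0.525 / 0.475:.2f} : 1")
print(f"ten weak pieces:     {combined_likelihood_ratio(weak):.2f} : 1")
print(f"three strong pieces: {combined_likelihood_ratio(strong):.0f} : 1")
```

Whether ten weak pieces outweigh three strong ones depends entirely on how the per-piece odds are quantified, which is precisely why both the number of items and the strength of each matter, and why the final weighing belongs to the jury.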

Sadly, it is routine practice in western Courts to employ probative value in such a manner as to establish in the jury’s thinking a circularity condition whereby the larger narrative of guilt or innocence is used to substantiate the probative value of individual pieces of evidence. The way to control this is for the understanding of probative value to change and modernize, and to require Judges to reject evidence (rule it inadmissible) that either does not point in any direction (net odds of 0%) or points in a different direction than the desired inference. This is a judgment call that can only be left to the Judge, since to leave it in the hands of the jury effects prejudice by its very existence. While there seems to be lip service to treating probative value as we’ve described, it appears to almost never be followed in practice, and most laws and Court regulations permit Judges to use their “discretion” in this matter (which, in practice, amounts to accepting evidence with zero probative value). Standards are needed to constrain the degree of discretion seen in today’s Courts and to render the judgment of Judges in matters of probative value more consistent and reliable. One way to do this is to treat evidence as it is treated under BFA.

While many groups that lobby and advocate against wrongful conviction cite all sorts of reasons for wrongful convictions, tragically they seem to be missing the larger point, which is that these underlying, structural and systemic issues surrounding probative value are the true, fundamental cause of wrongful conviction. For without proper filtering of evidence, things like prosecutorial misconduct, bad lab work, etc. find their way to the jury. It is inevitable. But the minute you mention “structural” or “systemic” problems everyone runs like scared chickens. No one wants to address the need for major overhauls. But any real improvement in justice won’t come until that happens.

Thus, with BFA, in the clandestine context, we take a large data dump of everything we have. Teams go through the evidence to eliminate that which can be shown on its face to be false. Then we examine each piece of evidence for provenance and authenticity, again, only on what can be shown on its face. I’m condensing this process considerably, but that is the essence of the first stage. We then examine each piece in relation to all advanced hypotheses and assign odds to each. Once done, we look at the entire body of evidence in the last stage to determine which of the narratives (hypotheses) requires the fewest assumptions to make it logically consistent. The one with the fewest assumptions is the Best Fit. If we were to graph it we would see a line running through a scatter plot of probabilistic evidence. That line represents the most likely narrative. On that graph assumptions appear as “naked fact” and are “dots” to be avoided.
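
As a structural illustration only (the report does not supply an algorithm, so the names, stages and tie-breaking rule below are assumptions), the staged process just described might be organized like this:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    name: str
    facially_false: bool   # stage 1: discard if demonstrably false on its face
    provenance_ok: bool    # stage 2: provenance/authenticity check
    odds: dict             # stage 3: per-hypothesis weight assigned by analysts

def best_fit(evidence, hypotheses, assumptions_needed):
    """Stage 4: prefer the hypothesis needing the fewest assumptions; break ties
    by the combined weight of the admitted evidence (an illustrative rule only)."""
    admitted = [e for e in evidence if not e.facially_false and e.provenance_ok]

    def support(hypothesis):
        combined = 1.0
        for e in admitted:
            combined *= e.odds.get(hypothesis, 1.0)  # neutral if the item is silent
        return combined

    return min(hypotheses, key=lambda h: (assumptions_needed[h], -support(h)))

# Hypothetical toy inputs
evidence = [
    Evidence("phone records", False, True, {"guilt": 1.1, "alternative": 1.0}),
    Evidence("forged letter", True, False, {"guilt": 5.0, "alternative": 1.0}),
]
print(best_fit(evidence, ["guilt", "alternative"],
               {"guilt": 3, "alternative": 1}))  # -> alternative
```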

To see a good example of how BFA is employed, you can see my work on the Lizzie Andrew Borden, Jon Benet Ramsey, Darlie Routier and Meredith Kercher cases. That this method is remarkably more effective than what we see in police investigations and Courts is well-known by those that have used this technique for at least three decades now. But it has been somewhat outside the radar of the general public because of its origins. My hope is that through public awareness this method can be applied to criminology and jurisprudence resulting in a far greater accuracy rate in the determination of what actually occurs during criminal acts, especially in matters of Capital crimes where the added overhead is well worth it.

~ svh

Notice: I am told that Mozilla users can experience a copy of this report that has sections of text missing. I recommend that Mozilla users download the pdf and view it on their desktop. – kk


Big Sis modeling in … an oilfield. Our second tour of Bakken with our dad.

I’ve written on the petroleum issue before, and the more I learn, the worse it seems to look. Yes, there is good news about unconventional oil, like tight oil. And there are all kinds of alternative energies out there, too. But the choices for humanity are beginning to narrow to a point where open, frank discussion about what is going on is desperately needed.

First, I should mention something about the developments here in the States regarding new sources of oil that have everyone in the industry so excited. These sources are unconventional oil. There are two types of “unconventional” oil. First, there is what is called shale oil, AKA “tight oil”. This is really just conventional petroleum that happens to be found inside shale rock. The only reason it hasn’t already been exploited is that the drilling techniques needed to get to it are a bit more complicated than a simple, vertical bore shaft drilled in one spot in which the petroleum is “loose” enough to flow into the pipe on its own. With tight oil, not only do you have to drill horizontally to exploit the relatively horizontal orientation of the shale rock, you have to “encourage” the oil to move into the bore pipe because it is tightly bound to the shale rock. This is done by a process that has come to be known as “fracking”, in which explosive charges are placed in the bore pipe to perforate the casing and water is pumped in to crack the shale and create small paths within it through which oil can flow into the bore pipe. Second, we have what is known as oil shale. Read that carefully; I just swapped the order of the words to get a new beast. And that’s why people get confused over these two types of resource. Their names are about as similar as one can get. But oil shale is a totally different thing. Here, the oil is not simply conventional oil tightly trapped in a rock. Here, the “oil” is not fully developed by nature and has not reached the final stage of its conversion into petroleum. It is in an intermediate stage between fossil and true oil. This fluid is called “kerogen”. The problem with kerogen is that completing the transition to petroleum, which you must do to make it a viable fuel source, requires considerable heating. If we just look at the energy equation, it means that we are putting more energy into the production of oil from oil shale than we get out (with current techniques). And what many economists and optimists don’t seem to realize is that this problem is a physics problem, not just an economic one. In other words, there is no technology or economic model that will change this. Kerogen, in the form we know it, will never be economically viable in and of itself.

So-called “Peak Oil” tells us that, because petroleum is a finite resource, it must exhaust at some point in the future. Like so many academic statements, this is incontrovertible, but as is often the case, practical reality does not so easily admit of a simple application of a general principle to a specific problem. Such is the case with Peak Oil. People promoting this theory are effectively overgeneralizing to a specific set of circumstances and reaching erroneous conclusions. I’m going to try to sort out this mess here and explain what Peak Oil means for humanity in realistic, probable terms. First, I noted that the energy one puts into extracting petroleum must obviously not equal or exceed the energy one can extract from the recovered petroleum itself. Otherwise, there isn’t much point in extracting it. But from this point forward in the popular discussion of Peak Oil the conversation diverges into Wonderland. The crux of the problem, from what I can see, is that those who understand the geology and science of petroleum don’t understand economics, and those who understand economics don’t understand the fundamentals of science. Add to that the inherently opaque nature of the petroleum industry and its methods, and it is no wonder that there is immense confusion over this topic. Okay, so why is “Peak Oil” an “overgeneralization” of the human energy consumption problem? First, we need to point out that the idea that something is finite, and that one’s ability to extract it in situ will likely follow a bell curve in which the rate of recovery rises and then falls, is an incredibly general proposition. And it’s that phrase, rate of recovery, that we need to understand better.

All finite things will tend to exhibit bell curve, or normalized, behavior; that is, one’s extraction of them in situ (limiting the generality to resource exploitation for this discussion) will likely get faster in the beginning, then slow down as the resource depletes. But global Peak Oil is just one application of this broad generalization. Notice that an oil well, if all else remains the same, will also tend to extract petroleum at normalized rates, increasing sharply in the beginning and tapering as its reach into a reservoir diminishes. This has nothing to do with global peak oil. Likewise, a reservoir will, all else being equal, tend to follow a normalized pattern of extraction rates. This also has nothing to do with global peak oil. And please notice the qualifier “all else being equal”. Let me explain. The rate at which an oil well can extract oil from a reservoir, assuming the supply from the reservoir remains essentially constant (it’s really big), depends on numerous factors. The depth, diameter and bore length of the bore hole all affect that value. The fatter the pipe, the faster you can get petroleum out. Depth can affect pressure, which will affect how fast you can pump it out. Indeed, even your pumping equipment can affect those rates. But things like the permeability of the rock also matter. I should point out that oil doesn’t usually sit in the ground in pools. Rather, it is “locked up” in the pores of rocks. Different rocks allow it to escape at different rates. Shale, for example, doesn’t give it up easily. So, that too, affects the rate of recovery. So, the reach an oil well has into a reservoir is a time dependent function that is highly localized and dependent on all the factors mentioned. Thus, it may be possible to drill another well nearby, but importantly, no less than some minimum distance away, to increase the flow rate. That minimum viable distance is determined also by those factors. Finally, for any given well, as the pressure begins to drop due to the peaking of that single well, not necessarily the entire reservoir, one can increase the internal pressure, forcing petroleum out faster, by boosting it with water. If that isn’t enough, you can inject gases under pressure to increase the flow rate.
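
To see why the bell curve is only a broad generalization, consider a small, purely illustrative sketch (not drawn from any real field data) of a Hubbert-style production curve: the total recoverable volume fixes the area under the curve, but an operational “effort” parameter reshapes its height and timing.

```python
import numpy as np

def hubbert_rate(t, urr, k, t_peak):
    """Production rate: the derivative of a logistic depletion curve.
    urr = ultimately recoverable resource (fixes the area under the curve)
    k   = steepness, standing in for drilling intensity, pressurization, etc."""
    e = np.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

t = np.linspace(0.0, 100.0, 2001)
dt = t[1] - t[0]
for k in (0.1, 0.3):  # two different levels of recovery effort
    rate = hubbert_rate(t, urr=100.0, k=k, t_peak=50.0)
    cumulative = float(np.sum(rate) * dt)  # total extracted over the window
    print(f"k={k}: peak rate {rate.max():.2f}/yr, cumulative ~{cumulative:.0f}")
```

The same total can be produced under very different-looking curves, which is the point: an observed peak in production data cannot, by itself, be attributed to geology rather than to well design, investment or operating choices.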

In other words, the rate at which a single well delivers petroleum product is highly dependent on capital investment in the well. And producers have to consider how much they want to invest based on market conditions and the overall performance of their recovery operations. Thus, the so-called “bell curve” becomes a joke. One can artificially shape this curve however they want depending on all the factors mentioned because, at the oil well level, the supply is halted only as a time dependent function of the presence of oil locally around the well bore. What this means is that you can drain the region around the bore hole, but over a very long time the rest of the reservoir will push oil back into that region and refill it. So, that also can be seen as a production rate variable. The reader should be able to clearly see now that the “peaking” of an oil rig is totally dependent on numerous variables, only one of which is the presence or availability of oil locally around the bore hole. Thus, simply yanking production rate figures for a well out and suggesting that it or its reservoir has hit a fundamental peaking of capacity based on those numbers is absurd. You cannot know that unless you have access to all the data and variables I’ve mentioned, and only then can you analyze the well and understand if an observed peaking is due to some natural, finite barrier or is rather due to the particulars of the well design and operation.

We can extend this discussion in scale and apply similar logic to the reservoir itself. We cannot know if a reservoir is reaching a true, finite and natural peak unless we know about each of those wells and, importantly, what percentage of the acreage from which a well is viable is actually covered by a well. So, in the same way, one cannot pluck data from a reservoir and conclude anything from that.

At the global level the same limitation applies. We need to know the true facts about each reservoir in order to reach any conclusions about:

  1. Actual, existing production capacity globally
  2. Total, defined reserves remaining

But can’t we see that, if global well spudding is increasing and peak production in various countries has occurred, it must be occurring in the near term globally? Yes … unless we consider the powerful impact economics has on all this. The United States reached a peak around 1970 and its domestic production declined thereafter (until recently, as shale oil has pushed production up considerably). But what we don’t know is why. Was it because the actual recoverable oil had diminished to something below one-half its original amount? Or was it because the investments necessary to continue producing the fields in the States were considered economically unsound given the global prices for petroleum at the time? Did petroleum companies just forego water and gas pressurization, increased drilling into existing reservoirs, etc. because it was cheaper to buy overseas? Did environmental regulation drive this? There is reason to believe that other factors were in fact at play because domestic production in the United States has risen again even if we control for shale oil production. And much of that is occurring from existing fields. But there’s more. Various agencies tasked with estimating reserves continually come up with reserve figures much, much higher than peak oil advocates claim. USGS and IEA, while they don’t agree on all the numbers, clearly state that conventional oil reserves in the United States are over 20 billion barrels. Where did that come from? It comes from the same fields that have always been producing petroleum in the United States. But for whatever economic reason, the additional investments in those wells simply have not been made. That is changing now. If the United States were to continue consuming at its present rate, and if that 20 billion barrels were the only source of oil for consumption in the States, it would last about 3 years. But since Canada supplies about ¼ of U.S. consumption and shale oil is providing an ever increasing portion (quickly approaching ¼), that number is likely closer to 10 years.
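
A rough back-of-the-envelope check of these lifetime figures, using an assumed U.S. consumption rate of roughly 19 million barrels per day (a commonly cited ballpark, not a number taken from this text):

```python
# Rough reserve-lifetime arithmetic (all inputs are ballpark assumptions)
US_CONSUMPTION_BBL_PER_DAY = 19e6                       # assumed ~2010s-era rate
ANNUAL_CONSUMPTION = US_CONSUMPTION_BBL_PER_DAY * 365   # ~6.9 billion bbl/yr

conventional_reserves = 20e9   # barrels, the USGS/IEA figure cited above

# If conventional reserves alone had to cover all U.S. consumption:
print(f"covering all consumption: {conventional_reserves / ANNUAL_CONSUMPTION:.1f} years")

# If Canadian imports and domestic shale oil each cover roughly a quarter,
# conventional reserves need to cover only about half of consumption:
share = 0.5
print(f"covering half of consumption: {conventional_reserves / (ANNUAL_CONSUMPTION * share):.1f} years")
```

The ~3-year figure matches the estimate above; stretching it toward the ~10-year figure additionally requires the Canadian and shale shares to keep growing, which is what the text anticipates.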

Numbers for shale oil are about 20 years; that is, if all oil were drawn from those fields it would last about 20 years. Combined with the remaining conventional oil, that is at least 30 years of petroleum supply (even assuming Canada disappeared). But Canada’s reserves are yet larger and their consumption is an order of magnitude lower than that of the United States (their population is an order of magnitude lower than that of the U.S.). Thus, realistically, the U.S./Canada partnership, which is unlikely to be broken, will easily put the U.S. supply beyond 50 years. And that assumes that the middle east and everything else just vanishes. If we plug that back in, it’s even longer. Let’s be clear: regardless of what’s going on around the globe, the U.S. and Canada are not going to trade their own oil away if it means their own consumption must drop. Nor would any other nation. Shale oil production in the United States is climbing meteorically, to about 4 million barrels a day in 2013. This is unheard of; less than 5 years ago it was virtually zero.

The more challenging oil shale; that is, kerogen bearing rock, is a U.S. reserve so large it is hard to calculate or predict where it might end. Needless to say, we have about 50 years to develop it and get it online. It seems unlikely that this goal will not be achieved, but I’ll discuss its challenges more later.

Okay, so is the problem solved? Can we all go home now? Not hardly. The same nuances mentioned earlier that better inform our discussion of peak oil also inform our understanding of the current petroleum situation, including the shale oil and oil shale options. Thus far, we’ve spoken only of production rates of petroleum. But here is the real, fundamental problem with petroleum: when it was first used on a wide commercial basis, beginning about 1905, it was so easy to obtain that, in terms of energy, it only cost us about 1 barrel of crude in power generation to draw and collect 100 barrels of crude for sale in the marketplace. Some speak of this relation as the Energy Returned On Energy Invested, or EROEI, ratio. I alluded to it above. It begins by noticing that if a fuel source is to be viable, then we cannot expend more energy to get it than the energy it provides to us. In the case where those energies are equal, EROEI = 1. In the event that we consume more energy to get petroleum than the petroleum recovered provides, then EROEI < 1, which is also unsustainable. Therefore, for petroleum, or any fuel, to be viable it must have an EROEI > 1. Having cleared that up, some confusion over how physics and economics overlap on this matter has gushed out on the internet and elsewhere like water over Niagara Falls. Why? If we recall, around 1905 the EROEI must have been about 100, since for every 100 barrels of crude we could sell we expended 1 barrel’s worth of energy to get it out of the ground. The problem is that since that time the EROEI has dropped precipitously, by about one order of magnitude, so the global average EROEI is about 10 nowadays. But what this implies is what seems to be confusing people. Some think that if the EROEI gets any closer to 1 we’re doomed. Some have even said that you need an EROEI of 3 or 4 to make petroleum economically viable. This is not true and is based on certain assumptions that need not be true either. In order to be not only economically viable but economically explosive in its market power, the EROEI simply needs to be greater than 1. That’s all. Let me explain.
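
A minimal sketch of the EROEI bookkeeping just described; the values in the loop are only the round numbers used above.

    # Net energy delivered per unit of gross energy produced, for a given EROEI.
    # Net fraction = 1 - 1/EROEI: at EROEI = 100 you keep 99% of what you produce,
    # at EROEI = 10 you keep 90%, and at EROEI = 1 you keep nothing.

    def net_energy_fraction(eroei):
        if eroei <= 0:
            raise ValueError("EROEI must be positive")
        return 1.0 - 1.0 / eroei

    for eroei in (100, 10, 3, 1.5, 1.01):
        print(f"EROEI {eroei:>6}: net fraction {net_energy_fraction(eroei):.3f}")

Notice that the net fraction only collapses as EROEI approaches 1, which is the point being made: greater than 1 is the hard requirement, not 3 or 4.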

There is this thing called “economies of scale”. To explain its relevance here, consider the following thought experiment. Suppose we discover a massive petroleum reserve in Colorado that contains some 2 trillion barrels of recoverable “oil”. At current U.S. consumption rates, if every drop of petroleum consumed in the U.S. were pulled from that one field, it would last 275 years. Ah, but you say, that reserve is kerogen. Kerogen is the play I referred to above, where I pointed out that we have about 50 years to figure out a way to utilize it economically, because the other oil, so-called “tight oil”, or shale oil, will run out by then. But the big, big problem with kerogen is that a lot of energy is needed to make petroleum out of it. Current retorts (heaters for heating kerogen) run at about 400 C and have an EROEI of about 3 to 4. Of course, this is first-generation technology, but for the sake of discussion, let’s assume it is 3. For comparison, we assume that the current, conventional EROEI on oil is about 10. How could kerogen possibly be cost effective? Economies of scale. Great, problem solved? Nope. Let me finish.

Let’s assume, for the sake of discussion, that we have an infrastructure that can begin producing petroleum at incredibly high rates. How is this possible? Kerogen is located only about 500 meters down and can be extracted mechanically. This means that there are no “pressure curves” or constraints on how much can be removed how fast; it is simply a matter of having sufficient resources to do the work. More importantly, these rates can be achieved because, as the rate of recovery increases, you are not (effectively) fighting against a finite maximum lode, and the economies of scale work because it is one field, not several fields separated by great distances. Thus, as petroleum flows out at rates far exceeding what was possible before, the price of that petroleum drops. And it keeps dropping as the market is flooded with petroleum. Imagine that before this operation commences oil costs 1 dollar a barrel (to make the math simpler). Let us say I have 100 dollars to spend on energy. So, I purchase 100 dollars’ worth of energy. But it took 10 dollars’ worth of energy to get the oil I’m using as energy, so my net return is 90 barrels of crude. Now, suppose that after operations commence 100 dollars buys 1000 barrels of crude. This means that I can net 900 barrels of crude for the same 100 dollars. My energy has gone up dramatically but my economic cost is constant. Of course, our EROEI is lower now, so we have to adjust and recalculate: 100 dollars buys 200 barrels of crude with an EROEI = 3. Thus, for the same economic cost I have doubled my energy, and I have done so in the same amount of time because, by economies of scale, I can obtain that petroleum twice as fast as before. And I can achieve that production rate because I do not have to worry about running out for quite a while.
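The interplay between price and EROEI in that example can be written down directly. This is only a sketch of the general point; the 30-cent price in the flooded-market case is an illustrative assumption of mine, not a figure from the example above.

    # Net energy purchased for a fixed budget, as a function of price and EROEI.
    # gross barrels = budget / price; net barrels = gross * (1 - 1/EROEI).

    def net_barrels(budget_dollars, price_per_barrel, eroei):
        gross = budget_dollars / price_per_barrel
        return gross * (1.0 - 1.0 / eroei)

    # Before the hypothetical Colorado operation: $1/barrel, EROEI ~10.
    print(net_barrels(100, 1.00, 10))   # 90 barrels net, as in the example

    # After the market is flooded: assume the price falls to $0.30/barrel, EROEI ~3.
    print(net_barrels(100, 0.30, 3))    # ~222 barrels net -- more energy for the same dollars

The point is simply that a steep enough price drop can more than compensate for a lower EROEI, which is why EROEI alone does not settle the question.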

So, as with peak oil, simply blurting out EROEI doesn’t explain everything. You have to take all the variables into account. Okay, will we finally get to the bad news? Yes, we are now ready to see the deeper problem and the key point so many are tragically missing. I glossed over economies of scale and production rates for kerogen and assumed that we actually had the ability to ramp up to that. In other words, we have to be able to invest in that massive infrastructure in Colorado to start this voracious beast up. Do we have what we need? Well, we have the petroleum in shale oil. But is that really all that matters? Of course not. We will not be able to reach that kind of kerogen-to-petroleum production rate with surplus tight oil alone. And this is where it gets interesting.

Those who study economics and petroleum often point out that the strength of an economy is largely dictated by the per capita energy per unit time that a country or region achieves. Energy per unit time is power, and it is measured in watts. So, what they are saying is that the strength of an economy ultimately falls back to per capita power consumption. This is why climate change is so controversial overseas. Other countries know this, and they see attempts by western, industrialized nations to limit CO2 emissions as nothing more than curbing per capita power consumption, thus derailing economies. For the western world, the association between per capita power consumption and CO2 is not nearly as strong, so it does not affect them as badly. But for countries still burning lots of coal, and for countries without efficient cars and trucks, such cutbacks in CO2 would have drastic effects on their ability to industrialize. For our discussion, though, what matters is what this measure does not capture. To explain, another example is in order. Consider a farmer living in the 1700s in North America, plowing fields with a mule and bottom plow. The per capita power consumption for that farmer is, say, x. Now, a farmer in North America in 2013 performs the same task using a very small, diesel-powered tractor with plows, harrows and the like. In this case the per capita power consumption is considerably higher, and we’ll denote it y. Notice that over the years the transition from x to y is gradual, as each new technology and piece of equipment increases the power available to the operator. But why, exactly, does this seem to be correlated with overall quality of life? Why is it that better health, education and so on become so common as power consumption increases? The reason lies in the definitions of energy and power. In physics, “useful work” or “work done” on an “environment” refers to the effect, or result, of applying energy to a defined “environment” (which is why work is sometimes spoken of as the negative of energy). When we apply energy to an environment, we are dumping energy into that environment in some controlled, intelligent manner. In the farmer example the “environment” is the soil, or the Earth itself, which we transform intelligently into something favorable to plant growth. This takes lots of energy. In fact, the mule and plow ultimately expend the same total energy as the tractor does. The difference, however, is how fast it happens: the tractor does it orders of magnitude faster. In other words, it is the power of the tractor over the mule that makes the difference. Thus, we can fundamentally improve our lives by intelligently applying power to satisfy a human need with speed, giving us the time to engage in other worthy tasks. We can use that time for leisure, education or other work. In the end, the quality of life improves.
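
The mule-versus-tractor point reduces to time = energy / power. A tiny sketch, with made-up numbers for the plowing energy and the two power levels:

    # Same total energy delivered, different power, therefore different time.
    # The energy and power figures are made up purely for illustration.

    PLOWING_ENERGY_J = 5e9        # assumed total energy to plow the field, joules

    def hours_to_finish(power_watts):
        return PLOWING_ENERGY_J / power_watts / 3600

    print(f"Mule and plow (~500 W sustained): {hours_to_finish(500):,.0f} hours")
    print(f"Small diesel tractor (~30,000 W): {hours_to_finish(30_000):,.0f} hours")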

Thus per capita power consumption is key to the advancement of humanity, period. We have no time to march in protest of an unjust ruler, no time to educate our children, no time to do other useful work such as plowing our neighbor’s garden, or anything else, if we are captive to spending most of the hours of our lives slowly expending all the energy necessary for our survival. Power is freedom.

But power provides other, indirect improvements to quality of life as well. We can afford clean, running water because we have the power to dig enough water wells; we have the power to run the factories that make the pharmaceutical medicines which alleviate suffering; we have the power to build the massive buildings called schools and universities in which our capacity to learn is enhanced; and on and on.

So, in the petroleum discussion, when we speak of “ramping up” to a new way of obtaining petroleum, one which requires more upfront energy than the old forms, we are talking about a per capita power consumption problem. The shale oil can solve that for us. But this discussion is missing a key ingredient in the ramp-up. Once again, we have to be careful not to overgeneralize. Generally speaking, the power statement is correct. But in reality we have to consider something else, and that something else is the “environment” we just discussed. We can usually just ignore it because:

The rate at which we can access the environment is assumed to be infinite, or, at minimum, proportionately greater than the rate at which we are expending energy into it.

This will not always be the case. Let me explain. In the case of the tractor, of course we have access to the ground, because that is what we’re farming, and there is no obvious constraint on how fast we can “get to it”. But what if we change the example a little? Suppose we now think of a factory that takes aluminum ore mined from the Earth and smelts it, producing ingots of aluminum that can then be shipped to buyers who may use them to build products that society needs. The mines that extract the aluminum can only do so with finite speed. And if that resource is finite, and especially if it is rare or constrained in volume, the rate at which we can recover it is indeed constrained. Now, if I am a buyer of ingots and I make car parts out of them, the rate at which I can make cars no longer depends solely on the power available on my factory assembly line; I also have to consider how fast I can get ingots into the factory. This is a special case, and generally it is not actually an issue. But to understand that we have to increase our altitude yet more over the forest to see the full lay of the land. Ultimately, all matter is energy. We should, in principle, be able to generate raw materials from energy alone if the technology for it existed. As a practical matter, we can’t do that. We depend on raw materials, aluminum being but one example, which come from the periodic table of the elements and the plants, animals and minerals of the Earth. And the most constrained of all of them is the periodic table. As it turns out, petroleum is not our only problem and, not surprisingly, the crisis of the elements is of a very similar nature. It isn’t really that we are “running out”; it’s that the rate at which we can access them is slowing down while consumption goes up. And that’s the problem with petroleum, too. We have plenty of reserves, but our ability to access them fast enough is what is getting scary. Unfortunately for us, raw materials, and rare earth metals especially, are hard to find on the Earth’s surface. Almost all of the rare elements are deep within the Earth, much too far down to be accessible. Thus, our supply chain is constrained. This is why plastics have become so popular over the last three or four decades. In fact, some 90% of all manufacturing in the world now depends in some way on petroleum, ironically, because the raw materials we used to use are drying up. And the rate at which we can recycle them is not nearly fast enough.
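
The ingot example is just a throughput bottleneck: output is limited by the slower of the two rates. A minimal sketch, with hypothetical numbers:

    # Output rate of the hypothetical car-parts factory is capped by the slower of:
    #   (a) what the assembly line's power allows, and
    #   (b) how fast raw material (ingots) arrives.
    # All numbers are hypothetical.

    def achievable_output_rate(line_capacity_per_hr, ingot_supply_per_hr, ingots_per_part):
        parts_supportable_by_supply = ingot_supply_per_hr / ingots_per_part
        return min(line_capacity_per_hr, parts_supportable_by_supply)

    # Plenty of power on the line, but constrained ingot deliveries:
    print(achievable_output_rate(line_capacity_per_hr=500,
                                 ingot_supply_per_hr=300,
                                 ingots_per_part=2))   # 150 parts/hr: supply-limited, not power-limited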

So, the very same problem of production rates that exists for petroleum exists for the elements, and we have not yet discussed the world outside the United States. I have deliberately focused on the U.S. and Canada for a reason: the global situation beyond them is dire. Why? Because, even if we solve the petroleum production rate problem in the United States, as I’ve suggested it will be,

It will be frustratingly constrained in its usefulness if dramatic improvements in the rate of production of elements of the periodic table are not found rapidly.

And that’s just the U.S. and Canada. The situation in the rest of the world is far, far worse. There is only one place where such elements can be found in such large quantities and exploited rapidly. And it is not in the ground; it is up, up in the sky. Near Earth asteroids are the only viable, natural source that can fuel the infrastructure creation necessary to drive the kerogen production needed. Said more fundamentally: if we do not find a solution in the staged development of shale oil and then kerogen, coupled with massive increases in natural resources that increases in power consumption can take advantage of, humanity will die a slow, savage and brutal death.

What???

What we really need to express this economic situation is a new figure of merit that combines per capita power consumption with the rate at which we can access the raw materials being manipulated by any given power source. We cannot perform a meaningful study of this issue without it. For now, I will call it Q, and it shall be defined on a per-operator basis (analogous to per capita, but based on a per-operator figure, a technologically determined value). I define it as the product of a given power consumption and the mass of raw material, in kilograms, operated on by the power source per second. Q would be calculated for each element, mineral or defined material it operates on, using a subscript. So, for aluminum it would be:

Q_Al

And for any particular, defined economic enterprise the collection of such materials I will take to be a mean of all such Q and denote it:

Q_m
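
A minimal sketch of this definition exactly as just stated, with hypothetical numbers; the units (watts times kilograms per second, per operator) follow directly from the definition above.

    # Q for a single material: per-operator power consumption multiplied by the
    # mass of that material processed per second by that power source.
    # Numbers below are hypothetical, purely to illustrate the definition.

    def Q(power_watts_per_operator, mass_throughput_kg_per_s):
        return power_watts_per_operator * mass_throughput_kg_per_s

    def Q_mean(q_values):
        """Q_m for an enterprise: the mean of Q over all materials it operates on."""
        return sum(q_values) / len(q_values)

    Q_Al = Q(power_watts_per_operator=2.0e6, mass_throughput_kg_per_s=5.0)    # aluminum line
    Q_Fe = Q(power_watts_per_operator=1.5e6, mass_throughput_kg_per_s=12.0)   # steel line

    print(Q_Al, Q_Fe, Q_mean([Q_Al, Q_Fe]))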

Now, the Q_m for the kerogen-to-crude conversion (retorting) must be greater than some minimum value that is actuarially sound and economically viable. For a sufficient value we can expect economic prosperity, and for some lesser value we can expect a bare threshold of survival for humanity. That threshold is determined by the pre-existing Q_c (Q based not on an operator but on a true per capita basis) and the maximum range of variance an economy can withstand before becoming chaotic and unstable (meaning, before civilized society breaks down). So, what do we mean by death and destruction? Well, here’s the bad news.

The problem we are facing has two facets:

We are seeing a reduction in global “production” rates for both energy and matter.

As populations increase, this will get worse. Only Canada and the United States appear to be in a position to respond, with the favorable geology and sufficient capital, technology and effort to compensate for dramatic losses in conventional oil production rates: if you are pumping water and gases into oil wells now to boost production, the drop-off after peak won’t be a smooth curve but will look more like a cliff. And now we can see why the peak oil concerns are real, but for the wrong reasons. The problem is that though the oil is there, it is costing more and more to get it out, and the raw materials (capital) needed to invest in ever more expensive recovery – economies of scale – are not forthcoming. The “cliff” is economic, not physical. Thus, even in the few countries where reserves are still quite large, economies of scale do not appear to be working, precisely because of a lack of raw materials (capital) and, to some degree, energy. The divergent state of affairs between North America and everywhere else is due to several factors:

  1. Whatever the cause, conventional petroleum production rates are declining or are requiring greater and greater investment to keep up with prior production rates. This could be because of fundamental peaking or it could be because nominal investments needed to improve production rates have simply not been initiated until now.
  2. Tight oil is the only petroleum product that has been shown to be economically viable and that is not affected by the problem in 1.
  3. North America has by far the most favorable geology for shale, which is why it has been possible to start up tight oil production in Canada and the U.S.
  4. North America has the strongest economy for fueling the up-front, very high investment costs that a new infrastructure in tight oil will require.
  5. The U.S. and Canada have been studying the local shale geology for over 30 years and have developed a sufficient knowledge to utilize it, to a degree far surpassing what has been done anywhere else.
  6. North America has more advanced drilling technology for this purpose than any other locale can call upon or utilize.
  7. Despite the massive consumption in the United States, Canada and the U.S. appear to be at or near energy independence now, which means that instabilities around the globe will not likely have a negative impact on tight oil production as a result of economic shocks (at least not directly).

The biggest question for the United States is this: what are you going to do about raw materials? The good fortune found in tight oil will avail nothing if the United States doesn’t also dramatically increase the rate at which it can “produce” raw materials, particularly elements of the periodic table. The only way to do this is to create a crewed space flight infrastructure whose purpose is to collect these materials from asteroids, where they appear in amounts astronomically greater than anything found on Earth. If the United States fails to do this, it and Canada will go the way of the rest of humanity. To explain: it may survive the tight oil period, but the problem won’t present itself until the switch to kerogen is attempted in some 30 or more years. And it would take 30 years to develop such a space flight infrastructure. There is no room for gaps. Because of kerogen’s poor EROEI, it will absolutely depend on higher production rates of raw materials; i.e., increased flow of capital.

Of course, at some point alternative energy will have to be developed and the entire prime mover infrastructure will have to be updated. That is really the end goal. But this is no small task. It will cost trillions and will take decades to convert humanity over to a fully electric infrastructure, which is one of the key requirements for comprehensive conversion to alternative energies. And alas, we do not have the raw materials on Earth to build enough batteries for all of it. Thus, once again, the asteroids loom as our only hope. When and if we achieve an energy infrastructure that does not include fossil fuels, we will have taken a key step in our development. At that point, for the first time, humanity will be progressing using fundamental physical principles common throughout the universe and not specific to Earth. It will be a seminal transition.

What does this mean? I had written a few paragraphs on that question but, realizing how depressing it all is, I leave it at this. USG needs to start developing this space infrastructure yesterday and they need to keep hammering away at kerogen. I hope I’m wrong about this.

- kk

[Image: the surface of Titan at the Huygens landing zone]

As you may know, I try to keep my ear to the ground on matters of crewed space flight. I wanted to share with my readers a major development, a paradigm shift, going on right now in space transportation. At the close of the fifties the United States and the Soviet Union were competing with each other to fly higher, faster and farther than any before them. Yuri Gagarin lit the candle when he became the first human being to orbit Earth. But the less well known fact about space flight has been its irony: lifting objects into low Earth orbit (LEO) was a technological barrier for humanity that was briefly overcome only by sheer bravado and brute force, in a way that would never be economical. Rockets were constructed that staged their fuel on the way up, dropped their airframes in pieces like a disintegrating totem pole and reached over 17,000 mph just to place a few hundred pounds into orbit. The truth is, no one really had the technology needed to do this economically. But both the US and the USSR convinced each other of one thing: while it may be a silly waste of money to do such a thing, both of them could place a relatively lightweight nuclear warhead into orbit and pose a threat to the other. Once they proved it, everybody went home. Space flight for the last 50 years has been a stunt that only governments could afford and that only mutually assured destruction could inspire. Unless someone could find a way to reuse these craft, especially the most expensive components, flying to LEO by throwing away all of your hardware on every flight would never make sense in the hard reality of economic necessity.

But reusing these machines was a technological leap beyond Sputnik and Apollo … a big leap. And for that reason space flight floundered for decades. And yes, we’ve all heard what a big waste the Space Shuttle was. But I want to offer a counterpoint that history gives us in 20/20 hindsight. The two most expensive components of rockets, by far, dominated all discussion of space flight. Since we couldn’t overcome those two problems, mass, yes weight, was the dominating factor in every discussion of space flight. From deep space probes to space stations, to the Shuttle and to the Moon and Mars, weight was the big nasty sea dragon that ensured that talk of frequent missions such as these was hopeless. You can’t go to Mars with a pocketknife and a Bunsen burner, as I like to say, but many proposed it anyway. The reality was that limitations on weight, borne of extremely constraining technological limits that in turn drove the economics, ensured we’d get nowhere. Everything hinged on making LEO economical and loosening the maddening mass restriction that has bedeviled the human space enterprise for some fifty years now. I am happy to report that one of the two technological hurdles needed to overcome this limitation has been cleared and the second is being aggressively run to ground.

The first problem is that rocket engines are powerful; so powerful, in fact, that they are something like 20 times more powerful than jet engines by either weight or volume. The temperatures at which they operate are in the 6000 degree F range, with combustion pressures over 1000 psi. Jet engines come nowhere near being able to handle this. And you need rockets because, frankly, we don’t have any other way to get to LEO. Air-breathing hybrids are, contrary to popular myth, decades away (kind of like fusion power), because we still don’t know how to combust a fuel/air mixture over a wide range of speeds in a single scramjet design, for example. The only foreseeable technology is rocket engines. But there’s the rub. We can’t just throw them away on every flight because they are, along with the airframe itself, by far the most expensive components of the rocket. And that is what I meant by technological barriers: we threw these expensive things away before not just because we couldn’t carry their weight into orbit, but because our technology was too primitive to build a rocket that didn’t destroy itself after a single flight. Specifically, the key component is called a turbo-pump, and in reality rockets are just pipes and wires built around the turbo-pump. The turbo-pump is the key technology and the really expensive part of the machine. And we didn’t know how to build a turbo-pump that you could keep firing over and over, like driving your car to work every day. Previously, we had to throw them out after every drive, like throwing out your car engine every time you go to the grocery store. This would never be economically sustainable. And it took us nearly 50 years to solve this problem. And that’s where the Space Shuttle comes in.

The Space Shuttle was supposed to be reusable, but as we all know, it never really was. But the one component of the Shuttle that gets little attention is the set of high pressure turbo-pumps on its RS-25 engines. Over nearly 30 years of operating the Shuttle, and because it was supposed to be reusable, NASA kept tinkering away at the turbo-pumps; flying them, studying them and enhancing them. It took billions of dollars, years of time and thousands of person-hours. But over those years engineers at NASA finally began making headway on the engine that was supposed to be reusable, was in practice a throwaway, but which was gradually becoming a true, reusable, high powered rocket engine. They figured out how to reduce the temperature and pressure a bit, and “Block I” was born. Then they figured out how to handle the massive cavitation and “out of round” motion of the turbo shaft at 37,000 rpm and 85,000 horsepower, calling the result “Block II”. They shot-peened the turbine blades to resist hydrogen wear and used silicon nitride ball bearings, something that took months of jig testing and all sorts of workarounds to resolve the heavy wear on bearings and surfaces that had been forcing them to either throw the engines out after one flight or overhaul them after every flight. The advantages of this new engine (10 flights before overhaul, an incredible advance) were so impressive that NASA set about capitalizing on what it had learned, with plans to build a replacement engine that would fly 100 times between overhauls (this was to be the RS-83 and RS-84). In other words, for the first time they really knew how to build a truly reusable engine. Of course, by the time Block II came out and they had started on the new engine, the Shuttle was about to retire and the new engine program was cancelled. But NASA is more or less open source, and its work and findings spread around the research community. People learned the lessons so hard-earned by NASA. They started building turbo-pumps with silicon nitride bearings, used new computer models developed by the NASA Shuttle team for damping (another huge problem) and generally incorporated every lesson they could from NASA’s experience. Numerous papers have been written on this, and aeronautical engineering professors write almost verbatim from NASA documents extolling the lessons learned. Around 2012 SpaceX tested a totally new turbo-pump. Its overall thrust is a dramatic increase over its predecessor. The turbo-pumps can be re-fired and the rocket is reusable. Something big has happened. There are, of course, narcissists in our capitalist world who will never acknowledge credit where credit is due, but yes, all this came from NASA, and that is clear and unambiguous.

SpaceX is, for now, also the only company looking at reusing the airframe, the other truly expensive component, and is the first to make real, tangible progress in that direction. Interestingly, that problem is intertwined with the turbo-pump problem, for the simplest technological solution is just to fly the booster right back to the launch pad when it’s done lifting your cargo. This was unthinkable before reusable turbo-pumps were perfected (it requires three separate firings in one flight, and most turbo-pumps will literally blow up if you try to shut them down gracefully, much less start them back up). It won’t be long before the rest of the industry jumps on this bandwagon and does the same. In fact, we think they already have, though we cannot confirm that it comes from NASA research directly. Everyone is now fascinated with reusable rocket engines and airframes. The engineering streets are abuzz with talk. To see why: of a 50 million dollar launch, only about 200,000 dollars goes to fuel. The turbo-pump suite costs 10 to 40 million (and today those pumps are simply thrown away). When the books are settled, it costs at least 3500 dollars per pound to take cargo to LEO. But if you reuse the rocket and its turbo-pumps, that cost plummets to less than 100 dollars a pound. In the next 10 years the boundaries of space are about to explode with activity. There are more resources that are easy to recover and bring to Earth than anything imaginable from past experience. It will be like the ’49 gold rush squared. And all this talk about “lightweight” and “mass restrictions” will sound rather quaint.
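
A back-of-the-envelope sketch of that cost argument. The per-launch hardware and fuel figures are the rough ones quoted above; the payload mass, refurbishment cost and number of reuses are purely illustrative assumptions of mine.

    # Rough cost-per-pound comparison: expendable vs. reusable launcher.
    # Hardware and fuel figures are the rough ones quoted in the text;
    # payload mass, refurbishment cost and flight count are illustrative assumptions.

    HARDWARE_COST = 49_800_000     # airframe + engines, thrown away on an expendable flight
    FUEL_COST = 200_000            # per flight
    PAYLOAD_LB = 14_000            # assumed payload to LEO, pounds

    def cost_per_lb(flights_per_vehicle, refurb_per_flight=0):
        per_flight = HARDWARE_COST / flights_per_vehicle + FUEL_COST + refurb_per_flight
        return per_flight / PAYLOAD_LB

    print(f"Expendable (1 flight): ${cost_per_lb(1):,.0f} per lb")                   # ~$3,600/lb
    print(f"Reused 100x, $500k refurb: ${cost_per_lb(100, 500_000):,.0f} per lb")    # ~$86/lb

Amortizing the hardware over many flights is the entire effect; fuel is nearly a rounding error, which is why reusability changes the economics so violently.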

And btw, this is why NASA is now focusing on the new Space Launch System (that mondo rocket that looks awfully like the Saturn V), which is for deep space flight. They already know the LEO problem is solved, and they are leaving it to private industry. That’s what all the confusion and hoopla in the political world, vis-à-vis space flight, is about: people are realizing that a paradigm shift is occurring. Bring it on!

Neil, I will wink at the moon for you Sunday night.

kk

Hi all,

The truth is stranger than fiction. I’m going to make a point, and I’m going to use one of the wildest conspiracy theories out there to make it: the idea that UFOs exist and that they are spying on us; indeed, that they are spying on nuclear weapons facilities around the globe as if in preface to a global invasion. Good stuff, but let’s read between the lines, and I think the truth might be in there somewhere. Something is going on, but it’s not what some think. Call me a skeptic, but first we need to explain what that word means, which takes me right to my point.

First, the term “skeptic” is confusing in the UFO research arena. Apparently, those who hold the orthodox view that UFOs do not exist view themselves as “skeptics”. This was a bit confusing to me at first, but it’s only a matter of semantics. Or is it? I think it is more accurate to call those who challenge the orthodox opinion skeptics, not the reverse. For the view that UFOs are real is a heterodox opinion and, by definition, skeptical of the orthodox view.

Having said this, most attempts by the orthodoxy to refute UFO evidence seem to revolve around a style that is frustrating to read and research: most attempts to “debunk” are tomes of Ignoratio elenchi (basically, making arguments that are cleverly irrelevant to the presenting claim). And that itself gives the appearance of dishonesty. Whether it is honest or not, those who make these arguments should consider this in their analysis, because it is a source of considerable suspicion for most readers. One of the common tools used in this approach, among many others, is the tendency to over-emphasize things like witness credibility, particularly the credibility of the researcher themselves. For if the researcher can be found to be dishonest or misleading (or just crazy), then it is assumed that everything they say or claim is false. Ironically, this is the same approach used in Western law, and it has been shown in numerous psychological studies to be fallacious. This is because there are two types of “believers”: those who are charlatans and do not in fact believe, and those who engage in wishful thinking but still believe the over-arching premise. For most researchers who reach heterodox conclusions, the latter is more common. It’s the age-old fallacy of throwing the baby out with the bath water without realizing that, while some parts of a person’s research may be fallacious or faulty, that does not, by itself, imply that all of it is. The astute researcher has to know how to sort this out.

But what many orthodox researchers do, once they find any fault or error in an analysis, is engage in an argument of Ignoratio elenchi by focusing only on the matters of credibility, not the actual claims being made. Thus they spend volumes discussing tangential factors to the overall story that, by themselves, have nothing to do with the truth or falsity of the over-arching claim. That is not to say that credibility has no place: if one finds that, for example, the so-called “Majestic 12” documents originated from U.S. Air Force counter-intelligence agents (Special Agent Richard Doty, to be precise), then it can be safely assumed that any document referencing Majestic 12 is probably a fabrication. That is a credibility assessment. But what makes it germane and different than broad, context-void assessments of credibility, is the fact that it is contextually relevant. On the other hand, suggesting that Robert Hastings, who researches odd events at nuclear weapons facilities, has deliberately fudged facts about one incident at one location, say, in 1967 or 1968, does not by itself provide sufficient evidence to “debunk” evidence that exists for the same kind of phenomenon at another site in 1975, whether that evidence originated with Hastings or someone else. Critical analysis just isn’t that simple and explanations that try to do this are catch-penny arguments used to appeal to human bias and prejudices; namely those having to do with people’s natural tendency to disbelieve anything that comes from a source that has been dishonest at some point in the past. This is the same reason why a known and admitted prostitute on the witness stand is not guaranteed to lie for any and all questions posed to her. It just isn’t that simple. And what we are doing here has nothing to do with trying an individual; we are seeking confirmation of facts and assertions that may or may not be independent of that witness.

Two excellent examples of deceptive “debunking” are the Air Force “Case Closed” report of 1995 and the internet article by James Carlson found at http://www.realityuncovered.net/blog/2011/12/by-their-works-shall-ye-know-them-part-1/. In the case of the Air Force, an attempt to debunk the skeptical claims about the official narrative of the 1947 Roswell incident was proffered in 1994 and 1995, when the Air Force changed its official story from weather balloons to something called Project Mogul. This tells us, implicitly but loudly, that the Air Force was engaged in counter-intelligence when it lied about the weather balloons. Setting aside for the moment the germane questions of credibility this raises about the Mogul claim itself, the Air Force spent almost all of its report talking about Project Mogul. It was basically a history lesson about Mogul. But that is not really relevant to the skeptical claim. And sure enough, the explanation was riddled with problems: crash dummies said to be employed in 1947 that didn’t exist until 1952, the Ramey memo in which any modern computer user can clearly see a reference to “victims of the wreck”, and so on. Thus, given that we know disinformation was the source of the weather balloon explanation, and given the obvious application of Ignoratio elenchi in the 1995 Mogul diatribe, it is no wonder the American public doesn’t believe it. That USG can’t see this is astonishing, but it reinforces the view that they are out of touch with the public.

In a similar way, Carlson makes an elaborate argument that the 1967 and 1968 phenomena at one missile base were, at best, the wishful thinking of a skeptic named Robert Hastings. Once the “gotcha” was in place, it was then assumed, by character assassination, that all the other events must be of a similar nature. It was an exercise in Ignoratio elenchi.

And this is why these kinds of analyses are a turn-off for most readers. Most readers see this as a personal attack on a person rather than an honest pursuit of truth. That Establishment figures in government, who do the same thing, have apparently not noticed this is astonishing, but it shows how out of touch they are with everyday people. If there were ever a sign of elitism sticking out like a sore thumb, this is it. Thus, in order to examine the Hastings research, we need to examine each case of purported tampering at each base on each date and ask only the questions of merit:

1.)  Who actually witnessed the visible phenomenon? Are their names known to us? What is the chain through which this information reaches us now? What have they said? What written or electronic data is available to corroborate it? Where is it? Can we see it? Is it clear and convincing?

2.)  What witnesses can report on the radar data? Have they also been identified? What is the chain through which this information reaches us now? Do records of these radar sweeps exist? Can we see them? Is it clear and convincing that solid objects were present?

3.)  What failure mode, if any, was witnessed and who witnessed this? Have they been identified? What is the chain through which this information reaches us now? Did anything actually fail? What was the nature of the failure? What records exist to corroborate this failure? Is it clear and convincing that a failure mode without prosaic explanation occurred? (classified aspects of their operation can still be protected by a careful review of how the systems are explained – we don’t need to know how they work).

4.)  Does the movement of objects, if established as above, correlate well between visual sightings and radar tracks? Does it appear to be intelligently directed as a response or anticipation of human behavior? Time, location and altitude are critical here.

We don’t need diatribes and tomes of personal attacks, tangential information and digressions of Ignoratio elenchi to resolve this. We need data.

The global public is becoming more and more sophisticated in their understanding of geopolitics and disinformation. They more easily recognize it and its common attendants, such as catch-penny reasoning, Ignoratio elenchi, the role of greed and money and the extremes to which power corrupts. It’s time for USG to catch up.

Virtually every government “investigation” and every “debunker” out there has done nothing to address the four questions that could be equally applied, in different form, to just about any “conspiracy theory”. And the field is chock-full of charlatans and fairy tales that can be easily discounted with a modicum of background research into the provenance of documents and the nature of the claims by simply applying the questions above, even if they cannot be fully answered. There is truth between the lies.

I would submit that USG should rethink the way it approaches conspiracy theories: avoid the catch-penny silliness and respond to them directly, in a hyper-focused manner, by releasing data specific to the claims of merit. The tactic they’ve been using since at least 1947 is itself becoming a national security issue because of the distrust in government it has caused. And Popular Mechanics commentary – which most anyone with two brain cells to rub together knows to be a USG shill – isn’t needed in their replies. Just the relevant data. They need to do this with the Kennedy assassination, 9/11, UFOs and anything else of popular lore. Sadly, I’m afraid their hubris is too inflated now to ever do that, but for my part, I’ve illuminated the path. Listen to me now, hear me later.

- kk

P.S. Watch how disinformation works. Everybody is focused on Edward Snowden. But is that really the story? How about the fact that his revelations have confirmed that you are being watched, Orwell-style? All the way down to your Safeway discount card application and purchasing data. Yep, that’s right. Go read what he actually handed over to the journalist who reported it. Google is your friend.

As some of you might have heard, a site suspected of being Chachapoya was discovered east of the Andes in the western Amazon jungle back in 2007. I don’t know if it’s going to be excavated or not. The site has been called “Huaca La Penitenciaría”, or Penitentiary Ruins, so named because it looks like a fortress of stone. It is buried in deep jungle growth at an elevation of about 6000 feet. This means that archaeologists now have to consider the possibility that the Chachapoya’s eastern boundary extended into the Amazon. Anyway, numerous structures have now been verified in the Amazon jungle, and it is clear that this “jungle” wasn’t always so. Human beings have been “terraforming” it for centuries, creating a very rich, black soil on top of the acidic rain forest soil.

[Image: Huaca La Penitenciaría]

The “black soil” (terra preta) was a mystery until recent years. Now it has been learned that these people had an ingenious system of settled agriculture in which they enriched poor soil with charcoal and other materials, building dikes, water channels and artificial lakes, and then harvested fish in those lakes. Pretty clever. Archaeologists now believe the Amazon area was host to a population as high as 5 million, with “urbanity” ratings exceeding that of ancient Rome. They essentially built an artificial archipelago. This civilization, about which we still know so little, extended from sea to shining sea, all the way across the Amazon. The finding of the “penitentiary” means that we now know heavy stonework was employed in the Amazon. This is a sea-change in thinking, as it means we can expect to see more of it.

[Image: Amazonian terra preta next to regular Amazonian soil]

I decided to grab some of the satellite data on where this “black soil” has been found and do some searching through Google satellite imagery. I was hoping others could help me identify some new prospects. I’ll be writing a big piece on the anthropology and archaeology behind all of this pretty soon. But for now, I was just wondering if anyone knew about these places and what might be there (such as modern constructions).
You’ll notice a lot of circles and berms. These are the types of shapes seen before, called geoglyphs, which have now been found to represent raised earth mounds. Anyway, peruse and enjoy. The locations are in the file names, so click on an image and read its filename to find the spot on Google.

possible_Yet_Another_Geometrically_Regular_Field_With_Cube_At_Decimal_12.332623_South_68.87198_West_001
possible_Geometrically_Regular_Field_With_Xs_And_Ls_At_Decimal_12.306922_South_68.891402_West_001
possible_Geometrically_Regular_Field_With_Rectangular_Objects_And_Xs_And_Ls_At_Decimal_12.306922_South_68.891402_West_001
possible_Artificial_Structure_Zoom_001

Here, what looks like a pyramid or platform structure with a couple of trees growing on top. Notice the regular geometric seams in the rock.

possible_Artificial_Object_Field_Shaped_Like_Arrow_Head_At_12.364966_South_68.867673_West_002

This is very strange. The picture is taken from an angle, but when corrected, this raised earth appears to take on the shape of an arrowhead. The pyramid/platform/whatever is at the bottom right. Oddly, the arrowhead points on an azimuth almost directly at Puma Punku (within about 2.5 degrees), not far away. The figures are (reverse azimuth):

Distance: 467.1 km
Initial bearing: 357°29′53″
Final bearing: 357°32′42″
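
For anyone who wants to check figures like these, here is a minimal great-circle sketch. The arrowhead coordinates are taken from the filename above; the Puma Punku coordinates are my own approximate values, so treat the output as a rough check rather than the survey-grade numbers quoted.

    import math

    # Great-circle distance (haversine) and initial bearing between two points.
    # Arrowhead coordinates come from the image filename above; the Puma Punku
    # coordinates are approximate values I looked up, so results are only a rough check.

    def distance_and_bearing(lat1, lon1, lat2, lon2, radius_km=6371.0):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        dist = 2 * radius_km * math.asin(math.sqrt(a))
        y = math.sin(dlmb) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlmb)
        bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
        return dist, bearing

    arrowhead = (-12.364966, -68.867673)     # from the filename above
    puma_punku = (-16.5622, -68.6795)        # approximate, my assumption

    d, b = distance_and_bearing(*puma_punku, *arrowhead)   # reverse azimuth: Puma Punku -> arrowhead
    print(f"{d:.1f} km, initial bearing {b:.1f} deg")      # ~467 km, ~357.5 deg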

possible_Artificial_Object_At_12.364966_South_68.867673_West_002
possible_Another_Geometrically_Regular_Field_With_StoneWork_At_Decimal_12.336082_South_68.869891_West_001
possible_Another_Geometrically_Regular_Field_Lower_Portion_At_Decimal_12.336082_South_68.869891_West_001

A geometric object lying in the same field as the platform below.

possible_And_Another_Geometrically_Regular_Field_Zoom_At_Decimal_12.337193_South_68.846632_West_001

Compare this to the layout and overhead at Tiwanaku

pumuPunka_Overhead

And the layout scheme at Puma Punku:

puma_Punka_Layout

possible_And_Another_Geometrically_Regular_Field_At_Decimal_12.337193_South_68.846632_West_001

Here is what looks like a stone platform like the one at Tiwanaku.

Compare this to the Chachapoya Penitentiary layout:

chechapoya_At_6000_Feet_Peru

You might have noticed something odd. What are those cauldron- or casket-looking structures on the roof? Who knows. But check this out:

possible_Penitentiary_Analog_At_Decimal_12.363672_South_68.846252_West_001

Unfortunately, the image quality degrades inline, but you can click on the image, then click on full size, to get a better view. This is just east of the arrowhead, about a mile or so. Notice the odd roof structures. This site is very close to Puma Punku, and I will have much to say about masonry in my article on this subject. The masonry problem may have been solved in the Amazon jungle.

And much, much more. I’ve found stuff like this all over the Amazon. Archaeologists are now saying that there may be hundreds or even thousands of earthworks in the Amazon that people have practically lived on for centuries without noticing until now.

-kk
