Archive

Global Rule of Law

Welcome to the Greatest Game on Earth

With the shenanigans of Edward Snowden and the controversy surrounding government surveillance, a topic of key concern has been brought to the fore, and there seems to be immense confusion and frustration over it. The reason for this is that Orwellian strategists have been applying a clever tool of deception for centuries, one that only in the digital age is becoming reified, more transparent and more obvious to the public. And it’s time the public put an end to it. What am I talking about?

I’m talking about the nebulous concept of “forbidden knowledge” and how that amorphous notion has been used to conflate categories and classes of information to permit a clever form of effective censorship. The easiest way to explain this is by example. Suppose I receive a political commentary on the internet from a friend who passes it to me by email. Let us say it is critical of a powerful, despotic political leader. What some would like is the power to render that knowledge forbidden; that is, the power to prosecute anyone who possesses that knowledge, imprison them and silence them. Or, barring that, to punish them severely enough to dissuade them from publishing or sharing it further. And let’s not kid ourselves: the true Holy Grail in this scheme is to be able to do this without the public realizing it. For if, in an open, supposedly democratic society, we can achieve this without public backlash, then all the better. It’s obvious enough how this might be done in North Korea, but what about in the States? How could you censor opinion itself without public backlash? The answer is conflation.

Suppose I have a friend who has illegally obtained classified information from the NSA, then sent that information to me by email. Whether I paid for it or it was given to me freely is irrelevant to the point I’m going to make. Now, suppose I then publish that information and give it wide distribution in the public domain. In this case, the State can prosecute both my friend and myself without much public backlash and without the appearance of betraying the precepts of an open society. But what I’d really like to do as an Orwellian strategist is to confuse the issue a bit. And here’s how I do that. Not only do I prosecute the person who illegally obtained the information, I make sure anyone repeating it (myself in this example) is also prosecuted. In the case of an NSA leak, no one gets particularly alarmed (many do, but we’ll fix this defect in the example shortly). And the result is that I can effectively censor information even when that information is leaked. I can shut it down and prevent its sharing in the public domain through the chilling effect this creates on anyone who dares repeat what they’ve heard. So, not only do I make the theft of classified information illegal, I make the mere possession of it illegal. Well, what’s wrong with that, you ask? Nothing, in and of itself. The problem, however, is that framing the legalities this way allows me to define classified information not just as classified, but as “forbidden knowledge”. In other words, rather than merely making the theft of such information illegal, I render the information itself illegal by virtue of its being “forbidden knowledge”. This is a big Orwellian step, because now I have defined information itself to be illegal. Now I have an amorphous and nebulous category called “forbidden knowledge” to which, by virtue of its nature, I can add, or from which I can remove, various types of information without the public noticing. All I need to do now is find a clever way to insert into that category opinions “I don’t like”. Can I really do that without the public noticing? Oh, yes, it isn’t that hard to do. But this example doesn’t really do the situation full justice, because it is not as easy in the case of classified government leaks to get the public to:

… not only accept this Orwellian criminalization of information itself, but to actively support it.

No, we need something that is universally reprehensible in order to do that. Because, you see, if we can choose something that is universally reprehensible we can use that to not only gain the public’s acquiescence in this crime, we can garner their active support. The idea here is to conflate different types of information with others. So, let’s change the example.

Suppose a friend of mine runs a female sex slave trade in which he manages the repeated, violent rape of abducted women, videotapes it, then sends me copies to publish on the internet. Let’s say he even murders a few women in the videos, just to make it truly reprehensible. In the Orwellian model, this information itself (the video data) is now illegal. It is “forbidden knowledge”. Will anyone object? Not likely. In fact, most will avidly support this conflation. Had I not thought about it a little more, I would too. But nevertheless, this allows the policy-makers to create law that establishes the existence of forbidden knowledge. Now all they need to do is substitute information willy-nilly, not in law, but in the enforcement of the “forbidden knowledge” law. If they’re clever, they can introduce some laws that help that along without the public really noticing. For an example of just how real and not so theoretical that idea is, think of “terrorism” and how this amorphous concept has been used to justify the excesses of the NSA, or the fact that police now routinely listen in on the wireless communications of private residents without their knowledge or consent. Indeed, without a Court’s knowledge, as no warrant is required. They do it all over America right now. This is a real-world example of this “substitution” technique. For, you see, now everything is called “terrorism”. Jaywalking is terrorism as far as the police are concerned. The same thing happens when forbidden knowledge is an object of legal prosecution; different kinds of knowledge become conflated.

So, what is the failing here? The failing is that we have been duped into the idea that information itself can be made illegal. This is toxic to Enlightenment because all information, from the most reprehensible to the most popular, is necessary to provide the perspective and input for fully informed debate. For one thing, we only know something “reprehensible” has occurred in the first place when the information required to evidence it is legal to possess. Otherwise, we are barred from even knowing it exists. Think about that. If information itself is illegal, we must rely on someone else with the legal “privilege” to possess the forbidden knowledge to tell us it exists. How Orwellian is that? Rather, what we should do, but what some don’t want to do, is render the actions that lead to forbidden knowledge illegal; then investigate and prosecute those cases. But, you see, the problem with this for the Orwellian types is that it means transparency, because governments can’t so easily get away with conflating types of information. In a truly free society, if they would prosecute the “information”, they really have to prosecute the actions that led to the information, not the information itself. So, they must investigate and prosecute the NSA leaker, the rapist/murderer and the terrorist. The problem with this for the Orwellian is that it renders what is being made illegal more transparent and obvious. Trying to make the act of writing a political commentary illegal would not garner public support; it would rather garner opposition. To do so would inspire an Arab Spring of sorts. That’s not acceptable. The object of the Orwellian model is to oppress the subject without the subject realizing who has oppressed them.

Therefore, if the Orwellian can conflate a political commentary with rape, for example, through the nebulous mechanism of “forbidden knowledge”, it is much easier to dupe the public and remain hidden behind the scenes, pulling the strings of censorship while remaining unobserved as the toxin to Enlightenment. In a just and free society, those who merely possess knowledge cannot be guilty of a crime, because they did not do the deed. If you pay for a cable news channel and witness a live murder on television, watching is not a crime. Murder is. If you read Snowden’s leaked documents, that is not a crime. Stealing them is. Some crimes are so horrific that we tend to want to ascribe criminal liability to anyone who so much as thinks about such a crime. But we do this at our own peril, and the Orwellian masters are counting on you. They are counting on you to help them make ignorance the only legal option. Think about that for a minute. Enlightenment and knowledge are the enemy, and they must be made illegal. So, the compass of forbidden knowledge must grow while you sleep, and it must not be made obvious who exactly instilled this chilling effect on you. Think Illumination Everywhere and free humanity of this madness.

In a truly just and free society, the question of whether or not a “deed” was committed in the first place is open to public scrutiny and visibility. No one would call writing a political commentary a criminal deed, but they might think rape is. When Google and their lobbyists go up against Congress on this issue, this is what they need to convey and explain. For if they do not, they will lose the argument to the forces of darkness and the hortatory of jingoistic double-talk.

– kk

Network

Hi all,

The statistics look more and more ominous (for some) with each passing year as the American public moves further from mainstream news and deeper into the alternative media. Americans are flocking in droves to alternative news sites on the internet and, in particular, to alternative news productions on YouTube. Statistics confirm that the comments sections of mainstream news website articles are more influential on public opinion than the articles themselves. It is as if there is some kind of wholesale rejection of mainstream thought. What is behind this trend? To explain it, a digression into political ideology is needed, but I’ll make it brief.

Prior to about the 14th Century, western culture was dominated by a normative confirmation bias that pervaded human thought: religion was the normative confirmation bias that shaped and defined how we viewed and understood our world. Virtually nothing in people’s lives was unaffected by this lens. Whether it was science, society, family, politics or any other matter, religion framed how we understood everything. Even explaining the cosmos had to pass a normative confirmation bias test; for if an idea one advanced about the workings of the cosmos was incongruent with the religious lens, it was rejected. And within western religion there were differing “flavors”, different sects which occupied the populace in endless debate over which one represented the real “truth”. But oddly, what seldom occurred to those living in this time was that religion itself might not be necessary. While there were certainly atheists in that day and age, for most people it was not widely known or understood that one could simply not believe in religion (or “god”) itself, whether that meant continuing to believe in a god in one’s own way or not believing at all. To promote this idea widely could severely jeopardize one’s own self-preservation, so wherever the view was held, it was held in private, and the idea that religion was not needed to understand our world was not widely debated. Some scholarly elitists debated it as time went on, but this was mostly during and after the Enlightenment, well beyond the 14th Century. But that brings us to the next point. The idea that we might explain our world without a need for religion didn’t suddenly appear overnight. It gained popularity in the general population gradually, over time. This breakout into public discourse began during the Enlightenment and slowly expanded over the next two or three hundred years.

But what seems to have escaped the attention of the larger discourse is the fact that normative confirmation bias dies a hard death. To explain this, we should first try to characterize religion more generally in this context, so as to provide a more objective way of identifying normative confirmation bias in whatever guise it may assume in different times and places. The key feature of religion vis-à-vis normative confirmation bias was that religion provided a general theory by which the universe could be explained, and one only needed to deduce specific conclusions from that theory to know how to interact with the world. It was, in a sense, a simplex way of thinking. But what the Enlightenment taught us was that there is another way to understand our world, a duplex way of thinking. Instead of applying a general theory as a given, we could empirically observe the world around us to induce – rather than fabricate out of whole cloth – a theory of how our world works. Once such a theory is established, we can then deduce from it how to interact with our world. This was the birth of empirically driven reason. By observing our world in a series of “experiments” we could, from those specific cases, induce a general understanding of how the universe worked. Once that was established, knowing how to rationally interact with our world became a simple matter of deduction. Thanks to established theory, we then had a predictive tool to guide our actions. What this lesson should have taught us was that general theories not founded in empirical observation are the mother of normative confirmation bias. But alack, we did not learn this lesson. We merely rejected religion (to some degree of approximation) and failed to apply the same logic to everything else. And that is partly because the tools of empirical observation were limited in the scope of subjects to which they could be applied. Anything that escaped our capacity to empirically characterize therefore remained in place as a potential “mother” of normative confirmation bias.

As the years wore on, our capacity to empirically measure previously vague and inscrutable matters such as sociology improved. But while this capacity to empirically measure our sociological world improved, our understanding of how to manage society based on that sociology remained fixed: we continued to base decisions on the management of human society on theories derived out of whole cloth, independent of those measures. To be sure, most of these theories had considerable thought put into them, but that is not the same thing as an empirically driven understanding in which a theory is induced from specific experimental observation. This wealth of intellect and thought was no different in form from the same kind of theorizing applied in the religious context, which we came to know as theology. Neither was born of empirical observation. Thus, to the surprise of many today, just as it was a surprise to adherents prior to the 14th Century, neither religion nor political ideology is necessary to understand our world. Today, our capacity to empirically measure social indicators and effects is robust, but our mental paradigm is still stuck in the past, on theories of political ideology that were either devised hundreds of years ago or have antecedents in the same. We can trace most western political ideology today to the Enlightenment and, long before it, to such philosophers as Aristotle and Plato. Certainly, more has been added since, but the basic methodology hasn’t changed. What we see today in political ideology is a mélange of ideas from various philosophers of the past, ideas that were born whole cloth long before the empirical ability to measure sociological factors existed. In the same way that religion fueled ignorance and superstition, political ideology relegates our understanding of how to manage society to a simplex method devoid of any true empirical foundation. Of course, we can argue semantics and call this view itself a political ideology. But there is a fundamental distinction that renders that objection suspect: this new understanding of how to manage society is a duplex method based on a direct appeal to empirical observation of the society upon which law and economics operate. In this enlightened view we attempt to inform our understanding of law and economics by means and methods consistent with the scientific method. It will not be complete, for sure, since our capacity to measure the sociological features of our universe is limited, but the change is a paradigm shift.

Small, weak steps in this direction have been evidenced by academics who attempt to understand the effects of policy on society, but the universal failing in all of them is the mere fact that political ideology as a thought paradigm exists and is pervasive (normative), in the same way that religion was before the 14th Century. Any academic who tries to empirically observe the universe in a world dominated by religious thought will encounter the same normative confirmation bias, and the results will reflect that bias. They will “see” their data through the lens of religion and will inexorably apply a religious world-view to it. That’s why it is called “normative”. Thus, what humanity needs is a Reformation of thought, a Reformation of Enlightenment that removes this confusion, so that those living today can realize, first and foremost, that political ideology isn’t needed to understand how to manage human society. As it stands today, most cannot even see that obvious fact and falsely believe that political ideology is somehow a necessary construct. But as we saw with religion, this is not true; it is believed only because the “thought system” of political ideology is normative in the present. This means that the very basis of governance, the systems of government that exist today, needs reform, and the best way to do that is to do what adherents did to reform religion: diminish the role of the priest class. In the same way, the role of the political class in our systems of government must be diminished. This can be done by increasing direct public participation in governance. While direct democracies do not work, we need not accept that reductio ad absurdum as a bar to progress. What we need is a system in which an elected political class determines public policy by consultation and participation of the public directly, in much the same way that Western Courts rely on juries and judges operating together to render verdicts. In Western Courts, judges render juries competent, but juries provide the “conscience” of the Court. Judges provide uniform (hopefully) jury instructions and other rules of operation by which the jury operates. The jury provides the conscience of peers. In a similar way, statute and public policy should be created. This is, indeed, the future of neo-liberal western democracy. I’ve spoken of this in other contexts, where I mention how technological agency is making this Reformation all the more urgent, but this article explains more of the background to this reasoning and the “why” behind what is happening.

So, what does all this have to do with the death of mainstream journalism? The death of mainstream journalism reveals that technological agency (in this case, the information age) is forcing this Reformation upon us, and we need to understand the “why” behind it so that we can make wise choices to guide it, lest the “Reformation” lead to chaos. The only way to control the Reformation being forced upon us is to increase direct public participation in governance, and we must do it as quickly as possible. Unlike in the 14th Century, time cycles today move quickly, and we don’t have two hundred years to sort this out.

Finally, I can offer my opinion on what mainstream journalism can do to save itself from irrelevance and destruction. In the past, with the normative confirmation bias of political ideology holding sway, journalists attempted to report the “who, what, when, why and where”, the 5 Ws. But because of normative confirmation bias they tended to try to “go a little deeper” and report the “true” news by identifying the 5 Ws of a given story, which I’ll call an event, in order to discover the “true” event narrative. This meant that some, if not a lot, of effort was spent trying to discount or “disprove” any competing event narrative. In other words, each story had any number of event narratives which could be applied to it, and journalists attempted to “discern” which narrative was the “true” one. But the reality is that, in stories that inherently involve subjective matter, it isn’t always possible to do this. Only by applying a confirmation bias could the “true” narrative be “found”. One exception, in a pure sense, would be matters of pure scientific fact. But rarely does a story involve only pure scientific fact. In almost all stories reported there is some element of subjectivity, either in the core of the story or in the details surrounding it, even when a core scientific fact can be established. In other words, seldom if ever do scientific facts stand alone as the only issue at hand in a story. As an example, we can take anthropogenic climate change as a case study in which a core scientific fact can presumably be established by a journalist. But the story of anthropogenic climate change involves more than just that core scientific fact. In other words, there are competing narratives of “truth”, each with its own set of the 5 Ws. In this case, the “what” is the veracity of the scientific consensus. What journalists do today is try to validate one of these narratives by applying normalized confirmation bias – usually political ideology – and thereby reject competing narratives, usually by not reporting on them at all, or only incompletely. The role of a reporter is to report on all the narratives by providing a complete accounting of the 5 Ws of each narrative, rather than trying to promote one narrative or another. There is no need to do this. There is no need for political ideology, or any other form of normative confirmation bias. The obvious objection is that this is an idealized way of understanding reporting, since we know that a “true” narrative, at least sometimes, does exist. At least a true “what” exists.

And it is this dissonance between the ideal and the practical that is the cause of the decline of mainstream journalism.

Let me explain. In the information age the consumer has the option of patronizing those outlets that comport with their own confirmation bias. Therefore, in an unrestricted society of free speech, the consumer will simply ignore any “mainstream” outlet that “chooses” an event narrative inconsistent with their own confirmation bias. Ultimately, this will result in the evaporation and disappearance of the mainstream outlet, since all surviving outlets will be outlets of personalized confirmation bias, choosing event narratives for each story according to the presumed confirmation bias of their consumers. The effect of this is ignorance and superstition, to put it bluntly. This is the result of genuine choice for the consumer. Thus, the only way for a mainstream outlet to avoid self-destruction is to take the ideal tack; that is, it must report on all conceivable event narratives for each story in complete form, providing the 5 Ws for each. By virtue of the fact that the outlet is mainstream, most consumers will stop there because, in the midst of that report, they will apply their own confirmation bias to choose their own event narrative from those provided.

The advantage is that the mainstream outlets are still seen as valuable by the consumer because, if the narrative that comports with one’s own confirmation bias is presented, the consumer does not associate the other narratives with the mainstream outlet itself. Otherwise, in the presence of choice, the consumer will simply flee and ignore the mainstream outlet (and find it untrustworthy).

In order to ensure this outcome, mainstream outlets must provide all 5 Ws of each conceivable narrative, in factual depth on each W, with opinion avoided as much as possible. Unless and until normalized confirmation bias is expunged from human society, this is the best one can do. But in terms of the bottom line of mainstream outlets and their advertising revenue, this is a total coup. Almost all consumers will consume the mainstream news at least when first learning of a story, and the majority of traffic will be satisfied with that report. But how do I know this? Empirical observation. The rapid growth of alternative news on YouTube demonstrates my point. The most popular alternative outlets are, surprise, factually shallow and mostly cheerleaders for a particular political ideology. This is because they are specialized toward a particular confirmation bias and must be shallow to garner wide consumption. The factual depth they provide is adequate for most consumers and comparable to what is offered in the mainstream outlets now (at least for the one narrative each presents). If the mainstream outlets spent more time on factual depth and broader narrative reporting, they could report in the same space they always have by removing the opinion and the rather transparent attempts to dance around inconvenient narratives. One of the key ways mainstream outlets expose this dancing is that most mainstream stories are obviously leaving out details (sometimes while piling on verbiage!), because the stories make little sense without them. Such a story has the motif of a meme or sound bite, and the consumer senses something is missing. This same thing was sensed by citizens of the Soviet Union for 70 years. And that’s another reason why people in the West today go to alternative outlets.

But won’t bias always exist? Of course it will, including many forms of bias beyond simple confirmation bias. The distinction made here lies in how I am, for the purposes of this discussion, defining normalized and episodic bias. A bias (or bigotry) is normalized when the recognized authorities of a culture promote that bias by action or inaction. A bias is episodic when an individual bears it.

The key change created by the Protestant Reformation was that the role of clergy was diminished. Over time this had the effect of reducing the normative confirmation bias that existed in western culture due to religion, ultimately rendering it episodic rather than normalized. The same thing will happen to the normative confirmation bias created by political ideology once the role of the political class is diminished. For the activist, the best strategy is to promote greater participation of the public in governance in the manner described here, rather than getting into the weeds of the rationale that accompanies it. The reason is that the very same normative confirmation bias of political ideology will work against the goal. But the notion of greater public participation in governance is an idea ripe in our time and should be nurtured.

– kk

P.S. For more info on what we mean by reducing the role of the political class and how direct democracy can be made feasible, see the intro to General Federalism.

The Ebola community clinic suggested by WHO, as posted on the Washington Post website. The U.S. should immediately begin production of self-contained, modular “kits” of this design on a war footing, and should plan on constructing thousands. Provisions for armed security by U.S. military personnel at each of these clinics should be added to the design.

The following is a thought experiment, with updates. It should be taken seriously, but not as an inevitability. Nature can upset these rates of viral diffusion in any number of ways, and there are too many variables to know if this estimate is valid. This is just my opinion, and the suggestions are offered as such.

If the number of infected persons should exceed 100,000, then it is unlikely that any international effort will have sufficient resources to make a mathematically non-negligible difference in the rate of diffusion of the virus. At that point, the only means of stopping the diffusion of the virus will be external quarantine.

Will we see that number? If we assume that the total number of infected is three times the estimated amount, as some close to the ground have stated, and the reported figure is 5,347 as of 18 September, then the likely more accurate count is 3 × 5,347 = 16,041 persons as of 18 September. Therefore, 100,000 infected persons is notionally reached at the x-th doubling:

10^5 = 16041 × 2^x

Solving for x and multiplying by the observed doubling time of three weeks, we get:

[log2(10^5) − log2(16041)] × 3 ≈ 7.92 weeks
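For readers who want to check the arithmetic, here is the same computation as a minimal Python sketch. The case counts, the tripling correction and the three-week doubling time are the assumptions stated above; note that the raw result lands a few days earlier than the rounded “about 18 November” below.

```python
from datetime import date, timedelta
from math import log2

# Assumptions from the text: 5,347 reported cases on 18 September 2014,
# tripled to correct for suspected under-reporting, with an observed
# doubling time of three weeks.
reported = 5347
current = 3 * reported            # 16,041 estimated actual infections
doubling_weeks = 3
target = 1e5                      # the 100,000-infected threshold

x = log2(target) - log2(current)  # doublings needed to reach the threshold
weeks = x * doubling_weeks
eta = date(2014, 9, 18) + timedelta(weeks=weeks)
print(f"{weeks:.2f} weeks -> {eta:%d %B %Y}")
# 7.92 weeks -> 12 November 2014
```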

Defeating this trend with 3,000 military personnel and several tons of equipment is improbable in that time. In fact, making any appreciable difference in this rate is improbable when we consider that it will take at least 30 days for the effort to fully stand up. If we cannot defeat this function in that time, then the probability of containing the virus is at its maximum when that threshold is reached; that is, in 7.92 weeks. USG should be prepared for that time, which is about 18 November.

Here’s why.

At that point, USG should (and, as pointed out, must if it is to succeed) shift to a quarantine solution. By then, the total number of infected will be about 100,000. While this may be too many to contain the virus within the population, it is small enough to contain it within the borders of the three affected countries, assuming steps are taken to do so. But simply putting troops on the border may not be enough. A no-fly zone will be required, and all arteries of passage out of these countries must be involuntarily (but temporarily) evacuated to a radius of not less than 15 miles from the point where the artery intersects the border (the actual distance would vary with the size of the passage and the topology). Refugees should be sent to Liberia, Sierra Leone or Guinea. And authorities will have to “guard” this area and allow no entry. This is the only solution in the absence of a vaccine. USG should begin now to collaborate with all neighboring countries to give U.S. military personnel access to these “choke points”. USN must enforce a no-sail zone off the coasts of Liberia and Sierra Leone by 18 November.

If these numbers obtain by 18 November, USG should initiate stop-loss across all services and respond with a strategic military effort, which may involve the deployment of many tens of thousands of troops. U.S. military forces, if their numbers exceed 3,000, should all be issued chemical, biological and radiological gear (and I do mean ALL three types). Other nations that supply military personnel should do likewise. Prioritization of vaccination, wherever available, should be ranked thus:

1. Military personnel and health professionals in the hot zone

2. Gibraltar, Sinai and Panama (my concern is that governments will continue to underestimate the scale of human migratory flow, and how it will drive successive waves of infection along migration paths that converge on these sites; treated wisely, these sites could serve as points at which to “ambush” the virus).

3. Populations at the periphery of the hot zone

4. The hot zone

5. The global population, en masse.

As the number of infected increases, the probability of containment falls. When the number of infected in the affected countries reaches 1 million, containment will likely fail in any case. 1 million infected will be reached in:

[log2(10^6) − log2(16041)] × 3 ≈ 4.5 months, or about January 15.

At that time, should that occur, USG should apply the same tactic at Gibraltar and the Sinai and the no-fly and no-sail zone should be extended to the continent of Africa. If a vaccine is available, a mass and heavy vaccination of the population in the areas of Gibraltar and Sinai should be performed in a buffer area extending about 100 miles on either side of this boundary. The vaccine should be made available to any person seeking it, regardless of origin and indeed, it should be mandatory (if a refugee arrives in a heavily vaccinated area they are less likely to attempt to continue their exodus). If a vaccine is not available in that quantity, the same area should be evacuated. In either case, the area will require U.S. troops to deny passage to all persons. They should also deny entry into the vaccinated zone. Once this is accomplished, and all available assets should be employed to do so, similar checkpoints should be established at successive national borders from the outbreak. Obviously, if sufficient quantities of a vaccine remain, they should be administered at those locations and to the population generally.

USG should begin collaboration with Spain, Morocco, Israel and Egypt now. As a precautionary measure, USG should do the same with Panama and begin planning a similar boundary there. Plans to establish a buffer in Panama from Atlantic to Pacific by evacuation should begin now. If the efforts to contain the outbreak on the continent fail, the virus will have run its course by:

[log2(10^9) − log2(16041)] × 3 ≈ 12 months, or about September 15, 2015.

Resulting in the loss of very roughly 500 million human lives.

If containment to the African continent fails, the virus will preferentially impact countries that are not well developed or which have large, poor populations. In that case, the virus will run its course in roughly:

[log2(7 × 10^9) − log2(16041)] × 3 ≈ 14 months, or about November 15, 2015.

Resulting in the loss of very roughly 2.5 billion human lives.
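Consolidating the milestones above into one place, the same doubling model can be run for all four thresholds at once. This is a sketch under the same assumptions as before; the exact dates fall a few weeks ahead of the month-level roundings used in the text.

```python
from datetime import date, timedelta
from math import log2

start, current, doubling_weeks = date(2014, 9, 18), 16041, 3

# Project each milestone under the unchecked three-week doubling model.
milestones = [("100,000 infected", 1e5),
              ("1 million infected", 1e6),
              ("continental spread (~10^9)", 1e9),
              ("global spread (~7 x 10^9)", 7e9)]
for label, target in milestones:
    weeks = (log2(target) - log2(current)) * doubling_weeks
    print(f"{label}: {weeks:.1f} weeks -> {start + timedelta(weeks=weeks):%d %b %Y}")
# 100,000 infected: 7.9 weeks -> 12 Nov 2014
# 1 million infected: 17.9 weeks -> 21 Jan 2015
# continental spread (~10^9): 47.8 weeks -> 18 Aug 2015
# global spread (~7 x 10^9): 56.2 weeks -> 16 Oct 2015
```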

If the (by then) pandemic escapes the African continent, China and especially India are at grave risk because of their large, poor populations confined to single legal jurisdictions. China will likely not allow foreign assistance on a large scale, though India might. The problem being overlooked here is that once the virus has a large pool of infected, the ability to contain it drops sharply. In reality, I expect the 2.5 billion figure to be higher. We can expect geopolitical destabilization to occur in this case, with war and conflict becoming a salient feature. The worst-case scenario, though not a likely one, is for some authorities, such as those in China, to use a “snake and nape” tactic. I would caution any such nation (think Pakistan and China) that use of nuclear weapons will only exacerbate the problem, as the radiological effect will be as bad as or worse than the virus itself. The problem here is trying to guess what a developing nation will do when desperate.

There is no way to know how this function will behave in the future and its doubling rate may change, resulting in large differences in these estimates. However, we cannot ignore the fact that the observed doubling rate is the best information we have for projecting the diffusion of the virus. Another reason to be somewhat skeptical of our starting numbers is the fact that we don’t know if the doubling time is an artifact of the reporting fidelity or a true representation of the rate of diffusion of the virus. However, it is probably imprudent to assume the former.
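To make that sensitivity concrete, here is a quick sketch varying the doubling time around the observed three weeks. The 2.5- and 3.5-week brackets are hypothetical, chosen only to show how far the 100,000-case date moves.

```python
from datetime import date, timedelta
from math import log2

start, current, target = date(2014, 9, 18), 16041, 1e5
doublings = log2(target) - log2(current)   # ~2.64 doublings to 100,000

# Shift the assumed doubling time and watch the threshold date move.
for dt_weeks in (2.5, 3.0, 3.5):
    eta = start + timedelta(weeks=doublings * dt_weeks)
    print(f"doubling every {dt_weeks} weeks -> {eta:%d %b %Y}")
# 2.5 -> 03 Nov 2014, 3.0 -> 12 Nov 2014, 3.5 -> 21 Nov 2014
```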

USG and NIH should place GSK on a war footing now, by means of eminent domain or national security authority if necessary. This should be extended to any other private competency as well, if identified. The likelihood of this virus taking hold in any industrialized, wealthy nation is very low, but these events could have cataclysmic economic consequences for the entire world nonetheless.

This entire analysis assumes that no mutation rendering transmission airborne occurs. All suggestions provided are made on the premise of minimizing loss of human life.

– kk

Hi all,

I’ve added more detail to the framework, attempting to outline it, with the provisions necessary for implementation, in a slightly more tangible form. The ideas derive from what is known as “general federalism”, but those tenets are limited as much as possible at this high level.

Perhaps the most important first step in a top-down re-evaluation of global governance is to identify the sine qua non of the global or regional environment necessary for a successful and durable global rule of law to exist. Most of that will hinge on normative beliefs, customs and practices within a given society. This is almost, though not quite, the same as stating that a cultural environment conducive to global rule of law must exist first. And, as it stands, I will argue, this is in fact the key impediment to effective multilateralism. Humanity must grow up, change, and dispense with beliefs and behaviors that, while they may have antecedents in our biological and social past, can no longer claim scientific support for their continued usefulness.

One of humanity’s greatest foibles is our tendency to inject emotion into intellectual inquiry, and the tendency this has to marginalize and exclude reason. Many today blame this on religion or some other bogeyman. Certainly, religion provides a feeding ground for uncontrolled emotion. But the truth is that a more fundamental and universal cause presents itself: misplaced emotion. All of the points outlined below deal directly with this issue and provide a way for humanity to address serious, global issues rationally. What follows is an executive summary of what this author has been working on for several years now; a full treatment and justification can be found in later works to be shared.

The most fundamental changes needed can be summarized below:

Matter and Energy; an evolutionary step in our understanding of economic theory such that we delineate the most fundamental factors affecting economies. The foundation of an economy lies in how we manage matter and energy. Economic “theory” merely rides on top of this fundamental, limiting fact. For any beings of technological agency, consumption of matter and energy is likely the gravest long-term threat to survival. Yes, this is a universal claim. Today we call this sustainability, but sustainability at so fundamental a level finds a nexus in economic prosperity as well. They are the same thing; most people just don’t realize it yet. The prevailing myth of our time has us believe it is a one-way street: prosperity depends on sustainability. The truth is that each depends on the other. So, wherever there is technological agency, consumption of matter and energy will increase with time. Therefore, long-term planning should focus on increasing access to matter and energy. Currently, this access is sharply limited because we do not have the means to create an actuarially sound space transportation infrastructure. This author’s primary area of interest and effort lies in work that will grossly antiquate all existing space flight technology and make this an economic reality. We will see more about this in the next 2 to 4 years as this author matures his current work, now being done outside of the public radar. It will be known later in the form of a non-profit named the Organization for Space Enterprise. The reason why space is the focus is lengthy, but our current approach of trying to hold fast to an existing form of matter (such as petroleum) or to transition to a new form of matter (the periodic elements used in solar panels, for example) is not scalable. It will ultimately destroy humanity (by a gradual starvation of matter and energy), and the only long-term solution is to source matter and energy in quantities vastly larger than what is available on Earth alone. Because of the time frames involved, this effort must begin now. This will require nimble, systemic change in the underpinnings of the free market. A clever solution is an optimization that “does no damage” to the existing system but affords more directed use of matter and energy, and this author has a proposal. Whatever this author does, USG would be well advised to invest heavily in the development of the means and methods (not all of which involve new technologies) required to render space flight economically viable and actuarially sound.

  1. Systemic change, at the level of fundamental law, must be constructed to provide both representation and participation in decisions regarding how matter and energy, at its initial source, will be tasked within a free market.
  2. This change cannot undermine the principles of free market economics because it must “do no harm” to systems of demonstrated past performance. Therefore, the scope of this input should be limited to the incentives the public en masse is willing to provide to the private sector to encourage the survey, extraction and refinement of matter and energy on Earth and elsewhere. And such incentive should be constrained by fundamental law only to matter and energy at its source (survey, extraction and refinement; SER) with any additional powers explicitly denied. This I’ve denominated the “Public Trust” which establishes all matter and energy as public property legally owned by an irrevocable trust. This element is advised but not essential. The key concern is that no government entity should be legally entitled to ownership of matter and energy used by the private sector. The public owns it collectively by legal Trust, but the private sector is legally entitled to use it. Ownership does not transfer from private to public for existing matter and energy, but new finds are absorbed into public ownership with legal protections for private entities that seek to utilize and market it.
  3. Considerations of sustainability in this scheme should be addressed separately in Statute by direct representation and participation. The fundamental factors of merit should be codified as a balance of immediate prosperity and long-term impact (on nature and its impact on future accessibility to matter and energy).
  4. The Courts of a general federation should operate such that a party’s inference in a Court of the Federation shall not augment unless the evidence submitted in its support bears substantial probative force, by the manner of procedures consistent with the scientific method.

Social Justice; the evolutionary step in our normative understanding of social justice. We need to transform the public understanding of social justice to inhere the norm that social justice should be blind to personality and demographic, focusing instead on behaviors of merit and those that lack merit. The old saying that violence begets violence likewise extends to the notion that emotional extremism begets emotional extremism. Almost all notions of social justice today rely on emotional domination of the issues and feed off ancient and barbaric fears that do nothing but generate a vicious cycle of repeated but varying “causes” through history. The result is that throughout history we see a pattern of social justice never materializing generally throughout society, with one cause giving rise to another in a multi-regional and multi-temporal cycle that has been going on for at least 1,000 years. This is difficult to see in our immediate present because these patterns take considerable time to cycle and may occur in disparate geographies. At the base of this cycle we see the exclusion of reason from discourse on account of the emotion so naturally abundant in matters of social justice. While emotion has a legitimate place and time, if humanity is to prosper we must learn how to separate emotion from issues requiring reason to solve. Due to vested interests in the current norm of emotionally driven understandings of social justice, this is a grave threat to the future of humanity. This will require nimble, systemic change advanced mostly through cultural efforts.

  1. It should be established as a matter of fundamental law that any and all sumptuary law that cannot sustain scientific scrutiny shall not be law or equity within the jurisdiction of the Federation.
  2. It should be established that any Statute or equity in the Federation which can reasonably be expected to influence a matter of social justice, however broad, shall be applied to all human behavior uniformly and predictably, to all persons, without regard to personality or demographic, or else it shall not stand as law or equity in the Federation. This provision would extend to enforcement as well. Ironically, this issue is solved by simply restating a key premise of rule of law itself: uniformity and predictability.

The Political Class and public distrust: Lack of participation, and therefore of some semblance of control, whether a good thing or not, evokes fear. Fear undermines trust. The solution is a reduction of the scope and scale of the political class such that representation and participation of the public are dramatically enhanced. Direct democracies simply do not work; therefore, a novel understanding of how to merge a political class with a more direct form of participation is urgently needed. This author has a proposal. The future of neo-liberal idealism is the evolution beyond representation alone and into the area of direct participation. A clever means of rendering that participation competent via a political class is key to this solution, involving an Assembly (analogous to a jury) and a Federation Civil Corps of citizens. As organic power decentralizes via technological agency, the duty to participate will quickly transform from nuisance to demand. The key is not to view this as an elimination of the political class, but as a “force multiplier” of the same, permitting the political class to take on a more focused role centering on providing the competence to govern. Additional mechanisms within the participatory role of the public are needed to dilute incompetence and enhance representation. This will require nimble, systemic change.

  1. The analogy given here to western law and courts is somewhat sloppy. In the case of an Assembly, their role is the consideration of statute, not equity. Equity should belong solely to the courts of the Federation.
  2. Competence is provided by a panel of elected officials (a political class) analogous to a panel of judges with the privilege of writing statute, making first motion to vote and other provisions too lengthy to get into here.
  3. Statute is “algorithm friendly”, allowing votes of very large numbers of persons randomly called to duty by a double-blind process to occur in seconds (a toy sketch of such a call-up follows this list).
  4. Negotiation, resolution and consultation for making statute is performed by a Federation Civil Corps, consisting of lawyers, economists and other experts. It shall be a strictly merit-based system. Their duty is to inform and educate the Assembly and provide communication and consultation capacity between the elected officials and the Assembly.
  5. Assemblies are called every 12 months, consisting of a unique selection of Citizens of the Federation at large. It could be either voluntary or a lawful duty (I suggest that it be a lawful duty).
  6. Numbers of Assembly members are sufficient to allow representation of one-tenth of all Citizens of the Federation once every 100 years.
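The text does not specify how the double-blind call-up in item 3 would work; the following is purely an illustrative sketch of one possibility, in which the selecting authority sees citizens only as salted digests and the draw can be audited after the fact by disclosing the salt. All names and parameters here are hypothetical.

```python
import hashlib
import secrets

def call_assembly(citizen_ids, quota, salt=None):
    """Select `quota` Assembly members, double-blind style: the selector
    sees only opaque digests, and the draw is reproducible for audit
    once the secret salt is disclosed after selection."""
    salt = salt if salt is not None else secrets.token_hex(16)
    digests = {hashlib.sha256((salt + cid).encode()).hexdigest(): cid
               for cid in citizen_ids}
    # With a secret salt, ordering by digest is effectively a uniform
    # random ordering of the roll.
    chosen = sorted(digests)[:quota]
    return [digests[d] for d in chosen], salt

# Example: call 1,000 members from a million-name roll in a few seconds.
roll = [f"citizen-{i:07d}" for i in range(1_000_000)]
members, audit_salt = call_assembly(roll, quota=1000)
print(len(members), "members selected")
```

A real system would, of course, need eligibility checks, exemptions and anti-coercion safeguards far beyond this toy.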

Organic power structure: Organic power structures in any society of technological agency will tend to decentralize over time, and organic power in the future will more likely exist as a force of change coming from the populace en masse. The very meaning of “organic power structure” is shifting beneath our feet, and victory will go to those who see it. It is important to warn future generations that this is a consequence of technological change itself and not an objective or goal. We must prepare for this, and it is a key reason for the need to re-frame our normative understanding of social justice (and it must be done for all matters of social justice in order to ensure that a durable norm is the product). Class differences cannot be resolved if justice for one means injustice for another, regardless of the relative differences. This author has a solution that will ensure justice for all, which includes a mechanism that does not rely on schemes of income redistribution or the denial of social reward already accrued through lawful and legitimate means. This transition will occur over many generations no matter what is done, but this author’s solution provides a structured way to do it without injustice and general social collapse, and under a durable framework of governance. The key finding here is that organic power is evolving into something never seen before: organic power has throughout all of human history derived from relatively large differences in wealth, but is now, for the first time, evidencing a pattern of change toward a balance between power derived from wealth and power derived from technological agency. To remain durable, a responsible government must take these forces of influence into account. This will require nimble, systemic change:

  1. This historical evolution is accommodated in full by the process outlined regarding participatory governance.
  2. It should be a matter of substantive fundamental law that no person may be dispossessed of property without due process of law; this fundamental law should be treated as beyond question by any court of the Federation unless eminent domain for the greater good is well established and fair-market-value compensation is afforded.
  3. It should be a matter of substantive fundamental law that the right to private property shall not be infringed.
  4. It should be a matter of substantive fundamental law that the right to seek redress for violations of substantive fundamental law shall not be infringed; however, lobbying of the Federation by any entity for any other reason shall be unlawful. This is a key provision of durability and an accounting for a new kind of organic power and should not be overlooked.

Implementation: A General Federation must be extremely flexible over time, such that it can begin as a non-profit and then be promoted by accession of a nation-state. Over time it must include other nation-states, at a pace limited to the inclusion of states only where the norms cited herein are sufficiently strong to support it. An alliance of states that do not possess these norms will not be durable or effective; such alliances are the primary reason why multilateralism has failed. Currently, the only candidates that exist are the United States, Israel, Germany and the UK, and those states will require much preparatory work in nurturing a healthy set of norms as listed here before that can happen. Currently, the United States is number one on the runway, despite its relatively poor advancement in matters of social justice. Additional mechanisms have been devised to allow scaled accession of developing nations. But it should not be forgotten that while normative practices are necessary, codification in explicit rule of law must come alongside them. Schemes that deny the central necessity of codified, transparent rule of law gathered by consensus will fail. This is the second cause of the failure of multilateralism. Disaggregated states and other schemes that pretend to operate “from the back door” are not durable in time. We don’t need more politicians or technocrats; they are not a solution to the problem, and in the near future they are likely to be the problem. That is because, wherever the scope of the political class expands, the fear increases. In a future world of ever-advancing technological agency, failure to better balance competence with participation will be disastrous. The public must be enlisted to fulfill this balance and give agency a voice. To be clear, this identifies a much larger, longer-term threat that encompasses (but includes much more than) what we today call terrorism, the canonical and most extreme example of this failure mode.

  1. This can be achieved in fundamental law by the inclusion of a provision for a “National Codicil”, too lengthy to describe here.
  2. A National Codicil reduces burdens on member states to allow sunshine provisions for the ramping up of certain Federation sovereignties over a renewable period of fifty years.
  3. It should begin with the United States as its sole member, such that the normative values of the institution may be sufficiently inhered before it accedes to a cosmopolitan or pluralistic stage. It does not require any change to U.S. relations with other organizations such as the UN or NATO. That the U.S. be the first state is crucial. The U.S. could begin this process solely pro forma, with essentially no commitment to change U.S. policy at the outset, but its long-term effect on inhering these values worldwide would be enormous. It would be the first tangible example of the idealistic values of global-ready, neo-liberal western democracy and would quickly have countries begging to join. This would put USG in the driver’s seat as far as ensuring those values are present beforehand. It would also give USG a chance to introduce this to the U.S. public and give them, and supporters of the cause, time to digest it and increase support for it. And it would give USG an opportunity to experiment with and tweak the Assembly concept. The answer to global governance is simple; we just need to DO it.

Who can implement this: Such an effort can only be achieved if spearheaded by the leadership of a nation most “advanced” in these norms and whose relative economic power can sustain it. The United States still isn’t there, but it is humanity’s best hope. It’s time to get novel and advance the state of affairs in the management of human society. The clock is ticking. Listen to me now, hear me later.

  1. The system propounded is a Hamiltonian, federal system; that is, wherever statute is enacted for the one State, it is enacted for all uniformly. It is a system of subsidiarity. It is a system with a strong executive, one which regards economics as within the compass of the social contract. It consists of four distinct branches: legal, economic, executive and judicial. It is contrived to balance the powers of those branches, and to balance the interests of the greater good and the individual. It is a system whereby equity is applied solely to inform the rule of law by the color of the instance, not to violate it. The executive and judicial powers are filled by a political class. The legal and economic powers are filled by a political class and their respective Assemblies. Supreme Justices are appointed by the political class for life.

The future will involve many quick pivots as asynchronous events seem to drive us in all directions at once. Multilateralism demands the kind of decisive action only a durable force can provide. A strong federal executive and its lot, constrained by the idealistic, normative values that tame it, is where it’s at. This has been evidenced most recently in the crises with ISIL and Ebola. One week it was ISIL, the next week it was Ebola. No one invented that. It’s our future; get used to it.

A final, related note is the question of where Russia, the PRC, the DPRK et al. are when Ebola threatens millions of human lives. Yes, they are offering assistance, but no one has acted as assertively as the United States. This is a time-tested pattern. From Fort Sumter to Leyte Gulf, from Ia Drang to Somalia, America has repeatedly shed blood for a greater good. Now, 3,000 servicemembers are about to risk their lives once again, and thousands of tons of hardware move into harm’s way. It tells us that idealistic normative values coupled with clever fundamental law are the forces of idealism and assertiveness humanity needs. The lack of response elsewhere is not because of a lack of ability. Russia has a fine navy. The PRC has a massive army. Criticize the United States if you wish (and there is a time and place for that), but here it is a cheap shot that merely jeopardizes humanity’s future. It’s time to get real.

– kk

Hi all,

Perhaps the most important first step in a top-down re-evaluation of global governance should begin by identifying the sine qua non of the global or regional environment necessary for a successful and durable global rule of law to exist. Most of that will hinge on normative beliefs, customs and practices within a given society. This is almost, but not quite, akin to stating that a cultural environment conducive to global rule of law must exist first. And, as it stands, I will argue, this is in fact the key impediment to effective multilateralism. Humanity must grow up, change, and dispense with beliefs and behaviors that, while they may have an antecedent in our biological and social past, can no longer enjoy scientific support for their continued usefulness.

One of humanity’s greatest foibles is our tendency to inject emotion into intellectual inquiry, and the tendency this has to marginalize and exclude reason. Many today blame this on religion or some other bogeyman. Certainly, religion provides a feeding ground for uncontrolled emotion. But the truth is that a more fundamental and universal cause presents itself: misplaced emotion. All of the points outlined below deal directly with this issue and provide a way for humanity to address serious, global issues rationally. They represent an executive summary of what this author has been working on for several years now; a full treatment and justification can be found in later works to be shared.

The most fundamental changes needed can be summarized below:

  1. Matter and Energy: an evolutionary step in our understanding of economic theory, such that we delineate the most fundamental factors affecting economies. The most fundamental foundation of an economy lies in how we manage matter and energy. Economic “theory” merely rides on top of this fundamental, limiting fact. For any beings of technological agency, consumption of matter and energy is likely the gravest long-term threat to survival. Yes, this is a universal claim. Today we call this sustainability, but sustainability at so fundamental a level finds a nexus in economic prosperity as well. They are the same thing; most people just don’t realize it yet. The prevailing myth in our time has us believe that it is a one-way street: prosperity depends on sustainability. The truth is that each depends on the other. So, wherever there is technological agency, consumption of matter and energy will increase with time. Therefore, long-term planning should focus on increasing access to matter and energy. Currently, this access is sharply limited because we do not have the means to create an actuarially sound space transportation infrastructure. This author’s primary area of interest and effort lies in work that will grossly antiquate all existing space flight technology and make this an economic reality. We will see more about this in the next 2 to 4 years as this author matures his current work, now being done outside of the public radar. It will be known later in the form of a non-profit named the Organization for Space Enterprise. The reason why space is the focus is lengthy, but our current approach of trying to hold fast to an existing form of matter (such as petroleum) or to transition to a new form of matter (periodic elements used in solar panels, for example) is not scalable. It will ultimately destroy humanity (by a gradual starvation of matter and energy), and the only long-term solution is to source matter and energy in quantities vastly larger than what is available on Earth alone. Because of the time frames involved, this effort must begin now. This will require nimble, systemic change in the underpinnings of the free market. A clever solution is an optimization that “does no damage” to the existing system but affords more directed use of matter and energy, and this author has a proposal. Whatever this author does, USG would be well-advised to invest heavily in the development of the means and methods (not all of which involve new technologies) required to render space flight economically viable and actuarially sound.
  2. Social Justice: the evolutionary step in our normative understanding of social justice. We need to transform the public understanding of social justice so as to inhere the norm that social justice should be blind to personality and demographics, focusing instead on behaviors of merit and behaviors that lack merit. The old saying that violence begets violence extends likewise to the notion that emotional extremism begets emotional extremism. Almost all notions of social justice today rely on emotional domination of the issues and feed off of ancient and barbaric fears that do nothing but generate a vicious cycle of repeated but varying “causes” through history. The result is that throughout history we see a pattern of social justice never materializing generally throughout society, with one cause giving rise to another in a multi-regional and multi-temporal cycle that has been going on for at least 1,000 years. This is difficult to see in our immediate present because these patterns take considerable time to cycle and may occur in disparate geographies. At the base of this cycle we see the exclusion of reason in discourse, on account of the emotion so naturally abundant in matters of social justice. While emotion has a legitimate place and time, if humanity is to prosper, we must learn how to separate emotion from issues requiring reason to solve. Due to vested interests in the current norm of emotionally-driven understandings of social justice, this is a grave threat to the future of humanity. This will require nimble, systemic change advanced mostly through cultural efforts.

Listen to me now, hear me later.



– kk


Big Sis modeling in … an oilfield. Our second tour of Bakken with our dad.

I’ve written on the petroleum issue before, and the more I learn, the worse it seems to look. Yes, there is good news about unconventional oil, like tight oil. And there are all kinds of alternative energies out there, too. But the choices for humanity are beginning to narrow to a point where open, frank discussion about what is going on is desperately needed.

First, I should mention something about the developments here in the States regarding new sources of oil that have everyone in the industry so excited. These sources are unconventional oil, of which there are two types. First, there is what is called shale oil, AKA “tight oil”. This is really just conventional petroleum that happens to be found inside shale rock. The only reason it hasn’t already been exploited is that the drilling techniques needed to get to it are a bit more complicated than a simple, vertical bore shaft drilled in one spot, where the petroleum is “loose” enough to flow into the pipe on its own. With tight oil, not only do you have to drill horizontally to exploit the relatively horizontal orientation of the shale rock, you have to “encourage” the oil to move into the bore pipe because it is tightly bound to the shale. This is done by a process that has come to be known as “fracking”, in which explosive charges are placed in the bore pipe to perforate the casing and water is pumped in to crack the shale, creating small paths through which oil can flow into the bore pipe. Second, we have what is known as oil shale. Read that carefully: I just swapped the order of the words to get a new beast. And that’s why people get confused over these two types of resource; their names are about as similar as one can get. But oil shale is a totally different thing. Here, the oil is not simply conventional oil tightly trapped in a rock. Here, the “oil” has not been fully developed by nature and has not reached the final stage of its conversion into petroleum. It is in an intermediate stage between fossil and true oil. This fluid is called “kerogen”. The problem with kerogen is that completing its transition to petroleum, which you must do to make it a viable fuel source, requires considerable heating. If we just look at the energy equation, it means that we are putting more energy into the production of oil from oil shale than we get out (with current techniques). And what many economists and optimists don’t seem to realize is that this is a physics problem, not just an economic one. In other words, there is no technology or economic model that will change this. Kerogen, in the form we know it, will never be economically viable in and of itself.

So-called “Peak Oil” tells us that, because petroleum is a finite resource, it must exhaust at some point in the future. Like so many academic statements, this is incontrovertible; but, as is often the case, the practical reality does not so easily admit of a simple application of a general principle to a specific problem. Such is the case with Peak Oil. People promoting this theory are effectively overgeneralizing to a specific set of circumstances and reaching erroneous conclusions. I’m going to try to sort out this mess here and explain what Peak Oil means for humanity in realistic, probable terms. First, I noted that the energy one puts into extracting petroleum must obviously not equal or exceed the energy one can extract from the recovered petroleum itself. Otherwise, there isn’t much point in extracting it. But from this point forward, the popular discussion of Peak Oil diverges into Wonderland. The crux of the problem, from what I can see, is that those who understand the geology and science of petroleum don’t understand economics, and those who understand economics don’t understand the fundamentals of science. Add to that the inherently opaque nature of the petroleum industry and its methods, and it is no wonder that there is immense confusion over this topic. Okay, so why is “Peak Oil” an “overgeneralization” of the human energy consumption problem? First, we need to point out that the idea that something finite will likely follow a bell curve of extraction in situ, in which the rate of recovery rises and then falls, is an incredibly general proposition. And it’s that phrase, rate of recovery, that we need to understand better.

All finite things will tend to exhibit bell-curve, or normalized, behavior; that is, one’s extraction of them in situ (limiting the generality to resource exploitation for this discussion) will likely get faster in the beginning, then slow down as the resource depletes. But global Peak Oil is just one application of this broad generalization. Notice that an oil well, if all else remains the same, will also tend to extract petroleum at normalized rates, increasing sharply in the beginning and tapering as its reach into a reservoir diminishes. This has nothing to do with global peak oil. Likewise, a reservoir will, all else being equal, tend to follow a normalized pattern of extraction rates. This also has nothing to do with global peak oil. And please notice the qualifier “all else being equal”. Let me explain. The rate at which an oil well can extract oil from a reservoir, assuming the supply from the reservoir remains essentially constant (it’s really big), depends on numerous factors. The depth, diameter and bore length of the bore hole all affect that value. The fatter the pipe, the faster you can get petroleum out. Depth can affect pressure, which will affect how fast you can pump it out. Indeed, even your pumping equipment can affect those rates. But things like the permeability of the rock also matter. I should point out that oil doesn’t usually sit in the ground in pools. Rather, it is “locked up” in the pores of rocks. Different rocks allow it to escape at different rates. Shale, for example, doesn’t give it up easily. So that, too, affects the rate of recovery. The reach an oil well has into a reservoir is thus a time-dependent function that is highly localized and dependent on all the factors mentioned. It may therefore be possible to drill another well nearby, but, importantly, no less than some minimum distance away, to increase the flow rate. That minimum viable distance is determined by those same factors. Finally, for any given well, as the pressure begins to drop due to the peaking of that single well (not necessarily the entire reservoir), one can increase the internal pressure, forcing petroleum out faster, by boosting it with water. If that isn’t enough, you can inject gases under pressure to increase the flow rate.

In other words, the rate at which a single well delivers petroleum product is highly dependent on capital investment in the well. And producers have to consider how much they want to invest based on market conditions and the overall performance of their recovery operations. Thus, the so-called “bell curve” becomes a joke. One can artificially shape this curve however one wants, depending on all the factors mentioned, because, at the oil-well level, supply is halted only as a time-dependent function of the presence of oil locally around the well bore. What this means is that you can drain the region around the bore hole, but over a very long time the rest of the reservoir will push oil back into that region and refill it. So that, too, can be seen as a production rate variable. The reader should be able to see clearly now that the “peaking” of an oil rig is totally dependent on numerous variables, only one of which is the presence or availability of oil locally around the bore hole. Thus, simply yanking production rate figures for a well and suggesting that it or its reservoir has hit a fundamental peaking of capacity based on those numbers is absurd. You cannot know that unless you have access to all the data and variables I’ve mentioned, and only then can you analyze the well and understand whether an observed peaking is due to some natural, finite barrier or rather to the particulars of the well design and operation.
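A toy model makes the point. Below is a minimal sketch, with every parameter invented for illustration, contrasting a naive bell-curve production profile for a single well with the same well after a mid-life pressure boost (water or gas injection):

```python
import math

def base_rate(t, peak_t=10.0, width=5.0, max_rate=1000.0):
    """Naive bell-curve production profile for one well (barrels/day)."""
    return max_rate * math.exp(-((t - peak_t) ** 2) / (2 * width ** 2))

def boosted_rate(t, boost_t=12.0, boost=1.8):
    """Same well, but pressure support begins at year boost_t and
    multiplies output; the boost factor is purely hypothetical."""
    r = base_rate(t)
    return r * boost if t >= boost_t else r

for year in range(0, 25, 4):
    print(year, round(base_rate(year)), round(boosted_rate(year)))
```

Two wells with identical geology can thus show very different “peaks”, which is why production figures alone cannot distinguish a natural barrier from an investment decision.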

We can extend this discussion in scale and apply similar logic to the reservoir itself. We cannot know if a reservoir is reaching a true, finite and natural peak unless we know about each of those wells and, importantly, what percentage of the acreage from which a well is viable is actually covered by a well. So, in the same way, one cannot pluck data from a reservoir and conclude anything from that.

At the global level the same limitation applies. We need to know the true facts about each reservoir in order to reach any conclusions about:

  1. Actual, existing production capacity globally
  2. Total, defined reserves remaining

But can’t we see that if global well spudding is increasing and peak production in various countries has occurred, a global peak must be occurring in the near term? Yes … unless we consider the powerful impact economics has on all this. The United States reached a peak around 1970 and its domestic production declined thereafter (until recently, as shale oil has pushed production up considerably). But what we don’t know is why. Was it because the actual recoverable oil had diminished to something below one-half its original amount? Or was it because the investments necessary to continue producing the fields in the States were considered economically unsound given the global prices for petroleum at the time? Did petroleum companies simply forego water and gas pressurization, increased drilling into existing reservoirs, and the like because it was cheaper to buy overseas? Did environmental regulation drive this? There is reason to believe that other factors were in fact at play, because domestic production in the United States has risen again even if we control for shale oil production. And much of that is occurring from existing fields. But there’s more. Various agencies tasked with estimating reserves continually come up with reserve figures much, much higher than peak oil advocates claim. USGS and IEA, while they don’t agree on all the numbers, clearly state that conventional oil reserves in the United States are over 20 billion barrels. Where did that come from? It comes from the same fields that have always been producing petroleum in the United States. But for whatever economic reason, the additional investments in those wells simply have not been made. That is changing now. If the United States were to continue consuming at its present rate, and if that 20 billion barrels were the only source of oil for consumption in the States, it would last about 3 years. But since Canada supplies about ¼ of U.S. consumption and shale oil is providing an ever-increasing portion (quickly approaching ¼), that number is likely closer to 10 years.

Numbers for shale oil are about 20 years; that is, if all oil were drawn from those fields, it would last about 20 years. Combined with the remaining conventional oil, that is 30 years, at least (and assuming Canada disappeared), of petroleum supply. But Canada’s reserves are yet larger, and its consumption is an order of magnitude lower than that of the United States (as is its population). Thus, realistically, the U.S./Canada partnership, which is unlikely to be broken, will easily put the U.S. supply beyond 50 years. And that assumes the Middle East and everything else just vanishes. If we plug that back in, it’s even longer. Let’s be clear: regardless of what’s going on around the globe, the U.S. and Canada are not going to trade their own oil away if it means their own consumption must drop. Nor would any other nation. Shale oil production in the United States is climbing meteorically, to about 4 million barrels a day in 2013. This is unheard of: less than 5 years ago it was virtually zero.
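As a sanity check, here is the back-of-the-envelope arithmetic behind those figures, with a round consumption number (about 19 million barrels per day for the U.S.) supplied here purely for illustration:

```python
US_CONSUMPTION_BPD = 19e6          # barrels/day, approximate
ANNUAL = US_CONSUMPTION_BPD * 365  # ~6.9 billion barrels/year

conventional = 20e9  # the USGS/IEA-scale conventional reserve cited above
print(conventional / ANNUAL)  # ~2.9 -> "about 3 years" as the sole source

# If Canadian imports (~1/4) and tight oil (approaching ~1/4) each carry a
# quarter of consumption, the draw on the conventional base roughly halves:
print(conventional / (ANNUAL * 0.5))  # ~5.8 years, approaching the ~10 above

tight_oil_years = 20  # quoted above for the shale (tight) oil fields alone
print(tight_oil_years + conventional / ANNUAL)  # ~23 -> "30 years, at least"
```

The exact outputs depend on the consumption figure assumed; the point is only that the quoted lifetimes are the right order of magnitude.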

The more challenging oil shale, that is, kerogen-bearing rock, is a U.S. reserve so large it is hard to calculate or predict where it might end. Needless to say, we have about 50 years to develop it and get it online. It seems unlikely that this goal will go unmet, but I’ll discuss its challenges more later.

Okay, so is the problem solved? Can we all go home now? Not hardly. The same nuances mentioned earlier that better inform our discussion of peak oil also inform our understanding of the current petroleum situation, including the shale oil and oil shale options. Thus far, we’ve spoken only of production rates of petroleum. But here is the real, fundamental problem with petroleum: when it was first discovered and used on a wide commercial basis, beginning about 1905, it was so easy to obtain that in terms of energy it cost us only about 1 barrel of crude, in power generation, to draw and collect 100 barrels of crude for sale in the marketplace. Some speak of this relation as the Energy Returned On Energy Invested, or EROEI, ratio. I alluded to it above. It begins by noticing that if a fuel source is to be viable, then we cannot expend more energy to get it than the energy it provides to us. In the case where those energies are equal, EROEI = 1. In the event that we consume more energy to get petroleum than the recovered petroleum provides, EROEI < 1. This is unsustainable too. Therefore, for petroleum, or any fuel, to be viable it must have an EROEI > 1. Having cleared that up: some confusion over how physics and economics overlap on this matter has gushed out on the internet and elsewhere like water over Niagara Falls. Why? If we recall, around 1905 the EROEI must have been 100, since for every 100 barrels of crude we could sell, we expended 1 barrel’s worth of energy to get it out of the ground. The problem is that since that time the EROEI has dropped precipitously, by about one order of magnitude. Thus, the global average EROEI is about 10 nowadays. But what this implies is what seems to be confusing people. Some think that if the EROEI gets any closer to 1 we’re doomed. Some have even said that you need an EROEI of 3 or 4 to make petroleum economically viable. This is not true, and it is based on certain assumptions that need not be true either. In order to be not only economically viable but economically explosive in its market power, the EROEI simply needs to be greater than 1. That’s all. Let me explain.
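First, a quick numeric check of the definition just given. If E_out is the energy a fuel delivers and E_in the energy spent obtaining it, then EROEI = E_out / E_in, and the surplus energy left over is E_out × (1 − 1/EROEI). A two-line illustration:

```python
def net_energy(e_out, eroei):
    """Surplus energy delivered after paying the extraction-energy bill."""
    return e_out * (1 - 1 / eroei)

print(net_energy(100, 100))  # 99.0 : the situation circa 1905
print(net_energy(100, 10))   # 90.0 : roughly today's global average
print(net_energy(100, 3))    # ~66.7: first-generation kerogen retorts
print(net_energy(100, 1))    # 0.0  : the break-even floor
```

Note that the surplus falls gently from EROEI 100 down to about 3 and only collapses near 1, which is why, as argued next, an EROEI of 3 is not automatically fatal.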

There is this thing called “economies of scale”. To explain its relevance here, consider the following thought experiment. Suppose we discover a massive petroleum reserve in Colorado that contains some 2 trillion barrels of recoverable “oil”. At current U.S. consumption rates, if every drop of petroleum consumed in the U.S. were pulled from that one field, it would last about 275 years. Ah, but you say, that reserve is kerogen. Kerogen is the play I referred to above, where I pointed out that we had about 50 years to figure out a way to utilize it economically. This is because the other oil, so-called “tight oil”, or shale oil, will run out by then. But the big, big problem with kerogen is that lots of energy is needed to make petroleum out of it. Current retorts (heaters for heating kerogen) run at about 400 °C and have an EROEI of about 3 to 4. Of course, this is first-generation technology, but for the sake of discussion, let’s assume it is 3. For demonstration, we assume that the current, conventional EROEI on oil is about 10. How could kerogen possibly be cost effective? Economies of scale. Great, problem solved? Nope. Let me finish.

Let’s assume, for the sake of discussion, that we have an infrastructure that can begin producing petroleum at incredibly high rates. How is this? Kerogen is located only about 500 meters down and can be manually extracted. This means that there are no “pressure curves” or constraints on how much can be removed how fast. It’s simply a matter of having sufficient resources to do the work. But more importantly, these rates can be achieved because, as one increases the rate of recovery, you are not fighting against a finite maximum lode (effectively), and the economies of scale work because it is one field, not several fields geographically separated over great distances. Thus, as petroleum flows out at rates far exceeding what was possible before, the price of that petroleum drops. And it keeps dropping as the market is flooded with petroleum. Imagine that before this operation commences oil costs 1 dollar a barrel (to make the math simpler) and the EROEI is 10. Let us say I have 100 dollars to spend on energy. So, I purchase 100 barrels’ worth, but it took 10 barrels’ worth of energy to get that oil out of the ground, so my net return is 90 barrels of crude. Now, suppose that after operations commence the flood of supply halves the price, so 100 dollars buys 200 barrels of crude. At an EROEI of 3, about 67 barrels’ worth was consumed in production, leaving a net of roughly 133 barrels. Thus, for the same economic cost I have doubled my gross energy purchase, raised my net energy by half, and done so in the same amount of time because, by economies of scale, I can obtain that petroleum twice as fast as before. (And if the market were flooded enough that 100 dollars bought 1000 barrels, the net would be about 667.) I can achieve that production rate because I do not have to worry about running out for quite a while.
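The tradeoff generalizes. A sketch, with invented prices, of the net energy a fixed budget commands as price and EROEI vary together:

```python
def net_barrels(budget_usd, price_usd_per_bbl, eroei):
    """Barrels of surplus energy a budget commands, once the energy
    spent on extraction is netted out."""
    gross = budget_usd / price_usd_per_bbl
    return gross * (1 - 1 / eroei)

print(net_barrels(100, 1.00, 10))  # 90.0 : conventional oil at $1/bbl
print(net_barrels(100, 0.50, 3))   # ~133 : kerogen at scale, half price
print(net_barrels(100, 0.10, 3))   # ~667 : kerogen if prices fell tenfold
```

So a lower EROEI can be more than offset by a scale-driven price drop; what matters economically is net energy per dollar, not EROEI alone.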

So, as with peak oil, simply blurting out EROEI doesn’t explain everything. You have to take all variables into account. Okay, will we finally get to the bad news? Yes, we are now ready to see the deeper problem and the key point so many are tragically missing. I somewhat glossed over economies of scale and production rates for kerogen and assumed that we actually had the ability to ramp up to that. In other words, we have to be able to invest in that massive infrastructure in Colorado to start this voracious beast up. Do we have what we need? Well, we have the petroleum in shale oil. But is that really all that matters? Of course not. We will not be able to reach that kind of production rate in kerogen-to-petroleum conversion with surpluses of tight oil alone. And this is where it gets interesting.

Those who study economics and petroleum often point out that the strength of an economy is largely dictated by the per capita energy per unit time that a country or region achieves. Energy per unit time is power, and it is measured in watts. So, what they are saying is that the strength of an economy ultimately falls back to per capita power consumption. This is why climate change is so controversial overseas. Other countries know this, and they see attempts by western, industrialized nations to limit CO2 emissions as nothing more than curbing per capita power consumption, thus derailing economies. For the western world, the association between per capita power consumption and CO2 is not nearly as strong, so it does not affect them as badly. But for countries still burning lots of coal, and for countries without efficient cars and trucks, such cutbacks in CO2 would have drastic effects on their ability to industrialize. For our discussion, though, what matters is what per capita power consumption does not capture. To explain this, another example is in order. Consider a farmer living in the 1700s in North America. He plows fields using a mule and bottom plow. The per capita power consumption for the farmer is, say, x. Now, a farmer in North America in 2013 performs the same task using a very small, diesel-powered tractor with plows, harrows and the like. In this case, the per capita power consumption is considerably higher, and we’ll denote it y. Notice that over the years the transition from x to y is gradual, as each new technology and piece of equipment increases the power available to the operator. But why, exactly, does this seem to be correlated with overall quality of life? Why is it that better health, education and so on are so common as power consumption increases? The reason lies in the definitions of energy and power. In physics, “useful work” or “work done” refers to the effect, or result, of applying energy to a defined “environment”; when we do work on an environment, we are dumping energy into it in some controlled, intelligent manner. In the farmer example, the “environment” is the soil, the Earth itself, which we transform intelligently into something favorable to plant growth. This takes lots of energy. In fact, the mule and plow ultimately expend the same total energy as the tractor does. The difference, however, is how fast it happens. The tractor does it orders of magnitude faster. In other words, it is the power of the tractor over the mule that makes the difference. Thus, we can fundamentally improve our lives by intelligently applying power to satisfy a human need with speed, giving us the time to engage in other worthy tasks. We can use that time for leisure, education or other work-related tasks. In the end, the quality of life improves.
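To put rough numbers on the farmer example (all figures invented for illustration): if plowing a field takes a fixed amount of useful work, the available power sets only how long it takes, and that difference is the time freed for everything else:

```python
FIELD_WORK_J = 2.0e9      # useful work to plow the field (illustrative)

MULE_POWER_W = 500        # sustained output, mule and single-bottom plow
TRACTOR_POWER_W = 50_000  # small diesel tractor

def hours(power_w):
    """Time to deliver the field's worth of work at a given power."""
    return FIELD_WORK_J / power_w / 3600

print(hours(MULE_POWER_W))     # ~1100 hours: a season of labor
print(hours(TRACTOR_POWER_W))  # ~11 hours: a day or two
```

Same energy in both cases; a hundredfold increase in power yields a hundredfold reduction in time.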

Thus, per capita power consumption is key to the advancement of humanity, period. We have no time to march in protest of an unjust ruler, no time to educate our children, no time to do other useful work such as plowing our neighbor’s garden, nor anything else, if we are captive to spending most of the hours of our lives slowly expending all the energy necessary for our survival. Power is freedom.

But power provides other improvements to quality of life indirectly as well. We can afford clean, running water because we have the power to dig enough water wells; we have the power to run the factories that make the pharmaceutical medicines which alleviate suffering; we have the power to build the massive buildings called schools and universities in which our capacity to learn is enhanced; and on and on.

So, in the petroleum discussion, when we speak of “ramping up” to a new way of obtaining petroleum that requires more upfront energy than the old forms did, we are talking about a per capita power consumption problem. The shale oil can solve that for us. But this discussion is missing a key ingredient in the ramp-up. Once again, we have to be careful not to overgeneralize. Generally speaking, the power statement is correct. But in reality, we have to consider something else. And that something else is the “environment” we just discussed. We can usually just ignore it because:

The rate at which we can access the environment is assumed to be infinite, or, at minimum, proportionately greater than the rate at which we are expending energy into it.

This will not always be the case. Let me explain. In the case of the tractor, of course we have access to the ground, because that is what we’re farming, and there is no obvious constraint on how fast we can “get to it”. But what if we change the example a little? Suppose we now think of a factory that takes aluminum mined from the Earth and smelts it, producing ingots that can then be shipped to buyers who may use them to build products society needs. The mines that extract the aluminum can only do so with finite speed. And if that resource is finite, and especially if it is rare or constrained in volume, the rate at which we can recover it is indeed constrained. Now, if I am a buyer of ingots and I make car parts out of them, the rate at which I can make cars no longer depends solely on the power available on my factory assembly line. Now I have to consider how fast I can get ingots into the factory. This is a special case, and we can see that generally it is not actually an issue. But to understand that, we have to increase our altitude yet more over the forest to see the full lay of the land. Ultimately, all matter is energy. We should, in principle, be able to generate raw materials from energy alone if the technology for it existed. As a practical matter, however, we can’t do that. We depend on raw materials, aluminum being but one example, which come from the periodic table of the elements, and from the plants, animals and minerals of the Earth. The most constrained of all of them is the periodic table. As it turns out, petroleum is not our only problem, and, not surprisingly, the crisis of the elements is of a very similar nature. It isn’t really that we are “running out”; it’s that the rate at which we can access them is slowing down while consumption goes up. And that’s the problem with petroleum, too. We have plenty of reserves, but our ability to access them fast enough is what is getting scary. Unfortunately for us, raw materials, and rare-earth metals especially, are hard to find on the Earth’s surface. Almost all of the rare elements are deep within the Earth, much too far down to be accessible. Thus, our supply chain is constrained. This is why plastics have become so popular over the last three or four decades; in fact, some 90% of all manufacturing in the world now depends in some way on petroleum, ironically, because the raw materials we used to use are drying up. And the rate at which we can recycle is not nearly fast enough.
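The constraint can be stated compactly: a production line’s actual output is capped by the slower of two rates, what its power can process and what its supply chain can deliver. A sketch, with invented numbers:

```python
def output_rate(power_limited_rate, supply_rate):
    """Units/hour a factory actually achieves: the binding constraint wins."""
    return min(power_limited_rate, supply_rate)

# The line could process 120 ingots/hour, but the mine delivers only 40:
print(output_rate(120, 40))   # 40 : power capacity is stranded by supply
print(output_rate(120, 500))  # 120: with abundant supply, power binds again
```

Adding power raises only the first argument; past the crossover, growth is hostage to the material supply rate.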

So, the very same problem of production rates in petroleum exist for the elements and what we have not discussed is the world outside the United States. I have deliberately focused on the U.S. and Canada for a reason. The global situation beyond is dire. Why is this? Because, even if we solve the petroleum production rate problem in the United States, as I’ve suggested it will be,

It will be frustratingly constrained in its usefulness if dramatic improvements in the rate of production of elements of the periodic table are not found rapidly.

And that’s just the U.S. and Canada. The situation in the rest of the world is far, far worse. There is only one place where such elements can be found in such large quantities and exploited rapidly. And it is not in the ground; it is up, up in the sky. Near-Earth asteroids are the only viable, natural source that can fuel the infrastructure creation necessary to drive the kerogen production needed. Put more fundamentally: if we don’t find a solution in the staged development of shale oil, then kerogen, coupled with massive increases in natural resources that rising power consumption can take advantage of, humanity will die a slow, savage and brutal death.

What???

What we really need to express this economic situation is a new figure of merit that combines power consumption with the rate at which we can access the raw materials being manipulated by a given power source. We cannot perform a meaningful study of this issue without it. For now, I will call it Q. It is defined on a per-operator basis (analogous to per capita, but per operator of the power source, a technologically determined value): the product of a given power consumption and the mass of raw material, in kilograms per second, operated on by the power source. Q would be calculated for each element, mineral or defined material it operates on, indicated by a subscript. So, for aluminum it would be:

Q_Al

And for any particular, defined economic enterprise, I take the mean of Q over the collection of such materials and denote it:

Q_m
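As a sketch of how Q might be computed (the unit choices and the simple arithmetic mean are my own illustrative assumptions; the definition above fixes only the product of power and mass rate, per operator):

```python
def q(power_w, mass_rate_kg_per_s, operators):
    """Per-operator figure of merit: power times mass throughput."""
    return (power_w * mass_rate_kg_per_s) / operators

# Hypothetical enterprise running an aluminum line and a copper line:
q_al = q(power_w=2.0e6, mass_rate_kg_per_s=5.0, operators=20)
q_cu = q(power_w=8.0e5, mass_rate_kg_per_s=1.2, operators=8)
q_m = (q_al + q_cu) / 2  # Q_m: mean over the materials the enterprise uses
print(q_al, q_cu, q_m)
```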

Now, the Q_m for the kerogen-to-crude conversion (retorting) must be greater than some minimum value that is actuarially sound and economically viable. For a sufficient value we can expect economic prosperity, and for some lesser value we can expect a threshold of survival for humanity. That threshold is determined by the pre-existing Q_c (Q based not on an operator but on a true per capita basis) and the maximum range of variance an economy can withstand before becoming chaotic and unstable (meaning, before civilized society breaks down). So, what do we mean by death and destruction? Well, here’s the bad news.

The problem we are facing has two facets.

We are seeing a reduction in global “production” rates for both energy and matter.

As populations increase, this will get worse. Only Canada and the United States appear to be in a position to respond, with the favorable geology and sufficient capital, technology and effort to compensate for dramatic losses in conventional oil production rates. And note: if you are pumping water and gases into oil wells now to boost production, the drop-off after peak won’t be a smooth curve; it will look more like a cliff. And now we can see why the Peak Oil concerns are real, but for the wrong reasons. The problem is that, though the oil is there, it is costing more and more to get it out, and the raw materials (capital) needed to invest in ever more expensive recovery, the economies of scale, are not forthcoming. The “cliff” is economic, not physical. Thus, even in the few countries where reserves are still quite large, economies of scale do not appear to be working, precisely because of a lack of raw materials (capital) and, to some degree, energy. The divergent state of affairs between North America and everywhere else is due to several factors:

  1. Whatever the cause, conventional petroleum production rates are declining or are requiring greater and greater investment to keep up with prior production rates. This could be because of fundamental peaking or it could be because nominal investments needed to improve production rates have simply not been initiated until now.
  2. Tight oil is the only petroleum product that has been shown to be economically viable and that is not affected by the problem in 1.
  3. North America has by far the most favorable geology for shale, which is why it has been possible to start up tight oil production in Canada and the U.S.
  4. North America has the strongest economy for fueling the up-front, very high investment costs that a new infrastructure in tight oil will require.
  5. The U.S. and Canada have been studying the local shale geology for over 30 years and have developed a sufficient knowledge to utilize it, to a degree far surpassing what has been done anywhere else.
  6. North America has more advanced drilling technology for this purpose than any other locale can call upon or utilize.
  7. Despite the massive consumption in the United States, Canada and the U.S. appear to be at or near energy independence now, which means that instabilities around the globe will not likely have a negative impact on tight oil production through economic shock (at least not directly).

The biggest question for the United States is this: what are you going to do about raw materials? The good fortune found in tight oil will avail nothing if the United States doesn’t also dramatically increase the rate at which it can “produce” raw materials, particularly elements of the periodic table. The only way to do this is to create a crewed space flight infrastructure whose purpose is to collect these materials from asteroids, where they appear in amounts astronomically greater than anything found on Earth. If the United States fails to do this, it and Canada will go the way of the rest of humanity. To explain: the United States may survive the tight oil period, but the problem will present itself when the switch to kerogen is attempted, some 30 or more years from now. And it would take 30 years to develop such a space flight infrastructure. There is no room for gaps. Because of kerogen’s poor EROEI, it will absolutely depend on higher production rates of raw materials; that is, an increased flow of capital.

Of course, at some point alternative energy will have to be developed and the entire prime mover infrastructure will have to be updated. That is really the end goal. But this is no small task. It will cost trillions and take decades to convert humanity over to a fully electric infrastructure, which is one of the key requirements for a comprehensive conversion to alternative energies. And alack, we do not have the raw materials on Earth to build enough batteries for all of it. Thus, once again, the asteroids loom as our only hope. When and if we achieve an energy infrastructure that does not include fossil fuels, we will have taken a key step in our development. At that point, for the first time, humanity will be progressing using the fundamental physical principles common throughout the universe and not specific to Earth. It will be a seminal transition.

What does this mean? I had written a few paragraphs on that question but, realizing how depressing it all is, I leave it at this. USG needs to start developing this space infrastructure yesterday and they need to keep hammering away at kerogen. I hope I’m wrong about this.

– kk
