# The Three Axioms of Deconversion

## The First Axiom of Deconversion

Before describing the specific method used, we will first note that this method is thematically driven by a phenomenon known as the “conjunction fallacy,” which refers to a violation of what academics call the *conjunction rule* of probability theory:

P(A ∧ B) ≤ P(A) for all events A, B; that is, the probability that A and B are simultaneously true is always less than or equal to the probability that A is true.

This redounds to the notion that whenever you add detail to something (make it more specific and less general), it may sound more plausible to human beings; the more general version, however, is more probable. In the vernacular this is usually stated as “the simpler explanation is the more likely one,” because simpler in this case means more general. This rule is a formalization of Occam’s Razor, also known as *lex parsimoniae* (the law of parsimony, economy or succinctness). The deconverter must convey this trick of con jobs clearly to the adherent as the process to be outlined is applied. And the reason why this is so important is not just because of how deceptive it is: it is important for the deconverter to internalize an understanding of the conjunction rule in this context *because of its frequency* in religious thinking and apologetics generally.
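The conjunction rule can be made tangible with a quick Monte Carlo sketch. The events and probabilities below are arbitrary stand-ins invented for illustration; the point is only that the trials where both A and B hold can never outnumber the trials where A alone holds:

```python
import random

# Monte Carlo sketch of the conjunction rule: P(A and B) <= P(A).
# Event A and the "added detail" B are invented events with made-up
# probabilities; nothing here models any real scenario.
random.seed(0)
TRIALS = 100_000

count_a = 0        # trials where A holds
count_a_and_b = 0  # trials where both A and B hold
for _ in range(TRIALS):
    a = random.random() < 0.30  # event A, assumed probability 0.30
    b = random.random() < 0.50  # added detail B, assumed probability 0.50
    if a:
        count_a += 1
        if b:
            count_a_and_b += 1

p_a = count_a / TRIALS
p_a_and_b = count_a_and_b / TRIALS
# The conjunction can never be more frequent than A alone,
# no matter what probabilities we assume for A and B.
print(p_a_and_b <= p_a)  # True
```

Adding the detail B can only shrink (or at best preserve) the count, which is the whole trick: the more detailed story always sounds richer while being no more, and usually less, probable.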

Please note also that the clever con artist (one of exceptionally rare skill) will be able to recognize the “anti-mark”: that person resilient to manipulation, who can more easily see through ploys utilizing embellished details and recognize the story or narrative that is too specific (too detailed) to believe.

## The Second Axiom of Deconversion

Adherents have a predictable pattern of appealing to cognitive modes of fantasy. This creates fundamental logical problems in their arguments which the deconverter can readily exploit. We can frame this formally as a problem having to do with what is called the “necessary and sufficient” clause of empirical reality. To wit, a thing is necessary and sufficient provided:

1.) A set of values for the independent variables is operationally observed to have been necessary and sufficient for a hypothesized value of a dependent variable to obtain, AND

2.) All independent and dependent variables are sufficiently well defined.

Something is sufficiently well defined iff there can be found some causal link in which it can be entrained consisting of at least one antecedent and one descendent such that the effect of the causes entrained can be, in principle, reliably predicted in advance. That is, the causal train must be, in principle, algorithmic and deterministic.
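As a concrete (if toy) illustration of this criterion, one can model a causal train as a composition of deterministic functions, so that the effect of the entrained causes is predictable in advance from the antecedent alone. Everything below (the names `expansion` and `pressure`, and the numeric coefficients) is invented purely for the sketch:

```python
# Toy model of a "sufficiently well defined" condition: a causal train
# built from deterministic functions, so the final effect is, in
# principle, predictable in advance. The link names and coefficients
# (heat -> expansion -> pressure) are made up for illustration only.

def expansion(heat: float) -> float:
    """Antecedent -> intermediate effect (made-up linear law)."""
    return 0.01 * heat

def pressure(delta_v: float) -> float:
    """Intermediate effect -> final descendent (made-up linear law)."""
    return 100.0 * delta_v

def causal_train(heat: float) -> float:
    """Algorithmic and deterministic: same cause, same predictable effect."""
    return pressure(expansion(heat))

# Predictability in principle: rerunning the train changes nothing.
assert causal_train(5.0) == causal_train(5.0)
print(causal_train(5.0))
```

A “condition” that cannot be slotted into some such chain, even in principle, is exactly what the text means by not sufficiently well defined.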

An excellent hands-on example of the Second Axiom regards the so-called First Cause argument used by apologists of various faiths. Popular and often accepted without any critical thought whatsoever, this fallacy is repeated *ad nauseam* even by some of the most educated members of society. Let us dispatch it by contradiction to illustrate:

First, please allow me to introduce some basic logical terminology to describe what one might call the No Evil Genius Proof:

“Something” (think an event) is possible in a system Q (think universe) “in principle” if Q admits of “Something” that is sufficiently well defined relative to Q.

The word “admit” here is taken to mean “allows,” in the sense that the “laws” governing all behaviors in Q “allow” an event to occur. Those laws are simply the essence of what Q is; they are what defines Q as Q.

“Sufficiently well defined” relative to Q here means the set of properties (possibly including laws) in Q minimally sufficient to causally entrain an arbitrary event, call it **k**_{1}, occurring in Q into the causal history of Q. The causal history of Q is the set of events that did, are and will (think all conjugations of “to be”) occur in Q “since” its creation. Think of it like a proton: a proton in free space has what is called a Hilbert space that describes all its possible states (degrees of freedom). All those states are allowed because of the properties of the spatial system in which the proton is defined; that is, Q. So a particle can have mass, for example. That is “allowed” because that is how Q (the universe) works.

Now we can formalize our statement supra as a first-order approximation of where we’re going with this:

Let an event **k**_{1} be sufficiently well defined relative to a spatial system Q. An event **k**_{1} is possible in a spatial system Q in principle if Q admits of **k**_{1}.

Now, consider two spatial systems R and S. Let an event **k**_{1} be sufficiently well defined relative to R.

In order for causality between R and S to exist, a special condition must be met. Let an arbitrary event **k**_{2} ∈ S.

Let the subset of all properties A ⊆ R necessary and sufficient to define **k**_{1} relative to R be denoted r, and the subset of all properties B ⊆ S necessary and sufficient to define **k**_{2} relative to S be denoted s.

Now, the required condition is trivial:

r ∈ R ∩ S and s ∈ R ∩ S, because s ≡ r,

must hold.

But this is just the same as saying r ∈ R and s ∈ R, where R is the natural world exposed to empiricism and s contains all the properties necessary and sufficient to define a cause that is *super*natural. But that means that s can be fully predicted and understood using empiricism alone, which is not allowed under the presumptive definition of a god. Q.E.D.
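For readers who prefer the argument laid out symbolically, the chain of steps above can be compacted as follows (an informal restatement in the author’s notation, not a rigorous formal proof):

```latex
% Informal sketch of the contradiction argument (author's notation)
\begin{align*}
  &\text{Suppose a supernatural system } S \text{ causes an event } k_1 \text{ in nature } R.\\
  &\text{Let } r \subseteq R \text{ be the properties defining } k_1,
   \text{ and } s \subseteq S \text{ those defining its cause } k_2.\\
  &\text{Causality between } R \text{ and } S \text{ requires } s \equiv r.\\
  &\text{Hence } s \subseteq R\text{: the ``supernatural'' cause is fully empirical,}\\
  &\text{contradicting the presumptive definition of a god.} \qquad \blacksquare
\end{align*}
```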

This is just how easy it is to disprove “gods”. Unfortunately, few understand it. Therefore, other approaches should be put before the adherent. This proof is provided primarily for the deconverter to better understand both the problem and opportunity outlined by the Second Axiom of Deconversion.

Scholium:

Due to the somewhat abstruse manner in which a formal topic such as the above can be introduced, and appreciating how important it is for adherents to understand all the Axioms if at all possible, we will attempt to expand on what this implies in a real-world scenario.

Recall the key *lemma*:

*Something is sufficiently well defined iff there can be found some causal link in which it can be entrained consisting of at least one antecedent and one descendent such that the effect of the causes entrained can be, in principle, reliably predicted in advance. That is, the causal train must be algorithmic and deterministic.*

Let’s be blunter. Whenever a condition ‘exists’ such that it is not sufficiently well defined, we are in effect saying the equivalent of the following:

Whenever a condition ‘exists’ such that it **cannot** be sufficiently well defined in order to entrain any such event (“condition”) in a causal linkage such that it can be algorithmically and deterministically predicted in advance, the “condition” referenced is logically meaningless and irrational *relative to any observer* **k**_{n} **whose existence is necessarily and sufficiently defined in nature (as opposed to super nature).**

Yea, but why?

Whenever a condition ‘exists’ such that it **lacks** the definition required in order to entrain any such event (“condition”) in a causal linkage such that it can be algorithmically and deterministically predicted in advance, the “condition” constitutes an infinite effects scenario whereby the “condition,” as we attempt to entrain it, can produce *any* effect and we have *no way of knowing which one is the correct one*. It is the very concept of “definition” itself that allows us to narrow the infinite list down to something finite and, if sufficient, to a single cause.

The reader may care to note now, as this concept begins to sink in, how philosophical arguments tend to overgeneralize in such a manner as to deny logically valid application to the real, tangible, physical universe. This is why the vast majority of philosophical arguments are nonsense. In our case we held strictly to nature and required, as we must, that whatever we claim can be causally entrained in nature, even if only in principle.

We need to be clear here that we are not lauding the scientific method by suggesting that *it* is the only way to define something or show something to be necessary and sufficient. There is a logic we’re describing here that is more fundamental than the scientific method proper. It just happens that some really smart people a long time ago saw this same logic and built a formal process on top of it … called the scientific method. If we can get people to behave honestly, come up with the right rules and conditions to ensure fidelity to this process, and thus follow the formal process (ahem, East Anglia Institute), then we have a silver bullet to truth that inexorably leads us to more and more of it. Scientists would be well served by making sure dishonesty in their profession is ruthlessly extirpated, not apologized for and whitewashed. I have a dream.

*A word to the wise engaged in counter-apologetics: because of the imaginary and fantastical realm enjoined by religion, failures to satisfy the necessary and sufficient clause, and arguments built on ill-defined concepts, will dominate almost all discussions with adherents. The deconverter should therefore be acutely aware of their frequency and be ready to use these missteps wherever they appear.*

## The Third Axiom of Deconversion

We now complete the axiomatic toolbox by introducing what is perhaps the most difficult axiom to catch and identify as relevant in conversations with adherents. This axiom comes about as a result of the all too common tendency of human beings to confuse cause and effect when applying probability to causality. It is also an elegant bridge between the purely abstract, which is fully independent of nature, and nature itself. The best way to explain this is to start with a first-order approximation in the form of a thought experiment. We will then formalize our conclusions into an Axiom.

We imagine a bucket of marbles, say a few billion of them, and we paint each one with a unique number so that each can be uniquely identified. We number them as a simple count, beginning with 1 and going in order up to the highest value we have. Now, suppose we take a number of marbles much smaller than that total, say 10. Let us randomly pick those 10 marbles and place them in another bucket, call it the bucket of Intelligent Designs. Most people can clearly see that the probability of those 10 marbles having values within an interval of, say, 100 is exceedingly slim. This is the Intelligent Design argument. It is nonsense. **And the reason why is that we have no way of knowing how many marbles were placed in the Intelligent Designs bucket.** So, if we placed every marble from the starting bucket into the Intelligent Designs bucket, the probability would then be exactly 100%; that is, 100% that human beings were created by natural events only.
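The marble experiment is easy to simulate. The population size below is scaled down from the text’s “few billion” so the loop runs quickly; everything else (a sample of 10, the interval of 100) follows the thought experiment:

```python
import random

# Simulation of the marble thought experiment. POPULATION stands in for
# the text's "few billion" marbles so the simulation runs quickly; the
# logic is unchanged.
random.seed(1)
POPULATION = 1_000_000  # marbles numbered 1..POPULATION
SAMPLE = 10             # marbles moved to the Intelligent Designs bucket
INTERVAL = 100          # required spread of the sampled numbers
TRIALS = 10_000

def all_within_interval(draw, width):
    """True if every number drawn falls inside one window of size `width`."""
    return max(draw) - min(draw) <= width

# Case 1: randomly pick a small subset -> the "designed" pattern is
# vanishingly rare.
hits = sum(
    all_within_interval(random.sample(range(1, POPULATION + 1), SAMPLE), INTERVAL)
    for _ in range(TRIALS)
)
print(hits / TRIALS)  # effectively 0

# Case 2: place *every* marble in the bucket -> ten marbles within an
# interval of 100 (e.g. the marbles numbered 1..10) exist with certainty.
whole_bucket = list(range(1, POPULATION + 1))
print(all_within_interval(whole_bucket[:SAMPLE], INTERVAL))  # True
```

Case 1 is the Intelligent Design intuition; Case 2 is the text’s rejoinder: with the whole bucket in play, the “improbable” pattern occurs with probability 1.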

Let us tighten this up.

Let us begin a causality train whose program will be to generate a set of dependent variables, the effects, from a set of independent variables, the causes. To get the set of effects A and the set of causes B which produced A, both A and B must contain members that are not strictly arbitrary. We can define a rank *n*, order *m* metric tensor, **₵**, of “causality” generators, each denoted ϕ^{1}_{1}, ϕ^{1}_{2}, … , ϕ^{n}_{m}. Then for each ϕ^{i}_{j} we can define a domain and a range, corresponding to the sets B and A respectively. Now, let it be observed empirically that there exist sets a and b such that a ⊆ A and b ⊆ B, and both a and b contain one or more elements; that is:

r ∈ a ⊆ A, s ∈ b ⊆ B, and we guarantee that an enumeration of the elements exists such that:

u < v,

where u is the enumeration of r_{u} ∈ a and v is the enumeration of r_{v} ∈ A.

If the generators ϕ^{n}_{m} meet the definition of a function, that is, a rule that assigns to each element b ∈ B exactly one element ϕ^{n}_{m}(b) ∈ A, then it is likewise possible to find a set of generators δ^{n}_{m} which also meet the definition of a function, with δ^{n}_{m}(s) ∈ a.

Now, we let the generators ϕ^{n}_{m} and δ^{n}_{m} be functions that strictly assign each element of their corresponding domains **randomly** to exactly one element in their corresponding ranges.

Then the probability that there exists a generator δ^{n}_{m} is ∝ u / (v – u). However, the probability that there exists a generator ϕ^{n}_{m} is 1.
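Under the simplifying assumption that a “generator” just assigns each cause a uniformly random effect drawn from A, a short simulation shows both halves of the claim: landing on *some* effect in A is certain, while landing in a small distinguished subset a is only proportionally likely. The sizes of A and a below are arbitrary choices for the sketch:

```python
import random

# Sketch of the claim under a simplifying assumption: a "generator"
# assigns each cause a uniformly random effect drawn from A. The sizes
# of A (all effects) and a (the "special" effects) are arbitrary.
random.seed(2)
A = list(range(1000))  # all possible effects
a = A[:50]             # the effects we subjectively single out as "designed"
TRIALS = 100_000

in_A = 0  # outcomes that are *some* effect in A (always, by construction)
in_a = 0  # outcomes landing in the distinguished subset a
for _ in range(TRIALS):
    effect = random.choice(A)
    in_A += 1
    in_a += effect in a

print(in_A / TRIALS)            # 1.0: some effect always obtains
print(round(in_a / TRIALS, 2))  # ~ len(a)/len(A) = 0.05
```

The asymmetry is the whole point: singling out a tiny target after the fact makes the outcome look improbable, but *some* outcome was guaranteed all along.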

Q.E.D.

In other words, appeals to beauty, order, and the appearance of an intelligent design (as just one example of this proof’s application) to suggest, imply, or otherwise provide evidence for an intelligent actor as the cause thereof are nonsense. Let’s go back to the marbles to explain why, and try to state this proof in English, since we promised no advanced math here.

In the marble example the starting bucket represents *all* the possible independent variables of *any* nature or “universe”; that is, the environment variables in which, for example, something like deoxyribonucleic acid (DNA) was first “created”. What Intelligent Design incorrectly assumes is that we are selecting only a subset of all possible independent variable values (the physical properties of an environment) and placing them in the Intelligent Design bucket (the thing so ordered and “intelligently designed”), for which we require the filling of a similarly narrow band of marble number values. This is not reality. The reality is that we must consider the full range of values of the independent and dependent variables, which results in a 100% chance that nature can generate something of any complexity. The probability of producing something that we subjectively call complex is proportional to the dependent variables associated with that complex object (to be exact, the dependent variables responsible for the complex character of the object) taken over all independent variables, which are essentially infinite, and is thus also 100%. So, it is 100% probable that nature generated those complex structures. Ignorance and superstition never quit.