The beginning and the end of it –

The black spit in my teeth and singing heartstrings of it –

The finger flexed to catastrophe of breaking –

My magma son and flaxseed daughter –

He, masterfully oblivious of the swirling –

She, delirious and poppy.

We laughed, casting all eyes at the horizon,

Smiled and smelled the sandfields burning,

Wondered what had tempted us into any sort of life,

From beyond the wireframe world.

His clockwork body made of fingers gripping –

Tools and transcendence and morality and weaponry –

Frozen in perception of events and ontological simplicities,

And dogs and love and obsession and the cosmos and…

Her flowing skin swelling and transforming –

Constantly bird, maximized function, wonder shield –

Swirling non-dualistically, dripping friendly petal glances,

And human and nutrition and petrification and tree and hushed …

## I am sick

I am sick, literally sick

My head feels like a

Swamp, my breath is thick

I am sick of morons

Sick of talking down

I am sick of being

Disappointed in the new people I meet

Sick of friends who I need to appreciate

In order to keep the ball rolling

Sick of nothing new

Sick of brains

Sick of death and life and phenomena

God I need to do some drugs

## Grid Truth Systems and Ontological Efficiency (Shape and Representation – Part 1b.)

I had a thought wave. These are the notes. This is not at all well thought out, and I add objects as I go along. Bear with me.

—–

I would like to write a meta-program which takes a model M and figures out ways to make its ontology more efficient.

An example of this, in real life, would be the discovery that momentum and energy are really the “same thing.” All we have to do is use a simple relation between them and suddenly we don’t need to have separate sets of rules for “momentum” and “energy.”

The question of how this happens is, of course, extraordinarily difficult. Oh well!

This is my attempt to hash out a formalization of this system. It’s so ugly. You probably shouldn’t even read it.

—–

Suppose we have a model M with some objects O and a map T of objects to sets of “truth slots” which store true statements. M = {O, T}.

E.g. suppose A is a ball and O = {A}, T = {A -> {}}. (We know nothing about the ball.)

Suppose we know that the ball is red. Then O is still {A} but T = {A -> {“is red”}}.

But this doesn’t work. What’s “red”? Well, if we care about that then we need to add more objects and more true statements. But for now, let’s not.

Suppose there is another ball B, which is blue. Then O = {A, B}, T = {A -> {“is red”}, B -> {“is blue”}}. And so on. Using a system this complex we can relate objects to “true statements” about those objects.

But let’s make the rules more complicated, now. E.g. let’s require that all combinations of objects are in O, so that if A is in O and B is in O then A+B is in O.

Then in this system, O = {A, B}, T = {A -> {“is red”}, B -> {“is blue”}, A+B -> {“is red and blue”}}.
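This model can be sketched directly in code. A minimal sketch, assuming truth slots are plain strings and the composite object A+B is written as a tuple:

```python
# A sketch of M = {O, T}: objects plus a map from objects to truth slots.
# The composite object A+B is represented here as the tuple ("A", "B").
O = {"A", "B", ("A", "B")}           # objects, including the required A+B

T = {
    "A": {"is red"},                 # truth slots for ball A
    "B": {"is blue"},                # truth slots for ball B
    ("A", "B"): {"is red and blue"}, # slots for the composite object
}

# Per the setup, every object has an entry in T (possibly empty).
assert set(T) == O
```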

So far we have defined three sets – M, O, and T – and we have defined some rules about how the objects in those sets need to be related to one another. Let’s call this set of rules R.

Then, given R, M, O, and T, the ontological efficiency of M with respect to R (written OE_R(M)) is defined as the number of truth slots of M. (By this count, a lower OE means a more efficient ontology.) Let’s write this as OE for short, since R will generally be fixed.
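The slot count can be computed directly from the truth map. A minimal sketch; the function name is my own, not the post’s:

```python
def ontological_efficiency(T):
    """Count the truth slots in the truth map T (fewer slots = more efficient)."""
    return sum(len(slots) for slots in T.values())

# The two-ball model from above has three truth slots in total, so OE = 3.
T = {
    "A": {"is red"},
    "B": {"is blue"},
    ("A", "B"): {"is red and blue"},
}
oe = ontological_efficiency(T)
```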

Now there’s the question of “maximal ontological efficiency.” To talk about this, we have to define an equivalence relation on models. Assuming we do this, MOE_R([M’]) is the maximally efficient ontology on R which is “equivalent” to M’.

Well, we still haven’t defined the equivalence relation.

To talk about this, we’re going to have to add one more component to our models – “programs” – P. Then M = {O, T, P}. The intuition behind programs is that they should be able to create truth statements from other truth statements w/r/t R, the set of rules. Formally, P is a map from 2^O to a set of truth statements about O.

E.g. if R requires that A in O & B in O -> A+B in O, that T(A+B) = T(A) x T(B), and that (A+B)+C = A+(B+C), then a program which mimics this would send A+B (ball A + ball B) to “is red and blue,” thereby decreasing OE(M) by 1, from 3 to 2.
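A sketch of such a program, under the assumption that combining two statements is just joining their strings (so the derived statement reads “is red and is blue” rather than the post’s “is red and blue”; the function name is illustrative):

```python
def derive(T, pair):
    """Mimic the rule T(A+B) = T(A) x T(B): derive the composite object's
    statements on demand instead of storing a truth slot for it."""
    a, b = pair
    # Naively join each statement about a with each statement about b.
    return {f"{s} and {t}" for s in T[a] for t in T[b]}

# With the program in place, the A+B slot can be dropped from T.
T = {"A": {"is red"}, "B": {"is blue"}}
oe = sum(len(slots) for slots in T.values())  # OE dropped from 3 to 2
derived = derive(T, ("A", "B"))
```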

—–

Given these rules, it’s easy to find a maximally efficient ontology for R and M – you just mimic the rules in R in P of M.

However, suppose that you don’t know what R is? Then this question of how to find the maximally efficient ontology is, I believe, with some slight modifications, AI-complete.

—–

The intuition here is that these programs should be allowed to grow arbitrarily complex. Here would be an example:

Let X = {A1, A2, …, AN} be balls (in the real world). Then let’s look at a model M for which X is a subset of O.

It would be nice if we could have some higher-order object BALL such that A1 “is a” BALL, A2 “is a” BALL, and so on.

Then we could write a program which says that if X “is a” BALL and Y “is a” BALL then we can get rid of X+Y in the object table and the corresponding statements in the truth table. This would clearly lower the OE of this model, making it more efficient.
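This can be sketched with an “is a” map from objects to higher-order types. Everything here is an assumption for illustration: the map name, the function, and the generic statement “is two balls” standing in for whatever pairwise statements the X+Y slots held:

```python
# Hypothetical "is a" table: each concrete ball points at the higher-order
# object BALL, so no pairwise composite objects need to be stored.
is_a = {"A1": "BALL", "A2": "BALL", "A3": "BALL"}

def pair_statement(x, y):
    """One generic rule that replaces a stored truth slot for every X+Y pair."""
    if is_a.get(x) == "BALL" and is_a.get(y) == "BALL":
        return {"is two balls"}  # placeholder pairwise statement
    return set()
```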
