Bigsley the Oaf

Skeleton of Argument

Posted in Uncategorized by bigsleytheoaf on June 25, 2011

Here is a skeleton of the argument I’m trying to put forward in my previous post:

– Given a system S and a model M of that system, I can say in a systematic way whether M is “good” or “bad.”

– My system of judgement relies on an analysis of the ontology (system of types of elements) underlying M.

— One way that a model M can be better than a model M’ is if the difference between the cardinality of its ontology and the cardinality of the “true ontology” of S is smaller than the corresponding difference for M’. In other words: |C(M) – C(S)| < |C(M’) – C(S)|, where C(·) denotes the cardinality of an ontology. (A rough sketch of this criterion follows the list.)

— Physics has evolved along a trajectory wherein it has become “better” with each “revolution.” It serves as an example of a set of ontologies that are better or worse than one another. Each “revolution” in physics corresponds to an action on the ontology of the mainstream conception of physics. (A toy illustration of such an action follows the list.)

— Models can be “more macroscopic.” A model M is better than a model M’ if the tradeoffs associated with its level of macroscopicity (i.e. how coarse-grained it is) are favorable, e.g. if precision is not decreased much but computational efficiency is increased greatly. (A toy scoring sketch of this tradeoff follows the list.)

— Chemistry > Physics in some cases. Although chemistry is less precise than a faithful computation of all the physical properties of a system, such a computation is intractable, so chemistry wins sometimes.

– Therefore, we can say that neural nets are not good. First, they have too small an ontology. Second, they aren’t macroscopic enough to be computationally useful for some tasks.
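
Below is a minimal sketch of the cardinality criterion from the list above, assuming C(·) is simply the number of element types in an ontology. The function names and example ontologies are mine, invented purely for illustration.

```python
# Minimal sketch of the cardinality criterion: |C(M) - C(S)| < |C(M') - C(S)|.
# Assumes C(.) is just the number of element types in an ontology.
# All names and example ontologies here are illustrative.

def ontology_distance(model_ontology: set, true_ontology: set) -> int:
    """|C(M) - C(S)|: how far the model's type count is from the true one."""
    return abs(len(model_ontology) - len(true_ontology))

def better_model(m: set, m_prime: set, true_ontology: set) -> set:
    """Prefer the model whose ontology cardinality is closer to the truth."""
    if ontology_distance(m, true_ontology) < ontology_distance(m_prime, true_ontology):
        return m
    return m_prime

# Toy example: the "true" ontology of S has three kinds of elements.
S = {"particle", "field", "force"}
M = {"particle", "field"}   # off by one type
M_prime = {"particle"}      # off by two types
assert better_model(M, M_prime, S) == M
```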
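
Next, a toy illustration of a “revolution” as an action on an ontology, here the splitting of one type into finer-grained types. The particular types chosen are invented for illustration, not a claim about the history of physics.

```python
# Toy "action on an ontology": replace one element type with finer-grained ones.
# The specific types below are illustrative.

def split_type(ontology: set, old_type: str, new_types: set) -> set:
    """Return a new ontology with old_type replaced by new_types."""
    return (ontology - {old_type}) | new_types

classical = {"atom", "electromagnetic field"}
post_revolution = split_type(classical, "atom", {"proton", "neutron", "electron"})
print(sorted(post_revolution))
# ['electromagnetic field', 'electron', 'neutron', 'proton']
```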
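
Finally, a toy scoring sketch of the macroscopicity tradeoff (the chemistry-vs-physics point). The precision and cost numbers, and the scoring rule itself, are made up purely to show the shape of the argument.

```python
# Toy illustration of the macroscopicity tradeoff: a coarser model gives up a
# little precision but gains a lot of tractability. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    precision: float  # fraction of the "true" behavior captured (0..1)
    cost: float       # relative computational cost of using the model

def score(m: Model, cost_weight: float = 0.001) -> float:
    """Higher is better: reward precision, penalize computational cost."""
    return m.precision - cost_weight * m.cost

full_physics = Model("faithful physical computation", precision=0.999, cost=10_000.0)
chemistry = Model("chemical-level model", precision=0.95, cost=10.0)

# With these made-up numbers, the coarser model wins despite lower precision.
best = max([full_physics, chemistry], key=score)
print(best.name)  # -> chemical-level model
```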


One Response


  1. Graham said, on June 29, 2011 at 6:46 am

    I agree. I also feel like you’re leaving out abstraction, although the distinction is subtle. Chemistry > Physics for the reasons you’ve stated, but in addition quantum chemistry makes a black box out of quantum physics, trading statistical precision for perfect analytical precision. And then molecular chemistry abstracts certain basic rules from quantum chemistry, and so on, until you have shit like the ideal gas law and fluid dynamics at the higher levels. To discuss fluid dynamics in terms of quantum physics is more than wrong; it’s impossible even to begin. Likewise, if all you’ve got is an understanding of the stimulus/response algorithm for a single ant, how an anthill moves across a field in response to a perceived threat is impossible to calculate or even visualize.

    Of course information crosses levels of abstraction; it’s not a total black box. The ideal gas law works for some gases at low pressures, but as soon as the initial energy/complexity of the system is increased beyond some threshold, sensitivity to initial conditions kicks in and the gas behaves unpredictably.

    I suppose ultimately what I’m moving towards here is a comparison between the structure of the brain and other highly complex non-linear systems, like the weather. Because of various level-crossing effects, the behavior of any sufficiently complex model is likely to be both non-repeatable and unpredictable. So then, for neural nets: while they can be used to model basic learning and perform simple tasks (like flight simulation), the fact that the behavior of these nets can be both predicted and controlled (to the best of my understanding) implies that they aren’t sufficiently complex to model human “intelligence” as defined by you.

