Rambling techy stuff... to look on it causes insanity. [Mar. 1st, 2009|12:42 pm]
tearsofzorro

Someone posted this at work recently, and I made a note to watch it later - I just did. The numbers are definitely interesting, but what I find very intriguing near the end is the sort of computational capacity predicted for the next while.

So there's a reckoning by this person that by 2013, a supercomputer will be built whose power will exceed that of the human brain. The problem I see is that we STILL won't know what to do with it. We have all these ideas about computers, but the fact is, we have lost the ability to fully comprehend what goes on inside a computer at any given time. Most of our advances in the academic computer fields seem to be about coming up with ways for us to keep up with our computers so we can utilise them fully. Most of the rest are about making things nicer for us to program.

Ever notice how Windows keeps getting bigger as computers increase in computational capacity? That honestly can't be explained away with some glib line like "M$ are godawful programmers" (I'm not saying they are or aren't; I just believe it's not the cover-all that a know-it-all teenager, like I once was, would like to believe). A lot of software is big and slow because we keep making building blocks that help us think in terms of objects - components that make bigger things, as springs and gears would for an old-school clockmaker. When the limits of a computer were small enough to fit inside one programmer's head, the priority was ruthless efficiency. But when you don't know how to make huge limits manageable, you can afford to build components that you understand: they're big, manageable, and act like ideas in your head. With those ideas you build bigger and bigger structures. And because the limits matter so much less than they did, it doesn't matter if your components are clunky, too big or too slow; it won't matter soon enough, and the practicality of getting the job done before the executives shout at you outweighs any elegance the Ideal Solution(tm) would offer.

So, that's why we have research into more intelligent objects. Agents are simply constructs with a bit more autonomy than your basic object, modelled on psychological and sociological accounts of human interaction. Evolutionary algorithms try to evolve programs we might not necessarily understand by breeding their representations together. Neural nets are models of our current understanding of brains - and we don't yet know whether that understanding is right, because we can't test our models of neurology until we can simulate them, and we don't have the processing power for that yet. Couple that with the fact that much of the artificial intelligence community has no interest in modelling the human brain, preferring to solve problems with what it does understand, and even when we reach this point of a machine with as much computational power as a human brain, it won't match it.
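The "breeding representations together" idea can be sketched with a toy genetic algorithm. This is just a minimal illustration of the principle, not any particular research system - the all-ones target, the population size and the mutation rate are all made-up parameters:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TARGET = [1] * 20   # toy goal: evolve an all-ones bit string
POP_SIZE = 30
MUTATION_RATE = 0.02

def fitness(genome):
    # how many bits match the target
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    # single-point crossover: splice two parent representations together
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome):
    # occasionally flip a bit
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# start from a random population
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
initial_best = max(fitness(g) for g in population)

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]   # keep the fitter half intact
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("generation:", generation, "fitness:", fitness(best), "/", len(TARGET))
```

Because the fitter half of each generation survives untouched, the best fitness can never decrease - the "breeding" only has to stumble upwards. Nobody reads the winning genome to understand *why* it works, which is exactly the trade-off described above.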

We'll still be trying to fit it into some realm of human manageability, so when we try to build a representation of a brain on this supercomputer, it won't come close to matching a brain. We'll still be chasing our tails trying to emulate something nature has made so ruthlessly efficient, while we're barely capable of understanding what goes on in our own brains. By the time a computer as powerful as our brain arrives, we won't be able to utilise it effectively enough to warrant any major celebration.

As you can tell, that video put me in a slightly weird mood. But for some reason I want to talk myself down by saying that these brain-sized computers will still be spending 60% of their time running Windows rather than doing any serious thinking.