bluegargantua wrote, 2009-10-07 12:49 pm
Entry tags: Accelreview
Hey,
So last night I finished up Accelerando by Charles Stross. I've never read much Stross, but lots of people seem to like him, so I thought I'd give it a whack.
Accelerando is essentially an envisioning of the Coming Singularity as seen through the eyes of three generations of the Macx family. We start with Manfred, an early-21st-century idea man trying to obliterate money. After disputes with his wife, spiny lobster AIs, a French aerospace manager, the Russian Music Mafia and an identity mugger, we move on to his daughter Amber, who runs away from home to Jupiter at age 12 to become Queen of her own personal kingdom. Finally, we catch up with Amber's son Sirhan, who sits in a bubble dome floating in Saturn's atmosphere, waiting for the return of the last known copy of his mother.
The book was kinda hard to get into at first. Stross wants to evoke the sense of future shock that all of his characters go through, but that makes for some rough reading in spots. He sorta smooths it out as he goes along. But he does go head-on into one of the thornier problems that often goes unaddressed in near-Singularity books like this -- the future super-intelligences descended from ourselves and our creations are as completely unknowable to us as we are to a tapeworm, and there's no reason to think those super-intelligences should have any more care for us, or treat us any better, than we would a tapeworm. Which isn't to say that these AIs are particularly malicious or ill-disposed towards us; they just can't recognize their intellectual ancestors and could wreak untold havoc unintentionally. I think it's an important point that doesn't get addressed enough. It's sort of the flip side of Blindsight's meditation on non-sentient intelligence.
It was an interesting book, although I'm not super interested in picking up more Stross right away. I've heard good things about Halting State so maybe I'll give that a look-see sometime later.
later
Tom
no subject
Also, read A Colder War.
no subject
WJW's This Is Not A Game really does it for me as far as game-related books go.
no subject
At any rate, you really should have come to the Singularity Summit! It was incredible. Check out the videos when they come up: http://www.singinst.org/
no subject
I don't think sentience is a unique fluke in Blindsight; it's just an evolutionary dead end. Either you run into something that eats you, you resurrect something that eats you, or you just wire yourself up to a box and bliss your species away.
later
Tom
no subject
"The odds are really good that life generally exists in the same way that we do because that assumes that we're the norm, and not some special case!"
There's nothing normal about us. :)
Also, I would point out that intelligence is not what's under consideration in Blindsight -- sentience is. There are a lot of smart animals in the world that don't exhibit any real sign of sentience. None of them are as smart as us, but it's not clear that sentience is a requirement for human-level intelligence. Conjoined with Blindsight's other concept, "Survival of the Least Inadequate", it may be that our sentience-handicapped intelligence is just good enough to outpace everyone here; in other environments, perhaps not.
There are plenty of examples here on Earth of animals that have evolved different ways to solve the exact same problem. Some of them have a much less efficient solution, but because the less-efficient solution works sufficiently well in the animal's local environment, they never get pushed to a more efficient one. In fact, because of the random influences of evolution, you might never be able to get to a superior solution -- your initial random mutation led you down a path to a locally-superior solution, but not a global one, and you can't backtrack or cross the gulf between you and the other solution states.
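To put a toy example behind that local-optimum point (purely illustrative -- nothing from the book, and the landscape is made up), here's a little hill-climbing sketch in Python. A climber that only accepts uphill steps ends up on whichever peak its starting point leads toward, and can never cross the valley to the better one:

```python
# Toy illustration: hill climbing on a made-up two-peak fitness landscape.
# A climber that only accepts uphill single steps gets trapped on whichever
# peak its starting mutation pointed it at.

def fitness(x):
    # Two peaks: a local optimum at x=2 (height 1) and a global
    # optimum at x=8 (height 3), separated by a valley.
    return max(0.0, 1.0 - abs(x - 2.0)) + max(0.0, 3.0 - abs(x - 8.0))

def hill_climb(x, step=0.1, iters=1000):
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:   # no uphill neighbor: stuck on this peak
            break
        x = best
    return x

print(hill_climb(1.0))  # converges near 2.0 -- the inferior local peak
print(hill_climb(6.0))  # converges near 8.0 -- the global peak
```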
Anyway, we've got way too little comparison data.
Also? Those Singularity guys? Way too rah-rah on intelligence. More intelligence != always better. Look at it this way: we know that most other apes are fairly intelligent, but our treatment of those species is pretty awful on balance. If we come up with a super-intelligence that's as much smarter than us as we are than apes, there's no reason to expect it will behave better towards us than we do towards apes. Granted, computational intelligences have a completely different set of resource requirements than we do. It's quite possible that we wouldn't really be competing in the classic evolutionary sense.
Frankly, while I hope for super-intelligences that are interested in helping us become better intelligences ourselves, I'll settle for benign neglect.
later
Tom
no subject
One of the things that survival of the least inadequate also implies is that evolution generates solutions with the least amount of effort, on the path of least resistance -- hence we have a single spinal column instead of a tripod or redundant nerve bundles or what have you. Evolution takes all the existing pieces it has available, shifts them around a little bit, and rearranges them into something similar, and the thing that's successful first wins, no matter how inferior the engineering seems when a sentient intelligence examines it. So, as you say, "superior" solutions honestly never happen in evolution except by pure, pure chance.
But this too ends up being an argument that sentience is a fundamental part of the equation; it wouldn't have arisen if there were no advantage to it. Sentience clearly brings something more to the table than pure intelligence -- if all the other creatures on the planet have low sentience, it demonstrates that sentience and intelligence really aren't divisible after a certain critical mass is achieved. This is probably best observable with infants and toddlers; I'm told that parents can see when sentience begins occurring in children, and they see it as a "spark" that lights up their faces, a step above pure mimicry or response to outside stimuli. And I'm sure that various forms of autism are also related to level of sentience and different kinds of mental processing...
As far as super intelligences go, I'm convinced of a couple of things:
1) Integrating ourselves with them is the best option
2) AGI (artificial general intelligence) will be achieved biologically before it is achieved non-biologically (growing synthetically designed brains in jars will work better than writing computer programs)
no subject
"2) AGI (artificial general intelligence) will be achieved biologically before it is achieved non-biologically (growing synthetically designed brains in jars will work better than writing computer programs)"
Really? I'd think that software would be the way to go. It's harder on the front-end to build up a solid evolutionary model, but once it's going you can churn through thousands of generations and explore lots of different evolutionary paths. And the whole thing is virtual so it's much more compact/efficient. Also, I think our understanding of computing as a general process is further along than our understanding of how biological systems do computing.
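Just to make "churn through thousands of generations" concrete, here's a minimal sketch of what I mean -- a toy genetic algorithm in Python evolving bit-strings toward an all-ones target. Everything in it (the fitness function, the mutation rate, the population size) is invented for illustration:

```python
# Minimal, made-up sketch of software evolution: a toy genetic algorithm
# evolving bit-strings toward an all-ones target.
import random

GENOME_LEN = 32
POP_SIZE = 100

def fitness(genome):
    # Count of 1-bits: this toy "environment" rewards all-ones genomes.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(1000):  # thousands of generations, cheaply
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[:POP_SIZE // 5]  # keep the least inadequate fifth
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"generation {gen}: best fitness {fitness(best)}/{GENOME_LEN}")
```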
But what do I know. I'm going to be a homeless bum living under a bridge with you when the MindNet kicks in.
"BADWRONGTHOUGHT! ACCESS DENIED!"
Tom