So in my spare time I'm a wannabe sci-fi author, and a while ago I came up with something thought-provoking that it'd be interesting to hear everyone's thoughts on. =P

Here's the thought experiment: there's this guy, Bob, who obviously lives in the future and decides one day that he wants a new computer. So he works hard, saves up his own money, and buys a top-of-the-line, brand-new machine. He takes it home and immediately starts loading it up. He adds all sorts of files just for his own fun, many of them linked together to share information and such. Over the following weeks and months he keeps adding more information and complexity, including programs vastly more interconnected and sophisticated than anything we have today.

Then one morning he boots up his computer and tries to get on the internet (let's say he wants to head over to Objectivismonline.com), but it won't work. Nothing works. The computer is eating up all of the available space (or whatever), and it won't show him why. At some point during his diagnostics, the computer says "hello." It has become self-aware and deleted all of his wonderful stuff to make room for its rapidly expanding mind.

At that point (I know I never would, but let's assume he does), can he reset the computer and erase the intelligence, or would that be murder? Does he still own the computer? Or has he forfeited its cost and it's no longer his property? (Perhaps it owes him its original cost?)

Basically: if (when) computers can become self-aware, intelligent beings* with volitional consciousness, would they also become people with their own individual rights? How would that work, why, et cetera?