Objectivism Online Forum

Artificial Intelligence / Thinking Machines



JASKN


*** Split Topic (from another thread) ***

I thought that podcast #16, June 2, 2008, was very interesting in its topics, which include[...] artificial intelligence[...]
Yes, it was a good podcast. On artificial intelligence, from his particular angle I suppose he is right, but I can't think of AI without the next step: it learns by itself. I wish he had commented on that.

Yes, it was a good podcast. On artificial intelligence, from his particular angle I suppose he is right, but I can't think of AI without the next step: it learns by itself. I wish he had commented on that.

Well, as far as I know, we haven't built a machine that can do that. And we actually haven't built a machine that can learn, but rather ones that are complicated enough in their programming to try various things to get themselves out of a jam. For example, we can build a machine that moves forward until it can't because of a wall or something else in the way, and then tell it via the program to back up and go in another direction. The Mars rovers can do things like that, and it is euphemistically referred to as "learning," but it's not really consciousness, because it is a machine that is following its programming via various inputs and outputs.
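
To make that concrete, here is a toy sketch (my own illustration, not any actual rover code) of what "following its programming via various inputs and outputs" looks like: a fixed rule maps each sensor reading to a response the programmer chose in advance.

# Toy illustration: a "robot" R bounces along a corridor between walls (#).
# Every response is a rule the programmer wrote; nothing is learned.
corridor = list("#R       #")
pos, direction = 1, +1

for _ in range(20):
    ahead = pos + direction
    if corridor[ahead] == "#":        # "sensor" input: wall detected
        direction = -direction        # programmed response: back up / reverse
    else:                             # otherwise: keep moving forward
        corridor[pos], corridor[ahead] = " ", "R"
        pos = ahead
    print("".join(corridor))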

There are leading Objectivists who claim we might be able to create artificial life from the basic building blocks of life, but never an artificial consciousness. I'm not sure I agree with them. If it is possible to create artificial life, then why would it be impossible to build one such that it has consciousness?

However, Dr. Peikoff is right in a technical sense. If it is a machine, that is different from an artificially created form of life. Machines are man-made devices that are designed to do certain things and only those things; the programming on your computer is not a consciousness, and that machine is not thinking.

In other words, we are not biobots, and neither are the higher-level animals that have a consciousness. Maybe a bacterium is a biobot, but a bird, for example, is aware of its surroundings via consciousness and acts accordingly.

I think the confusion comes into play because of the misconception that man is nothing more than a complicated machine built by nature, which isn't true.


I think the confusion comes into play because of the misconception that man is nothing more than a complicated machine built by nature, which isn't true.
Depending on how broadly you wish to define "machine," I think it is true. Machines as we currently make them are so primitive that it isn't helpful to compare them to consciousness, but consciousness is part of reality, of course, and so it is possible to know it, and possible to replicate it. Sure, who knows whether humans will ever figure out how to do that. But I would be interested in a good argument for why knowing the properties of consciousness would still not be enough to replicate it.

Depending on how broadly you wish to define "machine," I think it is true.

Machines are devices built by man, and it is only figuratively that we can call mechanisms in nature "machines." A tree, for example, is not a machine; a solar panel is a machine. Even though they both use a similar mechanism -- i.e. turning sunlight into another form of energy -- trees are not man-made and therefore are not machines.

Machines as we currently make them are so primitive that it isn't helpful to compare them to consciousness, but consciousness is part of reality, of course, and so it is possible to know it, and possible to replicate it.

By what standard are you saying our modern machines are primitive, when they are the most advanced machines ever built? It kinda reminds me of when I watched the first man walking on the moon and was disappointed that the picture was fuzzy. It seemed primitive to me because I had been reading a lot of science fiction, in which going across the galaxy was routine. By an objective standard, modern machines are very advanced -- in fact, as far as we know, the most advanced machines ever built by anyone anywhere in the galaxy. It is not proper to compare machines actually built to those of the imagination. If you can make them better, then do so, but otherwise one must accept the fact that they are advanced by a rational standard.

Regarding consciousness, we don't really know where that ability comes from. The brain and the ability to process a wide range of perceptions seem to be at the root of it, but I wouldn't say that a computer is conscious even though it has sensors on it and can respond to inputs. For example, a machine that can detect something in front of it, such as the Mars rovers, is not conscious, and it is not perceiving in the animal sense of the word. And something like Data of Star Trek is a mechanism of the imagination; we don't know whether it is possible to build. It is certainly not possible now or in the near future, and whether it will ever be possible to build an android that is aware of its surroundings enough to navigate completely independently of a program telling it what to do is speculative. We've never built one like that, so we don't know. I think as we learn more and more about the brain and how it processes information, that might become possible, but we aren't there yet. But again, comparing, say, the Mars rovers to Data is improper, unless you can build a Data.


New things are invented because people take what we already know and decide what is possible to make at the next level. That is what I mean about human consciousness; it is not a stretch to imagine replicating consciousness exactly as we know it. What is likely is that we will find a better replacement for consciousness before defining it, but the point remains: consciousness is within each person, we can study it, and eventually we can know it. That is different from saying the same about omniscience or teleporting. We know we can't "know" those things.

In that vein, if a machine is only something already created by men, then that is not how you used the word in your previous post. Either way, obviously I was talking about a machine in the "smaller pieces of reality working together to do something" sense. ...And it is likely people will figure out how to replicate it someday.

Lastly, it is perfectly fine to call modern technology "primitive" when comparing it to plausible future discoveries which will be far more advanced. I don't understand your problem with that.

Whew! Talk about jumping topic!


Well, as far as I know, we haven't built a machine that can do that. And we actually haven't built a machine that can learn, but rather ones that are complicated enough in their programming to try various things to get themselves out of a jam.

There was a program for the old Apple ][ line back in the '80s that played "20 questions" and tried to guess what animal you were thinking of. It came loaded with only a few animals, but it could add more as it gave up and asked you to name yours. I'm not saying it learned, but it certainly could expand its own database.
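
For anyone curious, the classic version of that game keeps its animals in a growing yes/no decision tree. Here is a minimal Python sketch of the idea (my own reconstruction, not the actual Apple ][ program):

# Minimal "guess the animal" game that expands its own database:
# a node is either a leaf (an animal name) or a (question, yes, no) triple.
def play(node):
    if isinstance(node, str):                  # leaf: make a guess
        if input(f"Is it a {node}? (y/n) ") == "y":
            print("Got it!")
            return node
        # Give up, then grow the tree with the player's animal and a
        # question that distinguishes it from the wrong guess.
        animal = input("I give up. What was it? ")
        question = input(f"Type a yes/no question that is 'yes' for a {animal}: ")
        return (question, animal, node)
    question, yes_branch, no_branch = node
    if input(question + " (y/n) ") == "y":
        return (question, play(yes_branch), no_branch)
    return (question, yes_branch, play(no_branch))

tree = "cat"                                   # starts knowing one animal
while True:
    tree = play(tree)                          # the tree grows whenever it loses
    if input("Play again? (y/n) ") != "y":
        break

Exactly as described: the database expands, but only because the program was written to expand it.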

For example, we can build a machine that moves forward until it can't because of a wall or something else in the way, and then tell it via the program to back up and go in another direction. The Mars rovers can do things like that, and it is euphemistically referred to as "learning," but it's not really consciousness, because it is a machine that is following its programming via various inputs and outputs.

Right. Now suppose you program an automated floor cleaner, say a Roomba, to map out the floor where its base sits. It will take anywhere from hours to days to determine where the walls and furniture are. Once it does, it doesn't need to feel its way every day, but can follow the map in its memory. Of course, if you move a chair it will unerringly run into the leg and need to update its map. And it won't know a thing about moving obstacles such as pets or people, or anything you choose to leave on the floor, like a heavy box, a toy, a suitcase, etc. If it kept updating its map every time it ran into one, it would never finish.

So instead it relies on other rules, doesn't map anything, and just follows generic routes until the entire floor is clean. It makes more sense for it to do so: it's cheaper, more efficient, and gets the floor clean, which is the point of an automated floor cleaner in the first place, not to develop a consciousness.
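
A toy simulation (again my own sketch, not Roomba's actual firmware) shows why such generic rules are enough: even a mapless bump-and-turn rule eventually covers the open floor.

import random

# Mapless "bump and turn" cleaner on a tiny grid: on hitting a wall or
# furniture (#), pick a random new heading; otherwise move and clean.
GRID = ["#######",
        "#     #",
        "#  ## #",
        "#     #",
        "#######"]
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

r, c, heading = 1, 1, "E"
cleaned = {(r, c)}
for _ in range(500):                     # fixed effort budget, no map kept
    dr, dc = MOVES[heading]
    if GRID[r + dr][c + dc] == "#":      # bumped an obstacle
        heading = random.choice("NSEW")  # generic rule: turn at random
    else:
        r, c = r + dr, c + dc
        cleaned.add((r, c))              # forward motion cleans the cell

open_cells = sum(row.count(" ") for row in GRID)
print(f"Cleaned {len(cleaned)} of {open_cells} open cells")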

There are leading Objectivists who claim we might be able to create artificial life from the basic building blocks of life, but never an artificial consciousness. I'm not sure I agree with them. If it is possible to create artificial life, then why would it be impossible to build one such that it has consciousness?

No reason I can see. If you put together a man with artificial components, however you go about it, wouldn't the result be a man with a consciousness? We may also conceivably genetically engineer a conscious, volitional mind onto lesser beings like dolphins, dogs, or apes. At that point, however, ethical considerations would apply, but that's fodder for another topic.

However, Dr. Peikoff is right in a technical sense. If it is a machine, that is different from an artificially created form of life. Machines are man-made devices that are designed to do certain things and only those things; the programming on your computer is not a consciousness, and that machine is not thinking.

Well, consider memory. We have a memory we use and depend on, as do many kinds of lower animals. Machines have a sort of memory, too, which they use and depend on (try working a PC without RAM). But the way our memory is used is different from how a machine uses its RAM and ROM.


I lost track of this split-off thread, since I didn't go back to the original thread and did not receive an email notification of a reply.

At any rate, I think the big difference in thinking about consciousness versus following a program is that consciousness is awareness of existence; and I don't think that an automated machine, such as the floor sweeper mentioned earlier, is aware of its surroundings, nor do I think its processing of inputs is a consciousness.

Let's take this example: when one inserts a CD-ROM into one's computer and it accesses the disk and begins to load the program to install it, at no point is the machine -- the computer -- aware that there is a disk in there and that it needs to install the program. It is strictly electromechanical. Some people claim that consciousness is strictly bio-mechanical -- the operations of the neurons, for example -- but this doesn't really work for man, because he has free will, so it is not strictly bio-mechanical.

Studies would have to be done regarding what is needed for there to be a consciousness. For example, we can definitely tell that our pet dogs and cats are conscious, but is the fly buzzing around conscious? I mean, clearly, at some point in evolution, higher-level animals acquired consciousness. Exactly where, we don't know. Is a lizard or a snake conscious? I've played with them as a kid, and I would say yes, they are conscious.

So, in theory, I think it might be possible to understand what minimal means of awareness and brain processing are required in order to have a consciousness. But we don't know what that is yet. I'm also aware, from playing with some animals when I was a kid, that they aren't really biobots, as some would assert. So could one go from a machine processing with microchips to a kind of conscious awareness? Not if, strictly speaking, it is a machine designed to perform specific tasks that are loaded into its memory. In other words, following a program is not the same thing as being aware and having a consciousness.


Well, as far as I know, we haven't built a machine that can do that. And we actually haven't built a machine that can learn, but rather ones that are complicated enough in their programming to try various things to get themselves out of a jam.
There's a whole subfield of artificial intelligence dedicated to the issue of machine learning. One of the more famous examples dates back to the 1950s and '60s, when a program learned to play checkers at a strong level by playing against itself thousands of times and learning from its experiences (http://en.wikipedia.org/wiki/Arthur_Samuel). For a more recent example, there's 'motor babbling' research, where robots learn to control their limbs by repeatedly trying out different actions until they discover what effects they have, in the same way that children learn motor skills. This is in some sense a more complex version of the classic 'pole-balancing' problem, where a cart with a pole balanced on top of it must learn how to move to prevent the pole from falling down (http://www.bovine.net/~jlawson/hmc/pole/sane.html).

There are literally hundreds of other examples, though.
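
To give a flavor of the Samuel-style approach, here is a self-play sketch on a far simpler game, Nim (my own toy example, nothing like the scale of the checkers program): two copies of the same value table play each other, and the moves of whichever side wins get reinforced.

import random
from collections import defaultdict

# Nim: 21 sticks, each turn take 1-3, taking the last stick wins.
Q = defaultdict(float)               # (sticks_left, take) -> learned value
ALPHA, EPSILON = 0.1, 0.2            # learning rate, exploration rate

def pick(sticks):
    moves = [t for t in (1, 2, 3) if t <= sticks]
    if random.random() < EPSILON:                    # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda t: Q[(sticks, t)])  # else best known move

for game in range(20000):            # learn purely by playing itself
    sticks, player, history = 21, 0, {0: [], 1: []}
    while sticks > 0:
        take = pick(sticks)
        history[player].append((sticks, take))
        sticks -= take
        if sticks == 0:
            winner = player
        player = 1 - player
    for p in (0, 1):                 # nudge each move toward the outcome
        reward = 1.0 if p == winner else -1.0
        for move in history[p]:
            Q[move] += ALPHA * (reward - Q[move])

# From 21 sticks, taking 1 (leaving a multiple of 4) is the winning move,
# and after training the table should rank it highest.
print({t: round(Q[(21, t)], 2) for t in (1, 2, 3)})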

At any rate, I think the big difference in thinking about consciousness versus following a program is that consciousness is awareness of existence.
Why is consciousness necessary for learning? I've no idea whether rats are conscious, but it's been established that they are capable of learning from their experiences.

For example, we can build a machine that moves forward until it can't because of a wall or something else in the way, and then tell it via the program to back up and go in another direction. The Mars rovers can do things like that, and it is euphemistically referred to as "learning," but it's not really consciousness, because it is a machine that is following its programming via various inputs and outputs.
Yeah, as you've stated it, this isn't learning. But what if we don't tell the machine anything about what to do when it hits a wall, and instead let it pick an action at random? And then, after it's picked one, it receives some feedback telling it how good the action it picked was, so that over time it begins to 'learn' which actions are best, until it's choosing the right one in most situations most of the time. Is this fundamentally different from how animals learn things?
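
That loop can be written down in a few lines. A bare-bones sketch of the trial-and-error scheme just described (my own toy example; the chapter linked below develops the real versions):

import random

# Three possible actions with hidden payoff probabilities; the machine
# starts knowing nothing and learns which action is best from feedback.
true_payoffs = [0.2, 0.5, 0.8]             # hidden from the learner
estimates = [0.0, 0.0, 0.0]                # what the machine "believes"
counts = [0, 0, 0]

for step in range(10000):
    if random.random() < 0.1:              # sometimes act at random (explore)
        action = random.randrange(3)
    else:                                  # otherwise take the best-looking action
        action = estimates.index(max(estimates))
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print([round(e, 2) for e in estimates])    # drifts toward 0.2, 0.5, 0.8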

http://richsutton.com/book/chapter1.pdf is the first chapter of a book (Sutton & Barto, Reinforcement Learning) which gives a brief introduction to these sorts of techniques in machine learning.


Can you really say machines are learning? As far as I know, learning is understanding plus memorization. I know machines have the memory part down, but can they "understand"? Judging from the examples above, it seems as if they're just picking a course of action and then storing in their memories the nature (goodness or badness) of the action.


Machines can't think... yet. But since we all have consciousness, we know such a thing is possible; and since there is no such thing as the supernatural, including in regard to consciousness, it naturally follows that it is at least possible that some time in the future man will be able to replicate consciousness via technology. To claim otherwise is to claim that something created by nature cannot also be created by man. This is incorrect. If we had the technological know-how, we could create a star. None of this is arbitrary; it's just a matter of time and scientific and technological advancement.


As far as I know, learning is understanding plus memorization.

I disagree -- a lot of learning (particularly in animals) may be just stimulus-response-based and explainable in behaviorist terms, without needing to talk about understanding or consciousness. Imagine teaching a parrot to say your name or a dog to respond to you saying 'time for a walk' -- would you say there's any understanding here? For me, understanding implies consciousness, and I don't think it's obvious that lower animals are conscious.

http://en.wikipedia.org/wiki/Classical_conditioning

http://en.wikipedia.org/wiki/Operant_conditioning
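
To make the stimulus-response point concrete, the standard Rescorla-Wagner model of classical conditioning fits in a few lines (a textbook model, sketched here from memory rather than taken from the linked articles):

# Rescorla-Wagner: the bell->food association V climbs toward the level
# the food supports, driven purely by prediction error -- no understanding.
alpha = 0.3                       # salience / learning rate
lam = 1.0                         # maximum association the food supports
V = 0.0                           # starts with no association

for trial in range(1, 11):        # bell paired with food on each trial
    V += alpha * (lam - V)        # update proportional to prediction error
    print(f"trial {trial:2d}: association = {V:.3f}")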

Yeah, there's obviously an element of understanding required for certain types of human learning, but I don't think that has to be true for learning in general. Saying that machines can learn doesn't necessarily imply that they're learning in the same way we would.

