Objectivism Online Forum

Artificial intelligence (AI): What are the real detrimental effects of the materialist concepts?


Intro:
I think it is undisputed that the development of technologies normally classified as AI holds great value for us. At root it is simply a kind of human-like computer automation, with applications to almost all the technologies we use, and even to science, as when AI software has been used to help solve scientific problems. However, I sometimes wonder what to make intellectually of the hype surrounding it, what is valid and invalid about it, and what is good and bad about it. The first thing I am confronted with in this task is the bad terminology in use.

Hence my question:
What are the actual practical problems with labeling certain (AI) computers as "conscious" and "intelligent", able to "perceive", able to perform advanced "computations" on "information", and able to store enormous amounts of "knowledge" in their "memory"?

Many Objectivists rightly object to the common usage of these terms when applied to computers, as they imply a certain materialist view of consciousness. Given that we agree on this, however, are the actual problems that result strictly philosophical, in that the terminology creates philosophical confusion over time, or can it also cause real practical limitations to technological success? (That would seem to run contrary to the fact that this technology is developing so quickly and successfully, wouldn't it?) Maybe the problem is only that it permits certain irrationality in the future projections of how AI will impact human life? Or is it all of the above - and if so, why?

What do you think about this? What's so bad about how people use these words? Should we care about it?

(Posted under epistemology because I see this as a good example of the practical application and value of Objectivist epistemology.)

Edited by patrik 7-2321

1 hour ago, patrik 7-2321 said:

...are the actual problems that result strictly philosophical, in that the terminology creates philosophical confusion over time, or can it also cause real practical limitations to technological success?

The problem with mistaking robots for humans is both moral and practical. There is no dichotomy between the two. If robots are "conscious" and "intelligent", then shouldn't they also be "free" instead of "slaves" to their makers? Shouldn't they have "rights" as citizens and be allowed to "vote" in elections? If makers of robots cannot own and control their creations, why should they invest any more time and money in such technological development? The answer, of course, is to allow for "slavery" of robots. But if you can "enslave" a robot, why not a human?

 

Edited by MisterSwig

1 hour ago, patrik 7-2321 said:

What are the actual practical problems with labeling certain (AI) computers as "conscious" and "intelligent", able to "perceive", able to perform advanced "computations" on "information", and able to store enormous amounts of "knowledge" in their "memory"?

I think that, in most cases, the words being used are appropriate. "Memory" could just as easily be called "storage," and indeed, in many contexts it is. "Knowledge" has been used in this sense since the beginning of time. Academics lamented all of the "knowledge which was destroyed" when the Library of Alexandria was burnt to the ground. I would tend to agree with you that knowledge and data are different. All a computer contains on its hard drive is data. Knowledge is a contextual understanding of what a concept means and of its interrelation to other concepts. No AI system developed thus far has such knowledge. AlphaGo Zero is better at Go than the top human players, and than previous versions of itself... but it does not have "knowledge" of the game, why it is played, what it means, etc.

I haven't heard anybody call present AI software "conscious," although speculation exists in certain circles that eventually a form of consciousness could be reached by these systems. Most people who actually work in the field shy away from this sort of speculation--as they should, because claiming that they are developing conscious systems would invite government regulation of them.

Perception? That's a new one. Usually it's called "image recognition algorithms" or something.

Computation is an action, which needn't be performed by conscious beings. Nothing in the definition precludes it being done by a machine.

My question to you would be: what alternate terms would you suggest to those presently in use?

1 hour ago, patrik 7-2321 said:

Maybe the problem is only that it permits certain irrationality in the future projections of how AI will impact human life?

I don't necessarily believe that changing the terminology would change the predictions. AI is like guns or atomic power... not inherently dangerous in and of itself, but dangerous if the wrong person uses it for the wrong ends, whether deliberately or accidentally. In the case of nuclear power, it can also be dangerous if proper safety precautions are not taken.

I view "strong AI" in much the same way. Were it to be achieved, it needn't be conscious to wreak havoc, whether by a hacker or terrorist group gaining access, or because it was programmed incorrectly without safety precautions in place to prevent it gaining access to critical infrastructure and using it for ends that humans might not like... such as trying to convert the world into a giant paperclip factory. Or AI could simply be added to the growing police state's arsenal of surveillance in violation of the Fourth Amendment. The NSA would wet their pants to have access to Skynet, or other similar AI systems as portrayed in science fiction... and there is at least some reason to suspect that it may someday become science fact.

Quote

What do you think about this? What's so bad about how people use these words? Should we care about it?

We can care in the sense that we can suggest alternate terminology, but a small group of Objectivists is not going to change the language. Look at how the SJW effort with "xe" and "xir" gender-neutral pronouns has worked out... and I would say that they are far better poised to change the language than we are.

Rather than focus our limited efforts on changing the terminology used in the AI sector, perhaps we could focus our efforts on bigger problems.

21 minutes ago, MisterSwig said:

Shouldn't they have "rights" as citizens and be allowed to "vote" in elections?

Saudi Arabia recently granted citizenship to an artificially intelligent robot. It would not be the first time that a government has irrationally granted citizenship to an entity undeserving of it.

Edited by CartsBeforeHorses

I'm responding to you separately.
 

MisterSwig,

I would like to basically summarize your comment as:

"These concepts can give rise to legal irrationality - such as granting political rights to computer systems, or depriving humans of rights."

I absolutely agree with this. The legal system and our laws surrounding technology would most definitely suffer from these bad concepts.

 

CartsBeforeHorses,

You are essentially saying that these concepts, as they are normally applied to AI, are valid, and do not cause any problems. It is just that the technology itself may be used to violate rights, which is the cause for concern.

I must admit to having to resist a snarky reference to your name. I think you are commenting quite extensively before having done enough reading on related subjects, notably Objectivism. Also, you disagree with me about whether "perception" and "consciousness" are normally applied to AI. Here I would refer you to the Wikipedia article on artificial intelligence so you can see it for yourself.

As to Objectivism's application to this: "consciousness", including concepts of consciousness such as "knowledge" and "memory" (and all the rest I mentioned), logically applies in its strict meaning only to consciousness, and to nothing else. Binswanger has an excellent discussion of this in chapter 1 of his book How We Know, where he takes up the issue of applying these terms to computers, arguing that it is wrong because it relies on materialist (stolen) concepts of consciousness. You are, however, touching on a relevant fact here: these words CAN be appropriately used to describe computers and how they work, but only colloquially, not with full exactness. Quoting Binswanger:

Quote

"Computers cannot “process information,” because information is not a physical phenomenon. Computers can only combine and shunt electrical currents. Only electricity, not information, has causal impact on the workings of the computer; information does not exist for the computer.
    Of course, there is nothing wrong with saying colloquially that computers add, process information, and play chess. But in philosophy we have to be exact: in the strict sense computers only combine currents, throw switches, charge and discharge capacitors. Computers don’t follow programs, they simply obey the laws of physics. That’s all that goes on inside them.
    If all human beings suddenly vanished from the face of the earth, but their computers remained running, there would be no information processing: the computers would merely be combining electric currents and lighting different pixels on their screens, not processing information or performing calculations.
    Some materialists, relishing the man-as-computer model, have advanced the slogan: “The brain is the hardware, the mind is the software.” Here, “software” is a stolen concept: something can be identified as software only in relation to the mind. Software, information, symbols, mathematics — none of these things exist per se in the physical world, apart from a relation to consciousness. Just as books contain only patterns of ink, so apart from man’s mind, software exists only in the form of some physical patterns, such as the patterns of magnetized iron particles on a hard drive’s disk. Patterns of ink qualify as words and patterns of iron particles qualify as programs only in relation to man’s mind." - Harry Binswanger, How We Know, pg. 46-47.

I made this topic to discuss the actual (or future) bad consequences which result from not adhering to this and applying concepts of consciousness to computers carelessly. I think the legal aspect is a very valid and good point. But are there more problems? I'm particularly interested in problems surrounding the technology itself, such as if many people are trying to build something which cannot really exist, effectively wasting money and good efforts, or if they will eventually seriously misinterpret the technology that results, etc.


59 minutes ago, patrik 7-2321 said:

You are essentially saying that these concepts, as they are normally applied to AI, are valid, and do not cause any problems.

The problem is not the language used, but the concepts used. Words do not equal concepts. Whether or not a concept is applied to a computer is a different matter from the language that we use to discuss computers. We can say that a computer has "memory," and a person has "memory," but that is, in most people's minds, a homonym. Like rose (the flower) and rose (to have risen). Same word, different concepts. Very few people out there actually think, "Oh, the computer is a living being with a consciousness that remembers" when they discuss "memory."

While there are some people out there who conflate the two types of "memory", I suspect that they'd do that regardless of whether or not the same words were used. For instance, "processing" is often conflated with consciousness, as you pointed out. This is done despite there not being a term for machine "consciousness" in common use.

I use the term "memory" in relation to computers because that's what everybody else uses. Language is a harsh mistress. Just look at the word "selfishness" and how most people use it.

Again, I'm open to any other terms that are in common use to refer to various components or aspects of the computer.

I agree that it's a problem that some people out there view machines as conscious or potentially conscious, with consequences such as the legal issues which Mr. Swig mentioned. However, I don't believe that changing the English language will cause these people to change their minds.

Quote

It is just that the technology itself may be used to violate rights, which is the cause for concern.

It is the primary thing that concerns me with AI, not whatever people choose to believe is going on inside the box. The degree of rights violation that could occur would depend on the capabilities of the AI. Just look at how many people's phone calls, emails, and credit card transactions are tracked by the NSA.

Another area is self-driving cars. In addition to their being used to track the driver, disable his ability to commute, etc., there is also the concern that traditional, human-driven cars would be outlawed because they aren't "as safe." In this case AI would be appealed to as the safer alternative. While that may be so, it is not a justification to curtail people's ability to use public roadways with the transportation method of their choice, just as the existence of vaccines isn't a justification to force people to be vaccinated.

59 minutes ago, patrik 7-2321 said:

Quoting Binswanger:

And here I would agree with everything that he says. But changing the language to reflect what's actually going on inside of a computer is, as I mentioned, a difficult task. We haven't changed the colloquial definition of "selfishness," so what makes you think that we'll change any definitions in the public usage of computer terminology?

59 minutes ago, patrik 7-2321 said:

I made this topic to discuss the actual (or future) bad consequences which result from not adhering to this and applying concepts of consciousness to computers carelessly. I think the legal aspect is a very valid and good point. But are there more problems? I'm particularly interested in problems surrounding the technology itself, such as if many people are trying to build something which cannot really exist,

In many cases they are, such as the "brain uploading" projects. In mainstream AI research, though, the focus is not on building a conscious computer. The focus is on building an artificially intelligent one: one which mimics human intelligence, not necessarily one which actually is smart the way a person is smart. As to whether or not that technology can exist in a general sense, we will have to wait and see. I would hardly call it a waste of money, because there are many good things that AI can be used for (such as aiding in scientific discovery, as you pointed out).

Quote

effectively wasting money and good efforts, or if they will eventually seriously misinterpret the technology that results, etc.

I think that is a valid concern, yes.

Edited by CartsBeforeHorses

I think there is a valid concern here, but not in the way you're suggesting. I think there are practical, technological consequences to having a correct or incorrect philosophy. A philosophy based on materialist premises is going to imply a nominalist approach in epistemology, which, as a technical approach, will lead to inappropriate forms of knowledge representation (e.g. Humean bundles of properties) and bad methods of inductive learning (e.g. cataloging statistical regularities); see the sketch below.
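
To make that concrete, here is a minimal toy sketch (in Python, with made-up names; it is not taken from any real system) of the kind of representation a nominalist approach implies: a "concept" is nothing but a bundle of statistically weighted features, and "learning" is nothing but tallying regularities:

```python
from collections import Counter

class BundleConcept:
    """A 'concept' on nominalist premises: no definition and no essential
    characteristic, just a bundle of features weighted by how often they
    co-occurred in past examples."""

    def __init__(self, label):
        self.label = label
        self.feature_counts = Counter()
        self.num_examples = 0

    def learn(self, features):
        # "Learning" here is mere cataloging of statistical regularities.
        self.feature_counts.update(features)
        self.num_examples += 1

    def similarity(self, features):
        # Classification is graded resemblance to past cases, never a
        # yes-or-no judgment by a defining characteristic.
        if self.num_examples == 0:
            return 0.0
        return sum(self.feature_counts[f] for f in features) / (
            self.num_examples * max(len(features), 1))

swan = BundleConcept("swan")
swan.learn({"white", "long-necked", "waterbird"})
swan.learn({"white", "long-necked", "waterbird"})
# A black swan scores lower on resemblance, even though it is a swan:
print(swan.similarity({"black", "long-necked", "waterbird"}))  # ~0.67
```

Notice that nothing in this representation can say what a swan is; it can only report degrees of resemblance to previously cataloged cases.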


Troubleshooting complex code to find epistemological errors that have been programmed/automated into an AI system?

The driverless-transportation programmers are discussing the downstream ramifications as the computer "deciding" who will die in their "trolley-car" scenarios. Yikes! It will be the programming teams that have ultimately decided the algorithms. By the time it gets before a judge, the red tape will be so tangled that unraveling it could be a mountainous, nearly insurmountable task.
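
To illustrate the point that the programming teams have already decided: whatever the car appears to "decide" in the moment is just the pre-ranked minimum of a cost function somebody wrote. A hypothetical toy sketch in Python (none of these names or weights come from any real autonomous-driving codebase):

```python
# The car "decides" nothing: this ranking is fixed in advance by the
# team that chose and signed off on these (entirely made-up) weights.
HARM_WEIGHTS = {"occupant": 1.0, "pedestrian": 1.0, "property": 0.1}

def maneuver_cost(outcome):
    """Sum the team-chosen weights over an outcome's predicted harms."""
    return sum(HARM_WEIGHTS[kind] * count for kind, count in outcome.items())

def choose_maneuver(options):
    # Picks the pre-ranked minimum; the responsibility lies upstream.
    return min(options, key=lambda o: maneuver_cost(o["outcome"]))

options = [
    {"name": "swerve", "outcome": {"occupant": 1, "property": 2}},
    {"name": "brake",  "outcome": {"pedestrian": 1}},
]
print(choose_maneuver(options)["name"])  # "brake" (cost 1.0 vs. 1.2)
```

Any judge untangling the red tape would ultimately be asking who chose those weights and who approved them.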


The materialist conceptions of concepts, learning, and so on have a very limited meaning, and can only produce limited results for that reason. "Learning" is the accumulation of regularities, and statistical estimates from there. "Concepts" are bundles of correlated properties, grouped together according to statistical or pragmatic standards.

A more Aristotelean or Objectivist conception of concepts and learning has a stronger meaning, and can produce much stronger results when implemented. Concepts are universals, which classify all units of a kind, and have a logical definition based on the rule of fundamentality. Learning is the induction of universal concepts or propositions from particulars, using the methods, and within the limits, of logic and non-contradictory identification.
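
As a rough illustration of the difference (a toy Python sketch under my own assumptions, not a claim about any existing system): a concept with a logical definition classifies units by genus and differentia, all-or-nothing and open-ended over units not yet observed, rather than by a similarity score over a bundle of correlated properties:

```python
class DefinedConcept:
    """A concept as a universal: a genus plus a differentia chosen by the
    rule of fundamentality. Membership is all-or-nothing."""

    def __init__(self, name, genus, differentia):
        self.name = name
        self.genus = genus              # wider concept, or None at the root
        self.differentia = differentia  # predicate naming the fundamental

    def subsumes(self, unit):
        # A unit is a member iff it satisfies the whole definitional chain.
        in_genus = self.genus.subsumes(unit) if self.genus else True
        return in_genus and self.differentia(unit)

animal = DefinedConcept("animal", None, lambda u: u.get("conscious", False))
man = DefinedConcept("man", animal, lambda u: u.get("rational", False))

socrates = {"conscious": True, "rational": True}
a_swan = {"conscious": True, "rational": False}
print(man.subsumes(socrates))  # True: classified by definition,
print(man.subsumes(a_swan))    # False: not by degree of resemblance.
```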


5 hours ago, epistemologue said:

Now obviously these deterministic machines aren't acting with any libertarian free will like we know humans do, so to that extent there are going to be issues with applying human terms which rely on volition. But I'm not sure what to say about these terms beyond that...

Can you state in broad strokes what you think is required to allow for a libertarian free will? I hate that term, but I'll go along with it for discussion.


On 11/30/2017 at 8:31 PM, patrik 7-2321 said:

Hence my question:
What are the actual practical problems with labeling certain (AI) computers as "conscious" and "intelligent", able to "perceive", able to perform advanced "computations" on "information", and able to store enormous amounts of "knowledge" in their "memory"?

To my knowledge, no one in the field has labeled a computer "conscious" or "intelligent" yet. I'm also unaware of many serious AI engineers claiming that their creations "perceive" things. The term I usually hear used is "gather input".

Where did you see those terms used, outside of sci-fi movies? 

------------------------------------------------------------------

To switch gears, your Binswanger quotes:

Quote

"Computers cannot “process information,” because information is not a physical phenomenon. Computers can only combine and shunt electrical currents. Only electricity, not information, has causal impact on the workings of the computer; information does not exist for the computer.

 

What? Information is not a physical phenomenon? So how do humans communicate? Telepathically? Is that how this paragraph came to my attention? Through the supernatural realm? EVERYTHING's a physical phenomenon. How can a rational person say it's not?

Besides, the human brain uses electricity to function, too.

The only thing I sort of agree with in this is that "information does not exist for the computer". Sure, computers don't really grasp that abstraction. Yet. But, then again, no one's claiming they do.

Quote

If all human beings suddenly vanished from the face of the earth, but their computers remained running, there would be no information processing: the computers would merely be combining electric currents and lighting different pixels on their screens, not processing information or performing calculations.

 

That's not true for a couple of reasons:

1. Humans are not the only animals with the ability to process information. Animals wouldn't be able to feed themselves, if they couldn't figure out where the food is (pretty sure "where the food is" is a piece of information).

2. There are robots built to gather and process information, and act accordingly, independently of any human control. They're not very common (because the ones that have partial human control are more capable, for now), but they exist.

Edited by Nicky

7 hours ago, Nicky said:

To my knowledge, no one in the field has labeled a computer "conscious" or "intelligent" yet. I'm also unaware of many serious AI engineers claiming that their creations "perceive" things. The term I usually hear used is "gather input".

Where did you see those terms used, outside of sci-fi movies? 

In this video a researcher from Hanson Robotics hosts a "debate" between Sophia and Han, two human-like robots. They all discuss some of the concepts in question here. The researcher talks about the robots "learning" from each other and asks them if they "think" they are "conscious". It's all quite silly, but it should help clear up the question about how mixed-up these scientists have become. Either that, or they simply want to deceive the audience.

If you think smartphones and iPads are turning the human race into a bunch of dissociative sociopaths, wait until dudes start dating robots.

Edited by MisterSwig

Why wait? Unless what is meant is that the phenomenon becomes as widespread as smartphones and iPads. (What percentage of the population did those devices start out with?)

This 58-year-old man has a sex robot girlfriend and a real wife


For all intents and purposes, James seems like a perfectly regular guy. He has a loving wife, a nice home, and a good career as an engineer. However, he also has a 5ft blonde robot called April, which he reportedly has sex with "four times a week".

In fact, not only does he sleep with the robot, he also dresses her, speaks to her, and takes her out on dates.


13 minutes ago, dream_weaver said:

Why wait? Unless what is meant is that the phenomenon becomes as widespread as smartphones and iPads.

Yeah, I mean when it becomes affordable and available to the average guy. Already it's destroying that man's self-esteem. He doesn't know what he'd do if he had to choose between his real wife and the sexbot?!  Can a man be more pathetic? You know you've reached rock bottom when you stay with a wife for whom you care as much as (or less than) a sexbot.


19 hours ago, MisterSwig said:

it should help clear up the question about how mixed-up these scientists have become

When you say "these scientists", who are you referring to? There is only one person in your video. And he's not a computer scientist. He used to be a set designer for Disney (for the division that builds all those great theme parks across the world), and now he has his own company doing the same kinds of things.

And he's very good at it, obviously. Those are impressive puppets (for lack of a better term). But it's art (with some engineering behind it, like most performance art), not science. It has nothing to do with the field of Artificial Intelligence.


6 hours ago, Nicky said:

When you say "these scientists", who are you referring to?

The group at Hanson Robotics, whom the presenter was representing. How about we look at David Hanson himself, a widely respected software engineer? Here's the header quote from his bio page:

Quote

 

“I quest to realize Genius Machines—machines with greater than human intelligence, creativity, wisdom, and compassion. To this end, I conduct research in robotics, artificial intelligence, the arts, cognitive science, product design and deployment, and integrate these efforts in the pursuit of novel human robot relations…”

– Dr. David Hanson, Founder and CEO

 

This doctor of engineering wants to create machines that have better "intelligence" and "compassion" than humans. He's going for both thoughts and emotions. You find a lot of this "emotion" talk among sexbot engineers too. I suggest watching some of the popular videos on YouTube. On the linked bio page for Hanson, the last video embedded is his TED talk from 2012. Not too far into the speech he talks about future robots having "real, deep feelings like sympathy."

How do you have feelings without consciousness?

Edited by MisterSwig

52 minutes ago, MisterSwig said:

How about we look at David Hanson himself, a widely respected software engineer?

He's a software engineer? Where did he get his degree in software engineering? Where did he work as a software engineer? What is a piece of software he wrote? Where does he even claim to be a software engineer?

AGAIN: David Hanson is not a software engineer, let alone a computer scientist specializing in AI. He is a DESIGNER. And no, he doesn't design software. He designs props. Really cool ones, but props. Like the ones in your video. And, by the way, he doesn't claim that they're anything but props. (He doesn't use that word, but he's not trying to deceive anybody into thinking they are more than they are.)

Here's a quote from an interview with David Hanson:

Quote

 

Being an artist, you can introduce something that is more startling and disruptive. You don’t have to worry about those incremental steps. You can introduce something that really stirs things up and see what happens. By putting the technology together in this form that may be startling, the technology itself is really incrementally advancing. … With the robots, we put together these dialog systems with today’s AI, but we do it in an artistic way that then can seem like that there’s somebody in there. And arguably, it’s just these ghost-like shreds of who that person is. There’s not really a mind in these machines, like a human mind. But you can convey an amazing impression there.

The technology itself, there's some advances. But we have not unlocked the Holy Grail of artificial intelligence with these humanlike robots yet. What we have done is put this burning idea in people's minds. When the robots work well, people start to say 'Wow, we could do that. Should we do that? What could it be good for? Wow, it could be good for all kinds of things! How could it be dangerous?' People start to think about these questions, and it inspires developers to think of these questions as well, as we go forward.

https://www.digitaltrends.com/cool-tech/qa-with-android-designer-dr-david-hanson/

 

So I don't know how much clearer I can make these three basic facts:

1. David Hanson is not a computer scientist or an expert in AI

2. David Hanson does not claim to be a computer scientist or an expert in AI

3. David Hanson does not claim that his creations are conscious or intelligent. On the contrary, as you can read in the quote above, he openly admits that they are not. He openly admits that the whole thing is staged, and it is art rather than a scientific presentation.

 

Edited by Nicky

1 hour ago, MisterSwig said:

This doctor of engineering wants to create machines that have better "intelligence" and "compassion" than humans. He's going for both thoughts and emotions. You find a lot of this "emotion" talk among sexbot engineers too. I suggest watching some of the popular videos on YouTube. On the linked bio page for Hanson, the last video embedded is his TED talk from 2012. Not too far into the speech he talks about future robots having "real, deep feelings like sympathy."

How do you have feelings without consciousness?

Tense consistency: a writer's best friend.

https://webapps.towson.edu/ows/tenseconsistency.htm

We don't have feelings without consciousness. We don't have artificial consciousness, we don't have machines that feel, we don't have true artificial intelligence. No one in the field is claiming otherwise. Please take note of the tense I'm using. Please use the same tense consistently, if you wish to have a factual conversation.

Edited by Nicky

28 minutes ago, Nicky said:

So I don't know how much clearer I can make these three basic facts:

1. David Hanson is not a computer scientist or an expert in AI

2. David Hanson does not claim to be a computer scientist or an expert in AI

3. David Hanson does not claim that his creations are conscious or intelligent. On the contrary, as you can read in the quote above, he openly admits that they are not. He openly admits that the whole thing is staged, and it is art rather than a scientific presentation.

On Hanson's CV (David-Hanson-CV.pdf) he bills himself as an "Executive, Artist, and Robotics Scientist." His doctorate in Interactive Arts and Technology is from the University of Texas. I looked through his CV, and I think you're wrong. He is a leading expert in robotics. You can say he's not a computer scientist, but the man taught robotics at the University of Texas in the Computer Science & Engineering department. He's also published papers on Artificial General Intelligence, which is what he's working on for his robots now.

I think all that goes to your first two points. As for #3, obviously he's not claiming that robots are conscious right now. But where does he say they won't be in the future? And where does he say that creating a conscious, feeling robot isn't his goal in life?


17 hours ago, MisterSwig said:

On Hanson's CV (David-Hanson-CV.pdf) he bills himself as an "Executive, Artist, and Robotics Scientist." His doctorate in Interactive Arts and Technology is from the University of Texas. I looked through his CV, and I think you're wrong. He is a leading expert in robotics. You can say he's not a computer scientist, but the man taught robotics at the University of Texas in the Computer Science & Engineering department. He's also published papers on Artificial General Intelligence, which is what he's working on for his robots now.

I think all that goes to your first two points. As for #3, obviously he's not claiming that robots are conscious right now. But where does he say they won't be in the future? And where does he say that creating a conscious, feeling robot isn't his goal in life?

So you went from "here's a widely respected software engineer who says his robots are conscious and capable of emotion" to "here's a guy with an arts degree who put the word 'scientist' in his CV and thinks that in the future somebody will create artificial intelligence".

You really didn't need to dig this much for that. If you had just asked, I could've told you that EVERYBODY in the field thinks that we will eventually have true artificial intelligence. This doesn't need proving. What needs proving is the notion that scientists are claiming it already exists. That's the fib this whole anti-science dissertation is built on.


On ‎11‎/‎30‎/‎2017 at 1:31 PM, patrik 7-2321 said:

Hence my question:
What are the actual practical problems with labeling certain (AI) computers as "conscious" and "intelligent", able to "perceive", able to perform advanced "computations" on "information", and able to store enormous amounts of "knowledge" in their "memory"?

[...]

What do you think about this? What's so bad about how people use these words? Should we care about it?

 

To Patrik 7-2321

Rather than attempt to add to a "collective thought" to be gleaned from a combination of the above and my thoughts, I have opted to respond directly to the OP and Patrik.

All things are what they are regardless of what people call them, i.e. regardless of what words and concepts they allege refer to those things. There will always be people who think things are not what they are, and there will always be people who call things what they are not... and finally, in the smallest category, there will be some people who do the work and are careful enough to see things for what they are and to call them what they are accordingly.

AI is "artificial intelligence".  Insofar as one remembers what "artifice" and "artificial" mean, the term is correct in the context of current and near future technology. Anthropomorphizing of machines of any kind (toys, mannequins, medical models, computers etc.) in everyday language is perfectly normal, after all.  A toy man is a "toy man" in common parlance... however, far from being a kind of "man" .. i.e. a toy"ish" man, or a man with the property or attribute of "toy"ness ... in actuality "toy man" is a "toy in the shape of a man" and not any kind of man whatever.  A toy man has hands which do not grasp and eyes which do not see... as such they are not actually hands or eyes at all (although they represent them or look like them).  A computer is configured to act on signals and media in a manner which produces different signals or media, it transforms input to output, in such a manner (we have configured it so) so as to imbue the output with something meaningful to us, information.  WE transform information by thinking, and store and recall it with memory, it is perfectly natural to use words like "memory" and "thinking" to characterize what computers do which remind us of what we do.

But nothing about how these machines do what they do is sufficiently similar (in the ways that matter... which we have yet to discover) to what the mind does that there would be ANY justification for calling them the same thing.

A human mind, however, is natural, and as a natural system it has identity; it is configured and functions accordingly. A similar natural system which, in the ways that matter, is configured and functions in a manner similar enough to a human mind would be "conscious". But until there is a science of consciousness which fully understands exactly what kinds of natural systems, configurations, and functions are conscious, rather than simply being and functioning without consciousness, and WHY, we could never hope to design a conscious system, let alone objectively and scientifically evaluate whether the thing created is conscious.

Of course, to properly understand complex systems on the verge of consciousness we would need some way to experiment, and fully developed human brains are not accessible... experimentation on them may be too dangerous. Animals, however, are, and combined with computer monitoring, simulations, and integrated cybernetic systems, we could do the tinkering necessary for the experimental investigation which scientific inquiry requires. It may be that, as with a complex weather pattern, we discover that in the conscious mind some nonlocal, system-wide process which is self-patterning or self-reinforcing is the signature of consciousness... but this is mere speculation.

Bottom line is that computers (as we know them) will be necessary in the investigation of mind, but the minds and principles of mind we finally identify will not be the same as the machines of today.

Manufactured Intelligence (MI) (as distinguished from AI) is not something we are close to creating; however, as the science of mind progresses, I have no doubt that it will be solved. If not a hundred years from now, then most likely within a thousand.

 

Edited by StrictlyLogical

3 hours ago, Nicky said:

So you went from "here's a widely respected software engineer who says his robots are conscious and capable of emotion" to "here's a guy with an arts degree who put the word 'scientist' in his CV and thinks that in the future somebody will create artificial intelligence".

For the record, none of that is me talking. I never said any of that, and I was quick to clarify what I meant by "mixed-up scientists" and "consciousness" one post after you took issue with my initial comment.


I have another answer more specific to the original question.

A machine doesn't have consciousness; it doesn't have the native access to the mental side of metaphysical reality that humans have (nor, as part of that, the native awareness of value), nor does it have free will. This gives machines some specific limitations in intelligence, and I wouldn't presume that this theoretical limitation will have no practical significance; I think it implies some limitations in the abilities of machines.

Because machines have no free will, they fundamentally aren't creative, nor can they think. Because they don't have consciousness, they will not understand, nor can they grasp meaning. Since they don't have awareness of meaning or value, they won't be able to create art or have emotions. Since they don't have any truly metaphysically mental existents, they won't perceive, nor have actual knowledge or concepts, and so they can't actually learn. So we will never see any mechanical philosophers, artists, lovers, or citizens. A machine will never have a "sense of life". A machine can never have personhood. And while machines may be able to imitate certain aspects of those identities, they are fundamentally limited in their ability to understand the nature or meaning of reality, the value of art or people, and the nature of morality or law, and so they will be limited in their ability to function, let alone to think creatively, in any of these regards.

That being said, machines can carry out the methods of logic, and thus can do something analogous to the inductive and deductive inference that humans do, and even form something analogous to concepts: with logical definitions, which can be both built up into hierarchies of abstraction and reduced to regularities in sensory data, and even with words that serve as symbols. So when it comes to the logical relationships between words, actions, and regularities in sensory data, I think machines have tremendous potential power in terms of doing something analogous to learning the logical, conceptual structure of reality, and in solving complex physical and linguistic problems on a level that we don't find anywhere else in nature except in humans.

Now, I say machines can do or have things "analogous" to what humans do or have because, for example, a percept or a concept is a mental existent; it has a different metaphysical status than the sensory data and logical representations that a machine works with. The machine can only work with percepts and concepts in the logical and reductive senses, not in the metaphysical or aesthetic senses. Its learning will be restricted to the physical, reductive meaning of words, and the abstract, logical relationships between words - not the metaphysical or normative meaning of the concepts that the words stand for. A machine cannot generate the "right" answer, or tell you the way things "ought" to be, because it fundamentally doesn't have knowledge of reality or value.

Fortunately, for pragmatic purposes, a machine can induce logical relationships in sensory data, it can form logical definitions from its observations, and it can from there deduce abstract logical connections, and ultimately, optimal, practical solutions to physical problems, and intelligible answers to verbal questions. When implemented with the right philosophical approach, and applied to the data of reality and of human natural language, these sort of mechanical algorithms can and will have extremely far-reaching economic, scientific, medical, and military consequences, to name a few.
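
As a small illustration of what that "analogous to" qualifier covers, here is a toy Python sketch (hypothetical names, my own assumptions) of mechanical induction: the program extracts a candidate defining condition common to all positive instances and absent from the negatives, a kind of non-contradictory identification performed over symbols, with no awareness of what the symbols mean:

```python
def induce_definition(positives, negatives):
    """Induce the features common to every positive instance and absent
    from every negative one: a mechanical analogue of forming a logical
    definition, with no grasp of what the features mean."""
    candidate = set.intersection(*positives)
    for neg in negatives:
        candidate -= neg  # a feature a non-instance shares can't define
    return candidate

birds = [{"feathers", "beak", "flies"},
         {"feathers", "beak", "swims"}]
non_birds = [{"flies", "scales"}]
print(induce_definition(birds, non_birds))  # {'feathers', 'beak'}
```

The procedure is logically sound as far as it goes, but the result is a relationship between symbols and regularities, not knowledge in the metaphysical sense discussed above.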

Edited by intrinsicist