Objectivism Online Forum

Zombies and Artificial Minds


Harrison,

I'm glad you revived this post. As you may recall, I don't believe that AI is possible, because my position is that it rests on a fundamental misunderstanding of what algorithms are.

Let me ask you a question.

Assuming that you believe it is possible to write an AI program, would it be possible to write a program that could write an AI program? And, if yes, would it be possible to write a program that can write a program that can write an AI program? And, if yes, would it be possible to write a program that can write a program that can write a program that can write an AI program?


13 hours ago, New Buddha said:

... Assuming that you believe it is possible to write an AI program, would it be possible to write a program that could write an AI program? And, if yes, would it be possible to write a program that can write a program that can write an AI program? And, if yes, would it be possible to write a program that can write a program that can write a program that can write an AI program?

Not so much a problem of writing, as one of editing...

"The thinking is that just as species in nature mutate and genes are deleted, added and merged to adapt to different environments, the mother robot would facilitate its own version of evolution. Given only a single command to build a robot capable of movement, without any human intervention or computer simulation, the mother robot did just that." http://newatlas.com/evolution-machine-mother-robot/38903/

I think Philip K. Dick's threshold of empathy is probably the more relevant one to overcome.


Slight tangent: would a system which behaves and functions like a real conscious human brain qualify as "artificial"? If neurons and their connections, synapses, and neurotransmitters, in all their complex organization and their ebbing and flowing processes, were replicated... perhaps with synthetics, or biologics, or partly electronics... if it did what our minds do, and gave rise to consciousness, should we even call that artificial?

I tend to agree with New Buddha, if I understand his thrust: simulation, algorithmic mimicry that gives the appearance of a function or process (i.e., that reproduces an expected result of the process), is not the same as reproducing the function or process itself using different means or material.

Starting with a wooden lever: calculating what a wooden lever does simulates leverage; creating a steel lever creates another instance of functioning leverage in reality.


The idea that I'm trying to develop is this: what are the limits of computation and/or formal, deductive logic?

People generally think that "in theory" it should be possible to write an AI program. If so, would it also be "in theory" possible to write a non-AI program that can write an AI program? (To be very clear, I am not advocating a "theory vs. practice" dichotomy.)


I'm looking for a distinction, which I presume would define alternate types of intelligence. If the question is whether programs can be written to produce intelligence and/or simulate intelligence, I think so. If it looks like a duck and thinks like a duck, it's a duck, at least effectively. But can an intelligent duck think like a human? That's why I bring up Dick's premise that empathy might be the kind of distinction that separates human intelligence from other types of intelligence.

It's an interesting question to me because I think we are nearing a time when artificial intelligence will achieve independence from, and enter into competition with, human intelligence. Sci-fi sources tend to agree that it doesn't work out too well for the wetware at that point. I guess what I'm working towards is that it may not be so relevant how a program acts as how a program feels about its actions.


52 minutes ago, New Buddha said:

People generally think that "in theory" it should be possible to write an AI program. If so, would it also be "in theory" possible to write a non-AI program that can write an AI program?

Yes. It wouldn't just be possible, it would be easy. You just tell it to write the exact same program that the people who wrote the AI did.
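A minimal sketch of that in Python, where ai_source is a hypothetical placeholder for whatever source code the human AI authors wrote:

```python
# Hypothetical placeholder: the source code of the human-written AI program.
ai_source = '''
print("I am the hypothetical AI program.")
'''

def write_ai_program(path="ai.py"):
    # This "writer" has no intelligence of its own; it just emits
    # the AI program's source verbatim.
    with open(path, "w") as f:
        f.write(ai_source)

write_ai_program()
```

And a program that writes that program is just another copier one level up, so each additional step in the original question's regress is no harder than the last.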


1 hour ago, StrictlyLogical said:

Slight tangent: would a system which behaves and functions like a real conscious human brain qualify as "artificial"? If neurons and their connections, synapses, and neurotransmitters, in all their complex organization and their ebbing and flowing processes, were replicated... perhaps with synthetics, or biologics, or partly electronics... if it did what our minds do, and gave rise to consciousness, should we even call that artificial?

Yes, artificial means man-made. Not that it matters what you call it.

The significance of such an artificial brain would be that it wouldn't have the physical limitations of the human brain: it could be made bigger in scale. It's just ridiculous to think that we could have the technology to re-create a human brain, but would be limited to the exact same size as the human brain. And, of course, once it's bigger than a human brain, it could expand itself faster than any human could expand it... resulting in the Singularity.

That said, I seriously doubt that's the path we will take to the Singularity. I see two easier paths:

1. Rather than re-creating the human brain from scratch, we could take an existing human brain and expand it. The end result would still be the Singularity. The expansion would be exponential, for fairly obvious reasons.

2. Creating a software/hardware combo that is smarter than a human. Same result.

My suspicion is that we'll be able to enhance ourselves (using computer tech) before we're able to create a pure AI. But you can't tell for sure which will happen first, because it's subject to human choice: one of the two paths (brain modification, or pure AI) could be closed off through political means, or by the choice of the researchers themselves, for long enough to allow the other path to come about.

But every argument I've heard against the possibility of either happening, given our ability to expand computing power and the ability of computers to interact with the world, has been very, very weak, and very easy to dismiss.

1 hour ago, Nicky said:

Computation doesn't have a limit. Feel free to try and name that limit; I'll just add one to it and prove you wrong.

The limits of computation within Euclidean geometry are determined by its axioms. You cannot derive non-Euclidean geometry without the introduction of new axioms, which requires inductive reasoning. Inductive reasoning does require thinking logically, but it is distinct from the application of deductive, formal logic within a defined system. That's why I also said, in the above post, formal deductive logic. Deductive reasoning is not capable of generating new knowledge.

I've freely admitted on another post that my knowledge of programming is very limited.  But I'm genuinely curious.  How does programming address inductive reasoning?


8 minutes ago, New Buddha said:

I've freely admitted on another post that my knowledge of programming is very limited.  But I'm genuinely curious.  How does programming address inductive reasoning?

I don't think that's quite the right question. For programming to address induction, there is no "special" task to accomplish that's different from explaining how induction works. The better question is: what examples of induction are there in computing today, if any? And there are:

http://neurosciencenews.com/robotics-learning-neurodevelopment-3187/

http://www.bloomberg.com/features/2015-preschool-for-robots/
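As a toy illustration of induction-in-code (a hypothetical sketch, not taken from those links): a program that generalizes a concept from particular positive examples, in the spirit of classic Find-S-style concept learning:

```python
# Start from the first positive example and generalize attribute-by-attribute
# as new positive examples arrive ("?" means "any value fits").
positives = [
    {"color": "brown", "legs": 4, "barks": True},
    {"color": "black", "legs": 4, "barks": True},
]

hypothesis = dict(positives[0])
for example in positives[1:]:
    for attr, value in list(hypothesis.items()):
        if example[attr] != value:
            hypothesis[attr] = "?"  # generalize away the mismatch

print(hypothesis)  # {'color': '?', 'legs': 4, 'barks': True}
```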

The "limits of computation" you're thinking of doesn't really refer to the limits of computing. Rather, it's the limits of deductive axiomatic systems. This is not what stops or prevents the development of artificial (man-made) intelligence. Theories of computation deals with the sort of resources needed to solve certain classes of complex problems. The limits of computation are determined by a computability theory. I don't know a lot about the P=NP problem, but so far no one has a definite answer.

Not sure it's so relevant, but this psychology professor at Princeton does a lot of work on induction. Much of it is applicable to the thoughts you've had so far and the questions you're asking: http://www.princeton.edu/~osherson/


1 hour ago, New Buddha said:

The limits of computation within Euclidean geometry are determined by its axioms. You cannot derive non-Euclidean geometry without the introduction of new axioms, which requires inductive reasoning. Inductive reasoning does require thinking logically, but it is distinct from the application of deductive, formal logic within a defined system. That's why I also said, in the above post, formal deductive logic. Deductive reasoning is not capable of generating new knowledge.

I've freely admitted on another post that my knowledge of programming is very limited.  But I'm genuinely curious.  How does programming address inductive reasoning?

I'm probably not the right guy to ask about this (because Prolog, the programming language most people learn to get started with inductive programming, probabilistic programming, and other forms of machine learning, is the only class I ever failed in college... and, since then, I've made sure to keep a safe distance from anything even remotely to do with the subject), but, from what people who passed tell me, things are going well.

Inductive logic programming is behind advances in natural language processing, for instance. The next big thing for it (again, from what I hear) is probably gaming: making the AI opponent able to learn.
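From what I'm told, a bare-bones version of "an AI opponent able to learn" looks something like tabular Q-learning. A rough sketch, with a made-up two-state game standing in for anything real:

```python
import random

# Made-up game: states 0 and 1, actions 0 and 1; only action 1 in
# state 1 pays off, and the next state is random.
def step(state, action):
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return random.choice([0, 1]), reward

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly exploit what's been learned, sometimes explore.
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print(Q)  # Q[(1, 1)] ends up highest: the opponent "learned" the payoff
```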

Another interesting concept in programming (one that, like I said, I prefer to marvel at from a safe distance) is abductive reasoning, and the PRISM programming language AI researchers in Japan have developed (no relation to the NSA program of the same name): http://rjida.meijo-u.ac.jp/prism/ It allows the programmer to write "probabilistic inference algorithms", and it is constantly being improved for efficiency.
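PRISM itself sits on top of Prolog, so here is only a rough Python analogue of the kind of probabilistic inference it supports: abduction as picking the most probable explanation for an observation. All of the numbers are made up:

```python
# Two candidate explanations for the observation "the grass is wet",
# with made-up priors and likelihoods.
priors = {"rain": 0.3, "sprinkler": 0.2}
likelihood_wet = {"rain": 0.9, "sprinkler": 0.8}

def abduce():
    # Score each hypothesis by prior * likelihood, normalize, and
    # return the best explanation along with the full posterior.
    scores = {h: priors[h] * likelihood_wet[h] for h in priors}
    total = sum(scores.values())
    posteriors = {h: s / total for h, s in scores.items()}
    return max(posteriors, key=posteriors.get), posteriors

print(abduce())  # 'rain' wins, with posterior ~0.63 vs ~0.37
```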


26 minutes ago, New Buddha said:

In your opinion, what is? AI is always "15 years away!", and has been since the 1950s.

Understanding how conceptual learning works is the barrier. We barely understand it in children, let alone how to make a robot that learns like a child. But great strides are being made toward solving the problems of learning, with work in robotics and psychology.

Programming isn't a deductive system. What it IMPLEMENTS may or may not be. It's better to say programming LANGUAGES may operate deductively or not, depending on how they're built. Computer science itself is not necessarily any more inherently deductive than engineering.

NOTE: I'm not a programmer, aside from about four programming classes in an IT program, but I try to keep myself informed about computer stuff.


1 hour ago, Eiuol said:

Computer science itself is not necessarily any more inherently deductive than engineering.

But engineering, at any given moment in time, is a deductive axiomatic system. When an engineer sits down to calculate the size of a beam, the outcome of his calculations is determined by the premises/axioms from which he works. He doesn't "learn" anything new. This is why an engineer can use a computer program to perform the calculations. New empirically derived information IS constantly being incorporated into engineering disciplines, but in the same way that men learned, by experience/induction, that the axioms of Euclidean geometry did not apply to curved surfaces.

The idea that I'm working through is that the premises behind AI (as commonly held) lie in Analytic Philosophy, and are a form of bottom-up Reductionism. Edit: And this would stand in stark opposition to Objectivist Epistemology.

11 hours ago, New Buddha said:

The idea that I'm working through is that the premises behind AI (as commonly held) lie in Analytic Philosophy, and are a form of bottom-up Reductionism.

Ok. Give an example of bad philosophy being applied in AI development.

Frankly, that should've been the starting point of this "work". That's your premise. You should've confirmed it before going off dismissing all AI research.


5 hours ago, Nicky said:

Ok. Give an example of bad philosophy being applied in AI development.

The current displacement of human labor by robotics is an indication that the advancement from useful tool to competitive life form is, if not bad, at least a dangerous philosophy to pursue. For example, if Henry Ford had introduced self-governing robotics, would the benefits of increased capacity for human labor and earned wages during the industrial revolution have occurred? I think we're already seeing indications that humans are falling behind in a labor race designed to make them obsolete.


1 hour ago, Devil's Advocate said:

The current displacement of human labor by robotics is an indication that the advancement from useful tool to competitive life form is, if not bad, at least a dangerous philosophy to pursue. For example, if Henry Ford had introduced self-governing robotics, would the benefits of increased capacity for human labor and earned wages during the industrial revolution have occurred? I think we're already seeing indications that humans are falling behind in a labor race designed to make them obsolete.

This reminds me of the Luddite reasoning that was behind trying to destroy the cotton and woolen mills.


Yes, and so it would be as long as there remained some human niche in a marketplace dominated by an artificial labor force superior by every measure to the human one. Perhaps you can suggest such a niche? The philosophy being acted on in this case, carried to completion, would eliminate that niche entirely.


On 9/17/2016 at 9:41 PM, New Buddha said:

But engineering, at any given moment in time, is a deductive axiomatic system. 

As a field, engineering is not itself purely deductive. That was my only point. No field is. You could argue that some programming styles or techniques operate as purely axiomatic systems, but that isn't to say all programming is limited to purely axiomatic systems.

I don't think all of AI is derived from analytic philosophy. Some thinkers were, some weren't. You seem to be stuck in computing from the 60s or earlier. You're just skeptical that computational ability or technology is compatible with induction or experience. The links I gave before are supposed to show the ways that induction is being thought about.


3 hours ago, Eiuol said:

As a field, engineering is not itself purely deductive.

I don't claim that it is.  In fact, just the opposite.

On 9/17/2016 at 6:41 PM, New Buddha said:

New empirically derived information IS  constantly being incorporated into engineering disciplines,

Engineering, and science in general, is "post-Bacon", meaning that it is inductive and empirical, i.e., it proceeds from particulars to generalizations.

My post starts out with "at any given moment in time", meaning both that the applied sciences are open-ended and that rigorous attention must be paid to definitions as they are currently understood and used.

Eiuol,

To clarify, I am talking about the actual mathematical equations that are used in engineering to solve for such things as moment, shear, deflection, etc. They may, and do, change over time (years, decades, centuries), but the operations of the equations are purely deductive. My introduction of induction to this thread follows along the lines of Harriman's The Logical Leap, as an attempt to understand how programming, which is largely, if not totally, deductive, can account for the acquisition of new knowledge.
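To make the deductive character of those equations concrete, here is one of them as a program: the standard midspan-deflection formula for a simply supported beam under uniform load, delta = 5wL^4/(384EI), with hypothetical numbers. Given the same premises, the output is fully determined; nothing in the calculation "learns":

```python
def midspan_deflection(w, L, E, I):
    # Simply supported beam, uniform load: delta = 5*w*L^4 / (384*E*I).
    # Consistent units are the caller's responsibility (here N/mm, mm, MPa, mm^4).
    return 5 * w * L**4 / (384 * E * I)

# Hypothetical inputs: 5 N/mm load, 6000 mm span, steel (E = 200000 MPa),
# I = 8.5e7 mm^4. The result follows necessarily from the premises.
print(midspan_deflection(w=5.0, L=6000.0, E=200000.0, I=8.5e7))  # ~5.0 mm
```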

 


23 hours ago, dream_weaver said:

... Luddite reasoning ...

My counter to your suggestion that the Henry Ford example was similar to that of the Luddites was serious, albeit offered as a friendly challenge :devil:

The Luddites were opposed to losing a familiar form of labor, even though the advance in technology made alternate forms of labor possible. Even today's opponents of migrant labor retain the ability to compete (at a lesser wage) for available jobs. The kind of technology we are discussing here doesn't improve human labor; it removes it. Let that sink in...

With no upper limit to technological advances in robotics and AI, all humans will effectively be displaced from any wage-for-labor role in the marketplace. No wages = no consumers = no marketplace. My position is that the introduction of sentient autonomous entities as competitors in a human marketplace, as a practice of laissez-faire capitalism, would be a form of social suicide, akin to the development and release of a super-predator designed to eliminate all human competition.

So I ask you again, what labor niche remains for humans to fill in such a scenario?

