
Can computers engage in concept-formation?


When I'm thinking about concept formation and Objectivism, "measurement omission" comes to mind.

Measurement omission is a great way of understanding and explaining concept formation in humans. Unfortunately, when it comes to computers, things are very different.

Basically, people extract an abstract class of objects by their common properties, properties of particular meaning to men. It just seems to me that *identifying* properties comes naturally to us; omitting their measurements in order to group objects into classes is secondary.

If I were to teach a computer how to form concepts, my first problem would be how to have it identify, detect and define properties.

Those of you familiar with object-oriented programming know that contemporary programming languages work with classes and with particular instances of those classes, called objects.

Basically, what the programmer does is define each class's properties and then operate on objects of that class. The particular values of those properties (the measurements) are of secondary importance.
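To make that concrete, here's a minimal sketch in Python (chosen only for brevity; the class name and attributes are invented for illustration): the class fixes which properties every instance has, while the particular measurements vary from object to object.

```python
# A minimal illustration: the class definition fixes *which* properties
# every instance has; the particular values (the "measurements") are
# supplied per object and play no part in the definition itself.
# The class name and attributes are invented for illustration.

class Table:
    def __init__(self, height_cm: float, top_area_cm2: float, material: str):
        self.height_cm = height_cm          # every Table has a height...
        self.top_area_cm2 = top_area_cm2    # ...and a top surface area...
        self.material = material            # ...and a material.

# Two very different tables are still instances of the same class;
# their measurements are "omitted" from the concept's definition.
kitchen_table = Table(75.0, 12000.0, "oak")
coffee_table = Table(45.0, 6000.0, "glass")

print(type(kitchen_table) is type(coffee_table))  # True
```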

So... how do people extract properties from the vastness of impulses out there, and how do you think a machine could do it? (since computers don't really care about measurements the way we do, but have serious problems with properties)


I do some Flash programming (intermediate level - I don't do any data-driven programming), so I understand what you are talking about here.

I don't understand why it is important to teach abstraction to a computer. All that is important is that I am able to abstract, and can write my ActionScript based on what I know the computer can understand.


Gabrielpm,

I've done a fair amount of programming in C++, PHP, and VB, and I know a thing or two about object-oriented programming.

The resemblance to Objectivist Epistemology is actually quite striking. However, the human does all the really tricky stuff when he writes the code.

Before writing the program, he says to himself, "What are the properties that each data object will have to have?"

A computer doesn't deal with "things" quite like an animal does. It deals only with information. In order to "program a computer to form concepts," you'd have to tell it what to measure, and how, before even starting. Then you'd have to program it to separate certain things according to similarity and difference, and program it to be able to define those qualities on the fly (another HIGHLY non-trivial task). Even before doing that, you'd have to build a machine capable of goal-directed activity, or all the measurement in the world is still not concept-formation.
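To make the "separate things according to similarity and difference" step concrete, here's a hedged sketch of one simple way a programmer might do it today: group items whose measurement vectors lie within a chosen distance of each other. Every name, feature, and threshold here is hypothetical, and notice that a human still decides what to measure, how to compare, and where to draw the line, which is exactly the non-trivial part described above.

```python
# A toy sketch of grouping by similarity over hand-chosen measurements.
# The features, the distance metric, and the threshold are all decided
# by the human programmer; the machine only applies them.

import math

def distance(a, b):
    """Euclidean distance between two equal-length measurement vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_by_similarity(items, threshold):
    """items: list of (name, measurement_vector) pairs. Returns groups of similar items."""
    groups = []
    for name, vec in items:
        for group in groups:
            # Similar enough to an existing group? Join it.
            if distance(vec, group[0][1]) <= threshold:
                group.append((name, vec))
                break
        else:
            # Different from every existing group: start a new one.
            groups.append([(name, vec)])
    return groups

# Hypothetical data: (height_cm, weight_kg) measurements.
objects = [("cup", (10.0, 0.3)), ("mug", (12.0, 0.4)), ("table", (75.0, 20.0))]
print(group_by_similarity(objects, threshold=5.0))
# -> the cup and mug end up in one group, the table in another.
```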

Mind you, such an application, even if it modelled concept formation only very roughly, would have some really incredible applications. It is conceivable that such a system would allow us to feed it a huge amount of raw data, and turn out reports and analyses of factors that we hadn't even considered. (Incidentally, I'm sort of working on something somewhat in this direction. I'd share more, but I'm paranoid about my intellectual property being stolen!)

how do people extract properties from the vastness of impulses out there, and how do you think a machine could do it?

1. Automatically as the result of millions of years of harsh evolution.

2. After an unfathomable amount of work and at least a fair amount of genius went into the project.

Isaac


Instead of making a computer take precise measurements, you can have it compare the size of a given object to the size of some other object.

I don't think you can have a concept-building computer that doesn't have any sensors with which to perceive and interact with reality. So let's say you make a robot and it has a hand. There's a cup on the table in front of it. The task of the robot is to learn to hold the cup in its hand. You don't need a measurement in centimeters or millimeters of how big the cup is, or how far away it is. What you need is simply to check whether the distance between the robot and the cup is shorter than the robot's arm, and whether the hand isn't too small to hold the cup. When the robot begins reaching for the cup, it doesn't follow a trajectory that's already calculated. It moves its arm and then checks whether the distance between its hand and the cup is shortening. If not, it must move the arm in another direction. The robot then moves the arm in whichever direction makes the distance shorten fastest.
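Here is a rough sketch of that reaching strategy in code (the flat 2-D world, step size, and coordinates are all invented for illustration): no trajectory is precomputed, the robot just keeps trying small moves and takes whichever one shortens the hand-to-cup distance the most.

```python
# A toy version of the reaching strategy described above: no precomputed
# trajectory. At each step, try the possible small moves and keep the
# one that shortens the distance to the cup the most. The 2-D setup,
# step size, and coordinates are invented purely for illustration.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def reach(hand, cup, step=1.0, tolerance=0.5, max_steps=100):
    moves = [(step, 0), (-step, 0), (0, step), (0, -step)]
    for _ in range(max_steps):
        if dist(hand, cup) <= tolerance:
            break  # close enough to grasp
        # Greedily pick the move that brings the hand closest to the cup.
        hand = min(((hand[0] + dx, hand[1] + dy) for dx, dy in moves),
                   key=lambda p: dist(p, cup))
    return hand

print(reach(hand=(0.0, 0.0), cup=(3.0, 4.0)))  # ends at (3.0, 4.0)
```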

This example, although somewhat trivial, holds the key to concept-building. You have a goal and you find a way to achieve it. Then, when you're done, you remember how you reached the goal the first time, and the next time you're more experienced at it. However, in this manner, you would have to write a new program every time the final objective changed. This is why I believe there is a more fundamental pattern of concept-building: what you basically do when you write the above robot's program is tell it how to build the concept of getting the cup and holding it in its hand. The goal, however, is for the computer to be able to "write its own programming": to check the means at its disposal, to choose the objective, and to find out on its own how to use or acquire the necessary means to reach the objective. Finding a way to use the means to reach the objective is, IMO, by far the easier part; the hard part is the choice-making.


Teach a computer to form concepts? There are some breakthroughs in computer science that one would have to make before one could even approach concept formation.

First, one would have to tackle Turing's Thesis, which states that the most powerful (in terms of computability) kind of computer is the Turing Machine. This is a computer with a single tape of unlimited length. It is limited to the commands (as I recall) of move forward, move backwards, read from the current tape position, write, addition, conditional test, and branch.

Turing proved that such a computer could not solve the "halting problem". I.e., it is not possible to write a computer program that determines whether another program, along with its entire input data set, will halt or loop forever.
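For readers who haven't seen it, here is the standard reductio behind that result, sketched as a runnable Python toy rather than a formal proof. The halts() below is a deliberately naive stand-in of my own; the point is that any candidate decider anyone writes can be defeated the same way.

```python
# The classic diagonalization argument behind the halting problem, as a
# runnable toy. Pretend halts() were a perfect decider; the deliberately
# naive stand-in below shows the shape of the refutation: paradox() does
# the opposite of whatever the decider predicts about paradox(paradox).

def halts(program, argument):
    """A stand-in 'decider'. Turing proved no always-correct version can exist."""
    return True  # naively predicts that every program halts on every input

def paradox(program):
    """Do the opposite of whatever halts() predicts about program run on itself."""
    if halts(program, program):
        return "so paradox loops forever (in principle)"  # a real version would loop here
    else:
        return "so paradox halts"

# If halts() says paradox(paradox) halts, paradox loops; if it says it
# loops, paradox halts. Either way the supposed decider is wrong.
print("halts() predicted:", halts(paradox, paradox), "-", paradox(paradox))
```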

I don't recall who proved it, but Von Neumann machines (i.e. modern CPUs with RAM, etc.) are Turing-equivalent.

One would at least have to show that Turing's error was in not recognizing that a premise can be true, false, or arbitrary (he recognized only true or false). But then the bigger problem is to program this into a computer. True and false are at least easy to represent: 0 or 1. This allows one to sidestep the distinction between the intrinsicist's and the Objectivist's view of true and false; in conventional computer programming, such a distinction is not necessary.

How does one represent arbitrary, and how does one program a computer to determine it? Context may as well be the next galaxy as far as getting a computer to see it.
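Representing a third value is the trivial part, and a sketch makes that clear (the enum and the toy classify() function below are hypothetical, not anyone's actual proposal). The hard part, as said above, is deciding when a claim actually is arbitrary, because that judgment depends on the agent's whole context of knowledge rather than on a couple of flags.

```python
# Representing "arbitrary" alongside true and false is easy: an enum will do.
# Deciding *when* a claim is arbitrary is the hard part, because in real
# epistemology "evidence" means the agent's entire context of knowledge,
# not the two toy boolean flags used here for illustration.

from enum import Enum

class Verdict(Enum):
    FALSE = 0
    TRUE = 1
    ARBITRARY = 2   # asserted without evidence; no tie to the context of knowledge

def classify(evidence_for: bool, evidence_against: bool) -> Verdict:
    if evidence_for and not evidence_against:
        return Verdict.TRUE
    if evidence_against and not evidence_for:
        return Verdict.FALSE
    return Verdict.ARBITRARY

print(classify(evidence_for=False, evidence_against=False))  # Verdict.ARBITRARY
```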

A Turing machine operates on one datum at a time. Context is something that's built on many individual data all seen as part of one "big picture". How do you code this into a machine?

Anyway, the first breakthrough would be one of two things. One would be to show that the Turing Machine is more capable than Turing thought; this would involve a more advanced grammar and state machine. The other would be to describe a more powerful machine (the human brain, perhaps?) and, in particular, to show how it is not Turing-equivalent.

Once past this issue, I think there are a number of others that one would have to address.

P.S. If you think you have a solution to Turing's Thesis, I'd like to discuss with you offline! ;)


Bearster,

I haven't done much research on this, but I've read somewhere that quantum computers could have both true and false, as ordinary computers do, but could also represent this "arbitrary" bit. As I said, I haven't looked into it much, but if that's really possible then the problem is solved. :D


People use the word "consciousness" a lot, as if it is a deep and meaningful term which proves lots of theses, without any clear understanding of what really constitutes consciousness (beyond, of course, the direct knowledge that all conceptually conscious entities possess).

The fact that all conscious entities at present are biological organisms does not prove that biological life is a logical requirement for consciousness. When we say "consciousness," we are referring to a particular way of processing data that is gained from the world around an agent through the use of mechanistic information-gathering parts. I'm not going to launch into a detailed analysis of what exactly consciousness constitutes, but suffice to say that if the information-processing architecture meets certain requirements, then we can say that the agent is conscious. (This is basically a brand of functionalism: "if it thinks like a conscious agent, it's a conscious agent.")

An important part of consciousness is that we can describe the process without having to describe the particular physics of the conscious agent. One does not have to discuss neurology to discuss psychology, for example. Granted, neurological knowledge may enrich psychological knowledge, but they're still two separate fields. A 2-year-old or a primitive knows nothing of brains, but he can still say true things about his thoughts. We can talk about information storage and processing independent of the medium, while assuming at all times that there always is *some* medium. But if the structure of the agent is such that it permits the particular sort of processes that constitute consciousness, then the agent is conscious - and it has not been shown conclusively that a biological implementation is a requirement of this functionality.

"But it HAS to be alive!"

That's not an argument. The "non-conscious perfect human replica," which can pass for a human but does no more thinking than a highly specialized toaster, is an impossibility. In order to do human-like things reliably, an agent would require highly complicated control structures much like those in a human mind. (Of course, it's possible as well that they might be radically different in some ways.) If that were the case, then we could talk about that information processing apart from talking about the physics of its "brain," and it would have to possess information regarding its internal states in order to perform its activities; so, according to any reasonable account of consciousness that I've seen, it would be conscious.

However, at this point in history, computers are closer to toasters in their information control architecture than they are to people. Meaning, they are completely non-self-determined, non-goal-oriented, and trivially mechanistic. They're complicated, but not conscious by a long shot.

Isaac

http://isaac.beigetower.org


I have a degree in Cognitive Science, Isaac, and I have to say that you have stated its position very well. I completely reject that view and it doesn't just come down to "it HAS to be alive."

If you are interested in why Objectivism rejects such a view, I refer you to Ayn Rand's "The Objectivist Ethics" and Harry Binswanger's October 1986 article "The Goal-Directedness of Living Action" (The Objectivist Forum).

P.S.--I forgot an important section from OPAR ("Objectivism: The Philosophy of Ayn Rand" by Leonard Peikoff). See "Life as the Essential Root of 'Value'" pp. 207-213 hard cover ed.


If a computer could be programmed to be conscious, could it be called alive? If it could choose its values and achieve them, if it could own property, earn it, and make other conscious computers, should we call it alive and grant it individual rights?


Short answer: yes and no.

Long answer: Yes, it is conceivable that a non-biological entity could qualify as a person, and if it did, then it would have rights.

However, you've radically over-simplified the problem. A conscious artifact would be as dissimilar as can be from computers as we know them today. So, "if you could program a computer to be conscious"--stop right there; you can't. That's like saying, "if you could ride a tricycle naked up Mt. Everest, would you be able to see my house from up there?"

Isaac

http://isaac.beigetower.org


Bowzer,

I've read OPAR and "The Objectivist Ethics." I've also read Binswanger's "Life as teleological goal-directed action" and "Volition as cognitive self-regulation." I'm not sure if maybe you're referring to one of those, or if this is some other Binswanger article that I haven't seen.

However, I'm still not convinced. The argument comes down to, basically, "An agent must be meaningfully goal-directed in order to be conscious." That is, we can reject the Dennett-style argument that a thermostat exhibits goal-directedness simply because it keeps the room at a certain temperature. The goal must "make a difference" in some meaningful sense to the agent pursuing it.

It is not that difficult to imagine an artifact that would exhibit meaningful self-generated goal-directedness on the Objectivist account, which could be created with current technology. Of course, it's cheaper and easier to build devices which do the same tasks without the overhead of complicated image processing and goal selection and whatnot, not to mention the inherently delicate nature of an agent that must take certain action or die. (That's not to say that such a device would be a person or would be purposeful, mind you, but neither are lots of other goal-directed agents, like sunflowers and sea sponges.)

I'm also not convinced that the requirement of mortality is really justified, and this is the crux of the argument for life as a requirement of meaningful goal directedness. If there were some way to inject me with a serum that would make me unable to be destroyed by any known means, and remove my requirement for food, water, and sleep, and so on, I would still be a person. I would still have goals, and pursue them. Things would still "make a difference" to me, even though I would not be threatened with death. For example, I could crash my car, just for the fun of it, which I would never do now. So why not do it? Because I enjoy the car. So it would still suck. One does not have to be mortal to regard some goals as more worthy of pursuing than others.

In short, I think that the Objectivist argument that an agent must be biological in order to be conscious really DOES come down to "but it HAS to be alive!" Why does "the meat matter"? Consciousness is a functional concept, not a structural one. If an agent were to process information in the same way that a human does, (or in a way that was similar to a relevant degree in the relevant aspects that make up what we call "consciousness"), then I'd be prepared to say that it is conscious.

Binswanger's argument, as I remember it, was basically, "Machines are programmed to do only what we tell them to do. Therefore, they're fundamentally incapable of goal-directedness." This reflects a grave lack of knowledge about computer engineering on Binswanger's part. (Actually, since it was written in the '80s, perhaps it was just that he allowed his philosophical realm of possibilities to be bound by the technical capacities of the day - also a mistake.) None of the people who designed and programmed Deep Blue could have beaten Kasparov in a game of chess. I'm not claiming that Deep Blue was a person, or even goal-directed; but it is an example of a machine whose behavior is radically outside the realm of things that its programmers could have foreseen.

You can make two objections to this point, that I can see.

1. It is not feasible to create a non-biological agent that does the "consciousness things", whatever they may be.

This is an empirical question, for cognitive scientists and engineers to solve, not philosophers. Philosophy's task in this is to set up the problem - define what consciousness really is, and thus, figure out what would qualify as a solution. It is NOT to state at the outset whether or not this is possible. It would have been right for a philosopher to define what "flight" is in the 1800s, but a philosopher who said that a heavier-than-air flying machine is impossible would have been myopically overstepping his bounds.

2. Even if it processed information in exactly the same way, if it's not biological, it can never be a person.

This is ridiculous. It comes down to saying that there is some intrinsic quality of biological organisms apart from any of their characteristics. Objectivism rejects this sort of view in all other areas - yet Objectivists seem to make this argument quite often regarding this question, in my experience.

But when you destroy a machine, it just changes form, it isn't really "gone." Only living things can be destroyed.

If I write a document on my computer, and you smash my computer, the matter has only changed form. But the document may well be gone forever, and the computer may be irreparable. Similarly, if you shoot a man in the head, his matter remains, but his information and processes are quite likely gone for good. I do not see any reason to believe that this is a property that is restricted to biological organisms.

<$0.02>

Objectivism is strong on ethics and politics and metaphysics, and the best account of concept formation ever. But when it comes to the trickier aspects of properly marrying epistemology and metaphysics, I think that Ayn Rand had better things to do, and no one yet in her wake has been up to the task of expanding her philosophy in this area.

</$0.02>

Isaac

http://isaac.beigetower.org


Binswangers argument, as I remember it, was basically, "Machines are programmed to do only what we tell them to do.  Therefore, they're fundamentally incapable of goal-directedness."  This reflects a grave lack of knowledge about computer engineering on Binswanger's part.  (Actually, since it was written in the 80's, perhaps it was just that he allowed his philosophical realm of possibilities to be bound by the technical capacities of the day - also a mistake.)

Before you accuse Harry Binswanger of "a grave lack of knowledge" or of making a philosophical "mistake," perhaps you might read and directly address the arguments he makes in regard to the meaning of "goal-directedness."

None of the people who designed and programmed Deep Blue could have beaten Kasparov in a game of chess.  I'm not claiming that Deep Blue was a person, or even goal-directed ...
Then why bring it up? The issue is goal-directedness, and its philosophical and scientific meaning in regard to computers.

<$0.02>

Objectivism is strong on ethics and politics and metaphysics, and the best account of concept formation ever. But when it comes to the trickier aspects of properly marrying epistemology and metaphysics, I think that Ayn Rand had better things to do, and no one yet in her wake has been up to the task of expanding her philosophy in this area.

</$0.02>

Which only goes to show that you get what you pay for.


If you are interested in why Objectivism rejects such a view, I refer you to Ayn Rand's "The Objectivist Ethics" and Harry Binswanger's October 1986 article "The Goal-Directedness of Living Action" (The Objectivist Forum).

Just a small correction. The article is in the August 1986 issue, not October.


Just a small correction. The article is in the August 1986 issue, not October.

As an extra added bonus, that issue also has an article I wrote: a review of the book Marva Collins' Way.


It seems like the contradiction is that a human would have to create that first conscious computer, and the way it is programmed would control its functions and abilities. So if the computer's "consciousness" is dependent on how it was programmed, would it really be an independent conscious being - or simply the tool of whoever created it?

Would its choices be pre-programmed outcomes of logical thought originating from human premises of logic, or would it be created in a way that allows for a kind of evolution of its own logic? Another question would be: would it have the concept of mortality (because without that concept morality would be altered significantly)?

I think this is an interesting idea of a conscious computer, but I don't know enough about computer science to understand the conceptual programming that goes into these machines; so hopefully someone with more knowledge can jump into the conversation and answer some of my questions, or explain the logical fallacy within the idea of a conscious man-made machine, if there is one.


"It seems like the contradiction is that a human would have to create that first concious computer, and the way it is programmed would control its functions and abilities. So if the computer's "conciousness" is dependent on how it was programmed, would it really be an independent concious being - or simply the tool of whoever created it? "

No more so than our inability to digest the fibrous plants of the plains, and our lack of predatory locomotion and attack physiology after a dramatic climate change forced the jungles to recede, make those circumstances the master of our minds (although they probably were the cause, IMO).

But this of course assumes that the "premises" the computer would be programmed with are neutral and do not interfere with its "independence". While I do not know much about computer science, I would imagine this would be an incredibly difficult, and most likely undesirable, situation.

I like my computers like the Robot in Rocky IV, designed to be my servant :)


I'm a radical optimist when it comes to computers; I believe firmly that the technological revolution will enhance our lives to a greater extent than even the industrial revolution did. I am ceaselessly taken aback by what computers do for us every day. That said, I also know that computers will never be conscious.

I know where you're coming from, source. Given the state of academic fields today--fields like cognitive science, artificial intelligence and (god save us) philosophy of mind--it is understandable that a question like this would arise. Computers are anthropomorphized and ascribed characteristics of consciousness (e.g., "information processing", "learning", "memory", etc.) in pretty much every theory out there. Even basic computer textbooks make this mistake. If you know the term from Objectivism, stolen concepts are found in abundance.

We know that a program will never make a machine conscious because of what we know about the nature of consciousness. Consciousness is a teleological function of living organisms. Consciousness--at its most fundamental level--is a survival mechanism. This is just as true if you are talking about a rat as it is if you are talking about a human.

On the other hand, this will never be true of computers, which have no need to act. You cannot instill the fundamental alternative of life or death into a machine, no matter how complex the program.

I suggested some readings in this thread.


Richard Taylor gives a good argument against AI in his book "Action and Purpose". I'll summarize it here.

Imagine a little old lady working in a factory line, threading needles. She's been at it for a while and she's gotten pretty good at it. She can manage to get the thread through the hole 95% of the time. One day she maliciously decides to thread ineptly. She fumbles, shakes, etc., and only gets the thread through 50% of the time. The difference in her behavior can only be understood by reference to her purpose -- say, a grudge against her employer.

Eventually someone develops a needle-threading machine, which replaces the lady on the assembly line. It has the same accuracy she originally had: it only misses 5% of the time.

Could we say that the machine is trying to get the thread through the eye on purpose? No, obviously not. Imagine telling an engineer to construct a machine which would miss on purpose. What could he do, except make a flawed machine?

[Quoting now from Taylor]

"Now the lady, as we described her, was not merely in need of adjustment when she began missing half the time. She could do much better; she was still perfectly able to get the needle through almost every time; she was only missing intentionally, and perhaps pretending otherwise. But what would a machine be like which performed in the same way, but which was still, without any adjustment, able to do better, one which could get the needle through almost every time but instead missed half the time, on purpose? Suppose an engineer were told to construct two needle-threading machines, each of which missed half the time, but which were nevertheless different in this one respect: that while one of them would be such that it simply missed half the time, the other would be such that, like the lady, it missed half of the time on purpose. How would the two machines differ? What could the engineer add to the second to achieve such a difference?"


Richard Taylor gives a good argument against AI in his book "Action and Purpose".  I'll summarize it here.

This argument suffers from the fallacy of appeal to ignorance. In essence, it states: “Humans are goal-directed (volitional), machines are not. Since we don’t know how to build a volitional computer, it’s not possible.” The awkward phrasing dresses up the point so that the fallacy is not as evident. The fact that we don’t know how to build a volitional computer doesn’t make it more or less possible.

Anyway, isn’t there already a thread on AI?


Yes, my mistake. I'd move the posts, but I can't see how to move them individually.

In response: Taylor is primarily responding to Turing-type behaviorist claims. That said, I think it still has force against people who merely think AI is possible. Possible on what grounds? How would YOU build the machine?

There's also a lot of background information I couldn't possibly type up: a whole book's worth, in fact. Taylor thinks that purpose is inexplicable in the model of the physical sciences, because the physical sciences deal with passive action (in essence, event-event causation), whereas purpose is only possible to an agent (i.e., one who can cause things without being caused to do so). So, since mechanisms (like computers) are always fully explicable in terms of the physical sciences, agency and thus purpose is impossible for them.

(This is a huge oversimplification. I highly recommend the book, if you're interested in causality; it's one of the best philosophy books I've read since I've been in college.)

Perhaps you could make a computer which was an agent; but doing so would require making a computer which is fundamentally metaphysically different from the sorts of computers we have in mind -- in other words, you'd have to make a computer that was literally alive. And at that point, you're just saying that you could make an admittedly weird organism which was purposeful -- but that's not essentially different from having children, except that it's more technologically impressive.


My favorite argument for "no":

http://globetrotter.berkeley.edu/people/Se...earle-con4.html

The Chinese Room Argument

Q: In your work on the mind and the brain you talk about how there is always a turn in an era to a metaphor that is dominant in technology, hence the dominant one now is to say that the mind is like a computer program. And to answer that you've come up with the "Chinese Room." Tell us a little about that.

A: Well, it's such a simple argument that I find myself somewhat embarrassed to be constantly repeating it, but you can say it in a couple of seconds. Here's how it goes.

Whenever somebody gives you a theory of the mind, always try it out on yourself. Always ask, how would it work for me? Now if somebody tells you, "Well, really your mind is just a computer program, so when you understand something, you're just running the steps in the program," try it out. Take some area which you don't understand and imagine you carry out the steps in the computer program. Now, I don't understand Chinese. I'm hopeless at it. I can't even tell Chinese writing from Japanese writing. So I imagine that I'm locked in a room with a lot of Chinese symbols (that's the database) and I've got a rule book for shuffling the symbols (that's the program) and I get Chinese symbols put in the room through a slit, and those are questions put to me in Chinese. And then I look up in the rule book what I'm supposed to do with these symbols and then I give them back symbols and unknown to me, the stuff that comes in are questions and the stuff I give back are answers.

Now, if you imagine that the programmers get good at writing the rule book and I get good at shuffling the symbols, my answers are fine. They look like answers of a native Chinese [speaker]. They ask me questions in Chinese, I answer the questions in Chinese. All the same, I don't understand a word of Chinese. And the bottom line is, if I don't understand Chinese on the basis of implementing the computer program for understanding Chinese, then neither does any other digital computer on that basis, because no computer's got anything that I don't have. That's the power of the computer, it just shuffles symbols. It just manipulates symbols. So I am a computer for understanding Chinese, but I don't understand a word of Chinese.

You can see this point if you contrast Searle in Chinese with Searle in English. If they ask me questions in English and I give answers back in English, then my answers will be as good as a native English speaker, because I am one. And if they gave me questions in Chinese and I give them back answers in Chinese, my answers will be as good as a native Chinese speaker because I'm running the Chinese program. But there's a huge difference on the inside. On the outside it looks the same. On the inside I understand English and I don't understand Chinese. In English I am a human being who understands English; in Chinese I'm just a computer. Computers, therefore -- and this really is the decisive point -- just in virtue of implementing a program, the computer is not guaranteed understanding. It might have understanding for some other reason but just going through the steps of the formal program is not sufficient for the mind.

Q: And so the computer program, then, has not explained consciousness.

A: That's right. Nowhere near. Now, that isn't to say that computers are useless and we shouldn't use them. No. Not a bit of it. I use computers every day. I couldn't do my work without computers. But the computer does a model or a simulation of a process. And a computer simulation of a mind is about like computer simulation of digestion. I don't know why people make this dumb mistake. You see, if we made a perfect computer simulation of digestion, nobody would think, "Well, let's run out and buy a pizza and stuff it in the computer." It's a model, it's a picture of digestion. It shows you the formal structure of how it works, it doesn't actually digest anything! That's what it is with the things that a computer does for anything. A computer model of what it's like to fall in love or read a novel or get drunk doesn't actually fall in love or read a novel or get drunk. It just does a picture or model of that.


Computers are anthropomorphized and ascribed characteristics of consciousness (e.g., "information processing", "learning", "memory", etc.) in pretty much every theory out there. Even basic computer textbooks make this mistake. If you know the term from Objectivism, stolen concepts are found in abundance.

“Computer memory” is not a stolen concept. To “steal” a concept, you have to affirm a concept while denying its antecedent. (The antecedent being a concept that logically and hierarchically precedes it.) As long as you don’t deny human memory or claim that the two are equivalent, computer memory is a correct and useful analogy. Computers do in fact resemble many functions of the human mind because both contain many functions essential to any data-processing system. In fact, just as the human mind was the original inspiration for computers, computers can teach us about the human mind by demonstrating the necessary functions of logical processes. I know I’ve improved my own thinking in the process of improving my programming skills.

OK, back to studying for finals :-/
