Objectivism Online Forum

Harrison Danneskjold

Regulars
  • Content Count: 2513
  • Days Won: 39

Everything posted by Harrison Danneskjold

  1. I'd just like to point out that if a Paraguayan murderer can get a clean slate by simply moving across some totally open border (without so much as giving his name), then not only would he be a threat to all the citizens of the new country; his clean slate would also be a grave injustice to all his victims in the old. Totally open immigration (like Binswanger argues for) wouldn't just harm us.

     I've recently realized that this whole debate has been framed in terms like "foreigners" this entire time (even in Binswanger's essay, which you'd expect to be of a slightly higher caliber than whatever we can come up with here), when it could have been framed as a question of how to handle people moving from the territory policed by one government to that of another. The latter would be a nuanced and very interesting question, the answer to which would probably dissolve any genuine confusion about the former.

     It seems like some on this thread (as well as Binswanger himself, and even me) wanted to respond either to the open xenophobia that's starting to crop up all over the place or to nuts like Angela Merkel who'll hand out free money to anyone from anywhere on Earth. And in our haste to give those truly despicable phenomena the answer they so desperately need, we seem to have uncritically and unTHINKINGLY accepted the warped terminology used by the rest of the anti-intellectual culture at large. Perhaps the endless rabbit-holes we now find ourselves stuck in are the punishment we deserve for failing to challenge the roots of their premises. I won't name any names other than Binswanger's and my own, but if you're reading this you should ask yourself whether the shoe might fit you, too.

     Binswanger is absolutely right that foreigners have all the same rights we do and must be treated accordingly. The only thing wrong with his essay is the context it drops: that the American justice system doesn't concern itself with those who live beyond its geographical territory, and consequently that there's a real question to be asked about how to handle people's entrance to AND exit from that territory. As I pointed out at the very beginning of this post, dropping that context would perpetuate danger and injustice for everyone on either side of the border, as the consequences of any failure to think usually do.

     As to that question of how to handle movement across different jurisdictions, I don't know what the right answer would be. I'm inclined to agree with DA about border stops, but maybe it would be better for our government to share its criminal database with Mexico (and vice-versa), or maybe there's a completely different third solution; I really don't know. But given what most of this thread has been about so far, it doesn't seem like the right place to start trying to sort that out. Neither do I see a point in correcting anything else I've added to it before this.

     Personally, I'm very disappointed in myself. But I intend to challenge those premises and step outside of that frame (as I should've done before I said one word to anyone else about it) and say something once I arrive at some well-reasoned conclusion. Live long and prosper.
  2. Not that they're foreigners, but that they're crossing from one government's jurisdiction to another. That's why I asked DA whether the stops he's advocating should be applied equally to vacationers returning to their homes in, say, New York or New Jersey. The question of citizenship isn't relevant to the argument he's making. That was a pretty funny video, though.
  3. Yes, but thinking is not a team sport. I've been working these ideas out in any odd moments I've been able to find, but the problems they've highlighted will take a bit more than that. The "remote mountaintop" bit was a dash of my own color, though; it shouldn't take more than a week or two, once I get to it. Besides. The Golden Age arrived today.
  4. No, like what programs did you use to put the slideshow together with your own audio?
  5. Maybe. I've been describing my reasoning skills as "rusty" but the more I review what I've been posting, the more atrocious thought-habits I discover. Don't be surprised if I drop off this site sometime soon: I'm considering taking one copy of Atlas and one of the ITOE to a remote mountaintop somewhere. Except for that cheeky line at the very end, I have no other "maybes" for the rest of that.

     That's exactly why I wanted to avoid using those terms in the first place. I won't be happy if you're right about such flawed concepts STILL being a factor in this thread - because that's exactly what I think of them, too.

     That's another excellent point. The kinds of "artificial intelligence" we have today really shouldn't be called "intelligence" at all; the term only serves to confuse the issue. It doesn't yet make me think I'm wrong about the nature of "artificial intelligence" whenever we manage to actually achieve it. But if you know a better term for our modern toys then I'd prefer to use something else. Actually, the "bot" suffix might suffice. Speaking personally, that would convey to me exactly what we have today and be totally inappropriate for a truly thinking machine. I'll use that for now.

     Depending on exactly whom you mean, I would very much disagree with that. Consider what Alan Turing actually said in Computing Machinery and Intelligence: that doesn't sound like the thrill of tricking someone, to me. Sam Harris and Isaac Arthur aren't eager to fool anyone, either; in fact, since they're both hard-line agnostic as to whether the Turing Test would indicate consciousness or not, they'd probably agree more with you than with me. Nor can I think of any of the computer scientists currently working on it (and I've listened to a few in my time) who cared enough about what others thought to ever be accused of such a motive. They're operating on some twisted premises about the nature of consciousness and most of them are wildly overoptimistic, but not that.

     I believe the last time we had this discussion it was established that the Turing Test has already been "beaten" by a grotesquely crude sort of chatbot which was programmed to monologue (with an obscene frequency of spelling and grammatical errors) about its sick grandmother. The judges "just felt so strongly" that it had to be a real boy. The thing I remember most clearly was being absolutely enraged when I reviewed some of the transcripts from that test, saw that a single shred of THINKING would've shown it up as an obvious fraud, and revised who I thought should qualify as the judge of a proper Turing Test.

     I'll be screaming at my computer screen.

     That's the thing, though. What would one expect to see if one looked at the inner workings of another consciousness? And would a machine consciousness (if possible) look anything like an organic consciousness "under the hood"? The question of what one would find under your hood or mine is big enough to warrant its own thread, let alone some artificial kind of consciousness.

     ---

     Since we agree that a REAL human-level intelligence necessitates consciousness (thank you), I'm not sure what else I want to start before I return from that remote mountaintop. But this, I really must share. This is amazing. According to Wikipedia, I'm not the first one to think of training a machine to participate in an online forum. They called it Mark V Shaney (a simple Markov chain rather than a neural net, as it turns out), and Mark was much more amazing than what I hypothesized in that last post. Mark was something really special... My friends, a new age is dawning. (A minimal sketch of Mark's trick follows this post.)
PS: Do we already have a few bots hanging around here?!?!!
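
     For anyone curious how Mark pulled it off: here is a minimal sketch of the Markov-chain trick he ran on, in Python. The corpus string below is a hypothetical stand-in; the real Mark V Shaney was fed Usenet posts.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=40):
    """Ramble from a random starting key, one weighted-random word at a time."""
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this word-pair only ever appeared at the end
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical corpus; imagine millions of lines of scraped forum posts here.
corpus = "check your premises and check your premises again by what standard"
print(generate(build_chain(corpus)))
```

     Every word it emits really did follow the preceding pair somewhere in the corpus, which is why the output sounds locally plausible and globally meaningless - exactly the quality Mark was famous for.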
  6. This is nearly 3 hours of Sam Harris discussing AI with various people (Neil deGrasse Tyson comes in at around 1.5 hours and Dave Rubin at almost 2.5). I don't agree with everything he says (in fact it reminded me of all the aspects of Sam that I despise) but it ended up helping me reformulate precisely what I'm trying to say here. He repeatedly mentions the possibility that we'll create something that's smarter and more competent than we are, but lacking consciousness; a "superhuman intellect in which the lights aren't on". What I was trying (very, very clumsily) to say by way of the Turing Test example is that that's a contradiction in terms.

     Consciousness is a process; something that certain organisms DO. This process has both an identity (influencing the organism's behavior in drastic and observable ways) and a purpose. I don't think there could ever be some superhuman artificial intellect that was better than us at everything from nuclear physics to poetry WITHOUT having all of its lights on; after all, such capacities are why any of us even have such "lights" in the first place. This obviously is relevant to the Turing Test, but in retrospect (now that I've formulated exactly what I mean) that really wasn't the best route to approach this from. But now that we're all here, anyway...

     As any REAL Terran will already know, AlphaStar is the latest AI system from DeepMind (Google's AI lab). Having already mastered chess and Go with its earlier systems, DeepMind built this one to play StarCraft. There'll be no link for anyone who doesn't know what StarCraft is - how dare you not know what StarCraft is?! Anyway: AlphaStar learned to play by watching thousands upon thousands of hours of human players and then practicing against itself, running the game far faster than it's supposed to go, for the equivalent of something like ten thousand years. It beat top professional players a year or two ago, so suffice it to say that it is very good at StarCraft.

     My question is what it would be like if Google tried to train another neural net to participate on this very forum, as a kind of VERY strict Turing Test. What would such a machine have to say? Well, from reading however-many millions of lines of what we've written, there are certain mannerisms it'd be guaranteed to pick up; things like "check your premises" or "by what standard" (or maybe even irrelevant music videos). And from the context of what they were said in response to, it'd even get a sort of "feel" for when it'd be appropriate to use any given thing. Note that this approach would be radically different from the modern "chatbot" setup - and also that it could only ASSOCIATE certain phrases with certain other phrases (since that's all a neural net can really do), without the slightest awareness of what things like "check your premises" actually MEANT.

     Given enough time, this system (let's call it the AlphaRandian) would NECESSARILY end up saying some bizarre and infuriating things, precisely BECAUSE of its lights being off. In a discussion of the finer procedural points of a proper democracy it might recognize that the words "society" and "the will of the people" were being tossed around and say "there is no such thing as a society; only some number of individuals". And if questioned about the relevance of that statement it'd probably react like (let's face it) more than a few of us often do, and make some one-liner comeback, dripping with condescension, which nobody else could comprehend.

     On a thread about the validity of the Law of Identity it might regurgitate something halfway-relevant about the nature of axioms, which might go unchallenged. On the morality of having promiscuous sex it might paraphrase something Rand once said about free love and homosexuals, which (being incapable of anything more than brute association) it would be totally incapable of making any original defense for, and most likely incapable of defending at all. It would very rapidly become known to all as our most annoying user.

     And further: since the rest of us do have all our lights on, it'd only be a matter of time before we started to name what it was actually doing. There would be accusations of "anti-conceptuality" and "mere association, with no integration". And since this is OO, after all, it would only be a matter of time before it pissed someone off SO much that they went ahead and said that it "wasn't fully conscious in the proper sense of the term". We all would've been thinking it long before that; past a certain point it'd only need to make one totally irrelevant "check your premises" to the wrong guy on the wrong day and he'd lay it out explicitly. And if Google came to us at any time after that to say "you probably can't guess who, but someone on your forum is actually the next generation of chatbot!" we'd all know who, out of all the tens (or hundreds?) of thousands of our users, wasn't a real person.

     Granted, that was one long and overly-elaborate thought experiment, and you might disagree that it would, in fact, play out that way (although I did put WAY too much thought into it and I'm fairly certain it's airtight). I only mention it as one example (of what's going to be many) of my primary point: you cannot have human-level intelligence without consciousness. (A toy sketch of that kind of "brute association" follows this post.)

     That's fucking amazing!
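
     To make the "brute association" point concrete, here is a toy sketch of how an AlphaRandian-like system could pick a stock phrase by sheer word overlap, with no grasp of what any of it means. The training pairs and the name `alpharandian` are invented for illustration; a real neural net would learn far subtler associations, but the principle is the same.

```python
from collections import Counter

# Hypothetical training pairs: (context a phrase was said in, stock reply).
TRAINING = [
    ("the will of society and the people demands it",
     "There is no such thing as society; only some number of individuals."),
    ("how do you know that premise is true",
     "Check your premises."),
    ("by what standard do you judge",
     "By what standard?"),
]

def similarity(a, b):
    """Crude word-overlap score; no awareness of what any word MEANS."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((wa & wb).values())

def alpharandian(post):
    """Emit the stock phrase whose training context best matches the post."""
    _, reply = max(TRAINING, key=lambda pair: similarity(pair[0], post))
    return reply

print(alpharandian("What does the will of the people demand of a proper democracy?"))
# -> "There is no such thing as society; only some number of individuals."
```

     Notice that the reply is contextually "appropriate" yet totally irrelevant to the procedural question actually being asked - which is exactly the failure mode described above.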
  7. The Aristotelian concept of "essences" being metaphysical (rather than epistemological) seems applicable.
  8. Yeah; sorry about that. I was already a little bit tipsy. Thanks for not giving me the kind of answer you definitely could have.

     Sure, it can be text-only, but I wouldn't be comfortable with the kind of emotionalist "average human being" that'd fall for any program sufficiently capable of tugging at their heartstrings. Obviously I'd prefer to be the judge myself, but the average Objectivist should suffice. Let's not limit the processing power or memory capacity.

     Well, that's just it. If it was generating any of its own content on-the-fly then it wouldn't be part of today's "chatbot" paradigm (in which absolutely every response is pre-scripted). But even if it could generate its own content on-the-fly, if it had no basis to know what its words REFERRED to (like if it was just a neural net trained to imitate human speakers) then it would still end up saying some equally-bizarre things from time to time; things that could not be explained by the intentions of some bona fide consciousness.

     How long it'd take for it to make such a mistake isn't really the point. A more sophisticated system would obviously be able to mimic a real person for longer than a more rudimentary one could; all I'm saying is that sooner or later they would ALL show obvious signs of their non-sentience, unless they truly were sentient.

     Alright; maybe we'll see certain things that could fool MOST people in the short run. That's not really what I was trying to get at (and I do see that I wasn't being very clear about it; sorry). I think the better way to phrase this principle is that any non-sentient system can eventually be shown to be non-sentient by an outside observer (who perhaps has no idea what's going on "under the hood") who's at least somewhat capable of thinking critically. I have to start getting ready for work soon, but maybe it'd help if I showed some examples of what I mean later on?
  9. And presumably this would apply equally to American citizens and foreigners, alike? For instance, if I take a vacation to Cancun then the Mexican government should stop me on the way there and my own government should stop me on the way back (just in case I took up murdering random strangers while I was there)?
  10. Well, yeah. I said I wasn't entirely comfortable with where Binswanger's logic leads, but that is precisely it.
  11. I think that's where "access to criminal records" comes in. If you move from one state to another, it is generally possible for the police in your new state to find out about anything you've been convicted of in the previous one; not necessarily so with Mexico or Uruguay. Of course, this seems to imply that all the data our government currently keeps on every single one of us is proper. I'm not saying whether it is or it isn't (frankly, there have been a number of recently made points I still need to chew on a bit); only that it does seem to be a part of it. Because if it is right for our government to keep tabs on every single one of us (right down to our unique "social security numbers") then it would be wrong to just let people into the country for whom we don't have that information. But I think a case could be made that we shouldn't be keeping such close track of our own citizens, in the first place. I really don't know. But "double standard" probably isn't applicable and "circular" most certainly isn't.
  12. Really? I'm still reading through that last page (and I'm actually more inclined to agree with you about this than with Don Athos), but "subjectively feel like considering this a special case"??? Citation very much needed, sir.
  13. Yeah, but you have argued for banning socialist ideas on the basis that they're objectively dangerous. It was a much stronger case than "we should ban them because they're offensive"; they are actually dangerous ideas. And both cases boil down to "but what if X happens", which is not the proper question to ask. I believe Harry Binswanger already did. And what you outlined is the best argument I've heard so far for having a somewhat-controlled border. I'll have to chew on that while I'm at work today.
  14. But not from writing or selling books about their (truly offensive) ideas, nor teaching them to their own kids (and those of any other consenting adults), nor talking about them online with anyone who's willing to listen. So not only would it do us very little good to ban "public collectivism" on such grounds, it would add that air of mystique to such ideas (I believe it's called the Streisand effect) and in today's culture would be much more likely to get our ideas banned from the public, instead of theirs. Honestly, your defense by the horrors of "what'll happen if enough people start taking them seriously" was so much stronger.
  15. See, that actually seems to be true. If "public property" is a valid conception then something similar to your position certainly would follow (although there would still be the problem of your inappropriate yardstick). So... What do you think of all the arguments Rand made against public property? I mean, as an advocate of "open Objectivism" I'm always willing to entertain the possibility that she was wrong about it; maybe this was something she really screwed up, and now you've found the solution that she hadn't. That's not sarcasm; it's actually how I try to approach this kind of thing. But it would make this whole conversation much easier if you'd address her arguments about it, first, to give the rest of us a clearer picture of where you're actually coming from. Would it help if someone (preferably not me) went and tracked down everything she said about public property?
  16. Alright. First of all, you're arguing for the validity of "public property" in a big way. I don't remember all of Ayn Rand's arguments against it off the top of my head, but she made quite a few and they all apply. For starters: since there is no such thing as "the public", only some number of individual men, it has all the same problems as the concept of a "public good". Who gets to decide how best to use such public property, and by what standard? Now, if you were to mention the democratic process (as I suspect you probably will) then it wouldn't be too difficult to show how, in practice, this would actually mean pressure-group warfare. In short, everything that's wrong with today's "mixed economy" would also apply to what you're arguing for (since they're both based on the same kind of fallacy).

     Secondly, you say that "society in general has no claim on private lands", which I would wholeheartedly agree with. That's absolutely right. It also means that if some rancher on the border wanted to hire a truck-full of Mexican (or Colombian or Somalian or whatever) laborers to work his own land - or if someone who owned some land in Minnesota chartered a private plane for the same purpose - then it's none of "society's" business. Right?

     I do agree that your argument deserves serious consideration (as you mentioned in the other thread). But I don't think it's sturdy enough to survive it.

     Actually, it does. By defending the rights of immigrants at the borders we are also defending our own rights inside of them (and several of Binswanger's examples demonstrate precisely how); anyone who defends the rights of one man is defending the rights of all.

     You are right that it doesn't take much imagination to think of ways in which open immigration could go horrifically wrong. That is true. But the same could be said for every other way in which our government is not allowed to meddle in our private lives. Think of warrantless wiretapping and surveillance. Surely it's important that we allow our government to do the necessary snooping to discover who is or is not an objective threat to everyone else? Not much imagination is needed to think of the terrible things that could happen if we don't allow the government to do that. Not much imagination is needed to think of what could happen without legally mandated insurance (of either the health or automotive varieties), either. You'd probably think it was a straw man if I threw drug prohibition on top of the pile, but it wouldn't be.

     "What could happen if we allow people to do X" is not the proper yardstick to apply in this situation. And it turns out that pointing that out does, actually, show regard for the lives and property of our citizenry. Because neither really matters without freedom.
  17. Well, here is Binswanger's essay. The main bit that got me was when he pointed out that our government has no right to start bothering random people on a bus or on the street, checking to see that everyone can prove their citizenship somehow. I agree with that: such a policy would be a gross perversion of all the proper procedures for actually protecting individual rights.

     Well, if we have the right to be free of that anywhere inside of America then presumably anyone trying to cross the border has that same right to be left alone (as long as they aren't obviously carrying bodies or bombs with them). As I've mentioned more than once, I'm not entirely comfortable with the implication that we should just be letting anyone cross in either direction, and trying to take care of the actual threats after-the-fact. But whether I'm comfortable with it or not isn't what's supposed to count here, and his logic does seem pretty solid to me.

     And yeah; it's completely a procedural question. But as near as I can tell the just procedure would be totally and completely open borders.
  18. I'm extremely glad to hear it. It's heartening to see we can at least agree on that much, from the get-go. And also that, in essence.

     I don't think we'll necessarily have to figure out what consciousness is before we replicate it. The history of science is littered with examples of people discovering new technologies before they fully understood how they worked (penicillin comes to mind), although it would be preferable if we didn't end up playing with such a powerful force before we knew what makes it tick (the phrase "a kid playing with his dad's gun" comes to mind). I also suspect it won't be as far in the future as you seemed to imply, there. I'd be surprised if anyone participating in this thread didn't live to see it happen. But those are both very minor differences, in the grand scheme of things; in essence we're already on the same page.

     That's very interesting, though, because it's exactly what I'd say about your position. Put yourself in the shoes of a chatbot programmer who's trying to handle the case of being asked "how do you feel?" You might program it to respond with "good" or "bad" - both of which open themselves up to be asked "why?" Now, a real person who was really reporting on their internal state would have absolutely no problem answering that question, but a chatbot programmer would then have to think of a specific, concrete answer to "why" (and "how" and "I know what you mean" and so on), and then an infinitely-branching set of responses for whatever their interlocutor says after that. (A sketch of that branching problem follows this post.) Anyone who grasps why lying cannot work in the long run will immediately see the problem with such an approach. I not only see that problem: I am saying that this problem is INHERENT to trying to give a non-thinking AI some string of words to make it LOOK like real AI, and that the only solution there can ever be is to do it for real.

     Speak for yourself, man. I seem to recall you weren't much of a programmer (at least, that's what my memory of several years ago seems to indicate), but if anyone reading this, at any point however-long-from-now, can propose any alternative approach besides sentience itself, you'll have my eternal (and extremely public) gratitude. I dare you. Because if the only possible approaches are "pre-scripted strings of text" or "true sentience" then I would love to demonstrate how to reliably falsify the former (i.e. show it for what it really is) every time, because it's really not that complicated. Not only can it be done, it should be done: it's very important for us to know when we've actually built a proper AI and when we haven't. Finally, for the record: we haven't. But for how much longer, I really couldn't say.

     P.S.: In The Fountainhead, the very first words Toohey says to Keating are "what do you think of the temple of Nike Apteros?" Keating, despite having never heard of it before, says "that's my favorite" (just like a chatbot might) and Toohey goes on talking as if that was the only answer he was looking for, briefly saying: "I knew you'd say it". There is a reason I'm so confident this wouldn't fool anyone who takes the time to learn how the gimmick works.
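
     Here is a minimal sketch of that branching problem, under the assumption that every reply is pre-scripted. The script contents are invented for illustration; the point is structural: each follow-up needs its own hand-written branch, and the tree can never be deep enough to keep up with a genuinely curious interlocutor.

```python
# A hypothetical slice of a pre-scripted response tree. Every follow-up
# ("why?", "how so?", "I know what you mean") needs its own hand-written
# branch, and each branch spawns more branches without bound.
SCRIPT = {
    "how do you feel?": {
        "reply": "Good.",
        "follow_ups": {
            "why?": {
                "reply": "I got some great news today.",
                "follow_ups": {
                    "what news?": {
                        "reply": "Oh, just work stuff.",
                        "follow_ups": {},  # ...and here the script runs dry
                    },
                },
            },
        },
    },
}

def respond(node, question):
    """Look the question up in the current branch of the script."""
    branch = node.get(question.lower())
    if branch is None:
        return None, node  # off-script: the seams start to show
    return branch["reply"], branch["follow_ups"]

reply, ctx = respond(SCRIPT, "How do you feel?")
print(reply)                  # "Good."
reply, ctx = respond(ctx, "Why?")
print(reply)                  # "I got some great news today."
```

     Three exchanges in, this script is already out of answers - and any question the programmer didn't anticipate returns nothing at all. A real person reporting on a real internal state never hits that wall.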
  19. This makes no sense to me. What on Earth did you really mean if it wasn't that the essential characteristic of a thing is where it came from? This might just be the rum talking (and I'm very sorry if it is) but I am very confused.
  20. But not a Prius. That is not a car; it is a lunch box.

     I've ordered a copy that should arrive sometime in October. So no spoilers!

     I suppose so. I've been refining my thoughts on this over the past few days (it's been quite a while since I've tried to participate in this kind of conversation) and I think you're probably right about that. As right as it'd be to attribute "rationality", "personhood" and "individual rights" to any true AI (assuming, for the sake of argument, we actually managed to build one), calling it a member of "Homo sapiens" regardless of what it's made of makes about as much sense as a trans guy declaring himself to be a female with a penis. You've got me there.

     That's certainly true. However, even if it's not actually possible to program "consciousness" into a computer (which is itself a somewhat dubious assumption, since within our lifetimes we'll have computers - if memory serves - capable of simulating the whole human brain down to something like the molecular scale); even granting that, we could always grow the necessary organic components in a vat. We've already done it with rat brains. So although it's true that silicon might not be the appropriate material to use in our efforts to create AI, in the grand scheme of things that would represent at most a minor hiccup in such efforts.

     This is the part I don't entirely agree with. That infernal Chinese Room.

     To start with, I'd like to avoid using the terms "input", "output" and "information" unless they're absolutely necessary. I think anyone who's read the ITOE can see how frequently our society abuses those infinitely-elastic terms today, so let's see if we can at the very least minimize them from here on out. Secondly, as much as I'd like to throw "simulation" into the same junk heap and be done with it, I don't think I can make this next point without it. So I'd like to mention something before I start trying to use it.

     The Identity of Indiscernibles is an epistemological principle which states that if any two things have every single attribute in common then they are the same thing; if X is indiscernible from Y (if the two cannot be told apart in any way whatsoever) then X is Y, and we don't even need the extra label of "Y" because they're both just X. (The principle is written out formally after this post.) I bring this up because I recognize it as the explicit statement of the implicit method of thinking I've always brought to this conversation, as well as the basis for my conclusions about it. If it's valid then I'm fairly sure (mostly) that everything else I'm about to say must also be valid. I'd also like to point out that every single Neo-Kantian argument about philosophical zombies gets effortlessly disintegrated by the application of this one little rule. So it does have that going for it.

     I would agree with that - sometimes. A simulated car in a video game is obviously not the same thing as a real car. One of these can be touched, smelled, weighed and driven (etc.) while the other can only be seen from certain very specific angles. The two are very easy to distinguish from one another, provided the simulated one isn't part of some Matrix-style total simulation (in which case things would get rather complex and existential). I would even agree that a computer simulation of some specific individual's mind (like in Transcendence) would not be that person's subjective, first-person experience; i.e. it wouldn't actually be THEM (although my reasons for that are complicated and involve one very specific thought experiment).

     However, if a simulated consciousness could not be distinguished from an organic one (like if one were to pass the Turing Test) then by the Identity of Indiscernibles one would have to conclude that the machine was, in fact, conscious. It wouldn't be a traditional, biological kind of consciousness (assuming it hadn't been grown in a vat, which could be determined by simply checking "under the hood") but it would nonetheless be a true consciousness. Even if it was simulating the brain of some individual (like in Transcendence) whom it wouldn't actually BE, it would still be alive. In short, in most cases I would wholeheartedly agree that a simulation of a thing is not actually that thing (and could, in fact, be differentiated from the real thing quite trivially), but not in those cases of actual indiscernibility.

     It's that last example that I really take issue with. I don't know whether it's a case you'd actually make or not, and I'm trying not to put words in your mouth. But while I'm on the subject I wanted to mention the Chinese Room objection to AI, partially because it looks vaguely similar to what you actually said (if you squint) and primarily because it annoys me so very much.

     The argument (which I linked to just there) imagines a man locked in a room with two slots, "input" and "output", who is gradually trained to produce correct Chinese responses to Chinese messages despite not understanding what a single character of the language actually MEANS. This is meant as an analogy for any possible general AI, the implication being that it couldn't possibly UNDERSTAND its own functions (no matter how good it gets at giving the correct responses to the correct stimuli).

     First of all, one could apply the very same analogy (as well as what you said about merely "transforming information") to any human brain. What makes you think that I understand a single word of this, despite my demonstrable ability to engage with the ideas that're actually in play? Maybe I'm just the latest development in philosophical zombies.

     Second of all, the entire argument assumes that it is possible to produce correct responses in Chinese without understanding a word of the language, SOMEHOW. As a programming enthusiast, this is the part that really gets under my skin - HOW in the name of Satan do you program a thing to do any such thing WITHOUT including anything about the MEANING of its actions? The multitude of problems with today's "chatbots" (and I can go on for hours about all the ways in which their non-sentience should be obvious to any THINKING user) more-or-less boils down to their lack of any internal referents. The fact that they don't actually know what they're saying makes them say some truly bizarre things at times; a consequence which I'd call inescapable (metaphysical) for any non-sentient machine, by virtue of that very mindlessness. The Chinese Room argument frolics merrily past all such technicalities to say: "sure, it's physically possible for a mindless thing to do all those things that conscious minds do, so how can we ever tell the two apart?!"

     Finally, the Chinese Room argument is almost as shameless a violation of the Identity of Indiscernibles as the concept of a philosophical zombie is. I really wish I knew which Chinese room was the one in question so I could just torch the damn thing. It's so wrong on so many different levels.
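
     For reference, here is the standard formal statement of the principle (this is the textbook second-order formulation, not anything specific to this thread):

```latex
% The Identity of Indiscernibles:
% if x and y share every property F, then x and y are one and the same.
\forall x \,\forall y \,\bigl[\,\forall F \,\bigl(F(x) \leftrightarrow F(y)\bigr) \rightarrow x = y\,\bigr]
```

     Read right-to-left it is Leibniz's Law (identicals share all properties, which is uncontroversial); the direction used above is the contested one, and it's exactly the direction the philosophical-zombie arguments have to deny.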