Objectivism Online Forum

Harrison Danneskjold

Harrison Danneskjold last won the day on July 6 2018

Harrison Danneskjold had the most liked content!


About Harrison Danneskjold

  • Rank
    The High Lord Infallible
  • Birthday 02/09/1991

Previous Fields

  • Country
    United States
  • State (US/Canadian)
  • Relationship status
  • Sexual orientation
  • Real Name
    William Harrison Jodeit
  • Copyright
    Public Domain
  • School or University
    Hard Knox
  • Occupation
    General Specialist

Profile Information

  • Gender
  • Location
    Saint Paul
  • Interests

  1. I'd just like to point out that if a Paraguayan murderer could get a clean slate simply by moving across some totally open border (without so much as giving his name), then he would not only be a threat to all the citizens of the new country; his escape would also be a grave injustice to all his victims in the old one. Totally open immigration (like Binswanger argues for) wouldn't just harm us.

     I've recently realized that this whole debate has been framed in terms like "foreigners" the entire time (even in Binswanger's essay, which you'd expect to be of a slightly higher caliber than whatever we can come up with here), when it could have been framed as a question of how to handle people moving from the territory policed by one government to that of another. The latter is a nuanced and very interesting question, the answer to which would probably dissolve any genuine confusion about the former.

     It seems like some on this thread (Binswanger included, and me too) wanted to respond either to the open xenophobia that's starting to crop up all over the place or to nuts like Angela Merkel, who'll hand out free money to anyone from anywhere on Earth. And in our haste to give those truly despicable phenomena the answer they so desperately need, we seem to have uncritically and unTHINKINGLY accepted the warped terminology used by the rest of the anti-intellectual culture at large. Perhaps the endless rabbit holes we now find ourselves stuck in are the punishment we deserve for failing to challenge the roots of their premises. I won't name any names other than Binswanger's and my own, but if you're reading this you should ask yourself whether the shoe might fit you, too.

     Binswanger is absolutely right that foreigners have all the same rights we do and must be treated accordingly.
     The only thing wrong with his essay is the context it drops: that the American justice system doesn't concern itself with those who live beyond its geographical territory, and consequently that there's a real question to be asked about how to handle people's entrance to AND exit from that territory. As I pointed out at the very beginning of this post, dropping that context would perpetuate danger and injustice for everyone on either side of the border, as the consequences of any failure to think usually do.

     As to that question of how to handle movement across different jurisdictions, I don't know what the right answer would be. I'm inclined to agree with DA about border stops, but maybe it would be better for our government to share its criminal database with Mexico (and vice versa), or maybe there's a completely different third solution; I really don't know. But given what most of this thread has been about so far, it doesn't seem like the right place to start sorting that out. Neither do I see a point in correcting anything else I've added to it before this.

     Personally, I'm very disappointed in myself. But I intend to challenge those premises and step outside of that frame (as I should've done before I said one word to anyone else about it), and to say something once I arrive at some well-reasoned conclusion.

     Live long and prosper.
  2. Not that they're foreigners, but that they're crossing from one government's jurisdiction to another. That's why I asked DA whether the stops he's advocating should be applied equally to vacationers returning to their homes in, say, New York or New Jersey. The question of citizenship isn't relevant to the argument he's making. That was a pretty funny video, though.
  3. Yes, but thinking is not a team sport. I've been working these ideas out in any odd moments I've been able to find, but the problems they've highlighted will take a bit more than that. The "remote mountaintop" bit was a dash of my own color, though; it shouldn't take more than a week or two, once I get to it. Besides. The Golden Age arrived today.
  4. No, like what programs did you use to put the slideshow together with your own audio?
  5. Maybe. I've been describing my reasoning skills as "rusty", but the more I review what I've been posting, the more atrocious thought-habits I discover. Don't be surprised if I drop off this site sometime soon: I'm considering taking one copy of Atlas and one of the ITOE to a remote mountaintop somewhere. Except for that cheeky line at the very end, I have no other "maybes" for the rest of that.

     That's exactly why I wanted to avoid using those terms in the first place. I won't be happy if you're right about such flawed concepts STILL being a factor in this thread - because that's exactly what I think of them, too.

     That's another excellent point. The kinds of "artificial intelligence" we have today really shouldn't be called "intelligence" at all; the term only serves to confuse the issue. It doesn't yet make me think I'm wrong about the nature of "artificial intelligence" whenever we manage to actually achieve it, but if you know a better term for our modern toys then I'd prefer to use something else. Actually, the "bot" suffix might suffice. Speaking personally, it would convey to me exactly what we have today while being totally inappropriate for a truly thinking machine. I'll use that for now.

     Depending on exactly whom you mean, I would very much disagree with that. In Computing Machinery and Intelligence, Alan Turing said:

     That doesn't sound like the thrill of tricking someone, to me. Sam Harris and Isaac Arthur aren't eager to fool anyone, either; in fact, since they're both hard-line agnostics as to whether the Turing Test would indicate consciousness, they'd probably agree more with you than with me. Nor can I think of any of the computer scientists currently working on it (and I've listened to a few in my time) who cared enough about what others thought to ever be accused of such a motive. They're operating on some twisted premises about the nature of consciousness, and most of them are wildly overoptimistic, but not that.
     I believe the last time we had this discussion it was established that the Turing Test has already been "beaten" by a grotesquely crude sort of chatbot which was programmed to monologue (with an obscene frequency of spelling and grammatical errors) about its sick grandmother. The judges "just felt so strongly" that it had to be a real boy. The thing I remember most clearly is being absolutely enraged when I reviewed some of the transcripts from that test, saw that a single shred of THINKING would've shown it up as an obvious fraud, and revised who I thought should qualify as the judge of a proper Turing Test.

     I'll be screaming at my computer screen.

     That's the thing, though. What would one expect to see if one looked at the inner workings of another consciousness? And would a machine consciousness (if possible) look anything like an organic consciousness "under the hood"? The question of what one would find under your hood or mine is big enough to warrant its own thread, let alone some artificial kind of consciousness.

     Since we agree that a REAL human-level intelligence necessitates consciousness (thank you), I'm not sure what else I want to start before I return from that remote mountaintop. But this, I really must share. This is amazing. According to Wikipedia, I'm not the first one to think of training a program to participate in an online forum. They called it Mark V Shaney (it ran on Markov chains rather than a neural net, but the spirit is the same). And Mark was much more amazing than what I hypothesized in that last post. Mark was something really special...

     My friends, a new age is dawning.

     PS: Do we already have a few bots hanging around here?!?!!
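     For anyone curious how Mark's trick works: a Markov-chain text generator just records which words tend to follow which, then walks those statistics at random. Here's a minimal sketch in Python (a toy illustration of the technique, not Shaney's actual code; the one-line "corpus" is made up):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map every run of `order` consecutive words to the words that
    were seen to follow it -- pure association, nothing more."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain, picking each next word at random from the
    recorded successors, until we hit a dead end."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(state):]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A made-up miniature "corpus"; Shaney trained on whole newsgroups.
corpus = "check your premises before you argue check your premises before you post"
chain = build_chain(corpus)
print(generate(chain, seed=4))
```

     The output is locally plausible and globally meaningless: association without any grasp of what the words refer to, which is exactly the point about Mark above.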
  6. This is nearly 3 hours of Sam Harris discussing AI with various people (Neil deGrasse Tyson comes in at around 1.5 hours and Dave Rubin at almost 2.5). I don't agree with everything he says (in fact it reminded me of all the aspects of Sam that I despise), but it ended up helping me reformulate precisely what I'm trying to say here. He repeatedly mentions the possibility that we'll create something that's smarter and more competent than we are, but lacking consciousness: a "superhuman intellect in which the lights aren't on". What I was trying (very, very clumsily) to say by way of the Turing Test example is that that's a contradiction in terms.

     Consciousness is a process; something that certain organisms DO. This process has both an identity (influencing the organism's behavior in drastic and observable ways) and a purpose. I don't think there could ever be some superhuman artificial intellect that was better than us at everything from nuclear physics to poetry WITHOUT having all of its lights on; after all, such capacities are why any of us even have such "lights" in the first place. This is obviously relevant to the Turing Test, but in retrospect (now that I've formulated exactly what I mean) that really wasn't the best route to approach this from. But now that we're all here, anyway...

     As any REAL Terran will already know, AlphaStar is DeepMind's latest AI system - from the same lab whose earlier systems mastered Chess and Go; this one plays StarCraft. There'll be no link for anyone who doesn't know what StarCraft is - how dare you not know what StarCraft is?! Anyway: AlphaStar learned to play by watching thousands upon thousands of hours of human players and then practicing against itself, running the game far faster than it's supposed to go, for the equivalent of something like ten thousand years. It beat top professional players a year or two ago, so suffice it to say that it is very good at StarCraft.
     My question is what it would be like if Google tried to train another neural net to participate on this very forum, as a kind of VERY strict Turing Test. What would such a machine have to say? Well, from reading however-many millions of lines of what we've written, there are certain mannerisms it'd be guaranteed to pick up; things like "check your premises" or "by what standard" (or maybe even irrelevant music videos). And from the context of what they were said in response to, it'd even get a sort of "feel" for when it'd be appropriate to use any given thing. Note that this approach would be radically different from the modern "chatbot" setup - and also that it could only ASSOCIATE certain phrases with certain other phrases (since that's all a neural net can really do), without the slightest awareness of what things like "check your premises" actually MEANT.

     Given enough time, this system (let's call it the AlphaRandian) would NECESSARILY end up saying some bizarre and infuriating things, precisely BECAUSE of its lights being off. In a discussion of the finer procedural points of a proper democracy, it might recognize that the words "society" and "the will of the people" were being tossed around and say "there is no such thing as a society; only some number of individuals". And if questioned about the relevance of that statement, it'd probably react like (let's face it) more than a few of us often do and make some one-liner comeback, dripping with condescension, which nobody else could comprehend. On a thread about the validity of the Law of Identity, it might regurgitate something halfway-relevant about the nature of axioms, which might go unchallenged. On the morality of promiscuous sex, it might paraphrase something Rand once said about free love and homosexuals, which (being incapable of anything more than brute association) it would be totally unable to make any original defense for, and most likely incapable of defending at all.
     It would very rapidly become known to all as our most annoying user. And further: since the rest of us do have all our lights on, it'd only be a matter of time before we started to name what it was actually doing. There would be accusations of "anti-conceptuality" and of "mere association, with no integration". And since this is OO, after all, it would only be a matter of time before it pissed someone off SO much that they went ahead and said that it "wasn't fully conscious in the proper sense of the term". We all would've been thinking it long before that; past a certain point it'd only need to make one totally irrelevant "check your premises" to the wrong guy on the wrong day and he'd lay it out explicitly. And if Google came to us at any point after that to say "you probably can't guess who, but someone on your forum is actually the next generation of chatbot!", we'd all know who, out of all the tens (or hundreds?) of thousands of our users, wasn't a real person.

     Granted, that was one long and overly-elaborate thought experiment, and you might disagree that it would, in fact, play out that way (although I did put WAY too much thought into it and I'm fairly certain it's airtight). I only mention it as one example (of what's going to be many) of my primary point: you cannot have human-level intelligence without consciousness.

     That's fucking amazing!
  7. The Aristotelian concept of "essences" being metaphysical (rather than epistemological) seems applicable.
  8. Yeah; sorry about that. I was already a little bit tipsy. Thanks for not giving me the kind of answer you definitely could have.

     Sure, it can be text-only, but I wouldn't be comfortable with the kind of emotionalist "average human being" who'd fall for any program sufficiently capable of tugging at their heartstrings. Obviously I'd prefer to be the judge myself, but the average Objectivist should suffice. Let's not limit the processing power or memory capacity.

     Well, that's just it. If it were generating any of its own content on the fly then it wouldn't be part of today's "chatbot" paradigm (in which absolutely every response is pre-scripted). But even if it could generate its own content on the fly, if it had no basis to know what its words REFERRED to (as if it were just a neural net trained to imitate human speakers), then it would still end up saying some equally bizarre things from time to time; things that could not be explained by the intentions of some bona fide consciousness. How long it'd take to make such a mistake isn't really the point. A more sophisticated system would obviously be able to mimic a real person for longer than a more rudimentary one could; all I'm saying is that sooner or later they would ALL show obvious signs of their non-sentience, unless they truly were sentient.

     Alright; maybe we'll see certain things that could fool MOST people in the short run. That's not really what I was trying to get at (and I do see that I wasn't being very clear about it; sorry). I think the better way to phrase this principle is that any non-sentient system can eventually be shown to be non-sentient by an outside observer (who perhaps has no idea what's going on "under the hood") who's at least somewhat capable of thinking critically. I have to start getting ready for work soon, but maybe it'd help if I showed some examples of what I mean later on?
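     To make that "pre-scripted" paradigm concrete: the classic chatbots (ELIZA and its descendants) just pattern-match your input against canned rules and spit back a stock reply. A minimal sketch in Python (the rules here are made up for illustration, not taken from any real bot):

```python
import re

# Every possible reply is written in advance; the "bot" merely picks
# one by shallow pattern-matching. No understanding is involved.
RULES = [
    (re.compile(r"\bsociety\b", re.IGNORECASE),
     "There is no such thing as a society; only some number of individuals."),
    (re.compile(r"\b(sad|unhappy|sick)\b", re.IGNORECASE),
     "I'm sorry to hear that. My grandmother has been very ill, too."),
    (re.compile(r"\bwhy\b", re.IGNORECASE),
     "Check your premises."),
]
FALLBACK = "By what standard?"

def respond(message):
    """Return the first canned reply whose pattern matches the input."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return FALLBACK

print(respond("What should society do about this?"))
```

     One probing follow-up ("what do you mean by that?") already falls through to the fallback - exactly the kind of bizarre non-answer that gives the game away to any critically-thinking judge.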
  9. And presumably this would apply equally to American citizens and foreigners, alike? For instance, if I take a vacation to Cancun then the Mexican government should stop me on the way there and my own government should stop me on the way back (just in case I took up murdering random strangers while I was there)?
  10. Well, yeah. I said I wasn't entirely comfortable with where Binswanger's logic leads, but that is precisely it.
  11. I think that's where "access to criminal records" comes in. If you move from one state to another, it is generally possible for the police in your new state to find out about anything you've been convicted of in the previous one; not necessarily so with Mexico or Uruguay. Of course, this seems to imply that all the data our government currently keeps on every single one of us is proper. I'm not saying whether it is or it isn't (frankly, there have been a number of recently made points I still need to chew on a bit); only that it does seem to be a part of it. Because if it is right for our government to keep tabs on every single one of us (right down to our unique "social security numbers") then it would be wrong to just let people into the country for whom we don't have that information. But I think a case could be made that we shouldn't be keeping such close track of our own citizens, in the first place. I really don't know. But "double standard" probably isn't applicable and "circular" most certainly isn't.
  12. Really? I'm still reading through that last page (and I'm actually more inclined to agree with you about this than with Don Athos), but "subjectively feel like considering this a special case"??? Citation very much needed, sir.
  13. Yeah, but you have argued for banning socialist ideas on the basis that they're objectively dangerous. That was a much stronger case than "we should ban them because they're offensive"; they are actually dangerous ideas. Still, both cases boil down to "but what if X happens?", which is not the proper question to ask. I believe Harry Binswanger already did. And what you outlined is the best argument I've heard so far for having a somewhat-controlled border. I'll have to chew on that while I'm at work today.
  14. But not from writing or selling books about their (truly offensive) ideas, nor from teaching them to their own kids (and those of any other consenting adults), nor from talking about them online with anyone who's willing to listen. So not only would banning "public collectivism" on such grounds do us very little good, it would add an air of mystique to such ideas (I believe that's called the Streisand effect) and, in today's culture, would be much more likely to get our ideas banned from the public instead of theirs. Honestly, your case from the horrors of "what'll happen if enough people start taking them seriously" was so much stronger.