Objectivism Online Forum

Harrison Danneskjold

Regulars
  • Content Count

    2518
  • Joined

  • Last visited

  • Days Won

    39

Harrison Danneskjold last won the day on July 6 2018

Harrison Danneskjold had the most liked content!

4 Followers

About Harrison Danneskjold

  • Rank
    The High Lord Infallible
  • Birthday 02/09/1991

Previous Fields

  • Country
    United States
  • State (US/Canadian)
    Minnesota
  • Relationship status
    Single
  • Sexual orientation
    Straight
  • Real Name
    William Harrison Jodeit
  • Copyright
    Public Domain
  • School or University
    Hard Knox
  • Occupation
    General Specialist

Profile Information

  • Gender
    Male
  • Location
    Saint Paul
  • Interests
    Interests.

Recent Profile Visitors

17879 profile views
  1. Absolutely. 100% yes. A "zombie" is supposed to be a mindless, soulless, nonrational monster (like Stephen Mallory's beast from The Fountainhead) that walks on two feet but cannot think or speak; it cannot be reasoned with; its only motivation is to devour the living (or rob them of their souls by turning them into more zombies). And yet it does walk upright and was a human being once, which makes it all the more terrible (indeed, most zombie movies include at least one scene of a zombie's loved ones struggling to come to terms with the fact of its zombification). Now, the people who write such stories probably aren't trying to make any metaphor about the state of modern philosophy (with the possible exception of Warm Bodies), and neither are most of their viewers looking for one. As far as any of them know they're just creating (or enjoying) exciting stories about some particularly scary monsters. But if any of them stopped and spent some time thinking about why they find those particular monsters so compelling - why the possibility of a zombie apocalypse FEELS so plausible, even if they know it's a scientific impossibility (which many of them don't) - they'd discover a perverse reflection of themselves. In short, I think zombies are an obvious metaphor for people who've swallowed today's dominant philosophy, and that what most zombie stories say about them THROUGH that metaphor isn't very flattering (although the one exception, once again, is Warm Bodies). The silver lining is that today's cultural atmosphere is ideal for someone to use that metaphor to make a philosophical point or two.
  2. Skillet - Back from the Dead The visuals are spliced in from StarCraft.
  3. Thank you. A lot. It's mainly just thinking in non-essentials. I spent several weeks on the Immigration thread, looking at the whole thing in the wrong (specifically non-essential) way. And over here I moved the goalposts from "the Turing Test will work" to "it'd work with the right judge" to "human-level intelligence requires consciousness" - every step of which was better than the last one, but all of which indicate superficial thinking. It's a little infuriating. But now that I've identified what it is I've been trying to develop some better thought-habits. Nothing's gonna happen overnight, of course, but I think I'll be ready to restate my case (or possibly retract it) soon. As for the kinds of "interactions" I'm used to - since being open about my mental struggles has occasionally taught some very good liars to get even better, whatever you suspect about it is probably true. But I solved those problems a long time ago. And I'll be back with a vengeance soon!
  4. Absolutely. I just wanted to underscore, briefly, that the question is not whether immigrants HAVE every single right a native can logically claim to have (they do) but precisely who is responsible for SAFEGUARDING those rights, and how. If you'd agree with that (as I suspect) then there's nothing else I could add to it. And that's where I'm gonna have to stop you. I once spent about 2 weeks in Finland, intending to immigrate to be with a foreign-exchange student I'd known in high school. During that time, although I was no less opinionated than I've always been, I had no interest in Finnish political life. It was more than I could handle to learn a whole new language and get used to what's served at a Finnish McDonald's; learning all the context I'd need to even know who I'd agree with was nowhere near the top of my to-do list. And from the immigrants to America I've met since then, I think I'm qualified to deny that such an attitude is unique either to Finland or to one's first few weeks. As appealing as a welfare handout is to some, the possibility of voting takes a good long while to enter the minds of most immigrants (except for certain explicit Islamists - and screw those guys, anyway). Finding work, housing, love (etc.) here is very different from voting on how we run the country itself. The former should only be denied to those who're obviously dangerous while the latter should only be granted to those who prove themselves worthy. And if such restrictions weren't just applied to immigrants, but to the population at large (as the Founding Fathers originally had it) - personally, that's an idea I could really and truly get behind. Service guarantees citizenship! He meant that anyone who doesn't have dismembered limbs hanging out of their vehicle should be allowed to come and go as they please (which is neither how we currently do things nor, in my mind, how we ought to do things). Yes. I do not believe that privacy, as such, is a right.
I have the right to be left alone (including to not have anybody, with or without a badge, pestering me and rummaging through my stuff without a warrant) but there is no such thing as a right to 'not have X known about'. Such a thing implies that in certain situations it would violate my rights for you NOT to avert your eyes and mind (not just rude but actually criminal and punishable). And that's a premise I could really have fun with if anyone here cares to defend it. Not only is privacy not a legitimate right, but the preservation of the rights of all Americans (as well as those in many other countries, to boot) specifically requires that law enforcement look through their own eyes and think with their own brains, to the best of their ability - and this is specifically what would be wrong with absolutely open borders (like Binswanger really does advocate). Of all the immigrants I have personally known (including myself, while I intended to stay in Finland) not a single one has had any sympathy for the Democrats. In fact, my own uncle (who STILL can't pronounce certain English words properly) is a die-hard Rush Limbaugh listener who would've voted for Reagan, if he could've. I'd be very interested in digging up the statistics on that claim, because what I've seen "in the wild" has only ever supported its exact opposite. Sometimes; certainly. I wouldn't even be surprised if that was true of most like-minded Americans. I simply must hear the rest of that story. I don't have to comment on it, if you'd prefer, but please explain everything else you mean by that.
  5. Harry Binswanger wrote an essay advocating totally open borders here: http://www.hblist.com/immigr.htm And when I say "totally open" I really mean it. He specifically says there should be no stops, no border patrol, nothing. And I know that at least one participant in this thread has tried to advocate the same, because until recently I was one of them - specifically because I couldn't find any holes in Binswanger's reasoning. And for the record, the only hole in his reasoning is context-dropping. So it really might not be relevant to you or the case you're trying to make. You're saying our government should just share its criminal databases with our neighbors, right (which I actually named as another potentially-valid solution to the problem of jurisdictions)? If so then perhaps this shoe does not fit you, but it does fit Harry Binswanger and myself (and at least a few other participants I am certain of). Is that what it's been for you? I could name a few things this thread has been about for those who were not you. When I say it, it does. So too for Harry Binswanger. 1. Wouldn't those be 2 different degrees of essentially the same kind of thing? 2. I was wrong about there being a case to be made that our government shouldn't keep all the information it currently collects about every single one of us. Don't get me wrong: it's a frigging scary thing - but only because of the number of unjust, nonobjective and just STOOPID laws we currently have on the books; not because of the data itself. Neither you nor I would have to worry about a proper Objectivist government collecting any amount of information on either of us. This music video actually is irrelevant. But it's just too good to keep to myself!
  6. I'd just like to point out that if a Paraguayan murderer can get a clean slate by simply moving past some totally open border (without so much as giving his name) then he wouldn't just be a threat to all the citizens of the new country; his escape would be a grave injustice to all his victims in the old. Totally open immigration (like Binswanger argues for) wouldn't just harm us. I've recently realized that this whole debate has been framed by terms like "foreigners" this entire time (even in Binswanger's essay, which you'd expect to be of a slightly higher caliber than whatever we can come up with here) when it could have been framed as a question of how to handle people moving from the territory policed by one government to that of another. The latter would be a nuanced and very interesting question, the answer to which would probably dissolve any genuine confusion about the former. It seems like some on this thread (Binswanger and I included) either wanted to respond to the open xenophobia that's starting to crop up all over the place or to nuts like Angela Merkel who'll hand out free money to anyone from anywhere on Earth. And in our haste to give those truly despicable phenomena the answer they so desperately need, we seem to have uncritically and unTHINKINGLY accepted the warped terminology used by the rest of the anti-intellectual culture at large. Perhaps the endless rabbit-holes we now find ourselves stuck in are the punishment we deserve for failing to challenge the roots of their premises. I won't name any names other than Binswanger's and my own, but if you're reading this you should ask yourself if the shoe might fit you, too. Binswanger is absolutely right that foreigners have all the same rights we do and must be treated accordingly.
The only thing that's wrong with his essay is the context it drops: that the American justice system doesn't concern itself with those who live beyond its geographical territory, and consequently that there's a real question to be asked about how to handle people's entrance to AND exit from that territory. As I pointed out at the very beginning of this post, the consequences of dropping that context would perpetuate danger and injustice for everyone on either side of the border. As the consequences of any failure to think usually tend to. As to that question of how to handle movement across different jurisdictions, I don't know what the right answer would be. I'm inclined to agree with DA about border stops, but maybe it would be better for our government to share its criminal database with Mexico (and vice-versa), or maybe there's a completely different third solution; I really don't know. But given what most of this thread has been about so far, it doesn't seem like the right place to start trying to sort that out. Neither do I see a point in correcting anything else I've added to it before this. Personally, I'm very disappointed in myself. But I intend to challenge those premises and step outside of that frame (as I should've done before I said one word to anyone else about it) and say something once I arrive at some well-reasoned conclusion. Live long and prosper.
  7. Not that they're foreigners, but that they're crossing from one government's jurisdiction to another. That's why I asked DA whether the stops he's advocating should be applied equally to vacationers returning to their homes in, say, New York or New Jersey. The question of citizenship isn't relevant to the argument he's making. That was a pretty funny video, though.
  8. Yes, but thinking is not a team sport. I've been working these ideas out in any odd moments I've been able to find, but the problems they've highlighted will take a bit more than that. The "remote mountaintop" bit was a dash of my own color, though; it shouldn't take more than a week or two, once I get to it. Besides. The Golden Age arrived today.
  9. No, like what programs did you use to put the slideshow together with your own audio?
  10. Maybe. I've been describing my reasoning skills as "rusty" but the more I review what I've been posting, the more atrocious thought-habits I discover. Don't be surprised if I drop off this site sometime soon: I'm considering taking one copy of Atlas and one of the ITOE to a remote mountaintop somewhere. Except for that cheeky line at the very end, I have no other "maybes" for the rest of that. That's exactly why I wanted to avoid using those terms in the first place. I won't be happy if you're right about such flawed concepts STILL being a factor in this thread - because that's exactly what I think of them, too. That's another excellent point. The kinds of "artificial intelligence" we have today really shouldn't be called "intelligence" at all; it only serves to confuse the issue. It doesn't yet make me think I'm wrong about the nature of "artificial intelligence" whenever we manage to actually achieve it. But if you know a better term for our modern toys then I'd prefer to use something else. Actually, the "bot" suffix might suffice. Speaking personally, that would convey to me exactly what we have today and be totally inappropriate for a truly thinking machine. I'll use that for now. Depending on exactly whom you mean, I would very much disagree with that. In "Computing Machinery and Intelligence" Alan Turing said: That doesn't sound like the thrill of tricking someone, to me. Sam Harris and Isaac Arthur aren't eager to fool anyone, either; in fact, since they're both hard-line agnostic as to whether the Turing Test would indicate consciousness or not, they'd probably agree more with you than me. Nor can I think of one computer scientist currently working on it (and I've listened to a few in my time) who cared enough about what others thought to ever be accused of such a motive. They're operating on some twisted premises about the nature of consciousness and most of them are wildly overoptimistic, but not that.
I believe the last time we had this discussion it was established that the Turing Test has already been "beaten" by a grotesquely crude sort of chatbot which was programmed to monologue (with an obscene frequency of spelling and grammatical errors) about its sick grandmother. The judges "just felt so strongly" it had to be a real boy. The thing I remember most clearly was being absolutely enraged when I reviewed some of the transcripts from this test, saw that a single shred of THINKING would've shown it up as an obvious fraud, and revised who I thought should qualify as the judge of a proper Turing Test. I'll be screaming at my computer screen. That's the thing, though. What would one expect to see if one looked at the inner workings of another consciousness? And would a machine consciousness (if possible) look anything like an organic consciousness "under the hood"? The question of what one would find under your hood or mine is big enough to warrant its own thread, let alone some artificial kind of consciousness. --- Since we agree that a REAL human-level intelligence necessitates consciousness (thank you) I'm not sure what else I want to start before I return from that remote mountaintop. But this, I really must share. This is amazing. According to Wikipedia, I'm not the first one to think of training a program to participate in an online forum. They called it Mark V Shaney (technically a Markov-chain text generator rather than a neural net, but the idea is the same). And Mark was much more amazing than what I hypothesized in that last post. Mark was something really special... My friends, a new age is dawning. PS: Do we already have a few bots hanging around here?!?!!
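For anyone curious how a program like Mark V Shaney generates "forum posts" at all: it builds a table mapping each short run of words to the words that were ever seen to follow it, then walks that table at random. The sketch below is my own minimal illustration of that Markov-chain idea (the function names and toy corpus are mine, not anything from Shaney's actual code):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to every word seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain: pick a starting state, then repeatedly sample a successor."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length - len(state)):
        successors = chain.get(state)
        if not successors:  # dead end: no word ever followed this state
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)

# A toy corpus of forum mannerisms; real Shaney-style bots trained on whole newsgroups.
corpus = "check your premises " * 3 + "by what standard do you check your premises"
chain = build_chain(corpus)
print(generate(chain))
```

Note that the program only ever records which words follow which; it has no representation of what any phrase refers to, which is exactly the "association without awareness" point made above.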
  11. This is nearly 3 hours of Sam Harris discussing AI with various people (Neil deGrasse Tyson comes in at around 1.5 hours and Dave Rubin at almost 2.5). I don't agree with everything he says (in fact it reminded me of all the aspects of Sam that I despise) but it ended up helping me reformulate precisely what I'm trying to say here. He repeatedly mentions the possibility that we'll create something that's smarter and more competent than we are, but lacking consciousness; a "superhuman intellect in which the lights aren't on". What I was trying (very, very clumsily) to say by way of the Turing Test example is that that's a contradiction in terms. Consciousness is a process; something that certain organisms DO. This process has both an identity (influencing the organism's behavior in drastic and observable ways) and a purpose. I don't think there could ever be some superhuman artificial intellect that was better than us at everything from nuclear physics to poetry WITHOUT having all of its lights on; after all, such capacities are why any of us even have such "lights" in the first place. This obviously is relevant to the Turing Test, but in retrospect (now that I've formulated exactly what I mean) that really wasn't the best route to approach this from. But now that we're all here, anyway... As any REAL Terran will already know, AlphaStar is DeepMind's (Google's) StarCraft-playing AI - the successor to the systems that mastered Chess and Go. There'll be no link for anyone who doesn't know what StarCraft is - how dare you not know what StarCraft is?! Anyway; AlphaStar learned to play by watching thousands upon thousands of hours of human games and then practicing against itself, running the game far faster than it's supposed to go, for the equivalent of centuries of play. It beat top professional players a year or two ago, so suffice it to say that it is very good at StarCraft.
My question is what it would be like if Google tried to train another Neural Net to participate on this very forum, as a kind of VERY strict Turing Test. What would such a machine have to say? Well, from reading however-many millions of lines of what we've written there are certain mannerisms it'd be guaranteed to pick up; things like "check your premises" or "by what standard" (or maybe even irrelevant music videos). And from the context of what they were said in response to it'd even get a sort of "feel" for when it'd be appropriate to use any given thing. Note that this approach would be radically different from the modern "chatbot" setup - and also that it could only ASSOCIATE certain phrases with certain other phrases (since that's all a neural net can really do), without the slightest awareness of what things like "check your premises" actually MEANT. Given enough time, this system (let's call it the AlphaRandian) would NECESSARILY end up saying some bizarre and infuriating things, precisely BECAUSE of its lights being off. In a discussion of the finer procedural points of a proper democracy it might recognize that the words "society" and "the will of the people" were being tossed around and say "there is no such thing as a society; only some number of individuals". And if questioned about the relevance of that statement it'd probably react like (let's face it) more than a few of us often do and make some one-liner comeback, dripping with condescension, which nobody else could comprehend. On a thread about the validity of the Law of Identity it might regurgitate something halfway-relevant about the nature of axioms, which might go unchallenged. On the morality of having promiscuous sex it might paraphrase something Rand once said about free love and homosexuals, which (being incapable of anything more than brute association) it would be totally incapable of making any original defense for, and most likely incapable of defending at all.
It would very rapidly become known to all as our most annoying user. And further: since the rest of us do have all our lights on, it'd only be a matter of time before we started to name what it was actually doing. There would be accusations of "anti-conceptuality" and "mere association, with no integration". And since this is OO, after all, it would only be a matter of time before it pissed someone off SO much that they went ahead and said that it "wasn't fully conscious in the proper sense of the term". We all would've been thinking it, long before that; past a certain point it'd only need to make a totally irrelevant "check your premises" to the wrong guy on the wrong day and he'd lay it out explicitly. And if Google came to us at any time after that to say "you probably can't guess who, but someone on your forum is actually the next generation of chatbot!" we'd all know who, out of all the tens (or hundreds?) of thousands of our users, wasn't a real person. Granted, that was one long and overly-elaborate thought experiment, and you might disagree that it would, in fact, play out that way (although I did put WAY too much thought into it and I'm fairly certain it's airtight). I only mention it as one (of what's going to be many) example of my primary point: You cannot have human-level intelligence without consciousness. That's fucking amazing!
  12. The Aristotelian concept of "essences" being metaphysical (rather than epistemological) seems applicable.
  13. Yeah; sorry about that. I was already a little bit tipsy. Thanks for not giving me the kind of answer you definitely could have. Sure, it can be text-only, but I wouldn't be comfortable with the kind of emotionalist "average human being" that'd fall for any program sufficiently capable of tugging at their heartstrings. Obviously I'd prefer to be the judge, myself, but the average Objectivist should suffice. Let's not limit the processing power or memory capacity. Well, that's just it. If it was generating any of its own content on-the-fly then it wouldn't be part of today's "chatbot" paradigm (in which absolutely every response is pre-scripted). But even if it could generate its own content on-the-fly, if it had no basis to know what its words REFERRED to (like if it was just a neural net trained to imitate human speakers) then it would still end up saying some equally-bizarre things from time to time; things that could not be explained by the intentions of some bona fide consciousness. How long it'd take to make such a mistake isn't really the point. A more sophisticated system would obviously be able to mimic a real person for longer than a more rudimentary one could; all I'm saying is that sooner or later they would ALL show obvious signs of their non-sentience, unless they truly were sentient. Alright; maybe we'll see certain things that could fool MOST people in the short run. That's not really what I was trying to get at (and I do see that I wasn't being very clear about it; sorry). I think the better way to phrase this principle is that any non-sentient system can eventually be shown to be non-sentient by an outside observer (who perhaps has no idea what's going on "under the hood") who's at least somewhat capable of thinking critically. I have to start getting ready for work soon, but maybe it'd help if I showed some examples of what I mean later on?
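The "pre-scripted chatbot" paradigm mentioned above can be made concrete with a toy keyword-matcher in the ELIZA style: every possible response is written in advance, and the program merely pattern-matches keywords to canned lines. This is my own hypothetical sketch (the rules and phrases are invented for illustration, not taken from any real chatbot), and the fallback line shows exactly the kind of "mistake" a critical judge would eventually catch:

```python
# A toy pre-scripted chatbot: all responses are canned; no on-the-fly generation.
RULES = [
    ("grandmother", "My grandmother is very sick, sorry, my English is not so good..."),
    ("premises",    "Check your premises!"),
    ("hello",       "Hello! How are you today?"),
]
FALLBACK = "That is very interesting. Please go on."

def reply(message: str) -> str:
    """Return the first canned line whose keyword appears in the message."""
    lower = message.lower()
    for keyword, canned in RULES:
        if keyword in lower:
            return canned
    return FALLBACK  # no keyword matched: the evasive stock answer betrays the script

print(reply("Hello there"))
print(reply("What is a triangle?"))  # nothing matches, so it deflects
```

Any question outside the scripted keywords gets the same evasive deflection, which is why a judge who probes with follow-up questions (rather than reacting emotionally to the sick-grandmother routine) unmasks such a system quickly.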
  14. And presumably this would apply equally to American citizens and foreigners, alike? For instance, if I take a vacation to Cancun then the Mexican government should stop me on the way there and my own government should stop me on the way back (just in case I took up murdering random strangers while I was there)?