Objectivism Online Forum

All Activity

Showing all content posted in the last 365 days.


  1. Past hour
  2. Sorry if that was an unfair characterization of what you meant to say. When I read about stopping "all those who cross", that sounds like random searches to me. Well, maybe not literally random people, since it is everyone, but if it isn't selective, it's arbitrary. Presumably, when you say stop, you mean stopping people so that you can inspect or search something. I don't think you would mean watching license plates and asking people to slow down so you can look at them. When you say stop, I think of a border patrol agent telling me to stop my car, asking where I'm going and why I'm going there, and otherwise intruding upon my privacy by threat of arrest or some kind of punishment. Maybe let's forget the word "search" and focus on inspection. I'm leaving aside asking questions, because it requires no physical interaction (unless stopping means asking me to leave my car and saying I must go into a building for questioning). To sum that up, I'm claiming that stopping all who cross without a standard of selection is arbitrary and therefore random. With a standard of selection, it would be selective, and then I'm fine with it. That is, a systematic and selective way to decide who to stop is perfectly fine to me. I feel like I'm missing something about your question. Fair enough. I would say though that having such a database is conducive to protecting individual rights as a procedural matter (like you said, procedure matters). Interpol is the best example I can give, but I don't know enough about it to discuss it. How does anyone know if someone is searching for him? If law enforcement fails to pass along information, or US law enforcement fails to communicate with foreign law enforcement, the failure is on law enforcement. The fix isn't for the border to stop everyone who crosses. Yeah, stopping everyone who crosses the border would solve the problem, but so would data sharing and communication.
If government and law enforcement fail to operate efficiently and effectively, they will fail to protect your rights. Stopping all who cross is not efficient, it doesn't solve communication problems, and it has a huge amount of tension with individual rights (as in, with major contentions even if you might be able to justify it). It's not the process of checking that matters, I think. I'm talking about the initial information that is perceptually available to you. Cameras would probably be appropriate because it's still basically the perceptual stuff you do. But facial scanning is not even close to what you do perceptually, because it takes into account things you literally are unable to note on your own. It's not the database comparison that bothers me, but that the information you use for the comparison is derived from something you can reasonably expect to be private in day-to-day life. Reasonable, as in the things people pick up just by being conscious. That's how you get prior justification, by the way. You make simple observations to start. Science works the same way. You don't start with complex procedures that take hours to do by hand or by computer. It's not justified to start by analyzing all the numbers. You start with basic things, like whether a participant followed directions, or what color the chemical turned when it was combined with another. From there, you are justified in continuing the investigation. But you can't start running tests that should only be done five steps down the line; I don't run an MRI just because you tripped on a step (even if it is possible that it's an early sign of ALS). Maybe you observe a certain chemical dumped in a nearby lake connected to a restaurant in town. Restaurants don't usually have chemicals to dispose of, so this is a strong justification for the possibility that the restaurant is doing something very bad (maybe poisoning food on purpose?). In the case of a border, maybe you see a bloodied arm hanging out the back of a car.
That's a very blatant example, but that's the sort of thing I'm thinking about.
  3. Today
  4. Were those select channels claimed illegitimately? If not, what do you call those properties? If so, why would you support them? Even if property was seized by eminent domain, I'm thinking it should still be a type of public property. It's stolen property, but our only choices are to give it back to the rightful owner, let the political elites own it, or attempt to make it public property and insist that we (or our elected representatives in government) vote on its use. If it's not going back to the rightful owner, then making it public property seems like the better option. Unfortunately we are in a mixed political situation where some property rights are respected and some are violated. This makes dealing with concepts like "public property" difficult.
  5. SL had switched the context from a human being to human DNA. A human is an organism. Human DNA is part of an organism. The essentials of a human are his animalness (genus) and his rational faculty (differentia). The essentials of human DNA are its DNAness (genus) and that it's a part of a human (differentia). The differentia can't be that it has a particular atomic structure. Everything has a particular atomic structure.
  6. Er... no? I mean, I'm aware of the literal difference involved -- but I'm not convinced there's a difference in kind, vis a vis "privacy." I don't know why "unaided ability" should matter at all or be considered virtuous, or make the difference between what you contend to be a violation of right. And besides, the totality of checking a license plate also requires "aided ability" -- a computer, for instance, and telecommunications, and... a car.
  7. Ha. Strong words. Note, I said "long before"... Step back a bit. Let me ask some questions. Is your Turing test a text-only, no-peeking type of test, with average human beings doing the judging of who or what is on the other side? How long is your Turing test? 10 minutes? 2 hours? 1 day? What raw memory capacity, raw processing power, and brute pattern-associating, unthinking genetic or neural-net algorithms are you limiting your non-conscious aspiring impersonator to? How many people, stories, and conversations are you limiting your impersonating behemoth to? Is the blind, nonthinking system permitted to generate a random personal backstory, with events and words to describe thoughts and feelings and experiences reported as associated with those events (similar to what it observed others reporting about events and thoughts and feelings, etc.)? Is it allowed access to hours and hours of television, petabytes of literature? Is the internally silent monstrosity of a trainee corrected, in the patterns of what it reports it thinks and feels, through training and "cognitive" therapy? How many years of training and creation would it take for a sufficiently sophisticated zombie to take on what looks like a personality filled with history, and enough trickery to consistently and convincingly provide text messages over a short time span such that a person simply cannot tell who or what is on the other side? This is why I say long before... long before real consciousness is produced.
  8. One has to be at least capable of making a distinction between science – the scientific method – and rambling, and having a degree in an exact science helps enormously…
  9. FDR signing the declaration of war against Germany. (Image by Office of War Information, via Wikipedia, public domain.) I was a bit surprised -- although, perhaps I shouldn't have been -- when my third-grade daughter mentioned to me last Wednesday that she had learned it was "Patriot's Day." Oddly, I had never heard of this, although Congress did indeed pass a joint resolution declaring it so in December of 2001. (I would have preferred a declaration of war.) This she mentioned apropos of nothing in a crowded waiting room as I was picking her up after gymnastics. It took me a few questions to be sure she was talking about what I was afraid she was: I had never discussed the atrocities of September 11, 2001 with my children before, and am not sure how appropriate it is beyond a certain point to discuss them at their ages. (Or at least, given the way these things tend to be discussed, I think that is true.) But since the matter came up and was probably not framed properly, I gave her something close to the below essentialized description. I made sure to include the words evil and murder: About ten years before you were born, evil men who knew how to fly planes took over several of them. Then, on purpose, they flew them into buildings while people were at work. They murdered everyone in the planes and many in the buildings. She agreed, and then mentioned that that is why we have check-in lines at the airports. I decided to let the fairy tale of security theater go unchallenged for now, but I am overall satisfied that they know I regard what happened as a deliberate, evil act by evil men. Whatever my children end up believing about all this, it will not be because I failed to challenge the worst (i.e., altruist-collectivist-pragmatist) elements of our culture. They will know that I and others think otherwise, and at least have a lead on why.
The combination of international appeasement and domestic curtailment of freedom since that day is more damaging than anything a barbarian is capable of. And those things anger me. These measures are worse than inaction for their stated purposes, and they will, besides, make establishing and maintaining moral clarity about the war being waged against us into a challenge when it should actually be easy. -- CAV Link to Original
  10. I'm extremely glad to hear it. It's heartening to see we can at least agree on that much, from the get-go. And also that, in essence. I don't think we'll necessarily have to figure out what consciousness is before we replicate it. The history of science is littered with examples of people discovering new technologies before they fully understood how they worked (penicillin comes to mind) although it would be preferable if we didn't end up playing with such a powerful force before we knew what makes it tick (the phrase "a kid playing with his dad's gun" comes to mind). I also suspect it won't be as far in the future as you seemed to imply, there. I'd be surprised if anyone participating in this thread didn't live to see it happen. But those are both very minor differences, in the grand scheme of things; in essence we're already on that same page. That's very interesting, though, because it's exactly what I'd say about your position. Put yourself in the shoes of a chatbot programmer who's trying to handle the case of being asked "how do you feel?" You might program it to respond with "good" or "bad" - both of which open themselves up to be asked "why?" Now, a real person who was really reporting on their internal state would have absolutely no problem answering that question, but a chatbot-programmer would then have to think of a specific, concrete answer to "why" (and "how" and "I know what you mean" and etc), and then an infinitely-branching set of responses for whatever their interlocutor says after that. Anyone who grasps why lying cannot work in the long run will immediately see the problem with such an approach. I not only see that problem: I am saying that this problem is INHERENT to trying to tell a non-thinking AI some string of words to make it LOOK like real AI, and that the only solution there can ever be would be to do it for real. Speak for yourself, man. 
I seem to recall you weren't much of a programmer (at least as far as my memory of several years ago indicates), but if anyone reading this, at any point however-long-from-now, can propose any alternative approach besides sentience itself, you'll have my eternal (and extremely public) gratitude. I dare you. Because if the only possible approaches are "pre-scripted strings of text" or "true sentience", then I would love to demonstrate how to reliably falsify the former (i.e. show it for what it really is) every time, because it's really not that complicated. Not only can it be done, it should be done: it's very important for us to know when we've actually built a proper AI and when we haven't. Finally, for the record: we haven't. But for how much longer, I really couldn't say. P.S.: In The Fountainhead, the very first words Toohey says to Keating are "what do you think of the temple of Nike Apteros?" Keating, despite having never heard of it before, says "that's my favorite" (just like a chatbot might), and Toohey goes on talking as if that was the only answer he was looking for, briefly saying: "I knew you'd say it". There is a reason I'm so confident this wouldn't fool anyone who takes the time to learn how the gimmick works.
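The combinatorial problem with pre-scripted responses described in this post can be sketched in a few lines of Python. This is a hypothetical toy, not any real chatbot framework; every name in it is an illustration. The point it shows: each canned answer invites a follow-up question, so the hand-written script must branch again at every turn, and the number of replies the programmer must write grows exponentially with conversation depth.

```python
# Toy sketch of the "pre-scripted strings of text" approach.
# All names here are hypothetical illustrations, not a real chatbot API.

# Each scripted reply opens the door to a follow-up ("why?", "how?", ...),
# so the programmer must hand-write a new branch for every possible turn.
SCRIPT = {
    ("how do you feel?",): "good",
    ("how do you feel?", "why?"): "I slept well",
    # ... and so on, forever: every answer spawns more follow-ups.
}

def scripted_reply(history):
    """Return the canned reply for the conversation so far, or None
    once the hand-written script runs out -- which it always does."""
    return SCRIPT.get(tuple(history))

def replies_needed(follow_ups_per_turn, turns):
    """Count how many canned replies a programmer must write to survive
    `turns` exchanges with `follow_ups_per_turn` branches at each turn."""
    return sum(follow_ups_per_turn ** t for t in range(1, turns + 1))

print(scripted_reply(["how do you feel?"]))          # a scripted hit: good
print(scripted_reply(["how do you feel?", "how?"]))  # script exhausted: None
print(replies_needed(3, 10))  # even 3 follow-ups over 10 turns: 88572 replies
```

With only three possible follow-ups per turn, a ten-turn conversation already demands tens of thousands of hand-written replies, which is one way of restating the post's claim that the lookup-table approach cannot work in the long run.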
  11. This makes no sense to me. What on Earth did you really mean if it wasn't that the essential characteristic of a thing is where it came from? This might just be the rum talking (and I'm very sorry if it is) but I am very confused.
  12. I'll detail it more later, but is it not enough to say that reading a license plate number requires no aid beyond your eyes, your brain, and a little bit of writing, while facial scans involve and require a computer with sophisticated algorithms that go far beyond any human capability?
  13. No, more like claimed illegitimately if we are to call it public property. That isn't to say that there can't be select channels through which people pass, along with some buildings that are dedicated to law enforcement duties. My objection is to saying that strict regulation is justified because the land is public property. It's a bad justification. Therefore, border patrol isn't protecting public property. That's all I was saying. Okay, that's all I have to offer for now then.
  14. I think you are conflating the vast and deep complexity of consciousness (and the subconscious) with its vanishingly small and superficial surface appearances. The words we finally use to communicate what we think, feel, and experience at surface consciousness are nothing compared to what is actually happening when we think, feel, and experience. Making a non-conscious thing communicate words so as to sound like a thinking, feeling, experiencing human, although difficult, is laughably simple compared to making sure a complex system is and does what is necessary for an actual consciousness, which is thinking, feeling, and experiencing. There is more to a book, an iceberg, and a human... than what's on the surface... you have to look closely inside and beneath the surface to really understand... If everything about a conscious person thinking, feeling, and experiencing could be fully observed and understood... so that the waves of electrical and chemical activity, in sequence and by locality (and globally), could be fully understood, and what about them was important and how, we might know what kinds of complex appearances together are a sure indicator of consciousness in some other complex system... strings of words, my friend, do not cut it... non-thinking AI will fool us long before anything like "Real synthetic I" comes to be. I think an error of the rationalists in their theory of mind is the conflation of the products of the mind with what the mind is and is doing. The mind is doing a lot more than processing information, so much more that comparing a human brain with an algorithm is laughable. The Chinese room is an empty and meaningless toy of a rationalist. PS The zombie argument is a nonstarter with an Objectivist view of existence and identity. In principle there is EVERY reason to believe we will create a synthetic consciousness, once we understand scientifically what it really is... in the FAR future.
  15. The only spoiler I uncovered so far is that the author, a long-time atheist, turned to Catholicism. This, however, is a fact aside from the content of the trilogy. Perhaps by October, a second reading could commence. It has been an extraordinary and unexpected approach to sci-fi thus far.
  16. I don't know where I've advocated "random searches on some individuals." I've advocated stops at the border (not random, nor even selective, but all those who cross) for the purpose of gathering information -- though it's true that 2046 describes that as a "search," and I've not argued the point. If it is a "search," it is apparently not considered "unreasonable" by current jurisprudence. The basis of the reasonableness of the stop/"search" is what I'm trying to get at with the fact of changing jurisdiction. Sure. So suppose the DEA has a list of names, or even faces, of high-powered Mexican cartel figures. What then? If Gus Fring isn't stopped at the border at any point, if he simply drives on through, what exactly is the DEA supposed to do with that information? Except that American law enforcement won't have access to all of the data that Mexico does, let alone every other country. So far as I know, there's no worldwide criminal database: all of the information is compartmentalized, primarily by national jurisdiction. So in a way, it is very much like starting from scratch; if there's a wanted murderer in, say, Paraguay, who manages to cross through to the US/Mexico border (because he's never stopped, no one checks identification, etc.), and he makes it to the United States, then suppose he is later stopped for some traffic violation in Topeka. He gives his name as "Joe Smith." He has no other identification, and his fingerprints are not in the US record. Are Topeka PD going to track down his information all the way to Paraguay? How? As far as I can tell, getting across the US border is very much a clean slate for our Paraguayan murderer, because he has managed to cross from a jurisdiction where he was wanted (where the evidence of his crime exists, and with it probable cause for further search/seizure/incarceration) to a jurisdiction where he isn't at all known. 
So 2046 at least agreed that reading a license plate number and entering that into a database would not count as a "search" (or if it does, not an "unreasonable" one). But would you not go so far? Or if you think that's okay, what is the difference between entering a license plate number and scanning a face, to compare either against electronic records? What about using eyes to "scan" someone's face and see if they match a database (i.e. memory)? How do we begin the trail towards "probable cause" if every step along that path is itself an "unreasonable search" needing prior justification? Zeno makes for a poor police officer, and the gathering of information must start somewhere, via some generalized and warrant-free method.
  17. You already reconstructed it. You even quoted her argument, and I responded. We disagree. That's fine. Didn't say it was. The sections seized by eminent domain clearly weren't donated. That's true. Do you think all the government-controlled land along the border was taken forcefully? I'm not sure what you're getting at. Border patrol isn't trying to protect a non-existent thing. They're protecting people and their property inside the border. That's not really my focus here though.
  18. But not a Prius. That is not a car; it is a lunch box. I've ordered a copy that should arrive sometime in October. So no spoilers! I suppose so. I've been refining my thoughts on this over the past few days (it's been quite a while since I've tried to participate in this kind of conversation) and I think you're probably right about that. As right as it'd be to attribute "rationality", "personhood" and "individual rights" to any true AI (assuming, for the sake of argument, we actually managed to build one), calling it a member of "homo sapiens" regardless of what it's made of makes about as much sense as a trans guy declaring himself to be a female with a penis. You've got me there. That's certainly true. However, even if it's not actually possible to program "consciousness" into a computer (which is itself a somewhat dubious assumption since within our lifetimes we'll have computers -if memory serves- capable of simulating the whole human brain down to something like the molecular scale); even granting that, we could always grow the necessary organic components in a vat. We've already done it with rat brains. So although it's true that silicon might not be the appropriate material to use in our efforts to create AI, in the grand scheme of things that would represent at most a minor hiccup in such efforts. This is the part I don't entirely agree with. That infernal Chinese room. To start with, I'd like to avoid using the terms "input", "output" and "information" unless they're absolutely necessary. I think anyone who's read the ITOE can see how frequently our society abuses those infinitely-elastic terms today, so let's see if we can in the very least minimize them from here on out. Secondly, as much as I'd like to throw "simulation" into the same junk heap and be done with it, I don't think I can make this next point without it. So I'd like to mention something before I start trying to use it. 
The Identity of Indiscernibles is an epistemological principle which states that if any two things have every single attribute in common then they are the same thing; if X is indiscernible from Y (cannot be told apart from each other in any way whatsoever) then X is Y and we don't even need the extra label of "Y" because they're both just X. I bring this up because I recognize it as the words for the implicit method of thinking which I've always brought to this conversation as well as the basis for my conclusions about it. If it's valid then I'm fairly sure (mostly) that everything else I'm about to say must also be valid. I'd also like to point out that every single Neo-Kantian argument about philosophical zombies gets effortlessly disintegrated by the application of this one little rule. So it does have that going for it. I would agree with that - sometimes. A simulated car in a video game is obviously not the same thing as a real car. One of these can be touched, smelled, weighed and driven (etc) while the other can only be seen from certain very specific angles. The two things are very easy to distinguish from one another, provided the simulated one isn't part of some Matrix-style total simulation (in which case things would get rather complex and existential). I would even agree that a computer simulation of any specific individual's mind (like in Transcendence) would not be that person's subjective, first-person experience; i.e. it wouldn't actually be THEM (although my reasons for that are complicated and involve one very specific thought experiment). However, if a simulated consciousness could not be distinguished from an organic one (like if one were to pass the Turing Test) then by the Identity of Indiscernibles one would have to conclude that the machine was, in fact, conscious. 
It wouldn't be a traditional, biological kind of consciousness (assuming it hadn't been grown in a vat, which could be determined by simply checking "under the hood") but it would nonetheless be a true consciousness. Even if it was simulating the brain of some individual (like in Transcendence) whom it wouldn't actually BE, it would still be alive. In short, in most cases I would wholeheartedly agree that a simulation of a thing is not actually that thing (and could, in fact, be differentiated from the real thing quite trivially), but not in those cases of actual indiscernibility. It's that last example that I really take issue with. I don't know whether it's a case you'd actually make or not and I'm trying not to put words in your mouth. But while I'm on the subject I wanted to mention the Chinese Room objection to AI, partially because it looks vaguely similar to what you actually said (if you squint) and primarily because it annoys me so very much. The argument (which I linked to just there) imagines a man locked in a room with two slots, "input" and "output", who is gradually trained to correctly translate between Chinese and Japanese despite not understanding what a single character of either actually MEANS. This is meant as an analogy to any possible general AI, which implies that it couldn't possibly UNDERSTAND its own functions (no matter how good it gets at giving us the correct responses to the correct stimuli). First of all, one could apply the very same analogy (as well as what you said about merely "transforming information") to any human brain. What makes you think that I understand a single word of this, despite my demonstrable ability to engage with the ideas that're actually in play? Maybe I'm just the latest development in philosophical zombies. Second of all, the entire argument assumes that it is possible to correctly translate between Chinese and Japanese without speaking either of those languages, SOMEHOW.
As a programming enthusiast, this is the part that really gets under my skin about it - HOW in the name of Satan do you program a thing to do any such thing WITHOUT including anything on the MEANING of its actions? The multitude of problems with today's "chatbots" (and I can go on for hours about all the ways in which their non-sentience should be obvious to any THINKING user); everything that's wrong with them more-or-less boils down to their lack of any internal referents. The fact that they don't actually know what they're saying makes them say some truly bizarre things, at times; a consequence which I'd call inescapable (metaphysical) for any non-sentient machine, by virtue of that very mindlessness. The Chinese room argument frolics merrily past all such technicalities to say: "sure, it's physically possible for a mindless thing to do all those things that conscious minds do, so how can we ever tell the two apart?!" Finally, the Chinese Room argument is almost as shameless a violation of the Identity of Indiscernibles as the concept of a philosophical zombie is. I really wish I knew which Chinese room was the one in question so I could just torch the damn thing. It's so wrong on so many different levels.
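For reference, the Identity of Indiscernibles invoked earlier in this post has a standard second-order formalization; this is the general logical form found in logic textbooks, not anything specific to the AI debate:

```latex
\forall x \, \forall y \, \bigl[ \, \forall F \, ( F x \leftrightarrow F y ) \;\rightarrow\; x = y \, \bigr]
```

Read: if every property F that holds of x also holds of y and vice versa, then x and y are one and the same thing. The philosophical-zombie scenario posits two things agreeing in every discernible property yet differing in consciousness, which is exactly the situation this principle rules out, provided consciousness has any discernible effects at all.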
  19. Imagine yourself shipwrecked on that famed "desert island". Alone, obviously you can't have any more rights than a palm tree or a fiddler crab--i.e. no human, intrinsic, natural, or individual rights. Adapting to your environment and using nature to your ends, you do have, indeed, total freedom of action (from others), and still you'd have to be highly rational (moral) to survive and lead any sort of good life. Eventually, another person is washed ashore, then others. You have a "social context". You have a territory. Among you, you nominate some individuals to form a small judiciary and police force. You have a rudimentary government. That initial freedom of action towards your "good life" - which only other people can prevent and curtail - is established and guaranteed, for all. You have individual rights (of property, etc.). If many more persons arrived (on a boat), at a certain stage, if they wished to remain, you'd ask them to promise to abide by the island's individual rights and governance, or leave. They don't have an intrinsic or human right to enter and stay, merely by virtue of being human. And if you explored by boat and discovered another inhabited island, you could not expect and insist on those other islanders accepting and treating you by your own island's particular system of rights. Individual rights are a universal truth (by the standard of value, man's life), but that's not to say that they have been and are recognized or practiced universally. The preconditions of rights are the individual in "a social context", "territorial integrity", and "a" government which protects and preserves them. This last is crucial, I think.
  20. No Objectivist that I know is arguing that the government owns the land--as in all the land in the country. This is part of Binswanger's straw man, and it's not serving him or his followers well. I really have nothing more to add on that topic. If you think I'm confused about whether the government owns all the land, I don't know what else to tell you. Though, in my view, you might be confused about the purpose of government. Regarding property, and anything else of concern, the government is allowed to protect individual rights. I'm not sure why you worded the purpose negatively or focused on prevention.
  21. Yesterday
  22. Individual rights aren't contingent upon different social contexts. Rather, they are only applicable if you enter into some kind of social context. Rights don't need to be extended towards you, even though their defense and respect do. More specifically, individuals are protected or violated by an entity, not the rights per se. So when a person approaches a border, it is morally proper to treat them as an individual, and to respect their rights. To go towards DA's post, I agree that law enforcement needs information to perform its duty. But I don't see a need for there to be some special consideration at the border, as 2046 says. If information is needed, random searches on some individuals aren't really helping anything, even if they are easier compared to other methods. The DEA should be concerned with Mexican law enforcement and communicate with them in order to figure out who might be worth questioning. It's not as though by crossing jurisdictions everyone starts from scratch. It's actually a good reason to say that part of good government is good procedure. If there is good communication, there is no need to even approach violating rights, and information is available just in time. "I want to go over Mexican data first, so please wait here for 5 minutes" would make sense sometimes, if there is a known suspect from a Mexican drug cartel with a network of trucks carrying fried chicken batter for Los Pollos Hermanos, if that's what you mean, DA. But if you mean "please let me search you so I can then compare Mexican data to what I found, just in case", that's a suspicion coming out of nowhere, and it would be an unreasonable search. Either law enforcement has enough data already to be looking for something specific, or they have no reason at all to be searching. https://www.youtube.com/watch?v=NJYUs2UH0SM By the way, among unreasonable searches I am including using phones to scan someone's face and see if they match a database. Maybe more important here is a right to privacy.
No, I don't mean a procedural right to privacy, but privacy as an aspect of individual rights.
  23. Wouldn't that be great? Oh, wait, what's that? You have to have a degree in physics to submit anything? darn
  24. Please come back - with a link - when you succeed in publishing your theory in a serious (i.e., peer-reviewed) journal.
  25. Ha! Well, we’ll see how it goes. I just happen to be rereading the series now as well... 4th or 5th time?