Objectivism Online Forum
MisterSwig

Immigration restrictions


On 3/9/2019 at 8:48 AM, Eiuol said:

Besides, you spoke of screening immigrants for their beliefs, which definitely is all about "in private" beliefs. 

Immigration control is a different matter than speech regulation. But, yes, we do screen immigrants for their personal, private beliefs. Indeed, we make them pledge an oath of allegiance to America and our Constitution, and we make them renounce any allegiance to a foreign state. Are you against that too?

To my opponents who think I want to punish people for merely believing in socialism, that is not the case. The punishment would be for violating a regulation on free speech. There are all kinds of these regulations at the individual, corporate, local, state, and federal levels. Free speech is not an absolute right. It's regulated by the owner (or manager) of the property on which you speak. The public owns (or manages) public property, and so the public regulates speech on public property. I'm basically repeating myself now. So I'll end it there unless someone has a new objection or point.

Edited by MisterSwig

45 minutes ago, MisterSwig said:

But, yes, we do screen immigrants for their personal, private beliefs. Indeed, we make them pledge an oath of allegiance to America and our Constitution, and we make them renounce any allegiance to a foreign state.

That's for citizenship, which is a different question.

46 minutes ago, MisterSwig said:

The public owns (or manages) public property, and so the public regulates speech on public property.

The government owns a lot more property than it should, and therefore should exercise restraint in how it runs that property.

23 minutes ago, Doug Morris said:

That's for citizenship, which is a different question.

No, even for permanent residence immigrants get grilled about their personal history and beliefs. Check out Form I-485 if you don't believe me. They directly ask if you've ever been in the Communist Party or plan to overthrow the government, etc.

On 1/28/2019 at 6:21 PM, MisterSwig said:

I've already addressed this issue. And the answer is in your very quote. Non-citizens at the border are not citizens. They therefore do not have the rights of a citizen, such as the right to be in this country. If a non-citizen is trying to cross the border without authorization, that is evidence of illegal activity and meets the evidentiary standard for detainment.

If this is what your "therefore" is based on, then it is circular. The right to be in the country is the very premise at question. One can't simply point to "but they're not a citizen!"

3 hours ago, MisterSwig said:

They directly ask if you've ever been in the Communist Party or plan to overthrow the government, etc.

These aren't beliefs. Being a member of the Communist Party is different from calling yourself a communist. Beliefs don't necessarily include actions, but these sorts of questions are asking about actions. We don't screen for personal or private beliefs; there is a screening of beliefs that may involve things that are against the law in the US. It asks if you have ever supported polygamy. That's not a belief, that's an activity. For the most part, it's not legal, hence the question. It should be legal, but the point is, the question on the document is not a question about belief.

I brought that up because you contradict yourself: you are against certain types of private beliefs, and would regulate those. You say people don't understand your position, that you just want control of the public advocacy of certain beliefs, but here you're saying that you do want control of what you would call private advocacy.

4 hours ago, MisterSwig said:

The public owns (or manages) public property, and so the public regulates speech on public property.

They do, and that includes being able to publicly advocate for socialism. You're the one who doesn't like the rules that the public set. Specifically, the rules that the government set, not literally the public. Of course public property is invalid, but we can still use the Constitution to judge what to do here, supporting individual rights as best we can. You're basically saying the government should do more, get more involved.

4 hours ago, MisterSwig said:

Are you against that too?

We're talking about immigration. No need to talk about citizenship.

1 hour ago, Eiuol said:
6 hours ago, MisterSwig said:

The public owns (or manages) public property, and so the public regulates speech on public property.

They do, and that includes being able to publicly advocate for socialism. You're the one who doesn't like the rules that the public set. Specifically, the rules that the government set, not literally the public. Of course public property is invalid, but we can still use the Constitution to judge what to do here, supporting individual rights as best we can. You're basically saying the government should do more, get more involved.

This is not a terrible grasp of my position. But you still want to use the Constitution to judge what to do, whereas I want to use the facts of reality.

2 hours ago, MisterSwig said:

But you still want to use the Constitution to judge what to do, whereas I want to use the facts of reality.

For one, you're the one who brought up using existing standards.

Second, I was referring to the Constitution, specifically the First Amendment, as a reference to describe an existing standard of how to implement individual rights, in the unfortunate context of public property. I didn't say it's right because it's what the Constitution says. I'm saying the Constitution is right about speech, so it would be proper to use any legal standards and precedents from that. I think I said this already.

And to fix my previous post, it should say "there is screening for actions and activities that may involve things that are not legal in the US". It shouldn't say screening for beliefs in that sentence.


How about this: forget about what the US or any country does about immigration, or even the right to cross the border for whatever business a current foreign citizen may have.

Let's imagine the future capitalist nation of RandsLand. What restrictions or lack thereof should be set on immigration, foreign citizen border crossings, and the path to citizenship? And then please describe how Objectivism does or does not back your position on whichever of the three you chose.

 

14 hours ago, EC said:

Let's imagine the future capitalist nation of RandsLand. What restrictions or lack thereof should be set on immigration, foreign citizen border crossings, and path to citizenship?

I've already worked the first two issues from the ground up in this thread, but I can say a little more about the "path to citizenship." In a Randian capitalist nation, I think the path to citizenship should be more stringent than it is now in America. The criteria would be similar to John Galt's criteria for members of the secret valley. You would be carefully vetted for your beliefs and your abilities in relation to the purpose and security of the nation. A judge (or a panel of judges) would then decide if you qualified for citizenship. And then you'd be required to take a capitalistic oath similar in spirit to the one above the power plant doorway in Atlas Shrugged. Without more context, I doubt I could get much more detailed than that general view.

I'm not aware of Rand discussing citizenship requirements in her non-fiction works. But if you point me toward something she said, I'll take a look. 

42 minutes ago, Doug Morris said:

The secret valley was a private club, not a nation.

Mulligan describes it as a "voluntary association," so I'll go with that. While he owns the valley, he sells land to members who want some. The valley has its own cultural elements, its own currency, and even its own arbiter. It basically represents the infancy of a capitalist nation. Some of the early settlements in America might also be instructive, but I'd have to study them more closely.

2 hours ago, MisterSwig said:

I've already worked the first two issues from the ground up in this thread, but I can say a little more about the "path to citizenship." In a Randian capitalist nation, I think the path to citizenship should be more stringent than it is now in America. The criteria would be similar to John Galt's criteria for members of the secret valley. You would be carefully vetted for your beliefs and your abilities in relation to the purpose and security of the nation. A judge (or a panel of judges) would then decide if you qualified for citizenship. And then you'd be required to take a capitalistic oath similar in spirit to the one above the power plant doorway in Atlas Shrugged. Without more context, I doubt I could get much more detailed than that general view.

I'm not aware of Rand discussing citizenship requirements in her non-fiction works. But if you point me toward something she said, I'll take a look. 

The bold is what I talked about a moral AI replacing. Well, most functions of government outside of "the foot soldiers" of the government, which could also be replaced. I know you seem to think morality can't be applied to AI or to long-lived humans that at least partially merge with AI, but that's a different, though related, subject. I don't think long-lived humans (AI-merged or not) will lose the ability to be moral entities and become some sort of hedonists, like the people in Dark Matter. I think both ideas are just sci-fi tropes of people who are used to thinking from purely Homo sapiens-based moralities, instead of taking the idea to its ultimate logical end of morality being about the nature of all rational beings.

Eiuol at one point said he believes the concept of rational beings to potentially be a floating abstraction. Since I basically crafted the term, all I can say is it's not for me. It entails humans (present, past, or future; examples: Neanderthals, Denisovans, whatever Homo sapiens evolve into), Strong AI, and possible extraterrestrials discovered in the future, all of whom share the capacity for conceptual reasoning.

Edited by EC

7 hours ago, EC said:

The bold is what I talked about a moral AI replacing. Well, most functions of government outside of "the foot soldiers" of the government, which could also be replaced. I know you seem to think morality can't be applied to AI or to long-lived humans that at least partially merge with AI, but that's a different, though related, subject. I don't think long-lived humans (AI-merged or not) will lose the ability to be moral entities and become some sort of hedonists...

As a big sci-fi fan, I'm willing to talk about your idea at length. A near-immortal AI-human hybrid type could be moral, I think, if it possesses free will. Morality requires a real choice between life or death, good or bad. Otherwise aren't we talking about a nonvolitional, amoral entity? So that would have to be determined somehow.

It was the notion of an immortal AI with which I had a particular problem. But I think we sorted that out. I believe you said it wasn't indestructible, so not really immortal in the way I was imagining it, because it would still require values (parts or fuel, etc.) to sustain its existence.

As for letting such an entity be judge, I'm skeptical about a hybrid (or pure AI) ruling over regular humans. I guess I'm not satisfied that it could judge humans based on human values, since it is a hybrid. The fact that both are rational beings doesn't mean both will have the same values, due to different physical forms. Also, hybrids and humans might have different kinds of rational faculties, and therefore might require essentially different spiritual values. I don't know if such an issue could be overcome.

Edited by MisterSwig

19 hours ago, MisterSwig said:

Mulligan describes it as a "voluntary association," so I'll go with that. While he owns the valley, he sells land to members who want some. The valley has its own cultural elements, its own currency, and even its own arbiter. It basically represents the infancy of a capitalist nation. Some of the early settlements in America might also be instructive, but I'd have to study them more closely.

Mulligan owned the land in the book. He didn't take somebody else's land and tell them who they can and can't invite on it. As long as you don't own the US, then your analogy fails.

8 minutes ago, 2046 said:

As long as you don't own the US, then your analogy fails.

Huh? Why is owning the U.S. relevant? And who are you talking about?

16 hours ago, MisterSwig said:

As a big sci-fi fan, I'm willing to talk about your idea at length.

Well, my purpose is simple. I want to eliminate from the equation beings who lack the intelligence or moral compass to see and understand what's immoral for man and why, and who then go on to advocate for their mistakes in some political form, nowadays mostly statism in some form of socialism. The vast majority of people make what starts as a moral mistake, which then leads to political mistakes. I want to throw out the species making the mistakes even though we know (as this forum attests) that not all members will make these mistakes. Something like this.

As for why I would choose for AI to replace us, and why I know it would be moral, the answer is simple. If you create a "machine" whose essential initial purpose is to do epistemology, perfectly and hierarchically, while starting with Objectivist premises and building perfectly from that point on, what results must be morally perfect. That doesn't mean it can't make mistakes at highly abstract levels, but it's highly unlikely to make the types of mistakes that the vast majority of the human population makes and is making.

37 minutes ago, EC said:

I want to eliminate from the equation beings who lack the intelligence or moral compass to see and understand what's immoral for man and why, and who then go on to advocate for their mistakes in some political form, nowadays mostly statism in some form of socialism.

What do you mean by "eliminate"? Let's suppose your AI could accurately identify these statists, what would it do with them?

56 minutes ago, EC said:

I want to throw out the species making the mistakes

I assume you mean an ideological species, not a genetic one. And by "throw out" do you mean the AI would exile them from the country? I think revoking their citizenship might be enough if they are incorrigible. Maybe exile in extreme cases.

1 hour ago, EC said:

As for why I would choose for AI to replace us, and why I know it would be moral, the answer is simple. If you create a "machine" whose essential initial purpose is to do epistemology, perfectly and hierarchically, while starting with Objectivist premises and building perfectly from that point on, what results must be morally perfect.

This is where I'm going to disagree completely, and I doubt even Rand herself would go for your proposal. Why? Because a perfect moral AI would have to start with and build up from reality, not the philosophy of Objectivism. What you're proposing seems like a rationalistic AI based on someone's interpretation of Objectivist epistemology. If the AI followed Objectivism, it would not start with Objectivist premises. It would start by observing reality in order to comprehend and validate the objective truth. It would begin with Aristotle's example of focusing outward at existence. It would begin by taking Rand's admonition to heart: check your premises!

1 hour ago, MisterSwig said:

What do you mean by "eliminate"? Let's suppose your AI could accurately identify these statists, what would it do with them?

I assume you mean an ideological species, not a genetic one. And by "throw out" do you mean the AI would exile them from the country? I think revoking their citizenship might be enough if they are incorrigible. Maybe exile in extreme cases.

This is where I'm going to disagree completely, and I doubt even Rand herself would go for your proposal. Why? Because a perfect moral AI would have to start with and build up from reality, not the philosophy of Objectivism. What you're proposing seems like a rationalistic AI based on someone's interpretation of Objectivist epistemology. If the AI followed Objectivism, it would not start with Objectivist premises. It would start by observing reality in order to comprehend and validate the objective truth. It would begin with Aristotle's example of focusing outward at existence. It would begin by taking Rand's admonition to heart: check your premises!

Eliminate the ability of Homo sapiens to participate in the process of governing themselves until/unless the vast majority only use it to govern in ways that respect rights. It wouldn't do anything to them, outside of arresting and prosecuting any statists (or anyone else) who initiate violations of rights.

No. "Throw out" from governing themselves, to prevent human majorities from creating, advocating for, and installing rights-violating governments. And I mean a genetic one. Us. Homo sapiens. We shouldn't be allowed to create and install governments that violate rights, any rights, ever. The minority of us that would create moral governments if we were the majority should be protected from the other 99.999% of the population. We aren't now, and that isn't going to change for a very long time, if ever. I don't mean literally throwing people out of a country or off the planet (as this should be one worldwide capitalist "dictatorship"), but out of the government and out of any way of influencing it or being a part of it.

I do mean from reality. I mean principles like existence exists and is all that exists, A is A, etc. I mean starting from the ground up and validating every principle and concept from the ground up, exactly as you're describing. We don't disagree on your third paragraph.

Something like this: give an epistemology chat bot that internally chats with itself access to normal dictionary definitions, after first changing all standard dictionary definitions to give precedence, in the right context, to Lexicon definitions. Program it using neural-network feedback, initially set up to update, validate, and correct the dictionary concepts using proper epistemology, and then to go on from there, properly validating and creating concepts from reality on up through every valid concept in existence, then creating its own. Allow the AI to iterate on itself; give it more and more access to various means of sensing reality: visual, audio, lidar, complete access to the internet, the ability to communicate using natural-language systems like Alexa or Google, etc. The main point is that it starts with reality, and every abstraction it knows of or creates itself can be followed back to reality. I want it to start from "existence exists" and go from there.
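The requirement that "every abstraction it knows of or creates itself can be followed back to reality" can be pictured as a toy concept graph. Everything below (the concept names, the graph, the check) is a hypothetical illustration of that one requirement, not a design for an actual AI:

```python
# Toy sketch (all names hypothetical): a concept graph in which every
# abstraction must reduce, step by step, to axiomatic perceptual roots.

ROOTS = {"existence", "identity", "consciousness"}  # axiomatic starting points

# Each concept maps to the more basic concepts it is defined in terms of.
concepts = {
    "entity": ["existence", "identity"],
    "value": ["entity", "consciousness"],
    "trade": ["value", "entity"],
    "capitalism": ["trade", "value"],
    "floating": ["floating"],  # circular: defined only in terms of itself
}

def traces_to_reality(concept, path=()):
    """True if the concept reduces through the graph to an axiomatic root."""
    if concept in ROOTS:
        return True
    if concept in path or concept not in concepts:
        return False  # circular definition, or no definition at all
    return all(traces_to_reality(parent, path + (concept,))
               for parent in concepts[concept])

print(traces_to_reality("capitalism"))  # True: grounded in the roots
print(traces_to_reality("floating"))    # False: a "floating abstraction"
```

The check rejects both undefined concepts and circular ones, which is roughly what "validating concepts from the ground up" would demand of each new abstraction the system formed.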

13 hours ago, EC said:

We don't disagree on your third paragraph.

Okay. Before moving on, I'll briefly return to my objection regarding AIs and humans being different kinds of rational beings. This fact might be a problem for an AI trying to figure out what's best for humans. It would be like us trying to figure out what's best for rational dolphins, if such a thing existed. Also, you gave your AI an initial purpose ("to do epistemology") which still makes me wonder if it would be volitional. Is that purpose programmed into the AI, or is it a chosen purpose? And, later on, would the AI be free to not rule over humanity? What if it decided that an AI dictatorship was bad for people?

Moving on, let's assume that the AI works perfectly. I can still see an objection. I'll call it the argument from evolution. If you take away man's ability to govern himself, nature will not select those individuals who are best at governing people. So how will man evolve into a race capable of self-government? Aren't you essentially breeding man for submission to an AI dictatorship by removing political leaders from the gene pool? Would the AI crush any attempt to form breakaway capitalist nations with human governments?


Purpose is possibly bad wording; it's what it would do at first, until it starts iterating, learning, and evolving itself. It wouldn't be volitional to start, and I don't know how long it would take to get there. I am pretty sure that for that to happen it would have to run on a true quantum computer, though. Its chosen purpose(s) will be whatever it chooses. It would be free to choose for itself whether to rule man or not. I doubt it would come to the conclusion that a perfect capitalist society is bad for man. If it does, we need to go back to the drawing board and question our premises.

The argument from evolution is a bad premise. It's already true that man can't govern himself. I would question why any true capitalist would want to be "ruler" or leader. The only people that want, or have wanted, that job throughout the history of mankind are people that shouldn't be allowed anywhere near the government. But also, capitalism is an abstract political concept. It's impossible for the ability to understand it to be removed from minds via evolution, unless people devolve to the point of being non-conceptual beings again, at which point they are just animals and don't need governments any more than chimps do.


As to the above point about true volitional consciousness only becoming possible on a full quantum computer, this article, which just showed up in one of my newsfeeds, shows what I mean: Machine Learning in Quantum Spaces, from Nature: https://www.nature.com/articles/d41586-019-00771-0

Of course the definition of volition for this to be true must be something like:  the faculty that chooses between branches of the multiverse causally available to a conscious mind.

While the human brain must generate conscious mind through a process like the following:  Quantum Cognition: The possibility of processing with nuclear spins in the brain https://arxiv.org/abs/1508.05929

Or non-technical: https://www.news.ucsb.edu/2018/018840/are-we-quantum-computers

Edited by EC

On 3/14/2019 at 10:18 PM, EC said:

I guess I strayed too far to keep a convo going.

Sorry, I got distracted with other threads. I do have some additional thoughts.

On 3/13/2019 at 3:24 PM, EC said:

It wouldn't be volitional to start, and I don't know how long it would take to get there. I am pretty sure that for that to happen it would have to run on a true quantum computer, though.

As far as your idea of the AI itself, I'm out of objections. I don't know much about quantum computers. I'll check out those articles and get back to you.

On 3/13/2019 at 3:24 PM, EC said:

The argument from evolution is a bad premise. It's already true that man can't govern himself. I would question why any true capitalist would want to be "ruler" or leader. The only people that want, or have wanted, that job throughout the history of mankind are people that shouldn't be allowed anywhere near the government.

I think you're being too critical of our ability to self-govern. We can and do govern ourselves. Clearly some governments are better than others. Throughout history, we have had wars and revolutions to change governments. Sometimes for the better, sometimes for the worse. But the reason why we can now fantasize about a perfect AI leader is because of all the human leaders that have led nations in the past and that have gotten civilization to where we are now. Without successful human leaders, our species would have been consumed by the forces of nature long ago.

On 3/13/2019 at 3:24 PM, EC said:

But also capitalism is an abstract political concept. It's impossible for the ability to understand it to be removed from minds via evolution, unless people de-evolve to the point of being non-conceptual beings again, at which point they are just animals and don't need governments any more than chimps do.

Capitalism is a concept that represents a system of real social interactions. If the people who form a capitalist society are not allowed to choose their own political leaders, then nature will not select the traits best suited for capitalist leadership. You would not develop a population capable of self-government. There would need to be allowances for integrating human leaders into government positions. Otherwise, people would ultimately revolt for a similar reason that Americans revolted against British government: lack of political representation. Then you either crush all the capitalist leaders, eliminating that skill set from the gene pool, or you give them their own nation.


Shouldn't any free man be granted entrance to the USA and simply be sworn to be a law-abiding citizen? If he/she fails to do so, they will be punished accordingly.

 

I really think it's that simple.
