- Posts: 622
- Joined
- Last visited
- Days Won: 80

necrovore last won the day on October 4
necrovore had the most liked content!
About necrovore
- Birthday 07/04/1975
Previous Fields
- Country: United States
- State (US/Canadian): Florida
- Chat Nick: Necrovore
- Interested in meeting: Looking for friends.
- Relationship status: Single
- Sexual orientation: Straight
- Copyright: Copyrighted
- Experience with Objectivism: I discovered Objectivism in 1997, read all I could about it, and promptly adopted it. However, I don't know if I'm very effective at advocating anything.
- Occupation: Second Assistant Bookkeeper Somewhere
Profile Information
- Gender: Male
- Location: Jacksonville, FL
- Interests: Programming (Scheme, F#, C#, C++, Forth, Java, Assembly), Music (Reason 11.0), Writing (Plot, Literary Theory, Science Fiction, Fantasy, Horror).
necrovore's Achievements
- Advanced Member (5/7)
- Reputation: 176
An Attempt At Formalizing the "Concept" Concept
necrovore replied to SpookyKitty's topic in Metaphysics and Epistemology
It sort of does, but they are all copies. Mathematical proof by induction works like this: First, prove that the statement is true for some small N, such as N = 1. Second, prove that, if it's true for N, it's true for N + 1. So if you've proved that a statement is true for N = 1, repeating the second step allows you to prove that it's true for N = 2, N = 3, and so on.

So technically there is a "data entry" for each natural number, although all the "data entries" are similar because you are using the same step over and over. Technically the deduction part only goes as far as you take it, but you can choose how many times to repeat it, and therefore how high N is. The induction part comes into play when you realize that, since you can make N as high as you want, the original statement has been proven true for all N.

That's the genius of mathematical induction -- it captures that induction in the form of a simple process that you can use over and over to try to prove or disprove different things. However, it is a special case. Not all inductions can be captured like that.
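A minimal worked example of those two steps, using the standard textbook claim that 1 + 2 + ... + N = N(N + 1)/2:

```latex
% Claim: \sum_{k=1}^{N} k = \frac{N(N+1)}{2} for every natural number N.

% Base case (N = 1):
1 = \frac{1 \cdot (1 + 1)}{2}

% Inductive step: assume the claim holds for N, then for N + 1:
\sum_{k=1}^{N+1} k = \frac{N(N+1)}{2} + (N + 1) = \frac{(N+1)(N+2)}{2}
```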
An Attempt At Formalizing the "Concept" Concept
necrovore replied to SpookyKitty's topic in Metaphysics and Epistemology
It requires access to the set of natural numbers, which is infinite, Q.E.D.
An Attempt At Formalizing the "Concept" Concept
necrovore replied to SpookyKitty's topic in Metaphysics and Epistemology
Peikoff said in his lectures about induction that he doesn't think induction can be done symbolically. Formal systems are usually only deductive. (The only exception I can think of is "proof by mathematical induction," and I think that's a special case.) In general, you can't write an "inductive syllogism." The system I described earlier in this thread (where concepts are described as functions) is incomplete because it doesn't include a mechanism for induction but rather shows how the results of that mechanism might be usefully organized.

An old TRS-80 can do deduction faster than we can. However, so far, the only way machines can begin to perform induction is through neural networks and such (which means they are emulating what we have been doing all along). It has only recently become practical to implement neural networks large enough to even try this.

Induction is very different from deduction in several respects, and one respect in particular is that induction requires access to a very large data set (whereas a deductive syllogism has only a few propositions). The size of the neural networks necessary to attempt induction is related to this.

I also think that induction can be a "trial and error" thing where it might take several attempts at forming a concept or a generalization in order to get it right. Inductive reasoning has to be checked against reality. There is also a feedback loop in play where, through experience (of reality), you can refine your concepts and generalizations to make them even more accurate.
An Attempt At Formalizing the "Concept" Concept
necrovore replied to SpookyKitty's topic in Metaphysics and Epistemology
An algorithm cannot access actual truth, but we have senses that allow us to inspect reality directly. (If an algorithm can be given such senses, and if it uses them, then it, too, can inspect reality directly.) The only way to prevent the infinite regress is to stop with the self-evident, which requires the use of the senses.
An Attempt At Formalizing the "Concept" Concept
necrovore replied to SpookyKitty's topic in Metaphysics and Epistemology
My thinking is that a concept is a function that takes an existent as its argument and returns "true" if the existent is an instance of the concept and "false" otherwise. So potato(x) is true only if x is a potato. The potato function would be implemented by looking at the properties of x (shape, size, color, weight, origin, etc.) and determining if they exist and if they are within certain limits. Some properties are not used at all in the definition of some concepts (e.g., the location of a potato is not used in determining whether or not it is a potato, but location is used in other concepts, such as "tourist"). Similarly, a potato may be expected to lack certain properties entirely, such as intensity or specificity, but few concepts are defined in terms of what they lack...

You could also consider a concept such as "potato" as a (potentially infinite) set of all the possible existents that could be instances of that concept; potato(x) is true if and only if x is a member of the set of potatoes. Technically a concept also includes everything else you know about potatoes, but this knowledge is not wrapped up in the definition. However, the concepts are key to keeping related knowledge together.

An implementation of such a function is not unique and can be changed around. For example, you might later get the concept vegetable(x) and save space by redefining potato(x) as "vegetable(x) AND ..." with other requirements, rather than copying all the requirements of vegetable(x) into potato(x). Some people might define vegetable(x) as "potato(x) OR carrot(x) OR broccoli(x) OR ..." but that isn't accurate, because it's at least conceivable to discover new vegetables, and yet doing so should not change the definition of "vegetable." But it is possible to look into the definitions of various vegetables and ask what properties they all have in common, and use these common properties to define vegetable(x). It can also be helpful to be able to find examples of abstract concepts such as vegetables, but these again are not part of the definition.

It's also possible for properties themselves to be described by concepts, so that shape(x) is only true if x is a shape. A shape doesn't exist "by itself"; it can only exist as a property of some other object, but we can isolate it for the purpose of thinking about it. (Sensations such as colors and smells can be sensed "by themselves," but even they have to belong to something.) There are also "contextual properties" such as randomness. (For example, if a potato is a random potato, that doesn't really say anything about the potato itself, but it says something about how the potato was selected from a group of potatoes...)

It's very important to try to minimize the amount of information taken up by all these concepts. If they become too complicated they are impossible to retain or use in a human mind. (A computer would not have this particular difficulty, but complicated concepts can also take longer to use and can be more error-prone in other ways. There are examples of self-driving cars where, if you put a sticker on a STOP sign, the self-driving system no longer recognizes the sign as a STOP sign, and might even mistake it for some other sign! This sort of thing happens if the definition is too complex and not based on essentials.) Accuracy is also critically important.
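A minimal sketch of the function idea in code -- the property names and numeric limits here are invented purely for illustration:

```python
# Sketch only: a "concept" as a predicate over an existent's properties.
# The property names and numeric limits below are made up for the example.

def vegetable(x):
    """True if x has the properties common to all vegetables (simplified)."""
    return x.get("kind") == "plant" and x.get("edible", False)

def potato(x):
    """Defined as the wider concept plus differentia, rather than by
    copying all of vegetable's requirements into potato."""
    return (vegetable(x)
            and x.get("grows_underground", False)
            and 50 <= x.get("weight_grams", 0) <= 1500)

# Location is deliberately not consulted -- it is irrelevant to "potato,"
# though it would matter for a concept like "tourist."

sample = {"kind": "plant", "edible": True, "grows_underground": True, "weight_grams": 170}
print(potato(sample))  # True
```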
I don't have a "collectivist approach to politics." But a political party is a group of people. I do believe that anything is better than totalitarianism. There are a few cases (including those you pointed out) where an individual politician is different enough from his or her party that a vote for that candidate could be regarded as different from a vote for that candidate's party. I would add Ron Paul to your list. But there are not many such cases, and I don't think that Trump would be one of them. In fact, most Republicans are rallying around Trump; he didn't get where he is by being unpopular (or by being "installed" there). I think that if Trump were confronted with evidence that his policies were failing, and he found that evidence credible, he'd change his policies. So not only does his party allow freedom of speech, it might even do some good. Maybe. I think it's his Democratic opponents who believe that they alone are the elites who can see through "appearances" (i.e., evidence) and grasp the Form of the Good.
-
There was a huge concerted effort to censor certain information off of Facebook and other social media sites; the existence of this effort has already been proved in court. Even though the Supreme Court decided that the original plaintiffs "lacked standing," a lower court has since ruled that RFK has the required standing. https://www.foxnews.com/politics/judge-rules-rfk-jr-sue-biden-administration-alleged-censorship-charity-questions-vaccines

Some Democrats have been talking about packing the Supreme Court or even "dissolving" it. https://jonathanturley.org/2024/09/29/lebowitz-calls-for-biden-harris-to-dissolve-the-supreme-court/

John Kerry recently singled out the First Amendment as a big obstacle to "governance." https://www.zerohedge.com/political/john-kerry-says-quiet-part-out-loud-first-amendment-stands-major-block-govern

The Objective Standard has published articles to the effect that FDR did use the FCC to force his opponents off the air. But the first article I could find about this was here: https://reason.com/2017/04/05/roosevelts-war-against-the-pre/

Of course Free Speech outside the USA is in even worse shape, e.g., Thierry Breton presuming that X hosting an interview of Trump would constitute the "amplification of harmful content." https://www.politico.eu/article/eu-warns-elon-musk-hate-speech-donald-trump-interview-breton-x/ (Breton has since been removed from his post for this but I think that was more for appearances; his general views on the "dangers" of free speech have not been disavowed.)

Further, there's the UK police commissioner who threatened to extradite US citizens for speech: https://www.foxnews.com/media/uk-police-commissioner-threatens-extradite-jail-us-citizens-over-social-media-posts-we-come-afte -- I don't think a Leftist government would object to such an extradition.

And writing and publishing a book critical of the German government can get you in all kinds of trouble: https://www.zerohedge.com/geopolitical/guilty-cj-hopkins-officially-hate-speech-criminal-germany
-
tadmjones reacted to a post in a topic: Dr. Peikoff on which party to vote for: GOP or Democrat
-
The Democrats have become completely intolerant of dissent or even disagreement. They see free speech itself as an obstacle to their goals. All the open dissenters (such as RFK) have been forced by the Democrats to join the Republicans. The result is that the Democrats present a clearer ideology than ever -- one of totalitarianism -- while the Republicans appear to be "losing their identity" because they are being flooded with refugees of all stripes who are being driven away from the Democrats. It's the totalitarian Democrats versus everyone else.
-
necrovore reacted to a post in a topic: Exploring the Foundations of Knowledge in Objectivism
-
Reblogged:Can More Work Cure Burnout?
necrovore replied to Gus Van Horn blog's topic in The Objectivism Meta-Blog Discussion
Why would it be, if you are just trying to alleviate burnout by doing a different type of work? All that's necessary is to know whether the same or different "parts of the brain" are being used.
Reblogged:Can More Work Cure Burnout?
necrovore replied to Gus Van Horn blog's topic in The Objectivism Meta-Blog Discussion
Brain MRI scans have established that different parts of people's brains are used for different things. So it's reasonable to guess that two completely different tasks might be done by different parts of the brain. It is not necessary to know which parts of the brain are being used.

It is also true that if you get tired of doing one type of thinking, you can resolve this by doing a different type of thinking. It is reasonable to hypothesize that this might be because one part of your brain "gets tired," so you can solve this by using a different part of your brain, by doing a completely different task. The "part of your brain" idea is useful as a metaphor even if it's not always physically accurate.

This is not brain surgery. I don't see the problem here.
AI and technological unemployment
necrovore replied to happiness's topic in Engineering & Technology
I suppose there is a problem: when I describe how I think dogs are thinking, I have to use those kinds of terms. That doesn't mean the dogs are using them -- they are not aware of their own thinking process.

If I hold out a treat in front of a dog, the dog salivates in anticipation, so in that sense the dog is aware of the future. However, the dog doesn't conceptualize it. It doesn't have the abstract concept "the future." It doesn't think in words. It's really more like the dog has a feeling, based on what it knows. The dog cannot describe this feeling, but I can.

This "pretty sophisticated conceptual and logical system" is not the dog's, it's mine; it's my attempt to describe what the dog is doing. The dog is not aware of any of this. (And on the other hand, if you were trying to build a robot as smart as a dog, it would have to be pretty sophisticated.)
AI and technological unemployment
necrovore replied to happiness's topic in Engineering & Technology
I guess my ad-hoc definition of "thinking" is: the ability to predict the results of actions before you take them, and to use that prediction to determine whether or not to take those actions. I have seen dogs dither in indecision, so I think they can think, although they don't have access to abstract concepts or language, so their thinking is perforce short-range and concrete.

Thinking requires perception. It requires at least having an intuitive understanding that there are entities out there in the world and that those entities are capable of reacting to your own actions in certain ways. So I don't think worms or jellyfish can think at all. They just respond to their sensations in an instinctive way.

Humans have the ability to form and use abstractions, including abstractions based on other abstractions, and this capability is tied in with language, and with the ability to understand the idea of one thing "standing for" another. This gives us a lot more predictive power and the ability to take more complex actions over time, as compared to a dog. It also allowed us to invent computation and computers.

"Intelligence" would be a qualitative measure of how sophisticated your thinking can be. Such sophistication allows greater accuracy, but it can also allow sophisticated mistakes such as Ptolemaic epicycles.

Thinking and computation are closely related, and it should be possible to achieve either one by means of the other. However, computation is limited in certain ways. Once you have proven that a computing mechanism is equivalent to a Turing machine, that's it; there are no more limits to be lifted. I think something related probably also occurs with thinking: once you have the ability to abstract from abstractions, there are no other limitations to be lifted.

Errors are possible in computation or thinking. A Turing machine can run forever without producing a result (although whether this is an "error" depends on what the purpose of the computation was), and human thinkers can accidentally form abstractions which are based on other abstractions "all the way down" and don't terminate in reality.

I think there is a certain mysticism around "super-intelligence" because of the cultural prevalence of the primacy of consciousness. But super-intelligence would just be more intelligence; it does not come with telekinesis or anything.
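To make that definition concrete, here is a toy sketch -- the one-number "world model" and the goal are invented purely for illustration -- of choosing an action by predicting its result first:

```python
# Toy sketch of "predict before acting": score each candidate action by
# what a (made-up) model of the world says would result, then pick the best.

def predict(state, action):
    """Stand-in world model: the state expected after taking the action."""
    return state + action  # placeholder dynamics, invented for the example

def choose_action(state, candidate_actions, value):
    """Pick the action whose predicted result scores highest."""
    return max(candidate_actions, key=lambda a: value(predict(state, a)))

# Example: starting at 0, prefer whatever lands closest to a goal of 10.
best = choose_action(0, [1, 3, 7, 12], value=lambda s: -abs(10 - s))
print(best)  # 12 (its predicted state, 12, is closest to the goal)
```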
AI and technological unemployment
necrovore replied to happiness's topic in Engineering & Technology
In that sentence I was using "an intelligence" metonymically to mean a "thinking being," whether natural or artificial.
Reblogged:Whoever 'Won,' America Lost
necrovore replied to Gus Van Horn blog's topic in The Objectivism Meta-Blog Discussion
It keeps going and going: https://www.zerohedge.com/political/union-makes-shock-claim-colorado-meat-factory-involved-mgmt-led-human-trafficking
AI and technological unemployment
necrovore replied to happiness's topic in Engineering & Technology
I can describe what a "dog" is; my description can be accurate enough to distinguish dogs from all other existents, but that doesn't mean I can make a dog (like by 3D-printing). That doesn't mean that "dog" is undefined or ill-defined, either. It means that a definition is not a full specification (and doesn't have to be).

In order to make a physical material, you need ingredients and a "recipe." New ingredients can be discovered, and new recipes can be worked out by trial and error. But we can't make something merely because we want it, and intelligence is not even the main factor.

The main reason intelligence is not a substitute for knowledge is that you can't deduce the behavior of the universe from "first principles." Actually, it's the opposite; the only way we can discover the principles of the universe's behavior is by observation. There are also a lot of necessary principles that aren't very abstract but are needed where they do apply, like information on how to make ceramic parts in the shapes you want, and the only way to get those principles is by observation and experimentation and testing. (You probably also want to know if your ceramic parts will be strong enough to handle the stresses they'll encounter in the applications you intend for them.)

These observations take time because the universe itself runs at a certain speed and only at that speed. (Setting aside relativistic effects, which seem only to work to one's disadvantage... there is no such thing as a free lunch.) A fast intelligence would be able to read books at high speed, but that would only teach it what we already know. At some point the books stop because you've reached the frontier of human knowledge. From that point on, you have to look at reality itself. (And second-hand information should be checked anyway, because it might be incorrect.)

Probably the reason why our brains run at the speed they do is that it would be suboptimal for them to run faster under most conditions: it would burn more calories and stress out our bodies, and only so we could get bored. (I think adrenaline can speed up a person's brain for brief periods, and this can help them make the right moves faster if they have to fight or something.)

From your chess analogy -- there is a point where you have unequivocally calculated the best move, and there is no further thinking you can do, so thinking faster becomes pointless, and you have to wait for your opponent to make his move. If you are trying to build physical objects, the "opponent" is reality itself (but it will let you win sometimes).

Nanotechnology has run into some walls as well. Forces are different at those scales. Bugs, for example, can walk on water because at that scale the surface tension of water is enough to hold a bug's weight. Scientists have tried making micro-motors, but the internal friction is comparatively bigger when the motors are smaller. They can also make tweezers that can grab individual atoms, but the atoms "stick" to the tweezers and there's no way to get them off.

Maybe a super-intelligence could grab information from dozens of disparate experiments and think of maybe ten or twelve ways to solve this problem. However, that doesn't even qualify as knowledge yet -- it's just speculation and hypotheses until it is verified by experimentation, which takes time. There can also be unforeseen side effects.