
About SpookyKitty

Content Count: 426
Days Won: 7
Rank: Member

SpookyKitty last won the day on April 24, 2018. SpookyKitty had the most liked content!

Previous Fields

Country: United States
Interested in meeting: No
Relationship status: Single
Sexual orientation: Straight
Experience with Objectivism: Atlas Shrugged, The Fountainhead, ITOE, Objectivism: The Philosophy of Ayn Rand, and various articles

Profile Information

Gender: Female
Questions About Concepts
SpookyKitty replied to [email protected]'s topic in Metaphysics and Epistemology
Abstraction from abstraction isn't relevant here, because what I described is an abstraction from concretes.

Not at all. My point was that what patterns someone can recognize in data is a function of the concepts they have, and not simply of perception.
Impossibility of God creating the universe
SpookyKitty replied to Veritas's topic in Metaphysics and Epistemology
This is a contradiction. There is always some property that some thing has from which we can deduce that another thing has it too.

Proof: Let g be "God", let u be "universe", and let P be any property. By the law of excluded middle we have both "P(g) or not P(g)" and "P(u) or not P(u)". By implication introduction, this gives "if 'P(g) or not P(g)', then 'P(u) or not P(u)'". Now define the property Q(x) as "P(x) or not P(x)". We have derived "if Q(g), then Q(u)". The proof of the converse is left as an exercise to the reader.
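For the skeptical, the tautology trick above can be checked by brute force over truth values. A throwaway sketch (the names Q and implies are mine):

```python
from itertools import product

def Q(P_x: bool) -> bool:
    # Q(x) := "P(x) or not P(x)" -- a tautology, true whatever P(x) is
    return P_x or not P_x

def implies(p: bool, q: bool) -> bool:
    # material implication
    return (not p) or q

# For every possible truth value of P(g) and P(u),
# "if Q(g), then Q(u)" holds -- and so does the converse.
for P_g, P_u in product([True, False], repeat=2):
    assert implies(Q(P_g), Q(P_u))
    assert implies(Q(P_u), Q(P_g))
```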
Questions About Concepts
SpookyKitty replied to [email protected]'s topic in Metaphysics and Epistemology
I agree, but that's not all that they are for.

To be fair, I misspoke here. The central component of Rand's theory is that you can extrapolate values of characteristics from the range of the "crow epistemology" (measurements whose values you can directly perceive) to the conceptual level (measurements you can't directly perceive, such as distances in light-years and the like). Rand's theory allows you to do this. My problem with it is that, since it only allows for the total abstraction of one characteristic at a time, the resulting concepts cannot encode complicated interdependencies among characteristics.

Rand did try to mitigate this issue to some extent by allowing for some unspecified functional relationship between a single characteristic (an independent variable) and all the rest (dependent variables) via the notion of an "essential" characteristic. But even this wouldn't be enough, since actual phenomena may not have any essential characteristic in the Randian sense. This happens all the time when you have feedback loops. When it does, engineers and scientists are forced to describe the behavior of such systems with differential equations and to characterize the systems by their solution sets. And these solution sets are abstract spaces! The spaces are then understood to be the essential defining features by which systems are classified. Note that in these cases the systems are not classified by any one of their measurements, nor by any combination of them. Indeed, they cannot be.

No, it has everything to do with Rand's theory of concepts, because it is precisely the concepts we have that make some datasets easily recognizable. A layman and a physicist may have the exact same perceptual apparatus, but data that is meaningful to the physicist might seem completely random to the layman. The difference is the physicist's far superior integration of lots and lots of physics and math concepts.

Exactly.
Which is precisely why Rand's theory of measurement omission cannot possibly be complete. 
Questions About Concepts
SpookyKitty replied to [email protected]'s topic in Metaphysics and Epistemology
@Eiuol I agree with a lot of what you said. I also think qualitative distinctions are more fundamental than quantitative ones.

@Grames @Easy Truth I will write out some rough thoughts I've been having and some research I've been doing on this subject. Hopefully it will make things clearer.

Imagine that you have two entities, where the measurements of the first entity with respect to some characteristics (that we care about) are (1.0, 1.0, 1.0) and the measurements of the second entity are (2.0, 1.0, 1.0). Now, at the end of the day, Objectivism allows you to form exactly one "big" concept from this data set:

A = (x, 1.0, 1.0)

where the use of the variable "x" means that the entities belonging to this concept must have some measurement value for the first characteristic, but may have any such value. But we can also use differentia to get many more "small" concepts by specifying ranges that the variable is allowed to take. For example:

B = ([1.0, 12.0], 1.0, 1.0)

means that the value of the first characteristic can be anything between 1.0 and 12.0, but the values of the other two characteristics must be exactly 1.0 and 1.0, respectively. So B is a subconcept of A. You can take concepts like these and make further restrictions to get subconcepts of subconcepts, for instance C = ([2.3, 4.6], 1.0, 1.0). And I don't think it would be too much of a stretch to say that you can take disjunctive sums of intervals to get even more complex concepts such as:

D = ([1.0, 12.0] + [26.5, 123.4], 1.0, 1.0)

where the notation "+" means that the entity is allowed to have values either in the interval [1.0, 12.0] or in the interval [26.5, 123.4].
Furthermore, by using these two entities in combination with others, again, I don't think it would be an unreasonable interpretation of Objectivism to say that you can have concepts that look like this:

E = ([1.0, 12.0] + [26.5, 123.4], y, [0.1, 2.6])

or even ones that allow for infinite collections of allowed intervals, like this:

F = ([1.0, 2.0] + [4.0, 5.0] + [7.0, 8.0] + ..., y, z).

If you spend some time graphing these examples, you will notice that all of the concepts formed by these methods look like rectangular prisms arranged in rectangular prisms arranged in rectangular prisms, etc., all of which have edges parallel to at least one of the axes. Furthermore, these are the only kinds of shapes that concepts in Objectivism are allowed to have.

I hope this makes clear what I meant when I said that Rand's theory of concepts is "merely classifying things". The problem with her theory is that there will always be real-world collections of entities that completely confound this sort of scheme. That is, when you plot the measurements of all the entities in the collection, they might form a shape which is very simple but which cannot be a combination of rectangular prisms. For instance, consider the collection of entities:

{(0, 2), (1, 1), (2, 0), (3, 1.1), (4, 2.1), (3.1, 3), (2.1, 4), (1.1, 3)}

Any combination of measurement omissions and restrictions to intervals will result in just two kinds of interpretations of the data. On the one hand, you will have very simple interpretations which underfit the data. And on the other hand, you will have very complicated and counterintuitive interpretations which overfit the data. And in no case whatsoever will you obtain a system of concepts which notices the super-simple underlying pattern you would have gotten had you simply plotted some points and used your spatial intuition to play connect-the-dots.
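If it helps, here is a throwaway sketch (the representation is my own invention, not anything from ITOE) that models such concepts as predicates built from axis-aligned intervals, which is all the "rectangular prism" picture amounts to:

```python
# A "concept" here is a tuple of constraints, one per characteristic.
# Each constraint is either the string "any" (a fully omitted measurement),
# an exact value, or a list of (lo, hi) intervals (a disjunctive sum "+").

def matches(constraint, value):
    if constraint == "any":
        return True
    if isinstance(constraint, list):   # disjunctive sum of intervals
        return any(lo <= value <= hi for lo, hi in constraint)
    return value == constraint         # exact measurement

def in_concept(concept, entity):
    # An entity belongs iff every characteristic satisfies its constraint.
    return all(matches(c, v) for c, v in zip(concept, entity))

A = ("any", 1.0, 1.0)
B = ([(1.0, 12.0)], 1.0, 1.0)
D = ([(1.0, 12.0), (26.5, 123.4)], 1.0, 1.0)

assert in_concept(A, (50.0, 1.0, 1.0))       # A omits the first measurement
assert in_concept(B, (2.0, 1.0, 1.0))        # B restricts it to [1.0, 12.0]
assert not in_concept(B, (50.0, 1.0, 1.0))   # so B is a subconcept of A
assert in_concept(D, (50.0, 1.0, 1.0))       # D adds the interval [26.5, 123.4]
```

Every concept expressible this way carves out a union of axis-aligned boxes, which is the limitation being argued above.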
Even though none of the entities in the above example have any measurements in common, by using your spatial intuition you can form a simple network of similarities:

(0, 2) ~ (1, 1) ~ (2, 0) ~ (3, 1.1) ~ ... ~ (1.1, 3) ~ (0, 2)

which your brain would immediately recognize as a one-dimensional loop. A one-dimensional loop is a notion that:

1) Is highly abstract, and can be applied to just about any data whatsoever to yield tons of nontrivial information about that data.

2) Is super easy to understand. It's almost concrete in how easy it is to understand.

3) Captures the essence of how the entities in the example are related in a very simple and accurate way, even though they are all different from each other.

I say "accurate" because saying that the data is described by a one-dimensional loop also implies a network of dissimilarities among the given entities. For instance, we can say that (0, 2) is dissimilar to (2, 0) because, on the one-dimensional loop, there is no shortcut from (0, 2) to (2, 0) which allows you to skip the entity (1, 1).

The process of fitting a manifold to a set of data points is studied in the field of persistent homology. In my opinion, Rand was trying to do something like this when she formulated her theory of concept formation.

Rand's theory also suffers from another problem which I've been trying to address. Basically, it smuggles huge portions of mathematics (at the very least the nonzero rational numbers), which themselves have highly nontrivial spatial structure, into the notion of measurement. This, in addition to the above, is why I don't find Grames' account of how concepts of space can be derived from measurement omission at all convincing. I believe that this problem can be remedied by claiming that the human brain comes equipped with a very small number of simple spatial ideas and operations which can be used to form any mathematical concept, including the concepts of logic.
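The connect-the-dots step can even be mechanized in a few lines: joining each point in the example to its two nearest neighbors recovers exactly the one-dimensional loop. (This is a crude stand-in for the persistent homology machinery, not an implementation of it.)

```python
import math

# The eight entities from the example above, as (x, y) measurements.
points = [(0, 2), (1, 1), (2, 0), (3, 1.1), (4, 2.1), (3.1, 3), (2.1, 4), (1.1, 3)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Connect each point to its two nearest neighbors.
edges = set()
for i, p in enumerate(points):
    nearest = sorted((j for j in range(len(points)) if j != i),
                     key=lambda j: dist(p, points[j]))[:2]
    for j in nearest:
        edges.add(frozenset((i, j)))

# Every vertex ends up with degree 2, and there are exactly as many
# edges as points -- i.e., the similarity network is a single closed loop.
degree = {i: sum(1 for e in edges if i in e) for i in range(len(points))}
assert all(d == 2 for d in degree.values())
assert len(edges) == len(points)
```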
All this has led me to investigate the theory of simplicial sets. These are very simple and very interesting mathematical gizmos that can encode combinatorial and topological information simultaneously. Furthermore, they constitute what is called a "topos" in category theory, which means that they are capable of serving as a foundation for all of mathematics. Additionally, every topos has its own internal logic (and these logics are, in general, higher-order intuitionistic type theories). So there is a conception of logic out there somewhere which can be derived entirely from spatial concepts. The main problem is that the standard theory of simplicial sets allows for simplices of arbitrarily high finite dimension, whereas the human brain can handle only 3. However, as it turns out, it's very easy to prove that simplicial sets restricted to at most 3 dimensions also constitute a topos. I am currently trying to figure out the ins and outs of all of this stuff, but I think that Rand's dream of a mathematical epistemology is on the horizon.
Questions About Concepts
SpookyKitty replied to [email protected]'s topic in Metaphysics and Epistemology
This is an important question that I've been thinking about as well. The more I introspect, the more I realize that my brain just doesn't work the way that Rand describes. For me, if I merely know how to classify a thing, I don't feel like I understand it. But if I can see its structure, the structures it forms in relation to other things, how that structure might be changed, and so on, only then do I feel like I truly understand it, and only then can one come up with truly non-arbitrary classification schemes.

As an example from algebra, if you were to just tell me that a group is a monoid in which every element has an inverse, I wouldn't understand the concept of "group". But if you were to then show me a rotating triangle and how those rotations relate to the group operation, then I would easily understand the concept of a group.

Even the concept of "concept" (the one that Rand describes) I currently understand primarily in terms of space. For instance, when thinking about the relationship of dogs to all other animals, I see a big circle in my mind labeled "animals", and within that circle a smaller circle labeled "mammals", and within that circle a smaller circle labeled "dogs". So I think that all of the concepts in my mind are spatial ones. Therefore, I strongly suspect that the process of concept formation is some kind of operation on spaces.

At the very least, this is true in my case. Other people's brains might work differently. Some people can't see anything in their mind's eye. Others don't hear an internal monologue. And some think only in terms of pictures of things they've actually seen. It's too simplistic to think that everyone's brain works just like yours does.

EDIT: I should probably mention some of the research on this topic that I've been doing. I've been studying category theory, and I think the idea of adjunctions may hold the key to concept formation.
By using adjunctions, one can "mechanically" derive significant mathematical concepts from totally trivial ones. I just need to find an interpretation of adjunctions that makes sense. 
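To give a flavor of what "mechanical" means here, the textbook case is the free/forgetful adjunction between sets and monoids: any function from a set X into a monoid M extends uniquely to a monoid homomorphism from the free monoid on X (lists under concatenation) into M. A rough sketch, with my own function names:

```python
# Free/forgetful adjunction, sketched concretely.
# The free monoid on a set X is the set of lists over X
# (operation: concatenation; unit: the empty list).
# The adjunction says: functions X -> M correspond exactly to
# monoid homomorphisms List(X) -> M, and the "free" direction is mechanical:

def extend(f, op, unit):
    """Extend f : X -> M to a monoid homomorphism List(X) -> (M, op, unit)."""
    def hom(xs):
        acc = unit
        for x in xs:
            acc = op(acc, f(x))
        return acc
    return hom

# Example instance: M = (int, +, 0), f measures word length.
f = len
hom = extend(f, lambda a, b: a + b, 0)

assert hom([]) == 0                          # unit maps to unit
assert hom(["ab", "c"]) == f("ab") + f("c")  # hom is fully determined by f
xs, ys = ["a"], ["bb", "ccc"]
assert hom(xs + ys) == hom(xs) + hom(ys)     # homomorphism property
```

The nontrivial concept (monoid homomorphism) is derived mechanically from the trivial one (a bare function); that is the pattern I am hoping adjunctions can capture for concept formation.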
Exploring Epistemology Through Type Theory
SpookyKitty replied to SpookyKitty's topic in Metaphysics and Epistemology
Yes, that seems like a very important distinction to keep in mind. The only thing I would change is to avoid naming this characteristic "constructible" and the values "0" and "1", and instead use the terms "hypothetical" and "actual". An entity with the value "hypothetical" denotes an entity that has merely been imagined, whereas the value "actual" denotes an entity that has actually been observed. I'm sure there's a fancy word for this kind of distinction, but I forgot what it was. Is this what you have in mind?

Yeah, it would be absurdly hard, since then you would have to reproduce a human-like perceptual system. Instead, the point here is to represent entities as they are represented in conscious reasoning, and, during the course of conscious reasoning, you cannot conceive of a specific entity apart from its characteristics.
Exploring Epistemology Through Type Theory
SpookyKitty replied to SpookyKitty's topic in Metaphysics and Epistemology
Two characteristics A and B are commensurable if and only if they are identical as types. For example, Char1 is commensurable with Char1 and nothing else. Color is commensurable with color and nothing else.
Exploring Epistemology Through Type Theory
SpookyKitty replied to SpookyKitty's topic in Metaphysics and Epistemology
Characteristics are supposed to be anything you can directly perceive, for example color, texture, shape, etc. Their terms are the values of those characteristics, like red, rough, round, etc. I will make a more detailed submission soon.
Exploring Epistemology Through Type Theory
SpookyKitty replied to SpookyKitty's topic in Metaphysics and Epistemology
EUREKA! After several cups of coffee and another sleepless night, I now have an example of a second-order concept. Basically, this concept is a concept about first-order concepts. First-order concepts are those that are directly about entities at the perceptual level, such as the concept C above. The second-order concept I've cooked up, called Valid_1_Conc, determines whether a given first-order concept is actually valid or not according to known Objectivist Epistemology. Let's take a look at it:

Inductive Valid_1_Conc : (Entity -> Prop) -> Prop :=
| is_valid (F : Entity -> Prop)
    (H : exists (e1 : Entity) (e2 : Entity),
      F e1 /\ F e2 /\ ~(e1 = e2) /\
      (forall (x x' : Char1) (y y' : Char2) (z z' : Char3),
        (e1 = Kind1 x y z /\ e2 = Kind1 x' y' z') -> (x = x' \/ y = y' \/ z = z')))
    : Valid_1_Conc F.

First, note its type declaration, (Entity -> Prop) -> Prop. This concept takes abstract functions (of which valid first-order concepts are a subtype) and outputs propositions about them. Its constructor requires two inputs. First, a term of type Entity -> Prop. Second, a proof of the condition H, which, in English, says that the concept F must (1) accept entities e1 and e2; (2) the entities e1 and e2 cannot be equal (that is, there must be at least two distinct entities that the concept refers to); and (3) the two entities must share a value of at least one characteristic in common (that is, arbitrary collections of entities are not concepts according to Objectivism as it currently stands).

The beauty of this thing is that by proving propositions such as (Valid_1_Conc C), as we are about to do, we are precisely validating a first-order concept. Here is the theorem:

Theorem : Valid_1_Conc C

Proof. In order to prove Valid_1_Conc C we use the only constructor available, that being is_valid. "is_valid" requires an F of type Entity -> Prop, but that is already given as C. The next thing we need is a proof of H.
To prove an "exists" statement we must actually produce an entity with the required properties. We see that we actually require two such entities. Let e1 = Kind1 a d f and e2 = Kind1 a e f. Now we have to prove a total of four propositions:

(1) C (Kind1 a d f)
(2) C (Kind1 a e f)
(3) (Kind1 a d f) <> (Kind1 a e f)
(4) forall (x x' : Char1) (y y' : Char2) (z z' : Char3), (e1 = Kind1 x y z /\ e2 = Kind1 x' y' z') -> (x = x' \/ y = y' \/ z = z')

The proof of (1) is just the theorem in the previous post, so that's taken care of. The proof of (2) is just the proof of (1) with "e" chosen instead of "d" at the appropriate step. To prove (3) we assume that (Kind1 a d f) = (Kind1 a e f) and derive a contradiction. We do so by noting that this equality holds only if a = a, d = e, and f = f. Since the second of these is a contradiction, we're done here.

Proving (4) is not as hard as it seems. Let x, x', y, y', z, and z' be arbitrary values. The statement now reduces to "if (e1 = Kind1 x y z /\ e2 = Kind1 x' y' z'), then (x = x' \/ y = y' \/ z = z')". Note that e1 = Kind1 a d f and e2 = Kind1 a e f from the "exists" steps, so that the hypotheses now say

Kind1 x y z = Kind1 a d f and Kind1 x' y' z' = Kind1 a e f

From this we can immediately derive the six equations:

x = a, x' = a, y = d, y' = e, z = f, z' = f

and we only need to prove one of the disjuncts of x = x' \/ y = y' \/ z = z'. We choose the first one (though the third would work just as well, and the second would be impossible). Since x = a = x', it follows that x = x', and the proof is complete.
The automatically generated proof of this theorem is MONSTROUS, but I'll post it for the curious:

(is_valid C (ex_intro (fun e1 : Entity => exists e2 : Entity, C e1 /\ C e2 /\ e1 <> e2 /\ (forall (x x' : Char1) (y y' : Char2) (z z' : Char3), e1 = Kind1 x y z /\ e2 = Kind1 x' y' z' -> x = x' \/ y = y' \/ z = z')) (Kind1 a d f) (ex_intro (fun e2 : Entity => C (Kind1 a d f) /\ C e2 /\ Kind1 a d f <> e2 /\ (forall (x x' : Char1) (y y' : Char2) (z z' : Char3), Kind1 a d f = Kind1 x y z /\ e2 = Kind1 x' y' z' -> x = x' \/ y = y' \/ z = z')) (Kind1 a e f) (conj (is_a_C (Kind1 a d f) d f eq_refl) (conj (is_a_C (Kind1 a e f) e f eq_refl) (conj (fun H : Kind1 a d f = Kind1 a e f => let H0 := eq_ind (Kind1 a d f) (fun e : Entity => match e with | Kind1 _ d _ => True | Kind1 _ e _ => False | Kind2 _ _ => False | Kind3 _ => False | Kind4 _ => False end) I (Kind1 a e f) H : False in False_ind False H0) (fun (x x' : Char1) (y y' : Char2) (z z' : Char3) (H : Kind1 a d f = Kind1 x y z /\ Kind1 a e f = Kind1 x' y' z') => match H with | conj H0 H1 => let H2 := match H0 in (_ = y0) return (y0 = Kind1 x y z -> x = x' \/ y = y' \/ z = z') with | eq_refl => fun H2 : Kind1 a d f = Kind1 x y z => (fun H3 : Kind1 a d f = Kind1 x y z => let H4 := f_equal (fun e : Entity => match e with | Kind1 _ _ c2 => c2 | Kind2 _ _ => f | Kind3 _ => f | Kind4 _ => f end) H3 : f = z in (let H5 := f_equal (fun e : Entity => match e with | Kind1 _ c1 _ => c1 | Kind2 _ _ => d | Kind3 _ => d | Kind4 _ => d end) H3 : d = y in (let H6 := f_equal (fun e : Entity => match e with | Kind1 c0 _ _ => c0 | Kind2 _ _ => a | Kind3 _ => a | Kind4 _ => a end) H3 : a = x in (fun H7 : a = x => let H8 := H7 : a = x in eq_ind a (fun c : Char1 => d = y -> f = z -> c = x' \/ y = y' \/ z = z') (fun H9 : d = y => let H10 := H9 : d = y in eq_ind d (fun c : Char2 => f = z -> a = x' \/ c = y' \/ z = z') (fun H11 : f = z => let H12 := H11 : f = z in eq_ind f (fun c : Char3 => a = x' \/ d = y' \/ c = z') (let H13 := match H1 in (_ = y0) return (y0 = Kind1 x' y' z' -> a = x' \/ d = y' \/ f = z') with | eq_refl => fun H13 : Kind1 a e f = Kind1 x' y' z' => (fun H14 : Kind1 a e f = Kind1 x' y' z' => let H15 := f_equal (fun e : Entity => match e with | Kind1 _ _ c2 => c2 | Kind2 _ _ => f | Kind3 _ => f | Kind4 _ => f end) H14 : f = z' in (let H16 := f_equal (fun e0 : Entity => match e0 with | Kind1 _ c1 _ => c1 | Kind2 _ _ => e | Kind3 _ => e | Kind4 _ => e end) H14 : e = y' in (let H17 := f_equal (fun e : Entity => match e with | Kind1 c0 _ _ => c0 | Kind2 _ _ => a | Kind3 _ => a | Kind4 _ => a end) H14 : a = x' in (fun H18 : a = x' => let H19 := H18 : a = x' in eq_ind a (fun c : Char1 => e = y' -> f = z' -> a = c \/ d = y' \/ f = z') (fun H20 : e = y' => let H21 := H20 : e = y' in eq_ind e (fun c : Char2 => f = z' -> a = a \/ d = c \/ f = z') (fun H22 : f = z' => let H23 := H22 : f = z' in eq_ind f (fun c : Char3 => a = a \/ d = e \/ f = c) (or_introl eq_refl) z' H23) y' H21) x' H19) H17) H16) H15) H13 end : Kind1 x' y' z' = Kind1 x' y' z' -> a = x' \/ d = y' \/ f = z' in H13 eq_refl) z H12) y H10) x H8) H6) H5) H4) H2 end : Kind1 x y z = Kind1 x y z -> x = x' \/ y = y' \/ z = z' in H2 eq_refl end)))))))
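Stripped of the proof machinery, the condition H amounts to a finite check, which can be sketched in ordinary code (an informal analogue over a finite universe, not the Coq development itself; all names below are mine):

```python
def is_valid_first_order_concept(F, entities):
    """Informal analogue of Valid_1_Conc: over a finite universe, F is
    'valid' if it accepts two distinct entities that share at least one
    characteristic value. Entities are tuples of characteristic values."""
    accepted = [e for e in entities if F(e)]
    return any(
        any(x == y for x, y in zip(e1, e2))   # shared characteristic value
        for i, e1 in enumerate(accepted)
        for e2 in accepted[i + 1:]             # distinct pair of referents
    )

universe = [("a", "d", "f"), ("a", "e", "f"), ("b", "d", "f")]

C = lambda e: e[0] == "a"   # the concept C: first characteristic has value a
lone = lambda e: e == ("b", "d", "f")   # accepts only one entity

assert is_valid_first_order_concept(C, universe)   # two referents, shared values
assert not is_valid_first_order_concept(lone, universe)   # only one referent
```

The Coq version does strictly more, of course: a proof of Valid_1_Conc C is a certificate, not a boolean.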
I've been working on an idea I had a while back that Objectivist Epistemology is equivalent to a fragment of Intuitionistic Type Theory. From The Stanford Encyclopedia of Philosophy:

I am not the first to notice that the Objectivist account of concepts is somehow linked to programming. Furthermore, through type theory we can go further and link up the Objectivist notion of concepts with logic in a formal, constructive, and algorithmic way. I hope to use type theory to eventually give a detailed account of higher-order concepts and logic consistent with Objectivism. I will now explain what I've discovered so far.

According to Objectivism, there are things called "entities" which can have "characteristics" or "attributes". The characteristics themselves can take on several different (finitely many) values which we observe through perception. We can represent characteristics as types quite simply in type theory. Imagine that we live in a very simple universe where there are only three possible characteristics, called "Char1", "Char2", and "Char3". Each one can further have one of a small number of values unique to that characteristic.

Inductive Char1 : Type :=
| a : Char1
| b : Char1
| c : Char1.

Inductive Char2 : Type :=
| d : Char2
| e : Char2.

Inductive Char3 : Type :=
| f : Char3
| g : Char3
| h : Char3
| i : Char3.

The first definition above says "Char1 : Type", which is read as "Char1 is of type Type" (more accurately, "Char1 is of sort Type", but don't worry about that for now). It further says that "a : Char1", i.e., "the term 'a' is of type 'Char1'". The term "a" represents the value "a" of the characteristic corresponding to the type "Char1". Similarly for the rest. Now, an entity can be represented simply as a list of the values of all of its characteristics. However, some entities are not commensurable, i.e., they don't have the same characteristics. To take that into account, we divide entities into several different kinds according to their characteristics.
Inductive Entity : Type :=
| Kind1 : Char1 -> Char2 -> Char3 -> Entity
| Kind2 : Char2 -> Char3 -> Entity
| Kind3 : Char1 -> Entity
| Kind4 : Char2 -> Entity.

The arrow notation in "Kind1 : Char1 -> Char2 -> Char3 -> Entity" says that in order to produce an Entity of Kind1, we need to list one term of Char1, then one term of Char2, then one term of Char3. Thus, the terms of the type "Entity" all look like the following examples:

Kind1 a d f
Kind2 e g
Kind3 b

... and so on. I assume that concepts can only be formed from entities that are fully commensurable, i.e., all of the same kind. The reason, though I have not proven it yet, is that mixing entities of different kinds within the same concept can lead to inconsistency. Here is an example of a concept:

Inductive C : Entity -> Prop :=
| is_a_C (e : Entity) (y : Char2) (z : Char3) (H : e = Kind1 a y z) : C e.

The symbol "C" is the name of the concept. Its type is "Entity -> Prop", and so a concept is an abstract function which takes entities as inputs and produces propositions (propositions are types in the sort "Prop") as outputs. The propositions that are produced are described by the last part, "C e". It says, essentially, "e is a C" (for a concrete example, "Socrates (an entity) is a Man (a concept)"). Since "e" must be an entity, then, concretely, the propositions produced by C all look like the following examples:

C (Kind1 a d f), i.e., "The entity 'Kind1 a d f' is a C"
C (Kind2 e g)
C (Kind1 b d f)

... Since "C (Kind1 a d f)" is also a proposition, and propositions are just types, what are the terms of "C (Kind1 a d f)"? The terms of the type "C (Kind1 a d f)" are the proofs of the proposition "C (Kind1 a d f)". The stuff to the left of the "C e" tells us how to construct the proofs of the proposition "C e". Specifically, it tells us that we need four things.
First, an entity e; then a value y of Char2; then a value z of Char3; and finally a proof H of the proposition "e = Kind1 a y z", that is, a proof that the entity e is identical to the entity (Kind1 a y z), where a is the value of Char1 (i.e., a constant) from before. The important thing to take away from all this is that a concept contains lots of nontrivial logical as well as computational structure. Let's look at an example of how a concept may be used. To do that, we prove the following theorem:

Theorem : C (Kind1 a d f)

which, in English, says that "The entity of Kind1 whose first characteristic has value a, whose second characteristic has value d, and whose third characteristic has value f is a C". We will prove this using a "backwards style" proof, one where we start with the conclusion and then work our way backwards to see what we would have to prove before we can prove it. During the course of the proof, we build up a "proof object", i.e., a term of the type C (Kind1 a d f).

Proof: Our goal is to prove that C (Kind1 a d f). To do that, we refer to the definition of the concept C. It says that the only way to construct a proof of the proposition C (Kind1 a d f) is to use the "is_a_C" constructor. The "is_a_C" constructor says that we first need an e of type Entity. It is already given to us as (Kind1 a d f), so our proof object so far is "is_a_C (Kind1 a d f)". The next thing we need is a y of type Char2. For this proof, we will use the value "d", as it is the only one that makes sense. Hence, so far we have "is_a_C (Kind1 a d f) d". We now need a z of type Char3. We choose the value "f" for the same reason as before, to get "is_a_C (Kind1 a d f) d f". Finally, we need H, a proof of the proposition "e = Kind1 a y z".
Since e is just (Kind1 a d f), and y is d and z is f (note that if we had chosen different values for y and z we would not be able to complete the proof), this amounts to saying that we need a proof of the proposition "Kind1 a d f = Kind1 a d f". But this is just true by the law of identity. The Coq implementation of the Calculus of Inductive Constructions provides a term for the law of identity as "eq_refl" (the reflexive property of equality). Hence, this is a valid proof of the proposition H. Our completed proof object is now (is_a_C (Kind1 a d f) d f eq_refl).

There are a few lessons to learn from this proof. First, it tells us how, exactly, a concept refers to entities. It does so by only allowing entities of the form (Kind1 a * *) through its proof procedure. If we had instead been trying to prove that C (Kind1 a e f), we could have simply chosen the value e for y and we would be able to complete the proof. Thus, this would prove that the entity (Kind1 a e f) is also a C, and we can say that the concept C refers to it. If, on the other hand, we had been trying to prove that C (Kind1 b d f), there would be no way whatsoever to complete the proof at the final step, since we would never be able to prove the contradiction "Kind1 b d f = Kind1 a d f". The same is true of entities like (Kind2 ...), since the proof would fail right at the start. Therefore, the concept C refers to the entities (Kind1 a * *), where the *s represent characteristics whose measurements have been omitted.

Second, it tells us that true propositions such as "C (Kind1 a d f)" will have at least one term in them, while false propositions such as "C (Kind1 b d f)" will be empty. Third, we can think of the constructor "is_a_C" as a program whose execution terminates if the inputs to it correspond to an entity e that C accepts, together with evidence y, z, and H that can prove the proposition (C e), and which never terminates otherwise.
We can even prove theorems like "~ C (Kind1 b d f)" with just what we have so far, using the type of reasoning above where we showed why C cannot accept the entity (Kind1 b d f), but I won't be doing so formally here, as the proof term that Coq provides is ridiculously complicated (though the proof procedure itself is dead simple). The machinery we have so far can even be extended to differentiation. For example, we can define a subconcept of C whose values in its third characteristic are restricted to the range of values h and i, as follows:

Inductive C_diff : Entity -> Prop :=
| is_a_C_diff (e : Entity) (y : Char2) (z : Char3) (H0 : e = Kind1 a y z) (H1 : z = h \/ z = i) : C_diff e.

The last part, (H1 : z = h \/ z = i), says that, in addition to the stuff we needed before, we also need a proof of the proposition that the value z is equal to either h or i. Here is a sample theorem:

Theorem : C_diff (Kind1 a d h).

Proof: Proceeding analogously as before, we construct (is_a_C_diff (Kind1 a d h) d h eq_refl (something)). To complete the proof, we need to provide a proof of H1. We note that z = h, and therefore H1 becomes (h = h \/ h = i). In order to prove an "or-statement" in the Calculus of Inductive Constructions, we need to choose to prove either the left or the right disjunct. Here we choose the left one, which we can prove easily by using eq_refl (and definitely not the right one, which is a contradiction). So our completed proof object is (is_a_C_diff (Kind1 a d h) d h eq_refl (or_introl eq_refl)).
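For readers who don't want to fire up Coq, the behavior of C and C_diff can be mimicked with plain predicates (a loose analogue only: it throws away the proof objects, which are the whole point of the type-theoretic version, but it shows which entities get accepted):

```python
from typing import NamedTuple

# Entities of Kind1 carry values of the three characteristics.
class Kind1(NamedTuple):
    char1: str
    char2: str
    char3: str

def C(e) -> bool:
    # C accepts entities of the form (Kind1 a * *):
    # char2 and char3 are the omitted measurements.
    return isinstance(e, Kind1) and e.char1 == "a"

def C_diff(e) -> bool:
    # The differentiated subconcept: char3 restricted to the range {h, i}.
    return C(e) and e.char3 in ("h", "i")

assert C(Kind1("a", "d", "f"))        # the theorem C (Kind1 a d f)
assert C(Kind1("a", "e", "f"))        # measurement omission in char2
assert not C(Kind1("b", "d", "f"))    # ~ C (Kind1 b d f)
assert C_diff(Kind1("a", "d", "h"))   # the theorem C_diff (Kind1 a d h)
assert not C_diff(Kind1("a", "d", "f"))
```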

SpookyKitty reacted to a post in a topic: Math and reality


Lojban.

Thank you for your intelligent and thoughtful contributions. You two are truly the bearers of a deep and enlightening philosophy. I will immediately file away these incredible insights alongside the deep wisdom I acquired from flat-earthers, creationists, and postmodernists.

As a more direct answer to William O from a more traditional Objectivist perspective: some concepts are formed solely through introspection. These are what Rand called "concepts of consciousness", and modus ponens is one of them. I am currently writing up a short paper which explains how the various concepts of logic ("and", "or", "not", "implies") are derived from introspection, and I will hopefully have something posted sometime tonight. I can't guarantee that I will get to modus ponens, but I can guarantee that I will get to the conjunctive and disjunctive introduction and elimination rules, the rule of assumption, and ex falso quodlibet.

SpookyKitty reacted to a post in a topic: What is the Objectivist explanation of how we know modus ponens?

I am currently working on a theory which aims to show that the formal theory underlying the Objectivist process of concept formation is something very similar to Per Martin-Löf's Intuitionistic Type Theory. If we understand Objectivist concepts as types, then a statement like A -> B says that there is a computable function which transforms any proof/construction of the concept/type A into a proof/construction of the concept/type B. The rule of modus ponens is then simply function application. If f : A -> B, then the term f a, where a is a proof/construction of the concept/type A, is a proof/construction of the concept/type B. Hence, from a proof of A and a proof of A -> B we derive a proof of B. One could then argue that the rule of modus ponens is somehow inherent in any process of computation. This is just what it means for a concept to be "axiomatic" in Objectivist terminology.
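Under this reading the rule really is one line of code. A sketch (the example instance is my own):

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def modus_ponens(f: Callable[[A], B], a: A) -> B:
    # A proof of A -> B is a function; a proof of A is a term.
    # Modus ponens is nothing but applying the one to the other.
    return f(a)

# Toy instance: read succ2 as a "proof" that if n is a number,
# then so is n + 2; applying it to 4 yields the witness 6.
succ2 = lambda n: n + 2
assert modus_ponens(succ2, 4) == 6
```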

Fundamentally, is there only ‘spacetime’?
SpookyKitty replied to A.C.E.'s topic in Physics and Mathematics
I'm sorry but this seems to just be a very clever restatement of General Relativity.