Objectivism Online Forum

Several outrageous propositions



Math Guy


It doesn't sound like much more than fancy astrology or numerology, at least in the places where you discuss numbers. In particular, it places special significance on certain numbers for no reason other than "it fits together!". All people make decisions based on what they know (even if what they know is simply that sacrificing 1 person every year leads to appeasement of the gods). So anything that occurs in history will be based entirely upon what people do or think. Any corresponding numbers would at best be coincidental.

That is hasty, to say the least. He is still introducing his topic.

So far it appears that the book project may be compared to The Golden Ratio: The Story of Phi, the World's Most Astonishing Number. Phi is not numerology, it is geometry. Hopefully there is an analogous theoretical reason behind Math Guy's new discovery. edit: I think that is where the maximum entropy reference was leading; we'll see.

Well, a little hasty, yes, but I was expecting more or less exactly this objection. This is where my Objectivist friends and acquaintances have parted company with me in the past, often with this exact phrasing: Anything that occurs in history will be based entirely upon what people do or think.

Eiuol, I don't think it's fair to drag astrology into this, as I've given you no cause to do that. But the comparison with numerology is not unreasonable.

As an aside to everyone else: Please don't recoil in horror if you don't get this right away. I'm not into numerology. The problem is that the principle of maximum entropy deals with the number of elements in a set, and the number of different states each element can take on. It is explicitly, and by design, a method of predicting behavior using nothing but rank in the set, i.e. a pure number. The Nth element behaves the way it does because it is the Nth element. So yes, it does sound rather like numerology. But there are plenty of precedents in science for attacking problems this way, which I think is Grames' point. It is no more unreasonable to make arguments based on rank order, than arguments based on geometry. Consider, for example, those experiments in quantum physics where a particle emits two photons in opposite directions, and measuring the first photon somehow influences the state of the second photon. That too is a theory predicting behavior based strictly on rank order. Reverse the order, and the 'second' photon now calls the tune, and the 'first' photon dances to it.

(Also, as it happens, the golden ratio phi actually comes into the story eventually, in the details of scale invariance, so the story is not simply about rank, but also about geometry to some degree.)

Here's the basic problem, which I think Eiuol has sensed correctly. If you all are kind enough to give me sufficient time and attention, I'm going to present a series of these curves. My next example is going to be from epidemiology. I have a bunch of interesting research I've done on the spread of H1N1, HIV-AIDS, and so on. Then I want to talk about participation in websites, which is a nice, concrete, tangible sort of case to deal with.

For each curve we can have a vigorous discussion and dream up plenty of local, anecdotal causes -- call them incentives, if that makes more sense -- that would cause people to choose to behave a certain way. We might not be able to prove that these were the incentives that people acted upon, but we can at least imagine them as being present. For example, the growth of agricultural output might make people richer, and thus ironically make kingdoms less stable, because they reduce the incentive to be loyal to the king.

However, as we consider one curve and then another, we are going to be left without a good argument for why they are all so similar. A local cause simply will not do. There has to be a transcendent cause, a meta-cause, something that operates everywhere regardless of context.

And that, as Eiuol observed, raises uncomfortable questions about free will. This common shape of curve cannot really be a cause, can it? It can only be coincidence. If the institution of monarchy was predestined to shrink to a pitiful vestige of itself after 1,000 dynasties . . . if EVERY dynasty was doomed to shrivel in similar fashion over time . . . and if participation in websites or in churches or on battlefields all follow this same curve . . . at some point we have to ask, isn't this determinism? Doesn't this presume that free will is being overridden by the imperative of the common curve shape?

It's a good question. One might say it is THE question, certainly for Objectivists. These curves are hugely useful in predicting customer behavior, or battlefield outcomes, or the spread of a disease. They are much better tools than what science is using now. But to adopt them without a thorough discussion of the philosophical implications would be risky.

The challenge here is that we have to talk coherently about large numbers of human choices forming a distribution curve, and yet still being free. If we can do that, then we can embrace all the practical insights that these curves provide, and not have to fear that we are undermining the idea of man as a rational, sovereign, autonomous being.

This was the challenge that I frequently failed to meet in the early years of my research. We'll have to see if I've learned anything from those early setbacks. More on this presently, after I take care of some minor points.



Isn't this fairly intuitive, though? For example, there may be a great many people willing to take the time to sign up for a forum membership, but only a few willing to post (in any given population). Those who are willing to post would reasonably be early adopters, therefore the ratio of posters to members starts out very high. As the late adopters, those not terribly interested in posting, start joining, the ratio of posters to members goes down. I just picked this one example from the several you provided, but I can see the same type of process at work in most of them: the general population is increasing, but the specific actors driving the events under study don't change - the equation gets bottom heavy. Of course, I could be missing something completely fundamental here.

Very interesting thread, Math Guy. Thanks for posting.

It's great the way people respond to my examples with just the right vocabulary. In fact, that terminology -- "early adopters" and "late adopters" -- comes from the work of Everett Rogers, and his curve is one of the many ad hoc curves I want to replace with my universal one.

Rogers made an assumption that is very, very common in science history. He assumed that the bell curve dominated the process of innovation adoption. He divided the population into groups according to their willingness to adopt a new technology or idea, basically taking slices of a bell curve.

This worked okay because most people took his categories as loose, descriptive metaphors, not as precise mathematical definitions. So relatively few people even know what percentage of the population is supposed to be "early adopters" and what percentage "late adopters". It doesn't matter. Most of the time we just wave our hands and use the terms without specifying any numbers.

But if you actually dig down into Rogers' scheme, it becomes clear that his assumption was totally arbitrary, and unnecessary. There's no bell curve evident in the process. Data from real historical adoption processes -- like the spread of the Model T car in America -- show that the decline in customer commitment follows a power law, not a bell curve. If you have 10,000 customers, they are generally willing to pay X dollars for a product. In order to secure 100,000 customers, the price typically needs to come down to X/2. The later customers simply aren't as keen, can't use the product in as many ways or for as many hours of the day. Their lower valuation is rational given their context. And so it goes, for the first million and the first ten million and so on.
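
To make that arithmetic concrete, here is a minimal Python sketch (the halving-per-tenfold numbers are just the rough figures quoted above, not exact data):

    import math

    # If 10,000 customers will pay X, and securing 100,000 customers
    # requires the price to fall to X/2, then price scales with the
    # number of customers N as price ~ N**alpha, where 10**alpha = 0.5.
    alpha = math.log(0.5) / math.log(10)
    print(f"implied demand exponent: {alpha:.3f}")  # about -0.301

    # Relative price predicted at each decade of market size:
    for n in (10_000, 100_000, 1_000_000, 10_000_000):
        print(f"{n:>10,} customers -> price {(n / 10_000) ** alpha:.3f} * X")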

The steepness of the demand curve varies from case to case. It isn't always perfectly in accord with my model. But it's very visibly a power law, not a bell curve or modified Bass S-curve (which is sometimes used instead of Rogers' original version).

So yes, I think you're following the argument perfectly well to this point.


I'm curious to know if you intend to prove that one of your empirical claims is a fact, or is your interest just in the method of modeling such a pattern, whether or not it's a fact. I mean, with a name "Math Guy", I'd guess the latter, but still I should ask.

The former, emphatically. The name "Math Guy" works great in most contexts, and I've used it for years, but it conveys a slightly rationalistic, abstracted perspective to Objectivists that isn't at all the way I actually work.

The reality of most of these ad hoc laws isn't seriously disputed. They've been tested and retested for decades. So they are facts. The trouble is that they're disconnected from the rest of our knowledge.

I'm trying to establish the common connection between them, the power of the principle of maximum entropy, as also being a fact -- that is, as much more than a convenient hypothetical model. I'm not just doing curve-fitting here. I see this as a truth, an important epistemological and metaphysical truth about the way the world works. Entropy always increases, and that truth influences religion and economics and disease and war and so on.

But if it was easy to show that, it might very well have been done decades ago. So in conversation I refer to it as a hypothesis, or as a mathematical model, and urge people to make up their own minds. I distinguish between my own conviction about the connection between these laws, and what my readers are able to see at any given point.


Here's the basic problem, which I think Eiuol has sensed correctly. If you all are kind enough to give me sufficient time and attention, I'm going to present a series of these curves.

And that, as Eiuol observed, raises uncomfortable questions about free will. This common shape of curve cannot really be a cause, can it? It can only be coincidence. If the institution of monarchy was predestined to shrink to a pitiful vestige of itself after 1,000 dynasties . . . if EVERY dynasty was doomed to shrivel in similar fashion over time . . . and if participation in websites or in churches or on battlefields all follow this same curve . . . at some point we have to ask, isn't this determinism? Doesn't this presume that free will is being overridden by the imperative of the common curve shape?

Okay, I am generally finding this interesting enough to continue to follow. However, the build up and baiting the suckers is getting a little beyond the scientific.

Whatever you have, it has no implications for free will. Free will is possessed by individuals. Looking at large groups of people is something that various sciences have done for some time, and interesting things can be learned. What it means is that people live in a world with things, all of which are something specific, which means there is causality. It should not be surprising that human activity within the world also tends to have certain patterns. We should not expect to know what those patterns are before we discover them.

Even more important is that the pattern you may have discovered is not a cause. Let's go further and say that you have discovered a "law". The law itself does not exist, only things exist. If humans living in the world have long term patterns in certain contexts, then there is some connection and influence in the real world. It would be like gravity or the speed of light, or a law of motion, none of which have any meaning for free will.


The reality of most of these ad hoc laws isn't seriously disputed. They've been tested and retested for decades. So they are facts.

I'm trying to establish what you think this is a law of. For example if you were claiming that this is a physical "law of subatomic physics", I'd expect you to have some experimental facts from the domain of physics. Here, you seem to be making some kind of claim about the duration of dynasties, so I'm asking you to prove the claim, at the level of data.

If indeed these all involve the same power law, unless you ask and answer "why", we can simply accuse you of cherry-picking examples that happen to fit the same equation. Why do these fit the equation but any of the countless other population statistics do not?


Okay, I am generally finding this interesting enough to continue to follow. However, the build up and baiting the suckers is getting a little beyond the scientific.

Whatever you have, it has no implications for free will. Free will is possessed by individuals. Looking at large groups of people is something that various sciences have done for some time, and interesting things can be learned. What it means is that people live in a world with things, all of which are something specific, which means there is causality. It should not be surprising that human activity within the world also tends to have certain patterns. We should not expect to know what those patterns are before we discover them.

Even more important is that the pattern you may have discovered is not a cause. Let's go further and say that you have discovered a "law". The law itself does not exist, only things exist. If humans living in the world have long term patterns in certain contexts, then there is some connection and influence in the real world. It would be like gravity or the speed of light, or a law of motion, none of which have any meaning for free will.

Well, I don't feel that I'm "baiting the suckers" exactly, more like acknowledging concerns that I have heard before and that I anticipate hearing here. But okay. Your point about vocabulary is well taken. We have to agree on the meaning of terms.

Now, for example, I refer to this curve as a "cause," or more precisely as a meta-cause. It is not the only cause operating in a given case, but it is among the causes for the outcome we observe, and it is very broad in application, hence my use of the term "meta-cause". You prefer to say it is not a cause, but a law. You give gravity as an example of a law. I admit to some concern about calling maximum entropy a cause, but I don't think there is much to be gained by calling it a law instead.

For example, if I fall out of a window and break my leg, the cause of my falling is gravity -- yes? I fell because there is an attractive force between two bodies that is proportional to the product of their masses and inversely proportional to the square of the distance between them. The word "cause" in this context is appropriate. If I said I fell and broke my leg in accordance with the law of gravity, that avoids using the taboo word "cause" but I don't see that it really changes very much else.

I'm prepared to say that human beings make choices in accordance with this power law, and to refrain from calling the law a cause, if that will help. But I'm really not sure that it will. Perhaps you can say more about why you think this distinction matters.

The reality of most of these ad hoc laws isn't seriously disputed. They've been tested and retested for decades. So they are facts.

I'm trying to establish what you think this is a law of. For example if you were claiming that this is a physical "law of subatomic physics", I'd expect you to have some experimental facts from the domain of physics. Here, you seem to be making some kind of claim about the duration of dynasties, so I'm asking you to prove the claim, at the level of data.

Yes, okay, two questions here, both good ones. The first is "What is this a law of?" It is a law pertaining to randomness, to the behavior of large, complex systems with many seemingly independent parts capable of taking on many different values or measurements. The law says that the observed behavior of such systems will evolve in a consistent way, predictable using simple mathematical rules.

Short version: It is a law of randomness.

Second question: Prove the claim, at the level of data.

Okay, but just to save everyone paging through endless posts consisting of nothing but figures, I'll do some sample calculations and refer you to my sources for the rest. First some examples of decline on the dynastic level:

Dynasties ruling Portugal, 868-present

Vimara Peres to 1072, 204 years, so for N=1, mean length 204

2nd County to 1139, 67 years, so for N=2, cumulative mean 135.5

Burgundy to 1385, 246 years, so for N=3, cumulative mean 172.33

Aviz to 1495, 110 years, so for N=4, cumulative mean 156.75

Aviz-Beja to 1581, 86 years, so for N=5, cumulative mean 142.6

Hapsburg to 1640, 59 years, so for N=6, cumulative mean 128.67

Braganza to 1853, 213 years, so for N=7, cumulative mean 140.71

Saxe-Cobourg Gotha to 1910, 57 years, so for N=8, cumulative mean 130.25

Braganza to 2007, 97 years, so for N=9, cumulative mean 126.6

If you plot these on a spreadsheet and fit a power-law curve to them, you get an exponent of -0.178. My universal model predicts -0.30576, so this is a slightly shallower decline than expected; real cases cluster around -0.30576 rather than matching it exactly. You can test the data set several ways, such as looking at how many items are above the median value at any given point. They all show a robust decline. For a data set this small, it is pretty clear. (I'll show a short fit calculation after the French series below.)

Dynasties ruling France, 843-1870

There are 14 dynasties in the series, from the Carolingians to the last of the Bonapartes. I'll give the length of each in years, followed by the cumulative mean:

144 / 144

341 / 242.5

170 / 218.3

17 / 168

74 / 149.2

203 / 158.2

12 / 137.3

10 / 121.4

1 / 108

0.5 / 97.2

15 / 89.7

18 / 83.8

4 / 77.6

18 / 73.4

Stability in France declined a good deal more steeply than in Portugal. The best-fit curve has an exponent of -0.385, a little steeper than my universal model predicts. Again, you can play with these numbers in various ways, but I think the conclusion is always going to be a power-law decline.
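
Here is the fit calculation promised above, as a short Python sketch (the reign lengths are the ones listed in this post; the fit is ordinary least squares on the logarithms):

    import numpy as np

    def fit_exponent(reign_lengths):
        """Fit cumulative mean reign length vs. rank N as a power law,
        using least squares in log-log coordinates."""
        lengths = np.asarray(reign_lengths, dtype=float)
        ranks = np.arange(1, len(lengths) + 1)
        cum_mean = np.cumsum(lengths) / ranks
        slope, _ = np.polyfit(np.log(ranks), np.log(cum_mean), 1)
        return slope

    portugal = [204, 67, 246, 110, 86, 59, 213, 57, 97]
    france = [144, 341, 170, 17, 74, 203, 12, 10, 1, 0.5, 15, 18, 4, 18]

    print(f"Portugal exponent: {fit_exponent(portugal):.3f}")  # about -0.178
    print(f"France exponent:   {fit_exponent(france):.3f}")    # about -0.385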

I'll post examples of decline within a dynasty next.


Your point about vocabulary is well taken. We have to agree on the meaning of terms.

Now, for example, I refer to this curve as a "cause," or more precisely as a meta-cause.

I said "baiting the suckers" because of your teasing questions about free will. I am sure that we will see more on that from you later.

I wasn't concerned about terminology but meaning. The Objectivist view of causality and laws is significantly different from what seems to be your background. However, I don't want to slow down your exposition and I am sure that there will be ample opportunity to discuss the point later.


Whatever you have, it has no implications for free will. Free will is possessed by individuals. Looking at large groups of people is something that various sciences have done for some time, and interesting things can be learned. What it means is that people live in a world with things, all of which are something specific, which means there is causality. It should not be surprising that human activity within the world also tends to have certain patterns. We should not expect to know what those patterns are before we discover them.

Even more important is that the pattern you may have discovered is not a cause. Let's go further and say that you have discovered a "law". The law itself does not exist, only things exist. If humans living in the world have long term patterns in certain contexts, then there is some connection and influence in the real world. It would be like gravity or the speed of light, or a law of motion, none of which have any meaning for free will.

If this is the philosophic concern you have, we can already come to a conclusion. I agree with Bob G, this has no implications for free will. Free will applies to individuals; none of your examples are examples of individuals. Even monarchs (especially monarchs) are not acting as individuals but second-handedly as social networks of power and force.

However, as we consider one curve and then another, we are going to be left without a good argument for why they are all so similar. A local cause simply will not do. There has to be a transcendent cause, a meta-cause, something that operates everywhere regardless of context.

That is a hasty generalization if there ever was one. The mistake is reifying a statistical description into an existent which itself causes events to happen. What is really going on here is that similar systems exhibit similar behavior; the only trick is gaining the insight necessary to recognize the similarity.

Given two fair six-sided dice, each one is uniformly random on its own, but their sum follows a peaked, bell-like (triangular) distribution. The dice do not coordinate to produce this result; it is an emergent property of a system where each element is independent and without memory. Adding more dice to the sum makes the distribution approach the normal. The central limit theorem applies.

Given a number of manufactured electronic capacitors, they will tend to fail at a certain number of operating hours and with a certain spread. Each possible cause of failure is modeled as an unfair (loaded) die, and the statistical distribution that results is the Weibull. The capacitors do not coordinate their failures with each other; the failures have many identical and competing common causes -- flaws in the dielectric material -- and the first cause to occur chronologically sets the failure time for the whole capacitor.

Given a number of semiconductor devices, taking the logarithm of the individual device operating lifetimes results in a normal distribution, so the device is said to have a lognormal life distribution. Lognormal distributions appear when the causes of failure progressively strengthen over time, such as in electro-migration, crack propagation, and some chemical corrosion processes.

In none of these examples is there any spooky action-at-a-distance, or reverse time travelling coordinating signals.
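
For anyone who wants to see all three mechanisms in action, a quick Python sketch (every parameter here is made up purely for illustration, not real component data):

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 100_000

    # 1. Sums of independent dice: two dice give a triangular peak;
    #    adding more dice drives the sum toward the normal (CLT).
    for k in (2, 10):
        sums = rng.integers(1, 7, size=(trials, k)).sum(axis=1)
        print(f"sum of {k} dice: mean={sums.mean():.2f}, std={sums.std():.2f}")

    # 2. Weakest link: each part has many competing flaws, and the part
    #    fails when the *first* flaw does. Minima of competing lifetimes
    #    give a Weibull-shaped life distribution.
    flaw_lives = rng.weibull(2.0, size=(trials, 20)) * 1000.0
    part_lives = flaw_lives.min(axis=1)
    print(f"weakest-link lives: median={np.median(part_lives):.0f} hours")

    # 3. Multiplicative degradation: damage that compounds over many small
    #    steps makes the *logarithm* of lifetime normal, i.e. lognormal.
    lives = np.exp(rng.normal(loc=7.0, scale=0.5, size=trials))
    print(f"lognormal lives: median={np.median(lives):.0f} hours")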

I can disagree with your philosophical concerns and still find the arithmetic correct. Have you considered extrapolating backward in time to find how long ago the first kings could have been? Once the extrapolated mean span of rule exceeds a human lifespan, kingship becomes impossible, so the first kings would have to come after that point.


It's great the way people respond to my examples with just the right vocabulary. In fact, that terminology -- "early adopters" and "late adopters" comes from the work of Everett Rogers, and his curve is one of the many ad hoc curves I want to replace with my universal one.

I think I understand: you're trying to subsume all these seemingly disconnected "laws" under one all-encompassing law? That sounds perfectly reasonable to me. I don't know about all of the laws you listed, but the data is what it is, right? If the data doesn't fit a bell-curve, but does fit a power law curve, then that's what it is.

My point was somewhat two-fold: 1) a ratio will necessarily decline when the numerator stays the same, or changes little, as the denominator grows. That may not fit for monarchies, but for many of the examples you listed it seems a foregone conclusion that the denominator grows faster than the numerator. For example, you mentioned AIDS and H1N1. The denominator has an upper limit - approximately 6 billion - but grows rapidly as we change the parameters of the sample set. We can expect a high proportion of individuals in a close community to contract H1N1, but as we move further away from that close community, as we expand the sample set, fewer and fewer individuals will contract H1N1. The denominator grows rapidly, but the numerator changes very little. Eventually, we get to the largest possible sample set - the entire planet - and the numerator has changed very little. Many reasons can be supplied to explain why fewer and fewer individuals contract the disease, but that a power curve describes the data seems mathematically intuitive.

And 2) you have alluded to implications for free will, and I'm afraid I don't see that - yet. The free will of the individual forum members is the cause of whether they post or not (for example). Much in the same way, the free will and behavior of individuals world-wide, as well as their resistance to any particular disease, determine whether that disease will infect them. They are not deterministically following some power law curve. A power law curve may describe how the ratio of forum posters to forum members develops, but it doesn't determine the behavior of those individual posters, or each marginal poster.

I, for one, appreciate how you're laying out your argument. You clearly understand your argument and where resistance is typically found. So, rather than reading through reams of argument, I can get right to those issues without the entire argument clouding the discussion. Perhaps I need to wait until you've further developed your argument for determinism?


I just want to clarify that with the Hari Seldon statement I wasn't trying to make light of Math Guy's tremendous efforts, I was just struck by the similarity of Asimov's unexplained psychohistory concept and this historical maximum entropy idea. Don't want to derail the discussion either, although it'd be interesting to know if Math Guy has read the Asimov books in question.

Looking forward to today's updates!


If indeed these all involve the same power law, unless you ask and answer "why", we can simply accuse you of cherry-picking examples that happen to fit the same equation. Why do these fit the equation but any of the countless other population statistics do not?

Well, I have given you a "why" as well as a "what" here. I've said that the common cause is that all these processes conform to the principle of maximum entropy, even the ones that don't result in this particular power law. That's actually pretty uncontroversial among mathematicians familiar with the principle. It all depends on what a system is capable of doing.

I gave the example of a six-sided die. If you throw it many thousands of times, the behavior (at least on the level of whether it comes up '1' or '6') doesn't resemble a power law because there are exactly six possible outcomes and the design of the die makes them all alike. Viewed on that very basic level, the die isn't capable of exhibiting the sort of behavior I am interested in, the decline of rare outcomes over time. It's too simple an object, with too few possible states. Nonetheless Edwin Jaynes said specifically, and other mathematicians back him up, that the behavior of the die does conform to the principle of maximum entropy -- just in its own way.
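
To make "in its own way" concrete: Jaynes' classic exercise (the Brandeis dice problem) maximizes entropy over the six faces subject to whatever is known. Here is a minimal Python sketch of it, my illustration rather than Jaynes' own code; with no constraint beyond normalization the answer is the uniform 1/6, while a constrained mean yields a tilted, exponential-family distribution.

    import numpy as np

    faces = np.arange(1, 7)

    def maxent_die(target_mean, lo=-50.0, hi=50.0, tol=1e-10):
        """Maximum-entropy face probabilities for a die with a fixed mean.
        The solution has the form p_i ~ exp(-lam * i); find lam by
        bisection so that the distribution's mean matches the target."""
        def mean_for(lam):
            w = np.exp(-lam * faces)
            return (faces * w).sum() / w.sum()
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if mean_for(mid) > target_mean:  # mean decreases as lam grows
                lo = mid
            else:
                hi = mid
        w = np.exp(-lo * faces)
        return w / w.sum()

    print(maxent_die(3.5))  # fair die: all six probabilities are 1/6
    print(maxent_die(4.5))  # tilted die: low faces rare, high faces common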

I can lay out the math behind this, but there isn't anything novel or controversial about that aspect of the story, and it's kind of boring to laymen. I'm taking that particular aspect of the problem for granted.

I suppose you can call it "cherry picking" that I focus on all these orphan laws that have no explanation, and try to establish that the explanation is entropy. But it's a virtuous kind of cherry picking. I'm going after low-hanging fruit that is delicious and nutritious, and there's nothing wrong with that. Explaining these obscure curves from diverse fields as all arising from one cause is something Rand urged scientists to do more often.


Well, I have given you a "why" as well as a "what" here. I've said that the common cause is that all these processes conform to the principle of maximum entropy, even the ones that don't result in this particular power law. That's actually pretty uncontroversial among mathematicians familiar with the principle. It all depends on what a system is capable of doing.
I think the point that you're missing is that there is no real principle of maximum entropy that is part of the universe, which explains the supposed facts. Certainly there is mathematical language that allows you to talk about relations such as a "power law", but a power law is not a fact of reality in the way that the law of conservation of charge or Boyle's law are facts of reality. Even if it were the case that, for example, dynasties tended to become shorter in duration over time, that would still only be an observation without any causal explanation. You have to have an ontological commitment to some actual and direct cause.

As for the "what", I'm simply not persuaded. It is not about the math -- the math is irrelevant until you have a fact that needs to be modeled. We don't yet have a fact that needs to be explained. As a starter, you need to objectively define your terms -- "dynasty". If you're talking about Portugal, you cannot have a dynasty of Portugal until 1139, since Portugal was not a country until then. You have to define the boundary of a dynasty (so that we can see why a supposed Aviz-Beja dynasty is justified, or why Braganza is distinguished from Braganza-Saxe-Coburg and Gotha). And to maintain that there is a current Braganza "dynasty" will take some work, since Portugal disposed of its kings in 1910. And furthermore, these definitions need to be objectively justified, not simply arbitrarily set so that you can get the result that you want. What thing of reality does a "dynasty" refer to?


If this is the philosophic concern you have, we can already come to a conclusion. I agree with BoB G, this has no implications for free will. Free will applies to individuals, none of your examples are examples of individuals. Even monarchs (especially monarchs) are not acting as individuals but second-handedly as social networks of power and force.

None of my examples so far, no. I agree, one can make the argument that an office-holder's behavior is decisively shaped by the imperatives of the office, the social network as you call it. But I have some examples where this argument is not available, where we are talking about the behavior of individuals, which I think are very interesting. More on that presently.

However, as we consider one curve and then another, we are going to be left without a good argument for why they are all so similar. A local cause simply will not do. There has to be a transcendent cause, a meta-cause, something that operates everywhere regardless of context.

That is a hasty generalization if there ever was one. The mistake is reifying a statistical description into an existent which itself causes events to happen. What is really going on here is that similar systems exhibit similar behavior; the only trick is gaining the insight necessary to recognize the similarity.

Well, it might seem hasty to you given that I got to it after a dozen posts, but it actually took me 20 years. :)

I'm familiar with the Objectivist argument against reification, and I don't think that's what I'm doing. I agree I'm being a little provocative in using the word "transcendent," because that is a word that gets abused quite a bit. But here I am using it in the correct sense: there are endless local, anecdotal causes for these curves. We can see that the factors governing traffic accidents are different from the factors governing participation in online forums, and different again from transmission or mortality rates in epidemics. Yet they all yield the same shape of curve, and close to the same slope. I say that what causes them to be similar is something that transcends the particulars of each case. You want to say that the systems are similar, and their similarities are what make the curves the same.

I think the hangup here is more a matter of terminology than real content. I am not arguing for some kind of ineffable World Spirit of the Hegelian kind, working behind the scenes. The transcendent cause I have in mind is mathematical, not mystical. If you find it more satisfactory to say that the cause is mathematical similarity, that is fine. Given the vastness of the range of cases, I like a word that emphasizes scope. Given that I haven't demonstrated that huge scope as yet, you quite reasonably prefer more conservative language.

Given two fair six-sided dice, each one is uniformly random on its own, but their sum follows a peaked, bell-like (triangular) distribution. The dice do not coordinate to produce this result; it is an emergent property of a system where each element is independent and without memory. Adding more dice to the sum makes the distribution approach the normal. The central limit theorem applies.

Given a number of manufactured electronic capacitors, they will tend to fail at a certain number of operating hours and with a certain spread. Each possible cause of failure is modeled as an unfair (loaded) die, and the statistical distribution that results is the Weibull. The capacitors do not coordinate their failures with each other; the failures have many identical and competing common causes -- flaws in the dielectric material -- and the first cause to occur chronologically sets the failure time for the whole capacitor.

Given a number of semiconductor devices, taking the logarithm of the individual device operating lifetimes results in a normal distribution, so the device is said to have a lognormal life distribution. Lognormal distributions appear when the causes of failure progressively strengthen over time, such as in electro-migration, crack propagation, and some chemical corrosion processes.

In none of these examples is there any spooky action-at-a-distance, or reverse time travelling coordinating signals.

I can disagree with your philosophical concerns and still find the arithmetic correct.

Yes. Well said. But there is more in play here than I have been able to say so far. The insights that Edwin Jaynes had into probability and inference are truly awesome, and we have barely touched on them as yet.

The first problem is that these curves not only aren't normal, they're not lognormal either. They don't obey the central limit theorem at all. The mean value changes as the set grows larger. That's one of the truly frustrating aspects common to all these obscure ad hoc laws. They can't be dealt with using standard statistical methods. If you take a look at Nassim Nicholas Taleb's books, such as Fooled by Randomness or The Black Swan, you'll see example after example of what I mean. They are orphan curves precisely because they don't fit the standard model. They have been left to languish in dark corners because in dealing with them one cannot assign a z-score, or arrive at a stable mean, or do any of the things that universities train us to do.
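
A tiny Python demonstration of that "no stable mean" behavior (simulated data, purely illustrative): draw from a heavy-tailed power-law distribution and the running mean keeps drifting as the sample grows, where a normal sample settles down almost immediately.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000

    # A Pareto with tail index 1.0 has an infinite theoretical mean, so
    # the sample mean never converges; a normal sample is the control.
    heavy = rng.pareto(1.0, size=n) + 1.0
    normal = rng.normal(loc=1.0, scale=1.0, size=n)

    for size in (100, 10_000, 1_000_000):
        print(f"n={size:>9,}: heavy-tail mean={heavy[:size].mean():8.2f}, "
              f"normal mean={normal[:size].mean():6.3f}")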

Jaynes came up with arguments that make such curves more tractable. The principle of maximum entropy reframes the whole question of independence, and the applicability of the central limit theorem. It's very subtle stuff. I don't want to go off in all directions here, so I will simply say that for now. Independence in the sense used by classical probability remains on the table as an issue.

Incidentally, Jaynes found the quantum-mechanical notion of reverse time traveling coordination bogus. He was a lifelong skeptic regarding the Copenhagen Interpretation and the obscurantism associated with it. I think Objectivists would find his complaints in that area very welcome and insightful. Ultimately he believed that the principle of maximum entropy could be used to reconcile quantum phenomena with everyday phenomena, to treat them all according to a single set of rules. That is surely an exciting prospect, and I have been strongly encouraged by my own work to believe Jaynes was right.

To put it very simply, for those readers who aren't necessarily familiar with statistics, or some of these philosophy of science issues: I think these curves seem spooky because we have the wrong idea about how probability works. I'm not arguing against free will. I'm arguing for a more robust view of probability that will help us understand why these curves are "normal". Along the way, we will also develop a more subtle understanding of just what it is we are free to do.


I was telling a colleague about this at lunch today. The way I put it, I said that any structure becomes less stable as entropy increases. So it makes sense that in ever more complex systems (due to population increase, for instance), even if those structures are institutions like monarchy, they will be less stable and in that case not last as long.

Is that anything like what you're saying?


The transcendent cause I have in mind is mathematical, not mystical.

Entities are causal primaries. By implication your "transcendent cause" must be an entity. Math is not an entity but begins with the concept entity. Do you have a candidate for this entity or entities? Since the universe is all the entities in existence, to propose a meta-entity is to talk nonsense.

Now I still have some posts to read in the thread, but this is my thought so far.


I just want to clarify that with the Hari Seldon statement I wasn't trying to make light of Math Guy's tremendous efforts, I was just struck by the similarity of Asimov's unexplained psychohistory concept and this historical maximum entropy idea. Don't want to derail the discussion either, although it'd be interesting to know if Math Guy has read the Asimov books in question.

Looking forward to today's updates!

I read them all as a teenager (devoured them would be closer to the truth) and I was bitterly disappointed at the way Asimov resolved the story. I felt that he was saying something profoundly interesting and serious when he proposed the idea of psychohistory, and that the later business of telepathy and mind control, while it made for an interesting story, pretty much abandoned the promising argument he started with for a much more familiar sort of space opera.

I've looked at the sequels that appeared later on (both Asimov's own and those written after his death for his estate). Some are interesting, but my feeling has always been that the science lost out to the fiction. What would really have made his work epic in intellectual history would have been to give more substance to the idea of psychohistory, to make it more like a real science. But of course, real science is hard, hard work, and Asimov wrote Foundation when he was a very young man desperately trying to make a living. He couldn't have afforded to spend more time on it than he did.

When I started seeing the implications of my work, 10-15 years ago, I seriously considered adopting the pen name Harry Seldon, as homage.


I think I understand: you're trying to subsume all these seemingly disconnected "laws" under one all-encompassing law? That sounds perfectly reasonable to me. I don't know about all of the laws you listed, but the data is what it is, right? If the data doesn't fit a bell-curve, but does fit a power law curve, then that's what it is.

Yes, although the discussion of why this particular curve should show up so often still remains. "It is what it is" doesn't satisfy everyone, hence the difficult and nuanced discussion of what qualifies as a cause, or a transcendent cause, or a law.

My point was somewhat two-fold: 1) a ratio will necessarily decline when the numerator stays the same, or changes little, as the denominator grows. That may not fit for monarchies, but for many of the examples you listed it seems a foregone conclusion that the denominator grows faster than the numerator. For example, you mentioned AIDS and H1N1. The denominator has an upper limit - approximately 6 billion - but grows rapidly as we change the parameters of the sample set. We can expect a high proportion of individuals in a close community to contract H1N1, but as we move further away from that close community, as we expand the sample set, fewer and fewer individuals will contract H1N1. The denominator grows rapidly, but the numerator changes very little. Eventually, we get to the largest possible sample set - the entire planet - and the numerator has changed very little. Many reasons can be supplied to explain why fewer and fewer individuals contract the disease, but that a power curve describes the data seems mathematically intuitive.

Well, I'm glad you find it intuitive but that is emphatically not what they teach in medical textbooks. Here's a brief summary of what I found with regard to epidemic diseases.

In the mid-19th century, before the germ theory of disease was firmly established, a man named William Farr, in charge of public health statistics in England, began putting out studies of mortality and morbidity (death rates and transmission rates). These were not the first systematic studies ever done, but they were much more authoritative and insightful than anything previous. Farr made several key observations that (so far as I can discover) were unique to him and the first of their kind.

First, Farr observed that the number of people infected in epidemics tends to rise exponentially, then abruptly break, and drop faster than it went up. He suggested, as a rough approximation, that the data would fit a quadratic equation. In effect, this implies that every person who is infected will pass the disease on with the same efficiency as the last one. Transmission numbers rise at a constant rate up to the point where the supply of people to infect starts running out. If adopted without further examination, this leads directly to the modern S-curve taught in epidemiology texts.

But there is reason to doubt whether Farr ever actually thought that transmission was uniformly efficient, or whether he was just grabbing a handy function as a first approximation. For one thing, the S-curve is symmetrical, and Farr said epidemics come down faster than they go up. The figures Farr actually had at the time, and those gathered over the next half-century, did not fit an S-curve or normal curve very well at all. As late as the 1940s one could find people marveling at how poorly the data fit the model. What the figures actually suggest is that long before the disease runs out of victims, it is already slowing down. Transmission starts becoming less efficient almost immediately. However, once normal curves became the fashion everywhere in science, they were seized upon in epidemiology as well. "Farr's Law" is generally expressed as an S-curve in textbooks now, without even a footnote regarding what Farr himself knew or thought.
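
The symmetry point is easy to check. If cumulative cases follow a logistic S-curve, the daily new cases trace a curve that is exactly symmetrical about the peak, so the decline must mirror the rise; here is a minimal Python sketch (parameters invented for illustration):

    import numpy as np

    # Textbook S-curve: cumulative cases C(t) = K / (1 + exp(-r*(t - t0)))
    K, r, t0 = 10_000.0, 0.5, 30
    t = np.arange(0, 61)
    cumulative = K / (1.0 + np.exp(-r * (t - t0)))

    # Daily incidence is the derivative dC/dt = r * C * (1 - C/K),
    # which is symmetric about the peak at t0.
    incidence = r * cumulative * (1.0 - cumulative / K)

    for k in (5, 10, 15):
        print(f"{k:>2} days from peak: rising={incidence[t0 - k]:7.1f}, "
              f"falling={incidence[t0 + k]:7.1f}")  # identical pairs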

Farr's second important discovery had to do with how one tracks mortality. He believed that mortality rates changed over the course of an epidemic. He had curves for cholera and other diseases that clearly implied it. One reason why Farr believed this was because his knowledge of disease predated the germ theory. He believed in miasmas and other environmental causes, which of course people thought of as having less and less effect if they were diluted. It mattered a great deal how much of the toxic substance was present in a given neighborhood, how much had been washed away by the rain, and so on. If divided up among more victims, miasma would again be less dangerous to any one person. So people coughing up miasma that they got in one district could be expected to "infect" their secondary victims, but with ever-increasing inefficiency.

Once the germ theory came in, this idea of diminishing effect became obsolete. It now seemed clear that a very small quantity of infectious organisms would multiply many times over in each patient. The idea of diminishing mortality across the course of an epidemic seemed forlorn, since each patient basically started over at the beginning. Science became focused on antibody protection and the strength of the immune system in attacking the invading organism. Farr's idea was forgotten -- literally, for over a century nobody pursued it.

The idea of the later victims of an epidemic automatically being less sick, simply because they come later, isn't up for discussion at this point in medical circles. You'll notice that epidemiologists refer to "the" mortality rate for a disease. Ebola (hemorrhagic fever) has a mortality rate of 80 percent. Smallpox (when it still existed) had two mortality rates, one for the "major" strain of 30 percent, the other for the "minor" strain at 1 percent. And so on.

Now, I am not a doctor, and I don't mean to sound like an arrogant ass. However, for more than a decade I have tried and tried, and I cannot find any infectious diseases that erupt into epidemics that do not decline in both transmission rate and mortality. For the older diseases, such as plague in Europe or smallpox in the New World, the evidence is sketchy and anecdotal, but witnesses clearly report the disease moving more slowly and killing fewer people in the late stages of the epidemic. For modern diseases, we have actual World Health Organization data to look at, and the result is the same.

One gets the impression from media reports (and Tom Clancy novels) of Ebola being fantastically lethal, basically not only killing an entire village in days, but killing the hospital staff who try to treat them, and then burning out before it can spread further. But in fact there have been less well-publicized outbreaks of hemorrhagic fever in Asia that involved thousands or tens of thousands of people (such as during the Korean War), and in those places the majority of patients survived, even though we have absolutely no drugs that work on any of the hemorrhagic fever varieties.

In the case of H1N1, I've had the opportunity to test my model in real time. The initial outbreak in and around Mexico City raised tremendous alarm because the mortality rate was 5 percent or more among the first thousand patients. The authorities did not at that time have a lab test for H1N1 and so there was some doubt about the real mortality rate. But while lab testing eliminated some cases as not being H1N1, the mortality rate for the remaining cases was still 5 percent in late April. From there, as it spread, the official WHO mortality rate kept going down and down. At the end of the first week of May it was 2,400 cases and 1.9 percent. By mid-May, 7,500 cases and 0.9 percent. By the third week in May, 11,000 cases and 0.8 percent. At month end, 22,000 cases and 0.6 percent.

Eventually the WHO reporting system broke down. The real patient count was in the millions but most national authorities simply could not identify or test more than a tiny fraction of the total. Now the only estimates were based on sampling, and reports from doctors' offices of increased requests for flu shots, and so on. The CDC reported on November 12th that as of October 18, 22 million Americans had caught H1N1, and 3,900 had died of it. That's a rate of 0.02 percent, or an incredible 1/250th of the mortality rate for the first thousand patients in Mexico City. Using my standard decline curve, I would have predicted a less dramatic drop, to about 0.08 percent. Either way, the idea of there being one fixed mortality rate for H1N1 seems not just doubtful, but kind of ridiculous. It has done nothing but decline the entire time. The WHO figures for transmission (up to the point where they stop making sense) give the same impression. Neither transmission nor mortality was constant in this global pandemic.
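
For anyone who wants to check these numbers, a short Python sketch (the case counts and rates are the WHO/CDC figures quoted in this post; -0.30576 is the model exponent mentioned earlier in the thread):

    import numpy as np

    # WHO case counts and mortality rates (percent) quoted above.
    cases = np.array([1_000, 2_400, 7_500, 11_000, 22_000], dtype=float)
    mortality = np.array([5.0, 1.9, 0.9, 0.8, 0.6])

    # Least-squares power-law fit in log-log coordinates.
    slope, _ = np.polyfit(np.log(cases), np.log(mortality), 1)
    print(f"fitted decline exponent: {slope:.2f}")  # about -0.68

    # Extrapolate from the last WHO point (22,000 cases, 0.6 percent)
    # out to the CDC's 22 million cases using the model exponent:
    predicted = 0.6 * (22_000_000 / 22_000) ** -0.30576
    print(f"model-predicted rate at 22M cases: {predicted:.3f} percent")
    # roughly 0.07-0.08 percent, versus the observed 0.02 percent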

Of course, medicine has a backup answer for why epidemics behave this way, and it has enjoyed great plausibility for generations now. The argument is that mutation, followed by rapid natural selection, leads the later part of the epidemic, or later epidemic waves of the same disease, to be less lethal. Basically, a random mutation shows up that kills fewer patients, and because its victims live longer, it is transmitted to more people. It gains a competitive advantage over its more immediately lethal cousin.

I'm not ridiculing this theory. It is plausible, there is certainly lots of mutation going on, properties of diseases have been proven to change. But it is used surprisingly often, one might say compulsively. It comes up for virtually every major disease. That's a lot of very convenient and rapid natural selection! Plus it is used not only for diseases like smallpox that kill large numbers, but also for diseases like H1N1. It is hard to see how a drop in mortality from 2 percent to 0.02 percent could give a disease much of an advantage.

There are variations of the argument. Maybe the mutation makes the disease easier to transmit as well. Maybe different strains attack different risk groups. There is more than a century of thought behind the idea, and I am sure there is much in the theory that is true. But the whole thing is a bit of a Rube Goldberg machine. The mutation shows up again and again, just when it is needed to explain inconvenient statistics. So in 2007, a team studying smallpox set out to confirm or deny the model, comparing different samples from historical smallpox victims to see just how different the strains were, and to correlate the spread of particular strains with changes in the mortality rate. Unfortunately they found that the "major" strain associated with a 30 percent mortality rate sometimes showed up in regions with very low mortality, and the "minor" strain showed up where mortality was high. Differences in the strains could not explain the huge differences in mortality that were observed.

I'll conclude this in the next post as this one is getting long.


Now, let's review what my argument regarding epidemics requires. It's really simple. All we need do is imagine that we were subtly wrong about the way that infectious organisms are distributed among many hosts.

Historically we assumed that propagation from one host to another was strictly independent, that the mean amount of infectious material transmitted was stationary over time. It was X number of germs on the first occasion of transmission, and on the 100th, and on the 10,000th, and so on.

We couldn't prove this notion, however, without actually counting infectious organisms for each and every patient. We couldn't do that in the 19th century, or the 20th century. We're barely able to do it now, in a few special cases. It was an inference, a plausible hypothesis, that was very, very convenient because it simplified the math. It was not an empirically observed fact.

If the average amount were to vary, especially if it were to drop with increasing epidemic size, then all bets would be off. The standard equations wouldn't work. An increasing proportion of patients would get a sub-clinical dose of organisms, and their immune system would defeat the invaders, and they would not even become symptomatic, much less capable of infecting other people. A decreasing proportion of people would get a sufficiently high dose of organisms to wind up dying of them. So transmission would effectively fall, and so would mortality.

Assume this one thing, and the rest of the scientific framework remains intact. The dose-mortality relationship is well established. It's the foundation of vaccination, for example. All we need is proof that over time, the distribution changes in this nonlinear, nonstationary way.
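
Here is a minimal Python sketch of that mechanism (all thresholds and dose levels are invented for illustration; nothing here is clinical data): assume a skewed, lognormal dose distribution whose typical dose drifts downward as the epidemic grows, and both the lethal fraction and the infectious fraction fall on their own.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical thresholds in arbitrary dose units: below SUBCLINICAL
    # the immune system wins and the host never becomes infectious;
    # above LETHAL the host is likely to die.
    SUBCLINICAL, LETHAL = 10.0, 10_000.0

    # Lognormal doses whose median falls as the epidemic grows -- the
    # nonstationary distribution hypothesized above.
    for stage, median_dose in (("early", 3_000.0),
                               ("middle", 300.0),
                               ("late", 30.0)):
        doses = rng.lognormal(np.log(median_dose), 2.0, size=100_000)
        print(f"{stage:>6}: {(doses > LETHAL).mean():6.1%} above lethal dose, "
              f"{(doses < SUBCLINICAL).mean():6.1%} below clinical threshold")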

We have one disease for which individual counts of organisms have actually been made, for essentially every patient -- AIDS. The dose-mortality relationship in AIDS has been documented. If your very first visit to the doctor shows you with very high "viral load," you will likely die in a matter of months. If it shows a sufficiently low "viral load," then technically you have HIV disease, but you will likely never develop full-blown AIDS.

The range of possible initial doses has been demonstrated to vary by many orders of magnitude. One patient can easily have 1,000 times the viral load of another. The distribution is highly skewed, with a lot of people having small viral loads, and a few having high. All of this is totally consistent with my model. The curves look like curves for totally unrelated things, like the distribution of casualties from bombing, or the distribution of hits on webpages. The only question we cannot answer with certainty is what the distribution was 20 years ago, at the start of the epidemic. Viral load testing was unavailable then.

If there was a higher proportion of people with high viral load 20 years ago, then the mortality rate would have been higher, and transmission rates would have been higher as well. And here we come to one of the great ironies of AIDS, because we know that they were. They have come down on their own, to a large extent, well before the drug cocktails were invented.

This will sound odd simply because I'm saying it, and you didn't hear it from a CDC medical spokesperson. But (1) the official government numbers clearly show it, and (2) if the older folks among you think back, you will clearly remember it. At the start of the AIDS epidemic, mortality was frighteningly high. Not only did everyone who had AIDS die, they died in weeks or months. The number of people with the disease was doubling every few months. There were projections (by Oprah Winfrey, among others) that 50 million Americans would have AIDS by 1990.

Every year, the latent period for AIDS was re-estimated as being longer. The death rate as a percentage of those infected went down, from 40 percent every six months to 11 percent every six months. There were also numerous controlled studies that showed transmission in 1995 could not possibly be as efficient as it had been in 1985. And this was all before medical science had anything that would slow the disease. It predated the "drug cocktails" and the present era of managing AIDS as a chronic disease. I'll say it again: Most of the mortality decline in AIDS happened before the protease inhibitors became available.

My thesis is that AIDS, despite being viewed as unique among diseases, tells us something fundamental about the nature of all diseases. We are able to see, for the first time, how the way in which organisms are distributed among individual hosts influences the clinical presentation of the disease. It is viewed at one point as a ferocious killer tearing through entire communities; then, when the caseload has grown by a factor of 10 or 100, it is viewed as being only half or a quarter as dangerous. The main reason, the universal reason, is not mutation or natural selection of different strains (although this may be present). The reason is that that is how randomness actually works.

This is a summary of something that takes 30-odd pages and numerous graphs in my book to explain. It is informal, without footnotes. I hope it won't get me labeled as an AIDS kook. (If it gets me labeled as a probability kook, so be it.)


I read them all as a teenager (devoured them would be closer to the truth) and I was bitterly disappointed at the way Asimov resolved the story. I felt that he was saying something profoundly interesting and serious when he proposed the idea of psychohistory, and that the later business of telepathy and mind control, while it made for an interesting story, pretty much abandoned the promising argument he started with for a much more familiar sort of space opera.

You need not be disappointed. Asimov was told in no uncertain terms by the editor of the magazine he originally sold the stories to that he had to "break" psychohistory in the next story. As he put it, "I said 'No, no, no!' and he said 'Yes, yes, yes!' and I knew I wasn't going to sell him a 'No, no, no!'" (I am quoting from memory here.)

Asimov did a masterful job, though, of doing this without actually violating his premise -- psychohistory was posited as a *correct* theory of human history, and if so, the only way he could break it and retain the integrity of the premise was to change the humans! Hence the Mule. Psychohistory held true because it did not have the context of someone like the Mule -- Hari Seldon had made no mistakes. (Actually this is a perfect (albeit fictional) illustration of the Objectivist notion of contextual knowledge.)

(I was more disappointed in the more recent continuations of the story, but that is another matter.)


I think the point that you're missing is that there is no real principle of maximum entropy that is part of the universe and that explains the supposed facts. Certainly there is mathematical language that allows you to talk about relations such as a "power law", but a power law is not a fact of reality in the way that the law of conservation of charge or Boyle's law are facts of reality. Even if it were the case that, for example, dynasties tended to become shorter in duration over time, that would still be only an observation without any causal explanation. You have to have an ontological commitment to some actual and direct cause.

As for the "what", I'm simply not persuaded. It is not about the math -- the math is irrelevant, until you have a fact that needs to be modeled. We don't yet have a fact that needs to be explained. As a starter, you need to objectively define your terms -- "dynasty". If you're talking about Portugal, you cannot have a dynasty of Portugal until 1139, since Portugal was not a country until then. You have to define the boundary of a dynasty (so that we can see why a supposed Aviz-Beja dynasty is justified, or distinguishing Braganza from Braganza-Saxe-Coburg and Gotha). And to maintain that there is a current Braganza "dynasty" will take some work since Portugal disposed of kings in 1910. And furthermore, these definitions need to be objectively justified, not simply arbitrarily set so that you can get the result that you want. What thing of reality does a "dynasty" refer to?

Well, to take your second objection first, I certainly understand the need not to go picking data to fit one's thesis, but I really don't think I've done that.

The primary level of the phenomenon, the level where it is most evident, is within any one particular dynasty. That is where examples exist in greatest number, and where there is the least ambiguity about what we mean. I would say the idea of a dynasty is one of the oldest and most firmly established in historical thought. Indeed, one can make a good argument that the origin of history as a subject of study is profoundly intertwined with king lists. Dynasties were among the first, if not the very first, objects of analysis by historians. There are hundreds of dynasties on record, and the downward trend is quite unambiguous.
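
To be concrete about what "unambiguous" means here, the simplest honest test is a rank-order one. Here is a sketch; the (start year, duration) pairs below are invented placeholders, not data from my book or from Wikipedia, and the point is only the shape of the test:

from scipy.stats import spearmanr

# Placeholder (start_year, duration_in_years) pairs for successive
# dynasties -- invented for illustration, NOT real historical data.
dynasties = [(-2000, 400), (-1600, 350), (-1250, 300), (-950, 320),
             (-630, 210), (-420, 180), (-240, 150), (-90, 160),
             (70, 120), (190, 90), (280, 75), (355, 60)]

starts = [s for s, _ in dynasties]
durations = [d for _, d in dynasties]

rho, p = spearmanr(starts, durations)
print(f"Spearman rho = {rho:.2f}, p = {p:.5f}")
# A strongly negative rho with a small p-value says "later dynasties run
# shorter" as a pure rank-order claim, without assuming any particular curve.

Because Spearman's test uses only ranks, nothing in the result depends on the units or on fitting a specific functional form -- which suits an argument that is itself about rank order.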

Your point applies to the next level up, where we string together successive dynasties ruling the same country. There is admittedly some danger, at that level, of assembling unrelated items in an arbitrary fashion -- of picking one dynasty as the 'first' to rule the country and forcing the data to fit the hypothesis. And there just aren't as many cases to work with, which, given that this is a statistical argument, is a handicap.

My rule with regard to data is that I don't get to pick and choose. I identify a source that is widely recognized and, as far as possible, impartial, and then I take all the cases it presents. For dynasties, I actually started out years ago by using encyclopedias and assembling the data manually. But not long ago Wikipedia assembled an article on dynasties that lists large numbers of them in convenient form. Since Wikipedia itself relies on sources like encyclopedias, and since it is far, far more convenient for readers to go there than to locate a printed Britannica, I now use Wikipedia as my source.

So I can concede that there is some uncertainty about the precise origin and boundaries of the entity "Portugal," and about whether Portugal was an hereditary monarchy throughout the years I've cited. But I'm not manipulating the data. It's up to other people, more expert than I am, to decide whether a given dynasty is listed as ruling "Portugal".

Your first point relates to something Plasmatic said, so I'll address it below, along with his point.


Entities are causal primaries. By implication, your "transcendent cause" must be an entity. Math is not an entity, but it begins with the concept "entity". Do you have a candidate for this entity or entities? Since the universe is all the entities in existence, to propose a meta-entity is to talk nonsense.

Now, I still have some posts to read in the thread, but this is my thinking so far.

DavidOdden put the argument slightly differently, but I think you are both concerned about the same thing, namely the legitimacy of my calling the principle of maximum entropy either a cause or a law -- and a "transcendent" cause or law at that!

Okay. First, can we agree that science does speak in terms of "laws" of entropy, that is, the laws of thermodynamics? In particular, the second law of thermodynamics says that for a closed system, the amount of entropy must either stay the same or increase. That is a law that is as firmly established as anything in science. I can cite prominent physicists saying they would doubt almost anything before they would doubt the truth of the second law.

Now, that doesn't mean that all scientists would accept this application of the principle of maximum entropy as legitimate. If they are not familiar with the work of E. T. Jaynes, they might be quite surprised to see the idea extended to cover something other than heat energy and temperature. But from a philosophical point of view, my use of the principle doesn't raise any problems that weren't already implicit in the classic, universally accepted second law. I'm using the same mathematical expressions; I'm just counting different entities with them.
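
To show how little the machinery changes, here is a minimal sketch of Jaynes' procedure itself -- my illustration, with N and the target mean chosen arbitrarily. Over states 1..N with one constraint, a fixed mean, the entropy-maximizing distribution is p_k proportional to exp(-lambda * k), the same Boltzmann form thermodynamics uses:

import math

N, target_mean = 10, 3.0

def mean_for(lam):
    # Mean of the distribution p_k ~ exp(-lam * k) over k = 1..N.
    weights = [math.exp(-lam * k) for k in range(1, N + 1)]
    Z = sum(weights)
    return sum(k * w for k, w in zip(range(1, N + 1), weights)) / Z

# The mean decreases as lam increases, so bisection finds the Lagrange
# multiplier that satisfies the constraint.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(-lam * k) for k in range(1, N + 1)]
Z = sum(weights)
probs = [w / Z for w in weights]
print(f"lambda = {lam:.3f}")
print("p_k =", [round(p, 3) for p in probs])
# No other distribution over 1..N with mean 3.0 has higher entropy; the
# "law" is a counting statement, indifferent to what the states represent.

Swap "energy level" for any other countable state and the derivation goes through unchanged; that is the sense in which the counting, not the heat, is doing the work.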

What I'm saying is, the second law as it is used every day in physics is a "transcendent" law or cause in the same sense as the decline effect I am talking about. It is no less a mathematical abstraction. If one is nonsense, so is the other. If one cannot qualify as a law, neither can the other.

Of course, one occasionally sees harsh criticism from Objectivists of concepts from mainstream science as being incoherent or philosophically unsound. I've cited Rand myself as saying just that -- that science is riddled with theories that are "floating," that don't have proper referents. I believe Harry Binswanger is particularly noted for having carried on this line of attack since Rand's death. So possibly it is already part of the Objectivist canon that the second law of thermodynamics is flawed conceptually, and I just failed to see that particular claim in print.

The short version is: Plasmatic, DavidOdden, I'm not rejecting your arguments as such. I just want to establish, before we get too far along, what the scope of your respective claims really is. So far they both seem to me to be very sweeping. You are indicting a huge range of established science along with my argument. I think before we can get any further we need to confirm that that is what you meant.

