Objectivism Online Forum

Math Guy

Biography: Canadian, age 48, involved with Objectivism for the past 30 years. Writer/researcher with an engineering background.


  1. I agree with these concerns, broadly speaking, but I don't regard them as a conclusive argument against including the data. On the level of analyzing individual rulers, I won't use semi-legendary figures who supposedly lived 400 years; but given some circumstantial evidence I will accept a dynasty whose precise sequence of rulers is unclear or disputed as nonetheless having governed for X years in total.

I see your concern about pre-literate societies, but then we have to ask: if they had no records and could not themselves determine who had ruled in the past, does the concept of a "dynasty" even apply? Part of the basic idea of a dynasty is that it establishes the legitimacy of successive rulers by demonstrating their connection to the past. At the point where records do not exist, any argument about the stability of multi-generation rule starts to become moot.

I agree the early record may indeed be biased toward longer-lived dynasties, and that short-lived dynasties that did keep written records might have been overlooked. But to use this argument to exclude data items kind of begs the question. We're trying to discover what gives a society stability. If stability goes down as the size and complexity of a social system goes up, for example, then small kingdoms not in the historical record might actually have supported longer-lived dynasties. Obviously, we don't know, and I won't rule data in or out based on speculation.

Yes and no. I considered throwing out everything after the start of the Industrial Revolution, for example, but I found it again begs the question. New dynasties formed after 1800 had lifespans averaging only a few decades. Existing dynasties made it through the century intact only to die in the Great War, or World War II. So under the pressure of republicanism, expanding literacy, steam power, capitalism, what have you, the early dynasties still behave differently from the late ones within that span of time.
That's what we're testing for, so no data gets thrown out.

Well, at this point you've thrown out roughly a third of my data, focusing on the extremes on either end, and yet there's still a downward trend. I agree that even after 1000 AD the list isn't complete, but again I'm reluctant to speculate about the statistical character of dynasties we know nothing about, and that may not even have existed. (Just for example, we now know that New Guinea supported a large population over the past several thousand years. There were enormous numbers of people living in the interior of the island that early 20th century explorers simply missed. But when first contacted, these previously unknown peoples didn't have large-scale political systems, or dynastic rule, or even written language. It seems unlikely that they had these things in the past, and lost them.)

I agree that apart from the Old and New Worlds prior to 1492, dynasties were never absolutely independent of one another, but I think the idea that the trend can be explained by them competing for territory is rather vague. I wouldn't say China was seriously competing with Spain at any point in their histories, for example. The Silk Road trade no doubt allowed East to influence West and vice versa, but it hardly seems like a determining factor in regime stability. And anyway, merely being able to come up with an alternative hypothesis for why there is a declining trend doesn't show mine is unsound.

There are a bunch of remarks here that I've grouped together: I'm aware of the complexity of the story, and of the problems with the quality of the data. However, the whole point of a maximum entropy curve is that it allows estimates to be made of the trend, or of the range of outcomes, without knowing the details of how the system works. The data are sufficiently good to perceive that kind of simple trend (and sadly, as I observed, if kingship lists aren't good enough as data, then almost nothing from ancient history is).
It's a startling claim, as even Jaynes observed, but it works in physics problems and there is no obvious reason why it would not work here. That is what my research was intended to determine. Toynbee makes a good point of comparison, as does Spengler. I don't think it's fair to say either man "cherry-picked" his data, just as I don't think that that complaint is true of me. If you want to accuse them of something, or me, I think a better charge would be overly broad generalization -- a theory that is excessively inclusive rather than exclusive. They intended to cover all the relevant cases and made a good-faith effort, but their arguments were too simple to explain all the facts. Mine starts with an apologia for why a very, very simple model might actually be sufficient. (And not to be snarky, but I think Toynbee and Spengler both stand up quite well as scholars and historians in comparison with Rand's "Attila and the Witch Doctor" essay.)
  2. First, let me say I'm excited to be talking with a serious historian (or history buff) about this, and thank you for commenting. If my thesis seems naive and presumptuous and annoying, well, perhaps it is all of those things. I started by saying it was preposterous, and I'm not going to flinch if people are skeptical. But let us see.

The data for China that I used are (years of duration first, then name):

628 Three August Ones and the Five Emperors
470 Xia Dynasty
554 Shang Dynasty
275 Western Zhou Dynasty
514 Eastern Zhou Dynasty
246 Spring and Autumn Period*
254 Warring States Period*
15 Qin Dynasty
215 Western Han Dynasty
16 Xin Dynasty
195 Eastern Han Dynasty
45 Three Kingdoms
52 Western Jin Dynasty
103 Eastern Jin Dynasty
161 Southern and Northern Dynasties
37 Sui Dynasty
289 Tang Dynasty
53 Five Dynasties and Ten Kingdoms
167 Northern Song Dynasty
152 Southern Song Dynasty
209 Liao Dynasty
119 Jin Dynasty
97 Yuan Dynasty
276 Ming Dynasty
1 Shun Dynasty
267 Qing Dynasty
4 Empire of China

If I omit everything up to and including the Shang, I still get a negative slope of -0.29 for the cumulative curve. (I have a reason for using the cumulative curve, which I will discuss below.) Wikipedia actually breaks down the Eastern Zhou into two partly overlapping sub-dynasties (shown with *), so if I use those instead of treating the Eastern Zhou as one, the slope becomes shallower, and flattens noticeably at the end, but it's still -0.26. If I throw out the two rump dynasties (the Shun in 1644 and the Empire of China in 1912-16), there's a slight rise at the end which spoils the monotone shape of the curve, but it still doesn't get me back to the average of the first two accepted dynasties in the set. And at that point I would have thrown out 5 of my 22 data points. To get a positive curve I have to start with the Qin, which basically throws out everything in Chinese history prior to 221 BC.

It would not take much for your knowledge of Chinese history to vastly exceed mine.
But it seemed from my study of the Mandate of Heaven that the Shang Dynasty had to have some real historical foundation, since the theory of the Mandate was used by the Zhou to justify their overthrow of the Shang. Plus there seem to be ample primary sources cited for the Shang -- the "Bamboo Annals," inscriptions on bronze artifacts, archeological digs of palaces and tombs. Wikipedia lists several different sets of dates that have been proposed for the length of the Shang, but none put it any shorter than 500 years. Again, I'm not averse to doing the analysis independently of Wikipedia or some neutral source, assembling a custom data set. But coming from someone who is not a professional historian, I think the cries of "cherry picking" if I did it that way would be all the louder.

First, about correlation coefficients and confidence levels: In early drafts of my book, I actually computed all these, not just for the historical dynastic data but for all the various curves I presented. I took all that information out after about six months because 1) citing them all wreaked havoc with the flow of the text; 2) the coefficients and confidence intervals were typically good for large data sets but not as good for small ones; and 3) my particular method of presenting the data tended to force high correlation values in a way that a statistician would see as contrived. Out of these, 2) was probably the strongest argument. In effect, the specific figures didn't actually convey much in a given case, beyond whether I was working with a lot of data points, or a few. I could write a book that would have stupendously high correlation and confidence scores for every graph, simply by dropping 70-80 percent of my examples and sticking with large data sets. But it wouldn't convey the idea I'm working on. If I wanted to write about the application of the principle to a very broad range of items, then I would have to accept the data sets as they came.
For example, if I have a dynasty that consists of 8 rulers, I can play with different ways of calculating the likelihood of the last 4 being less long-lived than the first 4 -- but all the methods I might use give the same weak confidence level, because N=8 just can't do much for you.

I'm not expressing scorn for academic convention. It has its place. But this isn't a Popperian argument where I'm trying to falsify some proposition by showing the data lie outside an arbitrary confidence interval. Since the theme of the book is to illustrate the power of Jaynes' way of looking at probability, and since I'm arguing that these phenomena are invariably nonlinear, nonstationary, and don't obey the central limit theorem, it would be redundant to continually test, for each and every graph, the alternative thesis that they do obey the CLT and that the pattern is just random happenstance.

Now, in regard to your first question, I use rank number versus cumulative years to get average duration, and plot the decline in average duration. Why do the calculation based on the cumulative curve? Because the theory says that the pattern depends on increasing set size, or rank. We can certainly do it other ways and still get a decline -- as you did when you fitted a linear estimate to my table and got the downward trend of about 1.5 years lost per additional dynasty. But if you think the cause is entropy, then you want above all to see what the pattern looks like in a cumulative plot. That allows me to compare apples to apples when I look at disparate data types. However, that also creates a strong auto-correlation effect. The scores for a cumulative plot are always good if the slope is even slightly downward. So if I were to cite correlation scores for the graphs the reader actually sees, it would be meaningless and misleading. I hope that's clear.

I'll deal with the rest of your points tomorrow, when I have more time.
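To make the cumulative calculation concrete, here is a minimal sketch in Python. It is my reconstruction from the description above, not code from the book: the durations are the post-Shang China figures quoted earlier (Eastern Zhou treated as a single dynasty), and ordinary least squares on the log-log plot is an assumed choice of fitting method, so the slope it prints is illustrative rather than a reproduction of the -0.29 figure.

```python
import math

# Post-Shang durations (years) from the China list above, with the
# Eastern Zhou treated as a single dynasty.
durations = [275, 514, 15, 215, 16, 195, 45, 52, 103, 161, 37,
             289, 53, 167, 152, 209, 119, 97, 276, 1, 267, 4]

# Average duration after the first k dynasties = cumulative years / rank.
cum = 0.0
avg = []
for k, d in enumerate(durations, start=1):
    cum += d
    avg.append(cum / k)

# Least-squares slope of log(average duration) against log(rank).
xs = [math.log(k) for k in range(1, len(avg) + 1)]
ys = [math.log(a) for a in avg]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(f"log-log slope of average duration vs. rank: {slope:.2f}")
```

Dropping or adding entries (the rump dynasties, the starred sub-periods) changes the slope, which is exactly the sensitivity being argued over here.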
  3. You generally have to be careful to compare within a system rather than between systems, to look at many races taking place together at one time, rather than just one. So it works well if you compare small Canadian Parliamentary ridings (electoral districts) to large ones, or Senate races in small U.S. states versus large ones, or small districts in the City of London to large ones, for the same series of elections. It would not work, or at any rate the law would be much less apparent, if we created a grab-bag of different voter populations -- like mixing Greek, Ukrainian, and Iraqi election results in one data set. The law isn't that universal. It also doesn't work quite as well for presidential elections, where there is only one nationwide ballot. For more on this see below.

Yeah, you got me on that one. I was being a little sloppy in referring to the 50.5-49.5 split, and didn't spell out what I meant. First, there is a very noticeable difference between small states and large ones. Hawaii or Alaska will generally deliver bigger majorities than California, when the vote is held for governor or Senate seats.

But there's also a trick that distinguishes the American system from most others. Most systems are genuinely multiparty, and that means it is possible to win with no more than a plurality. Multiparty systems thus tend to have a steeper curve. The strength of the consensus fades more noticeably as we go from 2,500 to 250,000 voters. The U.S. system so effectively excludes serious third parties that the winner nearly always has a genuine majority. The failure to achieve consensus is much less noticeable if there are two and only two choices, not 3 or 5 or what have you. So the majorities are noticeably stronger in U.S. elections than in Canadian or European ones, and the curve isn't as steep.
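As a toy illustration of the size effect (my own sketch, not a model from the book): if every voter independently flipped a fair coin, the expected winning margin would shrink like 1/sqrt(N) -- itself a power law in electorate size. Real electorates are correlated, so actual margins and slopes differ, but the simulation shows in miniature why comparisons have to be made across district sizes within a single system.

```python
import random

random.seed(0)

def mean_margin(n_voters, trials=1000):
    """Average winning margin (as a share of the vote) over simulated
    two-way races in which every voter flips a fair coin."""
    total = 0.0
    for _ in range(trials):
        votes_a = sum(random.random() < 0.5 for _ in range(n_voters))
        total += abs(2 * votes_a - n_voters) / n_voters
    return total / trials

# Margins shrink as the electorate grows, roughly like 1/sqrt(N).
for n in (100, 1000, 5000):
    print(n, round(mean_margin(n), 4))
```

Note that observed majorities (the 50.5-49.5 splits discussed above) are far larger than this independent-voter baseline, which is part of what makes the real curves interesting.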
  4. Yes. If you mean, should the very first managers of a team tend to last longer in the job, and later managers cycle through more frequently, then yes, that is what I would expect, though I haven't studied that case. It's true, for example, of comic books. A long-lived comic book will tend to keep its original creative team for many years, a decade or more. But eventually, when the founders leave, their replacements are on average less loyal, or less popular with the readers, or viewed as mere temp workers by management. In any case, they tend to have shorter tenures. This same principle applies to customer loyalty in a lot of industries. Your first customers are your most enthusiastic and committed. The 100th or the 10,000th shows up later and leaves much sooner. I call this "loyalty fatigue" and it has a big impact on businesses dependent on charging annual support fees, or "upselling" established customers. I've found a few counter-examples so far. It isn't true of mayors, at least not in the major American cities dating back to Revolutionary times. But the reason appears to be a massive change in the job description. Early mayors were unpaid, and were not elected. They were chosen by a committee and assigned the job. It was not unknown for even prominent citizens to move away rather than accept the honor. So comparing the tenure of modern mayors to 18th-century Boston is perhaps inappropriate. It's apples versus oranges.
  5. Hmm, these are excellent questions. It may take me several posts to deal with all the implications, so bear with me.

To start by getting terminology right, you identify "random" events as always being independent. I agree that that is the conventional way of looking at the problem, the classical perspective laid down by Pascal and Fermat in the 17th century. But that is not how Jaynes approached it. We actually go back to a more basic level of argument in Jaynes' system. Random behavior, Jaynes would say, simply means the behavior of a system with a range of possible states about which one has limited knowledge. If I must guess which state the system will be in at a given moment, then by definition the system is varying randomly. Randomness is a relationship between the observer and the observed. The system itself "knows" what it will do, or is doing. It is determinate. It is my knowledge that is indeterminate. I have to specify a range of possibilities because I don't know which one will occur.

You may recognize that this is a Bayesian approach to probability. However, Jaynes then added his own innovation. He viewed probability as a branch of logic. He introduced the principle of maximum entropy as a kind of virtually all-purpose limiting case -- a logical axiom, if you like. The principle of maximum entropy is a supremely powerful principle because it is nearly always going to be applicable. Systems that are not at maximum entropy are the exception. Systems that are at maximum entropy are the rule. In everyday problems like economics or history, the principle of maximum entropy is even more useful than, say, conservation laws. There's no guarantee that the number of people in an economy will remain constant, or that the quantity of money will be conserved -- but there is every reason to think that disorder will always be at a maximum, no matter what sort of quantities we are measuring, whether any are conserved or not.
One consequence of this approach is that the idea of independence recedes in importance. We don't necessarily need to know if the elements of the system are independent, or dependent, and we don't build up a detailed model of the classical kind of what the system might be doing. We take a different route entirely. We know that dependent or independent, they are thoroughly disordered -- and the kind of disorder is itself meaningful to us.

Jaynes didn't use the Objectivist terminology, but he certainly would have understood it if I said that randomness is an epistemological concept, not a metaphysical one. The system does what it does, and we do not know the details: instead of trying to model the details we apply a few broad logical rules about what it has to be doing and make estimates based on those. It is possible to make a strong prediction about the behavior of very complex systems -- that is, to specify a distribution curve that the observable system variables will obey -- using a very limited number of parameters, provided that the system does obey the principle of maximum entropy. The details of the system then remain unknown. One does not model the mechanics. One would instead compute what range of outcomes (what curve) would maximize the entropy -- that is, maximize the uncertainty of individual outcomes. This usually turns out to be a power law. Jaynes limited himself to demonstrating this in chemistry and physics, but he acknowledged, in a number of papers, that it could be applied to economics and to other phenomena as well. In the past fifty years, many people have done so. The results are elegant and powerful, and sidestep entirely the dependent/independent problem.

So for example, if we sat down to estimate how elite incomes (the top 10 percent, say) would be divided up in a society, we would immediately become bogged down in an insane number of details. What laws apply? What are the tax rates? What are the cultural imperatives?
What sort of wealth does the society hold? Is it agrarian, industrial, post-industrial? Most people would shrug and say they have no idea how incomes are distributed. It would seem like an impossible task even to start building the model.

However, Vilfredo Pareto discovered more than a century ago that ALL societies, from the medieval era to the present, obey the same simple rule for the distribution of elite incomes. They always follow a power law. The arrangement is a geometric function: it might be 100,000 people with incomes of $50,000 to $99,000, and then 40,000 with incomes of $100,000 to $199,000, and then 16,000 with incomes from $200,000 to $399,000. You see the rule? For every doubling in income, in this country there are just 0.4 times as many people who earn the higher amount. In another country the fraction might be 0.35 or 0.27 or some other number. However, plotted on a log-log graph, these cohorts or bins closely approach a straight line -- always. The only variation between countries is the slope of the line.

Now, I contend, backed up by Jaynes, that the reason that Pareto's income curve works so neatly, and gives us this elegant power law, is the logical constraint of entropy. The set of all elite income-earners is large enough, the economy complex enough, and enough time has gone by, that you can ignore the details of the society and just assume that income uncertainty will be maximized according to the principle of maximum entropy. The range of incomes is not just very wide -- it is as wide as it can be, given the circumstances. It is maximally random. This makes it possible to estimate the distribution of incomes using a very simple curve despite the daunting social complexities involved.

That's a very, very compact summary of a complex idea requiring several chapters in my book. It's a very different approach to probability, and it requires serious thought before one can use it confidently.
When I say the cause of Pareto's elite incomes curve is a mathematical meta-cause, I mean this: Despite the economy's incredible complexity, and all the myriad possibilities of free will, the end result is always this very smooth geometric distribution. There are all sorts of things we don't know about the economy, but there is one thing we are confident we do know: its measurable variables must obey the principle of maximum entropy. So to answer your concluding question: Yes, I'm curve-fitting, but with a very powerful logical principle that says this shape of curve is privileged epistemologically.
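The doubling arithmetic behind Pareto's rule can be checked directly. Here is a short sketch using the illustrative figures from the post (100,000 earners in the lowest bin, a ratio of 0.4 per doubling of income); the numbers are the example's, not real income data.

```python
import math

ratio = 0.4                                        # the post's example ratio
incomes = [50_000 * 2 ** k for k in range(6)]      # bin lower bounds, doubling
counts = [100_000 * ratio ** k for k in range(6)]  # head-count per bin

# On a log-log plot every pairwise slope is the same constant,
# log2(ratio), i.e. about -1.32 for a ratio of 0.4.
slopes = [(math.log(counts[i + 1]) - math.log(counts[i]))
          / (math.log(incomes[i + 1]) - math.log(incomes[i]))
          for i in range(len(counts) - 1)]
print([round(s, 3) for s in slopes])
print(round(math.log2(ratio), 3))
```

Only that constant slope varies from country to country; the straight-line shape on log-log axes is what the maximum-entropy argument says to expect.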
  6. Ultimately I think the challenge to free will posed by curves like this can be met. I am not arguing that free will doesn't exist. But I think these curves do complicate the picture. The issues are subtle and far-reaching.

For governments to move systematically from lasting 600 years to lasting 8 years is good news overall for the cause of individual freedom. But if the erosion of regime stability is truly inexorable, if it follows a predictable mathematical rule, then the question arises: Where do philosophical ideas enter into the story? The decline in regime stability was evident several thousand years before Aristotle appeared on the scene. It continued in similar fashion after Aristotle, and after Augustine, and Aquinas, and Locke, and Kant. I cannot discern any acceleration or change in the pattern. The shape of the decline curve is the same throughout history. This contradicts (or at least appears to contradict) the Objectivist premise that philosophical ideas move history. Ideas might well move other aspects of history, but not regime stability. These curves appear to transcend (there's that word again) philosophy, religion, language, any influence other than numbers.

I think the correct way to view these curves is as a manifestation of steadily increasing disorder. Early societies were extraordinarily uniform and predictable, at least as compared with the way we live now. Very little changed in any given year. There were no new words in the language, no new ideas, no economic improvements, and no political innovations. Thus regimes were very stable. However, each new nation-state that was founded necessarily had its own language and customs, its own economic foundation, and its own unique vulnerabilities. Innovation wasn't impossible, merely uncommon. The pace of change gradually accelerated, the nature of politics changed -- but the amount of change at any given point was governed by statistical rules unrelated to the content of the ideas.
Knowing what philosophical ideas were in play in a given society is necessary to predicting its future course, but ideas alone are not sufficient to explain what happens. There is, so to speak, a speed limit for change. Societies will only absorb so much change at any one time -- not more, and not less either. Because these curves are so common (I have hundreds more to talk about), I think it is necessary to modify the Objectivist premise regarding the role of ideas. Ideas do drive history and social life, but only in a direction allowed by statistical laws, and only at the pace allowed by the size and complexity of social systems at that point.
  7. Well, yes, that's certainly true in 2009. But the whole list of dynasties covers every major power throughout history, since around 3,500 BC. Apart from a few very brief experiments in democracy, hereditary monarchy was the standard form of government everywhere, right up to the 20th century. So a decline in stability in hereditary monarchy might not be big news today . . . it affects places like Nepal or Saudi Arabia, and not much else. But it has huge implications for our study of history.
  8. My copy of OPAR is in a box somewhere and I haven't read it in a while. But I infer this has to be a reference to Peikoff's doctrine of "the arbitrary". Right? So I did a little searching on the Web under "arbitrary," just to make sure of the argument, and here's what Peikoff says:

An arbitrary claim is one for which there is no evidence, either perceptual or conceptual. It is a brazen assertion, based neither on direct observation nor on any attempted logical inference therefrom. For example, a man tells you that the soul survives the death of the body; or that your fate will be determined by your birth on the cusp of Capricorn and Aquarius; or that he has a sixth sense which surpasses your five; or that a convention of gremlins is studying Hegel’s Logic on the planet Venus. If you ask him “Why?” he offers no argument. “I can’t prove any of these statements,” he admits—“but you can’t disprove them either.”

And what, according to Peikoff, is one to do with such a claim?

In the absence of evidence, there is no way to consider any idea, on any subject. There is no way to reach a cognitive verdict, favorable or otherwise, about a statement to which logic, knowledge, and reality are irrelevant. There is nothing the mind can do to or with such a phenomenon except sweep it aside.

Okay, assuming for the moment that this is what you are referring to, let's next clarify what claim(s) you regard as arbitrary. I have made two distinct claims. One could perhaps break them down further, but there are at least two separate ideas here:

1) If you consult Wikipedia's list of dynasties, and do some quick sums on the dates for each one, you will find that later hereditary rulers in a given dynasty on average had shorter terms than earlier ones. You will also find that the lengths of the dynasties themselves form a similar pattern.
2) The reason the terms of later rulers or the spans of later dynasties are shorter is that long series of hereditary rulers (and by extension, long series of party control in any system that transfers executive power) are governed by a power law, in conformity with the principle of maximum entropy.

I am not at all surprised if anyone is skeptical about claim #2. I haven't come anywhere close to establishing it in a handful of posts. For a reader of this forum who does not have access to my book, it is merely an interesting hypothesis at this point. Not an arbitrary hypothesis in Peikoff's sense, let me stress, but not proven either.

I am, however, surprised that you consign claim #1 to the status of "the arbitrary". Reading Peikoff's explanation of what that term means, I see no resemblance between it and what I am saying. It not only isn't arbitrary; I also have trouble even treating it as open to question. It is an observable fact.

Responding to this has taken me some time. First, to understand what you were saying here I had to Google "taxonomic artifact". In 30 years of busy intellectual life I have never heard anyone (Objectivist or otherwise) use that term, and I did not think merely breaking it down into dictionary definitions of "taxonomic" and "artifact" was going to be sufficient. Google gave me 175 matches in total -- not a lot -- and I reviewed every single one. Of these, 94 were footnotes or endnotes referring to a 1987 essay in Nature entitled "Is the Periodicity of Extinctions a Taxonomic Artifact?" The overwhelming majority of the other 81 were references to paleontology or evolutionary biology, since 1987. In one case, a liberal blogger referred to the Republicans calling themselves "the party of morality" as a "taxonomic artifact". In one other case, Noam Chomsky made an analogy between the biological sense of the term and an obscure distinction between certain passive and active verbs.
A book on museums also used the term as a sarcastic metaphor, referring to the Crystal Palace exhibition of 1851 as condensing the whole of human endeavor into one glittering and senseless category, roughly speaking, "things ruled over by Queen Victoria." That's all I could find. It is not a term in wide use even among paleontologists.

So then I spent some time sorting out what sort of thing a "taxonomic artifact" actually is. It turns out that among paleontologists it has several meanings. It has been used to refer to a classification scheme that treats unrelated species as belonging to the same family. That seems straightforward, except that I have also found authors using it in the reverse sense -- a classification scheme that takes minor differences in the same species as constituting evidence of multiple species. Thus it does not seem to mean much more than "classification error," and might be dismissed as not a particularly useful term even in paleontology.

However, the word "artifact" does have some significance here. It rescues the term from being redundant. Today's paleontologists quite often use statistical methods to assign classifications. They don't necessarily study the fossils by eye and form a reasoned judgment about them, finding such techniques both subjective and cumbersome. Instead they crunch numbers and let various scoring schemes determine what is a species and what isn't. So a "taxonomic artifact" arises specifically when you incorrectly classify fossils as a result of using a poorly chosen statistical procedure.

Whew. Okay. So now I have some sense of this term's literal meaning, but I remain unable to apply it (even metaphorically, as I must assume you meant) to my own work. I didn't assign rulers to dynasties on the basis of a statistical procedure, and neither did the various scholarly authorities that Wikipedia references. The basic data don't rely on any kind of statistical reasoning.
They are the most elementary sort of observation, e.g. in 1483 Richard III became protector on behalf of the 12-year-old son of his brother Edward IV. You would have grounds to worry about specific dynasties being artifacts if my calculations had any impact on how dynasties are identified and classified, but my calculations all come well after the fact, and nobody else involved in the process ever made any calculations. The metaphor of "taxonomic artifact" just doesn't apply here. It is a kind of error that cannot occur.

I will deal with some of your other objections next. I'm not saying "dynasty" is primary and "nation" secondary. I'm saying "ruler" is primary in relation to "dynasty," since a dynasty is a collective noun relating to a number of rulers. A series of dynasties clearly is a derivative concept which requires both the idea of "ruler" and the idea of "dynasty" to make sense. It is still more derivative, hence my reference to it belonging on "the next level up". No epistemological error here.

Yes, certainly the facts are primary. I'm not suggesting arguing from numbers unconnected to facts. But if I have 379 examples of a phenomenon to support one claim, and only 25 to support another, then all else being equal my claim with 379 examples is the stronger one. More measurements lead to a more precise abstraction about how the measurements are related. The larger the number of examples, the more convincing our estimate of, for example, the slope of a curve.

I couldn't parse this sentence. You can certainly suggest countries at random for discussion, from the Wikipedia list. Here is the link: http://en.wikipedia.org/wiki/Dynasty

I think saying I have abdicated responsibility for the relationship of my data to reality overstates your case by a long way.
I was simply saying that I don't speak Portuguese, don't have access to the National Museum where the original source documents regarding the Portuguese monarchy are stored, and have not studied, for example, the long succession of treaties, surveys, tax censuses, and other documents that scholars use to define the ever-shifting borders of Portugal over the past 1,100 years. Much less do I speak Chinese or Russian or Swahili, or have access to the relevant national archives for their dynasties. If I must be responsible for proving all that firsthand, the project is impossible and secure knowledge of history is impossible.

If there were time, or if I could find suitable helpers, I would do a more in-depth investigation of the sources underlying the Wikipedia list, and build a longer and more authoritative one. In fact I did do a search of many hours for kingship lists not on the Wikipedia page, and found several (such as Assyria). However, including or excluding these items did not alter my conclusions, and the items I found omitted tended to be disputed by scholars, or incomplete, much more often than the items Wikipedia accepted. So my effort to criticize the Wikipedia list for the most part reinforced its credibility.

There is no absolutely secure position to take here. If I build a custom list of my own selections, I am open to accusations of cherry-picking my data. If I rely on Wikipedia, I accept responsibility for whatever flaws exist in their list. To maximize my credibility, I have chosen to cite Wikipedia but to do some independent work validating it.

Yes. But this criticizes at least two distinct ideas while treating them as one. First there is the idea that later rulers have shorter terms. Never mind whether they obey a power law. Their terms are consistently shorter, in a very striking way. The question, "What fact of reality causes this distribution?" applies to that observation, that the later rulers stay in power for a shorter time.
The power law is the start of an answer to that question. I maintain that the definition of "dynasty" behind the Wikipedia list is objective, that the data are reliable, and that the fact of a decline is firmly established. That is the correct starting point of the discussion: There is a decline, now how do we account for it? Historically, the idea of entropy was never strictly or solely about thermodynamics. That was its first and most spectacularly successful application, but it has many others. It began as a theory about enumerating the total number of possible states in a system. At that time (the late 19th century), the existence of atoms remained speculative. Details of their behavior were unavailable. So Boltzmann's arguments about entropy were on a very broad level. They referred, in effect, to the number of distinct moving parts (which might be atoms or molecules or something else altogether) and the number of states those parts could take on. Since 1948, science has recognized the distinct field of information entropy, introduced by Claude Shannon. It has nothing to do with thermodynamics, but it very definitely refers to the number of moving parts or possible states in a system. Since 1957, science has recognized the principle of maximum entropy introduced by Edwin Jaynes. Again, it has nothing specifically to do with heat loss or thermodynamics. There are textbooks applying it, for example, to economics. I cited one earlier. In writing about the principle of maximum entropy to an audience of lay readers, or even for scientists, it is best to be careful and start by assuming they know nothing about any other kind of entropy beyond the version governing thermodynamics. People are often surprised to learn that the principle even exists. But I thought that I already made that introduction. 
An appropriate challenge at this stage of the discussion might be that you haven't read or heard of Jaynes, and find it difficult to believe that the same mathematical rules can be applied to people (or viruses, or falling bombs, or other entities) that apply to atoms. That is legitimate and reasonable, and moreover I consider it my burden in the discussion to explain how that can work. Happy to do it, in fact. But I can't keep going back to the very beginning as if I hadn't spoken at all.
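To make this concrete for readers who haven't met entropy outside of thermodynamics: Shannon's measure depends only on the number of possible states and their probabilities, not on what the "moving parts" physically are. A minimal sketch, with arbitrary illustrative probabilities:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Entropy cares only about the number of states and how probability is
# spread over them -- not whether the states belong to atoms, voters, or rulers.
uniform_4 = [0.25] * 4             # four equally likely states
skewed_4  = [0.7, 0.1, 0.1, 0.1]   # same four states, one heavily favored
uniform_8 = [0.125] * 8            # more states -> more entropy

print(shannon_entropy(uniform_4))  # 2.0 bits: log2(4)
print(shannon_entropy(skewed_4))   # less than 2.0: concentration lowers uncertainty
print(shannon_entropy(uniform_8))  # 3.0 bits: log2(8)
```

Doubling the number of equally likely states adds exactly one bit, and concentrating probability on a few states lowers the entropy. Nothing in the formula refers to heat or temperature.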
  9. That is a good way of putting it. I have a number of other stability measures that work the same way. For example, if you study election returns district by district, you will find a very regular power-law dependence of the result on the number of voters. Small districts tend to produce much more decisive majorities for the winner. Large districts produce weak majorities, or pluralities in a multi-party system. Once again the pattern appears to depend solely on numbers, not on culture or language or economics. In effect, when you try to achieve consensus on a particular candidate, information entropy (uncertainty) increases with the number of voters. A group of 2,500 voters will give the winner 75 percent of their votes, and distribute the remaining 25 percent among the other candidates. It can keep that kind of consensus up for election after election. A group of 25,000 voters will almost never do that well. They might manage a 60-40 split on a regular basis. A group of 250,000 voters will be lucky to manage 55-45. And so on. You can see the implications almost immediately. Large states or districts can't form strong consensus in favor of any one leader, and so they are more unstable. Small states or districts can stay with a policy or platform for several cycles with far less difficulty. The perpetual 50.5-49.5 split between presidential candidates in the modern U.S. system is not, at root, a problem with ideology or culture. It is not the fault of the Republicans and Democrats that America is perpetually locked in red-versus-blue warfare. It's a consequence of the principle of maximum entropy.
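For readers who want to see the shape of the claim: the three illustrative splits above (75-25 at 2,500 voters, 60-40 at 25,000, 55-45 at 250,000) are consistent with a power law, which shows up as a straight line in log-log coordinates. A sketch using those made-up round numbers, not real election returns:

```python
import math

# Illustrative figures from the discussion: winner's margin above 50 percent
# versus district size. These are round numbers, not actual returns.
sizes  = [2_500, 25_000, 250_000]
excess = [25.0, 10.0, 5.0]   # winner's share minus 50, in percentage points

# A power law excess = c * size**(-b) is a straight line in log-log coordinates.
xs = [math.log(n) for n in sizes]
ys = [math.log(e) for e in excess]

# Ordinary least-squares slope and intercept for the log-log fit.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
c = math.exp(my - slope * mx)

print(f"exponent b = {-slope:.2f}")  # about 0.35 for these three points

def predicted_winner_share(district_size):
    """Winner's predicted vote share (percent) under the fitted power law."""
    return 50.0 + c * district_size ** slope
```

The exact exponent is not the point; the point is that the margin shrinks with size along a regular curve rather than at random.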
  10. DavidOdden put the argument slightly differently, but I think you are both concerned about the same thing, namely the legitimacy of my calling the principle of maximum entropy either a cause, or a law -- and a "transcendent" cause or law at that!

Okay. First, can we agree that science does speak in terms of "laws" of entropy, that is, the laws of thermodynamics? In particular, the second law of thermodynamics says that for a closed system, the amount of entropy must either stay the same or increase. That is a law as firmly established as anything in science. I can cite prominent physicists saying they would doubt almost anything before they would doubt the truth of the second law.

Now, that doesn't mean that all scientists would accept this application of the principle of maximum entropy as legitimate. If they are not familiar with the work of Jaynes, they might be quite surprised to see the idea extended to cover something other than heat energy and temperature. But from a philosophical point of view, my use of the principle doesn't raise any problems that weren't already implicit in the classic, universally accepted second law. I'm using the same mathematical expressions; I'm just counting different entities with them.

What I'm saying is, the second law as it is used every day in physics now is a "transcendent" law or cause in the same way as the decline effect I am talking about. It is no less of a mathematical abstraction. If one is nonsense, so is the other. If one cannot qualify as a law, neither can the other.

Of course, one occasionally sees harsh criticism from Objectivists of concepts from mainstream science as being incoherent or philosophically unsound. I've cited Rand myself as saying just that -- that science is riddled with theories that are "floating," that don't have proper referents. I believe Harry Binswanger is particularly noted for having carried on this line of attack since Rand's death. 
So possibly it is already part of the Objectivist canon that the second law of thermodynamics is flawed conceptually, and I just failed to see that particular claim in print. The short version is: Plasmatic, DavidOdden, I'm not rejecting your arguments as such. I just want to establish, before we get too far along, what the scope of your respective claims really is. So far they both seem to me to be very sweeping. You are indicting a huge range of established science along with my argument. I think before we can get any further we need to confirm that that is what you meant.
  11. Well, to take your second objection first, I certainly understand the need not to go picking data to fit one's thesis, but I really don't think I've done that.

The primary level of the phenomenon, the level where it is most evident, is within any one particular dynasty. That is where examples exist in greatest number, and there is the least ambiguity about what we mean. I would say the idea of a dynasty is one of the oldest, and most firmly established, in historical thought. Indeed, one can make a good argument that the origin of history as a subject of study is profoundly intertwined with kingship lists. Dynasties were one of the first, if not the very first, objects of analysis by historians. There are hundreds of dynasties and the downward trend is quite unambiguous.

Your point applies to the next level up, where we string together successive dynasties ruling the same country. There is admittedly some danger at that next level of putting together unrelated items in an arbitrary fashion, of picking one dynasty as being the 'first' to rule the country and forcing the data to fit the hypothesis. Plus there just aren't as many cases to work with, which, given that this is a statistical argument, is a handicap.

My rule with regard to data is that I don't get to pick and choose. I identify a source that is widely recognized and, as far as possible, impartial, and then I take all the cases presented by it. In the case of dynasties, I actually started out years ago by using encyclopedias and assembling the data manually. But not long ago Wikipedia assembled an article on dynasties that listed large numbers of them in convenient form. Since Wikipedia itself relies on sources like encyclopedias, and since it is far, far more convenient for readers to go there than to locate a printed Britannica, I now use Wikipedia as my source. 
So I can concede that there is some uncertainty about the precise origin and boundaries of the entity "Portugal," and whether Portugal was a hereditary monarchy throughout the years I've cited. But I'm not manipulating the data. It's up to other people, more expert than I am, to decide whether to list a given dynasty as ruling "Portugal". Your first point relates to something Plasmatic said, so I'll address it below, along with his point.
  12. Now, let's review what my argument regarding epidemics requires. It's really simple. All we need do is imagine that we were subtly wrong about the way that infectious organisms are distributed among many hosts. Historically we assumed that propagation from one host to another was strictly independent, that the mean amount of infectious material transmitted was stationary over time. It was X number of germs on the first occasion of transmission, and on the 100th, and on the 10,000th, and so on. We couldn't prove this notion, however, without actually counting infectious organisms for each and every patient. We couldn't do that in the 19th century, or the 20th century. We're barely able to do it now, in a few special cases. It was an inference, a plausible hypothesis, that was very, very convenient because it simplified the math. It was not an empirically observed fact. If the average amount were to vary, especially if it were to drop with increasing epidemic size, then all bets would be off. The standard equations wouldn't work. An increasing proportion of patients would get a sub-clinical dose of organisms, and their immune system would defeat the invaders, and they would not even become symptomatic, much less capable of infecting other people. A decreasing proportion of people would get a sufficiently high dose of organisms to wind up dying of them. So transmission would effectively fall, and so would mortality. Assume this one thing, and the rest of the scientific framework remains intact. The dose-mortality relationship is well established. It's the foundation of vaccination, for example. All we need is proof that over time, the distribution changes in this nonlinear, nonstationary way. We have one disease for which individual counts of organisms have actually been made, for essentially every patient -- AIDS. The dose-mortality relationship in AIDS has been documented. 
If your very first visit to the doctor shows you with very high "viral load," you will likely die in a matter of months. If it shows a sufficiently low "viral load," then technically you have HIV disease, but you will likely never develop full-blown AIDS. The range of possible initial doses has been demonstrated to vary by many orders of magnitude. One patient can easily have 1,000 times the viral load of another. The distribution is highly skewed, with a lot of people having small viral loads, and a few having high. All of this is totally consistent with my model. The curves look like curves for totally unrelated things, like the distribution of casualties from bombing, or the distribution of hits on webpages. The only question we cannot answer with certainty is what the distribution was 20 years ago, at the start of the epidemic. Viral load testing was unavailable then. If there was a higher proportion of people with high viral load 20 years ago, then the mortality rate would have been higher, and transmission rates would have been higher as well. And here we come to one of the great ironies of AIDS, because we know that they were. They have come down on their own, to a large extent, well before the drug cocktails were invented. This will sound odd simply because I'm saying it, and you didn't hear it from a CDC medical spokesperson. But (1) the official government numbers clearly show it, and (2) if the older folks among you think back, you will clearly remember it. At the start of the AIDS epidemic, mortality was frighteningly high. Not only did everyone who had AIDS die, they died in weeks or months. The number of people with the disease was doubling every few months. There were projections (by Oprah Winfrey, among others) that 50 million Americans would have AIDS by 1990. Every year, the latent period for AIDS was re-estimated as being longer. 
The death rate as a percentage of those infected went down, from 40 percent every six months to 11 percent every six months. There were also numerous controlled studies that showed transmission in 1995 could not possibly be as efficient as it had been in 1985. And this was all before medical science had anything that would slow the disease. It predated the "drug cocktails" and the present era of managing AIDS as a chronic disease. I'll say it again: Most of the mortality decline in AIDS happened before the protease inhibitors became available. My thesis is that AIDS, despite being viewed as unique among diseases, tells us something fundamental about the nature of all diseases. We are able to see, for the first time, how the way in which organisms are distributed among individual hosts influences the clinical presentation of the disease. It is viewed at one point as a ferocious killer tearing through entire communities; then, when the caseload has grown by a factor of 10 or 100, it is viewed as being only half or a quarter as dangerous. The main reason, the universal reason, is not mutation or natural selection of different strains (although this may be present). The reason is that that is how randomness actually works. This is a summary of something that takes 30-odd pages and numerous graphs in my book to explain. It is informal, without footnotes. I hope it won't get me labeled as an AIDS kook. (If it gets me labeled as a probability kook, so be it.) EDIT: fixed a BBCode problem
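For the statistically inclined, the mechanism described above can be put in miniature. This is a toy sketch, not the 30-page model from the book: it assumes transmitted doses are lognormally distributed and that falling ill and dying require fixed threshold doses, with every number invented purely for illustration.

```python
import math

def lognormal_tail(threshold, median, sigma):
    """Fraction of a lognormal(median, sigma) dose distribution lying above `threshold`."""
    z = (math.log(threshold) - math.log(median)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)

# Hypothetical thresholds in arbitrary units: below SYMPTOMATIC the immune
# system wins; above LETHAL the patient dies. SIGMA sets how skewed doses are.
SYMPTOMATIC, LETHAL, SIGMA = 100.0, 10_000.0, 2.0

# If the median transmitted dose falls as the epidemic grows, both the
# fraction who get clinically sick and the case fatality rate decline.
for median_dose in (5_000.0, 500.0, 50.0):
    sick = lognormal_tail(SYMPTOMATIC, median_dose, SIGMA)
    dead = lognormal_tail(LETHAL, median_dose, SIGMA)
    print(f"median dose {median_dose:>8}: {sick:.0%} symptomatic, "
          f"case fatality {dead / sick:.0%} among the symptomatic")
```

Nothing here requires mutation or new strains; the same organism with the same dose-response behavior looks progressively milder as the typical dose drops.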
  13. Yes, although the discussion of why this particular curve should show up so often still remains. "It is what it is" doesn't satisfy everyone, hence the difficult and nuanced discussion of what qualifies as a cause, or a transcendent cause, or a law.

Well, I'm glad you find it intuitive, but that is emphatically not what they teach in medical textbooks. Here's a brief summary of what I found with regard to epidemic diseases.

In the mid-19th century, before the germ theory of disease was firmly established, a man named William Farr, in charge of public health statistics in England, began putting out studies of mortality and morbidity (death rates and transmission rates). These were not the first systematic studies ever done, but they were much more authoritative and insightful than anything previous. Farr made several key observations that (so far as I can discover) were unique to him and the first of their kind.

First, Farr observed that the number of people infected in epidemics tends to rise exponentially, then abruptly break, and drop faster than it went up. He suggested, as a rough approximation, that the data would fit a quadratic equation. In effect, this implies that every person who is infected will pass the disease on with the same efficiency as the last one. Transmission numbers rise at a constant rate up to the point where the supply of people to infect starts running out. If adopted without further examination, this leads directly to the modern S-curve taught in epidemiology texts.

But there is reason to doubt whether Farr ever actually thought that transmission was uniformly efficient, or whether he was just grabbing a handy function as a first approximation. For one thing, the S-curve is symmetrical, and Farr said epidemics come down faster than they go up. The figures Farr actually had at the time, and those gathered over the next half-century, did not fit an S-curve or normal curve very well at all. 
As late as the 1940's one could find people marveling at how poorly the data fit the model. What they actually suggest is that long before the disease runs out of victims, it is already slowing down. Transmission starts becoming less efficient almost immediately. However, once normal curves became the fashion everywhere in science, they were seized upon in epidemiology as well. "Farr's Law" is generally expressed as an S-curve in textbooks now, without even a footnote regarding what Farr himself knew or thought. Farr's second important discovery had to do with how one tracks mortality. He believed that mortality rates changed over the course of an epidemic. He had curves for cholera and other diseases that clearly implied it. One reason why Farr believed this was because his knowledge of disease predated the germ theory. He believed in miasmas and other environmental causes, which of course people thought of as having less and less effect if they were diluted. It mattered a great deal how much of the toxic substance was present in a given neighborhood, how much had been washed away by the rain, and so on. If divided up among more victims, miasma would again be less dangerous to any one person. So people coughing up miasma that they got in one district could be expected to "infect" their secondary victims, but with ever-increasing inefficiency. Once the germ theory came in, this idea of diminishing effect became obsolete. It now seemed clear that a very small quantity of infectious organisms would multiply many times over in each patient. The idea of diminishing mortality across the course of an epidemic seemed forlorn, since each patient basically started over at the beginning. Science became focused on antibody protection and the strength of the immune system in attacking the invading organism. Farr's idea was forgotten -- literally, for over a century nobody pursued it. 
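The conflict between Farr's observation and the textbook S-curve is easy to check numerically: under logistic growth, the daily-new-cases curve is exactly mirror-symmetric about its peak, so an epidemic that "comes down faster than it went up" cannot fit it. A minimal check, with arbitrary parameters:

```python
import math

def logistic_incidence(t, r=0.5, K=10_000.0, t_peak=30.0):
    """New cases per day under logistic growth: derivative of K / (1 + exp(-r * (t - t_peak)))."""
    s = 1.0 / (1.0 + math.exp(-r * (t - t_peak)))
    return K * r * s * (1.0 - s)

# The incidence curve implied by the S-curve is symmetric: the decline
# exactly mirrors the rise, day for day around the peak.
for days_from_peak in (5, 10, 20):
    rise = logistic_incidence(30.0 - days_from_peak)
    fall = logistic_incidence(30.0 + days_from_peak)
    print(days_from_peak, abs(rise - fall) < 1e-6)  # True for all
```

Real outbreak counts that fall faster than they rose, as Farr reported, cannot be fit by this family of curves without systematic residuals.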
The idea of the later victims of an epidemic automatically being less sick, simply because they come later, isn't up for discussion at this point in medical circles. You'll notice that epidemiologists refer to "the" mortality rate for a disease. Ebola (hemorrhagic fever) has a mortality rate of 80 percent. Smallpox (when it still existed) had two mortality rates, one for the "major" strain of 30 percent, the other for the "minor" strain at 1 percent. And so on. Now, I am not a doctor, and I don't mean to sound like an arrogant ass. However, for more than a decade I have tried and tried, and I cannot find any infectious diseases that erupt into epidemics that do not decline in both transmission rate and mortality. For the older diseases, such as plague in Europe or smallpox in the New World, the evidence is sketchy and anecdotal, but witnesses clearly report the disease moving more slowly and killing fewer people in the late stages of the epidemic. For modern diseases, we have actual World Health Organization data to look at, and the result is the same. One gets the impression from media reports (and Tom Clancy novels) of Ebola being fantastically lethal, basically not only killing an entire village in days, but killing the hospital staff who try to treat them, and then burning out before it can spread further. But in fact there have been less well-publicized outbreaks of hemorrhagic fever in Asia that involved thousands or tens of thousands of people (such as during the Korean War), and in those places the majority of patients survived, even though we have absolutely no drugs that work on any of the hemorrhagic fever varieties. In the case of H1N1, I've had the opportunity to test my model in real time. The initial outbreak in and around Mexico City raised tremendous alarm because the mortality rate was 5 percent or more among the first thousand patients. 
The authorities did not at that time have a lab test for H1N1 and so there was some doubt about the real mortality rate. But while lab testing eliminated some cases as not being H1N1, the mortality rate for the remaining cases was still 5 percent in late April. From there, as it spread, the official WHO mortality rate kept going down and down. At the end of the first week of May it was 2,400 cases and 1.9 percent. By mid-May, 7,500 cases and 0.9 percent. By the third week in May, 11,000 cases and 0.8 percent. At month end, 22,000 cases and 0.6 percent. Eventually the WHO reporting system broke down. The real patient count was in the millions but most national authorities simply could not identify or test more than a tiny fraction of the total. Now the only estimates were based on sampling, and reports from doctors' offices of increased requests for flu shots, and so on. The CDC reported on November 12th that as of October 18, 22 million Americans had caught H1N1, and 3,900 had died of it. That's a rate of 0.02 percent, or an incredible 1/250th of the mortality rate for the first hundred patients in Mexico City. Using my standard decline curve, I would have predicted a less dramatic drop to about 0.08 percent. Either way, the idea of there being one fixed mortality rate for H1N1 seems not just doubtful, but kind of ridiculous. It has done nothing but decline the entire time. The WHO figures for transmission (up to the point where they stop making sense) give the same impression. Neither transmission nor mortality were constant in this global pandemic. Of course, medicine has a backup answer for why epidemics behave this way, and it has enjoyed great plausibility for generations now. The argument is that mutation, followed by rapid natural selection, leads the later part of the epidemic, or later epidemic waves of the same disease, to be less lethal. 
Basically, a random mutation shows up that kills fewer patients, and because its victims live longer, it is transmitted to more people. It gains a competitive advantage over its more immediately lethal cousin. I'm not ridiculing this theory. It is plausible, there is certainly lots of mutation going on, properties of diseases have been proven to change. But it is used surprisingly often, one might say compulsively. It comes up for virtually every major disease. That's a lot of very convenient and rapid natural selection! Plus it is used not only for diseases like smallpox that kill large numbers, but also for diseases like H1N1. It is hard to see how a drop in mortality from 2 percent to 0.02 percent could give a disease much of an advantage. There are variations of the argument. Maybe the mutation makes the disease easier to transmit as well. Maybe different strains attack different risk groups. There is more than a century of thought behind the idea, and I am sure there is much in the theory that is true. But the whole thing is a bit of a Rube Goldberg machine. The mutation shows up again and again, just when it is needed to explain inconvenient statistics. So in 2007, a team studying smallpox set out to confirm or deny the model, comparing different samples from historical smallpox victims to see just how different the strains were, and to correlate the spread of particular strains with changes in the mortality rate. Unfortunately they found that the "major" strain associated with a 30 percent mortality rate sometimes showed up in regions with very low mortality, and the "minor" strain showed up where mortality was high. Differences in the strains could not explain the huge differences in mortality that were observed. I'll conclude this in the next post as this one is getting long.
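The H1N1 mortality figures quoted above can be put through the same log-log treatment as the other decline curves. A sketch using those reported numbers as given; the fitted exponent is descriptive only, not a claim about the quality of the underlying WHO and CDC counts:

```python
import math

# Mortality rates quoted from WHO/CDC reports as the H1N1 caseload grew,
# taken directly from the figures cited in the discussion.
cases = [2_400, 7_500, 11_000, 22_000, 22_000_000]
rates = [1.9, 0.9, 0.8, 0.6, 0.02]   # percent of identified cases dying

xs = [math.log(c) for c in cases]
ys = [math.log(r) for r in rates]

# Ordinary least squares in log-log space: rate ~ a * cases**slope.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(f"mortality ~ cases^{slope:.2f}")  # close to -0.5 for these figures
```

A single fixed mortality rate would be a horizontal line on this plot; these five points instead slope steadily downward across four orders of magnitude in caseload.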
  14. I read them all as a teenager (devoured them would be closer to the truth) and I was bitterly disappointed at the way Asimov resolved the story. I felt that he was saying something profoundly interesting and serious when he proposed the idea of psychohistory, and that the later business of telepathy and mind control, while it made for an interesting story, pretty much abandoned the promising argument he started with for a much more familiar sort of space opera. I've looked at the sequels that appeared later on (both Asimov's own and those written after his death for his estate). Some are interesting, but my feeling has always been that the science lost out to the fiction. What would really have made his work epic in intellectual history was if he had given more substance to the idea of psychohistory, made it more like a real science. But of course, real science is hard, hard work, and Asimov wrote Foundation when he was a very young man desperately trying to make a living. He couldn't have afforded to spend more time on it than he did. When I started seeing the implications of my work, 10-15 years ago, I seriously considered adopting the pen name Hari Seldon, as an homage.
  15. None of my examples so far, no. I agree, one can make the argument that an office-holder's behavior is decisively shaped by the imperatives of the office, the social network as you call it. But I have some examples where this argument is not available, where we are talking about the behavior of individuals, which I think are very interesting. More on that presently.

Well, it might seem hasty to you given that I got to it after a dozen posts, but it actually took me 20 years. I'm familiar with the Objectivist argument against reification, and I don't think that's what I'm doing. I agree I'm being a little provocative in using the word "transcendent," because that is a word that gets abused quite a bit. But here I am using it in the correct sense: there are endless local, anecdotal causes for these curves. We can see that the factors governing traffic accidents are different from the factors governing participation in online forums, and different again from transmission or mortality rates in epidemics. Yet they all yield the same shape of curve, and close to the same slope. I say that what causes them to be similar is something that transcends the particulars of each case.

You want to say that the systems are similar, and their similarities are what make the curves the same. I think the hangup here is more a matter of terminology than real content. I am not arguing for some kind of ineffable World Spirit of the Hegelian kind, working behind the scenes. The transcendent cause I have in mind is mathematical, not mystical. If you find it more satisfactory to say that the cause is mathematical similarity, that is fine. Given the vastness of the range of cases, I like a word that emphasizes scope. Given that I haven't demonstrated that huge scope as yet, you quite reasonably prefer more conservative language.

Yes. Well said. But there is more in play here than I have been able to say so far. 
The insights that Edwin Jaynes had into probability and inference are truly awesome, and we have barely touched on them as yet.

The first problem is that these curves not only aren't normal, they're not lognormal either. They don't obey the central limit theorem at all. The mean value changes as the set grows larger. That's one of the truly frustrating aspects common to all these obscure ad hoc laws: they can't be dealt with using standard statistical methods. If you take a look at Nassim Nicholas Taleb's books, such as Fooled by Randomness or The Black Swan, you'll see example after example of what I mean. They are orphan curves precisely because they don't fit the standard model. They have been left to languish in dark corners because in dealing with them one cannot assign a z-score, or arrive at a stable mean, or do any of the things that universities train us to do.

Jaynes came up with arguments that make such curves more tractable. The principle of maximum entropy reframes the whole question of independence, and the applicability of the central limit theorem. It's very subtle stuff. I don't want to go off in all directions here, so I will simply say that for now. Independence in the sense used by classical probability remains on the table as an issue.

Incidentally, Jaynes found the quantum-mechanical notion of coordination traveling backward in time bogus. He was a lifelong skeptic regarding the Copenhagen Interpretation and the obscurantism associated with it. I think Objectivists would find his complaints in that area very welcome and insightful. Ultimately he believed that the principle of maximum entropy could be used to reconcile quantum phenomena with everyday phenomena, to treat them all according to a single set of rules. That is surely an exciting prospect, and I have been strongly encouraged by my own work to believe Jaynes was right. 
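What "the mean value changes as the set grows larger" looks like in practice can be shown without any randomness at all, using the quantile function of a Pareto distribution with tail exponent 0.8 (below 1, so the theoretical mean is infinite). The exponent is arbitrary, chosen only to make the effect visible:

```python
def pareto_quantile(u, alpha=0.8):
    """Inverse CDF of a Pareto distribution with minimum 1 and tail exponent alpha."""
    return (1.0 - u) ** (-1.0 / alpha)

def grid_mean(n, alpha=0.8):
    """Mean of n evenly spaced quantiles -- a noise-free stand-in for a sample mean of size n."""
    return sum(pareto_quantile(i / (n + 1), alpha) for i in range(1, n + 1)) / n

# For a normal or exponential variable this would settle down as n grows.
# Here it keeps climbing: the "mean" reflects nothing but how far into the
# tail the sample has reached so far.
for n in (100, 1_000, 10_000):
    print(n, round(grid_mean(n), 1))
```

z-scores and confidence intervals built on a stable mean have nothing to grip here, which is exactly why such curves get left in dark corners.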
To put it very simply, for those readers who aren't necessarily familiar with statistics, or some of these philosophy of science issues: I think these curves seem spooky because we have the wrong idea about how probability works. I'm not arguing against free will. I'm arguing for a more robust view of probability that will help us understand why these curves are "normal". Along the way, we will also develop a more subtle understanding of just what it is we are free to do.