Objectivism Online Forum

Several outrageous propositions

Math Guy

Hello everyone,

I have a series of implausible and shocking empirical discoveries that I would like to share with this forum. They will be published next spring in my book The Decline Effect: The law behind diminishing returns and wildly varying outcomes in markets, politics, culture, religion, disease, and war. Before committing myself to a final edit, I would very much like to have a discussion of their philosophical implications with a group of Objectivists.

My work builds on a theoretical foundation of probability and inference using the principle of maximum entropy, laid down by Edwin Jaynes starting in 1957. Essentially what I have found is that a huge range of phenomena all obey the same simple statistical rules -- NOT the normal or bell-shaped curve, but a power-law curve with very different implications.

My findings raise questions about the nature of markets, the power and spread of ideas, the evolution of political and religious institutions, even the nature of free will. I'd like to present a summary of several key discoveries and see what people say about them.

Given the posted rules of this forum, I need to offer some information about my background and motivations. I was involved in the Objectivist movement for many years, having for example attended "Debate 84" in Toronto between Leonard Peikoff, John Ridpath, and a pair of socialists. I wrote for several Objectivist publications in the 1990's, and attended discussion groups in Kingston, Toronto, Seattle, Vancouver, and Victoria. I was also very active in the AOL Objectivist forum 15 years ago.

However, I gradually stopped participating in Objectivist activities, in part due to lack of time, in part because the arguments in 2004 were not very different from 1984, and in part because my research findings raised awkward implications. I would describe my discoveries to Objectivist friends, and the reaction would often be a mix of skepticism (that I was just wrong) and suspicion (that I was somehow being immoral in even pursuing the question). There were plenty of people who found my arguments interesting and were prepared to listen, but as it turned out, none from the "closed" school of Objectivism.

Although we have not corresponded in several years, I remain on friendly terms with David Kelley. I am on even friendlier terms with Chris Sciabarra, in part because some years ago, I wrote a review of The Facets of Ayn Rand for the Journal of Ayn Rand Studies. I hope, sometime after the book is out, to draft an essay on the implications of my findings for Objectivism. Chris Sciabarra has expressed interest in having such an essay appear in JARS.

Let me stress that I am not trying to re-ignite the "open" versus "closed" debate. Twenty years of such debates are more than sufficient for me. What I want is for a group of intelligent adherents to the "closed" view to look at my findings. My hope is that in the course of the discussion, I may hear arguments that shed new light on the problem, a perspective that my "open" friends are not able to supply.

Is this something that would interest people here? Should I proceed?


I'm interested, if I can follow, but as far as a proper review, shouldn't you seek that out among Mathematicians, specifically Statisticians, rather than in philosophy circles?

The ability to review your book would be quite a tall order. If you represent the frequency of people, by level of ability, on one of those power law graphs, you'll find that their number would be exponentially decreasing, as the level of ability goes up, and by the time you reach the heights of someone able to recognize such a significant new discovery in Statistics (as you seem to claim it is, I don't know enough to even know for sure if you are or not), the number of people will be very tiny. :) What are the odds of one being among this small a group, brought together by a very different interest?


I'm interested, if I can follow, but as far as a proper review, shouldn't you seek that out among Mathematicians, specifically Statisticians, rather than in philosophy circles?

The ability to review your book would be quite a tall order. If you represent the frequency of people, by level of ability, on one of those power law graphs, you'll find that their number would be exponentially decreasing, as the level of ability goes up, and by the time you reach the heights of someone able to recognize such a significant new discovery in Statistics (as you seem to claim it is, I don't know enough to even know for sure if you are or not), the number of people will be very tiny. :) What are the odds of one being among this small a group, brought together by a very different interest?

Thank you Jake (and thank you to everyone else who replied).

Your question is a very good one. I've had some very fruitful discussions with professional mathematicians over the years, in fact some of the inspiration for the work came from editing another man's book on a related subject. I suppose technically I am myself a professional mathematician, although I don't work at a university. I have talked with other professionals as well about specific applications. I haven't lacked for people to talk to.

I haven't tried to put my work through the usual peer review process for several reasons. Leaving aside all the standard complaints about academia, something like this, which sprawls across so many disciplines, would not be suitable for formal review. I am convinced the way to present it is to put all the arguments and all the footnotes in one place (675 pages, 220 graphs and illustrations), and just let laymen and scientists alike make up their own minds.

However, this group does have at least two unique qualifications that I am interested in. First, I take it as given that you are all engaged on the question of philosophical fundamentals -- how we know what we know, how do we define concepts, what is free will, and so on. That level of engagement is rare, even among academics.

Second, the book was inspired in part by Rand's theory of integration, and the critique of modern science that she and Leonard Peikoff put forward decades ago. Peikoff referred in The Ominous Parallels to science being divided into two camps, those who generated theories without any facts to support them, and those who accumulated facts but neglected to explain them by means of coherent theories. Rand of course has said similar things on a number of occasions.

This is precisely the situation I found with regard to power-law distributions. Rand's philosophy of science, at least on this point, has proved far superior to that of Popper, or Kuhn, or Feyerabend (to name a few). Science has gradually accumulated a mountain of discrete, ad hoc "laws," each one a power-law curve, each describing some very specific empirical relationship. They are not merely present, but central to current thinking, in biology, economics, military history, criminology, and so on. Somehow, no one has noticed (or bothered to do anything about) this mass of similar yet unintegrated facts. All these laws, I contend, are actually expressions of one basic principle:

— Pareto’s law of elite incomes

— Zipf’s law of word frequencies

— Lotka’s law of scientific publications

— Kleiber’s law of metabolic rates

— the Clausewitz-Dupuy law of combat friction

— Moore’s law of computing costs

— the ‘paradox of higher demand’ in religion

— the Wright-Henderson cost law

— Weibull’s law of electronics failures

— the Flynn Effect in IQ scores

— Benford’s law of digit frequencies

— Farr’s law of epidemics

— Hubbell’s neutral theory of biodiversity

— Bass’s law of innovation diffusion

— Rogers’ law of innovation classes

— Wilson’s law of island biogeography

— Smeed’s law of traffic fatalities

— the Cobb-Douglas factor elasticity curve

At the same time, for the past 50 years, people working in these fields have for the most part ignored or under-used the maximum-entropy framework laid down by Jaynes, an idea with phenomenal potential and sweeping implications. The theory needed to explain their "orphan" facts has been lying in plain sight for decades.

What I found, over 20 years of investigation, was that integration was in fact possible, and more than that, necessary. Rather than dozens of discrete "floating" laws, each unrelated to the next, we actually have a small number of standard curves, which can be constructed using an even smaller number of Jaynesian postulates. And of course, having established a new "universal," I could then explore numerous applications for which no ad hoc rules had even been proposed.

Some of these curves are very weird and disturbing. At least, I find them so, and so do all the people I have shown them to, Objectivists or not. But the method by which I arrived at these findings owes a great deal to Rand, and so I think it would be particularly interesting to see what you all make of them. I've already submitted a draft of my book to my editor, and in the next few months I will need to commit to a final, authoritative edit. Perhaps something will come out of this discussion worth including. If not, I can at least get a better sense of how audiences respond to my arguments.

I'll present my first case in my next post.


This first example concerns the life (and death) of political regimes -- starting with hereditary monarchies. This, by the way, is a pattern I discovered on my own, not one of the ad hoc laws named for esteemed but long-dead scientists above.

We'll start with a really elementary question. Suppose we were to pull out a reference book and choose some random dynasty out of history -- the Tang dynasty, who ruled China from 618 AD to 907 AD, or the Valentinian dynasty that ruled Byzantium in the 4th century. Each ruler managed to stay in power for some span of years, before dying, passing on the throne to an heir, or being overthrown in some fashion. Suppose we render all those spans as integers, rounding to the nearest year, and look at them in sequence, like this:

37 12 9 9 8 10 10 11 4 15 11 8 15 10

18 5 8 5 1 14 3 1 3 1 10 6 9 13 8 1

2 3 22 1 15 14 18 15 2 16 ...

This particular sequence, by the way, is the first 40 Popes of the Catholic Church.

Now, the question I want to ask is: If out of N rulers, I took an average of the first N/2, and compared it to the average for the last N/2, what would I see? There are three possibilities:

-- Rising trend (rulers stay in power longer)

-- Level trend (rulers experience no change in stability)

-- Falling trend (rulers are deposed or killed sooner)

The standard assumption in classical statistics is the "indifference principle," which says basically that if we don't know anything specific about the situation, we should expect to see no difference between two sides of an arbitrarily chosen line. We expect a level trend, a stationary or unchanging mean. Later rulers in a dynasty should hold power as long as early ones.

In Jaynes' system we replace this assumption with a general rule: in any large, complex system composed of many moving parts, entropy will increase. What this rule means will vary from case to case, but here it is simple to apply. Since long spans in power are rare, and short spans are more common, for entropy to increase requires that the rare item (long spans) become rarer. The average span in office should therefore decline.

In the example above, it does decline, by about 20 percent. Later Popes varied enormously in their terms of office, from just 1 year to 22 years. But overall they were in office for a shorter time.
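
For readers who want to check this kind of arithmetic themselves, here is a minimal Python sketch of the split-halves comparison, run on the reign lengths quoted above. The exact percentage depends on how the reigns are rounded and where the list is cut, so treat it as an illustration of the method rather than the book's tabulation.

    # A quick check of the split-halves comparison, using the reign lengths above.
    reigns = [37, 12, 9, 9, 8, 10, 10, 11, 4, 15, 11, 8, 15, 10,
              18, 5, 8, 5, 1, 14, 3, 1, 3, 1, 10, 6, 9, 13, 8, 1,
              2, 3, 22, 1, 15, 14, 18, 15, 2, 16]

    half = len(reigns) // 2
    first_mean = sum(reigns[:half]) / half
    second_mean = sum(reigns[half:]) / (len(reigns) - half)

    print(first_mean, second_mean)                    # the later half comes out lower
    print((first_mean - second_mean) / first_mean)    # fractional decline for this list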

The same is true -- much to my surprise, and yours too, I imagine -- for virtually every dynasty I have examined. Some decline only a little (say 5 percent). Others decline steeply (50 percent or more). The mean decline is about 20 percent. The longer the dynasty, the more likely it is to converge on the 20 percent figure.

The form of the decline is scale invariant, meaning that if we compute cumulative decline for each of the N rulers, we get a power-law curve depending on N. By the time we get to the 10th ruler, the average time in office is half that of the first ruler. By the time we get to the 100th ruler, the average time has fallen to one-quarter that of the first. And so on.

This pattern, taken by itself, would not excite much comment. It is strange, anomalous, intriguing, but it hardly suggests that the whole framework of classical probability needs to be rethought. However, we are just getting started. . . .


This pattern holds for Japanese emperors, Roman emperors, Pharaohs, Popes, Chinese emperors, and on and on. It holds even for very short dynasties. If a dynasty only lasted long enough to make one handover of power -- from one king to his son -- nonetheless the son's reign has tended to be shorter than the father's.

I am sure some of you will be tempted, as I was, to hypothesize that this has something to do with limited human lifespans, and the vigor of youth, intergenerational dynamics, and the inherent limitations of hereditary rule. We might call it the "King Lear" effect. A young, ambitious man seizes power and founds a dynasty. During his lifetime he reforms the law, expands the territory of the kingdom, wins new allies, and so on, and so on. But being self-confident and ambitious, he is reluctant even in old age to hand over control. He delays until the last possible moment, in most cases until he actually dies. This means that he is succeeded by his adult son, or perhaps several sons, who have spent a lifetime waiting in their father's shadow. Perhaps there is conflict. Perhaps external enemies sense weakness once the old man is gone. In any case, the fortunes of the kingdom go downhill after the promising beginning, not for reasons of probability, but because of human nature and the limitations of hereditary rule.

This is a good theory, but ultimately wrong. It is relatively easy to test. In a number of countries, one dynasty has followed another for thousands of years. Egypt, China, Rome, Byzantium, and other great states outlasted even their strongest dynasties by an order of magnitude. As many as 30 dynasties have succeeded one another in governing the same geographic region. So we can perform the same analysis as we did for the individual rulers, but on a wider scale.

We find that here too, there is decline. Later dynasties do not last as long as the early ones. The shape of the curve is the same; the average decline of 20 percent from the first half of the list to the last half is again evident.

Obviously, as these dynasties tend to last much longer than a human lifetime, and the states they govern last thousands of years, any sort of simple father-son effect cannot possibly be responsible. This is an impasse I encountered again and again in my studies: we start with a strange power-law phenomenon, which we explain using some sort of local, anecdotal cause. Then we find a very similar power law operating nearby for which our anecdotal explanation is useless. Eventually, given enough examples, we are forced to the conclusion that this is a very broad, universal rule, one that transcends specific causes. It has, if you will, a meta-cause or mathematical cause. It is a rule relating to randomness as such rather than a rule relating to fathers and sons, or political dynasties.

The next stage is even stranger. I will be back later tonight with another post.


The same is true -- much to my surprise, and yours too, I imagine -- for virtually every dynasty I have examined.
Why is this surprising? There is no reason to expect things to stay the same just because we're ignorant about them.

Before you get too deep into this, I want to point out a problem, at the very least in your way of expressing your project. So we have an ordered list of royal spans, divide the list in two, get the mean of the first and second half -- call those respectively I and J. Then either I=J, I<J or I>J. You cannot see a "rising / level / falling trend" given this method. The notion of a "trend" requires doing something more than just averaging together two sets of numbers.

Now, you have not defined a "large, complex system" nor have you proven empirically that this notion is applicable to the rule of emperors or popes; or anything else. And finally, I haven't seen the evidence that the concept of "entropy" has any relevance to how long a ruler, pope or clock lasts.

The interesting result would, of course, be if you could empirically establish a trend that "things don't last as long, over time". So perhaps you could focus on that one point: how do you intend to establish this (if that's your claim)? We need to look at your data.


Before you get too deep into this, I want to point out a problem, at the very least in your way of expressing your project. So we have an ordered list of royal spans, divide the list in two, get the mean of the first and second half -- call those respectively I and J. Then either I=J, I<J or I>J. You cannot see a "rising / level / falling trend" given this method. The notion of a "trend" requires doing something more than just averaging together two sets of numbers.

Now, you have not defined a "large, complex system" nor have you proven empirically that this notion is applicable to the rule of emperors or popes; or anything else. And finally, I haven't seen the evidence that the concept of "entropy" has any relevance to how long a ruler, pope or clock lasts.

Yes, agreed to all of this. Not trying to pull a fast one. I like to use this example as a quick and easy introduction, to lay out the basic idea without dwelling too much on technical issues. Once I've established the basic idea -- given you a suitably prepared concrete to contemplate -- then I can go back and do some of this other work.

The point you raise about splitting the list into two is good. Let me express that premise a little more precisely.

Take a set of n ordered measurements M1 through Mn.

The cumulative average of M1 through Mi for any i < n will tend to fall in proportion to i ^ a, where a is an exponent between 0 and -1. Over many different sets of M, the exponent a clusters around -0.30576.

Thus if i=1, 1^-0.30576 = 1, and if i=2, 2^-0.30576 = 0.809. If i=10, 10^-0.30576 = 0.495.

The fact that the second half of the set is smaller is a derived property. Putting the relationship that way makes it easy for non-mathematicians to follow, and splitting the set exactly in half illustrates the indifference principle quite neatly. But the underlying power-law relationship is better expressed by the formula above.
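
As a sketch of how one would estimate the exponent from raw data (this is illustrative Python, not the book's actual fitting procedure), compute the cumulative averages and fit a straight line to their log-log values:

    import numpy as np

    # Illustrative only: estimate the exponent a in  cum_avg(i) ~ i^a
    # by least squares on the log-log values.
    def decline_exponent(measurements):
        m = np.asarray(measurements, dtype=float)
        i = np.arange(1, len(m) + 1)
        cum_avg = np.cumsum(m) / i                  # cumulative average of M1..Mi
        slope, intercept = np.polyfit(np.log(i), np.log(cum_avg), 1)
        return slope                                # the fitted exponent a

    # e.g. decline_exponent(reigns) on the papal list quoted earlier,
    # to compare against the cluster value of roughly -0.306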

What is a "large, complex system"? For the moment, I want to leave this as an implicit, partly ostensive definition. If I had to break it down at this point, I would say it is a system composed of discrete elements that can each take on a range of possible values in random fashion. But I don't think that really conveys the idea fully. In the book I give hundreds of examples of such systems, so the reader can get a good range of concretes fixed in his mind before trying to understand what systems will or won't obey the decline law. This is an inductive argument, and so I'd like a little leeway in laying out the evidence before attempting the actual induction.

How is the concept of entropy applicable outside of thermodynamics? To be clear, in this area I claim relatively little originality. I lean heavily on proofs established by Jaynes himself, starting 50 years ago. I also cite a huge range of applied work that has been done, particularly in the last decade. Maximum entropy methods have been used already in a variety of fields, outside of thermodynamics -- for example, distribution of wave heights in a disturbed body of water. There's an economics textbook, Maximum Entropy Econometrics: Robust Estimation with Limited Data, and work in species abundance in biology, and a variety of other niche applications. So the necessary arguments and mathematical formalisms are out there for use of entropy in diverse fields. It's not Boltzmann entropy, it's information entropy, it relates to what we are able to know about the behavior of a given system.
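
Since Jaynes' papers are hard to come by, here is a small self-contained illustration of the method itself -- his well-known dice example, a standard textbook case rather than something from my book. Given only a constraint on the long-run mean of a die (say 4.5 instead of the fair 3.5), maximizing information entropy subject to that constraint yields probabilities of the form p_i proportional to exp(-lam * i):

    import numpy as np
    from scipy.optimize import brentq

    faces = np.arange(1, 7)
    target_mean = 4.5              # the assumed constraint, for illustration

    def mean_given(lam):
        w = np.exp(-lam * faces)   # maximum-entropy weights under a mean constraint
        return (faces * w).sum() / w.sum()

    lam = brentq(lambda L: mean_given(L) - target_mean, -10.0, 10.0)
    p = np.exp(-lam * faces)
    p /= p.sum()
    print(np.round(p, 3))          # probabilities tilted toward the higher faces
    print((faces * p).sum())       # reproduces the constrained mean, 4.5

With no constraint other than the six possible faces, the same maximization collapses to the uniform distribution.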

Unfortunately the Wikipedia article on Jaynes is terribly short and uninformative, his essays are hard to find, and his textbook (published posthumously) is kind of daunting to laymen. There aren't good comprehensive sources for laymen on the power and applicability of the principle of maximum entropy. That's a major symptom of the problem. No one thinks it is important to explain in a broad, accessible way.

I address the history of the concept of entropy, and Jaynes' key innovations, in several chapters in the book. I will delve into those issues here as well, once I have accomplished some other things.

What I am doing is putting all those applications, and some more of my own invention, in one volume, written for the layman and/or scientists who want to understand how the principle is applied outside their own fields. I'm trying very hard NOT to bury the reader in equations, but to tell the story with words and graphs and simple, tangible examples. The equations do exist, but an over-emphasis on mathematical formalisms too early in the discussion tends to leave most people behind.

The interesting result would, of course, be if you could empirically establish a trend that "things don't last as long, over time". So perhaps you could focus on that one point: how do you intend to establish this (if that's your claim)? We need to look at your data.

Yes, let's focus on that for the moment. Next weird development coming up in my next post.

EDIT: I originally wrote the cumulative average would "rise" instead of "fall". The cumulative total rises, the average falls.


All right. Now we come to the weirdest aspect of the hereditary monarchies data set.

We have observed two kinds of decline so far:

-- Within a given dynasty, later rulers have shorter terms, and the cumulative average for all rulers in the set declines according to a power law with exponent roughly -0.30576.

-- Within the set of dynasties for a particular country, later dynasties are also shorter, and the cumulative average again declines according to the same power law.

This is, if you will, a fractal sort of relationship. We examine history on a "micro" level, looking at one family ruling one kingdom, and where we might expect to find bland uniformity, we find a very emphatic downward trend. Then we examine history on a more "macro" level and we see the same trend. We zoom out to a wider scale but find the same shape of curve.

What will happen when we zoom out yet again? If we simply take all dynasties, regardless of what country they pertain to, and assemble them in chronological order, what will we see?

Common sense cries out that what we should see here is a resumption of the indifference principle. Very well, within a country, there is decline. That is unexpected but not alarming. It suggests there is something at work on a local level that causes the later items to be influenced by the earlier ones. But now we are relating dynasties that have no historical, geographical, or logical relationship to one another. What does the Tang dynasty in 7th century China have to do with the House of Savoy in 1705? Why on Earth would measurements of one have any sort of specific relationship to measurements of the other?

Yet they do. If we assemble all the dynasties known to history, in order, we see a very clear progression in their values. Later dynasties (wherever they are on the surface of the Earth) are shorter, with the cumulative average once again approximating a power law based on their rank order. By the time we get to the i-th dynasty, wherever it might be, the cumulative average has fallen to approximately i^-0.30576 of its starting value.

Here is a breakdown of the data, showing how many dynasties I found in each period of history and their average duration in years.

Period          Dynasties   Avg duration (years)

3050-2500 BCE        6           637
2499-2000 BCE       10           507
1999-1500 BCE       14           304
1499-1000 BCE        8           216
 999-500 BCE        16           137
 499-1 BCE          26           189
CE 1-500            27           297
CE 501-1000         61           184
CE 1001-1500       109           132
CE 1501-1800        73           107
CE 1801-2008        38            52

You can see that on a detail level, the data are not absolutely uniform. The trend plunges downward steeply in the years approaching 500 BCE, then rises to a local peak in the early centuries AD, before plunging down again. But taking the cumulative average, we see a curve very closely approximating the previous two. We have zoomed out to an even more colossal scale, and the shape of the curve is the same yet again. History . . . is fractal!
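
Purely as an illustration of the computation, here is the count-weighted running average of the binned figures above, in Python; the fit in the book is done on the full rank-ordered list of dynasties rather than on these coarse bins.

    # Count-weighted running average of the binned dynasty data above.
    bins = [  # (period, number of dynasties, average duration in years)
        ("3050-2500 BCE",   6, 637), ("2499-2000 BCE",  10, 507),
        ("1999-1500 BCE",  14, 304), ("1499-1000 BCE",   8, 216),
        ("999-500 BCE",    16, 137), ("499-1 BCE",      26, 189),
        ("CE 1-500",       27, 297), ("CE 501-1000",    61, 184),
        ("CE 1001-1500",  109, 132), ("CE 1501-1800",   73, 107),
        ("CE 1801-2008",   38,  52),
    ]

    years = count = 0
    for period, n, avg in bins:
        years += n * avg
        count += n
        print(f"{period:>15}  cumulative mean = {years / count:6.1f} years")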

There are plenty of issues that can be raised here about the quality of data. Do we know if the early kings referred to in documents from 4,000 years ago actually existed? Shall we trust the inferences of archeologists about the Pharaohs? Perhaps in those distant times there are empires completely unknown to us, that lived much shorter lives and died without records.

But even if we discarded the data prior to 500 BCE as being untrustworthy, and took only those dynasties that were established since Aristotle, or since Christ, we'd still have the same basic result. Decline pervades the data set. It cannot be purged by removing a few unusual items, or postulating a few missing ones.

What this suggests is that long spans of history are tied together by a mathematical rule. If staying in power represents success, then throughout human history, hereditary dynasties have been inexorably growing less and less successful, and more and more unstable. The presence of the rule on the other levels suggests that on this level as well, the trend has real predictive and causal meaning. We don't know yet WHY this rule operates, but it appears to operate pervasively: within dynasties, between dynasties, and across continents, oceans, and millennia of time.

Now consider the implications for our present system of democracy. This very simple rule effectively dictates that hereditary monarchy HAD to collapse as a social institution, and moreover that it wound up being replaced at the precise time that it did, because by the 1800's, all the new dynasties that were being established were short-lived, lasting on average only 52 years, or barely one human lifetime. Once the older dynasties collapsed in WW I and WW II, they were not replaced, and dynastic rule came to an end.

The trend in the global data set implies that hereditary monarchy, as an institution, had built-in obsolescence. Each dynasty grew increasingly frail. Each country ruled by dynasties also grew more unstable. And the entire global system of dynastic rule was bound to hit bottom one day.

It is, eerily, a forecast for the future of humanity. Had there been a sharp mathematician in Aristotle's time, or in the early Roman Empire, who had access to lists of kings, he could have worked out roughly when and how the entire system of hereditary rule would cease to operate -- literally thousands of years in advance. The rule operates, at least as far as we can see so far, in complete indifference to the ideas that the rulers had, or their subjects had. It is an impersonal rule, an inhuman rule, that takes no special notice of philosophy or language or religion.

Now, I don't want to force an interpretation on the reader. I don't expect you to go running screaming into the street, as if I have overturned all your beliefs about free will and the role of ideas in history in one fell swoop. At this early stage I simply want to ask: What does this make you think of?

It makes me think of Hegel. More about that tomorrow.


So what are the mathematical parameters? How do you determine that this scale independent fractal pattern actually results in a power law? I am studying math and this is interesting to me.

Most important though is context. You can't just say EVERY fundamental aspect of nature has this law. There needs to be a set of assumptions for which your theory is true. The normal distribution is created by taking the limit of the sampling distribution of events which follow the binomial distribution as the sample size approaches infinity. (Okay, that example may not be a good one, but it's based on my current understanding. Correct me if I'm wrong.)

For example, say you have a set of random forces. To which constraints must those forces be subject before things start to follow your distribution? For example, in gambling, there is a distribution which results from certain conditions. Everybody starts with the same amount of money. They each then bet in proportion to their wealth against another who is also able to bet the same amount, they flip a coin, and depending on which is heads and which is tails, one wins and the other loses. Eventually the process is repeated for a set number of iterations. That would create a certain kind of distribution. Kind of like how a random walk approaches the normal distribution through a limit over time and for certain conditions.
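
Just to make that concrete, a minimal simulation of the betting process I'm describing might look like this (the player count, stake fraction, and number of rounds are all made up):

    import random

    random.seed(0)
    players = [100.0] * 100                # everybody starts with the same amount

    for _ in range(10_000):
        a, b = random.sample(range(len(players)), 2)
        stake = 0.10 * min(players[a], players[b])   # both can cover the same bet
        if random.random() < 0.5:
            players[a] += stake; players[b] -= stake
        else:
            players[a] -= stake; players[b] += stake

    players.sort(reverse=True)
    print([round(w) for w in players[:10]])   # inspect the top of the resulting distribution
    print(round(sum(players)))                # total wealth is conserved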

So what's the verdict? Do you get what I'm saying? I know I talked a lot, and might be babbling a bit, but we're all guilty of doing it at one time or another. :)


What does this make you think of?

It made me think about the common elements: humans, and agrarian societies. Some accumulating factor such as knowledge or cultural practices or economics of warfare, or parallel selection pressures on human nature, makes monarchy more difficult until it is impossible.

It also made me think of The Great Wave: Price Revolutions and the Rhythm of History by David Hackett Fischer. The cyclic aspect is lacking in monarchial decline, but said price revolutions are non-cyclically coming faster.


thanks for making this post, it's very interesting, exciting even

i only have a basic knowledge of the field of statistics, but i'll let you know what i think, for what it's worth

the social entropy idea, and the resulting dynasty trend, they make sense to me

if the complexity of the system increases, ie. if the number of actors increases, if the human population increases, then

- the dominance of any particular people/dynasty/institution will be lessened through increased competition

no?

especially if the dominance arose through monopolization/hogging of resources which other people want


It made me think about the common elements: humans, and agrarian societies. Some accumulating factor such as knowledge or cultural practices or economics of warfare, or parallel selection pressures on human nature, makes monarchy more difficult until it is impossible.

It also made me think of The Great Wave: Price Revolutions and the Rhythm of History by David Hackett Fischer. The cyclic aspect is lacking in monarchial decline, but said price revolutions are non-cyclically coming faster.

I've read Fischer, he did a very thorough and scholarly job. But I have to disagree that the cyclic aspect is lacking. What you are seeing in this "fractal" decline pattern is cycles within cycles. Decline on multiple levels creates a cyclical, sawtooth-type decline pattern.

Say a monarch establishes a new dynasty, and rules for 30 years. His son then rules for 20, and the fortunes of the kingdom go up and down for several generations before finally trailing off in the 4-year reign of the infant great-great-grandson, and conquest by an upstart baron.

The rise of the rebel baron to the throne represents the start of a new cycle. The Chinese were familiar with this kind of cycle, they called it "the Mandate of Heaven". At the beginning of a new dynasty there is not only the promise of reform, but quite often actual reform. The barbarians and bandits are driven off for a time, new roads and bridges are built, commerce revives, the arts and sciences receive fresh patronage. Then, gradually, the new court becomes as corrupt, cynical, and ineffectual as the old one was. Successive monarchs make promises they do not keep. The bandits and barbarians creep in. Eventually the kingdom is in total disarray, perhaps broken into pieces, and the dynasty is totally discredited. The Mandate then passes to whoever can keep order and restore confidence. The first reign of the new dynasty is typically much longer than the last reign of the old one.

It sounds rather like Obamamania, doesn't it? No accident. The decline pattern for monarchies has continued into the modern era, for democracies. If we plot party control of the various U.S. state governments, from when they were colonies up to the present, there is a fairly smooth continuity between the monarchical era and the modern democratic era. The English crown was increasingly unstable during the 17th century, and so regimes lasted an average of 50-60 years -- typical of dynasties at that point. In the late 18th century, parties held power over a given state for an average of 10-11 years. In the first half of the 20th century, they averaged 8 years, and in the late 20th century, just 7 years. Control of the presidency has also declined in stability. During the first century or so the average was 20 years; now it too is down to 8 years.

The different levels of decline make it difficult to adopt one single explanation. We can explain the long sweep of declining stability using such fundamental factors as agriculture, but then how do those same factors reverse themselves when a new dynasty is established?

I have considered the argument that the change in stability in monarchies was due to systematic changes in agriculture, population growth, and so on. It is hard to refute because it requires proving a negative. How can we know there wasn't a poorly documented or misunderstood trend taking place down through the years that made monarchy more and more fragile? Perhaps increasing literacy? Or greater wealth, the growth of the middle class? Certainly those things were happening, and populations with greater wealth, knowledge, and numbers tend to demand more say in how they are governed. Given so many plausible possibilities and so little hard data, we might argue perpetually and not reach a final conclusion.

But then as I said, we would still be stuck with the puzzle of why the next dynasty starts out more stable. The shape of the curve is a separate puzzle. Throughout my book, I make a distinction between local contributing causes and what Aristotle might have called the "formal" cause. A local, contributing cause is a reason for the trend to be downward: Monarchies become more unstable because the population becomes richer, smarter, better fed. The "formal" cause is the thing that gives the trend its specific form. So why does the curve have this shape, so similar to Smeed's Law and the Wright-Henderson law and all those others that I listed? Because of the principle of maximum entropy. This shape maximizes the spread of possible results, the uncertainty at any given time about what will happen. This kind of "formal" cause can also be called a meta-cause. It transcends the specific content of the curve, and imposes a universal shape on all sorts of different raw material.

So in effect, I agree with you. Changes in fundamental economy, the accumulation of wealth and the rise of markets, were definitely a factor. This discovery doesn't reverse or overthrow the narrative of history that we all know about, from the bread and circuses of the Roman Empire to Magna Carta to the English Parliament and the French Revolution. What this curve does is reveal a spooky underlying order that has nothing to do with economy as such. To adequately explain the "flow" of history we need to consider both the local, anecdotal causes like agrarian economics, and this mathematical metacause.

I hope that makes sense. I will tackle TuringAI's post when I have time later today.


So in effect, I agree with you. Changes in fundamental economy, the accumulation of wealth and the rise of markets, were definitely a factor. This discovery doesn't reverse or overthrow the narrative of history that we all know about, from the bread and circuses of the Roman Empire to Magna Carta to the English Parliament and the French Revolution. What this curve does is reveal a spooky underlying order that has nothing to do with economy as such. To adequately explain the "flow" of history we need to consider both the local, anecdotal causes like agrarian economics, and this mathematical metacause.

It doesn't sound like much more than fancy astrology or numerology, at least in the places where you discuss numbers. In particular, placing special significance on certain numbers for no particular reason other than "it fits together!". All people make decisions based on what they know (even if what they know is simply that sacrificing 1 person every year leads to appeasement of the gods). So anything that occurs in history will be based entirely upon what people do or think. Any corresponding numbers would at best be coincidental.


It doesn't sound like much more than fancy astrology or numerology,

That is hasty, to say the least. He is still introducing his topic.

So far it appears that the book project may be compared to The Golden Ratio: The Story of Phi, the World's Most Astonishing Number. Phi is not numerology; it is geometry. Hopefully there is an analogous theoretical reason for the Math Guy's new discovery. Edit: I think that is where the maximum entropy reference was leading; we'll see.


So what are the mathematical parameters? How do you determine that this scale independent fractal pattern actually results in a power law? I am studying math and this is interesting to me.

The key is that the power law applies both to sets and subsets. Suppose I have a country like China with 20-30 dynasties, each dynasty having some number of rulers. Each dynasty constitutes a distinct subset and has its local decline, according to the power law. Then the set of dynasties has a similar decline, with the individual rulers in each dynasty now subsumed and the dynasties being treated as individual elements. Because the curves on both scales tend toward the same slope, the overall result is fractal. It looks the same as we "zoom" in or out.

A given ruler is subject to several overlapping constraints. His term, on average, has to be X amount lower than the previous ruler in the dynasty. It also has to be low enough that his dynasty length is shorter than the previous dynasty. And his term is also affected, in a distant way, by the diminishing stability of monarchy across all of history. Think of each set as imposing one constraint, with the result being a series of simultaneous equations.
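
A toy version of the two levels, with invented numbers just to show the bookkeeping (not real dynastic data): compute the cumulative-average decline for the rulers inside each dynasty, then treat each dynasty's total duration as a single element and repeat the computation at the higher level.

    # Invented reign lengths, grouped by dynasty, purely to show the two levels.
    dynasties = {
        "Dynasty A": [34, 21, 18, 12, 9, 7],
        "Dynasty B": [28, 16, 11, 8, 5],
        "Dynasty C": [19, 12, 7, 4],
    }

    def cumulative_averages(values):
        out, total = [], 0
        for i, v in enumerate(values, start=1):
            total += v
            out.append(total / i)
        return out

    for name, reigns in dynasties.items():            # decline within each dynasty
        print(name, [round(x, 1) for x in cumulative_averages(reigns)])

    durations = [sum(r) for r in dynasties.values()]  # each dynasty as one element
    print("Dynasty durations:", [round(x, 1) for x in cumulative_averages(durations)])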

Most important though is context. You can't just say EVERY fundamental aspect of nature has this law. There needs to be a set of assumptions for which your theory is true. The normal distribution is created by taking the limit of the sampling distribution of events which follow the binomial distribution as the sample size approaches infinity. (Okay, that example may not be a good one, but it's based on my current understanding. Correct me if I'm wrong.)

No, quite correct. I'm not saying every aspect of nature follows a power law. What I'm saying (and Jaynes is saying) is more subtle. Entropy always increases. In many situations the increase in entropy results in a unique distribution of measurements that follows from the special conditions prevailing at the time. But in a surprisingly large range of situations it results in these power law curves.

For example, consider a 6-sided die that is uniform and unweighted. Throw it a few thousand times and it will settle into a uniform distribution with equal likelihood of '1' through '6' coming up. The range of outcomes does not contain rarer and less rare items, and so no decline effect is possible. In the long run, you get equilibrium. Jaynes observed that this actually IS the maximum entropy distribution, it maximizes entropy given the range of possible outcomes. We don't think of it as a maximum entropy distribution, we just think of it as being dictated by classical statistics. But it happens that the two methods of forecasting the distribution of outcomes agree in this case.
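
To check that claim numerically (this is just an illustration, with an arbitrary loaded die for comparison), the uniform distribution does carry the largest information entropy over six outcomes:

    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    uniform = [1/6] * 6
    loaded  = [0.30, 0.25, 0.15, 0.12, 0.10, 0.08]   # an arbitrary weighted die
    print(entropy(uniform))   # ln 6, about 1.79 -- the maximum for six outcomes
    print(entropy(loaded))    # strictly smaller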

On the other hand, consider traffic accidents. Sixty years ago, a researcher named Smeed observed that countries with relatively small numbers of cars had higher per-capita fatal accident rates. Plotting the data for dozens of countries revealed a power law curve. If you doubled the number of cars on the road, the number of fatal accidents only rose by about 62 percent.

Traffic accidents by any standard of measurement are rare events. Increasing entropy means that they get rarer. Thus if you have a larger pool of cars, more and more of them in percentage terms manage to avoid fatal collisions. Despite sixty years of theorizing by traffic analysts, no one knows why Smeed's Law works. My proposal is that it falls in this large category of things that all behave the same way, due to entropy.
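
For the arithmetic-minded: if the per-car rate falls with the same exponent as before (about -0.306 -- that is my reading of the fit, not something Smeed stated), then total fatalities scale as n^(1 + a), and doubling n gives the 62 percent figure:

    a = -0.30576
    print(2 ** (1 + a))   # about 1.62, i.e. roughly a 62 percent rise when cars double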

For example, say you have a set of random forces. To which constraints must those forces be subject before things start to follow your distribution? For example, in gambling, there is a distribution which results from certain conditions. Everybody starts with the same amount of money. They each then bet in proportion to their wealth against another who is also able to bet the same amount, they flip a coin, and depending on which is heads and which is tails, one wins and the other loses. Eventually the process is repeated for a set number of iterations. That would create a certain kind of distribution. Kind of like how a random walk approaches the normal distribution through a limit over time and for certain conditions.

So what's the verdict? Do you get what I'm saying? I know I talked a lot, and might be babbling a bit, but we're all guilty of doing it at one time or another. :thumbsup:

I think I get it. I would put it this way. Entropy applies to everything -- so says Jaynes, at least. However, entropy doesn't dictate a single uniform distribution for everything. It operates contextually, as we saw with the six-sided die. These particular power laws apply to a very large range of phenomena, much larger than anyone presently suspects, but not to everything. A formal definition of the conditions under which a power law is sure to apply is going to be a while in coming. At this point I prefer to work by giving a large range of examples rather than trying to establish a formal definition. I have shown that the same power law applies to:

-- cost reduction curves in manufacturing (Wright-Henderson law)

-- output reduction curves in macroeconomics (Cobb-Douglas factor elasticity)

-- diminishing customer loyalty as the market for a product grows

-- diminishing per-capita participation as membership in a web forum grows (more and more people join but then lurk rather than post)

-- diminishing per-capita audience participation for web-based media (comments on YouTube videos)

-- lower per-capita casualty rates for battles involving larger armies

-- lower per-capita casualty rates for bombing raids involving larger numbers of bombs

-- lower per-capita attendance in churches as they grow

-- lower per-capita tithing and donation in churches, again as they grow

-- lower metabolic output per unit mass in single-celled organisms, insects, reptiles, and mammals (Kleiber's Law)

-- slower mass gains in percentage terms for a developing human fetus as it approaches maturity

-- lower per-capita mortality for infectious epidemic diseases as total caseload grows (H1N1, cholera, plague, scarlet fever, Ebola, HIV-AIDS, basically all of them)

-- lower transmission rates for the same diseases, again as total caseload grows

-- voter turnout in large versus small jurisdictions

and so on, and so on. I can list huge numbers of examples, but I am reluctant to try and draw a boundary around them and say definitively, 'Here are the precise conditions required for this behavior to occur.' All the examples above involve large numbers of discrete entities (people, or manufactured goods, or cells in a body) engaged in some probabilistic activity (whether to participate, get hit by a bullet, donate, metabolize, become infected). As the set of items subject to the same probability range grows, the mean for that range shifts. The rare items get rarer.

If you think of the whole universe of activities in which rare items could get rarer, you will see why I resist trying to frame permanent, exhaustive conditions for the decline effect to occur. I've barely started cataloging all the possibilities.


All the examples above involve large numbers of discrete entities (people, or manufactured goods, or cells in a body) engaged in some probabilistic activity (whether to participate, get hit by a bullet, donate, metabolize, become infected). As the set of items subject to the same probability range grows, the mean for that range shifts. The rare items get rarer.

Isn't this fairly intuitive, though? For example, there may be a great many people willing to take the time to sign up for a forum membership, but only a few willing to post (in any given population). Those who are willing to post would reasonably be early adopters, therefore the per capita ratio of posters to members is very high. As the late adopters, those not terribly interested in posting, start joining the ratio of posters to members goes down. I just picked this one example from the several you provided, but I can see the same type of process at work in most of them: the general population is increasing, but the specific actors driving the events under study don't change - the equation gets bottom heavy. Of course, I could be missing something completely fundamental here.
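
To put made-up numbers on that: suppose the pool of people actually willing to post stays around 100 while total membership keeps growing -- the posters-per-member ratio falls even though nothing about the posters themselves changes:

    posters = 100                                   # a roughly fixed pool of active posters
    for members in (200, 500, 1000, 5000, 20000):
        print(members, f"{posters / members:.1%}")  # per-capita participation falls as membership grows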

Very interesting thread, Math Guy. Thanks for posting.

