Objectivism Online Forum

What if every identifiable entity moves at the same speed?



icosahedron


Anyway, one reason that analysis on a continuum is necessary is that without some kind of theoretical assurance of well-posedness, you can't really be sure that your numerics are giving you good answers, or if they are, you can't say given some step size how close they're getting to the real solution.

So you are claiming that real analysis provides THE objective standard for the correctness of computations?

I challenge that. Mathematics is a result of humans thinking, and hence just as prone to vogues and pressures as every other discipline. True, mathematicians as a rule embrace logic and causality, so they are way ahead of many others from the get go; but they are people in society, and the history of mathematics shows how often the accepted patterns of representation have needed a refresh.

Like anything man-made, real analysis didn't have to be the way it is. And I say it floats away from reality a bit, and this leads to awkward, complex formulations that reduce the ability of young minds to come up to speed.

- ico


Like, why can't an intelligent 12-year-old understand the essentials of modern physics, and develop experience with the ideas? Because it's too dang hard to explain with one foot in traditional formulations.

I remember learning Schrodinger's equation and its significance, and then using that formulation to solve problems -- what a pain in the butt!

Then the next semester, the professor rolled out matrix mechanics, and everything became so simple by comparison.
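In the same spirit, here is a sketch (my own illustration, not the course material) of why the matrix formulation feels simpler: represent the harmonic-oscillator Hamiltonian as a finite matrix on a grid, and the spectrum drops out of a single linear-algebra call.

```python
import numpy as np

# Matrix-mechanics flavor: H = -1/2 d^2/dx^2 + 1/2 x^2 (hbar = m = omega = 1)
# becomes a finite matrix on a grid; diagonalizing it gives the energy levels.
# Exact eigenvalues are n + 1/2.
n, L = 1000, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Central-difference second derivative (tridiagonal matrix)
D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2

H = -0.5 * D2 + np.diag(0.5 * x**2)
E = np.linalg.eigvalsh(H)[:3]  # lowest levels, approximately [0.5, 1.5, 2.5]
```

The differential equation is still in there, but the student only sees "build a matrix, diagonalize it" -- which is exactly the pedagogical contrast being described.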

This is typical in traditional schooling at all levels: teach one thing in an old-fashioned way, then show the easy way to do the same thing, or recast the idea in a larger context that requires different assumptions than before. Done for good reason, this is a useful pedagogical device (rarely, IMHO -- just show folk the latest and greatest; like, none of us needs to know how to pilot a horse and buggy). However, it's usually done as a sequence of "Aha"s and "Gotcha"s that leave most students wondering when the conceptual framework they spent so much time learning is going to be pulled out from under them.

If the symbolic representation of ideas is inefficient, then the leverage of those ideas is limited relative to what it could be.

The assumption of continuity of the physical substrate is precisely such an outmoded, inefficient idea -- and the formulation of physics in terms of PDE's is suspect in terms of conceptual efficiency, if not practical results.

In developing software, one learns that when two bugs affect the same system, it can be hard to see one of them because the other occludes it (e.g., by preventing the system from reaching the code path containing the other bug). Similarly, if the software is "smelly", then even if it works today, the cost of maintaining or improving it can become prohibitive. Then you can either refactor it or junk it and start from scratch. The point is, good developers spend significant time refactoring their code to keep the smell to a manageable minimum.

I strongly suspect that humans are being held back in their intellectual and technical progress by inefficient means of representing and solving problems.

Cheers.

- ico


So you are claiming that real analysis provides THE objective standard for the correctness of computations?

What I claim is that, to the extent that you want to use calculus (which means, to the extent that you want to apply continuum models to things) you can't really avoid some kind of real analysis. And because you're right that we can only measure with rationals, this means we need some assurance that our continuum model corresponds to reality.

Now you may (and I suspect do) reject the need for such continuum models. If you argue that, since we're really only solving approximate problems anyway, we should just discretize everything, then you need to find a way to give assurances that these numerical schemes yield results sufficiently close to the correct answer, in a way you can control. And even if you think that space-time is by nature discrete, so that the problem is moot, that does not change the fact that computing macro-level phenomena from their constituent particles is not computationally feasible.
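A minimal sketch of what such a controllable assurance looks like in practice (my illustration, not the poster's): run forward Euler on y' = y at two step sizes and check that the observed convergence order matches the theoretical first-order rate.

```python
import math

def euler(f, y0, t_end, n):
    """Forward Euler with n uniform steps on [0, t_end]."""
    h = t_end / n
    y = y0
    for _ in range(n):
        y += h * f(y)
    return y

exact = math.e  # y' = y, y(0) = 1 has solution e^t; value at t = 1
err1 = abs(euler(lambda y: y, 1.0, 1.0, 100) - exact)
err2 = abs(euler(lambda y: y, 1.0, 1.0, 200) - exact)
order = math.log2(err1 / err2)  # observed convergence order; should approach 1
```

When the observed order matches the scheme's theoretical order, halving the step size tells you exactly how much closer you got to the true solution -- and that theoretical order is precisely what real analysis underwrites.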

The assumption of continuity of the physical substrate is precisely such an outmoded, inefficient idea -- and the formulation of physics in terms of PDE's is suspect in terms of conceptual efficiency, if not practical results.

I don't think that the differential equation is as hapless a tool for describing reality as you seem to think. The advances of the industrial revolution and more are in large part a consequence of these continuum models, so the practical results speak for themselves.


What I claim is that, to the extent that you want to use calculus (which means, to the extent that you want to apply continuum models to things) you can't really avoid some kind of real analysis. And because you're right that we can only measure with rationals, this means we need some assurance that our continuum model corresponds to reality.

Now you may (and I suspect do) reject the need for such continuum models. If you argue that, since we're really only solving approximate problems anyway, we should just discretize everything, then you need to find a way to give assurances that these numerical schemes yield results sufficiently close to the correct answer, in a way you can control. And even if you think that space-time is by nature discrete, so that the problem is moot, that does not change the fact that computing macro-level phenomena from their constituent particles is not computationally feasible.

I don't think that the differential equation is as hapless a tool for describing reality as you seem to think. The advances of the industrial revolution and more are in large part a consequence of these continuum models, so the practical results speak for themselves.

Sorry, didn't mean to malign the past results obtained with continuum models; for conventional engineering, such models are so useful that even if a better model were out there, I'm not sure it would have mattered much until folk realized that continuum models were deficient in modeling small things. That's when the focus shifted to eigenvalue models, so that whilst the machinery was still continuously formulated, the observable outcomes became discretely spectral for precisely measurable properties. And, didn't the industrial revolution occur before the experimental debunking of the continuous trajectory models in the context of small things? Since then, the biggest advances have come from the quantum models, e.g., transistors and micro-electronics.

But looking to the future, to spur future innovation, we need better models, IMHO.

And, computing macro-level phenomena from micro-states is only difficult because the computational framework is inefficient -- the given seems to navigate these computations with ease, eh? Like, where does Nature choose to round off pi in forming a soap bubble?

Cheers.

- David


And I certainly don't want to keep anyone from using the best adapted tools to solve a problem.

And, computing macro-level phenomena from micro-states is only difficult because the computational framework is inefficient -- the given seems to navigate these computations with ease, eh? Like, where does Nature choose to round off pi in forming a soap bubble?

"Nature" don't "compute" anything; when a soap bubble forms no one need be computing anything. Soap bubbles act according to their nature, and our task to try to figure out how to predict it in the best way possible.


If one defines 'distance' as the number of meter sticks between two points and 'speed' as the rate at which an object follows a path of a certain distance, then one cannot say from observation that the speed of all objects is the same.

I think what you mean to say is that a conserved quantity exists, analogous to speed, which is the same for all objects. The "four-velocity" in general relativity is a similar but not equivalent quantity to the one you described.
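For reference, the standard version of that claim: every massive object's four-velocity has the same invariant magnitude,

```latex
u^{\mu} = \frac{dx^{\mu}}{d\tau}, \qquad
g_{\mu\nu}\,u^{\mu}u^{\nu} = -c^{2} \quad (\text{signature } -+++),
```

so in this sense every object "moves through spacetime" at the same invariant rate c -- but that is a property of the four-velocity, not of the ordinary three-dimensional speed we measure.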


I have had this model for a long time, but it has been a while since I have discussed it.

Is it possible that every thing is moving at the same speed (including photons)?

Blah blah math jargon blah jargon math stuff. Everything is moving in every direction, and motion is when its movements are oriented more in one direction than the rest. Blah jargon.

Cheers.

- David

Is there any evidence-based reason to believe that the speed of fundamental particles is a universal constant?

The trick is to realize that blah blah the same thing I already told you in the first post, jargon jargon.

- David

When someone asks you for evidence, it usually means they want to know what observations you based this crazy theory on.


Well, if math is the science of measurement, this is certainly pretty bad math if you're just pulling it out of a hat of algebraic terms -- which I would swear is going on here. Let's see a bit more evidence, i.e., some connection to reality.


If one defines 'distance' as the number of meter sticks between two points and 'speed' as the rate at which an object follows a path of a certain distance, then one cannot say from observation that the speed of all objects is the same.

I think what you mean to say is that a conserved quantity exists, analogous to speed, which is the same for all objects. The "four-velocity" in general relativity is a similar but not equivalent quantity to the one you described.

Go to dictionary.com and look up the word "speed". Judge for yourself.

What I propose is a model based on constant speed, but varying frequency and direction of displacement.

I suppose I could use the phrase "instantaneous speed", or even "differential speed", or maybe "path speed" is the best phrase. I kinda like that last one -- but my interpretation of the existing definition of the word speed seems to be essentially the same as any of these.

But I like path speed ... maybe, pathspeed? (paths peed? heh heh).

To reiterate: What I propose is a model based on constant speed, but varying frequency and direction of displacement.
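A toy sketch of that model (my own hypothetical illustration, one-dimensional for simplicity): every tick the object takes a step of the same unit length, so its "path speed" is always 1; the observed net speed is set entirely by how aligned the steps' directions are.

```python
import random

random.seed(0)

def net_speed(bias, steps=10_000):
    """Each tick: step +1 with probability `bias`, else -1 (step length always 1).
    Returns net displacement per tick; the path speed is 1 by construction."""
    x = sum(1 if random.random() < bias else -1 for _ in range(steps))
    return abs(x) / steps

# Fully aligned steps: observed net speed equals the path speed.
# Unbiased steps: observed net speed near zero, though the path speed is still 1.
fast = net_speed(1.0)  # 1.0
slow = net_speed(0.5)  # close to 0
```

So "constant speed, varying direction" can reproduce any observed net speed between 0 and the path speed, which is the claim being reiterated above.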

Obviously I think it has legs or I wouldn't be yapping about it, but I am happy to have any and all challenges and objections, because I need to tighten up my arguments and render them palatable before I will be satisfied.

- ico


You completely ignored my statement that whatever this constant quantity is called, it's not what we call speed. I can't even tell what your actual idea is, because you're not very clear. I think it would be a lot more worthwhile to talk as technically as you can, so that people with a physics background can understand you.

A varying frequency of what?

Discrete states based on the NATURAL NUMBERS?!

In what 'space' of variables would this 'speed' 'appear' constant for every observer?

How does this idea simplify the knowledge of Lie groups necessary to conceptualize much of quantum field theory? Does it have any use in classical dynamics? What is the point?

Edited by Q.E.D.

A concept is an object of individual consideration.

A mind can consider at most one concept at a time, but has the power to choose both the concept and duration of consideration. Thinking is the act of deliberately choosing the sequence and duration of concepts that one considers.

In particular, one can consider the relationship between an object and its surroundings, and observe that the relationship can change over time -- an object can move relative to its surroundings. An object's motion is accounted for by the mind as a discrete sequence of relationships to its surroundings. Objectively, motion can be no more, and no less, than the mind's grasp of such sequences.

Each element of such a sequence of motion is specified by two variables:

1) The state of the object as determined at that point in the sequence; and

2) The duration until the next determination of state.

In other words, each element of an observed motion sequence has both state and duration.

It is sufficient to consider the case where the duration is a fixed unit of time for every element of the sequence: since measured durations are rational, one can always find a unit of time that divides evenly into any finite set of durations, so a sequence with varying durations can be viewed, without loss of generality, as a sequence of unit durations in which every original element is modeled as a run of identical unit elements.

Thus, a specific path of motion is naturally modeled as a discrete sequence of changes in position, with the time between changes fixed, i.e., with the frequency of change constant.
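The "common unit" step above can be sketched concretely (a hypothetical illustration, restricted to rational durations -- which, per the earlier point in this thread, is all measurement yields anyway):

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def common_unit(durations):
    """Largest time unit dividing each rational duration a whole number of times."""
    num = reduce(gcd, (d.numerator for d in durations))
    den = reduce(lambda a, b: a * b // gcd(a, b), (d.denominator for d in durations))  # lcm
    return Fraction(num, den)

durations = [Fraction(3, 4), Fraction(1, 2), Fraction(5, 6)]  # arbitrary example, in seconds
unit = common_unit(durations)           # 1/12 s
counts = [d / unit for d in durations]  # 9, 6, 10 unit-duration elements
```

Each original element then becomes a run of identical unit-duration elements, giving the fixed-frequency sequence described above.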

So far I have been considering changes in position and ignoring the fact that material objects have mass. But, according to Uncle Albert, there is a radiative frequency proportional to the mass of an object.

I assume that the radiative frequency is the natural frequency of (potential) change of direction of a material object. I model a photon as a set of line segments end to end along a given ray. I model a massive object by working back from its radiative frequency equivalent photon, changing the direction of segments until, on balance, the net effect of the steps is to bring about the discretely observable motion of the object. And by construction, it is special-relativistically invariant.
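Presumably the "radiative frequency" here is the Compton frequency, f = mc^2/h (that identification is my assumption, not stated in the post); a quick worked value for the electron, with approximate constants:

```python
# Compton frequency f = m c^2 / h, worked for the electron (approximate constants).
m_e = 9.109e-31  # electron mass, kg
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck constant, J*s

f_compton = m_e * c**2 / h  # about 1.24e20 Hz
```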

The rest follows from considering the nature of volumetric space, for which a tetrahedron is the simplest enclosure and the most natural basis for erecting coordinate rays. Gravity appears to be closely related to the shape of the local coordinate tetrahedron, or as I call it, the motive tetrahedron: it is equilateral in a radially symmetric local gravity field, and distorted if the local gravitational field is skewed.

What evidence would be sufficient to prove my hypothesis? How about correlation to spectroscopic data? I'm trying to explain the atomic spectrum of hydrogen with this framework, in the expectation that it will lead to a similar description for helium. That is my first goal: a better theory for predicting atomic spectra. And it really, really helps me to discuss it, but I have no one nearby to discuss it with. So thanks for the attention.
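For that first goal, the benchmark any new framework has to reproduce is already compact: the Rydberg formula for hydrogen. A minimal sketch of the target data (approximate constant; vacuum wavelengths, ignoring fine structure):

```python
R_H = 1.0968e7  # Rydberg constant for hydrogen, m^-1 (approximate)

def hydrogen_line_nm(n_lower, n_upper):
    """Wavelength (nm) of the n_upper -> n_lower transition,
    via 1/lambda = R * (1/n_lower^2 - 1/n_upper^2)."""
    inv_wavelength = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_wavelength

h_alpha = hydrogen_line_nm(2, 3)  # Balmer H-alpha, about 656 nm
```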

- ico

Edited by icosahedron
