On Binary Planets, and Binary Polyhedra

Faceted Augmented Icosa

This image of binary polyhedra of unequal size was, obviously, inspired by the double dwarf planet at the center of the Pluto / Charon system. The outer satellites also orbit Pluto and Charon’s common center of mass, or barycenter, which lies above Pluto’s surface. In the similar case of the Earth / Moon system, the barycenter stays within the interior of the larger body, the Earth.

I know of one other quasi-binary system in this solar system which involves a barycenter outside the larger body, but it isn’t one many would expect: it’s the Sun / Jupiter system. Both orbit their barycenter (or, more properly, that of the whole solar system, but the two points are pretty much in the same place), Jupiter doing so at an average orbital radius of 5.2 AU — and the Sun doing so, staying opposite Jupiter, with an orbital radius slightly larger than the radius of the visible Sun itself. The Sun, therefore, orbits a point outside itself which is the gravitational center of the entire solar system.
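A back-of-the-envelope check of that claim is straightforward, since the barycenter distance for a two-body system is just r = d·m₂/(m₁ + m₂). Here is a minimal sketch in Python, using rounded reference values for the masses and distances (the constants are standard figures, not taken from the text above):

```python
# Does the Sun's orbit around the Sun/Jupiter barycenter really extend
# past the Sun's own surface? Approximate two-body check.

M_SUN = 1.989e30       # mass of the Sun, kg
M_JUPITER = 1.898e27   # mass of Jupiter, kg
AU = 1.496e11          # one astronomical unit, m
A_JUPITER = 5.2 * AU   # Jupiter's average orbital radius, m
R_SUN = 6.957e8        # radius of the visible Sun (photosphere), m

# Distance from the Sun's center to the two-body barycenter.
r_orbit = A_JUPITER * M_JUPITER / (M_SUN + M_JUPITER)

print(f"Sun's orbital radius around the barycenter: {r_orbit:.3e} m")
print(f"Radius of the visible Sun:                  {R_SUN:.3e} m")
print(f"Ratio: {r_orbit / R_SUN:.2f}")  # ~1.07 -- just outside the photosphere
```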

Why don’t we notice this “wobble” in the Sun’s motion? Well, binary objects orbit their shared barycenter with equal orbital periods, as seen in the image above, where the orbital period of both the large, tightly-orbiting rhombicosidodecahedron, and the small, large-orbit icosahedron, is precisely eight seconds. In the case of the Sun / Jupiter system, the Sun completes one Jupiter-induced wobble, tracing a tight ellipse with their barycenter at one focus, once per Jovian year, which is just under twelve Earth years. If the Jovian-induced solar wobble were faster, it would be much more noticeable.

[Image credit: the picture of the orbiting polyhedra above was made with software called Stella 4d, available at this website.]

An Image, from Outside All of the Numerous Event Horizons Inside the Universe, During the Early Black Hole Era

late universe

This image shows exactly what most of the universe will look like — on a 1:1 scale, or many other scales — as soon as the long Black Hole Era has begun, so this is the view, sometime after 10⁴⁰ years have passed since the Big Bang. This is such a long time that it means essentially the same thing as “10⁴⁰ years from now,” the mere ~10¹⁰ years between the beginning of time, and now, fading into insignificance by comparison, not even close to a visible slice of a city-wide pie chart.

This isn’t just after the last star has stopped burning, but also after the last stellar remnant (such as white dwarfs and neutron stars), other than black holes, is gone, which takes many orders of magnitude more time. What is left, in the dark, by this point? A few photons (mostly radio waves), as well as some electrons and positrons — and lots — lots — of neutrinos and antineutrinos. There are also absurd numbers of black holes; their mass dominates the mass of the universe during this time, but slowly diminishes via Hawking radiation, with this decay happening glacially for large black holes, and rapidly for small ones, culminating in a micro-black-hole’s final explosion. Will there be any baryonic matter at all? The unanswered question of the long-term stability of the proton creates uncertainty here, but there will, at minimum, be some protons and neutrons generated each time a micro-black-hole explodes itself away.
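To get a feel for how lopsided those evaporation timescales are, here is a small Python sketch of the standard Hawking evaporation-time formula, t = 5120πG²M³/(ħc⁴); the three masses chosen are illustrative examples, not figures from the text above:

```python
import math

# Hawking evaporation time for a black hole of mass M, ignoring accretion
# and incoming radiation: t = 5120 * pi * G^2 * M^3 / (hbar * c^4)

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg
YEAR = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg: float) -> float:
    """Hawking evaporation time, in years."""
    seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return seconds / YEAR

print(f"1 solar mass:        {evaporation_time_years(M_SUN):.1e} years")        # ~2e67
print(f"10^9 solar masses:   {evaporation_time_years(1e9 * M_SUN):.1e} years")  # ~2e94
print(f"10^9 kg micro hole:  {evaporation_time_years(1e9):.1e} years")          # ~3e3
```

Since the time scales as the cube of the mass, a billion-solar-mass giant lasts about 10²⁷ times longer than a stellar-mass black hole, which is why the era stretches out toward that googol-year mark.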

Things stay like this until the last black hole in the cosmos finally evaporates away, perhaps a googol years from now. That isn’t the end of time, but it does make things less interesting, subtracting black holes, and their Hawking radiation, from the mix. It’s still dark, but now even the last of the flashes from a tiny, evaporating black hole has stopped interrupting the darkness, so then, after that . . . nothing does. The universe continues to expand, forever, but the bigger it becomes, the less likely anything complex, and therefore interesting, could possibly have survived the eons intact.

For more on the late stages of the universe, please visit this Wikipedia article, upon which some of the above draws, and the sources cited there.

“You Majored in WHAT?”

I’m in my twentieth year of teaching mostly science and mathematics, so it is understandable that most people are surprised to learn that I majored in, of all things, history.

It’s true. I focused on Western Europe, especially modern France, for my B.A., and post-WWII Greater China for my M.A. My pre-certification education classes, including student teaching, were taken between these two degree programs.

Student teaching in social studies did not go well, for the simple reason that I explain things by reducing them to equations. For some reason, this didn’t work so well in the humanities, so I took lots of science and math classes, and worked in a university physics department, while working on my history M.A. degree, so that I could job-hunt in earnest, a year later, able to teach physics and chemistry. As it ended up, I taught both my first year, along with geometry, physical science, and both 9th and 12th grade religion. Yes, six preps, for an annual salary of US$16,074.

History to mathematics? How does one make that leap? In my mind, this explains how:

  • History is actually the story of society over time, so it’s really sociology.
  • Sociology involves the analysis of groups of human minds in interaction. Therefore, sociology is actually psychology.
  • Psychology is the study of the mind, but the mind is the function of the brain, one of the organs of the human body. Psychology, therefore, is really biology.
  • Biological organisms are complex mixtures of interacting chemicals, and, for this reason, biology is actually chemistry.
  • Chemistry, of course, breaks down to the interactions of electrons and nuclei, governed by only a few physical laws. Chemistry, therefore, is really physics.
  • As anyone who has studied it knows, physics often involves more mathematics than mathematics itself.

…And that at least starts to explain how someone with two history degrees ended up with both a career, and an obsession, way over on the mathematical side of academia.

Does Everything Move at the Speed of Light?

everything moves at c

I have a friend who once explained to me his way of understanding spacetime, and what Einstein discovered about it, which was to start with the idea that, as he put it, “everything is traveling at c,” and proceed from there. Light travels at c, of course, but time does not pass for light; its velocity is purely spatial, forming vector AG, shown in purple. A spatially-stationary rock is still traveling — temporally, into the future, at a rate of sixty seconds per minute, as represented by dark green vector AN. My friend’s idea was to interpret this rate of time-passage — the normal time passage-rate we generally experience — as another form of c. Objects moving slower than light are also moving at c, according to this idea, as a vector sum of temporal and spatial velocities. In this diagram, all spatial dimensions are collapsed into one direction (parallel to the x-axis), while time runs up (never down) the y-axis, into the future (never the past).

I don’t know why it took me perhaps a decade to see that my friend’s idea is testable. Better than that, the data needed to test it already exist! All I need to do is cross-check the predictions of my friend’s idea against a thoroughly-tested formula regarding relativistic time dilation. The relevant equation for time dilation is this one, which you can find in any decent physics textbook:

t = t₀ / sqrt[1 − v²/c²]

(the equation for time dilation, where t₀ is the time elapsed at the moving object, and t is the corresponding, longer time measured by a stationary observer)

In the diagram at the top of this post, the blue horizontal component-vector NM represents a spatial velocity of (c)sin(10°) = 0.173648c. It is a component of the total velocity of the object represented by blue vector AM, which, if my friend is correct, is c, as a vector-sum total velocity — the sum, that is, of temporal and spatial velocities. By the equation shown above, then, the measured elapsed time for an event — say, the “minute,” in “seconds per minute” — to take place, at an object with that speed, as measured by a stationary observer, should be 1/sqrt[1 − (0.173648)²] = 1/sqrt(1 − 0.0301537) = 1/sqrt(0.969846) = 1/0.984808 = 1.01543 times as long as the duration of the same event, for the observer, with the event happening at the observer’s location.

Now, if time is taking longer to pass by, then an object’s temporal speed is shrinking, so this slightly longer elapsed time corresponds to a slightly slower temporal speed. As seen in the equations above, near the end of the calculation, the two have a reciprocal relationship, so such an object’s temporal speed would only be 0.984808(temporal c) = 0.984808(60 seconds/minute) = 59.0885 seconds per minute. Therefore, an object moving spatially at 0.173648c would experience time at 0.984808k, where k represents the temporal-only c of exactly 60 seconds per minute — according to Einstein.

Next, to check this against my friend’s “everything moves at c” idea, I need only compare 0.984808 to the cosine of 10°, since, in the diagram above, based on his idea, vector BM = (vector AM)cos(10°). The cosine of 10° = 0.984808, which supports my friend’s hypothesis. It has therefore just passed its first test.

As for the other sets of vectors in the diagram, they provide opportunities for additional testing at specific relativistic spatial velocities, but I’m going to skip ahead to a generalized solution which works for any spatial velocity from zero to c, corresponding to angles in the diagram from zero to ninety degrees. Substituting θ for 10°, the spatial velocity (c)sin(10°) becomes simply (c)sinθ, which corresponds to a temporal velocity of (c)cosθ. It then remains to show that the “cosθ” portion of this expression is the reciprocal of 1/sqrt[1 − (sinθ)²], which is what Einstein’s time-dilation factor becomes after the c² in the numerator and the denominator of the fraction under the radical cancels. Using the Pythagorean trigonometric identity, 1 = (sinθ)² + (cosθ)², rearranged as 1 − (sinθ)² = (cosθ)², substitution gives 1/sqrt[1 − (sinθ)²] = 1/sqrt[(cosθ)²] = 1/cosθ, the reciprocal of which is, indeed, cosθ, which is what needed to be shown for a generalized solution.
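Since the generalized result is an identity, it is also easy to spot-check numerically. Here is a short Python sketch which compares the reciprocal of Einstein’s time-dilation factor with cosθ at several angles, including the 10° case worked out above (the list of angles is arbitrary):

```python
import math

# For a spatial speed v = c*sin(theta), Einstein's time-dilation factor is
# 1/sqrt(1 - (v/c)^2). If the "everything moves at c" idea is right, the
# temporal speed, 1/dilation as a fraction of "temporal c" (60 s/min),
# should equal cos(theta) exactly.

for degrees in (10, 30, 45, 60, 80, 89):
    theta = math.radians(degrees)
    v = math.sin(theta)                  # spatial speed, as a fraction of c
    dilation = 1 / math.sqrt(1 - v**2)   # Einstein's time-dilation factor
    temporal_speed = 1 / dilation        # predicted fraction of "temporal c"
    print(f"{degrees:2d} deg: 1/dilation = {temporal_speed:.6f}, "
          f"cos(theta) = {math.cos(theta):.6f}")
```

The two columns match at every angle, as the algebra above says they must.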

My friend’s name is James Andrew Lemley. When I started writing this post (after the long process of preparing the diagram), I did not know what result I would get, comparing what logically follows from Andrew’s idea with the well-tested conclusions of Einstein’s time-dilation formula, at even one specific relativistic speed. Andrew, I salute you, and think this looks quite promising. Based on the calculations above, and after all these years, I must tell you that I now think you are, indeed, correct: in a sense that allows us to better understand spacetime, we are all moving at c.

Anticarbon-14 and Oxygen-18 Nuclei: What If They Collided? And Then, What About the Reverse-Reaction?

anticarbon-14 and oxygen-18

Were nuclei of anticarbon-14 and oxygen-18 to collide (and their opposite charges’ attractions would help with this), what would happen? Well, if you break it down into particles, the anticarbon-14 nucleus is composed of six antiprotons and eight antineutrons, while the oxygen-18 nucleus contains eight protons and ten neutrons. That lets six proton-antiproton pairs annihilate each other, releasing a specific amount of energy, in the form of gamma rays, with that amount calculable using E=mc² and KE=½mv². The two excess protons from the oxygen-18, however, should escape unscathed. In the meantime, eight neutron-antineutron pairs are also converted into a specific, calculable amount of gamma-ray energy, but with two neutrons surviving. Here’s the net reaction:

anticarbon-14 + oxygen-18 → 2 protons + 2 neutrons + gamma rays

Two protons and two neutrons, of course, can exist as separate particles, as two deuterons, as a tritium nucleus plus a proton (or a helium-3 nucleus plus a neutron), or as a single alpha particle.
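For anyone who wants the number, here is a minimal Python sketch of the rest-mass portion of that energy calculation, using the standard proton and neutron rest energies; the kinetic-energy contribution (the KE term mentioned above) depends on the collision speed and is omitted here:

```python
# Rest-mass energy released when six proton-antiproton pairs and eight
# neutron-antineutron pairs annihilate (E = mc^2 only; kinetic energy of
# the colliding nuclei would add to this).

PROTON_REST_ENERGY = 938.272    # MeV
NEUTRON_REST_ENERGY = 939.565   # MeV

proton_pairs = 6    # proton-antiproton pairs annihilated
neutron_pairs = 8   # neutron-antineutron pairs annihilated

# Each annihilating pair releases the rest energy of BOTH particles.
energy_mev = 2 * (proton_pairs * PROTON_REST_ENERGY
                  + neutron_pairs * NEUTRON_REST_ENERGY)

print(f"Rest-mass energy released: {energy_mev:,.0f} MeV "
      f"(about {energy_mev / 1000:.1f} GeV)")   # ~26,292 MeV, or ~26.3 GeV
```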

Now, consider this:  any physical process is, at least hypothetically, reversible. Therefore, it should be possible to bombard a dense beam of alpha particles with many gamma rays, each of a specific and calculable energy, and, rarely, the reverse reaction would occur, and anticarbon-14 and oxygen-18 nuclei would appear. Oxygen-18 is stable, but rare, so detection of it would be evidence that the reverse-reaction had occurred. Anticarbon-14, however, can logically be expected to decay to antinitrogen-14 via the antimatter version of beta-negative decay, which, it being antimatter, will result in the emission of an easily-detectable positron. It likely will not have time to do this, though, for carbon-14’s half-life (and anticarbon-14’s as well, one assumes) exceeds 5,000 years. The more likely scenario for the anticarbon-14 nucleus is that it will create a large burst of gamma rays when it encounters, say, a non-antimatter carbon atom — and these gamma rays would come from a different position than the ones bombarding the alpha particles, and can therefore be distinguished from them by determination of their direction.

Such a reverse-reaction would be quite rare, for it involves a decrease in entropy, violating the Second Law of Thermodynamics. However, the Second Law is a statistical law, not an absolute one, so it simply describes what happens most of the time, allowing for rare and unusual aberrations, especially on the scale of things which are extremely small. So, do this about a trillion times (or much more, but still a finite number of trials) and you’ll eventually observe evidence of the production of the first known anticarbon nucleus.

Also, before anyone points this out, I am well aware that this is highly speculative. I do make this claim, though:  it can be tested. Perhaps someone will read this, and decide to do exactly that. I’d test it myself, but I lack the equipment to do so.

How Richard Feynman Saved Eastern Tennessee from Getting Nuked

feynman

I’m reading the book shown above for the second time, and am noticing many things that escaped my attention the first time through. The most shocking of these items, so far, is finding out that history’s first nuclear explosion almost occurred by accident, in Oak Ridge, Tennessee, during World War II. One person prevented this disaster, and that person was Richard Feynman, my favorite scientist in any field. If you’d like to read Feynman’s account of this, in his own words, it’s in the chapter “Los Alamos from Below,” which starts on page 107.

Feynman, a physicist, was one of many civilians involved in the Manhattan Project, doing most of his work in New Mexico. At one point, though, he obtained permission to visit Oak Ridge, in order to try to solve problems which existed there. These problems were caused by the American military’s obsession with secrecy, which was caused, in turn, by the fact that it was known, correctly, that at least one spy for the Nazis was among the people working on the Manhattan Project. The military’s “solution” to this problem was to try to keep each group of civilians working for them in the dark about what the other groups of civilians were doing. Most of them had no idea that they were working to develop a bomb, let alone an atomic bomb. In Tennessee, they thought they were simply working on developing a way to separate uranium isotopes, but did not know the underlying purpose for this research.

The military men in charge knew (because the physicists in New Mexico figured it out, and told them) a little bit about the concept of critical mass. In short, “critical mass” means that if you get too much uranium-235 in one place, a runaway chain-reaction occurs, and causes a nuclear explosion. The military “brass” had relayed this information to the civilian teams working in Tennessee, by simply telling them to keep the amount of U-235 in one place below a certain, specific amount. However, they lacked enough knowledge of physics to include all the necessary details, and they deliberately withheld the purpose for their directive. Feynman, by contrast, did not share this dangerous ignorance, nor was he a fan of secrecy — and, as is well known, the concept of respecting “authority” was utterly meaningless to him.

While in Tennessee, Feynman saw a large amount of “green water,” which was actually an aqueous solution of uranium nitrate. What he knew, but those in Tennessee did not, is that water slows down neutrons, and slow neutrons are the key to setting off a chain reaction. For this reason, the critical mass for uranium-235 in water is much less than the critical mass of dry U-235, and the “green water” Feynman saw contained enough U-235 to put it dangerously close to this lower threshold. In response to this, Feynman told anyone who would listen that they were risking blowing up everything around them.

It wasn’t easy for Feynman to get people to believe this warning, but he persisted, until he found someone in authority — a military officer, of course — who, although he didn’t understand the physics involved, was smart enough to realize that Feynman did understand the physics. He was also smart enough to carefully listen to Feynman, and decided to heed his warning. The safety protocols were modified, as were procedures regarding sharing of information. With more openness, not only was a disaster in Tennessee avoided, but progress toward developing an atomic bomb was accelerated. It turns out that people are better at solving problems . . . when they know the purpose of those problems.

Had this not happened, not only would Eastern Tennessee likely have suffered the world’s first nuclear explosion, but overall progress on the Manhattan Project would have remained slow — and the Nazis, therefore, might have developed a nuclear bomb before the Americans, making it more likely that the Axis Powers would have won the war. Richard Feynman, therefore, dramatically affected the course of history — by deliberately putting his disdain for authority to good use.

save lives

Public Schools in the United States Should Rename the “Free Lunch”

tanstaafl

If you live in the USA, you are probably familiar with the phrase “free lunch,” or “free and reduced lunch,” as used in a public-school context. For those outside the USA, though, an explanation of what that phrase means, in practice, may be helpful, before I explain why a different name for such lunches should be used.

The term “free and reduced lunch” originated with a federal program which pays for school lunches, as well as breakfasts, with money collected from taxpayers — for students whose families might otherwise be unable to afford these meals. The program’s eligibility requirements take into account both family income and size. There’s a problem with it, though:  the inaccuracy of the wording used, especially the troublesome word “free.” The acronym above, “TANSTAAFL,” is familiar to millions, from the works of Robert A. Heinlein (science fiction author), Milton Friedman (Nobel-Prize-winning economist), and others. It stands for the informally-worded phrase, “There ain’t no such thing as a free lunch,” which gets to the heart of the problem with the terminology we use when discussing school lunches. (Incidentally, I have seen an economics textbook use the phrase “TINSTAAFL,” in its place, to change “ain’t no” to “is no.” I do not use this version, though, for I am unwilling to correct the grammar of a Nobel laureate.)

The principle that “free lunches” simply do not exist is an important concept in both physics and economics, as well as other fields. In physics, we usually call it the Law of Conservation of Mass and Energy, or the First Law of Thermodynamics. This physical law has numerous applications, and has been key to many important discoveries. Learning to understand it, deeply, is an essential step in the education of anyone learning physics. Those who teach the subject, as I have in many past years, have an even more difficult task:  helping students reach the point where they can independently apply the TANSTAAFL principle to numerous different situations, in order to solve problems, and conduct investigations in the laboratory. It is a fundamental statement of how the universe works:  one cannot get something for nothing.

TANSTAAFL applies equally well in economics, where it is related to such things as the fact that everything has a cost, and those costs, while they can be shifted, cannot be made to simply disappear. It is also related to the principle that intervention by governments in the economy always carries costs. For example, Congress could, hypothetically, raise the federal minimum wage to $10 per hour — but the cost of doing so would be increased unemployment, especially for those who now have low-paying jobs. Another possible cost of a minimum-wage hike this large would be a sudden spike in the rate of inflation, which would be harmful to almost everyone.

To understand what people have discovered about the fundamental nature of physical reality, physics must be studied. To understand what is known about social reality in the modern world, economics must be studied. Both subjects are important, and understanding the TANSTAAFL principle is vital in both fields. Unfortunately, gaining that understanding has been made more difficult, for those educated in the United States, simply because of repeated and early exposure to the term “free lunch,” from childhood through high school graduation. How can we effectively teach high school and college students that there are no free lunches, when they have already been told, incessantly, for many years, that such things do exist? The answer is that, in many cases, we actually can’t — until we have first helped our students unlearn this previously-learned falsehood, for it stands in the way of the understanding they need. It isn’t a sound educational practice to do anything which makes it necessary for our students to unlearn untrue statements.

I am not advocating abolition, nor even reduction, of this federal program, which provides essential assistance for many families who need the help. Because I am an American taxpayer, in fact, I directly participate in funding this program, and do not object to doing so. I do take issue, however, with this program teaching students, especially young, impressionable children in elementary school, something which is untrue.

We need to correct this, and the solution is simple:  call these school lunches what they actually are. They aren’t free, for we, the taxpayers, pay for them. Nothing is free. We should immediately replace the phrase “free and reduced lunch” with the phrase “taxpayer-subsidized lunch.” The second phrase is accurate. It tells the truth, but the first phrase does the opposite. No valid reason exists to try to hide this truth.

The Vacuum Cleaner Enigma

A vacuum is, by definition, a region of space devoid of matter. While a perfect vacuum is a physical impossibility, very good approximations exist. Interplanetary space is good, especially far from the sun. Interstellar space is better, and intergalactic space is even better than that.

Along come humans, then, and they invent these things:

vacuum-cleaner-upright

. . . and call them “vacuum cleaners.”

Now, this makes absolutely no sense. There isn’t anything cleaner than a vacuum — and the closer to an ideal vacuum a real vacuum comes, the cleaner it gets. Since vacuums are the cleanest regions of space around already, why would anyone pay good money for a machine that supposedly cleans them? They’re already clean!

Even cleaning in general is a puzzle, without vacuums being involved at all. To attempt to clean something — anything — is, by definition, an attempt to fight the Second Law of Thermodynamics. Isn’t it obvious that any such effort is, in the long run, doomed from the outset?

—–

[Image note:  I didn’t create the images for this post, but found them using Google. I assume they are in the public domain.]