A Mathematical Model for Human Intelligence

Curiosity and Intelligence

People have been trying to figure out what intelligence is, and how it differs from person to person, for centuries. Much has been written on the subject, and some of this work has helped people. Unfortunately, much harm has been done as well. Consider, for example, the damage done by those whose work on intelligence has been tainted by racism, sexism, or some other form of “us and them” thinking. This model is an attempt to eliminate such extraneous factors and focus on the essence of intelligence. It is necessary to start, therefore, with as clean a slate as possible, and then try to figure out how intelligence works, which must begin with an analysis of what it is.

If two people are the same age — five years old, say — and a battery of tests has been given to them, on a wide variety of subjects, to see how much each one knows at that age, person A (represented by the blue curve) may be found to know more, at that age, than person B (represented by the red curve). At that age, one could argue that person A is smarter than person B. Young ages are found on the left side of the graph above, and the two people get older, over their lifespans, as the curves move toward the right side of the graph.

What causes person A to know more than person B, at that age? There can be numerous factors in play, but few will be determined by any conscious choices these two people made over their first five years of life. Person B, for example, might have been affected by toxic substances in utero, while person A had no such disadvantage. On the other hand, person A might simply have been encouraged by his or her parents to learn things, while person B suffered from parental neglect. At age five, schools are not yet likely to have had as much of an impact as other factors.

An important part of this model is the recognition that people change over time. Our circumstances change. Illnesses may come and go. Families move. Wars happen. Suppose that, during the next year, person B is lucky enough to get to enroll in a high-quality school, some distance from the area where these two people live. Person B, simply because he or she is human, does possess curiosity, and curiosity is the key to this model. Despite person B’s slow start with learning, being in an environment where learning is encouraged works. This person begins to acquire knowledge at a faster rate. On the graph, this is represented by the red curve’s slope increasing. This person is now gaining knowledge at a much faster rate than before.

In the meantime, what is happening with person A? There could be many reasons why the slope of the blue curve decreases, and this decrease simply indicates that knowledge, for this person, is now being gained at a slower rate than before. It is tempting to leap to the assumption that person A is now going to a “bad” school, with teachers who, at best, encourage rote memorization, rather than actual understanding of anything. Could this explain the change in slope? Yes, it could, but so could many other factors. It is undeniable that teachers have an influence on learning, but teacher quality (however it is determined, which is no easy task) is only one factor among many. Encouraging the “blame the teacher” game is not the goal of this model; there are already plenty of others doing that.

Perhaps person A became ill, suffered a high fever, and sustained brain damage as a result. Perhaps he or she was suddenly orphaned, thereby losing a previous, positive influence. There are many other possible factors which could explain the sudden decrease in the slope of this child’s blue “learning curve” shown above; our species has shown a talent for inventing horrible things to do to, well, our species. Among the worst of the nightmare scenarios is that, while person B is learning things at a distant school, the area where person A still lives is plunged into civil war, and/or an attempted genocide is launched against the ethnic group to which person A belongs, as the result of nothing more than an accident of birth, and the bigotry of others. Later in life, on the graph above, the two curves intersect; beyond that point, person B knows more than person A, despite person B’s slow start. To give credit, or blame, to either of these people for this reversal would clearly be, at best, a severely incomplete approach.

At some point, of course, some people take the initiative to begin learning things on their own, becoming autodidacts, with high-slope learning curves. In other words, some people assume personal responsibility for their own learning. Most people do not. Few would be willing to pass such judgment on a child who is five or six years old, but what about a college student? What about a high school senior? What about children who have just turned thirteen years old? For that matter, what about someone my age, which is, as of this writing, 48? It seems that, the older a person is, the more likely we are to apply this “personal responsibility for learning” idea. Especially with adults, the human tendency to apply this idea to individuals may have beneficial results. That does not, however, guarantee that this idea is actually correct.

I must stop analyzing the graph above for now, because the best person for me to examine in detail, at this point, is not on that graph. He is, however, the person I know better than anyone else: myself. I’ve been me now for over 48 years, and have been “doing math problems for fun” (as my blog’s header-cartoon puts it) for as long as I can remember. This is unusual, but, if I’m honest, I have to admit that there are inescapable and severe limits on the degree to which I can make a valid claim that I deserve credit for any of this. I did not select my parents, nor did I ask either of them to give me stacks of books about mathematics, as well as the mathematical sciences. They simply noticed that, when still young, I was curious about certain things, and provided me with resources I could use to start learning, early, at a rapid rate . . . and then I made this a habit, for, to me, learning is fun, if (and only if) the learning is in a field I find interesting. I had absolutely nothing to do with creating this situation. My parents had the money to buy those math books; not all children are as fortunate in this respect. Later still, I had the opportunity to attend an excellent high school, with an award-winning teacher of both chemistry and physics. To put it bluntly, I lucked out. As Sam Harris, the neuroscientist, has written, “You cannot make your own luck.”

At no point in my life have I managed to learn how to create my own luck, although I have certainly tried, so I have now reached the point where I must admit that, in this respect, Sam Harris is correct. For example, I am in college, again, working on a second master’s degree, but this would not be the case without many key factors simply falling into place. I didn’t create the Internet, and my coursework is being done on-line. I did not choose to be born in a nation with federal student loan programs, and such student loans are paying my tuition. I did not create the university I am attending, nor did I place professors there whose knowledge exceeds my own, regarding many things, thus creating a situation where I can learn from them. I did not choose to have Asperger’s Syndrome, especially not in a form which has given me many advantages, given that my “special interests” lie in mathematics and the mathematical sciences, which are the primary subjects I have taught, throughout my career as a high school teacher. The fact that I wish to be honest compels me to admit that I cannot take credit for any of this — not even the fact that I wish to be honest. I simply observed that lies create bad situations, especially when they are discovered, and so I began to try to avoid the negative consequences of lying, by breaking myself of that unhelpful habit. 

The best we can do, in my opinion, is try to figure out what is really going on in various situations, discern which factors help people learn at a faster rate, and then try to increase the number of people influenced by these helpful factors, rather than harmful ones. To return to the graph above, we will improve the quality of life, for everyone, if we can figure out ways to increase the slope of people’s learning-curves. That slope could be called the learning coefficient, and it is simply the rate at which a person’s knowledge is changing over time, at any given point along that person’s learning-curve. This learning coefficient can change for anyone, at any age, for numerous reasons, a few of which were already described above. Learning coefficients therefore vary from person to person, and also within each person, at different times in an individual’s lifetime. The frequently-heard term “lifelong learning” translates, on such graphs, to keeping learning coefficients high throughout our lives. The blue and red curves on the graph above change slope only early in life, but such changes can, of course, occur at other ages, as well.
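Stated as a formula (a small formalization of the idea above; the symbols are mine, not labels from the graph): if $K(t)$ is a person’s accumulated knowledge at age $t$, then the learning coefficient is the slope

\[
L(t) = \frac{dK}{dt},
\]

and “lifelong learning” simply means keeping $L(t)$ high at every age $t$, not just in childhood.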

It is helpful to understand what factors can affect learning coefficients. Such factors include people’s families, health, schools and teachers, curiosity, opportunities (or lack thereof), wealth and income, government laws and policies, war and/or peace, and, of course, luck, often in the form of accidents of birth. Many people would also place genetic factors on this list. I am not comfortable with such DNA-based arguments, and for that reason am not including them here, but I am willing to admit that this may be an error on my part. This is, of course, a partial list; anyone reading this is welcome to suggest other possible factors, as comments on this post.

The American Historical Clock of War and Peace

[Image: the American Historical Clock of War and Peace]

The yellow years are ones in which the USA was getting into or out of major wars — or both, in the case of the brief Spanish-American War. The red years are war years, and the blue years are years of (relative) peace.

The sectors are each bounded by two radii and a 1.5° arc; at 1.5° per year, 240 year-sectors fill the full 360° circle. The current year is omitted intentionally because 2016 isn’t over yet, and we don’t know what will happen during the rest of it.
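As a quick check on that geometry, here is a minimal sketch in Python (the 1776 starting year is my inference from the sector count, not something stated above):

```python
ARC_PER_YEAR = 1.5                     # degrees of arc given to each year-sector
SECTORS = int(360 / ARC_PER_YEAR)      # 240 sectors fill the full circle exactly

END_YEAR = 2015                        # 2016 is intentionally omitted
START_YEAR = END_YEAR - SECTORS + 1    # 1776, if the clock begins with U.S. independence

print(SECTORS, START_YEAR)             # 240 1776
```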

A Tour of the Periodic Table of the Elements, Part 1

[Image: a horizontally-extended periodic table of the elements, with the groups color-coded as described below]


In this, and some upcoming posts, I’ll be showing you various collections of elements on the horizontally-extended version of the periodic table — one that includes the f-block elements in their proper place, rather than relegating them to two separate rows below the other elements. (I’m also suggesting the purple letters A – N for the usually-unrecognized groups in the f-block, and keeping the group numbers 1-18, with which many are familiar, for the other groups.)

For this first post, I’ll start with some sets of elements which are familiar to most who have studied the subject, plus some others which are much less well-known.

  • Light blue — the alkali metals.
  • Black background with red symbol and atomic number — hydrogen, which is definitely not an alkali metal, despite it sharing group 1 with them.
  • Dark blue — the alkaline-earth metals.
  • Dark yellow — the lanthanides.
  • Orange — these two elements are included with the lanthanides in some sources, and with the transition metals in others.
  • Bright pink — the actinides.
  • Light pink — these two elements are included with the actinides in some sources, and with the transition metals in others.
  • Red — the transition metals, also known as the transition elements, or d-block elements.
  • Light purple — group 13 is often called the “boron group,” but it also goes by other names, such as the “icosagens” and the “triels.”
  • Dark purple — group 14 is often called the “carbon group,” but it also goes by other names, such as “tetragens” and “crystallogens.” In semiconductor physics, these elements are referred to as group IV elements. 
  • Dark green — group 15 elements are referred to as the pnictogens, or nitrogen-group elements.
  • Bright yellow — the chalcogens, also known as the group 16 elements, or oxygen-group elements.
  • Light green — the halogens.
  • Gray — the noble gases.
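For readers who want the group-number version of this legend in one place, here is one way it could be written down (a minimal sketch in Python; the dictionary, and its name, are mine, and the f-block families fall outside the numbered groups):

```python
# Conventional family names for the numbered groups, matching the color legend above.
# The lanthanides and actinides (f-block) are not covered by these group numbers.
GROUP_FAMILIES = {
    1: "alkali metals (plus hydrogen, which is not an alkali metal)",
    2: "alkaline-earth metals",
    **{n: "transition metals (d-block)" for n in range(3, 13)},
    13: "boron group (icosagens, triels)",
    14: "carbon group (tetragens, crystallogens)",
    15: "pnictogens (nitrogen group)",
    16: "chalcogens (oxygen group)",
    17: "halogens",
    18: "noble gases",
}

print(GROUP_FAMILIES[15])   # pnictogens (nitrogen group)
```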

Earth’s Oceans’ and Continents’ Relative Surface Areas, Analyzed, with Two Pie Charts

I’ll start this analysis with a simple land/water breakdown for Earth’s surface:

[Pie chart: Earth’s surface area, land vs. water]

The two figures in the chart above are familiar to many — but how does “land” break down into continents, and how does “water” break down into oceans, as fractions of Earth’s total surface area? That’s what this second chart shows.

[Pie chart: Earth’s surface area by continent and ocean]

I placed the continents on the chart so that physically-connected continents appear as sets of adjacent wedges of similar color, separated only by thin lines. The most obvious example of this is Europe and Asia, which are considered separate continents in the first place only for historical reasons, not geographical ones. Combine them into Eurasia, and it has 36.3% of Earth’s total land area, which is (36.3%)(0.292) = 10.6% of Earth’s total surface area. Even then, Earth’s three largest oceans (the Atlantic, Indian, and Pacific Oceans) are each larger than Eurasia.

There are other naturally-connected continents, albeit with much smaller land connections — narrow enough for humans to have altered this fact, only a “blip” ago on geological time-scales, by building the Suez and Panama Canals. In the case of the Suez, its construction severed, artificially, the naturally-occurring land connection between Eurasia and Africa, and the term “Afro-Eurasia” has been used for the combination of all three traditionally-defined continents. Afro-Eurasia has 56.7% of Earth’s land, but that’s only (56.7%)(0.292) = 16.6% of Earth’s total surface area. That’s larger than the Indian Ocean, at (19.5%)(0.708) = 13.8% of Earth’s surface area. However, both the Atlantic Ocean, at (23.5%)(0.708) = 16.6% of Earth’s surface area, and the Pacific Ocean, at (46.6%)(0.708) = 33.0% of Earth’s surface area, are still larger than Afro-Eurasia (in the Atlantic’s case, just barely, before rounding).

The Pacific Ocean alone, in fact, has a greater surface area than all of Earth’s land — combined.

The other case that can be made for continent-unification involves North and South America, since their natural land connection was severed, only about a century ago, by the construction of the Panama Canal. Combine the two, and simply call the combination “the Americas,” and that’s 28.5% of Earth’s land, which is (28.5%)(0.292) = 8.3% of Earth’s surface area. (I didn’t simply call this combination “America” to avoid confusion with the USA.) The Americas, even in combination, are not only smaller than each of Earth’s three largest oceans (the Atlantic, Indian, and Pacific), but also smaller than Afro-Eurasia, or Eurasia — or even Asia alone, by a narrow margin.
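Each of the conversions above follows the same pattern: multiply a continent’s share of total land by land’s 29.2% share of Earth’s surface, or an ocean’s share of total water by water’s 70.8% share. Here is that arithmetic as a minimal Python sketch, using the percentages quoted above (the function name is mine):

```python
LAND_FRACTION = 0.292    # land's share of Earth's total surface area
WATER_FRACTION = 0.708   # water's share of Earth's total surface area

def surface_share(fraction: float, of_land: bool = True) -> float:
    """Convert a share of total land (or of total water) into a share of Earth's whole surface."""
    return fraction * (LAND_FRACTION if of_land else WATER_FRACTION)

print(f"Eurasia:      {surface_share(0.363):.1%}")                  # ~10.6%
print(f"Afro-Eurasia: {surface_share(0.567):.1%}")                  # ~16.6%
print(f"The Americas: {surface_share(0.285):.1%}")                  # ~8.3%
print(f"Indian Ocean: {surface_share(0.195, of_land=False):.1%}")   # ~13.8%
print(f"Atlantic:     {surface_share(0.235, of_land=False):.1%}")   # ~16.6%
print(f"Pacific:      {surface_share(0.466, of_land=False):.1%}")   # ~33.0%
```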

By the way, there are lots of things that don’t show up on the second chart above: islands, inland seas, lakes, rivers, etc., and there’s a good reason for that: on the scale of even the larger pie chart above, all these things are so small, compared to the oceans and continents, that they simply aren’t visible.

I Now Have Empirical Evidence for the Existence of My Own Brain!

[Image: a page of brainwave tracings from the sleep study]

A doctor needed to look at my brainwaves (and a bunch of other MSLs, also known as “medical squiggly lines”), as recorded during a sleep study, so of course I asked him if I could see them myself. Who wouldn’t want to see their own brainwaves?

Zome: Strut-Length Chart and Product Review

This chart shows strut-lengths for all the Zomestruts available here (http://www.zometool.com/bulk-parts/), as well as the now-discontinued (and therefore shaded differently) B3, Y3, and R3 struts, which are still found in older Zome collections, such as my own, which has been at least 14 years in the making.

[Image: Zome strut-length chart]

In my opinion, the best buy on the Zome website that’s under $200 is the “Hyperdo” kit, at http://www.zometool.com/the-hyperdo/, and the main page for the Zome company’s website is http://www.zometool.com/. I know of no other physical modeling system, whether for mathematics or for the sciences, that exceeds Zome in either quality or usefulness. I’ve used it in the classroom, with great success, for many years.

A Graph Showing Approximate Mass-Boundaries Between Planets, Brown Dwarfs, and Red Dwarf Stars

[Graph: approximate mass boundaries between planets, brown dwarfs, and red dwarf stars]

I gathered the data for this graph from a variety of Internet sources, and it is based on a mixture of observational data and theoretical work produced by astronomers and astrophysicists. The mass-cutoff boundaries I used are approximate, and likely to be somewhat “fuzzy” as well, since other factors, such as chemical composition, age, and temperature (not mass alone), also play a role in determining the category of individual objects in space.

Also, the mass range for red dwarf stars goes much higher than the top of this graph, as implied by the thick black arrows at the top of the chart. The most massive red dwarfs have approximately 50% of the mass of the Sun, or about 520 Jovian masses.
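As a quick check on that last figure, here is the conversion as a minimal Python sketch (the ratio of the Sun’s mass to Jupiter’s, roughly 1048, is a standard value and not taken from the graph itself):

```python
SUN_IN_JUPITER_MASSES = 1048            # the Sun is roughly 1048 times as massive as Jupiter

red_dwarf_max_solar = 0.50              # ~50% of the Sun's mass
red_dwarf_max_jovian = red_dwarf_max_solar * SUN_IN_JUPITER_MASSES

print(round(red_dwarf_max_jovian))      # 524, consistent with "about 520 Jovian masses"
```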

A Graph of Infections and Deaths During the First Four Months of the 2014 Ebola Outbreak

[Graph: infections and deaths during the first four months of the 2014 ebola outbreak]

Source of data for this graph:  http://www.abc.net.au/news/2014-07-31/ebola-timeline-deadliest-outbreak/5639060.

The date I used as “day zero,” March 25, 2014, is the day when the Ministry of Health in Guinea announced an outbreak of ebola was in progress, according to this source: https://en.wikipedia.org/wiki/Ebola_2014. It started earlier, of course, but was not widely known before that date. The last data points shown are for July 27, 2014, the most recent date for which I have the needed information.
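For anyone wanting to rebuild the graph, the horizontal axis is just a count of days since that “day zero.” A minimal sketch of that conversion in Python (the two dates are the ones mentioned above):

```python
from datetime import date

DAY_ZERO = date(2014, 3, 25)    # Guinea's Ministry of Health announces the outbreak

def days_since_day_zero(d: date) -> int:
    """Convert a calendar date into the day number used on the graph's horizontal axis."""
    return (d - DAY_ZERO).days

print(days_since_day_zero(date(2014, 7, 27)))   # 124 -- the last data point shown
```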

If Recent Trends Continue, Gasoline Will Soon Be Free

Here’s what gas prices have done in the U.S. during the last three months:

[Chart: U.S. gasoline prices over the last three months]

The price of gas three months ago was $3.79 per gallon, and now it is $3.27, so, in three months, it dropped 52 cents per gallon.  That’s a rate of -$2.08 per year per gallon, so, if this recent trend continues, gasoline will cost not much more than a dollar a gallon a year from now, and will become free sometime later in 2014. In fact, by the end of 2014 (again, if this trend continues), gasoline will have a negative price, which means they’ll pay us to take the stuff.
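Here is the extrapolation spelled out (a tongue-in-cheek Python sketch using only the two prices quoted above; the straight-line model is the whole joke, not a forecast):

```python
price_now = 3.27            # dollars per gallon today
price_3_months_ago = 3.79   # dollars per gallon three months ago

rate_per_year = (price_now - price_3_months_ago) * 4    # -2.08 dollars per gallon per year

def price_in(years: float) -> float:
    """Naively extend the last three months' trend into the future."""
    return price_now + rate_per_year * years

print(f"${price_in(1.0):.2f} per gallon a year from now")        # $1.19
print(f"free in about {-price_now / rate_per_year:.1f} years")   # about 1.6 years
```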

Sheryl Crow must have known this day would come, for she wrote a song about gasoline becoming free a few years back, which you can find below (embedded from YouTube): a song called, of course, “Gasoline.” Enjoy!