Lattice Insight, LLC - Mathematical Statistics Consulting Blog
<![CDATA[The Significance of Statistical Significance]]>Fri, 11 Dec 2020 20:58:18 GMThttp://latticeinsight.com/blog/the-significance-of-statistical-significanceBelow is a link to my paper in the Proceedings of the 2020 Joint Statistical Meetings, "The Significance of Statistical Significance":

ww2.amstat.org/MembersOnly/proceedings/2020/data/assets/pdf/1505342.pdf

Abstract: Hypothesis testing and decision rules are in the news as never before. The reproducibility of experiments, one of the touchstones of the scientific method, is uncertain, while some warn that most scientific results are wrong; see Ioannidis, as well as van der Laan.

At the heart of the controversy is the significance of statistical significance: specifically, the significance of p-values. Poorly crafted decision rules have led to a loss of confidence in p-values, with some proposing to ban this incredibly useful tool altogether. We reject this over-reaction.

​We will discuss three aspects of p-values: 1) improving model specification, thereby reducing the probability of a type II error (false negative), by introducing new families of transformations to reduce skewness and excess kurtosis; 2) setting significance level as a decreasing function of sample size, thereby reducing the probability of a type I error (false positive), thus compromising between a fixed significance level and a fixed meaningful effect size; 3) continuous decision rules that assign plausibility levels to the null hypothesis and alternative hypothesis.
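To illustrate the second idea only (a simple stand-in, not the construction used in the paper): one way to let the significance level fall with sample size is to reject the null hypothesis only when the observed mean exceeds half of a pre-specified meaningful effect size. A minimal sketch in Python, assuming a one-sample z-test with known standard deviation:

```python
# Illustration only (not the paper's method): reject H0: mu = 0 when
# |xbar| exceeds half of a meaningful effect size delta.  The implied
# two-sided significance level then decreases as the sample size n grows.
from scipy.stats import norm

def implied_alpha(n, delta=0.5, sigma=1.0):
    """Significance level implied by the rejection rule |xbar| > delta / 2."""
    z_cut = (delta / 2) * n ** 0.5 / sigma   # rejection cutoff on the z scale
    return 2 * (1 - norm.cdf(z_cut))

for n in (25, 100, 400, 1600):
    print(n, round(implied_alpha(n), 4))
# alpha falls from about 0.21 at n = 25 toward 0, a compromise between a
# fixed significance level and a fixed meaningful effect size.
```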
]]>
<![CDATA[The "experts" were wrong again on the unemployment rate]]>Fri, 04 Sep 2020 17:51:57 GMThttp://latticeinsight.com/blog/the-experts-were-wrong-again-on-the-unemployment-rateAh, the "experts" - often wrong, but never in doubt. They merely make their pronouncements from Mount Olympus, or Mount Bloomberg News, or Mount CDC/NIH, or Mount Imperial College - and mere mortals prostrate themselves. No independent thinking welcome here!

Back in the old days before coronavirus, we used to evaluate prognostications on their accuracy. We even teach that still, although the concept is clearly old-fashioned.

When I turned on the business news this morning, I heard that the consensus forecast of the "experts" was that the (U3) unemployment rate for August, 2020 would decline slightly from July's 10.2% to roughly 9.8%. I thought that was too high. I expected a rate around 9.2%, with a possibility of being slightly below 9%. The true unemployment rate for August was 8.4%.

My inexpert guess was much closer than that of the experts. How was that possible? Don't they have their fancy terminals and databases and expensive statistical software, that can compute multivariate exogenous seasonal autoregressive fractionally integrated moving average time series? How can an amateur with Excel under one arm hope to top that? Abandon hope, all ye who enter here!

It seems that there is a connection between the U3 unemployment rate, reported monthly, and the weekly insured unemployment rate, which appears as part of the initial claims news release. The insured unemployment rate only refers to that part of the labor force covered by unemployment insurance. The rate is the ratio of continuing unemployment claims to the insured labor force, and has been reported weekly since January, 1971.

While the two rates are not identical, they should be closely related. The following scatterplot relates the unemployment rate (UNRATE on the FRED online time series database) to monthly aggregates of the insured unemployment rate (IURSA). For reference purposes, in the period from January, 1971 to February, 2020, the mean value of IURSA was 2.8, and for UNRATE it was 6.2. In February, 2020 alone, IURSA was 1.2 (a record low), and UNRATE was 3.5 (about a 50-year record low). This point appears in the lower left of the graph.
The fit shown above is a power law fit, with R-squared over 66%. The large cloud of 590 observations to the lower left consists of the months from January, 1971 to March, 2020. April, 2020 is the highest point; May, 2020 is the point farthest to the right; then June, July, and August move down and to the left. The five outliers to the upper right have very little effect on the power law equation or R-squared.

On the first Thursday of the month (September 3 in this case), we usually know the average IURSA for the previous month. This was 9.9% for August, 2020. The question is what should have been expected for UNRATE. The model from the previous month tells us that the best point estimate is about 12.6%.

The July IURSA was 11.4%. Does it seem reasonable that, when IURSA declined 1.5 percentage points month-over-month to 9.9%, the closely related UNRATE should have increased from July's 10.2% to 12.6%? Of course not.

Rather than regressing to the best-fit curve, I am proposing that the points themselves are regressing to their pre-pandemic level (1.2%,3.5%) as ordered pairs. This is visible in the sequence of points from May through August, 2020. Even though we only knew the points from May through July, the IURSA coordinate for August was known to continue that trend. I deduced that it was more reasonable to project UNRATE on a line connecting July, 2020 (11.4%,10.2%) to February, 2020 (1.2%,3.5%). This is shown in the graph below.
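For readers who want to check the arithmetic, here is a minimal sketch of that projection in Python (the coordinates are the ones quoted above; the method is plain linear interpolation in the (IURSA, UNRATE) plane):

```python
# Slide along the line from the July 2020 point back toward the
# pre-pandemic February 2020 point, using the known August IURSA value.
feb = (1.2, 3.5)     # (IURSA, UNRATE), February 2020
jul = (11.4, 10.2)   # (IURSA, UNRATE), July 2020
iursa_aug = 9.9      # August 2020 IURSA, known on Thursday, September 3

slope = (jul[1] - feb[1]) / (jul[0] - feb[0])
unrate_aug = feb[1] + slope * (iursa_aug - feb[0])
print(round(unrate_aug, 1))   # about 9.2, versus the 8.4 actually reported
```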
Although I too overestimated August's unemployment rate, this simple method outperformed the experts. Sometimes simplicity is a virtue.
]]>
<![CDATA[The social-economic response to the pandemic]]>Sun, 23 Aug 2020 13:14:04 GMThttp://latticeinsight.com/blog/the-social-economic-response-to-the-pandemicSeveral months into the Wuhan coronavirus pandemic, it is reasonable to continue to ask whether the social-economic response to the pandemic has been proportional to the health impact of the pandemic within the United States.

This raises the question about how to measure both impacts. We use two weekly measures provided by the federal government.

The CDC compiles data for the weekly total number of deaths from all causes during a flu season, which is one year measured from October 1 to September 30. We compare this number to the weekly total number of deaths from all causes averaged during the preceding six years (2013-2019), adjusted for the size of the population, expressing the result as a ratio in percent terms.

The US Labor Department compiles a statistic called the insured unemployment rate, which measures the fraction of the unemployment-insured labor force that is continuing to collect unemployment benefits. This number is smaller than the better-known unemployment rate, but it is reported weekly. We compare this number to the insured unemployment rate reported for several months before the pandemic, a record low of 1.2%, expressing the result as a ratio in percent terms.

Lastly, we compute the ratio of the two ratios above, which tells us how hard governmental policy responses have hit the economy relative to the size of the health impact. The results are shown below.
The total deaths ratio hit its maximum value of 134% during mid-April; that is, during one week, there were 34% more deaths than normal. Since early May, however, total deaths per week from all causes have returned to approximately normal. It would be reasonable to say that the pandemic has ended, unless for some reason people have stopped dying from cardiovascular disease and cancer.

The unemployment ratio hit its maximum value of 1425% during mid-May; that is, during one week, the insured unemployment rate was 14.25 times its normal value. As of early August, the latest complete data available, this ratio stands at 850%.

The ratio-of-ratios, the measure of disproportionate response, hit a maximum of 1368% during mid-May, and its most recent value in early August was 848%. This suggests that the drastic impact of policy choices on the economy far outweighs the danger posed by the coronavirus.
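For concreteness, here is a minimal sketch of the three ratios, using illustrative weekly values rather than the actual CDC and Labor Department series:

```python
# Ratio-of-ratios sketch with illustrative weekly values (not the real data).
deaths_this_week   = 60_000   # weekly deaths from all causes (hypothetical)
deaths_normal_week = 56_000   # 2013-2019 average, population-adjusted (hypothetical)
iur_this_week      = 10.2     # insured unemployment rate, percent (hypothetical)
iur_pre_pandemic   = 1.2      # record-low pre-pandemic insured unemployment rate

deaths_ratio    = 100 * deaths_this_week / deaths_normal_week   # health impact, %
iur_ratio       = 100 * iur_this_week / iur_pre_pandemic        # economic impact, %
ratio_of_ratios = 100 * iur_ratio / deaths_ratio                # disproportion, %

print(round(deaths_ratio), round(iur_ratio), round(ratio_of_ratios))
# 107 850 793: in this example, the economic hit is roughly eight times the health hit
```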
]]>
<![CDATA[Hydroxychloroquine's effect on coronavirus death rates]]>Sun, 16 Aug 2020 21:00:11 GMThttp://latticeinsight.com/blog/hydroxychloroquines-effect-on-coronavirus-death-ratesI would like to present to you an excellent, in-depth survey of the use of hydroxychloroquine (HCQ) in the United States to treat Wuhan coronavirus:

www.tabletmag.com/sections/science/articles/hydroxychloroquine-morality-tale

Another website contains maps of HCQ restrictions by state within the United States, and by country around the world: americasfrontlinedoctorsummit.com/hcq/

A group of researchers discovered a 79% reduction in mortality from coronavirus in countries that make extensive use of HCQ compared to countries that restrict it: hcqtrial.com/

I wondered if there was a comparable result for American states. I combined the information from the HCQ by state map above, together with Worldometers data on the death rate per 1 million population (using a logarithm transformation): www.worldometers.info/coronavirus/country/us/

Here is the result.
The p-value of this relationship is 0.0002, which is highly significant. Each additional level of restriction of HCQ adds about 37% on average to coronavirus mortality; conversely, each level of HCQ restriction that is removed reduces mortality by about 27%. Moving from level 5 restrictions (for example, inpatient use only) to level 1 restrictions (ordinary prescription) reduces mortality by about 72% on average. That reduction is remarkably close to the one found by the HCQ trial study.

Professional statisticians will notice that the predictor variable is really ordinal, not continuous, so caution is called for in interpreting the results. This is a quick analysis, but there can be no doubt from the graph that even an ANOVA will produce a similarly significant result.
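For those who wish to see the shape of the analysis, here is a minimal sketch of a log-linear fit of mortality on restriction level; the data below are fabricated placeholders, not the actual values from the sources above:

```python
# Log-linear fit of ln(deaths per million) on HCQ restriction level (1-5).
# Fabricated illustrative data; not the actual state-level values.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
level = rng.integers(1, 6, size=51)                        # restriction level per state
log_deaths = 4.0 + 0.315 * level + rng.normal(0, 0.4, 51)  # ln(deaths per million)

fit = linregress(level, log_deaths)
pct_per_level = 100 * (np.exp(fit.slope) - 1)   # % mortality increase per added level
print(round(fit.pvalue, 6), round(pct_per_level, 1))
# A slope near 0.31 on the log scale is about +37% per level of restriction;
# removing four levels multiplies mortality by exp(-4 * slope), roughly a 72% drop.
```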
]]>
<![CDATA[International alliance with the United States]]>Thu, 09 Jul 2020 18:17:29 GMThttp://latticeinsight.com/blog/international-alliance-with-the-united-statesThe map below provides one way to estimate the closeness of the alliance between the countries of the world and the United States. It is based on the US State Department's annual report, Voting Practices in the United Nations. We focus on the last 3 calendar years, 2017-2019, and votes in the UN General Assembly identified by the US State Department as "important" policy votes. The State Department awards a point for full agreement, and a half-point for partial agreement, with American votes in the UN General Assembly. The scores for the last 3 years were averaged together to create the following graphic. In this graphic, blue represents complete agreement, and red represents complete disagreement.
Here are the scores for selected countries.

100%: USA
96.7%: Israel
84.0%: Canada
78.2%: Australia
58.4%: United Kingdom
58.1%: Italy
57.3%: France
57.0%: Germany
54.6%: Japan
46.0%: Mexico
41.9%: Brazil
35.9%: Nigeria
29.7%: Bangladesh
29.4%: India
28.1%: Pakistan
​25.5%: Indonesia
20.7%: Russia
14.1%: China
]]>
<![CDATA[Rely on the data: racism around the world]]>Tue, 07 Jul 2020 20:57:41 GMThttp://latticeinsight.com/blog/rely-on-the-data-racism-around-the-worldThe question on everyone's mind these days is: how racist is America? To put things in context, it would help to compare America to other countries.

We rely on a graphic from a Washington Post story: www.washingtonpost.com/news/worldviews/wp/2013/05/15/a-fascinating-map-of-the-worlds-most-and-least-racially-tolerant-countries/

This story reported on a study done in 2013 by the World Values Survey, based in Sweden. They asked people around the world whether they were willing to have a neighbor of a different race. A higher score on this survey represents lower racial tolerance, shown in red on the map. A lower score represents higher tolerance, shown in blue on the map; see below.
Jordan and India were by far the least racially tolerant countries in the survey. The Americas (including the United States), northern Europe, Australia, and New Zealand were the most racially tolerant.

Next, we examine migration data from the United Nations. The following map represents total net migration from 1960 to 2019. Blue countries have net immigration (inflow of people), red countries have net emigration (outflow of people), and the depth of the color represents the number of people migrating.
We highlight the countries with the most migration in the following bar chart.
Clearly, when the people of the world get to vote with their feet, the country that more people want to move to than any other country by far is the United States of America, with more than 50 million net immigrants from around the world in the last 60 years. No other country took in more than 14 million immigrants in the same period.

Total world migration during 1960-2019 was about 182 million. Thus, the United States was the goal of about 28% of all migrants in the world. If we rely on the wisdom of the crowd, it seems that the United States is a desirable place to live after all.
]]>
<![CDATA[Sue the experts]]>Fri, 15 May 2020 19:06:08 GMThttp://latticeinsight.com/blog/sue-the-experts"Often wrong, but never in doubt." They are on your television screen, the credentialed class, the talkers, the bureaucrats, the professors, impeccably tailored, coiffed, and made-up. The three words not in their vocabulary: "I don't know." We trust them to make life-or-death decisions for us, not because they have a track record of being correct, but because we are told they are the "experts". How do we know they are "experts"? Because other people say so, people who themselves are "experts". Their "expertise" is determined through a circular process. They are in the club, and you are not.

If you do dare to think for yourself, you are deemed "dangerous", "discredited", "debunked". After a struggle of millennia to liberate the mind of man from tyrannical diktat, and the urging of decades to "Question authority", all of a sudden, "The science is settled"; "We follow the science"; "We follow the experts"; "We follow the consensus". We dare not look at the man behind the curtain.

A few decades after repeated, dire warnings about the upcoming ice age, the "experts" decided that all life on earth will end in 11 years due to catastrophic anthropogenic global warming. Like good lemmings, the chattering class instantly adopted the new cry of "wolf". That changed back to global cooling this morning, with a report on the sunspot minimum.

Dr. Fauci's history includes incorrect forecasts on the AIDS pandemic, and statements from early 2020 about the minimal danger from the Wuhan coronavirus. The Imperial College pandemic model came from a forecaster who overestimated deaths from mad cow disease by several orders of magnitude. The IHME pandemic model's 95% prediction intervals only contained actual future observations 30% of the time. The "experts" decided that elderly coronavirus patients should be placed in nursing homes. None of this has stopped the chattering class from obeisance to these failed prophets.

The world is a complex place, with many variables. Even for the known variables, their values are often not known.

This household is becoming aware of a new horror of our national lockdown: the denial of medical care to the 99.6% of Americans who are not coronavirus patients. Estimates are for tens of thousands of missed cancer diagnoses in America; studies in Australia and South Africa predict similarly dire consequences, as the government allows treatment of only one disease that has afflicted 0.06% of the world's population, and killed 0.004% of the world's population. What did the chattering class say about the importance of saving even one life?

This petition needs 100,000 signatures by June 8 to get action. Please help us by signing: petitions.whitehouse.gov/petition/order-governors-allow-their-citizens-receive-healthcare-we-need-coronavirus-isnt-only-health-problem

The following graphics show my forecast of new coronavirus cases in the United States, with R-squared close to 96%; and weekly new coronavirus cases per million population in several countries.
]]>
<![CDATA[The deadly tyranny of scientism]]>Fri, 01 May 2020 20:27:59 GMThttp://latticeinsight.com/blog/the-deadly-tyranny-of-scientismEvery generation has its new non-theistic religion. We have seen the worship of money, power, fame, celebrity, athletic prowess, utopianism, and environmentalism. Now, the best way to virtue-signal is to assert that you "believe in the science", or that you "believe the experts". However, the "experts" aren't always right; and "the science" has nothing to do with science. The adherents of scientism actually worship authority; but the worship of authority is deadly for science.

The adherents of scientism do not attempt to define science. We need to know what science is before we state that we "believe in the science". Science is the search for the objective truth regarding the laws and details of nature. It is hypocritical for the same intellectuals who have been lecturing us for the last half-century that there is no such thing as objective truth, suddenly to turn around and claim to be the sole custodians of objective truth.

There is objective truth, as revealed, for example, in mathematics. Both mathematics and natural science demonstrate that objective truth is infinitely detailed; we can always learn more. The adherents of scientism, those who "believe the experts", have held back scientific progress for centuries. In modern times, they proudly identify themselves with "the consensus".

Five hundred twenty-eight years ago, the consensus of the "experts" was that Columbus's voyage was doomed to fall off the edge of the flat earth. Two hundred years ago, the consensus of the "experts" was that mankind could not survive travel faster than half the speed limit of most highways today. The consensus of the "experts" one hundred fifty years ago was that light propagates through the luminiferous ether, and that there is a fixed frame of reference for physics. Einstein's "Jewish physics" was rejected in a letter signed by the "experts" of German physics. The discoveries of radioactivity and antibiotics were accidents. The discoveries of the citric acid cycle and quasi-crystals were not published on their initial submission; the second one was deemed impossible; both discoveries earned Nobel prizes.

Unfortunately, humanity never takes this lesson to heart. The elites have never given up on the goal outlined in Plato's Republic: to constitute a ruling class. Original thinking is suppressed. Mere observation of fact is treated as dangerous. One dare not acknowledge that the emperor has no clothes.

Someone sent me an interesting graphic comparing mortality rates for the influenza virus of 2017-18 with the Wuhan coronavirus of 2019-20, in America, stratified by age. See below.
I qualify this: I have not seen the original data for this study. However, there does not appear to be much difference.

The following graphics show cumulative Wuhan coronavirus deaths per million population, from Our World in Data, and weekly new cases of Wuhan coronavirus for the most afflicted countries.
The first chart shows that America's cumulative death rate from Wuhan coronavirus is substantially lower than most countries in western Europe. The second chart might seem to imply that Brazil is doing something right; however, they are emerging from summer to cooler fall weather, and their cases may begin to rise.

Immersing myself in the data for the last two months, I am gradually concluding that nothing we do makes a difference in the progress of the virus. The virus is working towards a goal of infecting as many people as possible. Most infected people will develop immunity, and only people who have already been infected, and thus become immune, will slow the progress of the virus. This is the concept of herd immunity.

Originally, America was told that we must "flatten the curve". We were fearful that a tidal wave of sick patients would overwhelm the health care system, with tens of thousands of corpses blocking the entrances of hospitals. We have passed the peak, without overwhelming the health care system in most of America. Our local hospital is performing operations at only 20% of its full capacity, due to the governor's orders. Presumably, cancer screenings, stents, dialysis, and the like are "elective" procedures - if you are not a member of the elite.

But the elites have moved the goalposts. Now that we have successfully "flattened the curve", we are being told that we cannot be released from effectively national house arrest until the Wuhan coronavirus is eradicated.

One week from today, we are likely to discover that the unemployment rate is at least 18%; GDP is expected to plunge in the second quarter at an annualized rate of close to 30%. The people advocating this draconian national lockdown work from home on their computers. Their voices are amplified by their friends in the media. The latter tell us soothingly and mendaciously, "We are all in this together". They are not small-business owners struggling to pay bills on a paper-thin margin that has been torn asunder, nor are they craftsmen or independent contractors.

But what about the death toll? It seems that during our national crisis, the death toll has decreased. The following numbers are from the CDC: US deaths from all causes in weeks 7 through 13 of 2020, compared with average deaths from all causes in weeks 7 through 13 of 2014 through 2019 (in parentheses).
  • Week 7 - 55,981 (57,033)
  • Week 8 - 55,494 (56,300)
  • Week 9 - 54,834 (55,848)
  • Week 10 - 54,157 (56,160)
  • Week 11 - 52,198 (55,395)
  • Week 12 - 51,602 (55,035)
  • Week 13 - 52,285 (54,212)

My conclusion is straightforward:

Social distancing, face masks, washing: yes!

Stay-at-home orders, business closures: no!

​Never surrender your right to think for yourself!

]]>
<![CDATA[Some animals are more equal than others]]>Sun, 26 Apr 2020 23:45:59 GMThttp://latticeinsight.com/blog/some-animals-are-more-equal-than-othersIt's a tough time to be a "non-essential" worker or a small-business owner. One of America's most thoughtful, unifying, and inspirational governors recommended that such people simply get jobs as essential workers. Problem solved!

The problem may be a bit more difficult than it seems, however. The US Department of Labor follows a weekly measure of unemployment called the Insured Unemployment Rate. In the past decade, the standard unemployment rate has run at about 3 times this number. The jump last week from 1.2% to 11% should be alarming, as seen in the graphic below.
If we add in the effects of underemployment, shown in the U6 rate, we see that U6 has run between 3 and 7 times the insured unemployment rate, as shown in the graphic below.
Although this ratio declines in bad economic times, the most recent level of this ratio, about 3.6, implies that we could be looking at a U6 rate close to 40%, with standard unemployment close to 20%, in the next Labor Department release Friday morning, May 8.
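The back-of-the-envelope projection, as a minimal sketch using the figures quoted above:

```python
# Project U6 and U3 from the insured unemployment rate (IUR),
# using the multipliers discussed above.
iur_latest = 11.0   # insured unemployment rate, percent
u6_to_iur  = 3.6    # most recent ratio of U6 to the insured rate
u6_to_u3   = 2.0    # rough ratio of U6 to U3 in recent years

u6_projection = iur_latest * u6_to_iur     # close to 40 percent
u3_projection = u6_projection / u6_to_u3   # close to 20 percent
print(round(u6_projection, 1), round(u3_projection, 1))   # 39.6 19.8
```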

State governments are closing businesses arbitrarily, with no guaranteed date for re-opening. These businesses employ tens of millions of Americans, run on a thin margin, and are weeks away from being unable to pay their own bills.

An esthetician's business was shut down by a wise governor: www.breitbart.com/politics/2020/04/20/pennsylvania-nonessential-worker-blasts-gov-tom-wolf-why-dont-you-stop-getting-paid/

A consignment store owner had to lay off her employees, while big-box stores are still permitted to sell clothing, as directed by the same wise governor: www.breitbart.com/politics/2020/04/20/pa-consignment-store-owner-how-can-walmart-target-still-sell-kids-clothes-i-cant/


A farmer had 61 thousand chickens euthanized, at a time when our supermarket has very few eggs on the shelf: www.startribune.com/egg-demand-shifted-and-61-000-minnesota-chickens-were-euthanized/569817312/

Why has this been done? To save lives? So far, projections show that American deaths from Wuhan coronavirus will be approximately as many as the high end of flu seasons. Indeed, we may well be on the road to herd immunity. Several recent studies have found the prevalence of Wuhan coronavirus antibodies in the population to be dozens of times higher than the number of confirmed cases. But the Congressional Budget Office is forecasting a contraction of 40% in GDP, a return to the 1930s, if we continue the lockdowns.

The following graphic shows the per capita death rate from Wuhan coronavirus in America to be running substantially less than that in Europe.
Within America itself, there is tremendous variability in the impact of Wuhan coronavirus by county, as shown in the following graphic. While New York City and its immediate suburbs have been hit hard, the parts of America that grow our food have been touched more lightly. As I showed in my previous post, this is a direct function of population density. To be blunt, America should not be forced into a Venezuela-style calamity, starving with no toilet paper, due to a disease primarily affecting a few large cities and their immediate suburbs (also shown in my previous post).
In a nutshell, the lesson is this.

Social distancing, face masks, washing: yes!

Stay-at-home orders, business closures: no!

]]>
<![CDATA[My wife's tumor]]>Mon, 20 Apr 2020 22:38:57 GMThttp://latticeinsight.com/blog/my-wifes-tumorTwo months ago, in February, an MRI revealed a "mass" near a gland in my wife's neck: an apparent parotid lipoma. Most such "masses" are benign and slow-growing, but not all of them. The doctor recommended a follow-up MRI in mid-April, to see if the "mass" had changed. We made our appointment.

In early April, however, the imaging center, across the state line in Delaware, informed us that all imaging centers in the healthcare chain we use, save one, would be closing due to Wuhan coronavirus. That one remaining center would be open only for the highest priority cases, and that does not include tracking the progress of a "mass". The scheduler informed us, with all due gravity and seriousness, of the fact that one Delaware resident had already died from the Wuhan coronavirus. To put this into context, approximately 23 Delawareans die every day on average.

This morning, we had a tele-conference with the doctor. There was little we could discuss without a new MRI scan. The doctor shared much interesting information, however.

The major hospital in northern Delaware performs about 180 surgeries per day in ordinary times. However, they have reduced surgeries by about 80%. (So much for overwhelming the system.) The triage rule for medical care is excruciatingly stingy; the only patients who can receive surgery are those who would die, or suffer loss of limb or vision, within 2 weeks.

Again: if you are gravely ill but could be expected to die 3 weeks from now, the local healthcare system will not operate on you. After all, if you come into their facilities, you might get sick or make someone sick!

I do not want to single out this particular health care conglomerate for opprobrium; many healthcare systems under the control of a certain political party have rationed care to the point of denying care. I have read the harrowing account of a woman's horrifying suffering in a Virginia hospital because the governor forbids the prescription of hydroxychloroquine for treating Wuhan coronavirus.

The governors of Pennsylvania, Delaware, and other states have assumed godlike powers of decision about which medical procedures are "elective" and which are not. Abortion is an "essential" activity, but following the progress of a tumor is not essential.

Yesterday, while driving on our first shopping trip in 3 weeks, we observed many closed businesses, including businesses closed permanently. The customers could not be trusted to determine which businesses are essential, so the governors did it for them. The wholesaler we patronized sells clothing, office supplies, hardware, garden supplies, and so on, like many of the closed businesses we saw; but the food sold by the wholesaler enabled the entire business to remain open. I recommend that all businesses sell snacks and beverages to qualify as "essential".

​The graphic below depicts the number of deaths from Wuhan coronavirus as of April 19. New York City (5 counties) comprises some 35% of the national total, in red. The orange slices are 7 counties in the New York City metropolitan area. The 3 yellow counties are other hard-hit counties (and one county-equivalent). Green represents the rest of the country, about 47% of the total.
The graphic below depicts the Wuhan coronavirus death rate per 1 million population in the USA as of April 19. We use the same color coding. Notice that when these 15 counties and county-equivalents are removed from the USA total, the death rate drops to about half the current national average.
The following graphic shows these counties on a national map. Please do not misinterpret this map. Wuhan coronavirus does exist outside the colored counties, but at a lower rate.
The question has to be asked: is the cure worse than the disease? The answer is complex. The disease is fatal for a small number of people, especially those who are older and have certain pre-existing medical conditions. The disease lingers for some survivors in unpleasant forms. Wuhan coronavirus must be taken seriously.

However, social distancing, which is effective for containment, does not require stay-at-home orders, which are not effective for containment, according to a second such study I read today. Theoreticians who are monomaniacally focused on the eradication of the Wuhan coronavirus, and the politicians who follow their recommendations, are willing to sacrifice the lives of hundreds of thousands of Americans who will not get the medical care they need, not to mention the needless drug overdoses and suicides we will suffer in the next depression.

It is time to open up America, while maintaining social distancing, wearing face masks, and washing. Otherwise we will not have a country left to save.
]]>
<![CDATA[Six hundred]]>Fri, 17 Apr 2020 01:03:49 GMThttp://latticeinsight.com/blog/six-hundredSix hundred. Six zero zero. That is the approximate number of jobs destroyed in the United States, so far, for every American life lost to the Wuhan coronavirus, so far.

As of April 16, 2020, about 34 thousand Americans have died with Wuhan coronavirus. Under CDC guidance, coronavirus does not have to be known as the principal cause of death. Indeed, the decedent need not have tested positive for the disease, as long as coronavirus is presumed or suspected to have been present.

In the 1957-58 Asian flu pandemic, about 100 thousand Americans died from the flu, in a country with about half the population of today's America. That is equivalent to about 200 thousand deaths today. The 2017-18 flu epidemic killed about 61 thousand Americans, for comparison.

America's response to the current pandemic has been to shut down the economy, and in some states, to put citizens under virtual house arrest. Moralizing politicians and health experts insist that the lockdown continue nationally until there are no new cases for two straight weeks. America suffers about 7700 deaths each day already, including some from infectious disease (and some from medical error...), but never has there been such a drastic response to a disease that kills far fewer children than the flu.

The following graphic shows the progress of new cases of Wuhan coronavirus in the six most affected countries, in weekly new cases per one million population, measured from the first day on which total cases numbered at least one per one million population. All countries hit their peaks during week 4 or week 5. America is past the peak of new cases. ("Geo mean" refers to the geometric mean of the six time series.)
The following graphic shows the strong relationship between Wuhan coronavirus deaths per one million population, and population density. The data points are the 50 states plus DC. However, New York state has been separated into two components: New York City, and the remainder of the state. It is remarkable that R-squared for this power-law model is nearly 49%. Remove Alaska from the data set, and R-squared increases to about 51%.
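A minimal sketch of this kind of power-law fit, with synthetic data standing in for the state-level values (regress ln(deaths per million) on ln(population density) and read off the exponent):

```python
# Power-law fit: deaths per million ~ C * density**b, estimated as a linear
# regression on the log-log scale.  Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(1)
density = np.exp(rng.normal(4.5, 1.5, 52))                      # people per sq. mile (synthetic)
deaths  = 2.0 * density**0.6 * np.exp(rng.normal(0, 0.8, 52))   # deaths per million (synthetic)

b, a = np.polyfit(np.log(density), np.log(deaths), 1)   # b is the power-law exponent
resid = np.log(deaths) - (a + b * np.log(density))
r_squared = 1 - resid.var() / np.log(deaths).var()
print(round(b, 2), round(r_squared, 2))
```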
The implication is clear: high population density is a risk factor for Wuhan coronavirus mortality. The relationship would most likely be stronger still if it were plotted by county rather than by state; the hardest hit counties in America are also the most densely populated.

The above graphic depicts the data points, the states, filled in according to a color spectrum in which the lowest risk states are blue and the highest risk locations are red. This graphic justifies the proposal for re-opening America announced today by the White House Coronavirus Task Force. Decisions should be made, not just state-by-state, but even on a county-by-county basis.

America runs a terrible danger if we continue in the one-to-two-year lockdown proposed by some elites. It is not possible to turn on an economy like a light switch. Unemployment is devastating to individuals and to families; to have it running at depression-era levels would be a disaster.

The elites may believe that food grows in the stock room at Whole Foods, but the reality is far more complex. If farmers are not able to harvest their winter wheat and plant a spring crop, we could experience famine on a communist scale.

Stay-at-home orders do not add value to the benefits of social distancing, according to the studies I have read. The recommendations thus seem clear: allow businesses to re-open; allow people to travel as they wish; but wear face masks in a relatively crowded space, and wash frequently. This strategy has worked successfully in eastern Asian countries, and can work here, until a vaccine becomes available.

America will not remain forever in isolation, jobless and starving. It is time to re-open, cautiously, now.
]]>
<![CDATA[A new record for the economy]]>Thu, 05 Sep 2019 00:04:38 GMThttp://latticeinsight.com/blog/a-new-record-for-the-economy
The American economy has just achieved a very positive record, in a way that workers can appreciate. Average hourly earnings of production and non-supervisory employees, when adjusted for CPI inflation and underemployment, have reached their highest level since records began to be kept in 1964. The level is $21.67 as of 2019Q2.

(As usual on our blog, we estimate underemployment, the U6 rate, to be twice the standard U3 unemployment rate. The ratio of U6 to U3 has stayed between 1.8 and 2.0 since 2012.)
A negative metric on the economy is at its lowest level in 63 years. Our misery index is the sum of CPI inflation plus underemployment, as defined above (the U6 rate approximated by twice the U3 rate). The level as of 2019Q2 is 9.1%. The last time this misery index was lower was 1956Q1.
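A minimal sketch of the misery index as defined here, with illustrative inputs chosen to reproduce the 9.1% figure (not the actual FRED values):

```python
# Misery index as defined in this post: CPI inflation plus underemployment,
# with the U6 rate approximated by twice the U3 unemployment rate.
cpi_inflation = 1.8    # year-over-year CPI inflation, percent (illustrative)
u3_rate       = 3.65   # U3 unemployment rate, percent (illustrative)

u6_estimate  = 2 * u3_rate
misery_index = cpi_inflation + u6_estimate
print(round(misery_index, 1))   # 9.1
```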
]]>
<![CDATA[Freedom and stock performance]]>Fri, 23 Aug 2019 19:48:26 GMThttp://latticeinsight.com/blog/freedom-and-stock-performance
An article on yesterday's Marketwatch website discussed a new ETF under the ticker symbol FRDM. This ETF is marketed by Life + Liberty Indexes, at this website: www.lifeandlibertyindexes.com

The management believes that personal and economic freedom are positively associated with greater opportunities for growth. This is a hypothesis that can be tested.

We employed stock performance data from the IShares website: www.ishares.com/us/products/etf-investments#!type=ishares&style=44342&fac=43511&fsac=43535&view=keyFacts

Among the listings are ETFs tracking broad stock market indexes for 41 countries and territories. We took the average annual return on NAV for 5-, 3-, and 1-year periods, together with standard deviations for 3- and 1-year periods, available from Fidelity Investments. These were used to construct a quasi-Sharpe ratio for the 41 countries and territories. We did not seek to incorporate 10-year performance data, which was unavailable for many of the countries concerned.

We compared this to the Cato Institute's studies on human freedom www.cato.org/human-freedom-index-new and economic freedom www.cato.org/economic-freedom-world, and their report of the aggregate amount of freedom in these 41 countries and territories, scaled to a range from 0 to 100.

The result is graphed above. There is a moderate positive association between stock performance and freedom, but R^2 is only 17.5%. The graph depicts some unfree countries substantially outperforming the model, as well as some free countries substantially underperforming the model.

It appears that the freedom variable has only weak predictive ability for stock market performance. While it is possible that 10-year data might smooth out irregularities in the business cycle, it seems likely that we need new variables to explain the discrepancies between countries that lie far above the trendline and those that lie far below it. Suggestions are welcome.
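For anyone who wants to experiment, here is a minimal sketch of the analysis. The exact averaging behind the quasi-Sharpe ratio is not spelled out above, so this version simply divides the mean of the available returns by the mean of the available standard deviations, and the data are synthetic:

```python
# Quasi-Sharpe ratio regressed on a freedom score.  Synthetic data; one
# plausible construction of the ratio, not necessarily the one used above.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
freedom = rng.uniform(0, 100, 41)                                  # aggregate freedom score
ret_5y, ret_3y, ret_1y = (rng.normal(6, 4, 41) for _ in range(3))  # % annual returns
sd_3y, sd_1y = (rng.uniform(10, 25, 41) for _ in range(2))         # % standard deviations

quasi_sharpe = (ret_5y + ret_3y + ret_1y) / 3 / ((sd_3y + sd_1y) / 2)
fit = linregress(freedom, quasi_sharpe)
print(round(fit.rvalue**2, 3))   # with the real data, the text above reports R^2 of about 17.5%
```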

​Full disclosure: After viewing the graph above, the author added the New Zealand and Switzerland ETFs to his portfolio, which already included the American stock market.
]]>
<![CDATA[American workers' earnings]]>Tue, 14 May 2019 23:42:51 GMThttp://latticeinsight.com/blog/american-workers-earnings
We address the earnings power of American workers, using data from FRED, the research arm of the St. Louis Federal Reserve Bank. We consider the average hourly earnings of production and non-supervisory employees; this might be considered as the lower and middle economic classes. We adjust their wages for inflation (this makes it "real") by dividing out the value of the Consumer Price Index, standardized so that the index in 2017Q1 equals 1.

We penalize our new metric for low labor force participation and for high unemployment. We estimate U6 underemployment, which has been measured officially only since 1994, by multiplying the standard U3 unemployment rate by 2. The U6/U3 ratio has remained close to 2 during the Obama and Trump administrations. We display the 4-quarter moving average of the results.

This metric's previous peak was in 1973Q4. This record was approached, but not broken, in 2001Q1 and 2007Q3. The previous record was broken in 2017Q2 and has continued to increase since then. It would appear that the purchasing power of the lower and middle economic classes is now at record levels, even when adjusted for inflation, unemployment, underemployment, and lack of labor force participation.
]]>
<![CDATA[Demographic vigor]]>Mon, 13 May 2019 17:17:45 GMThttp://latticeinsight.com/blog/demographic-vigor
We introduce a new demographic metric: vigor. Vigor is defined to be the ratio of a population's life expectancy to its median age. Data is available from the CIA Factbook. The highest value is 4.31 for the Gaza Strip; the lowest value is 1.68 for Monaco. The colors on the world map are chosen so that red represents the lowest level of demographic vigor, and green represents the highest level of demographic vigor. Notable values include:

1.79 - Russia
1.80 - Japan
1.87 - European Union
2.05 - China
2.10 - United States of America
2.30 - world average
2.31 - Brazil
2.42 - Indonesia
2.47 - India
2.75 - Bangladesh
2.86 - Pakistan
3.04 - Nigeria

Median age alone does not convey the concept of demographic vigor. A country may have a low median age and high fertility, but this boost to population growth may be cancelled out by low life expectancy. The new ratio introduced here is increased by having more children who then survive longer.
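A minimal sketch of the computation, with approximate inputs chosen to reproduce a few of the ratios listed above:

```python
# Demographic vigor = life expectancy / median age (CIA Factbook inputs).
# The inputs below are approximate and for illustration only.
countries = {
    # name: (life expectancy in years, median age in years)
    "Japan":   (85.0, 47.3),
    "USA":     (80.0, 38.1),
    "Nigeria": (55.2, 18.2),
}
for name, (life_expectancy, median_age) in countries.items():
    print(name, round(life_expectancy / median_age, 2))
# Japan ~1.80, USA ~2.10, Nigeria ~3.03 -- close to the values in the list above
```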
]]>
<![CDATA[Federal Reserve Policy]]>Mon, 31 Dec 2018 22:48:30 GMThttp://latticeinsight.com/blog/federal-reserve-policyThe policy of the Federal Open Market Committee (FOMC) is like Munchausen syndrome by proxy. This is a psychiatric disorder in which a person may attribute illness to another person, in order that the first party may care for the second party. In some cases, the first party may even harm the second party in order to enable the caretaking to take place.

More colloquially, the Fed’s policy is like the military policy in which “we had to destroy the village in order to save it.” To date, 21.1% of world stock market value as measured by the ACWI has been wiped out – and the Fed is not finished.

​How is the US economy doing now? Let us examine the Switkay New Economic Vitality Index™, an updated and simplified version of the index originally published at http://www.latticeinsight.com. This index makes use of the following data from the St. Louis Federal Reserve Bank’s online database, FRED:
  • disposable personal income;
  • unemployment rate: 25-54 years, men;
  • civilian labor force participation rate: 20 years and over, men;
  • average hourly earnings of production and nonsupervisory employees: total private;
  • consumer price index for all urban consumers: all items;
  • average (mean) duration of unemployment;
  • market value of gross federal debt;
  • industrial production index;
  • total population: all ages including armed forces overseas.
The index is finally approaching mid-range for the first time since early 2009, after the worst recession since World War 2, and does not appear overheated.

The job market, which was crushed during the Great Recession, is now creating jobs at the fastest pace since 1969. In the following graphic, the columns are the years 1948-2018, the rows are the four quarters of the year, and the color represents the job opening rate compared to the six unemployment rates U1 (the narrowest) to U6 (the broadest, underemployment rate). (Some of these series were extrapolated backward from more recent data using linear regression. U1 and U3, the conventional unemployment rate, began in 1948; U2 began in 1967; U4, U5, U6 began in 1994; and the job openings series began in 2001. Correlations for all unemployment series with U3 are above .97; the correlation of job openings with U3 is –.80.)

​In red quarters, such as the 5.5 years 2008Q4-2014Q1, there were not enough job openings to cover the U1 unemployment rate. In orange quarters, there were enough job openings to cover U1 but not U2. In yellow quarters, there were enough job openings to cover U2 but not U3; this is the most frequent occurrence, as well as the median of all quarters. In green quarters, there were enough job openings to cover U3 but not U4. In cyan quarters, there were enough job openings to cover U4 but not U5; and in blue quarters, there were enough job openings to cover U5 but not U6.
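The color assignment can be summarized in a minimal sketch (the backward extrapolation of the shorter series is omitted, and the rates in the example calls are placeholders):

```python
# Color a quarter by counting how many of the unemployment rates U1..U6
# the job openings rate covers.  Colors follow the description above;
# "blue" is reused for the (unobserved) case in which even U6 is covered.
COLORS = ["red", "orange", "yellow", "green", "cyan", "blue", "blue"]

def quarter_color(job_openings_rate, u_rates):
    """u_rates is (U1, U2, U3, U4, U5, U6), in increasing order, in percent."""
    covered = sum(job_openings_rate >= u for u in u_rates)
    return COLORS[covered]

print(quarter_color(4.6, (1.4, 1.7, 3.7, 3.9, 4.4, 7.4)))   # 'blue': covers U5 but not U6
print(quarter_color(2.4, (2.9, 3.3, 5.8, 6.2, 6.9, 10.0)))  # 'red': not even U1 is covered
```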
​The strong job market has led the “experts” to fear imminent hyperinflation – which has not happened yet – due to out of control labor costs. But here is the year-over-year real hourly wage growth, which currently is close to zero:
Inflation, as measured by the real-world CPI or by the Fed’s preferred (because lower) PCE, is lower than its long-term average. The long-term average for CPI inflation is close to 3.5% per year, but the current value is under 3%:
​A commodities index ETF, DBC, has plunged 22.4% since its recent high on October 3. This includes oil, natural gas, metals, and agricultural commodities, all of which affect the cost of living.
The “experts” believe we are in an asset bubble, partly because of misunderstanding the significance of the ratio of net worth to GDP. The bubble on the right is not the result of hyperinflated assets, but rather of underperforming GDP, as we shall see.
​The Wilshire 5000 Index, an index of the broad US stock market, is below its long-term trend, and stands at about the 26th percentile of all such deviations:
​Stocks are said to be priced too high relative to earnings. But as the following chart shows (data from www.multpl.com), there was a fundamental change, starting in the 1990s, in how much investors were willing to pay for one dollar of earnings. This change took place shortly after the opening of the major online trading services. The trend model has r-squared = .12, but the two-epoch model has a larger r-squared = .29. In the post-mid-1997 epoch, the current value of the S&P 500 P/E ratio is only at the 47th percentile:
By either measure (price relative to trend, or price relative to earnings in the post-1997 era), stocks are not expensive.

The Fed fears that the economy may be too hot. One way it concludes this is by comparing real GDP to real potential GDP, a unicorn of an economist’s dream, which measures the amount of output of which the economy is capable without incurring inflation. The following chart depicts the ratio of real GDP to real potential GDP, showing that this ratio is almost always greater than one, often substantially so. Perhaps the American economy has more potential than economists give it credit for.
A key element in computing potential GDP is determining NAIRU, the non-accelerating inflation rate of unemployment, an intellectual descendant of the “natural rate” of unemployment. NAIRU and the “natural rate” of unemployment are related to the Phillips curve, the theory that there is a negative association between inflation and unemployment.

If there is such a negative association, then the misery index, the sum of inflation and unemployment, should vary within relatively narrow bounds over time. The following graph depicts the sum of CPI inflation and underemployment. (We define underemployment to be 1.75 times the unemployment rate, because the quotient of the U6 and U3 rates has traditionally been close to 1.75.) This misery index varies between approximately 5% and 28%, nowhere near constant.
​We demonstrate the relation between inflation and unemployment directly in the following graph, in which the correlation is actually positive, not the expected negative number:
It may be hypothesized that inflation is triggered not by low unemployment per se, but rather by unemployment being lower than its “natural rate”, or NAIRU. This false theory is refuted by comparing inflation to the amount by which unemployment exceeds NAIRU, as the following graph demonstrates:
This last graph at last demonstrates the expected negative correlation – but a correlation which is quite close to zero and shows no statistical significance. The Fed’s recent rate hike decision assumed that the graph above shows a strong negative association. The Congressional Budget Office has determined that the current value of NAIRU is 4.61%, while the unemployment rate is 3.7%. Consequently they deem that inflation is a risk, despite its staying below 3% for almost 7 full years and not increasing now.

There are potential risks in the US economy, some of which are related to Fed policies, others of which are the responsibility of the elected branches of government.

We consider the yield curve, which is getting closer to inversion. Inversion is often measured as the spread between two specific interest rates, such as the Treasury 2- and 10-year notes. We introduce an apparent innovation: the slope of the yield curve, which makes use of interest rates on all terms of fixed rate Treasury bills and notes. This is the slope of the regression line of interest rates on the natural logarithm of their terms. The long-term average value of the slope is about 0.32; its current value is 0.19. The following chart shows that the slope of the yield curve is not yet negative, but it is getting close enough to signal the potential for recession soon.
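A minimal sketch of the slope calculation, using an illustrative set of yields rather than the official Treasury data:

```python
# Slope of the yield curve: regress yields on the natural log of their terms.
# Illustrative late-2018-style yields (percent), not official data; the text
# above quotes a long-term average slope near 0.32 and a current value near 0.19.
import numpy as np

terms  = np.array([1/12, 0.25, 0.5, 1, 2, 3, 5, 7, 10])    # years to maturity
yields = np.array([2.40, 2.45, 2.56, 2.63, 2.48, 2.46, 2.51, 2.59, 2.74])

slope, intercept = np.polyfit(np.log(terms), yields, 1)
print(round(slope, 2))   # a small positive slope: flat, but not yet inverted
```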
​One of the most troubling charts depicts full-time employment for men aged 25-54 years. This is the segment of the labor market with the highest labor force participation rate. This segment was crushed in the Great Recession and is now in partial recovery, but is also suffering from a long downtrend that may be related to trade policies or changing social mores.
​How does the Fed make its decisions, and how should it make its decisions? We are assured indignantly that the “experts” know what they are doing and do not allow political considerations to interfere, not one little bit. Thus, many “commoners” are confused by the apparent coincidence of interest decisions and political control in the White House:
​“Commoners” are also confused as to why the Fed’s unwinding of its massive $4 trillion assets accelerated only after the most recent presidential election:
The Fed claims to be making decisions based on data. The Fed is clearly blissfully unaware of the danger signals coming from world bond markets and stock markets, perhaps on the fatuous theory that “The market is not the economy” (except if individuals lose their savings and businesses cannot afford to borrow, but other than that there is no connection whatsoever between the market and the economy).

One proposed method of Fed rule-making is a so-called Taylor rule, named after Stanford economist John Taylor. It proposes that the federal funds rate should be a linear function of inflation and the logarithm of the quotient of GDP and potential GDP, on the basis that optimization of these variables falls within the Fed’s mission, along with maximizing employment.

We cannot re-run history as a series of experiments with different Taylor rules. However, we can examine the existing data to determine what Taylor-style rule best describes the Fed’s decision-making.

​To this end, it is important to appreciate the sheer magnitude of the difference between the size of the economy and its long-term trend. The following chart shows the deviation from the trend for the natural logarithm of real GDP per capita (GDP adjusted for inflation and for population). Unlike the earlier chart regarding potential GDP, it appears that the American economy is still 10% below its long-term trend:
We have constructed a Taylor-style rule in which we predict the federal funds rate based on three variables. The most significant predictor is the year-over-year (YOY) percentage increase in the consumer price index (CPI). The second most significant predictor is the deviation of the natural logarithm of real GDP per capita from its long-term trend. The third most significant predictor is the YOY percentage increase in real GDP per capita. Several notes follow.

1) The unemployment rate has essentially zero correlation with the federal funds rate.
2) The model we produce attempts to describe the federal funds rate as it was, not necessarily as it should be.
3) Our model-making method is dynamic. As new data comes in, the long-term trend of real GDP per capita will adjust, and so will deviation from this trend.

​Below, we graph what the federal funds rate was in green, what the model predicted in blue, and a 4-quarter smoothing of the predicted rate in red.
This model achieves r-squared = .68, with a standard error of about 2%.
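A minimal sketch of fitting such a Taylor-style rule by least squares (synthetic quarterly data stands in for the FRED series, and the coefficients below are placeholders, not the fitted values behind the graph above):

```python
# Taylor-style rule: regress the federal funds rate on YOY CPI inflation,
# the deviation of ln(real GDP per capita) from trend, and YOY growth in
# real GDP per capita.  Synthetic data for illustration only.
import numpy as np

rng = np.random.default_rng(3)
n = 200                                   # quarters
cpi_yoy  = rng.normal(3.5, 2.5, n)        # % YOY CPI inflation
gdp_dev  = rng.normal(0.0, 0.05, n)       # log deviation of real GDP per capita from trend
gdp_yoy  = rng.normal(2.0, 2.0, n)        # % YOY growth in real GDP per capita
fedfunds = 1.0 + 1.2*cpi_yoy + 20*gdp_dev + 0.3*gdp_yoy + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), cpi_yoy, gdp_dev, gdp_yoy])
coef, *_ = np.linalg.lstsq(X, fedfunds, rcond=None)
r_squared = 1 - ((fedfunds - X @ coef)**2).sum() / ((fedfunds - fedfunds.mean())**2).sum()
print(np.round(coef, 2), round(r_squared, 2))
```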

We cannot explain purely mathematically why the observed federal funds rate was so much higher than the predicted rate during the years 1981-1984, or why it was so much lower in 2003-2004 and 2011. (For the record, second-order and interaction terms were not significant.)

Our model suggests that the federal funds rate is close to its predicted level. In the second and third quarters of 2018, we predict a rate between 2% and 2.25%; the actual rates were between 1.5% and 2%. The latest raise, to a range of 2.25-2.5%, does not seem warranted. I have seen no data supporting the FOMC’s belief that a rate near 2.9% is “neutral” in the current environment, particularly with the 10-year Treasury yielding 2.74%.

Some have argued that savers have been punished by low rates for too long. The chart above makes one wonder why the Fed only realized this after the last presidential election. I recently opened a 12-month CD with a well-known national bank for 2.6%; how much more should a 12-month CD pay?

​I would like to see the Fed announce that the proposed rate hikes for 2019 are off the table for now. The last rate hike should be repealed. The liquidation of the Fed’s assets should continue, but more slowly: perhaps cut in half. Finally, the Fed should commit to a Taylor-style rule incorporating the inflation rate, the deviation of real GDP per capita from its long-term trend, the percentage growth in real GDP per capita, the labor force participation rate, the U6 underemployment rate, and the slope of the yield curve, as defined earlier. Then the Fed may regain some of the trust it has squandered recently.
]]>
<![CDATA[Linguistic power, 2015]]>Fri, 13 Jan 2017 16:29:15 GMThttp://latticeinsight.com/blog/linguistic-power-2015
We compare the power of languages across the world economy. We collected data on the 50 countries with the largest GDP, as measured nominally (at current exchange rates) or by PPP (purchasing power parity, adjusting for prices). We also collected data on which languages are official in each country, and what fraction of the population speaks various languages. The power of each language was added up across the countries in the study, and we computed the geometric mean of GDP nominal, first language speakers; GDP PPP, first language speakers; GDP nominal, official language; GDP PPP, official language. The twelve highest ranking languages are depicted above, as a fraction of their total, not out of total world GDP.
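A minimal sketch of the aggregation for a single language (the four totals below are placeholders, not the study's numbers):

```python
# A language's "power" is the geometric mean of four GDP totals: nominal and
# PPP GDP weighted by first-language speakers, and nominal and PPP GDP of the
# countries where the language is official.  Placeholder values, in trillions of USD.
from math import prod

def language_power(totals):
    """Geometric mean of the four GDP totals (all in the same units)."""
    return prod(totals) ** (1 / len(totals))

nominal_first, ppp_first, nominal_official, ppp_official = 20.0, 22.0, 24.0, 27.0
print(round(language_power([nominal_first, ppp_first, nominal_official, ppp_official]), 1))
```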

The pie chart is a bit misleading; obviously there are many smaller languages with some share of world GDP. Nevertheless, the error is not large, because many of the countries not included in our study have official languages, or native speakers, of the languages already included in the chart above. This is true especially for the languages that are official in the most countries: English (59), French (29), Arabic (27), and Spanish (21). For comparison, were the chart continued, Indonesian would rank slightly below Korean, and Dutch at about half that level.

Chinese and Hindi are official languages in what are by far the two most populous nations on earth; nevertheless, they have little influence outside their home countries.

The linguistic concentration of economic power is impressive. English and Chinese account for more than half of world GDP. When we add the languages of the world's third and fourth largest economies, Japan and Germany, and the three other widespread languages listed above, we are looking at roughly 86% of world GDP, created by 7 languages.
]]>
<![CDATA[USA Economic Vitality Index, second quarter 2016]]>Fri, 21 Oct 2016 20:36:48 GMThttp://latticeinsight.com/blog/usa-economic-vitality-index-second-quarter-2016
Our USA Economic Vitality Index has undergone two improvements. First, the data series has been extended back to 1966Q1; our graph begins in 1969Q1, in order to group it by presidential administrations. Second, we have added two important components to the index: total consumer debt as a fraction of GDP, and the employment-population ratio for men, which is an important social indicator.

Our new version of the index currently stands at 61.32 as of 2016Q2, a decline of 0.29 points from the first quarter. This is the third consecutive decline, confirming that the US economy is in a contraction by the standards of this index. The previous high, 62.36 in 2015Q3, was the highest level since 2009Q2.

Of ongoing concern is the continuing decline of the velocity of M2 money, the ratio of GDP to the M2 money supply. This ratio currently stands at a new record low of 1.450, the lowest in the history of this data series.

The labor force participation rate stands near 38-year lows. Mean weeks of unemployment still stands above 27 weeks. The employment-population ratio for men has not yet recovered to its level at the end of 2008. Federal debt stands at 105% of GDP, close to the recent post World War 2 record.

The Switkay USA Economic Vitality Index is a function of the following variables:
  • real gross domestic product per capita;
  • total Federal debt as a percentage of gross domestic product;
  • the U6 unemployment rate (including those working part-time who would prefer full-time work);
  • mean weeks of unemployment;
  • average hourly earnings, production and non-supervisory employees, private;
  • US population;
  • the civilian labor force participation rate;
  • the employment-population ratio for men;
  • the consumer price index, all urban consumers;
  • the velocity of the M2 money stock;
  • the real trade-weighted exchange value of the US dollar (broad index);
  • real net worth of households and non-profits;
  • total consumer credit as a percentage of gross domestic product.

It is updated at the end of every quarter, when data for the previous quarter become finalized. We use the word real to mean inflation-adjusted. All data is taken from FRED, the research service of the Federal Reserve Bank of St. Louis. The index is normalized so that its median value in the years 1969 to 2008 is 100.
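The normalization step only, as a minimal sketch (the construction of the raw index itself is not reproduced here):

```python
# Normalize an index series so that its median over a base period equals 100.
import numpy as np

rng = np.random.default_rng(4)
raw_index   = rng.normal(50, 10, 200)   # stand-in for the raw quarterly index values
base_period = raw_index[:160]           # stand-in for the 1969-2008 quarters

normalized = 100 * raw_index / np.median(base_period)
print(round(float(np.median(normalized[:160])), 1))   # 100.0 by construction
```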

Excepting reverses of only one quarter, the index indicates:
  • expansion, 1966 (earliest data) to 1969Q3;
  • contraction, 1969Q4 to 1971Q4;
  • expansion, 1972Q1 to 1973Q2;
  • contraction, 1973Q3 to 1975Q3;
  • expansion, 1975Q4 to 1979Q2;
  • contraction, 1979Q3 to 1980Q3;
  • expansion, 1980Q4 to 1981Q3;
  • contraction, 1981Q4 to 1983Q1;
  • expansion, 1983Q2 to 1985Q1;
  • contraction, 1985Q2 to 1988Q1;
  • expansion, 1988Q2 to 1989Q3;
  • contraction, 1989Q4 to 1992Q3;
  • expansion, 1992Q4 to 2000Q2;
  • contraction, 2000Q3 to 2003Q2;
  • expansion, 2003Q3 to 2006Q1;
  • contraction, 2006Q2 to 2011Q3;
  • expansion, 2011Q4 to 2015Q3;
  • contraction, 2015Q4 to present.
]]>
<![CDATA[Improved presidential election forecast]]>Wed, 17 Aug 2016 17:50:10 GMThttp://latticeinsight.com/blog/improved-presidential-election-forecast
Last night, I saw political scientist Helmut Norpoth on television discussing his model for predicting the 2016 USA presidential election. He uses a time series model to predict the popular-vote margin of one party over the other, exploiting the observation that, from 1952 to the present, the White House has usually been held by each party for exactly two consecutive terms. When I reconstructed his model on my computer, it appeared to give Trump a 5.7% advantage over Clinton.

In the language of investors, I would describe his approach as technical analysis and mine as fundamental analysis. Which does a better job? We prefer a model with higher predictive power and smaller errors, yet one that employs as few predictor variables as possible.

It turns out that the economic model performs better by both standards than the time series model. Also, when both the economic and lagged observation variables are placed together into a large model, the economic variables remain highly significant, while the lagged observation variables are not significant.

Still, we can make use of Norpoth's observation regarding the cyclical nature of White House control in a better way. We introduce a new variable, the number of terms that the incumbent party has held the presidency. This variable remains significant even in the presence of the economic variables.

Our previous model (see our most recent blog below) had R-squared = 71.4%; the new model, shown above, boosts R-squared to 80.4%. Indeed, the new model now predicts 15 of the last 16 presidential elections correctly, the exception being 1968, a year of turmoil in America and around the world - and a year not unlike 2016.
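
As a rough illustration of this kind of nested-model comparison, the sketch below fits the economic-only and augmented regressions with statsmodels; the data frame and column names (margin, rdpi_growth, terms) are hypothetical placeholders, not the model's actual specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data only -- substitute the actual 1952-2012 election table.
rng = np.random.default_rng(0)
elections = pd.DataFrame({
    "rdpi_growth": rng.normal(2.5, 1.5, 16),             # pre-election real income growth (placeholder values)
    "terms": rng.integers(1, 4, 16).astype(float),        # terms the incumbent party has held the White House
})
elections["margin"] = 2.0 * elections["rdpi_growth"] - 3.0 * elections["terms"] + rng.normal(0, 4, 16)

econ_only = smf.ols("margin ~ rdpi_growth", data=elections).fit()
augmented = smf.ols("margin ~ rdpi_growth + terms", data=elections).fit()

print(econ_only.rsquared, augmented.rsquared)   # the real data reportedly give roughly 0.714 vs 0.804
print(augmented.pvalues["terms"])               # checks whether 'terms' stays significant alongside the economics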

The model predicts a popular-vote margin between Clinton and Trump of essentially zero - a dead heat. However, the margin of error is plus or minus about 11.8%. Although this is moderately more precise than the previous model, it still only allows us to say, with 95% confidence, that either Clinton will win by up to 11.8% over Trump, or Trump will win by up to 11.8% over Clinton.

This makes winning the Electoral College more critical than ever. The map below depicts the relative value of each state to the national campaigns, based on 1) number of electoral votes; 2) current poll numbers; 3) presidential voting history; and 4) current political representation. The more brightly colored states are more critical, with Florida and Ohio standing out as must-win states.
]]>
<![CDATA[Predicting the United States presidential election, 2016]]>Thu, 11 Aug 2016 19:47:38 GMThttp://latticeinsight.com/blog/predicting-the-united-states-presidential-election-2016
Who is going to win the USA 2016 presidential election? Inquiring minds want to know!

I was curious to see if the national popular vote in the presidential election could be predicted on the basis of certain quantifiable variables. There is an adage that voters consider pocketbook issues foremost. I conjectured that the simplest variable affecting a voter's sense of economic health would be percent change from a year ago in real disposable income per capita. Disposable income refers to income after taxes; real means that this amount is adjusted for inflation; and per capita means averaged for every member of the population.

We consider this variable over the three quarters prior to the general election to help predict the outcome. The outcome here is the percent margin in the national popular vote accruing to the party that holds the White House at the time of the election. The theory is that a larger increase in disposable income should be positively associated with support for the incumbent party.
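
As an illustration of how such a predictor might be assembled from FRED, consider the sketch below; the series ID A229RX0 (real disposable personal income per capita) and the choice of the election year's first three quarters as the "three quarters prior" are my assumptions, to be checked against the actual construction.

from pandas_datareader import data as pdr

# A229RX0 is assumed to be real disposable personal income per capita on FRED.
rdpi = pdr.DataReader("A229RX0", "fred", "1950-01-01", "2016-11-01").resample("QS").mean()
yoy = rdpi.pct_change(4) * 100                  # % change from the same quarter one year earlier

def pre_election_growth(year: int) -> float:
    # Treat Q1-Q3 of the election year as the three quarters prior to the November vote (an assumption).
    window = yoy.loc[f"{year}-01-01":f"{year}-09-30"]
    return float(window.mean().iloc[0])

print(pre_election_growth(2012))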

The results of the model are shown in the graph above. The horizontal axis represents the predicted margin of victory for the incumbent party, and the vertical axis represents the actual margin of victory. The closer the points are to the diagonal line, the more accurate the predictions. For this model, R-squared is 0.714, indicating a reasonably strong fit.

Indeed, the ultimate outcomes are predicted correctly in 14 of the 16 presidential elections from 1952 to 2012. The correctly predicted elections appear in the upper right and the lower left. The exceptions, in the upper left and the lower right, are shown in red. In 1968, the election should have been won by Hubert Humphrey based on the economic data, but perhaps because of the vast social turmoil that year, voters elected Richard Nixon. In 2012, the election should have been lost by Barack Obama based on the economic data, but perhaps because of a weak close to Mitt Romney's campaign, voters re-elected Obama.

The 2000 election appears in yellow. The model correctly predicts a popular vote victory by Al Gore, but this year was one of the rare times that the Electoral College vote was not consistent with the popular vote, and George Bush was elected.

Our model predicts that the Democrat nominee this year should have a 2.26% margin of victory in the popular vote over the Republican nominee. The 95% prediction interval gives a margin of error of plus or minus 12.5%. This interval is shown as a green line in the graph. The large margin of error is due in part to the small sample size (n = 16). An additional predictive variable might be helpful here, such as change in the crime rate or unemployment rate. In any case, we are 95% confident that the Democrat will win by up to 15%, or that the Republican will win by up to 10.5%. We'll talk again after the election!
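
For completeness, a 95% prediction interval of this kind can be read directly off a fitted OLS model; in the sketch below, elections and rdpi_2016 are placeholders for the 16-election table and the 2016 value of the predictor, neither of which is reproduced here.

import pandas as pd
import statsmodels.formula.api as smf

# 'elections' and 'rdpi_2016' are placeholders for data not reproduced in this post.
fit = smf.ols("margin ~ rdpi_growth", data=elections).fit()
new_x = pd.DataFrame({"rdpi_growth": [rdpi_2016]})
bounds = fit.get_prediction(new_x).summary_frame(alpha=0.05)
print(bounds[["mean", "obs_ci_lower", "obs_ci_upper"]])   # point forecast with 95% prediction limits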
]]>
<![CDATA[The Switkay Freedom and Development Index, 2016]]>Thu, 04 Aug 2016 01:28:15 GMThttp://latticeinsight.com/blog/the-switkay-freedom-and-development-index-2016
This index provides an overview of quality of life in 143 countries and territories. It combines information from the following public sources (a minimal weighting sketch follows the list):
  • 25% is the Freedom in the World index published by Freedom House, of which 40% is political rights and 60% is civil liberties;
  • 25% is the Index of Economic Freedom published by the Heritage Foundation and the Wall Street Journal;
  • 50% is the Human Development Index published by the United Nations, which comprises life expectancy, average years of education, and per capita income (purchasing power parity).
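
The sketch below applies the weights above to a single country's raw inputs; rescaling each source to a common 0-100, higher-is-better scale first (Freedom House ratings run from 1, most free, to 7; the HDI runs from 0 to 1) is my assumption, and the published scores appear to be further scaled so that the top country reads 100.00, a step omitted here.

def freedom_development_score(political_rights: float, civil_liberties: float,
                              economic_freedom: float, hdi: float) -> float:
    # Freedom House component: 40% political rights, 60% civil liberties,
    # each converted from the 1 (most free) to 7 (least free) scale to 0-100.
    fh = 0.40 * (7 - political_rights) / 6 * 100 + 0.60 * (7 - civil_liberties) / 6 * 100
    # Overall blend: 25% Freedom House, 25% economic freedom (already on 0-100),
    # 50% Human Development Index (0-1, so multiplied by 100).
    return 0.25 * fh + 0.25 * economic_freedom + 0.50 * (hdi * 100)

# Hypothetical example: Freedom House ratings of 1/1, economic freedom 80, HDI 0.90
print(freedom_development_score(1, 1, 80, 0.90))   # -> 90.0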

Australia and New Zealand have the highest scores, as they did last year, at 100.00 and 99.45. They are followed by South Korea, 99.09; Switzerland, 99.02; and Denmark, 98.76; Germany, the United Kingdom, and the Netherlands all score above 97. The United States of America scores 89.25, China 52.26, and India 54.42.

The three main variables are positively correlated. A cluster analysis reveals 4 clusters, as shown in the map below. The light blue cluster has high political freedom and high human development. The light green cluster has high political freedom but low human development. The light yellow cluster has low political freedom but high human development. The pink cluster has low political freedom and low human development.
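
The post does not name the clustering algorithm, so the sketch below uses k-means with k = 4 purely as an illustration; scores is a hypothetical data frame holding the three component scores per country.

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# 'scores' is a hypothetical DataFrame with one row per country and the three components.
X = StandardScaler().fit_transform(scores[["freedom_house", "economic_freedom", "hdi"]])
scores["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
# The four labels can then be mapped to the light blue, light green, light yellow, and pink groups described above.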
]]>
<![CDATA[The USA Economic Vitality Index, 1st quarter, 2016]]>Tue, 28 Jun 2016 22:17:06 GMThttp://latticeinsight.com/blog/the-usa-economic-vitality-index-1st-quarter-2016
The Switkay USA Economic Vitality Index decreased 0.18 points to reach a level of 83.26 for the 1st quarter of 2016, its second consecutive decline.

The U6 underemployment rate fell from 9.9% in the 4th quarter to 9.8% in the 1st quarter. However, concerns remain: the U6/U3 ratio is unusually high, reflecting greater use of part-time workers than in the past; Gallup's survey of underemployment reports figures about 4 percentage points higher than the Labor Department's; the labor force participation rate, at 62.9%, hovers near 38-year lows; and mean weeks of unemployment rose by one week to 28.8.

The velocity of the M2 money supply slipped about 1.6% in the 1st quarter to 1.459, a new record low in the 42+ year history of this dataset; this is an ominous sign for the economy.

The debt-to-GDP ratio increased 1.5 percentage points to 105.7%, another new record high.

This is the first period of back-to-back declines for the index since mid-2011. If we ignore one-quarter changes of direction as statistical noise, we can characterize:
  • 1989Q3 to 1992Q2 as contraction;
  • 1992Q3 to 2000Q2 as expansion;
  • 2000Q3 to 2003Q2 as contraction;
  • 2003Q3 to 2006Q1 as expansion;
  • 2006Q2 to 2011Q3 as contraction;
  • 2011Q4 to 2015Q3 as expansion;
  • 2015Q4 to present as contraction.

The Switkay USA Economic Vitality Index is a function of the following variables:
  • real gross domestic product per capita;
  • total Federal debt as a percentage of gross domestic product;
  • the U6 unemployment rate (including those working part-time who would prefer full-time work);
  • mean weeks of unemployment;
  • average hourly earnings, production and non-supervisory employees, private;
  • US population;
  • the civilian labor force participation rate;
  • the consumer price index, all urban consumers;
  • the velocity of the M2 money stock;
  • the real trade-weighted exchange value of the US dollar (broad index);
  • real net worth of households and non-profits.

It is updated at the end of every quarter, when data for the previous quarter become finalized. We use the word real to mean inflation-adjusted. All data is taken from FRED, the research service of the Federal Reserve Bank of St. Louis. The index is normalized so that its median value in the years 1973 to 2008 is 100.
]]>
<![CDATA[The USA Economic Vitality Index, 4th quarter, 2015]]>Mon, 11 Apr 2016 01:03:21 GMThttp://latticeinsight.com/blog/the-usa-economic-vitality-index-4th-quarter-2015
The Switkay USA Economic Vitality Index decreased 0.21 points to reach a level of 83.44 for the 4th quarter of 2015, its first decline since the 1st quarter of 2014.

The single biggest change in the index's underlying components appears to be a 3.7% spike in the debt-to-GDP ratio, to a new level of 104.2%, surpassing the previous high in the 1st quarter of 2014.

The U6 underemployment rate fell from 10.2% in the 3rd quarter to 9.9% in the 4th quarter. However, concerns remain: the U6/U3 ratio is unusually high, reflecting greater use of part-time workers than in the past; Gallup's survey of underemployment reports figures more than 4 percentage points higher than the Labor Department's; and the labor force participation rate, at 62.5%, is the lowest since the 3rd quarter of 1977.


The velocity of the M2 money supply is 1.483, the lowest level in the 42+ year history of this dataset; this is an ominous sign for the economy.

The Switkay USA Economic Vitality Index is a function of the following variables:
  • real gross domestic product per capita;
  • total Federal debt as a percentage of gross domestic product;
  • the U6 unemployment rate (including those working part-time who would prefer full-time work);
  • mean weeks of unemployment;
  • average hourly earnings, production and non-supervisory employees, private;
  • US population;
  • the civilian labor force participation rate;
  • the consumer price index, all urban consumers;
  • the velocity of the M2 money stock;
  • the real trade-weighted exchange value of the US dollar (broad index);
  • real net worth of households and non-profits.

It is updated at the end of every quarter, when data for the previous quarter become finalized. We use the word real to mean inflation-adjusted. All data is taken from FRED, the research service of the Federal Reserve Bank of St. Louis. The index is normalized so that its median value in the years 1973 to 2008 is 100.
]]>
<![CDATA[The Michigan primary; reviewing Super Tuesday]]>Tue, 08 Mar 2016 20:00:54 GMThttp://latticeinsight.com/blog/the-michigan-primary-reviewing-super-tuesday
Looking back at the Georgia and Texas primaries, our two models were again generally more accurate than the major poll averages; the main exception was model 1 missing the big Cruz surge in Texas.

Today's big primary is in Michigan. Here are our predictions:

Model 1: Clinton 63.0%, Sanders 37.0%
Trump 41.8%, Cruz 22.2%, Kasich 21.6%, Rubio 14.4%

Model 2: Clinton 60.4%, Sanders 39.6%
​Trump 40.3%, Kasich 27.3%, Cruz 24.4%, Rubio 8.0%

Clinton easily defeats Sanders either way. Trump wins Michigan in both models. The big story will be Kasich, who surges in model 2 at Rubio's expense. We shall see.

Below, we calculate each candidate's likelihood of winning the nomination, based on delegates won to date plus national poll standings.

Clinton has a more than 67% chance of winning her party's nomination at this point, while Trump has a more than 51% chance of winning his party's nomination. All other candidates are far behind.
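
The post does not spell out how delegate counts and poll standings are combined; purely as a hypothetical illustration, one crude approach would be a weighted blend of delegate share and national poll share, with weights of my own choosing.

# Hypothetical illustration only -- the actual calculation is not described in this post.
def nomination_chance(delegate_share: float, poll_share: float, w: float = 0.6) -> float:
    # Weighted blend of the share of delegates awarded so far and national poll share (both in [0, 1]).
    return w * delegate_share + (1 - w) * poll_share

# e.g. a candidate holding 60% of delegates awarded to date and polling at 50% nationally:
print(nomination_chance(0.60, 0.50))   # -> 0.56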

However, if Kasich loses Ohio and Rubio loses Florida next week and both drop out, how would a one-on-one match between Trump and Cruz go? We don't yet have enough polling data to make a reliable prediction.

After next week's big contests in Florida, Illinois, North Carolina, and Ohio, we will discuss a wonderful model for predicting the outcome of the general election based on economic variables. Stay tuned!
]]>
<![CDATA[Super Tuesday]]>Tue, 01 Mar 2016 20:48:37 GMThttp://latticeinsight.com/blog/super-tuesday
We have a lot of information to share today, so let's get to work. First, our predictive models have performed fairly well in Iowa, New Hampshire, and South Carolina: as seen below, our two models predicted the actual results in those three states with 20-30% less error than the averages of Real Clear Politics and the Huffington Post, and the performance in South Carolina was especially good. In fact, only our models picked up the huge last-minute surge for Clinton in SC.
There are numerous primaries today. I will give forecasts for only Georgia and Texas.

GA Model 1: Clinton 67.7%, Sanders 32.3%; Trump 40.6%, Rubio 22.5%, Cruz 20.9%, Kasich 8.3%, Carson 7.7%

GA Model 2: Clinton 69.5%, Sanders 30.5%; Trump 41.1%, Rubio 22.7%, Cruz 20.7%, Kasich 8.4%, Carson 7.1%

TX Model 1: Clinton 68.8%, Sanders 31.2%; Cruz 34.2%, Trump 30.6%, Rubio 20.5%, Kasich 8.2%, Carson 6.5%

TX Model 2: Clinton 65.0%, Sanders 35.0%; Cruz 35.7%, Trump 31.8%, Rubio 16.9%, Kasich 8.7%, Carson 6.8%

Finally, let's take another look at the likelihood of these 7 candidates achieving their party's nominations, including the results from IA, NH, NV, and SC, but not Super Tuesday. Clinton holds more than a 9-point lead on her challenger Sanders. Trump holds almost a 25-point lead on his closest challenger Cruz. However, we will have a lot more information within a day, as so many delegates will be chosen today.
]]>