Are We in a State of Hysteria Over COVID-19? (6)

Reasons to Wonder

Model Based Projections

The IHME Model (Continued)

In previous posts I have cast doubt on the various COVID-19 models based on their demonstrated unreliability over time.  That is, they all started with terrifyingly high projections of deaths (even with social distancing) that were then successively and significantly reduced (massively so, in the case of the Imperial College model), sometimes over a span of just days.

While the above information is devastating to these models’ credibility, there is another, more technical, means by which to judge them.  All responsible modelers acknowledge that there is uncertainty in their predictions.  They admit that, because they work from uncertain and incomplete data using artificial mathematical manipulations, they cannot claim to predict the future with total accuracy.  The metric often used to convey this uncertainty is called the Confidence Interval.

[Figure: The 95% Confidence Interval for a population of results with a Gaussian probability distribution.]

A confidence interval, in statistics, refers to the probability that a population parameter will fall between two set values for a certain proportion of times. Confidence intervals measure the degree of uncertainty or certainty in a sampling method. A confidence interval can take any number of probabilities, with the most common being a 95% or 99% confidence level.

In simpler terms, the modelers use statistical theory to determine the range of output values within which they are, say, 95% confident that the actual, real-world data will fall.  Thus, if the model is useful and the mathematical calculations are correct, we expect that 95% of real-world values will fall within the model’s 95% Confidence Interval.  If, however, far fewer than 95% of the actual data points fall within this Confidence Interval, then the model has failed by its own terms.  That is, the model has failed to adhere to the level of uncertainty that the modelers themselves specified.
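To make this coverage idea concrete, here is a minimal Python sketch (my own illustration, not code from the paper or the IHME team) that counts how often observed values land inside a set of hypothetical 95% prediction intervals; all the numbers are made up for the example.

```python
# Minimal sketch of a prediction-interval coverage check.
# The bounds and observations below are made-up numbers for illustration,
# not data from the IHME model or the paper discussed in this post.

lower  = [12, 30,  5, 44, 20]   # lower bounds of the 95% intervals
upper  = [25, 55, 15, 70, 38]   # upper bounds of the 95% intervals
actual = [28, 40, 16, 50, 90]   # what actually happened the next day

inside = sum(lo <= x <= hi for lo, hi, x in zip(lower, upper, actual))
coverage = inside / len(actual)

print(f"Empirical coverage: {coverage:.0%} (nominal target: 95%)")
# A well-calibrated model should give coverage close to 95%;
# a value far below that means the stated uncertainty is too narrow.
```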

Based on these concepts, a group of researchers decided to test the predictions of the IHME COVID-19 model against the Confidence Intervals defined by the IHME modelers themselves.  They chose the easiest possible prediction test, namely:

Given all deaths-by-State data up to a given date, predict the next day’s number of deaths for each State.

In other words, all the IHME model had to do was predict the next day’s death count for each State to within its own Confidence Interval (i.e., the actual number of deaths falls between the lowest and highest values the modelers said they could predict with 95% confidence).

The paper’s authors picked four dates: March 30 & 31 and April 1 & 2.  Since a death-count prediction was made for each of the 50 States in the U.S. on each date, this resulted in a total of 200 model predictions.  If the model performed with the confidence claimed by the modelers, only about 10 of the 200 predictions (5%) would fall outside the 95% Confidence Intervals.  The following chart shows what actually occurred.
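For context on how much a well-calibrated model could plausibly miss, the short sketch below (my own illustration, not a calculation from the paper) treats the number of misses as a Binomial(200, 0.05) random variable, which assumes the 200 predictions are independent — a simplification.

```python
# If the 95% intervals were well calibrated and the 200 predictions were
# independent, the number of predictions falling outside their intervals
# would be approximately Binomial(n=200, p=0.05).
from scipy.stats import binom

n, p = 200, 0.05
expected_misses = n * p                  # nominal expectation: 10 misses
upper_plausible = binom.ppf(0.99, n, p)  # 99th percentile of the miss count

print(f"Expected misses: {expected_misses:.0f}")
print(f"Misses would exceed {upper_plausible:.0f} only about 1% of the time")
```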

[Figure: Confidence Interval IHME Model results from “Learning as We Go: An Examination of the Statistical Accuracy of COVID19 Daily Death Count Predictions” for next-day number-of-deaths predictions by State.  The total number of data points is 200 (i.e., 50 States times 4 days).]

Note that the number of next-day predictions that fall outside of the IHME modelers’ own 95% Confidence Interval is 130, as opposed to the expected 10.  The unavoidable conclusion is that the IHME modelers are not able to accurately characterize the reliability of their model’s output, even for the easiest test case (i.e., predicting the next day’s number of deaths).  The paper’s authors summarize their findings as follows.

Our results suggest that the IHME model substantially underestimates the uncertainty associated with COVID19 death count predictions. We would expect to see approximately 5% of the observed number of deaths to fall outside the 95% prediction intervals. In reality, we found that the observed percentage of death counts that lie outside the 95% PI to be in the range 49% – 73%, which is more than an order of magnitude above the expected percentage.
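As a rough arithmetic check of those quoted figures (my own back-of-the-envelope calculation, not the paper’s), the aggregate miss rate works out as follows:

```python
# Back-of-the-envelope check of the aggregate miss rate reported above.
misses, total, nominal = 130, 200, 0.05

observed_rate = misses / total    # 0.65, i.e. 65% of actuals fell outside
ratio = observed_rate / nominal   # multiple of the nominal 5% miss rate

print(f"Observed miss rate: {observed_rate:.0%} vs nominal {nominal:.0%}")
print(f"That is {ratio:.0f}x what a well-calibrated 95% interval implies")
```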

But it’s far worse than this: a model that is incapable of predicting the number of deaths one day in advance has been used by policy makers to project weeks, even months, into the future.  This result exposes the entire COVID-19 public safety initiative as having been based on modeling that is utterly discredited even when evaluated on the terms set by the modelers themselves.  This is the basis upon which our economy has been devastated, with potentially millions of citizens forced into poverty and lost hope, and with the associated lives lost to increased suicide, drug abuse, depression, domestic and community violence, and undiagnosed disease, among other causes.  This is a massive scientific scandal with direct, devastating consequences for hundreds of millions of human beings worldwide, and indirect consequences for billions.
