The model used to evaluate lockdowns was flawed.
In a recent study, researchers at Imperial College London developed a model to assess the effect of different measures used to curb the spread of the coronavirus. However, according to Swedish researchers from Lund University and other institutions, writing in the journal Nature, the model has fundamental shortcomings and does not support the published conclusions.
According to the results from Imperial, it was almost exclusively the complete societal lockdown that suppressed the wave of infections in Europe during the spring.
The study assessed the effects of different measures such as social distancing, banning public events, closing schools, self-isolating, and the lockdown itself.
“The measures were implemented at about the same time over a few weeks in March. Due to this, the mortality data used does not contain sufficient information to distinguish their individual effects. We have demonstrated this by carrying out a mathematical analysis. Based on this, we then performed simulations utilizing Imperial College’s original code to demonstrate how the model’s sensitivity leads to misleading results,” reveals Kristian Soltesz, first author of the article and associate professor in automatic control at Lund University.
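The identifiability problem Soltesz describes can be illustrated with a toy example (not the paper's actual model, and the variable names and numbers here are hypothetical): when two interventions start on the same day, their indicator regressors are identical, so a fit to the data can only recover their combined effect, never the individual ones.

```python
import numpy as np

# Hypothetical sketch: two interventions introduced on the same day.
days = np.arange(60)
lockdown = (days >= 20).astype(float)   # indicator for intervention A
event_ban = (days >= 20).astype(float)  # indicator for intervention B, same start day

# The design matrix has two identical columns, so its rank is 1, not 2:
# the data cannot distinguish the individual effects, only their sum.
X = np.column_stack([lockdown, event_ban])
print(np.linalg.matrix_rank(X))  # 1
```

Any fitting procedure applied to such data has infinitely many solutions that explain the observations equally well, which is why small changes in model assumptions can shift nearly all of the explanatory weight onto one intervention.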
The group's interest in the Imperial College model was aroused because it attributed almost all of the reduction in transmission during the spring to lockdowns, in ten of the eleven countries modeled. The exception was Sweden, which never introduced a lockdown.
“For Sweden, the model pointed to an entirely different measure as the explanation for the reduction, a measure that appeared almost ineffective in the other countries. It seemed almost too good to be true that a lockdown was effective in every country except one, while in that one country another measure was surprisingly effective,” notes Soltesz.
He is careful to point out that it is entirely plausible that the individual measures had an effect, but that the model cannot be used to determine their effectiveness.
“The various interventions do not seem to work in isolation from one another; they are often interdependent. A change in behavior prompted by one intervention influences the effect of others. How much, and in what way, is harder to know, and answering that requires a range of skills and collaboration,” states Anna Jöud, associate professor in epidemiology at Lund University and co-author of the study.
According to the authors, the analyses of the Imperial College model and others highlight the importance of scrutinizing the epidemiological models currently in use.
“There is a significant focus in the debate on data sources and their reliability. What is lacking, however, is a systematic review of how sensitive different models are to their parameters and data. That is at least as important, especially when governments across the world are using dynamic models as a basis for decision making,” according to Soltesz and Jöud.
The first step is to carry out a proper analysis of the model's sensitivities. If these pose too great a problem, more reliable data are needed, often combined with a simpler model structure.
“A lot is at stake, so it is sensible to be humble when confronted with fundamental limitations. Dynamic models are usable as long as they take into account the uncertainty of the assumptions and the data on which they rest. Otherwise, the results are worth no more than the assumptions, that is, guesses,” concludes Soltesz.