BU’s Dashboard Is Misleading, and This Matters

The following text recently appeared in a public relations piece published by Boston University:

“[President] Brown says the dashboard will help answer the important question: in terms of infection rate, ‘is BU better off or worse off than the community at large? That’s the transparency we need to know, and the community around us needs to know.’”

If the university intends the COVID-19 Dashboard to be used in this way, then why is that dashboard presently showing statistics in a misleading fashion? Is this the result of an intention to deceive the public on an extremely serious public health issue rather than to provide genuine transparency, or is it simply the result of incompetence? Have BU’s own scientists and public health experts tried telling BU’s leaders that the Dashboard displays misleading statistics, or are they content to let this affront to honest public health efforts continue?

There are many things that can be criticized about the BU Dashboard. The bar graph at the top is poorly designed. Results are currently reported only for students; employees have been left out (employee data will apparently be provided later). But these are not what I am most worried about. The most pressing problem is that the data is not genuinely comparative, despite the misleading heading, “Comparative Statistics: Averages.”


Students are being tested twice a week. People in the general community are rarely tested more than once. This means that the BU percentages should not be computed as number of positive tests / number of tests, but instead as number of positive tests / number of people tested. The wrong denominator is being used, so any comparison with the general population made on the basis of this dashboard will be misleading.
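The effect of the denominator can be seen with a small worked example. All numbers below are invented for illustration; they are not BU’s actual figures:

```python
# Illustrative numbers only (not BU's actual data): suppose 5,000 students
# are each tested twice in a week, and 25 distinct students test positive.
people_tested = 5_000
tests_per_person = 2          # students are tested twice a week
positive_people = 25          # assume each positive student tests positive once

total_tests = people_tested * tests_per_person

per_test_rate = positive_people / total_tests        # the dashboard's denominator
per_person_rate = positive_people / people_tested    # the comparable denominator

print(f"positives / tests:  {per_test_rate:.2%}")    # 0.25%
print(f"positives / people: {per_person_rate:.2%}")  # 0.50%
```

Simply by counting tests rather than people, the reported rate is cut in half relative to the figure that would actually be comparable to a community where each person is tested once.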

The choice of denominator makes the statistics on display misleading to a general audience. The problem becomes even more serious in this part of the page, displayed near the top of the dashboard:


The “Cumulative from July 27, 2020” chart is problematic partly because it includes the university’s test run using locally based asymptomatic subjects, which occurred before the undergraduate students started coming back. More importantly, many people have now been tested multiple times. The percentage of positive tests in this chart becomes more and more deceptive as time goes by, because the denominator grows by two each week for every student being retested. No wonder the positive test percentage is so small! And it will keep shrinking as time goes by.

If, on the other hand, the denominator used here were the number of people tested, the positive test percentage would rise over time, at least once the number of people tested reaches the number who will be here all term (assuming COVID-19 isn’t completely stopped in its tracks). This would not prevent BU from arguing that its testing program is working well (if it is working well): a very low rate of increase in that percentage is exactly the figure BU could hold up for everyone to pay attention to.
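The divergence between the two cumulative statistics can be sketched with a toy simulation. Every number here is hypothetical (a 5,000-student population, 40 positives in an assumed initial screening round, 5 new positive students per week); the point is only the direction of the trends:

```python
# Sketch of how the cumulative per-test percentage shrinks while the
# per-person percentage grows. All numbers are invented for illustration.
population = 5_000
cum_tests = population        # hypothetical one-round initial screening
cum_positives = 40            # hypothetical positives found in that round

for week in range(1, 13):     # a hypothetical 12-week term
    cum_tests += population * 2       # denominator: +2 tests per student per week
    cum_positives += 5                # hypothetical new positive students per week

    per_test = cum_positives / cum_tests       # the dashboard's statistic
    per_person = cum_positives / population    # the comparable statistic

    if week in (1, 6, 12):
        print(f"week {week:2d}: positives/tests = {per_test:.2%}, "
              f"positives/people = {per_person:.2%}")
```

Under these assumptions the per-test figure falls from roughly 0.30% to 0.08% over the term even though cases accumulate steadily, while the per-person figure climbs from 0.90% to 2.00%. The shrinking number is an artifact of the growing denominator, not evidence that infection is declining.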

There are also problems with statistical comparisons to the general population that will persist even once the right denominator (number of people tested) is used, as it should be. In particular, there is selection bias: people in the general population usually get tested only when they appear to have COVID-19 symptoms. So, at the moment of testing, people in the general population are already much more likely to have COVID-19 than people at BU, who are mostly tested while asymptomatic.
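A toy calculation makes the selection-bias point concrete. The figures below are invented; they only show why the two positivity rates answer different questions:

```python
# Toy illustration of selection bias (all numbers invented).
# In the general community, mostly symptomatic people seek out tests;
# at BU, everyone is tested regardless of symptoms.
community_tested = 1_000          # mostly symptomatic test-seekers
community_positive = 80           # high positivity among the symptomatic

bu_tested = 5_000                 # surveillance testing, mostly asymptomatic
bu_positive = 25

community_rate = community_positive / community_tested   # 8.0%
bu_rate = bu_positive / bu_tested                        # 0.5%

# The gap reflects *who* gets tested, not only how much virus is present,
# so the two rates are not directly comparable.
print(f"community positivity: {community_rate:.1%}")
print(f"BU positivity:        {bu_rate:.1%}")
```

Even if BU and its surrounding community had identical infection levels, the community’s rate would look far higher simply because its tested sample is preselected for symptoms.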

It might be that an improved, less misleading Dashboard would still suggest that BU is doing very well because of its testing program. But suppose BU does not satisfactorily fix its Dashboard, and everything nevertheless works out well this semester in terms of the actual rate of infection. Could BU’s leaders then justifiably declare, “SEE, you didn’t need to worry about the Dashboard”? No. We are a university, and university research should not be corrupted by misleading PR. Even if things turn out well in the present public health crisis, BU will still deserve criticism if it does not fix this Dashboard.