Show us the Information

It’s been a couple of weeks since I and others started pointing out that there are serious problems with BU’s Dashboard, problems that stem from the fact that the denominator used for BU data throughout is the number of tests (a number that keeps going up and up, since students are tested twice a week and in-person instructors once a week), rather than the number of people tested (a much lower number that more or less hit a ceiling a week ago). Since then the Dashboard design has been changed, with some information added and some removed. We were told in August that the data for the number of people tested would be available on September 6, but no attempt has been made even to add percentages that use the number of people tested, let alone to move away from an emphasis on data that relies on the number of tests as a denominator. This means BU is continuing to leave itself open to the charge of aiming to mislead people when it comes to comparing positive test percentages across BU and the wider community.
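To make the arithmetic behind this complaint concrete, here is a minimal sketch, using made-up numbers rather than actual BU figures, of how the choice of denominator changes the reported positivity percentage: the same count of positive results looks much smaller when divided by the (large and growing) number of tests than when divided by the (smaller, roughly fixed) number of people tested.

```python
# Illustrative only: all numbers below are hypothetical, not BU data.

positives = 20            # hypothetical positive results in a given week
tests = 30000             # hypothetical total tests (people tested repeatedly)
people_tested = 12000     # hypothetical distinct individuals tested that week

rate_per_test = 100 * positives / tests            # about 0.07%
rate_per_person = 100 * positives / people_tested  # about 0.17%

print(f"Positivity per test:          {rate_per_test:.2f}%")
print(f"Positivity per person tested: {rate_per_person:.2f}%")

# Comparing a per-test figure with a per-person figure is not an
# apples-to-apples comparison; like must be compared with like.
```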

Employee data has now been added, and this is a good thing, but there has been no attempt to address the serious problem just mentioned, and in one respect the new Dashboard has actually been made worse. Prior to the recent changes, the Dashboard reported the number and percentage of invalid tests. Now this data is not being reported at all. Why remove these updates? We have not been provided with any justification for doing so. We have, however, been told that the testing process has changed, with a new type of swab kit being used, in order to speed up the process of returning results to people after they are tested. Last week we were seeing average sample processing times in the high 30s of hours (e.g. an average of 39.5 hours for results received on September 3, and an average of 38.9 hours for results received on September 6), well above the promised 24-hour average waiting time. The numbers have now come down (e.g. 25.6 hours for September 10). These facts might lead one to ask: given that the reporting of invalid test numbers on the Dashboard was stopped at the same time that the testing process was changed to speed it up, was the decision to stop reporting invalid test numbers made because of an expectation that these changes would cause invalid test numbers to rise significantly?

Let me be clear: I am not saying that, if the percentage of invalid tests has now increased, this invalidates the move to using tests that can be processed more quickly. Reducing turnaround times is clearly important for efforts to keep the infection rate down. The tradeoff here may well be worth it. What I am saying is that there is no good reason to hide information about invalid tests. BU should continue to provide that information while also explaining to us why the tradeoff in question is worth making, if there has been a tradeoff (and if the hypothesis suggested by the above question is false, BU should go back to publishing invalid test numbers anyway, in order to build trust, which public health experts tell us is a very valuable resource in a pandemic).

As far as I am aware, we at BU haven’t received a single general communication providing details concerning false positive rates, false negative rates, or invalid tests. We remain completely in the dark on these issues. Again, let me emphasize that I’m not saying that if these rates were properly discussed we would see that something terrible has been going on; on the contrary, I think trust in the testing program would increase. The basic problem is one of disrespect. Students and faculty deserve to be treated as intelligent adults, and kept fully informed. It is odd that this even needs to be said when talking about a university, but, sadly, there is so much PR spin to be found in higher education these days that I guess it does need to be said.
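Why do these rates matter to the rest of us? Here is a minimal sketch, again with purely hypothetical numbers (none of these are BU’s actual figures), of the standard point that at the low prevalence expected in a heavily screened population, even a small false positive rate can mean that a sizeable fraction of reported positives are not true infections.

```python
# All inputs below are assumptions chosen for illustration, not BU data.

prevalence = 0.002          # assumed fraction of tested people actually infected
false_negative_rate = 0.05  # assumed; sensitivity = 0.95
false_positive_rate = 0.01  # assumed; specificity = 0.99

sensitivity = 1 - false_negative_rate
specificity = 1 - false_positive_rate

# Probability that a positive result reflects a true infection (Bayes' rule).
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
positive_predictive_value = sensitivity * prevalence / p_positive

print(f"Positive predictive value: {positive_predictive_value:.1%}")
# Roughly 16% with these made-up inputs: most "positives" would be false alarms,
# which is exactly why the actual rates, and how they are handled, should be
# communicated openly.
```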

Credit where credit is due. BU has now started issuing weekly COVID-19 bulletins. This is in line with a recommendation that I made a few weeks ago, with the invaluable assistance of Professor Michael Siegel, after referring to the public health literature on the importance of a good information policy in a pandemic. The first of these bulletins contains too little useful information, but hopefully we’ll learn much more as time goes by. I’m happy to see that people are asking questions and pressing for much-needed improvements in the comments on the bulletin. We have also now been provided with data regarding student non-compliance.

Higher ed reporter Kirk Carapezza has produced an excellent WGBH Radio piece on BU’s mistaken decision not to inform in-person instructors when students in their classes test positive. It includes interviews with Wendy Parmet (Distinguished University Professor of Law at Northeastern and a public health law expert, who asserts that there is no good legal justification for this policy choice), Professor Michael Siegel, and me. I also recommend keeping up with Michael Siegel’s new blog. Recent posts include “The LfA Marketing Scheme Revisited: Two Weeks In, It is Clear that this was a Hoax to Lure in Tuition Dollars” and “New Data from Boston Public Schools Demonstrates How BUSPH Learn from Anywhere Approach is a Racist Policy,” as well as a guest post by John Sherman, an attorney and senior program fellow at the Harvard Kennedy School.

UPDATE, September 15: As discussed previously here (near the bottom of the page), in August BU Information Services and Technology provided student LfA moderators with a sexist and culturally insensitive dress code. The relevant set of instructions was fairly quickly withdrawn, and I have now been informed that Professor Kecia Ali (Religion) and Professor Cati Connell (Director, Women’s, Gender and Sexuality Studies) have worked together with Tracy Schroeder, Vice President of Information Services and Technology, to produce a revised IS&T Code of Conduct that is no longer problematic in the relevant respects. To Tracy Schroeder and IS&T’s further credit, these recent changes, the rationale for them, and the process by which they were made have also been openly discussed in an email provided to student moderators (and elsewhere, no doubt).