It was made explicit in a meeting of department chairs this week that, when it comes to planning for the Spring semester, BU is presently assuming that instructors who have not successfully applied for workplace adjustments permitting them to teach remotely will continue to follow the LfA hybrid model and teach on campus. This is not surprising news, but one hopes that reliable survey data will be systematically gathered from professors and other instructors, as well as from students, regarding the success or otherwise of LfA classes, before any final decision to continue with LfA is made. My sense is that LfA has, in general, been something of a disaster, both for instructors and for students. However, I fully admit that the evidence for this from the current semester (let’s not forget that we had relevant evidence from pedagogical experts before the semester began) has so far been anecdotal (see here, here, and here, for instance). Amongst other things, we need to know:
– What percentage of classes are operating in the hybrid mode (a percentage that may drop further as the semester progresses, if courses move online at students’ request)?
– In the classes that are operating in the hybrid mode, what percentage of students are attending in person?
– What percentage of instructors who are teaching their classes in the hybrid mode think they could provide a better learning environment for their students if they were instead to teach remotely next semester (something that most are not presently permitted to do)?
Of course, even if a university-administered survey is provided at some point, we know from experience that it may not be taken seriously by BU’s leaders. In any case, cognizant of the fact that the university has not yet systematically surveyed the BU community, the BU PhD Student Coalition has decided to take the lead by setting up a survey, and I particularly recommend that students and instructors who have direct experience with LfA complete this survey.
The second BU Weekly COVID-19 Report has been published online. It contains responses to some of the concerns that have been raised regarding the Dashboard, including the following news: “Now that the University-wide coronavirus surveillance plan has been in place for more than six weeks, and students, faculty, and staff have settled into on-campus or remote learning, BU plans to share the number of people in its testing population … ‘we’re going to put that number in the dashboard.’” This is a response to a concern that I and others raised in August. We are told that the number of people who have been tested will be posted somewhere on the Dashboard, but we are not told whether the percentage of people who have tested positive will be recast so that the number of people tested, rather than the highly misleading number of tests, is used as the denominator (it is telling that the word “denominator” appears nowhere in the report, and that percentages are not mentioned in this context). We are also not told why it was thought necessary to wait more than six weeks to start reporting the number of people tested. The idea of waiting for people to be fully settled is presented as though it were a reason for this delay, but it does not seem like a good reason at all to postpone moving to a much less misleading denominator.
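The denominator point can be made concrete with a small sketch. All numbers below are hypothetical, chosen only for illustration and not drawn from BU’s Dashboard; the point is simply that, when each person is tested more than once, dividing positive results by the number of tests makes the positivity rate look lower than dividing by the number of people tested.

```python
# Illustrative only: how the choice of denominator changes a reported
# positivity percentage under repeat testing. All figures are hypothetical.

positive_results = 50        # hypothetical count of positive results
total_tests = 100_000        # hypothetical number of tests performed
tests_per_person = 2         # hypothetical average: each person tested twice
people_tested = total_tests // tests_per_person

# Same positives, two denominators:
rate_per_test = 100 * positive_results / total_tests      # positives / tests
rate_per_person = 100 * positive_results / people_tested  # positives / people

print(f"Positivity per test:   {rate_per_test:.2f}%")    # 0.05%
print(f"Positivity per person: {rate_per_person:.2f}%")  # 0.10%
```

With these made-up figures, the per-test rate is half the per-person rate; the more often each person is retested, the larger the gap becomes, which is why the number of tests is a misleading denominator.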
Is the Dashboard being provided for public relations reasons, or for public health reasons? It seems that public relations concerns have, to a certain extent, been driving the way the data is presented on the Dashboard, when they should instead be taking a back seat to public health concerns.
Regarding the invalid test numbers that were previously provided on the Dashboard but suddenly stopped appearing, the report makes no attempt to address the issue, raised in my last post, that moving to a process that significantly reduces the time it takes to receive test results may also have significantly increased the percentage of tests that are invalid. Instead, the report simply says: “The daily and cumulative numbers of invalid tests were removed from the dashboard… because an invalid result requires that person come back for repeat testing within 24 hours.” The information about retesting being required within 24 hours is helpful to have, and at least this change to the Dashboard was addressed in the report. Still, being provided with more information, of a kind that was already being reported, seems better than being provided with less.