Wednesday, April 15, 2020

the most important thing i got from cuomo today is that it seems like he's going to make an attempt to correct the numbers coming out, to try to clarify how the curve is developing. it's probably not a coincidence that the dip under 700 came on a sunday; that suggests a weekend reporting lag carrying deaths into monday or tuesday.
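that kind of day-of-week artifact is usually handled by averaging over a full week, so every weekday appears once in each window. here's a minimal sketch; the counts below are invented for illustration, not real numbers.

```python
# a minimal sketch: smoothing out day-of-week reporting artifacts
# with a trailing 7-day average. the numbers below are made up
# for illustration; they are not real daily death counts.

def trailing_week_average(daily_counts):
    """Average each day with the six days before it, so every
    day of the week appears exactly once in a full window."""
    out = []
    for i in range(len(daily_counts)):
        window = daily_counts[max(0, i - 6):i + 1]
        out.append(sum(window) / len(window))
    return out

# a fake series with a sunday dip and a monday catch-up
raw = [750, 760, 740, 770, 730, 720, 650,   # week 1, dip on day 7
       790, 780, 745, 760, 735, 715, 640]   # week 2
smoothed = trailing_week_average(raw)
```

the smoothed series barely moves across the sunday dip, which is the point: the dip is a reporting artifact, not necessarily a change in the underlying curve.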

in mathematics, we can actually talk about facts, if we are very careful in doing so. that's unusual in the existing epistemological framework, where our understanding of reality is necessarily limited by our inability to perfectly measure it. so, what exactly are the facts, here? that's what people like me are sitting here and trying to figure out, by analyzing the data in front of us.

and, these are two different things, facts and data. data is messy, full of error and full of bias. you can sometimes pull facts out of the data, but you have to do it carefully. you should never conflate the two ideas.

i am not a statistician, but i could be if i wanted to be, and, to a very large extent, what a statistician does is analyze bias. they don't just look at the data, because they know immediately that the data, on its face, is always wrong. so, they figure out tools to analyze the data, tools that are not objective, but are rather reliant on the assumptions put into them. if you want to get into the philosophy of this, look up david hilbert, who wrote most of it. but, the basic point is that anything a statistician gives you is really just their informed opinion, because what they're doing is trying to identify biases in the data and correct for them.

statisticians will consequently give you a variety of analyses, and they're all just somebody's opinion as to what test needs to be used where, what bounds are appropriate, etc.
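to make that concrete, here's a toy version of one standard bias correction, post-stratification: the sample over-represents one group, so the naive average is wrong, and reweighting by known population shares fixes it. every number here is invented, and the group labels are placeholders.

```python
# a toy illustration of bias correction by post-stratification.
# the sample over-represents one group, so the naive average is
# biased; reweighting by known population shares corrects it.
# all numbers and labels here are invented.

# assumed population shares for two groups
population_share = {"urban": 0.5, "rural": 0.5}

# a biased sample: 80% urban respondents, 20% rural
sample = [("urban", 10.0)] * 80 + [("rural", 20.0)] * 20

def naive_mean(sample):
    """Ignore the sampling bias entirely."""
    return sum(v for _, v in sample) / len(sample)

def poststratified_mean(sample, shares):
    """Average within each group, then weight by population share."""
    by_group = {}
    for group, value in sample:
        by_group.setdefault(group, []).append(value)
    return sum(shares[g] * sum(vs) / len(vs) for g, vs in by_group.items())
```

the naive mean comes out at 12, the reweighted mean at 15 - and notice that the correction is only as good as the population shares you assumed, which is exactly the sense in which the result is an informed opinion rather than a fact.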

so, when is something a fact? it's when you decide that it is, and you'll know it is when you've convinced yourself of it. that sounds hokey, but it isn't. first, you have to do more than just measure it once - if something is a fact, it should be repeatably demonstrable as such. worse, if you can't repeat the experiment, as is often the case in economics, and as is the case in the current scenario, then you need to be very cautious about using words like "fact". repeatability is fundamental to the idea of factuality. you should also be able to derive the same facts using multiple methods; that is, facts should be consistent. and, yes, you have to run the tests to figure this out; otherwise you shouldn't be using terms like "fact", because the definition of a fact is tied up in these tests.
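when you can't rerun the experiment itself, one stand-in for repeatability is resampling: recompute the estimate on many resamples of the data you have and see whether the answer holds steady. a minimal sketch, on invented data:

```python
# repeatability, sketched as bootstrap resampling: if an estimate
# is worth calling a fact, recomputing it on resampled data should
# give roughly the same answer each time. the data are invented;
# the seed just makes this run reproducible.
import random

random.seed(0)
data = [random.gauss(100, 15) for _ in range(500)]

def bootstrap_means(data, n_resamples=200):
    """Recompute the sample mean on resamples drawn with replacement."""
    means = []
    for _ in range(n_resamples):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    return means

means = bootstrap_means(data)
spread = max(means) - min(means)
```

a tight spread doesn't make the estimate a fact, but a wide one is a good warning that you shouldn't be using the word yet.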

we don't know how many people have died from this, and we don't know how many people have contracted it. we know how many people have come into hospitals, at least, but that's a very restricted subsample with large amounts of inherent bias that needs to be corrected for before anything can be extrapolated from it.

so, do we know the facts? no. we don't. that's what people like myself are trying to figure out. what are the facts here?

so, when i tell you that we need to wait to know if this is a curve or a peak, i'm pointing to the inaccuracies in the data - i'm pointing out that we shouldn't be drawing conclusions until we're sure we have the facts straight. and i'm actually getting a little backup on that point from cuomo, who is expending what are currently sparse resources to try to actually figure that out.

and, when i say we need to wait for antibody testing to understand whether this is crashing due to immunity or distancing, i'm also making an appeal to the facts - rather than relying on the correctness of models (which are opinions) or on interpolations of incomplete data sets.
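even antibody results won't be raw facts, because no test is perfect: the raw positive rate mixes true and false positives. the standard rogan-gladen correction backs out prevalence from the test's assumed sensitivity and specificity. the figures below are hypothetical, not the characteristics of any real test.

```python
# why raw antibody positives still aren't the facts: an imperfect
# test's raw positive rate mixes true and false positives. the
# rogan-gladen estimator corrects for that, given assumed test
# characteristics. all figures here are hypothetical.

def corrected_prevalence(positive_rate, sensitivity, specificity):
    """Rogan-Gladen: p = (raw + spec - 1) / (sens + spec - 1)."""
    p = (positive_rate + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(p, 0.0), 1.0)  # clamp to the valid range [0, 1]

# hypothetical survey: 12% raw positives on a 90%-sensitive,
# 95%-specific test
est = corrected_prevalence(0.12, 0.90, 0.95)
```

with these made-up numbers the corrected prevalence is about 8.2%, well under the raw 12% - and the answer moves whenever the assumed sensitivity or specificity does, which is why the test characteristics have to be nailed down before anyone calls the result a fact.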

what they've done is run a set of very, very loose models, declare the outcome of that modelling to be reality, and then measure the effectiveness of their behaviour based on the sanctity of the models. while we have no choice but to do the best we can in terms of building models to guide the public service, even if it's not good enough, there is really little excuse for the way they've presented these models. we really have to wait for the data to know the facts.

politicians are actually good people for scientists or logicians to argue with, because they're sneaky. they're good with words; they're good at debate, at spinning concepts over in confusing ways designed to obfuscate, even when they don't really fully understand what is actually exiting their lips. for a logician to sit down and really deconstruct these political briefings is actually great real-world debating practice; it should help both in working through the problems in the presentation and in clarifying the concepts in the logician's own mind.

i would encourage it as good downtime practice, if you find yourself off for the rest of the semester, and unable to find a job in the existing economy.