The chances of this happening were 1 in 1,000,000,000
If your model assigns a probability of one in a billion to something that has happened, your model is almost certainly wrong. (...up to certain assumptions. You have to be careful about cases like 'the probability of any one person winning the lottery is effectively zero'... but someone generally wins the lottery.)
For instance - you give me a supposedly-fair coin. I flip it thirty times and it comes up heads every time. My model is '50/50 chance of heads/tails' - which gives a little less than a one-in-a-billion chance of 30 heads / 30 trials. My model is wrong. Perhaps the coin isn't fair. Or perhaps I'm uncannily good at reproducibly flipping a coin. Or perhaps I can't tell heads from tails on this particular coin. Or perhaps I was lying when I said it got 30 heads. Or perhaps you told this to every person on the planet and I was just the person who got lucky. Etc.
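The coin-flip arithmetic checks out, and is quick to verify (a two-line sketch, not anything from the original post):

```python
# Probability of 30 heads in 30 independent flips of a fair coin.
p = 0.5 ** 30
print(p)         # ~9.3e-10: a little less than one in a billion
print(p < 1e-9)  # True
```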
Or for instance - you give me a stack of a thousand ballots for recounting, split 50/50 Biden/Trump. Upon inspection, 250 ballots previously counted for Biden are now counted for Trump, and 0 ballots previously counted for Trump are now counted for Biden. My model was that mistakes are equally likely in either direction. My model is wrong. (Though, honestly, I haven't bothered to calculate exactly how wrong.)
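For the curious, here is a quick back-of-envelope version of that uncalculated number, under the simple (assumed) model that each of the 250 observed miscounts independently flips in either direction with probability 1/2:

```python
from math import log10

# Under a symmetric-error model, the chance that all 250 observed
# miscounts land in one pre-specified direction is 0.5^250.
p = 0.5 ** 250
print(log10(p))  # ~ -75.3: vastly smaller than one in a billion
```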
I'm not a math guy, but the precise numbers aren't the point of this post.