How did you validate your model?
There are three estimates of excess deaths in the USA, two using surveys and one where I extrapolated Fabian's deaths-to-doses ratio to the USA. All come close.
https://igorchudov.substack.com/p/covid-vaccines-killed-278000-americans
Don't invest too much energy in this guy. I have probably spent close to an hour replying to him, but it turned out he hasn't even read my article.
And? How accurate should we consider any of them?
We will never know the exact number of deaths, and it is not even possible to agree on a precise definition. However, the fact that three completely different estimates came close to each other is extremely notable.
I guess it is. I'm just curious what it indicates. What can we learn, and how certain can we be that we're recognizing/interpreting the signal correctly.
Well I welcome you to join in. The more perspectives we have the better we can further our understanding of the situation.
We can never be completely certain about almost anything. However, we're just doing our best, publishing what we find, and learning from our mistakes.
That's why we need to try and figure out the degree of uncertainty, or error.
Mistakes are part of research, but we can limit them by being aware of the assumptions we make, and their limitations. And any computational model will only be as reliable as the assumptions it's based on, and the data used.
Well the Standard Error and 95% CI I supplied should be sufficient.
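For a Pearson correlation, a 95% CI is commonly obtained via the Fisher z-transform; whether that is how the article computed its interval is an assumption, and the example r below is an illustrative value, not one taken from the data.

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson r via the Fisher z-transform."""
    z = math.atanh(r)                     # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)           # standard error in z-space
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to r-space

# Illustrative: r = 0.9 over 20 weekly observations.
lo, hi = pearson_ci(0.9, 20)
print(f"95% CI: ({lo:.3f}, {hi:.3f})")
```

Note how wide the interval is with only 20 observations: even a high r carries substantial uncertainty at this sample size.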
Why do you think so?
This is what we've been discussing: what assumptions did you make, and how sure are you that they are accurate?
Numbers give us a false sense of certainty, especially when dealing with such a complicated and vague subject.
You keep asking me the same questions, yet you have failed to read a single of my articles it seems.
And you keep making the same defense.
Let's try something different: what did you use to test your hypothesis that vaccines cause deaths, besides COVID/non-COVID deaths and vaccine doses administered?
Well, I haven't, but you will find out if I am wrong once I find out. Everybody is wrong all the time, so I won't make a big deal out of it.
What's important is working towards a better understanding and staying honest while we do.
It's on the to-do list for sure. Any particular way you want me to validate it? The vague idea I am having is just looking at excess mortality data of those countries that issued their vaccination data on a daily basis, the same way I did for German federal states.
Keep in mind I am not a mathematician or data scientist, just a hobbyist looking to understand what is going on. I'll gladly expand my toolset anytime, so looking forward to hearing your suggestions.
I agree, everyone has a right to be wrong. I was wondering what efforts you made to reduce that chance.
I have no specific expertise in this kind of modeling, but I'm sure you could find some protocol. Method validation is usually conducted using a separate, independent data set that was analysed using another method.
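The out-of-sample validation being suggested can be sketched roughly as follows: fit a simple linear relation on one part of the data, then check how well it predicts a held-out part. All the numbers below are invented placeholders; real validation would use a genuinely independent data set, such as a different country or time period.

```python
# Invented weekly series, constructed to be exactly linear so the
# sketch has a known answer. Real data would come from published sources.
weeks = list(range(20))
doses = [1000 + 500 * w for w in weeks]      # invented dose counts
deaths = [10 + 0.02 * d for d in doses]      # invented, exactly linear

# Split: first 10 weeks to fit, last 10 weeks held out for validation.
train_x, train_y = doses[:10], deaths[:10]
test_x, test_y = doses[10:], deaths[10:]

# Ordinary least-squares slope and intercept on the training half.
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx

# Mean absolute prediction error on the held-out half.
mae = sum(abs((intercept + slope * x) - y)
          for x, y in zip(test_x, test_y)) / len(test_x)
print(f"slope={slope:.4f}, held-out MAE={mae:.6f}")
```

A small held-out error would support the fitted relation; a large one would suggest the in-sample fit does not generalize.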
I agree, honesty and transparency are important. And I'm curious whether your readers understand how to interpret your results? Or if they even care?
I am not sure why you are referring to this as a model. All I did was correlate non-COVID excess deaths with new vaccinations over 20 weeks. I did this for 16 federal states, where the Pearson correlation coefficient ranges from 0.893 to 0.981.
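The correlation step described here can be sketched as follows. The two weekly series below are invented placeholders standing in for one state's data; the real values would come from the published mortality and vaccination figures.

```python
import math

# Hypothetical 20-week series for one federal state (invented numbers).
non_covid_excess_deaths = [120, 150, 200, 260, 310, 400, 520, 610, 700, 760,
                           800, 780, 720, 650, 560, 470, 380, 300, 240, 190]
new_vaccinations = [10e3, 14e3, 19e3, 25e3, 31e3, 40e3, 52e3, 60e3, 69e3, 75e3,
                    79e3, 77e3, 71e3, 64e3, 55e3, 46e3, 37e3, 29e3, 23e3, 18e3]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(non_covid_excess_deaths, new_vaccinations)
print(f"r = {r:.3f}")
```

Note that a high r only measures co-movement of the two curves over time; it says nothing by itself about causation or about confounders that shape both series.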
The only modelling happens when I calculate excess mortality. I don't see how that needs to be validated and to be frank, I would not know how.
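One common way excess mortality is calculated, which may or may not match the article's approach, is observed deaths minus an expected baseline taken from the same calendar week in pre-pandemic years. All figures below are invented for illustration.

```python
# Invented deaths in one calendar week across pre-pandemic reference years.
baseline_years = {2016: 950, 2017: 980, 2018: 1010, 2019: 990}
observed_2021 = 1150  # invented observed deaths in the same week of 2021

# Baseline = mean of the reference years; excess = observed - baseline.
expected = sum(baseline_years.values()) / len(baseline_years)
excess = observed_2021 - expected
print(f"expected={expected}, excess={excess}")  # expected=982.5, excess=167.5
```

The choice of baseline (which years, mean vs. trend, population adjustment) is exactly the kind of assumption that validation would need to probe, since the resulting "excess" depends directly on it.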
The question isn't 'what' you did, but 'how' you did it, and 'why' you did it the way you did.
At this point I'll have to ask you if you even read the article, because I am providing all the code and data to the readers in the Methods section.
The code is pretty lean, too. It's not hard to understand at all, very basic.
I have, and I kept looking for the section that explains the basic assumptions used for your analysis.
I understand that you evaluated the correlation between mortality and vaccine doses.
What other work did you do? Did you evaluate any other factors? Do you have any information regarding the vaccination status of the people who died?