I know it's not the intent of the post but I seriously doubt any of these "effects" are adjusted for multiple comparisons. They're probably random noise.
I.e. bad science. I'll have to take a closer look though.
Statistically significant findings (p<0.05) will happen by chance 5% of the time even when there is no real difference. So if you run 20 different hypothesis tests, you'd expect an average of 1 "significant" result that's purely noise. They did just that. Also, this is a survey, so probably terrible for any kind of meaningful inference. Probably another academic dredging their data (p-hacking) so they can publish.
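To put numbers on that claim: under the null hypothesis, p-values are uniform on [0, 1], so each test has a 5% chance of dipping below 0.05 by luck. A quick simulation (my own illustration, not from the article; the study/test counts are just the ones discussed above) shows what 20 tests with no real effects produce:

```python
import random

# Under the null, p-values are uniformly distributed on [0, 1],
# so each test "succeeds" (p < 0.05) 5% of the time by chance alone.
ALPHA = 0.05
N_TESTS = 20        # hypotheses tested per study, as in the comment above
N_SIMS = 100_000    # simulated studies where nothing real is going on

random.seed(42)
false_positives = 0
studies_with_a_hit = 0
for _ in range(N_SIMS):
    hits = sum(random.random() < ALPHA for _ in range(N_TESTS))
    false_positives += hits
    studies_with_a_hit += hits > 0

mean_fp = false_positives / N_SIMS          # expected ~1.0 false positive per study
frac_any = studies_with_a_hit / N_SIMS      # expected ~0.64, i.e. 1 - 0.95**20
print(mean_fp, frac_any)
```

So roughly two out of three such studies report at least one "effect" even when there is none, which is exactly what corrections like Bonferroni (testing at 0.05/20 instead of 0.05) are meant to guard against.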
Wooshed me. Take a look and let me know what that means: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7040223/