True! The scientific method was devised by the same people who directly inspired our nation. It was meant to be a method by which anyone could reproduce the same results and compare them, allowing people to gain true knowledge instead of relying on a large body to filter it for them.
Obviously, we're in need of a return to form on that. Climate change science is a chief example. Somehow, ONLY the giant science foundations have the resources and the ability to process the data correctly, which violates the principle that science should be accessible. With something that complex, that makes a degree of sense, but there should nonetheless be experiments individuals can perform to verify the data, yet there do not appear to be any.
Is it wrong? I can't prove it's wrong, but I treat it with a strong dose of skepticism because of its inaccessibility (and everything else wrong with it).
Another example might be astrophysics/space science. Unlike climate, however, it IS observable and reproducible by individuals if necessary. People have launched their own satellites; the curvature of the Earth can be measured with basic math, and the patterns of the planets and stars can be observed with fairly simple equipment (depending on the scope in question). At much greater depth (the expansion of the universe and so on), the data from many of the relevant facilities is relatively accessible to the public (large-scale space telescopes, for example).
In the private sector, if I had an employee who kept missing their marks and insisting that they just needed to update their model with new information, I would give them an opportunity to do so at least once, but I would require them to set bounds on the reliability of an acceptable prediction beforehand, based on their previous results.
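To make "set bounds beforehand based on previous results" concrete, here is a minimal sketch of one way to do it: derive an acceptance tolerance from the model's past forecast errors, then check the next prediction against it. The function names and the k-standard-deviations rule are my own illustration, not a prescribed method.

```python
import statistics

def acceptable_bounds(past_errors, k=2.0):
    """Hypothetical helper: derive an acceptance tolerance for the next
    prediction from past residuals (predicted minus actual). The tolerance
    is the mean absolute error plus k standard deviations of the errors."""
    mae = statistics.mean(abs(e) for e in past_errors)
    spread = statistics.stdev(past_errors)
    return mae + k * spread

def prediction_acceptable(predicted, actual, tolerance):
    """The pre-agreed pass/fail test: the miss must fall within tolerance."""
    return abs(predicted - actual) <= tolerance

# Example: bounds agreed on before the next forecast, using prior misses.
tolerance = acceptable_bounds([1.0, -2.0, 1.5, -0.5])
```

The point of fixing the tolerance up front is that the employee can no longer move the goalposts after the fact; the prediction either lands inside the pre-agreed band or it doesn't.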
If the model proved inaccurate a "second" time (I'd assume there would be iterative development and improvement of the model between assessments and the issuance of predictions), I would suggest two things be done before proceeding: (1) find a comparable system and model for a different data set and determine what their success metrics are, and (2) assuming there was a significant variance (there would be in this case), reduce the complexity and shorten the development cycle by breaking the model into smaller component models.
Ideally you would then end up with a system where scenarios like the following could be modeled and specifically tested using components with standardized inputs and outputs that could be refined over time whenever they were used in a meta-model.
Alice wants to know what impact a 50% increase in algae blooms in a waterway adjacent to their habitat could have on the reproductive cycle of an endangered bird species over the next 12 months, and which variables, if any, could reduce the velocity of any negative impact on desired outcomes for that bird population.
This approach requires an extraordinary amount of structured data, but it typically produces excellent results over time, as individual issues with component models can be addressed and resolved verifiably on a case-by-case basis. It also means that labor is conserved across the organization, because the component models, assuming standardized input/output with some wiggle room, can be repurposed as-is or with modifications (Docker for Data Modeling, essentially).
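A minimal sketch of what that component/meta-model structure might look like for Alice's scenario: each component consumes and enriches a shared state dict keyed by agreed field names (the "standardized input/output" contract), and the meta-model is just a pipeline over them. Every model name, field name, and coefficient here is invented for illustration; real components would wrap real fitted models.

```python
from typing import Callable, Dict, List

# The standardized contract: a component maps one state dict to another.
Component = Callable[[Dict[str, float]], Dict[str, float]]

def algae_bloom_model(state: Dict[str, float]) -> Dict[str, float]:
    # Toy placeholder: dissolved oxygen falls with bloom coverage.
    state["dissolved_oxygen"] *= 1 - 0.4 * state["bloom_fraction"]
    return state

def insect_population_model(state: Dict[str, float]) -> Dict[str, float]:
    # Toy placeholder: aquatic insect prey scales with oxygen.
    state["insect_index"] = state["dissolved_oxygen"] / 10.0
    return state

def bird_breeding_model(state: Dict[str, float]) -> Dict[str, float]:
    # Toy placeholder: breeding success follows prey availability.
    state["breeding_success"] = min(1.0, state["insect_index"])
    return state

def run_meta_model(components: List[Component],
                   initial_state: Dict[str, float]) -> Dict[str, float]:
    """Chain the components; each one reads and enriches the shared state."""
    state = dict(initial_state)
    for component in components:
        state = component(state)
    return state

# Alice's question: a 50% bloom fraction, starting from baseline oxygen.
result = run_meta_model(
    [algae_bloom_model, insect_population_model, bird_breeding_model],
    {"bloom_fraction": 0.5, "dissolved_oxygen": 10.0},
)
```

Because each component only sees the shared dict, you can swap in a better algae model, or reuse the insect model in a different waterway's meta-model, without touching the rest of the pipeline, which is where the "Docker for Data Modeling" reuse comes from.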