In the private sector, if I had an employee who kept missing their marks and insisting that they just needed to update their model with new information, I would give them at least one opportunity to do so, but I would require them to set acceptable reliability bounds for the prediction beforehand, based on their previous results.
If the model proved inaccurate a "second" time (I'd assume there would be iterative development and improvement of the model in between assessments and issuance of predictions), I would suggest two things be done before proceeding: (1) find a comparable system and model for a different data set and determine what its success metrics are, and (2) assuming there was a significant variance (there would be in this case), reduce the complexity and shorten the development cycle by breaking the model into smaller component models.
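One way to sketch that pre-agreed reliability bound is a simple gate derived from the model's own track record. Everything below is a hypothetical illustration: the error history, the multiplier, and the function names are invented, and a real agreement would pick a bound suited to the domain.

```python
# Sketch of a pre-agreed reliability gate for a model's next prediction.
# All names and numbers are hypothetical placeholders.
import statistics

def reliability_bound(past_abs_errors, multiplier=1.5):
    """Acceptable absolute error, derived from the model's prior misses."""
    # Median past error scaled by an agreed multiplier; the multiplier
    # is the negotiated "wiggle room" fixed before the next prediction.
    return statistics.median(past_abs_errors) * multiplier

def prediction_acceptable(predicted, actual, bound):
    """Did the new prediction land within the agreed bound?"""
    return abs(predicted - actual) <= bound

# Hypothetical absolute errors from the employee's earlier predictions.
history = [4.0, 6.5, 5.0, 8.0, 5.5]
bound = reliability_bound(history)  # median 5.5 * 1.5 = 8.25

print(prediction_acceptable(100.0, 104.0, bound))  # inside the bound
print(prediction_acceptable(100.0, 120.0, bound))  # well outside it
```

The point of agreeing on the bound beforehand is that the second assessment becomes a yes/no check against a number both parties signed off on, rather than a debate after the fact.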
Ideally you would then end up with a system where scenarios like the following could be modeled and specifically tested using components with standardized inputs and outputs, refined over time whenever they were used in a meta-model.
Alice wants to know what impact a 50% increase in algae blooms in a waterway adjacent to their habitat could have on the reproductive cycle of an endangered bird species over the next 12 months, and which variables, if any, could reduce the velocity of any negative impact on desired outcomes for that bird population.
This approach requires an extraordinary amount of structured data, but it typically produces excellent results over time, since individual issues with component models can be addressed and resolved verifiably on a case-by-case basis. It also conserves labor across the organization, because the component models, assuming standardized input/output with some wiggle room, can be repurposed as-is or with modifications (Docker for data modeling, essentially).