That is, instead of talking about uncertainties, could we estimate how well our knowledge can simulate the real phenomenon? In many cases it is easier to say that we ignore parts of a phenomenon than to define what we don't know... We could see that as "ignorance modelling".


Great question! I agree with you that if a set of models does not include a specific process that is relevant for a given application, it should not be used for that application. There are many ways of composing and creating multi-models. There are even examples suggesting that a few good models are better than a large multi-model average (and often models that score high in one variable score high in many others, because of modelling improvements but also connections across variables). However, in many cases large ensembles better characterize the distribution, and several "dimensions" should be explored (skill, model independence, ...). Another key point is the evaluation and quality control function - this should include a "modeller's" evaluation. Very interesting to suggest the use of this "knowledge" in building multi-model components.
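To make the "skill and independence" dimensions concrete, here is a minimal sketch of a weighted multi-model mean. It is a toy illustration, not any operational scheme: the distance metrics, Gaussian shape parameters (`sigma_d`, `sigma_s`) and the toy numbers are all assumptions made up for the example. The idea is simply that models far from observations are down-weighted (skill), and models that are near-duplicates of each other share their weight (independence).

```python
import numpy as np

def ensemble_weights(distance_to_obs, intermodel_distance, sigma_d=0.5, sigma_s=0.5):
    """Toy skill-and-independence weights for a multi-model ensemble.

    distance_to_obs: (n,) distance of each model from observations (skill term)
    intermodel_distance: (n, n) pairwise distances between models (independence term)
    sigma_d, sigma_s: hypothetical shape parameters controlling each penalty
    """
    # Skill: models close to observations get larger weight
    skill = np.exp(-(distance_to_obs / sigma_d) ** 2)
    # Independence: models close to many other models get down-weighted
    similarity = np.exp(-(intermodel_distance / sigma_s) ** 2)
    np.fill_diagonal(similarity, 0.0)
    independence = 1.0 / (1.0 + similarity.sum(axis=1))
    w = skill * independence
    return w / w.sum()

# Three toy models: models 0 and 1 are near-duplicates, model 2 is distinct
d_obs = np.array([0.2, 0.25, 0.6])
d_mm = np.array([[0.0, 0.1, 0.8],
                 [0.1, 0.0, 0.8],
                 [0.8, 0.8, 0.0]])
w = ensemble_weights(d_obs, d_mm)

# Weighted ensemble mean of some projected quantity (toy values)
projections = np.array([2.1, 2.2, 3.0])
weighted_mean = float(w @ projections)
```

Note how the near-duplicate pair (models 0 and 1) effectively splits one vote between them, so a large ensemble padded with closely related models does not automatically dominate the average - one way of formalizing the point that ensemble size alone is not what matters.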

Glad you find this interesting. Many climate sceptics use the "uncertainty" argument against climate models, but this "ignorance modelling" would help clarify the situation IMHO. I would suggest that climate scientists have a look at what people in "superforecasting" communities do. There could be a lot of hidden "qualitative" information provided informally by scientists that clever "crowd sourcing" could help bring into the models.


  1. Gaining confidence in models is a long-term path. Modellers gain confidence in their own tools with time, after spending years exploring the way models react to sensitivity tests or behave in different situations, and the way they compare to reference observations or to other models of the same kind. The confidence also depends on the number of users of a given tool: the more users, the more opportunities to detect issues or validate the model's behaviour. In addition, this confidence is clearly time-, space- and variable-dependent. For example, the same modeller speaking about the same model could give a good confidence level for the representation of daily-scale precipitation aggregated over France but a low confidence level for the representation of extreme hourly precipitation over the city of Marseille. So, a very good question, but no easy answer.