Simulation is conducted to provide data which would be more difficult and expensive to obtain through operation of the real process.
If there is no real process (a project is being proposed and data is required for the design) then there is no absolute way of testing the model. It is easy to assume the model is perfect, and to mistake checking that the model is correctly configured for checking that the model is correct.
Many models have acquired a status (through common use and the number of publications saying "it has been shown that") of being unquestioned, yet untested. Some of the most basic assumptions - which can easily be tested with readily available operating data - are simply assumed to be correct, and the validating data is either never collected or is ignored.
Flotation and leaching are two examples of processes where a simple first order reaction rate curve is often assumed to apply to a single component, and many papers have been written with "boiler plate" optimisation studies of flotation circuit changes based on this assumption. Many of these and other studies report only average results (or combined end conditions), such as the final oxidation extent in POX or the combined rougher bank concentrate grade.
Unless the model can be shown to reproduce the shape of the process response curve (the quench water flow distribution, or the "down the bank" sulphur grades) it should not be accepted as a valid model for predicting the process response to circuit changes.
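To see why matching only the end condition is not enough, here is a minimal sketch (all rate constants, residence times, and cell counts are invented for illustration). A single-component first-order model and a fast/slow two-component model are tuned to give almost the same final bank recovery, yet their "down the bank" profiles are clearly different - exactly the mismatch that combined end-condition reporting hides.

```python
# Down-the-bank recovery: why matching the end point is not enough.
# Each cell is treated as perfectly mixed, so per-cell first-order
# remaining fraction is 1 / (1 + k*tau). All numbers are hypothetical.

def bank_profile(components, tau, n_cells):
    """Cumulative recovery after each cell.
    components: list of (mass_fraction, rate_constant) pairs, fractions sum to 1."""
    remaining = [f for f, _ in components]
    rates = [k for _, k in components]
    profile = []
    for _ in range(n_cells):
        for i, k in enumerate(rates):
            remaining[i] *= 1.0 / (1.0 + k * tau)
        profile.append(1.0 - sum(remaining))
    return profile

tau = 2.0   # minutes residence time per cell (hypothetical)
n = 6       # cells in the bank

# Single lumped rate constant, chosen so the final recovery matches
# the two-component (fast-floating / slow-floating) model below.
single = bank_profile([(1.0, 0.175)], tau, n)
two = bank_profile([(0.6, 1.2), (0.4, 0.08)], tau, n)

print("cell  single  two-component")
for i, (a, b) in enumerate(zip(single, two), 1):
    print(f"{i:>4}  {a:6.3f}  {b:6.3f}")
```

Both models end the bank at essentially the same recovery, but the two-component model recovers far more in the first cells - so fitting the combined end condition alone would never reveal that the single-rate assumption is wrong.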
You can forget about froth recovery to concentrate (or detachment and fall-back of mineral particles into the slurry phase), bubble size distributions, and lots of other very difficult things to measure until you are sure you have a good solid model that gets the basics right. That means the response to particle size, to different minerals, and possibly to degree of liberation is modelled reasonably.
This doesn't mean that "froth recovery" and bubble size distribution are not important - they may be - but they are almost certainly less important than the BIG levers of particle size distribution, distribution of mineralisation, and liberation. In any case, the machine is designed to produce the best bubble size distribution achievable in the practical operating environment, and impeller wear, misalignment, and the like are a much bigger factor than a theoretical bubble distribution equation based on minimum surface energy or whatever.
The METSIM simulation system provides many models that are well documented in books, numerous publications, and published theses. The models can be implemented in simplified or more rigorous form, and can be enhanced or modified to suit your particular requirements and observations. The flotation model (which has always been capable of rate by component, rate by size, rate by degree of liberation, rate by reagent addition, etc.) can be extended to rate by gas superficial velocity, rate by bubble size distribution, or any other rate mechanism you think is important to your application. But the implementation in METSIM is not "hardwired": you are not forced to enter questionable values for froth recovery or superficial gas velocity just to fit experimental data, as you would be if those equations were combined into the code.
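As a sketch of what "rate by component, rate by size" means in practice (the minerals, size classes, and rate constants below are invented for illustration, and this is not METSIM syntax or data), the rate constant becomes a table indexed by mineral and size class rather than a single lumped value:

```python
# Rate constants indexed by mineral and size class (all values hypothetical).
# Fine, middle, and coarse particles of the same mineral float at different
# rates, so one lumped k cannot reproduce size-by-size recoveries.

K = {
    # mineral: {size class (microns): k per minute}
    "chalcopyrite": {"-38": 0.90, "38-106": 1.40, "+106": 0.50},
    "pyrite":       {"-38": 0.30, "38-106": 0.50, "+106": 0.15},
    "gangue":       {"-38": 0.05, "38-106": 0.02, "+106": 0.01},
}

def cell_recovery(k, tau):
    """First-order recovery in a single perfectly mixed cell."""
    return k * tau / (1.0 + k * tau)

tau = 2.0  # minutes residence time (hypothetical)
for mineral, by_size in K.items():
    recoveries = {s: round(cell_recovery(k, tau), 3) for s, k in by_size.items()}
    print(mineral, recoveries)
```

The point of the structure is that validating data (size-by-size, mineral-by-mineral assays) maps directly onto the model's own dimensions, so the basics can actually be tested against plant data.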
Too many models are developed in the confines of research centres with very little regard for practical application in an operating plant, or for the workings of real machines. If these research models were leading to better designed machines, that would be good. But in many cases new machine breakthroughs defy these models (show them to be wrong), and all the models produce is new instrumentation to try to measure what has been assumed (in the research environment) to be important.