There are at least two other important tasks which are deeply entangled with the discipline and philosophy of engineering: one is the labour of modelling, the other the design of approximation techniques. Michael Weisberg has recently written a wonderful book on models and modelling, a topic that was long not taken seriously despite being central to engineering. Weisberg elaborates why all our encounters with reality involve one kind of model or another: descriptive, explanatory and predictive models, for example. Even what we call empirical data are not ready-made; they are products of model projection, which means data can be distorted, or even false data derived, if the model is inadequate, too small or too large in scope, misapplied to a target system, or applied to the wrong sector of reality.

The thing about models is that they are packed with all sorts of implicit and explicit theoretical, mathematical, logical and computational assumptions. Such assumptions encompass not just the model's description but also the core of the model, i.e., the structure and its interpretive factors or construals, which include information about the scope, assignment and fidelity criteria of the model itself. The fidelity criteria pertain to the exact information that specifies the model's representational, dynamic and resolution constraints for a given level or scale. Without proper attention to such details and the assumptions underlying them, all data and facts can be fundamentally distorted or erroneous. The whole myth of raw or pure data is perpetuated by people who have no clue about how data is mined, irrespective of what kind of data we are talking about.
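The decomposition sketched above, a model as a structure together with a construal (assignment, scope and fidelity criteria), can be made concrete in a small schematic sketch. The field names and the toy pendulum example below are illustrative assumptions of mine, not Weisberg's own formalism; the point is only that data projected through a model is trustworthy solely for aspects the construal actually covers.

```python
from dataclasses import dataclass

# Illustrative sketch only: these field names are assumptions,
# not Weisberg's formal notation.

@dataclass
class Construal:
    assignment: dict   # which model parts map to which target-system parts
    scope: list        # aspects of the target the model is meant to capture
    fidelity: dict     # representational / dynamic / resolution constraints

@dataclass
class Model:
    structure: str     # the mathematical or computational core
    construal: Construal

    def within_scope(self, aspect: str) -> bool:
        # Data projected through the model is only meaningful for
        # aspects the construal actually covers; anything else is
        # a misapplication of the model.
        return aspect in self.construal.scope

# A toy pendulum model whose construal covers the period but not air drag.
pendulum = Model(
    structure="T = 2*pi*sqrt(L/g)",
    construal=Construal(
        assignment={"L": "string length", "g": "local gravity"},
        scope=["period"],
        fidelity={"resolution": "small-angle approximation"},
    ),
)

print(pendulum.within_scope("period"))    # True
print(pendulum.within_scope("air drag"))  # False: the model says nothing here
```

The check in `within_scope` is deliberately trivial; the design point is that scope, assignment and fidelity are explicit parts of the model object rather than tacit background knowledge, which is exactly the kind of assumption the passage warns can silently distort data.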