Maintenance in progress?

Daan Kolkman recently defended his PhD thesis in sociology at the University of Surrey in the United Kingdom. Currently, he is involved in setting up the Jheronimus Academy of Data Science, a joint project by the Dutch universities of Tilburg and Eindhoven. 


‘If sociologists had the privilege to watch more carefully baboons repairing their constantly decaying social structure, they would have witnessed what incredible cost has been paid when the job is to maintain, for instance, social dominance with nothing at all, just social skills.’ Latour (2005, p. 70)


Those interested in maintenance stress the importance of the work that is necessary just to keep things going. They direct our view from spectacular innovations to the mundane and everyday efforts that often occur out of sight. This alternative lens of inquiry is valuable in advancing our understanding of socio-technical systems. In this post, I contend that studies of maintenance should not be restricted to what happens once systems break down, but can also address the initial embedding of new services and products. Such a broader conceptualization of “maintenance” can help highlight, amongst other things, how non-experts are central to the success of even highly technical innovations.

The excerpt above is from “Reassembling the Social”, written by French sociologist Bruno Latour. It introduces the idea of decay as the status quo. Left undisturbed, a hot cup of coffee and the room it sits in will become the same temperature. Without any outside intervention, this process is irreversible. The coffee never spontaneously heats up, just as the social standing of a shunned baboon never recovers without struggle; energy and effort are expended to restore temperature and renew relationships. Once we accept decay as an inescapable fact of life, it becomes hard to overlook just how important maintenance is.

Maintenance, however, is not only necessary once things cease to work or break down. Rather, the mundane activities of maintenance are what make most things function to begin with; they are the very stuff our society is made of. It was only after the efficiency of the steam engine had been improved and its operating costs reduced that the technology was mature enough for commercial exploitation. It was not the initial technological feat of the steam engine itself, but its subsequent societal embedding as a steamboat, that made it successful.

Over the past three years, I have studied this process of embedding. More specifically, I focused on the adoption and use of computer models in government. Despite the quite narrow focus of that study, it offers insights that may be useful beyond the domain of information systems.

First, a bit of background. Computer models represent a special class of information systems. They can be defined as a collection of algorithms that someone can use to learn about some societal system. Examples of such systems include the climate, the economy or the housing market. Although the use of computer models in government is hardly new, recent years have seen a sharp rise in interest in using models to inform the policy making process. This should not be surprising, given the impressive accomplishments of machine learning in particular, and data science more generally.

However, scholars who develop computer models have observed a gap between the potential usefulness of computer models and their actual use in practice. Why are organizations failing to make use of these new, exciting, and powerful tools? In answering that question I spent a fair chunk of my time reading the academic literature and engaging with other scholars. However, I also directed my efforts towards studying eight cases of model adoption and use in the Netherlands and the United Kingdom. I developed a clear picture of each case by reading computer model specifications, interviewing policy analysts, using the computer models myself and, yes, observing people developing and using these models in practice. This allowed me to inquire into, and to some degree experience firsthand, how the process of embedding unfolds in practice. What is it that the people who develop, use and interpret these computer models do?

Sure, part of the work they engage in is technical. The development of a computer model involves at the very least some data collection, data wrangling and programming. Once a working version of a model is completed, it is validated against historical data and subjected to sensitivity analysis or other tests before it is used at all. Work on a computer model tends to be ongoing and iterative in nature; it is regularly updated to incorporate new data or theoretical insights. This also involves some technical maintenance work, like version control, error-checking, a system for raising and processing support tickets, etcetera.
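To make these validation steps concrete, here is a minimal sketch of what checking a model against historical data and running a one-at-a-time sensitivity analysis might look like. The model, its parameters and the data are entirely hypothetical; real policy models are far more elaborate, but the logic of the tests is the same.

```python
def housing_model(interest_rate, income_growth):
    """Toy stand-in for a policy model: predicts house price growth (%)."""
    return 2.0 - 5.0 * interest_rate + 1.5 * income_growth

def validate(model, historical_cases, tolerance):
    """Compare model predictions to observed outcomes for past cases."""
    errors = [abs(model(**inputs) - observed)
              for inputs, observed in historical_cases]
    mean_error = sum(errors) / len(errors)
    return mean_error <= tolerance, mean_error

def sensitivity(model, baseline, perturbation=0.1):
    """One-at-a-time sensitivity: nudge each input up by 10 percent
    and record how much the output shifts from the baseline run."""
    base = model(**baseline)
    effects = {}
    for name, value in baseline.items():
        changed = dict(baseline, **{name: value * (1 + perturbation)})
        effects[name] = model(**changed) - base
    return effects

# Hypothetical historical observations: (model inputs, observed outcome).
history = [
    ({"interest_rate": 0.02, "income_growth": 1.0}, 3.5),
    ({"interest_rate": 0.05, "income_growth": 0.5}, 2.4),
]
ok, err = validate(housing_model, history, tolerance=0.5)
effects = sensitivity(housing_model,
                      {"interest_rate": 0.03, "income_growth": 1.0})
print(ok, err, effects)
```

Only once a model passes checks of this kind does the work described below, of embedding it among non-expert users, even begin.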

However, one of the key findings of this study is the surprisingly large amount of time that goes into embedding computer models in an organization. Although the skilled technical work required for making computer models should not be underestimated, much more effort is directed towards activities that you will not find in any modeling or data science textbook.

While amongst experts computer models may serve as a vehicle for open discussion and structured debate, this breaks down in contexts where non-experts are involved. To them, computer models may be virtually incomprehensible and operate more like black boxes. In order for computer models to be effective in informing the policy making process, they have to be understood, trusted and relevant to experts and non-experts alike. This presents a challenge especially because computer models are technical, and the intended user base may have neither the required technical know-how nor the time (and perhaps patience) to develop the required skills.

A conceptual model of air quality (Gamas et al., 2015)


As a consequence, experts have to invest time to explain their model in simple terms to a non-expert user base. Their efforts may include more formal activities, like drafting presentations, creating model documentation and model tutorials, and organizing events like training sessions. Often these will include some visual representation of the model's mechanics, like the above sketch of an air quality model. In more informal, direct interaction with non-expert users, experts attempt to explain the mechanics of the model in plain English and respond to questions that are posed by these users. Over time, non-experts may develop a basic understanding of the computer model.

This understanding in itself is not enough, since non-experts have to be convinced of the credibility of a computer model. Although experts amongst themselves may use the outcomes of analysis to ascertain and communicate the quality of a model, the outcome of, say, a sensitivity analysis may mean little to those with no statistical training. In practice, experts have to resort to other ways of demonstrating the quality of a computer model. For example, they may use historical data to illustrate the model's predictive capacity, or open-source the model to convey confidence. Non-expert users may evaluate the model in relation to information from other sources, like media outlets, trusted agencies or other models. Especially when a model presents a less favorable outcome in comparison, users are likely to resist adoption and challenge its use.

It is only once enough users have a basic understanding of the model and what it can offer them, and no longer contest its validity, that it can begin to inform decision making. This, again, is no trivial matter. Users will have existing ways of informing policymaking, and the model may not easily fit within their existing working practice. For instance, policymaking can move quite fast at times, and to accommodate this a model has to run quickly and support visualization. Considering the ever-changing scope of the political agenda, the model also has to be flexible enough to incorporate new thinking.

What lessons can be learned from this research on models for the study of maintenance and innovation more generally? First and foremost, it demonstrates that the everyday and mundane activities associated with maintenance take place not only in order to fix something once it has broken down. Rather, such activities are engaged in on an ongoing basis and form the very foundation of what it is that makes an innovation useful. The process of initial embedding requires considerable effort and determines whether an innovation will succeed or not. Even for very high-tech innovations, the non-technical efforts may be as important as, if not more important than, the technical work in facilitating this embedding. By conceptualizing maintenance in a broader sense, we can begin to understand not only the work required for things to remain operational, but also the large amount of non-technical work that goes into making innovations work in the first place.



Gamas, J., Dodder, R., Loughlin, D., & Gage, C. (2015). Role of future scenarios in understanding deep uncertainty in long-term air quality management. Journal of the Air & Waste Management Association, 65(11), 1327-13.


Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.