This article by Camille Roth and Jérémie Poiroux was published in April 2022 in the 11th issue of RESET (Social Science Research on the Internet), "Writing Code, Making Software". Here is the abstract:
Several recent works on recommender algorithms have called for shifting the focus away from the study of their effects, such as the emergence of prediction biases or filter bubbles, toward how they are designed. We propose to answer this call through a qualitative study based on interviews with around thirty developers. We show that the conditions of production of these algorithms are very closely linked to their use. Deployed on platforms with large numbers of users, which allows continuous observation of their functioning, algorithmic code evolves in a hybrid way that depends simultaneously on the work of developers and the actions of users. Simply put, the use of algorithmic guidance guides its own evolution, whether by introducing new variables or new algorithmic processes and, above all, by choosing among numerous variants through tests that quantify user reactions in real time in light of essentially commercial objectives. From this point of view, code development is to a large extent a semi-autonomous evolutionary process in which user testing is the main arbiter: developers introduce mutations, and users implicitly produce performance measurements, expressed in standard business terms (audience, sales). By emphasizing the crucial importance of the choice of these metrics once the architectural choices for a given platform have been made, we call on future research to frame the question of algorithmic policy primarily in terms of the definition of these two dimensions, performance and platform design, rather than opening further the black box of code and its design.
The open-access online paper is available here (in French only).