Article

Visions of Automation: A Comparative Discussion of Two Approaches

Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology, 76133 Karlsruhe, Germany
Societies 2021, 11(2), 63; https://doi.org/10.3390/soc11020063
Submission received: 23 April 2021 / Revised: 14 June 2021 / Accepted: 15 June 2021 / Published: 16 June 2021

Abstract:
In recent years, fears of technological unemployment have (re-)emerged strongly in public discourse. In response, policymakers and researchers have tried to gain a more nuanced understanding of the future of work in an age of automation. In these debates, it has become common practice to signal expertise on automation by referencing a plethora of studies, rather than limiting oneself to the careful discussion of a small number of selected papers whose epistemic limitations one might actually be able to grasp comprehensively. This paper addresses this shortcoming. I will first give a very general introduction to the state of the art of research on potentials for automation, using the German case as an example. I will then provide an in-depth analysis of two studies of the field that exemplify two competing approaches to the question of automatability: studies that limit themselves to discussing technological potentials for automation on the one hand, and macroeconomic scenario methods that claim to provide more concrete assessments of the connection between job losses (or job creation) and technological innovation in the future on the other. Finally, I will provide insight into the epistemic limitations and the specific vices and virtues of these two approaches from the perspective of critical social theory, thereby contributing to a more enlightened and reflexive debate on the future of automation.

1. Introduction

Ever since the publication of the seminal study “The Future of Employment” by Carl Benedikt Frey and Michael Osborne [1,2], fears of technological unemployment have (re-)emerged strongly in public discourse. Responding to these concerns, policymakers and researchers have tried to gain a more nuanced understanding of the future of work in an age of automation. But what insights is the contemporary scientific debate on automation able to supply when it comes to the extent to which automation might happen in the future, and on what epistemic basis? In recent years, a myriad of studies has been published on the impact of technological development, most often described as automation and digitalization (for metastudies with a German focus see [3,4,5]). Aside from the different methodologies applied and differences in the data employed, it is particularly the differences in the research questions addressed by these studies that make it difficult to give a general assessment of the current state of research on automation. One might broadly distinguish two lines of inquiry regarding the future of automation, however: studies exploring the technological potentials for automation today or in the near future, and studies that try to predict actual future job losses. Although these two lines of inquiry are easily confused, they nonetheless represent a crucial distinction: Increased automation cannot simply be equated with aggregate job losses. To read even the simplified statement, “Every second worker in today’s economy could be substituted by robots and AI” as, “We will soon have a rate of 50% technological unemployment” presupposes that there will be no countervailing job creation at all, an assumption that is highly improbable. What is more, even if the substitution of human labor were technologically feasible, there is no automatic mechanism ensuring that this automation would actually take place. Indeed, adoption of automation technologies is dependent on a number of additional variables, the relative costs of automation being a central one. If the cost of automation technologies vastly exceeds the amount of wages that can be saved by introducing them, adoption across the economy will likely be slow. Furthermore, increasing political opposition to automation technologies might slow down their adoption—for instance through legislation, strong union opposition, or worker militancy [1] (pp. 43, 44). As such, technological feasibility does not directly translate into economic reality.
Much seems to be technologically feasible, however. Frey and Osborne famously found that 47% of jobs in the US featured more than 70% probability of “potentially [being] automatable over some unspecified number of years, perhaps a decade or two” [1] (p. 38). Applying their methodology to Germany, Carsten Brzeski and Inga Burk concluded that 59% of jobs in Germany might be at risk [6]. Another study by the Leibniz Centre for European Economic Research in Mannheim, conducted on behalf of the Federal Ministry of Labour and Social Affairs (BMAS) and likewise applying the methodology of Frey and Osborne to Germany, lowered this number to 42% [7]. Several other studies published are situated in the same general order of magnitude: The study “A future that works: Automation, employment and productivity” by the McKinsey Global Institute concluded that around 45% to 47% of work “activities […] can be automated by adapting currently demonstrated technologies” [8] (p. 47) and two studies by the Institute for Employment Research, the research branch of the German Federal Employment Agency, seem to suggest a substitution potential of around 40% [3] (p. 35).
Studies following the other line of inquiry (focused on net job losses) tend to highlight the economic opportunities provided by technological development, citing weakly positive or negligibly negative effects on total employment, chances of an upskilling of the workforce, and increased competitiveness supporting strong employment [4] (pp. 69ff.).
The overall takeaway from this state of research could then be summarized as follows: There is a shared sentiment in the scientific field that there exists great potential for automation, with almost every second job in today’s economy possibly becoming substitutable in the next one or two decades. On the other hand, technology has proven not to undermine aggregate employment in the past, and the economic opportunities afforded by technological progress should ensure that employment remains roughly stable while productivity increases.
This would be an error, however. Although ascribing every study the same claim to truth and trusting in collective intelligence might seem a plausible approach, it is nonetheless problematic. First of all, the quality of the methods, data, and so forth employed by the various studies might differ greatly, rendering the “principle of indifference” unjustified [9] (p. 7). Additionally, as the collective failure of the economics profession to anticipate the last great financial crisis illustrates, not even a strong agreement within scientific discourse can guarantee its correctness, particularly when it comes to the social sciences. Therefore, other than identifying general strands of research and discussing their common features, a proper assessment of the epistemic power of research on automation can only be given on a case-by-case basis. In the following, I will introduce two exemplary studies on the future of automation, one for each strand of research, discussing their epistemic advantages and limitations. The hope is that by discussing these two exemplary studies, we might gain a better understanding of how to approach and appraise studies in this field more generally. In a final step, I will discuss what societal functions these different forms of studies might serve and try to give an assessment of these two competing research strands from the point of view of Critical Theory.

2. The Future of Automation: Two Approaches

2.1. Investigating Future Technological Potentials

The first study we will review in some detail is the (in-)famous study “The Future of Employment: How susceptible are jobs to computerisation?” by Frey and Osborne [1]. Not only can it be considered the prototypical contemporary study on the technological potentials of automation, spawning a multitude of adaptations to different nation states, it is also perhaps the central study of the contemporary debate on automation and helped to reemphasize the importance of the subject to policymakers and the general public [10] (p. 85). Finally, the study was scrutinized extensively by the scientific community, laying bare possible weak points of the approach and prompting the authors to expand on their already extensive description of the study’s methodological approach [11].
After a short introduction to the history of debates around technological development and employment, Frey and Osborne turn towards the future by discussing “advances in fields related to Machine Learning (ML), including Data Mining, Machine Vision, Computational Statistics and other sub-fields of Artificial Intelligence (AI)” that might allow both for the automation of cognitive tasks in the future and for further advances in the development of robotics, and thus the automation of manual labor. They highlight that, historically, the automation of non-routine tasks was deemed technologically impossible. As such, the question of automatability largely came down to whether a task was based on explicit, standardized procedures with little to no need for adaptation on the fly.
However, advances in the field of machine learning, combined with increasingly complex and comprehensive datasets that might be employed for the training of the algorithms, and rapidly declining costs of computation, sensor technologies, and robots would now, according to Frey and Osborne, render previously unautomatable non-routine tasks more and more automatable, as illustrated by progress in the field of, for instance, deciphering handwriting, translation, and autonomous driving [1] (pp. 14–22). As a consequence, Frey and Osborne turn away from the classical distinction between routine and non-routine tasks and embark on a search for other so-called “engineering bottlenecks”—technical challenges that are, according to their review of the research field, unlikely to be mastered in the near future and thus limit the scope for automation 1.

2.1.1. Searching for Refuges of Human Labor

They identify three such bottlenecks: complex perception and manipulation, creative intelligence, and social intelligence. They point out that algorithms still struggle with “identifying objects and their properties in a cluttered field of view” and thus also with the manipulation of irregular objects [1] (p. 25). They also highlight challenges in terms of failure recovery and the development of soft manipulators and tactile feedback mechanisms. Regarding challenges to emulating creative intelligence, Frey and Osborne emphasize that tasking an algorithm with novel recombination of existing knowledge would by itself not be much of a challenge. The real challenge would be to “find some reliable means of arriving at combinations that ‘make sense.’” [1] (p. 26). In other words, having algorithms create something “novel” might be perfectly technologically feasible, but the result might not match human needs, which might themselves be difficult to articulate beforehand. Perhaps more importantly, they point out that even if an algorithm were to provide an output that could be described as creative, “there would still be disagreement about whether the computer appeared to be creative,” indicating the relevance of mechanisms of cultural persistence related to creativity. Lastly, they turn towards the challenges of emulating social intelligence, required in persuasion, negotiation, and care. They refer to progress in the research field of affective computing but nonetheless point out that “While algorithms and robots can now reproduce some aspects of human social interaction, the real-time recognition of natural human emotion remains a challenging problem, and the ability to respond intelligently to such inputs is even more difficult” [1] (pp. 26, 27). Even in simplified settings, typical social tasks would likely continue to be challenging to automate, let alone complex ones involving negotiating skills or high levels of empathy [1] (pp. 24–27).

2.1.2. Utilizing Machine Learning to Learn about the Impacts of Machine Learning

In the next step, they employ the O*NET database of the US Department of Labor, containing information on hundreds of occupations, collected through “regularly updated […] surveys of each occupation’s worker population and related experts.” These occupational descriptions contain variables such as finger dexterity, originality, persuasion, etc., which Frey and Osborne then link to the engineering bottlenecks they identified [1] (pp. 28ff.). In addition, they convened an expert workshop with machine learning researchers who were tasked with going through 70 occupations, assessing “whether each task for the occupations was automatable, given the availability of state-of-the-art computer equipment and conditional upon the availability of relevant big data for the algorithm to draw upon” [11]. These subjective assessments then served as the training data set for an algorithm providing probabilistic classification of occupational automatability 2.
Why this highly intricate approach, rather than just assessing job profiles linearly based on their task composition and the related bottleneck variables? Frey and Osborne claim that their algorithm “provides a smoothly varying probabilistic assessment of automatability as a function of the variables. For our Gaussian process classifier, this function is non-linear, meaning that it flexibly adapts to the patterns inherent in the training data. Our approach thus allows for more complex, non-linear, interactions between variables: for example, perhaps one variable is not of importance unless the value of another variable is sufficiently large” [1] (p. 36).
In other words, the algorithm would allow for the assessment of the probability of a job becoming automatable based on an assessment of whole job profiles—not on a task-by-task basis, but rather in the specific configuration these tasks find themselves embedded in. These probabilistic assessments were then used to assign jobs to three different categories (low risk of automation from 0 to 30% probability, medium risk of automation between 30 and 70%, and high risk of automation from 70% onwards). Jobs in the high-risk category accounted for 47% of US employment, triggering alarmist headlines around the world claiming every second job in the US (and by way of assumption: probably in other countries) would be lost to automation. There are several things wrong with that: For a number of reasons (see above), technological automatability and net job losses are not the same. As a matter of fact, Frey and Osborne dedicate a substantial share of their paper to discussing why this distinction is important and conclude by pointing out that they “make no attempt to estimate how many jobs will actually be automated” [1] (p. 42).
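To make the logic of this pipeline more tangible, the following minimal sketch mimics its three steps in Python with scikit-learn: training a Gaussian process classifier on a few hand-labelled occupations, predicting automatability probabilities for the remaining ones, and binning these into the three risk categories. The bottleneck features, toy data, and employment figures are invented for illustration and are not Frey and Osborne’s actual data or code.

```python
# Illustrative sketch (not Frey and Osborne's actual code or data): train a
# Gaussian process classifier on a handful of hand-labelled occupations,
# predict automatability probabilities for others, bin them into risk
# categories, and compute the employment-weighted share in the high-risk bin.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy "bottleneck" features per occupation: [finger dexterity, originality,
# social perceptiveness], each scaled to 0..1. All values are invented.
X_train = np.array([
    [0.9, 0.2, 0.1],   # e.g., a highly manual but routine occupation
    [0.3, 0.9, 0.8],   # e.g., a creative, socially intensive occupation
    [0.8, 0.1, 0.2],
    [0.2, 0.8, 0.9],
])
# Hand-labels from a (hypothetical) expert workshop: 1 = automatable, 0 = not.
y_train = np.array([1, 0, 1, 0])

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
clf.fit(X_train, y_train)

# Occupations without confident hand-labels, plus invented employment figures.
X_rest = np.array([
    [0.7, 0.3, 0.3],
    [0.4, 0.6, 0.7],
    [0.9, 0.4, 0.2],
])
employment = np.array([2_000_000, 1_500_000, 3_000_000])

p_auto = clf.predict_proba(X_rest)[:, 1]  # probability of "automatable"

# Frey and Osborne's categories: low (<0.3), medium (0.3-0.7), high (>=0.7).
risk = np.where(p_auto >= 0.7, "high", np.where(p_auto >= 0.3, "medium", "low"))

high_risk_share = employment[risk == "high"].sum() / employment.sum()
print(list(zip(p_auto.round(2), risk)), f"high-risk share: {high_risk_share:.0%}")
```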
It might help to revisit the central claim of the study against this backdrop: “According to our estimate, 47 percent of total US employment is in the high risk category, meaning that associated occupations are potentially automatable over some unspecified number of years, perhaps a decade or two” [1] (p. 38). It is noticeable that the claim is phrased in rather cautious language, speaking of potential automatability and leaving the temporal scope deliberately open, at most giving a vague indication. What is more, it necessarily compresses most of the assumptions made by the authors up until this point into the term “our estimate.” To conclude the reconstructive part of my discussion of their study, I shall endeavor to rephrase this sentence in order to better represent the assumptions contained in it for further scrutiny.
If
  • our assessment of the potential of contemporary and near-future automation technologies is correct (based on the identification of engineering bottlenecks and the reverse assumption that all activities not affected by these engineering bottlenecks are technically automatable);
  • O*NET data adequately represents occupational reality;
  • nothing went wrong in composing the training data set; and
  • the machine learning algorithm we used on the data adequately generalized the training data set in order to assign its probabilistic assessments,
then we find that 47% of today’s US employment has a risk of over 70% of being automatable at some time in the future (maybe a decade or two).

2.1.3. The Problem with Assumptions #1

Let us quickly go through these assumptions: Although the literature review by Frey and Osborne appears to be thorough and their engagement with technical experts can be reasonably expected to increase the quality of their assessment of the field further, one should nonetheless be somewhat cautious when it comes to reproducing what is ultimately a self-assessment of researchers. Overestimating technological potentials has been called a typical déformation professionnelle of scientists involved in the advancement (and promotion) of specific technologies [12] (p. 9) 3. Additionally, although the approach of identifying possible engineering bottlenecks and then concluding, by reverse inference, that anything not covered by them might be automatable has some plausibility to it, it runs the risk of downplaying the possibility of unwelcome surprises in technology development. This limitation of their approach was briefly addressed by Frey and Osborne, who claim that their focus on “near-term technological breakthrough in ML and MR [mobile robotics]” and the deliberate temporal flexibility in their estimate might compensate for some of these uncertainties [1] (p. 43).
As for the O*NET data, they can be considered “the most detailed and comprehensive assessment of skills used in employment that exists” [15] (p. 41). Yet, the database was not compiled with automatability studies in mind, as indicated by Frey and Osborne [1] (p. 29), forcing them to identify variables and indicators that they deem relevant to automatability. Furthermore, the occupational profiles of the O*NET necessarily represent a somewhat abstract generalization of actual job realities. As such, they may both fail to capture potentially crucial variations within certain job profiles and fail to account for the importance of tacit knowledge in practicing certain professions.
Although the job title of some people might, for instance, still be “office assistant,” they might have long outgrown their original job profile and might have taken on much more complex and challenging tasks, rather than “just” ensuring coffee supply and doing basic scheduling. This also applies to more subtle, informal shifts in work activities. The job reality of some administrative staff might actually be much more akin to that of mental health counselors (0.48% probability of automatability, according to Frey and Osborne’s study) than to that of the average file clerk (97% probability). With regard to the challenge that tacit knowledge poses to the assessment of automatability, a worker might be limited in the way she answers a questionnaire she is presented with, leaving out the importance intuition plays in handling a certain workpiece—which might, upon further investigation, be deciphered as a way to unconsciously account for certain properties of the workpiece or work environment that might be missed by a robot due to sensor limitations or deemed unimportant while programming its control software (How does it feel to the touch? What is today’s humidity like?). A task that might be described both by experts and workers as a simple manipulation task might thus actually turn out to depend on levels of perception difficult to automate with today’s or even near-future technology.
This criticism is addressed by the authors in some detail, both in the initial study as well as in its aftermath. Although they raise doubts about whether tasks performed in occupations vary that significantly [1] (p. 24), [10], they draw attention to two important ways the challenges stemming from variations within job profiles and tacit knowledge might be reduced. The first one is standardization and simplification 4. Imagine a skilled tradesperson of the early 19th century carefully hand-crafting a workpiece from start to finish. Her labor process might be impossible to automate, even today. Industrial robotics has excelled, however, at automating specific steps of highly standardized and fragmented production of standardized mass-consumer products. In the same vein, it might be difficult to automate all possible activities a worker categorized as a file clerk might engage with in the course of her workday—but to be able to save labor costs, this is not necessary in the first place. Instead, one might investigate ways in which, for instance, the tasks of a file clerk central to the economic success of a company could be automated while doing without the rest. Or, one might axe a number of administrative positions and hire one dedicated mental health counselor to make up for the social intelligence lost in the process.
In addition, one of the key achievements expected from the development of AI is to solve Polanyi’s paradox. The term was coined by David H. Autor, who built on Michael Polanyi’s “observation that, ‘We know more than we can tell’” [16] (p. 136), pointing out that “the scope for [technological] substitution is bounded” by the fact that “engineers cannot program a computer to simulate a process that they […] do not explicitly understand” [16] (p. 135). Autor also picks up on the promises of machine learning to surmount this challenge. Rather than having to “teach” an algorithm how to solve a specific task through a predefined process, engineers might “be able to program a machine to master the task autonomously by studying successful examples of the task being carried out by others.” Instead of codifying explicit procedures, the algorithm might undergo “a process of exposure, training and reinforcement,” allowing it to “potentially infer how to accomplish tasks” that were unautomatable before [16] (p. 159). And Frey highlights this new technological possibility “to unravel Polanyi’s paradox, at least in part” as the most significant advance of automation technologies over the last decade [17] (p. 301), reinforcing the importance of tacit knowledge as an (evanescent) challenge to automatability.
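To make this contrast concrete, here is a minimal, purely illustrative sketch: in the first function an engineer must state a rule explicitly, whereas a simple learner (here a scikit-learn decision tree, chosen only for brevity) infers an equivalent rule from labelled examples without the threshold ever being written down. The task, data, and threshold are invented and serve only to illustrate the point about learning from examples.

```python
# Illustrative contrast between codifying an explicit procedure and letting a
# model infer a rule from labelled examples (the "exposure, training and
# reinforcement" idea). Toy task, data, and threshold are invented.
from sklearn.tree import DecisionTreeClassifier

def explicit_rule(weight_grams: float) -> str:
    # Polanyi-style problem: the engineer must state the rule explicitly ...
    return "reject" if weight_grams > 105 else "accept"

# ... whereas a learner can infer an equivalent rule from worked examples
# without anyone writing the threshold down.
examples = [[98], [101], [104], [107], [110], [95]]
labels = ["accept", "accept", "accept", "reject", "reject", "accept"]

model = DecisionTreeClassifier(max_depth=1).fit(examples, labels)
print(model.predict([[103], [108]]))  # e.g. ['accept' 'reject']
```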
Frey and Osborne are also aware of the centrality of properly composed training data for machine learning. As such, they implemented several precautions to reduce expert bias while compiling the training data. Testing their subjective hand-labelling with “objective” O*NET variables (see above) and only hand-labelling professions whose automatability the experts collectively were “highly confident about” [1] (p. 31) can be understood as attempts to counteract the bias of individual experts. Yet, as noted before, collective overestimation cannot be ruled out altogether.
What is puzzling to me, however, is the prevalent silence in the scientific discourse around this study when it comes to the utilization of the training data—the actual machine learning. Whether a set of 70 occupations is large enough to generalize onto hundreds of other occupations, for instance, seems doubtful 5. One might also challenge whether a machine learning algorithm is actually able to reliably generalize hand-labels with an associated high confidence to cases in which the hand-labelling by experts was deemed too unreliable, generalizing their expertise beyond what they consciously could do. Frey and Osborne certainly seem to think so [11], and discuss established quality criteria and associated literature within the field of machine learning [1] (pp. 32ff.). Yet, without basic training in machine learning, there are few alternatives to trusting their self-evaluation. The fact that, despite the extraordinary amount of scrutiny the study received, hardly any attention was paid to the authors’ employment of machine learning, one of the most innovative features of the study, is puzzling nonetheless 6.
Why is discussion of the methodological robustness of its use of machine learning almost entirely absent from the scientific debate 7? The most plausible explanation seems to me to be that although the findings of the study drew high levels of attention, the nitty-gritty of the technical description was daunting to many researchers. This is not to lay the blame for this incomprehension exclusively on Frey and Osborne, who tried to supply “a non-technical description” of their approach [11]. Rather, this situation confronts us with an interesting question: How can institutions central to scientific progress in the past (scientific discourse on an equal footing, peer review, etc.) be sustained when the dissemination of new ways to do research introduces a high level of “epistemic opacity” for many experts—let alone the interested public [23] (pp. 139, 140)? The study and the discussion around it indeed seem to represent an example of epistemic opacity that led to a partial failing of scientific discourse.
After all that has been said so far, the two most common forms of critique levelled against Frey and Osborne—that they vastly exaggerated the technological potential for automation and that they would assume “a direct cause-and-effect relationship” between innovation and the substitution of human labor [24] (p. 16)—can be gauged much more clearly: Although there can be little doubt that their approach, based on a reverse assumption of automatability in the absence of engineering bottlenecks, is likely to return an estimate of automatability on the upper end of the range of what might become technologically possible, their discussion of the state of the art of research as well as their engagement with technical experts seems to suggest a fairly up-to-date, albeit optimistic, assessment of the field and its technological potentials. Concerning the second criticism, one might even be inclined to quickly disregard it altogether: After all, Frey and Osborne time and time again stress that they do not intend to give the impression that they made an “attempt to estimate how many jobs will actually be automated” [1] (p. 42), [10,16] (p. 323), let alone answer the key question of how many new jobs might be generated at the same time—and even less that their approach could be simply applied to other economies 8.
Yet, despite the clear and apparent focus on technological potentials rather than labor market outcomes throughout most of the study, the use of triggering terms such as “expected employment impact” [1] (p. 36) and “expected impacts of future computerisation on US labour market outcomes” [1] (p. 1) in key passages of the study seems to betray this intention. Even a very charitable interpretation of the use of the word “expected” cannot entirely alleviate the impression that key passages of the study are phrased in a way that might attract maximum attention, contradicting the study’s ultimately rather sober and earnest approach 9.
Finally, let us for the moment conclude our discussion by asking what might be learned after all this scrutiny of Frey and Osborne’s study. On the one hand, the study presents us with a generalized version of the collective assessment of near-future automation potentials by technical experts, applied to a multitude of occupations covering most of the US labor market. The study highlights potential impacts of advances in machine learning and robotics on the automatability of jobs. In particular, it draws attention to high potential for automation in transport and logistics, as well as office and administrative support and manufacturing. However, Frey and Osborne also provide higher-resolution insights, for example, regarding the potential automatability of “cashiers, counter and rental clerks” and a number of service occupations whose workers happen to work closely with other humans but whose function—according to the authors and the experts they consulted—does not require high levels of social intelligence or dexterity. Lastly, the output of the machine learning algorithm draws attention to unused potentials for the standardization and simplification of tasks, for instance, through prefabrication in construction or the rationalization of food delivery processes within restaurants [1] (pp. 38, 39)—sometimes even to the surprise of the involved experts [11]. On the other hand, the study also underscores the persistence of obstacles to automation. As such, it also highlights potentials for future automation-resistant employment as well as skill sets that might reduce the risk of personally being affected by automation, reinforcing the importance of education in general and creative and social skills in particular.
Lastly, combining their assessment with data on occupational educational and wage levels, Frey and Osborne were able to conclude that “both wages and educational attainment exhibit a strong negative relationship with the probability of computerisation” [1] (p. 42). In other words, the higher the wages and the educational attainment within a given occupation, the less likely it is that it could be automated. Their conclusion that this would imply “a truncation in the current trend towards labour market polarization, with growing employment in high and low-wage occupations, accompanied by a hollowing-out of middle-income jobs” [1] (p. 42) should be met with some skepticism, however. Their claim that their model would predict that future automation would “mainly substitute for low-skill and low-wage jobs in the near future” [1] (p. 42) again overstrains the explanatory power of the model they built, since, as we have learned by now, automatability does not equal actual future automation. As a matter of fact, the high potential for automation in low-wage jobs might be relatively easily explained: Quite a few of them might have been automatable with tried and tested automation technologies for decades, but low wage levels might have raised the relative costs of automation to a level unattractive to capital. If anything, it would have been surprising if automation potentials in low-wage jobs had been utilized to the same extent as in higher-paying jobs, given the political economy of automation under capitalism. Whether this potential will eventually be utilized will under current conditions ultimately depend on possibly falling prices of automation technologies and the development of wages on the lower end of the wage spectrum—not just on some novel technological features.
To summarize, the study by Frey and Osborne provides an innovative approach to the question of technological automatability as well as an insightful introduction to the contemporary debates on automatability. Its approach is informed by an extensive literature review, first-hand experience with the field, and expert input. The assumptions made by the authors are fairly clear and largely well justified, although hardly altogether unproblematic. The data employed by them can be considered a worldwide gold standard and their machine learning-based approach must be called cutting-edge. At the same time, the use of machine learning represents the most fundamental source of epistemic uncertainty regarding the study, but has hardly been picked up in scientific debate. The greatest scientific achievement of the study, and of studies like it, is that they sensitize readers rather concretely to the potentials for automation offered by advances in technological development, in this case in the field of artificial intelligence (and related robotics). As such, they are useful tools for synthesizing the assessments of (technical) experts and generalizing them to the level of entire labor markets. Their greatest potential drawback is that they lend themselves well to misinterpretations that draw conclusions lying beyond their explanatory power—a fact that is illustrated both by a myriad of critiques missing the core of the study by Frey and Osborne and, in this particular case, at the very least aggravated by a number of assertions by the authors that seem to contradict their own discussion of the limitations of their approach.
Rather than trying to answer the question of what employment impacts of automation should be expected in the future with a model ill-equipped to do so, we shall now turn towards an exemplary study that makes the claim to address this question more directly.

2.2. The Past’s Future: Empiricist Prognostics

Next, I will turn towards the study “Economy 4.0 and its labour market and economic impacts” by Marc Ingo Wolter et al. [25] to illustrate studies trying to provide concrete estimates of future labor market impacts of technological change in Germany. I chose this study because it is available in English, provides extensive documentation of its methodological approach, and positions itself as a study addressing the gap in research left open by Frey and Osborne [25] (pp. 7–9). Additionally, the study was developed in collaboration between scientists of the Institute of Economic Structures Research (a research consultancy), the Institute for Employment Research (the research branch of Germany’s Federal Employment Agency, abbreviated IAB), and the Federal Institute for Vocational Education and Training (an independent federal institution charged with conducting research on vocational education and training and thereby, the future of work, abbreviated BIBB). The latter two institutions, IAB and BIBB, are specifically charged with providing expertise on labor market policies to decision-makers. The author team consisted of distinguished experts on labor market development and the study builds on an economic forecasting and simulation model that has been in use and in continuous refinement for almost a quarter of a century [25] (p. 16). In other words, it would not be much of a stretch to claim that there is hardly any scientific expertise more reputable in Germany when it comes to possible labor market transformations.
In general, the study builds on existing labor market analyses and economic modelling by IAB and BIBB. To project the labor market impacts of the so-called Economy 4.0 10, they modify an established scenario (“baseline projection”) through five deviating “partial scenarios,” assuming increased investment in equipment and buildings, education, and software, and reflect upon impacts of these changes on cost and profit structures within the economy, on its occupational structure, and on the demand for new goods and services [25] (p. 10). These partial scenarios are detailed through a set of 18 assumptions covering everything from modifications in the capital stock of sensor technologies through the increased need for consulting services to higher government spending on (cyber-)security. Most of the study is dedicated to introducing and discussing these modified assumptions in detail, as well as conducting step-by-step analyses of the partial scenarios, allowing the impact of individual assumption sets on labor demand to be grasped. In the end, these scenarios are integrated for final comparison with the baseline projection. Wolter et al. conclude that their comparison “shows that the effects digitisation has on the overall level of labour demand at minus 30,000 jobs [in 2025] and minus 60,000 in 2035 will carry no weight” [25] (p. 56). In other words, according to their projection, only 30,000 additional jobs would be lost to accelerated technological change by 2025 compared to the base scenario out of a total of 43.4 million projected jobs. At a share of 0.07% of jobs lost to accelerated technological change, one can indeed consider this number minuscule. However, the insight provided by the study is, of course, not limited to these few numbers—and just as with the study by Frey and Osborne, one has to be careful when interpreting these numbers.
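The headline comparison can be restated as a simple back-of-the-envelope calculation. In the sketch below, only the figures reported above (minus 30,000 jobs in 2025, minus 60,000 in 2035, roughly 43.4 million projected jobs) come from the study; the data structure and the assumption that the 2035 baseline total is of similar size are mine.

```python
# Minimal sketch of the scenario-comparison logic reported above. Only the
# headline figures (-30,000 jobs in 2025, -60,000 in 2035, ~43.4 million
# projected jobs) come from the study; the rest is illustrative.
baseline_jobs = {2025: 43_400_000, 2035: 43_400_000}  # 2035 total assumed for illustration
economy40_jobs = {2025: 43_400_000 - 30_000, 2035: 43_400_000 - 60_000}

for year in (2025, 2035):
    delta = economy40_jobs[year] - baseline_jobs[year]
    share = delta / baseline_jobs[year]
    print(f"{year}: {delta:+,} jobs vs. baseline ({share:+.2%})")
# 2025: -30,000 jobs vs. baseline (-0.07%)
```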

2.2.1. The (Dis-)Advantages of More Classical Macroeconomic Models

First of all, both the baseline projection used for comparison and the Economy 4.0 scenario presented by Wolter et al. were created using the Q-INFORGE model. Q-INFORGE itself is a modified version of the IAB/INFORGE model for econometric forecasting and simulation, a time-tested piece of software developed by the Institute of Economic Structures Research and employed by the IAB to calculate projections for the future of the German economy. The documentation of the original IAB/INFORGE model [28] is almost 200 pages long, with the sub-sub-sub-module for the labor market computing 19 different parameters (ranging from yearly working time per full-time/part-time employee through average hourly wages to the number of unemployed and employer contributions to social security), for which various interdependencies are assumed [29] (pp. 79ff.). The complexity of the German economy is represented in around 20 such modules and sub-modules, with the claim to deliver a “bottom-up” (“individual sectors within the national economy are modelled in great detail and the macroeconomic variables are generated through aggregation”) and “completely integrated” (“a representation of interindustrial relation and an explanation on the use of income of private households” is provided) model of it [25] (pp. 16, 17).
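To illustrate the “bottom-up” principle described in this quotation, the following purely schematic sketch models a few sectors individually and derives the macro variable by aggregation. The sectors, figures, and functional form are invented and bear no relation to the actual IAB/INFORGE equations.

```python
# Purely schematic illustration of the "bottom-up" principle: sector-level
# quantities are modelled individually and macro variables are obtained by
# aggregation. All sectors, parameters, and functional forms are invented.
from dataclasses import dataclass

@dataclass
class Sector:
    name: str
    output: float             # gross output, EUR billion (invented)
    labor_coefficient: float  # jobs per EUR billion of output (invented)

    def labor_demand(self) -> float:
        return self.output * self.labor_coefficient

sectors = [
    Sector("manufacturing", 2_300, 7_000),
    Sector("services", 4_100, 6_000),
    Sector("construction", 350, 8_000),
]

# Macroeconomic employment is not set directly but aggregated from the sectors.
total_employment = sum(s.labor_demand() for s in sectors)
print(f"aggregate employment: {total_employment:,.0f} jobs")
```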
To further refine the existing modelling of the labor market, IAB/INFORGE was combined with the BIBB/IAB Qualification and Occupational Field Projections model (QuBe), resulting in the creation of Q-INFORGE 11. Both source models are briefly introduced through infoboxes and diagrams stretching over roughly half a dozen pages, and references to in-depth information are provided. Nonetheless, even though documentation of these models exists, its highly formalized writing, consisting largely of equations, and its sheer extent represent substantial obstacles.
This is not to imply any sinister intent on the part of the researchers involved in developing these models. On the contrary, the fact that it is possible to describe a more or less comprehensive model of such a highly complex social system as our economy in less than 200 pages is testament to the effectiveness of this mode of expression. In comparison to the machine learning employed by Frey and Osborne, this more classical macroeconomic modelling has a key advantage: Although it certainly is not self-explanatory, it can, in principle, be understood by anyone with sufficient time, motivation, and education, whereas the model trained by Frey and Osborne, although it might be subjected to statistical tests regarding its robustness, remains opaque in its inner functioning or has to be laboriously reverse engineered [30]. Accordingly, the model employed by Wolter et al. can be considered to be more readily accessible to scrutiny by peers, reinforcing its reliability, particularly given its prominence and long-term use.
That should not imply, however, that this kind of modelling would be altogether unproblematic: First of all, one might question the relevance of the differentiation of forms of opacity just introduced above, as it matters little in day-to-day operations whether a certain model cannot be understood due to technical illiteracy (or even just the lack of time) or due to an essential epistemic opacity fundamentally related to the scientific method employed. In the end, the question of whether a model is “essentially epistemically opaque” [23] (p. 139) or just functionally opaque might be interesting on a theoretical level. In practice, however, it is common for both researchers and policymakers to signal expertise in the debates on automation by referencing a plethora of studies rather than limiting themselves to the careful discussion of a small number of selected papers they might actually be able to grasp comprehensively. The concern that this distinction might not be worth much is therefore not entirely unfounded.
Another issue I will return to in the final part of this paper is the empiricism of the models employed by IAB and BIBB: Not only the values of specific parameters within the model, but also the relationships between these parameters are largely derived from empirical observation (e.g., when estimating the average operating life of various groups of capital goods [29] (pp. 43ff.)). Accordingly, they can rightfully claim that they are not just arbitrarily making things up [28] (p. 5). Indeed, as Holm Tetens [31] argued in his introduction to the philosophy of science, scientific prognosis is generally limited to talking about the future based on knowledge derived from past observations of existing structures and the laws governing them and their dynamics. Projecting them into the future might seem unproblematic in many cases—for instance when it comes to making the assumption that gravity will persist in the future. Yet, this empiricism introduces a structural conservatism to these models: Ultimately, the scenarios derived by these models represent little more than a reproduction of the past—and the more concrete and detailed the economic modelling is, the less it is able to transcend the present and provide knowledge that could prepare policymakers and civil society for unexpected labor market disruptions or other crises. What is more, this tendency is likely to persist even when conscious assumption-setting takes place: Rather than assuming radically different dynamics of societal development than before, submission to an empiricist logic makes researchers prone to selecting sets of assumptions that deliver more or less status quo scenarios, normatively informed by a broadly shared, seemingly apolitical “common sense” [26].
Finally, once formalized, the uncertainty and the normative dimension of the sets of anticipatory assumptions that ultimately determine the outcomes of the projection are covered up. The computational output is unambiguous and appears to be “objectively” derived compared to, for instance, philosophical reasoning about possible future developments conducted in natural language [32] (p. 436), [33] (p. 254). This is particularly important, as picking the right set of assumptions can enable one to reach almost any result one sets out to reach [34]. Accordingly, the importance of the assumptions of the study by Wolter et al. can hardly be overestimated [25] (p. 60). Let us thus address them next.

2.2.2. The Problem with Assumptions #2

The first set of assumptions postulates that between 2017 and 2035 investment is moderately expanded by EUR 185 billion compared to the baseline scenario, with agriculture and manufacturing contributing EUR 45 billion and the service industry the remaining EUR 140 billion [25] (p. 24). Although these numbers certainly sound ambitious, they correspond to less than an additional EUR 10 billion of investment annually (for comparison, Wolter et al. stated that “current investments in new equipment and other new systems” stand at around EUR 300 billion annually [25] (p. 23), adjusted for prices—implying an increase of a little more than 3%). In addition, the public sector is assumed to support the push for Economy 4.0 by investing EUR 12 billion to ensure widespread broadband coverage (95% of households should have access to a 50 Mbit/s connection by 2018 [25] (p. 26)) 12. So far, these assumptions seem perfectly plausible, if a bit meagre in size: If the adoption of new technologies is to be sped up, it seems reasonable to assume that investment needs to be expanded.
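A quick back-of-the-envelope check of these figures, assuming the span 2017 to 2035 covers 19 years, reproduces the “less than EUR 10 billion annually” and “a little more than 3%” stated above.

```python
# Back-of-the-envelope check of the investment assumption cited above. The
# figures (EUR 185 bn additional investment over 2017-2035, EUR 300 bn of
# current annual investment) come from the study; the 19-year span is my
# reading of "between 2017 and 2035".
additional_investment_bn = 185        # EUR billion, cumulative 2017-2035
years = 2035 - 2017 + 1               # 19 years, if both endpoints count
current_annual_investment_bn = 300    # EUR billion per year, price-adjusted

annual_extra = additional_investment_bn / years
relative_increase = annual_extra / current_annual_investment_bn
print(f"~EUR {annual_extra:.1f} bn extra per year "
      f"(~{relative_increase:.1%} above current annual investment)")
# ~EUR 9.7 bn extra per year (~3.2% above current annual investment)
```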
The next set of assumptions covers the changes in cost and profit structures. Estimates are given regarding additional educational demands and costs, the level of diffusion of digital technologies through the economy, an increased need for consulting services, and potentials for cost saving through decreases in raw materials, consumables, supplies, purchased services, and costs of logistics. Finally, labor productivity is projected to “be 1 percent higher until 2025 than in the QuBe baseline projection.” The setting of their assumptions on potentials for cost savings and productivity increases is informed by two company surveys by IAB, polling about 2000 companies on “digitisation and its desired effects” [25] (p. 30) 13.
After setting these macroeconomic parameters, they turn towards a more detailed modelling of changes in the labor market, focusing on the question of which jobs might be automated and which shifts in the occupational composition might be expected. Wolter et al. built on an earlier IAB publication by Katharina Dengler and Britta Matthes [35] that investigated the possibility of assessing substitutability potentials in the German economy. They did so by combining data from the BERUFENET (the German counterpart to the O*NET) with substitutability assessments by experts of the Federal Employment Agency.
Leaving aside the question of whether BERUFENET adequately represents occupational realities 14 and whether employment experts are actually better qualified to assess the technical substitutability of tasks than technical experts (which seems a somewhat problematic claim), their approach differs in a key respect from that of Frey and Osborne discussed above: Rather than asking for assessments of whether tasks might become automatable in the near future, the assessment by Dengler and Matthes is based on the factual automatability of a task in the year 2013 [35] (p. 11). Accordingly, they fail to take into account most of the features deemed most interesting in the latest technological development: the automation of non-routine tasks and the associated conquest of Polanyi’s paradox [1,15,31]. Although the worry that technical experts might overestimate the potentials of future technological development is legitimate, the assumption that there will be no further development at all up until the year 2035 almost certainly has to be regarded as a severe underestimate.
By using the framework of Dengler and Matthes, Wolter et al. effectively freeze the technological level of development at that of the year 2013. What is more, they assume that only half of the technological potentials identified by Dengler and Matthes will actually be utilized. Their rationale for this assumption is that levels of automation “cannot be determined beforehand, as there will be other changes to the occupation field structure endogenous to the model–e.g., due to different the development in wages [sic]–in addition to the assumption made” [25] (p. 41). Although they are of course correct in pointing this out, their rule-of-thumb approach to the assessment of the impacts of accelerated technological development of the economy is nonetheless disappointing: Not only do they fail to take into account some of the defining features of the latest developments in the field of automation technologies, they also simply assume that even the technological potentials that will be almost a quarter of a century old at the end of their projections in 2035 will go severely underutilized. In contrast, modelling likely levels of automation utilization based on the development of wage levels, etc., would have been a key contribution to redeeming their self-imposed goal of economically grounding the debate sparked by Frey and Osborne 15.
In their model, the decrease in labor demand due to increased automation is counteracted, at least in part, by the last set of assumptions, detailing increases in demand through increased government spending, additional demand from private households due to higher wages, an increased willingness to pay for customized Industry 4.0 products, and increased exports. All these assumptions are predicated on the premise that the German economy will be a trailblazer of Industry 4.0, “generating ‘temporary monopoly profits’ over foreign competitors” [25] (p. 21). Although some of the details of these assumptions raise question marks 16, the general picture is fairly clear: Moving swiftly and decisively to adopt Industry 4.0 would boost productivity and product quality, making German products more attractive to domestic as well as foreign consumers. As a result, the competitiveness of the German economy in global competition would be strengthened.
Wolter et al. are keenly aware of the precarious nature of this basic premise. In light of this, it is only fitting that the final paragraph of their study should be no less than a call to arms:
“The scenario calculations […] make one thing clear: There ultimately is no other way–if Germany’s unable to implement Economy 4.0, other countries will still do so. And the assumptions which have a positive effect on Germany in the above scenario (pioneer, additional demand abroad, competitive edge) will then count against Germany as a business location. Decreases in production and further unemployment will result. Those are triggered by a loss in competitiveness and domestic demand shifting toward imported products. So the task must therefore be to make the transition as sustainable as possible” [25] (p. 61).
As the quote indicates, they are aware that other countries similarly aim to strategically boost innovation as a tool to strengthen competitiveness [25] (p. 21) but are unable to envision any alternative to deepening international competition and economic chauvinism. The demands and necessities of capitalist competition are naturalized (“There ultimately is no other way”) and the study is firmly entrenched in what has been called a “dialectics of pessimism and optimism” [36]: Things can go on as they are—the German economy can continue to be a leading exporter, strengthening employment domestically while conquering global market shares, and thus jobs, from less competitive economies—as long as everyone gets behind Industry 4.0. In this respect, the study features strong pedagogic undertones: it is not so much a “self-fulfilling prophecy” as a projection whose realization is actively pursued by its authors.
The fact that Wolter et al. openly address this basic premise of their scenario modelling does not constitute a failing of theirs. On the contrary, this transparency should be welcomed and is a virtue of this study compared to studies that operate with similar sets of assumptions but fail to disclose the fact that these assumptions are integrated into a specific normative framework—the affirmation of capitalist social relations, commitment to economic growth as the basis of social stability and (“ultimately”) economic chauvinism. It would also be too quick to disregard this scenario merely as an overly optimistic outlook provided by scientists tasked with the management of the status quo (of the labor market) to policymakers who are also committed to a more or less frictionless continuation of the status quo of the national economy and welfare state. Indeed, their modelling substantially refines and expands the understanding of the possible impacts of automation on the labor market, providing insight into likely winners and losers of accelerated technological development.
One of the key insights of the study, for instance, is that contrary to all the attention and homage paid to manufacturing in the Industry 4.0 discourse, increased investment into technology is actually likely to speed up the deindustrialization of the German employment base [25] (pp. 56–58). Additionally, the study provides insight into which occupational groups might grow or contract under the assumptions of the scenario (with commercial office occupations and electrical occupations worst hit and core IT and teaching occupations seeing the biggest growth [25] (p. 55)) as well as into changes in the educational requirements of a technologically upgraded economy [25] (p. 59). Accordingly, the scenario can be understood as a meaningful tool for the researchers involved to sensitize policymakers to challenges that might arise while pursuing the Industry 4.0 strategy—even under “fair weather” conditions 17. More generally, the extensive discussion of the assumptions of the scenario can serve as a meaningful launch pad for reflection on the relationship of various economic factors that shape the labor market—bearing in mind that the assumptions made by Wolter et al. need to be examined critically, as they themselves emphasize [25] (p. 60). This critical examination itself can then be understood as one of the key opportunities to deepen one’s understanding of the subject matter—although this might not be an altogether realistic demand or even hope (see my discussion of the functional opacity of this kind of study above).
But despite these merits of the study, there are also serious drawbacks: Not only do the assumptions made by Wolter et al. require scrutiny; at least as crucial is the fact that although the assumptions draw attention to specific issues the authors apparently find essential, they divert attention from other possible lines of inquiry regarding the forces that might shape automation’s impact on the labor market and normative orientations that might inform the assessment of its general impact. To give only two examples, it seems curious that Wolter et al. should discuss the number of soldiers hired for cyber warfare but omit discussions of working time reductions altogether. The length of the working week clearly is a non-negligible factor when it comes to managing labor demand and supply and as such is covered by the modelling framework they employ—and very clearly has a bigger potential to bolster employment than a couple thousand policemen and soldiers. Additionally, working time reduction is one of the key policies advanced in scientific and public discourse in response to automation. At the risk of raising an allegation that cannot be proven (after all, it is difficult to verify the motivation of an omission if it is not addressed by the authors themselves), not explicitly addressing the issue of working times at all could well be understood to betray a more general aversion to transcending today’s basic work regime even in a rather obvious way.
Another telling omission is the lack of any attention to ecological sustainability in the construction or evaluation of the scenario. Although the term “sustainable” is used in the study (see the longer quote above), it is best understood in the meaning of “economically sustainable,” or more precisely, sustainability is equated with increased economic competitiveness. Although the vast difficulties of measuring ecological impacts of economic changes should be appreciated, and one also has to take into account that Wolter et al. are labor market experts and not sustainability experts, it is nonetheless noteworthy that they, for instance, were able to give estimates on possible monetary savings for companies in raw materials—but even in that context neglected to discuss any ecological implications of so-called Industry 4.0.
This dominance of economic reasoning is consistent with the overall approach of the study, whose design principle is that investment has to “yield a good return [to companies]” [25] (p. 31) and which therefore consistently highlights possible cost savings as well as profit opportunities—other considerations are put aside, or rather, not even considered. Even if one deems this exclusive focus legitimate, it should nonetheless be noted that leading economists feel comfortable discarding ecological sustainability as an evaluative dimension without feeling the need to address this omission at all, while references not only to employment opportunities but to economic growth and profit opportunities abound. Not only does this raise doubts regarding the depth to which ecological challenges have been recognized within economics, it also casts a shadow over the usefulness of economic modelling that blanks out one of the most profound contemporary developments that might reasonably be expected to, among a myriad of other effects, shape future labor markets even more fundamentally than consumer enthusiasm for customized sneakers or, at the risk of repeating myself, the recruitment of a couple thousand soldiers.
To summarize, the study by Wolter et al. represents a high-profile example of macroeconomic expertise, employing a scenario method to model the expected effects of increased technology use on the German labor market. It builds on a well-established methodology, and the scientific institutions involved can draw on substantial manpower and long-running, well-respected research. It goes substantially beyond the approach developed by Frey and Osborne, modelling the development of the labor market by embedding the impacts of technological change within a projection of macroeconomic development. In comparison to Frey and Osborne, their approach does not feature the same degree of essential opacity but is in principle comprehensible.
Doing so, however, demands that the reader engage with vast sets of assumptions, both specific to the concrete scenario and general to the modelling frameworks employed by the authors. These assumptions are necessarily much more wide-ranging than those employed by Frey and Osborne, as the assumptions regarding automatability form just one sub-module of the whole modelling endeavor. As many critics of Frey and Osborne have pointed out, modelling the actual progress of automation in an entire economy simply is much more complex than looking at the latest developments in artificial intelligence or robotics research (or other engineering fields) and has to account for a number of other factors. In accepting this necessity, however, one also has to accept that such a macroeconomic approach is, by definition, much more speculative. My critique of their assumptions notwithstanding, one nonetheless has to acknowledge that Wolter et al. strive for a high level of transparency regarding their assumptions and actively encourage criticism. Leaving aside the factual validity of their assumptions 18, one central observation of my discussion is the high degree of normative saturation of their anticipatory assumptions.
Again, this transparency should be considered a virtue rather than a failing of the study. But imagine for a moment a team of scientists intent on modelling the impacts of so-called Industry 4.0 with the explicit goal of proving that it could lead to mass unemployment and/or ecological catastrophe. By slightly shifting a small number of assumptions—for instance, the positive effects of Industry 4.0 on domestic and international demand—or by reorienting the evaluative dimension, one could rather easily derive radically different conclusions than those Wolter et al. were able to derive. This is not to invite radical relativism and claim that just about any conclusion might legitimately be drawn by the use of scenario modelling: The assumptions used, after all, have to be justified and defended in scientific discourse, first and foremost by showing that they are consistent with established knowledge [37]. However, given that hopes of “temporary monopoly profits” can, by definition, only be fulfilled for a limited number of economies, leaving the competing economies with the short end of the stick, and that an interference-free continuation of the past seems highly unlikely given that we are facing a deepening ecological crisis, exacerbated by exactly the kind of single-minded economic model Wolter et al. assume will persist, such variations of assumptions and evaluative frameworks can hardly be ruled out as altogether “unrealistic.”
Nonetheless, it seems likely that such studies might be accused of “politicizing science,” of displaying an ideological bias, or of fear-mongering. Or, more subtly but even more seriously, they might simply face a hard time acquiring funding for such seemingly outlandish lines of inquiry, especially under conditions in which major funding agencies, policymakers, and scientific common sense have gravitated towards Industry 4.0 as a normatively desirable national objective [26,27]. In any case, the mere fact that studies such as the one by Wolter et al. dominate much of the scientific and policy discourse on automation rather than being marginalized as “partisan science” cannot be explained through the merits of their methodology alone—rather, I would argue, it should be explained through the conformity of their approach and the linked anticipatory assumptions to the dominant “common sense” and the socioeconomic conditions that give rise to it 19.

3. Potentials, Projections, and Indeterminacy

Let us recapitulate: We learned that the technological potentials for automation are generally considered high in research, whereas there seems to be a more or less shared consensus in macroeconomic prognosis that the labor market impacts of increased automation might be negligible—or even slightly positive, in light of hopes that automation might boost economic growth and economic competitiveness.
At the same time, we were able to see that although analyses of technological potentials manage with relatively modest sets of assumptions (which can nonetheless be problematic), their explanatory power is correspondingly rather limited and should not be misinterpreted as statements about actual future developments. The other type of study—macroeconomic projections of various forms—seems to have a stronger claim to anticipating future developments. These projections’ statements about the future are, however, based on much more expansive sets of anticipatory assumptions, which oftentimes seem quite optimistic and exhibit a strong normative bias. Not only that, but their very approach is also informed by the analysis of our economic past. Projections about the future are therefore based on the assumption that our economic future will essentially mirror our economic past, lest the whole argument for the epistemic validity of the modelling crumble. By perpetuating the past, these models obfuscate (or at the very least do not address) “the political and contingent basis” of this past [39] (p. 88) (see also [40]). In doing so, they obfuscate the fact that, rather than forming the indisputable basis for discussions about the future, this past might have looked altogether different if, for instance, other social and economic policies had been in place.
Consequently, any future that might depend on radically transformed social relations, any future that could not be qualified as a mere continuation of the past, is thereby axiomatically ruled out. Although this may seem a perfectly adequate and useful approach to the management of the status quo from an immanent perspective, I will conclude this paper by giving a brief assessment of the two competing research methodologies from the perspective of the Frankfurt School, whose Critical Theory has been wary of this kind of scientific usefulness from the beginning. Indeed, the seminal characterization of Critical Theory by Max Horkheimer starts out by urging scientists not to simply accept the dominant normative orientations of their time “as nonscientific presuppositions about which one can do nothing” and instead to opt for “conscious opposition” in the interest of “emancipation and […] an alteration of society as a whole” [38] (pp. 205–208).
Therefore, it should not come as a surprise that, although research into possible futures cannot be considered a focus of the early Frankfurt School, Theodor W. Adorno in particular engaged critically with attempts to “calculate” the future. It is noteworthy that he developed his critique at a time when scientific prognosis was first constituting itself as a field of research and was charged with a high level of optimism, often bordering on or crossing over into deterministic understandings of societal development (for introductions to the development of research on the future, see [41,42]). This was precisely one of the key aspects of Adorno’s critique: The very form of scientific prognosis reduces historical development to a simple analytical judgment and, by treating humans and their behavior as just another variable, fundamentally denies their agency. By assuming that future developments can be anticipated deterministically in the same way as solving any other mathematical problem, it excludes the very possibility of alternatives [43] (p. 64).
In his attempt to outline a critical approach to empirical research, he connects the concreteness and binding character of scientific hypotheses with the fact that they are unable to qualitatively transcend dominant social relations—much as I have argued above with regard to macroeconomic models. He claims that the attempt to anticipate future developments through hypotheses confined to existing social relations amounts to little more than the intellectual reproduction of the past. It is incommensurable with the primary motivation of Critical Theory: advancing collective human emancipation in a liberated society [44] (pp. 198, 199). Indeed, it seems rather evident that a group of Marxists convinced of a radical need for societal transformation would take offense at a technocratic scientific endeavor suspending qualitative societal progress in the interest of the perpetuation of a smoothly managed status quo. However, it would be intellectually dishonest to apply this critique to studies such as the one by Wolter et al. without further ado: Their approach is much more sophisticated and nuanced than early scientific prognostics—not just in terms of the past decades of refinement of computational modelling but also insofar as they do not claim to predict the future. Rather, their projection is to be understood as one possible future, which is contrasted both with a “baseline” scenario and with a vaguely outlined scenario in which international competitors beat the German economy in adopting Industry 4.0 20. The study is therefore non-deterministic. Despite this relative indeterminacy, the critique remains that, rather than enabling a wide-ranging debate on societal alternatives, the framework employed by Wolter et al. limits the development of scenarios to a quite narrow corridor of possibilities.
On a less abstract and normatively charged level, the fixation on “fair weather” scenarios that seems predominant in macroeconomic modelling around Industry 4.0 should be a matter of concern to anyone interested in reliable scientific expertise. After all, reality might defy common sense (in this case, regarding the economic opportunities offered by Industry 4.0), even one that is widely shared among economic, political, and scientific thought leaders. This was the case, for instance, when in the years following 2008 reality asserted itself against the wishful thinking of economists, bankers, and politicians alike. When, in the aftermath, British economists from both academia and the banking sector were confronted by the Queen with the question of why they had failed to notice that a crisis was looming, they convened at the British Academy to draft an explanation. In it, they cite “wishful thinking combined with hubris,” “politicians […] charmed by the market,” a “psychology of denial,” and the “failure of the collective imagination of many bright people” with regard to systemic economic risks as reasons for the collective failure of their discipline. They are also keen to highlight the role economic models played in abetting these individual misjudgments—models that turned out to be “good at predicting the short-term and small risks” but were largely ill-equipped “to say what would happen when things went wrong as they have” [46] 21.
This is not to say that automation necessarily has to lead to any sort of systemic crisis in the near future. However, given that the experience of the financial crisis seems to have had little effect on the methodology of macroeconomic modelling, on the evaluative dimensions of scenarios, or even on the selection of values for specific assumptions, dominant economic research threatens once again to be of no use in seeing a socioeconomic crisis coming—or in recognizing that its socioeconomic consequences might be exacerbated by automation (for a more detailed discussion of possible connections between crises and automation, see chapter 4.1). Or, as Jonathan Aldred, a heterodox economist at Cambridge University, put it, “Conventional economic theories have had little to offer [to face looming crises triggered by ecological deterioration and technological change]. On the contrary, they have acted like a cage around our thinking” [48]. In light of this, it does not seem excessively critical to demand from established economists at least a fraction of the scrupulous soul-searching and reflexivity that is (rightly) demanded of anyone defying established social and scientific norms—particularly because their normative biases and professional failings have caused significant societal devastation in the past [49] 22. To summarize, not only does this form of scenario building not promote the exploration of societal alternatives, it even fails to satisfy the demands that would need to be met to responsibly manage the status quo.
In contrast, the exploration of the tension between social reality and objective societal potentials is a defining feature of critical thinking [44] (p. 197). I would argue that the analysis of technological potentials, represented by Frey and Osborne, lends itself well to an emancipatory appropriation in this context, as it offers insight into one dimension of such potentials. Of course, not all the answers they give are necessarily accurate, but by limiting themselves to a question that is of special interest to Critical Theory (what might become (technologically) possible in the future?), they offer insights less burdened with the plethora of normative assumptions informing the scenario modelling we examined. That is not to say that scenario methods might not also be useful to inform, for instance, strategy building and planning in the context of social transformation, but given the normative biases present in some of today’s scenario frameworks, existing frameworks would have to be heavily adapted (or substituted by new ones).
This distinction might also explain the quite different reception the two studies received: Whereas the study by Frey and Osborne sparked lively debates about the impacts of technological change on society (and alternative ways to make use of these technological potentials), Wolter et al.’s study was also met with interest—but mostly by labor market experts and policymakers. I would suggest that this should not be explained exclusively by factors external to the studies themselves 23. Rather, the fact that Frey and Osborne highlighted vast technological potentials allowed for an opening up of public debate, as established social relations seemed challenged by technological change, offering a chance to present radical alternatives to the status quo (e.g., a society in which the dominance of wage labor in our lives would be transcended). As such, the Frey and Osborne study exhibited a strong discursive function. Wolter et al., on the other hand, provided expertise that might attract relatively little attention in public discourse—there is a way to implement Industry 4.0 that allows things to stay the way they are, although quite a number of workers might have to be requalified—but that is immediately useful for the specialist discourses and strategy formation of policymakers [37] (pp. 28ff.).
Notwithstanding this practical usefulness, I hope that this paper has succeeded in highlighting both the methodological limitations and the normative saturation of many of the assumptions that are key to explaining the results of studies offering projections of the labor market effects of automation, thereby contributing to a more enlightened and reflexive debate on the future of automation. Scholars who take the numbers these studies supply at face value without reflecting on them carefully risk implicitly accepting the normative orientation of these studies and thereby missing perhaps the central discussions that should take place: Should automation be used to increase competitiveness or should it be used to allow for increased leisure? What kind of social innovations do we need so that automation leads to a better life for the many, rather than temporary surplus profits for the few? Which labor ought to be automated and which should be left to humans on normative grounds? Do capital owners get to decide autocratically what technology is used for, or is this decision democratized [50]? What are the actual needs of workers? How might further increases in productivity be reconciled with ecological sustainability?
I would argue that rather than trying to predict what effects automation will have in the future—a question that cannot be answered conclusively in a non-deterministic framework—research should focus on the much more interesting question of what automation could and, even more importantly, should be used for. Rather than supplying knowledge for the more or less successful management of the status quo, such a reorientation would challenge researchers to include in their reflections the societal conditions under which technological innovation takes place and to understand them as the (modifiable) result of human practice, thereby moving from a technology-focused approach to one rooted in social theory. This broadening of perspective would then allow scientists to explore ways of employing technology in the interest of societal progress, while at the same time emphasizing that societal progress will not result from technological development by itself [26,51].

Funding

This research received no external funding.

Acknowledgments

I would like to thank Armin Grunwald, Paul Grünke, and Simon Schaupp for the feedback they provided on earlier drafts of this paper. Furthermore, I would like to thank the KIT-Publication Fund of the Karlsruhe Institute of Technology for its support.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Frey, C.B.; Osborne, M.A. The Future of Employment: How Susceptible are Jobs to Computerisation? 2013. Available online: https://www.oxfordmartin.ox.ac.uk/downloads/academic/future-of-employment.pdf (accessed on 8 December 2020).
  2. Frey, C.B.; Osborne, M.A. The Future of Employment: How Susceptible are Jobs to Computerisation? Technol. Forecast. Soc. Chang. 2017, 114, 254–280. [Google Scholar] [CrossRef]
  3. Matuschek, I. Industrie 4.0, Arbeit 4.0—Gesellschaft 4.0: Eine Literaturstudie. Available online: https://www.rosalux.de/fileadmin/rls_uploads/pdfs/Studien/Studien_02-2016_Industrie_4.0.pdf (accessed on 10 January 2020).
  4. Kaltenborn, B. Auswirkungen der Digitalisierung auf die Erwerbstätigkeit in Deutschland: Literaturstudie; Working Paper Forschungsförderung No. 157; Hans-Böckler-Stiftung: Düsseldorf, Germany, 2019. [Google Scholar]
  5. Laukhuf, A.; Runschke, B.; Spies, S.; Stohr, D. Beschäftigungseffekte der Digitalisierung in Branchen: Ein Literaturüberblick; Working Paper Forschungsförderung No. 162, 2019. Available online: https://www.boeckler.de/pdf/p_fofoe_WP_162_2019.pdf (accessed on 10 January 2020).
  6. Brzeski, C.; Burk, I. Die Roboter kommen: Folgen der Automatisierung für den deutschen Arbeitsmarkt. INGDiBa Econ. Res. 2015, 30, 7p. [Google Scholar]
  7. Bonin, H.; Gregory, T.; Zierahn, U. Übertragung der Studie von Frey/Osborne (2013) auf Deutschland. ZEW Kurzexpertise 0174-4992 2015, FB455, 51. [Google Scholar]
  8. Manyika, J.; Chui, M.; Miremadi, M.; Bughin, J.; Georg, K.; Willmott, P.; Dewhurst, P. A Future that Works: AI, Automation, Employment and Productivity. Available online: https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Digital%20Disruption/Harnessing%20automation%20for%20a%20future%20that%20works/MGI-A-future-that-works_Full-report.ashx (accessed on 10 January 2020).
  9. Betz, G. Fallacies in Scenario Reasoning; Karlsruher Institut für Technologie (KIT): Karlsruhe, Germany, 2016. [Google Scholar]
  10. EPTA. The Future of Labour in the Digital Era: Ubiquitous Computing, Virtual Platforms, and Real-Time Production. Available online: Epub.oeaw.ac.at/ita/ita-projektberichte/EPTA-2016-Digital-Labour.pdf (accessed on 15 August 2019).
  11. Frey, C.B.; Osborne, M. Automation and the Future of Work—Understanding the Numbers. Available online: https://www.oxfordmartin.ox.ac.uk/blog/automation-and-the-future-of-work-understanding-the-numbers/ (accessed on 15 November 2018).
  12. Pfeiffer, S.; Suphan, A. Der AV-Index. Lebendiges Arbeitsvermögen und Erfahrung als Ressourcen auf dem Weg zu Industrie 4.0. Available online: http://www.sabine-pfeiffer.de/files/downloads/2015 (accessed on 20 August 2016).
  13. Nuffield Council on Bioethics. Emerging Biotechnologies: Technology, Choice and the Public Good; Nuffield Council on Bioethics: London, UK, 2012; ISBN 9781904384274. [Google Scholar]
  14. Edwards, M.A.; Roy, S. Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition. Environ. Eng. Sci. 2017, 34, 51–61. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. OECD. Getting Skills Right: Skills for Jobs Indicators; OECD Publishing: Paris, France, 2017; ISBN 978-92-64-27787-8. [Google Scholar]
  16. Autor, D. Polanyi’s Paradox and the Shape of Employment Growth; National Bureau of Economic Research: Cambridge, MA, USA, 2014; pp. 129–177. [Google Scholar]
  17. Frey, C.B. The Technology Trap: Capital, Labor, and Power in the Age of Automation; Princeton University Press: Princeton, NJ, USA, 2019; ISBN 9780691172798. [Google Scholar]
  18. Brownlee, J. How Much Training Data is Required for Machine Learning? Available online: https://machinelearningmastery.com/much-training-data-required-machine-learning/ (accessed on 22 December 2019).
  19. Brandes, P.; Wattenhofer, R. Opening the Frey/Osborne Black Box: Which Tasks of a Job are Susceptible to Computerization? Available online: http://arxiv.org/pdf/1604.08823v2 (accessed on 20 October 2020).
  20. Durán, J.M. Varying the Explanatory Span: Scientific Explanation for Computer Simulations. Int. Stud. Philos. Sci. 2017, 31, 27–45. [Google Scholar] [CrossRef] [Green Version]
  21. Durán, J.M.; Formanek, N. Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism. Minds Mach. 2018, 28, 645–666. [Google Scholar] [CrossRef] [Green Version]
  22. Krohs, U. How Digital Computer Simulations Explain Real-World Processes. Int. Stud. Philos. Sci. 2008, 22, 277–292. [Google Scholar] [CrossRef]
  23. Humphreys, P. Computational Science and Its Effects. In Science in the Context of Application; Carrier, M., Nordmann, A., Eds.; Springer Netherlands: Dordrecht, The Netherlands, 2011; pp. 131–142. ISBN 978-90-481-9050-8. [Google Scholar]
  24. Valenduc, G.; Vendramin, P. Work in the Digital Economy: Sorting the Old From the New; European Trade Union Institute Brussels: Brussels, Belgium, 2016. [Google Scholar]
  25. Wolter, M.I.; Mönnig, A.; Hummel, M.; Weber, E.; Zika, G.; Helmrich, R.; Maier, T.; Neuber-Pohl, C. Economy 4.0 and Its Labour Market and Economic Impacts: Scenario Calculations in Line with the BIBB-IAB Qualification and Occupational Field Projections; IAB Research Report 13/2016; Institut für Arbeitsmarkt- und Berufsforschung: Nürnberg, Germany, 2016. [Google Scholar]
  26. Frey, P.; Schaupp, S. Futures of digital industry: Techno-managerial or techno-political utopia? BEHEMOTH-A J. Civilis. 2020, 13. [Google Scholar] [CrossRef]
  27. Pfeiffer, S. The Vision of “Industrie 4.0” in the Making-a Case of Future Told, Tamed, and Traded. Nanoethics 2017, 11, 107–121. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Das IAB/INFORGE-Modell: Ein Sektorales Makroökonometrisches Projektions- und Simulationsmodell zur Vorausschätzung des Längerfristigen Arbeitskräftebedarfs; Zika, G.; Schnur, P.; Bertelsmann, W. (Eds.) W. Bertelsmann: Bielefeld, Germany, 2009; ISBN 3763940057. [Google Scholar]
  29. Ahlert, G.; Distelkamp, M.; Lutz, C.; Meyer, B.; Mönnig, A.; Wolter, M.I. Das IAB/INFORGE-Modell. In Das IAB/INFORGE-Modell: Ein Sektorales Makroökonometrisches Projektions- und Simulationsmodell zur Vorausschätzung des Längerfristigen Arbeitskräftebedarfs; Zika, G., Schnur, P., Bertelsmann, W., Eds.; W. Bertelsmann: Bielefeld, Germany, 2009; pp. 15–170. ISBN 3763940057. [Google Scholar]
  30. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data Soc. 2016, 3, 205395171562251. [Google Scholar] [CrossRef]
  31. Tetens, H. Wissenschaftstheorie: Eine Einführung, Orig.-Ausg; Beck: München, Germany, 2013; ISBN 978-3-406-65331-5. [Google Scholar]
  32. Timcke, S. The One-Dimensionality of Econometric Data: The Frankfurt School and the Critique of Quantification. Triple C 2020, 18, 429–443. [Google Scholar] [CrossRef]
  33. Colander, D.; Goldberg, M.; Haas, A.; Juselius, K.; Kirman, A.; Lux, T.; Sloth, B. The Financial Crisis and the Systemic Failure of the Economics Profession. Crit. Rev. 2009, 21, 249–267. [Google Scholar] [CrossRef]
  34. Naidu, S.; Rodrik, D.; Zucman, G. Economics after Neoliberalism: Introducing the EfIP Project. AEA Pap. Proc. 2020, 110, 366–371. [Google Scholar] [CrossRef]
  35. Dengler, K.; Matthes, B. Folgen der Digitalisierung für die Arbeitswelt: Substituierbarkeitspotenziale von Berufen in Deutschland; IAB Research Report 11/2015; Institut für Arbeitsmarkt- und Berufsforschung: Nürnberg, Germany, 2015. [Google Scholar]
  36. Schiølin, K. Revolutionary dreams: Future essentialism and the sociotechnical imaginary of the fourth industrial revolution in Denmark. Soc. Stud. Sci. 2020, 50, 542–566. [Google Scholar] [CrossRef] [PubMed]
  37. Dieckhoff, C.; Appelrath, H.-J.; Fischedick, M.; Grunwald, A.; Höffler, F. Zur Interpretation von Energieszenarien; Stand: Mai 2014; Acatech—Deutsche Akademie der Technikwissenschaften: München, Germany, 2014; ISBN 978-3-9817048-1-5. [Google Scholar]
  38. Horkheimer, M. Traditional and Critical Theory. In Critical Theory: Selected Essays; Horkheimer, M., Ed.; A&C Black: Edinburgh, UK, 2002; pp. 188–243. ISBN 0826400833. [Google Scholar]
  39. Srnicek, N.; Williams, A. Inventing the Future: Postcapitalism and a World without Work; Verso: Brooklyn, NY, USA, 2015; ISBN 9781784780968. [Google Scholar]
  40. Weeks, K. Anti/Postwork Feminist Politics and A Case for Basic Income. Triple C Commun. Cap. Crit. Open Access J. Glob. Sustain. Inf. Soc. 2020, 575–594. [Google Scholar] [CrossRef]
  41. Grunwald, A. Technology Assessment in Practice and Theory; Routledge: London, NY, USA, 2019; ISBN 1138337080. [Google Scholar]
  42. Gransche, B. Vorausschauendes Denken: Philosophie und Zukunftsforschung jenseits von Statistik und Kalkül; Transcript: Bielefeld, Germany, 2015; ISBN 978-3-8376-3038-1. [Google Scholar]
  43. Adorno, T.W. Spengler nach dem Untergang. In Kulturkritik und Gesellschaft I/II; Adorno, T.W., Ed.; Suhrkamp: Frankfurt am Main, Germany, 1977; pp. 47–71. ISBN 3518572261. [Google Scholar]
  44. Adorno, T.W. Soziologie und empirische Forschung. In Soziologische Schriften I.; Adorno, T.W., Ed.; Suhrkamp: Frankfurt am Main, Germany, 1972; pp. 196–216. ISBN 3518572261. [Google Scholar]
  45. Kosow, H.; León, C.D. Die Szenariotechnik als Methode der Experten- und Stakeholdereinbindung. In Methoden der Experten- und Stakeholdereinbindung in der sozialwissenschaftlichen Forschung; Niederberger, M., Wassermann, S., Eds.; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2015; pp. 217–242. ISBN 978-3-658-01686-9. [Google Scholar]
  46. Besley, T.; Hennessy, P. Letter to Her Majesty The Queen: The Global Financial Crisis—Why Didn’t Anybody Notice? The British Academy: London, UK, 2009. [Google Scholar]
  47. Solow, R.M. Written Statement. In Building a Science of Economics for the Real World: Hearing before the Subcommittee on Investigations and Oversight; Committee on Science and Technology: Washington, DC, USA, 2010; pp. 14–15. [Google Scholar]
  48. Aldred, J. This Pandemic Has Exposed the Uselessness of Orthodox Economics. Available online: https://www.theguardian.com/commentisfree/2020/jul/05/pandemic-orthodox-economics-covid-19 (accessed on 7 July 2020).
  49. Grunwald, A. Transformative Wissenschaft als honest broker? Das passt! GAIA—Ecol. Perspect. Sci. Soc. 2018, 27, 113–116. [Google Scholar] [CrossRef] [Green Version]
  50. Frey, P.; Schneider, C.; Wadephul, C. Demokratisierung von Technik ohne Wirtschaftsdemokratie? TATuP 2020, 29, 30–35. [Google Scholar] [CrossRef]
  51. Frey, P.; Schaupp, S.; Wenten, K.-A. Towards Emancipatory Technology Studies. Nanoethics 2021, 15, 19–27. [Google Scholar] [CrossRef]
1
Their approach thereby also circumvents the distinction between manual and cognitive labor, acknowledging the fact that the implicit identification of manual labor with (automatable) routine labor and cognitive labor with (unautomatable) non-routine labor might hold less and less true over time, allowing more widespread automation in the service sector.
2
To verify the reliability of the hand-labelled classification, Frey and Osborne used Gaussian process classifiers based on the set of O*NET variables linked to the engineering bottlenecks. The algorithm accurately managed to reproduce the hand-labels of the experts, verifying “that our subjective judgements were systematically and consistently related to the O*NET variables” ([1] (p. 34)).
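As a purely illustrative aside: a verification step of this kind can be reproduced with standard machine learning tooling. The following minimal sketch is my own (it is not Frey and Osborne’s actual code, and the feature values and labels are invented); it fits a Gaussian process classifier to a small hand-labelled sample described by a handful of bottleneck-related variables and checks, via cross-validation, how systematically the labels relate to the features.

# Minimal illustrative sketch (not Frey/Osborne's code); all data are invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in for 70 hand-labelled occupations, each described by 9 O*NET-style
# bottleneck variables (e.g., finger dexterity, originality, social perceptiveness).
X = rng.uniform(0.0, 1.0, size=(70, 9))
# Hypothetical hand labels: 1 = "automatable", 0 = "not automatable".
y = (X[:, :3].mean(axis=1) > 0.5).astype(int)

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
# High cross-validated accuracy indicates that the hand labels are systematically
# related to the features rather than merely memorized by the model.
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
# Fitting on the labelled sample and predicting class probabilities for further
# cases mirrors the step of extending the hand labels to the full set of occupations.
clf.fit(X, y)
print(clf.predict_proba(X[:5]))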
3
To be fair, this should not be interpreted simply as a sign of excessive enthusiasm or even personal conceit, but (at least in part) as an effect of a highly competitive scientific system in which any scientist is called upon, even forced, to highlight the great potentials of the respective field she is researching, lest the scarce funding go to the development of some other promising technology—or even worse, the humanities [13,14].
4
In composing the data training set, the machine learning experts were accordingly asked to consider “the possibility of task simplification” to the best of their knowledge ([1] (p. 30)).
5
In light of the immense volumes of data utilized in today’s machine learning, a training data set of 70 feature vectors, each containing only nine variables (the engineering bottleneck-related variables of O*NET, deemed relevant to the question of automatability), seems rather modest. Although the amount of data needed for machine learning depends on the specific use case, this concern seems particularly relevant in this case, as non-linear algorithms are known to require even bigger training data sets [18].
6
In a notable exception, two computer scientists of the Swiss Federal Institute of Technology in Zurich dedicated themselves to “Opening the Frey/Osborne Black Box” [19]. Yet although they refer to the study as a black box, they do not engage in great detail with its workings. Rather, they build their own model to identify outliers in the results of Frey and Osborne in order to allow for a more detailed scrutiny of the study’s results.
7
A scientific discussion on the epistemic power of computer simulations does exist [20,21,22], but it does not play a substantial role in the papers discussing Frey/Osborne.
8
The literature review on cross-country validity of O*NET scores of a recent OECD study concluded, however, “that occupational titles refer to very similar activities and skill demands across different countries” [15] (p. 42), implying that the claim that the findings could not be applied to other economies might owe less to actual differences in job realities and more to an implicit nationalist bias.
9
One might, of course, also criticize their study by claiming that they should have dealt with labor market impacts, rather than simply highlighting technological potentials. I will return to the “use value” of these studies at the end of this chapter. Thus far, I have focused on a form of immanent critique, reviewing the study in the light of the objectives it sets itself.
10
The term Economy 4.0 represents an extension of the Industry 4.0 term, popular in contemporary German debates to denote the current phase of technological development, to the whole of the economy, as the study does not limit itself to changes within industry and agriculture [25] (p. 9). For an introduction to the Industry 4.0 discourse see [26,27].
11
The QuBe was developed by the BIBB and focuses on modelling the general demography of Germany (by nationality, gender and age), labour supply (with factors including for instance levels of labour participation and qualification) and labour demand (with factors including occupational requirements and wage and price levels).
12
The study actually reads “95 percent of all households will have a 50 Mbit/s connection by 2018” [25] (p. 26). I would suggest interpreting this assumption as saying that households could in principle access broadband, rather than that they will in fact have such a connection, given that there might be a number of reasons for households not to opt for more expensive broadband tariffs—unless the connection were supplied by the public sector to all households free of charge as a public service. However, Wolter et al. give no indication that they had that in mind.
13
I would suggest that the reservations towards the (self-)assessment of practitioners that were raised above regarding AI experts should also be taken into account here. After all, within a societal context that is buzzing with high expectations and the normative pressure to endorse and enact innovation to attract investors, the assessment of technological potentials appears to be at the very least skewed (regarding the normative power of the Industry 4.0 discourse, see [26,27]).
14
See my discussion of O*NET above. The BERUFENET, for instance, also does not cover differences in occupational realities within job profiles. Nonetheless, it should be positively noted that using a German database bypasses issues resulting from applying assessments from the US labor market to the German one.
15
To be fair, in a more recent paper, published after the peak of the Industry 4.0 debates and unavailable in English, Wolter et al. addressed both these desiderata by moving towards a methodology much closer to the one developed by Frey and Osborne (which can be understood as a tacit vindication of their approach) and by modelling branch-specific utilization levels based on investment activities. Although the projected job losses due to accelerated technological development are much higher in comparison to the 2016 study (e.g., they projected that 100,000 jobs will be lost in 2030 compared to just 30,000 in the 2016 projection), they remain minuscule in comparison to the whole of the labor market. This is consistent with my earlier expositions regarding the socioeconomic determinacy of technological unemployment: Even if one assumes a higher technological dynamic and use, the development of unemployment ultimately depends strongly on demand for goods and services and the associated job creation, rather than on technological development per se.
16
For instance, their projection of increased government consumption spending is limited to the areas of cyber crime and/or cyber warfare, with the state projected to hire 14,000 additional soldiers and boost the federal police force by 2000 employees [25] (p. 45). The exclusive focus on additional military and police spending seems, for lack of a better term, odd. Another assumption—that domestic consumer demand will be boosted by rising wages as productivity increases—is normatively appealing and should, in my opinion, indeed be pursued as a policy goal, but is currently not as self-evident as Wolter et al. assume. After all, the erosion of the link between productivity and wage increases can be considered one of the key contributors to the increased social polarization of the last decades.
17
The findings should not be mistaken for direct “instructions” for policymaking, however—not only normatively, because of the relative autonomy of the political sphere, but also because the study seems to lack robust sensitivity analyses for individual factors that might then inform policymaking [37] (p. 33). The approach of creating a number of scenarios that build on each other, each linked to a more limited set of assumptions, could be charitably interpreted as serving as an “aggregate sensitivity analysis” of sorts, but even then we do not know whether specific changes in the scenarios depend on one particular assumption.
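To make concrete what such a sensitivity analysis for individual factors might look like in its simplest form, consider the following toy sketch (my own construction with invented parameter values; it is not the QuBe/IAB-INFORGE framework): each assumption is perturbed one at a time while the others are held fixed, and the change in a single stylized output is recorded.

# Toy one-at-a-time sensitivity check; the model and all numbers are invented
# for illustration and bear no relation to the actual QuBe/IAB-INFORGE model.
def employment_effect(a):
    """Stylized output: jobs created minus jobs displaced (in thousands)."""
    created = 100 * a["export_demand_growth"] + 50 * a["wage_passthrough"]
    displaced = 120 * a["automation_uptake"]
    return created - displaced

baseline = {"export_demand_growth": 0.5, "wage_passthrough": 0.8, "automation_uptake": 0.6}
base = employment_effect(baseline)
print(f"baseline effect: {base:+.1f}k jobs")
# Perturb each assumption by +/-20 percent while holding the others fixed.
for name in baseline:
    for factor in (0.8, 1.2):
        scenario = dict(baseline, **{name: baseline[name] * factor})
        print(f"{name} x{factor}: change of {employment_effect(scenario) - base:+.1f}k jobs")

Even such a crude exercise makes visible which assumptions a headline figure hinges on—precisely the kind of factor-by-factor information that scenarios building on one another cannot provide.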
18
Since it is central to this Special Issue’s subject, I would only like to remind the reader of the exemplary fact that the assumption employed by Wolter et al. regarding the form and extent of automation in the future is informed by an outdated understanding of automatability and an additional ad hoc assumption (see above, also for a reference to the 2019 study improving on this assumption). It is also noteworthy that although the assumptions are discussed individually, there is no attempt to justify them in combination (i.e., is it possible that all these assumptions will come to pass at once?), although it seems likely to me that such a justification could be achieved. Regarding the need to justify not only individual assumptions in scenario modelling but also their combination, see [37] (p. 24).
19
This realization echoes earlier comments by Horkheimer, who pointed out that directions and goals of research “are not self-explanatory nor are they, in the last analysis, a matter of insight” [38] (p. 196). Rather, they should be understood as being shaped by social conditions.
20
The awareness of alternative futures constitutes a key epistemic advantage of scenario modelling in comparison to earlier prognostic models, as it owns up to the epistemic uncertainty linked to any attempt to “look into the future” [45].
21
Much in the same spirit, the Committee on Science and Technology of the US Congress convened a year later for a hearing committed to “Building a science of economics for the real world” (note the delegitimization this title implies—after all, one should have expected economics to always have been about the real world, particularly in light of the prominence of economists in scientific advisory practices). Among the witnesses was Robert Solow, one of the most highly decorated and influential economists of the period after the Second World War (not only did Solow receive the Nobel Prize for Economics himself, but so did four of his former PhD students). In his statement, he echoes his British colleagues, pointing out that “the approach to macroeconomics that dominates serious thinking, certainly in our elite universities and in many central banks and other influential policy circles, seems to have absolutely nothing to say about the problem [of justifying their basic concepts, particularly in relation to (un-)employment]. Not only does it offer no guidance or insight, it really seems to have nothing useful to say” [47] (p. 14).
22
On a side note, the disproportionate scrutiny facing scientific critics of contemporary society was already reflected on by Horkheimer: “Although critical theory at no point proceeds arbitrarily and in chance fashion, it appears, to prevailing modes of thought, to be subjective and speculative, one-sided and useless. Since it runs counter to prevailing habits of thought, which contribute to the persistence of the past and carry on the business of an outdated order of things […], it appears to be biased and unjust” [38] (p. 218).
23
E.g., that Frey and Osborne were first, that the public outreach of Oxford University might be better than that of the IAB and BIBB, or that statements about the US labor market are deemed more interesting internationally than those about the German labor market.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
