
Algorithms & work

28 August 2018

The article at a glance

The implications of learning algorithms in the workplace pose difficult issues of control and morality. Dr Stella Pachidi of Cambridge Judge Business School discusses these dilemmas in a new research paper.

Learning algorithms have already reshaped many businesses, from retailing to music, as machines master consumer preferences and tailor offerings to individual tastes.

Dr Stella Pachidi

But what are the implications for the workplace and organisations? What sort of policies are needed to regulate or modify technology that, in its purest form, leaves no role for the accumulated experience and judgement of people who have worked for years in a company or other workplace environment?

These are the issues raised in a new research paper co-authored by Dr Stella Pachidi, Lecturer in Information Systems at Cambridge Judge Business School, recently published in the journal Information and Organization. The paper calls for extensive study into “societal issues such as the extent to which the algorithm is authorised to make decisions” and “the need to incorporate morality in the technology”.

The paper – entitled “Working and organising in the age of the learning algorithm” – is co-authored by Dr Pachidi with Professor Samer Faraj and Karla Sayegh of McGill University in Montreal.

Dr Stella Pachidi discusses some of the paper’s findings and areas for further research:

Artificial intelligence has been around for half a century, but learning algorithms are now able to equal and even exceed the performance of humans in a wide variety of tasks. The implications of such technological advances are staggering, affecting occupations ranging from the unskilled to the very highly skilled. In fact, even the very definition of “skill” is changing before our eyes in this new world of the algorithmic workplace.

There are positive and negative implications for business. Algorithms can perform many complex tasks more quickly and accurately than the human mind, and this can clearly boost efficiency. On the other hand, algorithms often function in a “black box” manner – people find it difficult to understand how an algorithm arrived at a particular outcome – and this can make it hard to assign legal responsibility or to demonstrate that decisions were fair.
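
To make the “black box” point concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the data, model and features are invented for illustration and are not drawn from the paper. It trains an ensemble model whose individual predictions come with no human-readable rationale, then applies permutation importance, a post-hoc tool that only approximates which inputs mattered overall.

```python
# A minimal sketch of the "black box" problem, assuming scikit-learn.
# The data and model are hypothetical illustrations, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                   # five anonymous input features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # outcome driven by features 0 and 2

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A single decision: the model outputs a class, but the path through
# hundreds of trees offers no human-readable justification.
print("decision:", model.predict(X[:1])[0])

# Post-hoc tools such as permutation importance only approximate which
# inputs mattered on average; they do not explain any individual decision.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("approximate feature importances:", result.importances_mean.round(3))
```

Even with such tools, the gap between “which features mattered overall” and “why this particular decision was made” is exactly what complicates legal responsibility and fairness.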

People in certain types of occupations may need to refine their roles or core expertise. A salesperson whose reputation is built on frequent interaction with customers and deep knowledge of their accounts may instead need to act on sales opportunities identified by an algorithm. Such devaluation of human expertise has immense implications for the workplace as we now know it, and we’ve barely begun to wrestle with these consequences.

We may need a totally new system of occupational accountability. In traditional workplaces, mistakes or errors in analysis can usually be traced to an individual or team (as can genius ideas or terrific implementation). But who’s to blame if an algorithm arrives at a faulty solution? Throughout history, human decision making has been based not only on previously seen patterns but on a broad set of contextual factors – hence the term “judgement call” – but algorithms are changing the nature of such decision making. Perhaps some of the decisions themselves will be more predictable, but the consequences may be anything but.

Moral issues are quickly coming to the fore as algorithms blur the line between user and technology. Should an algorithm in a self-driving car always “overpower” an occupant who is driving too fast (there may be an emergency or another valid reason), or only under certain circumstances, such as when the driver is drunk? And who should decide, given that the views of commercial operators may be at odds with those of others in society?
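
One way to see why the “who decides” question matters is to notice that any such override rule must eventually be written down with concrete thresholds. The following hypothetical Python sketch uses invented names and values that come from neither the paper nor any real vehicle system; it simply shows that the moral choice ends up encoded in whoever sets the parameters.

```python
# A hypothetical sketch of how an override policy might be parameterised.
# All names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class OverridePolicy:
    speed_margin_kmh: float     # how far over the limit before intervening
    override_if_impaired: bool  # always intervene if the driver is impaired?

def should_override(speed_kmh: float, limit_kmh: float,
                    driver_impaired: bool, policy: OverridePolicy) -> bool:
    """Return True if the algorithm takes control from the occupant."""
    if driver_impaired and policy.override_if_impaired:
        return True
    return speed_kmh > limit_kmh + policy.speed_margin_kmh

# The moral question is who sets these values: the manufacturer,
# the regulator, or the occupant.
cautious = OverridePolicy(speed_margin_kmh=5.0, override_if_impaired=True)
permissive = OverridePolicy(speed_margin_kmh=30.0, override_if_impaired=True)

print(should_override(120.0, 100.0, False, cautious))    # True
print(should_override(120.0, 100.0, False, permissive))  # False
```

The same speeding occupant is overridden under one policy and left alone under the other – the disagreement between commercial operators and wider society is, in effect, a disagreement over these parameter values.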

Our paper says that learning algorithms are creating a new “digital iron-cage” whose bars cannot readily be grasped or bent. We need a lot more research to sort out some of the implications of this type of unbending technology.