Martin Paul Eve

Professor of Literature, Technology and Publishing at Birkbeck, University of London

Even as worldwide militaries develop autonomous killer robots, when we think of the ethics of AI, we often turn to the Asimov principles:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These seem like sound principles if we wish to avoid the robot takeover feared by Elon Musk and others.

Further, we know that training linguistic models on broad corpora tends to reproduce oppressive, racist, and gendered structures. This, too, seems like an important ethical area.

Perhaps, though, what we need to think more about, in the ethics of AI, is the way that we treat the human data processors who prepare material for the training of artificial neural networks and other machine-learning systems. For instance, staff on precarious contracts at Facebook and Google are paid $0.02 for each image that they moderate, meaning that they must sift through heaps of scarring images of child abuse for a tiny quantity of remuneration. Attention to this area has grown in recent years, with the first conference on the subject held last year.

The point I am trying to make is this: we tend to think that the ethics of AI are about restricting the actions of advanced machine-learning algorithms so that they operate within specific normative moral bounds. What we don’t often acknowledge is that such learning still depends upon vast quantities of human labour to filter the datasets. This work is repetitive and mentally scarring, and it is paid very badly. Those who preach the need for AI ethics principles are also, often, Silicon Valley billionaires. Yet their wealth relies on the exploitation of the people who filter and moderate the content fed to AI. Perhaps we should address the ethics of this before we heed the cries for ethics to be transferred solely to the realm of machine regulation.