A Rabbi Meets DeepMind

Britain’s former Chief Rabbi searches in vain through bookshops piled with fresh publications on self-help, mindfulness, managing anxiety and how to stay focused in uncertain times. These are individual concerns; Jonathan Sacks rarely finds anything to read on ethics.

“We need ethics more than ever,” Sacks told Mustafa Suleyman, co-founder of Google subsidiary DeepMind, in an interview broadcast on BBC Radio 4. Ethics is the study of collective morality, of the relations within society and each person’s responsibility to the whole.

Suleyman agreed. The standard tropes of science fiction loomed large in their discussion of artificial intelligence. But Suleyman was less concerned by the distant prospect – avoidable, he said – of autonomous super-intelligent machines seizing control of human society: “That kind of speculation does a disservice to the more practical, real world, near-term ethical consequences of processing data (at scale).”

Sacks recalled meeting a Jewish woman who, on becoming a mother, told him that she found herself better able to understand God. She knew what it was to create something you cannot control.

For Suleyman, super-intelligent machines are our best hope: the means to tackle human catastrophe, from malnourishment to disease and child poverty. On the ethics of AI, Suleyman urged a collective response to the “horrible complacency” which has obscured pressing social problems. “We have assumed that we’ve reached the end of history, the end of boom and bust,” he said.

The co-founder of DeepMind recently bought an old-fashioned analogue alarm clock. He doesn’t want to look at a screen when he wakes up. In an age of abundant information, Suleyman’s advice is “to adapt and learn self-restraint” in our use of screen time and social media.

He emphasised more prosaic uses for AI: to ease traffic congestion, improve medical diagnosis, find videos online and discover potential friends. Already, narrow learning by task-specific algorithms means that machines run complex systems more efficiently than management consultants. DeepMind has achieved savings of 40% in Google’s energy consumption for cooling cloud servers, for example.

The spectre beloved by science fiction, so-called general learning that spans contexts and whole systems rather than single tasks, is far harder to achieve. Algorithms process recorded data, which often reflects historic bias. Hence recent attempts to use AI to assess the risk of re-offending by convicted criminals reproduced a familiar pattern of racism and racial profiling.

Justice is a human concept, so the governance of such processes is a civic responsibility. Or, as Suleyman advised Sacks: “We have agency. We are still in a negotiation.”
