Stephen Hawking on Artificial Intelligence -- 6/26/19

Today's selection -- from Brief Answers to the Big Questions by Stephen Hawking. Stephen Hawking on artificial intelligence:

"I think there is no significant difference between how the brain of an earthworm works and how a computer computes. I also believe that evolution implies there can be no qualitative difference between the brain of an earthworm and that of a human. It therefore follows that computers can, in principle, emulate human intel­ligence, or even better it. It's clearly possible for something to acquire higher intelligence than its ances­tors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents.

"If computers continue to obey Moore's Law, doubling their speed and memory capacity every eighteen months, the result is that computers are likely to over­take humans in intelligence at some point in the next hundred years. When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours. It's tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.

"For the last twenty years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in a particular environment. In this context, intelligence is related to statistical and economic notions of rationality ­-- that is, colloquially, the ability to make good decisions, plans or inferences. As a result of this recent work, there has been a large degree of integration and cross-fertilisation among Al, machine-learning, statis­tics, control theory, neuroscience and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine transla­tion, legged locomotion and question-answering systems.

"As development in these areas and others moves from laboratory research to economically valuable tech­nologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research. There is now a broad consensus that AI research is progressing steadily and that its impact on society is likely to increase. The potential benefits are huge; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide. The eradication of disease and poverty is possible. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding poten­tial pitfalls. Success in creating AI would be the biggest event in human history.

"Unfortunately, it might also be the last, unless we learn how to avoid the risks. Used as a toolkit, AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers. While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. The concern is that AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours. Others believe that humans can command the rate of tech­nology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. Although I am well known as an optimist regarding the human race, I am not so sure."


www.delanceyplace.com

author: Stephen Hawking
title: Brief Answers to the Big Questions
publisher: Bantam Books
date: Copyright 2018 by Spacetime Publications Limited
pages: 184-187

All delanceyplace profits are donated to charity and support children’s literacy projects.

