Almost Human by HC Denham (Speculative Fiction)

Writing: 3/5 Plot: 3/5 Characters: 2/5
A cautionary tale of a future in which AI-based humanoid robots may be slowly taking over the planet while the human population they are intended to serve remains blissfully unaware. All but one: the utterly competent, perfectly empathic, friendly humanoid robots just give Stella Mayfield the creeps.

While the plot had potential, I really didn’t enjoy this book. The characters were extremely stereotyped (and really, although Stella was a biologist with a PhD, she behaved like the stereotypical neurotic woman while the men behaved like stereotypical men, completely out of touch with their feelings, blah blah blah). The techie robot engineer was the biggest stereotype of all (and spoke some weird dialect that didn’t match anything I’m familiar with, and I live and work in Silicon Valley!). There is very little science and very little plot; instead, the book is padded with clichéd relationship filler that has little to do with the story. The writing is decent enough that I finished the book, but nothing special, and the ending was completely predictable. It could have made a decent short story with better characters and more philosophical depth.

Human Compatible — Artificial Intelligence and the Problem of Control by Stuart Russell (Nonfiction)

An extremely well-written, comprehensive overview of Artificial Intelligence (AI), with a focus on the very real risks it poses to the continued viability of the human race, and a proposal for how to move forward, reaping the benefits of AI without making us “seriously unhappy.”

AI pioneer Stuart Russell is a Professor of Computer Science at UC Berkeley, holds numerous awards, fellowships, and chairmanships, and co-authored a textbook on AI with Peter Norvig. This is a book written by that rare creature: someone who knows his subject thoroughly and can explain it. He does not shy away from the complexity of the topic but breaks it down and explains it simply, making it accessible to anyone willing to read and think. He includes short, clear examples from science, philosophy, history, and even science fiction, and references current and historical work from academia, research labs, and startups around the world.

The book is divided into three parts: the concept and definition of intelligence in humans and machines; a set of problems around the control of machines with superhuman intelligence; and a proposal for shifting our approach to AI to prevent these problems from occurring rather than trying to “stuff the genie back into the bottle” once it is too late.

Russell explains the potential problems of unleashing a massively intelligent machine on the world. An AI machine offers incredible scale: think of an entity that (with the proper sensors) can see the entire physical world at once, that can listen to and process all concurrent conversations at once, that can absorb all the documented history of the planet in a single hour. And we plan to control this entity via programming. With a superhuman intelligence, the programming would need to be at the objective level. And yet specifications, even for everyday human programmers, are incredibly hard to get right. Russell uses the example of giving the machine the task of countering the rapid acidification of the oceans resulting from higher carbon dioxide levels. The machine does this in record time, unfortunately depleting the atmosphere of oxygen in the process (and we all die). Remember the old stories about getting three wishes and always screwing it up? This would make those stories look trivial. Russell never uses scare tactics and does not wildly overstate his thesis; instead he uses practical examples and includes one tremendously simple chapter (“The Not-So-Great Debate”) that lists every argument people have made for why we don’t have to worry and rebuts each of them quickly.

His solution: design machines correctly now so we don’t have to try to control them later. He wants to build a “provably beneficial machine,” provably in the mathematical sense. His machine would operate on only three principles: the machine’s only objective is to maximize the realization of human preferences; the machine is initially uncertain as to what those preferences are; and the ultimate source of information about human preferences is human behavior. This is interesting: he wants to “steer away from the driving idea of 20th century technology” of machines “that optimize a given objective” and instead “develop an AI system that defers to humans and gradually aligns itself to user preferences and intentions.” An entire chapter is devoted to how we can program machines to determine what those human preferences are, particularly in light of competing preferences, potentially evil preferences, the cognitive limitations humans face in understanding their own preferences, behavioral economics, the nature of mind, definitions of altruism — you name it — all the fascinating areas of understanding human behavior become part of the problem. Which, while completely fascinating, strikes me as even harder than trying to work out exact specifications in the first place!
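To make the three principles concrete, here is a minimal toy sketch of the idea in Python. This is my own illustration, not Russell’s formalism: the candidate preferences, the noise model, and the confidence threshold are all invented for the example. The machine starts maximally uncertain about what the human wants (principle 2), treats observed human choices as evidence and updates a Bayesian posterior (principle 3), and only acts on behalf of the human — rather than optimizing a fixed objective — once it is confident, deferring to the human otherwise (principle 1).

```python
# Hypothetical candidate objectives the human might hold.
CANDIDATE_PREFERENCES = ["coffee", "tea"]

def update_posterior(prior, observed_choice, noise=0.1):
    """Bayesian update over possible preferences.

    Assumed observation model: the human picks their truly preferred
    item with probability 1 - noise, and any other item uniformly
    with the remaining probability.
    """
    posterior = {}
    for pref, p in prior.items():
        if observed_choice == pref:
            likelihood = 1 - noise
        else:
            likelihood = noise / (len(prior) - 1)
        posterior[pref] = p * likelihood
    total = sum(posterior.values())
    return {pref: p / total for pref, p in posterior.items()}

def act(posterior, threshold=0.9):
    """Act only when confident about the preference; otherwise defer."""
    best = max(posterior, key=posterior.get)
    return best if posterior[best] >= threshold else "ask the human"

# Principle 2: start maximally uncertain.
belief = {pref: 1 / len(CANDIDATE_PREFERENCES)
          for pref in CANDIDATE_PREFERENCES}
print(act(belief))  # not yet confident -> "ask the human"

# Principle 3: learn from observed human behavior.
for choice in ["tea", "tea", "tea"]:
    belief = update_posterior(belief, choice)
print(act(belief))  # confident now -> "tea"
```

The key design point the sketch captures is that deference falls out of uncertainty: while the posterior is flat, the rational move is to consult the human rather than optimize, which is exactly the behavior Russell argues a fixed-objective machine cannot exhibit.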

I was left with a knot in my gut about how fast AI is moving without much oversight, and how suddenly relevant these issues (which I had long relegated to comfortable musings in science fiction) have become. While I find his proposed solution intriguing, it is hard, hard, hard, and expecting random investors and startups to tackle harder design problems instead of racing toward monetization will be tricky. On the other hand, we move forward as a civilization by raising the issues and embedding them in our moral consciousness, and Russell has done an excellent job of clearly teeing up a huge number of costs, benefits, and issues, from technical to ethical. Highly recommended if you have any interest in the topic.