Ian Hogarth is an investor in AI startups in Europe and the U.S. and the co-author of the annual State of AI Report. In an essay for the Financial Times, he makes the case for his colleagues and companies in the AI space to slow the global race toward AGI, or artificial general intelligence. “God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race,” he writes early in the piece. Hogarth urges companies to invest more in AI alignment research (the field focused on mitigating existential risk), to collaborate rather than compete, to prioritize safety, and to be open to some form of governmental oversight. Here, Hogarth pairs deep expertise with a frank and unexpectedly personal perspective. There are plenty of pieces on AI floating around right now, but don’t miss this one. It’s insightful, and also terrifying.
Those of us who are concerned see two paths to disaster. One harms specific groups of people and is already doing so. The other could rapidly affect all life on Earth.
The latter scenario was explored at length by Stuart Russell, a professor of computer science at the University of California, Berkeley. In one of his 2021 Reith Lectures, he gave the example of the UN asking an AGI to help deacidify the oceans. Knowing the risk of poorly specified objectives, the UN requires all by-products to be non-toxic and harmless to fish. In response, the AI system devises a self-multiplying catalyst that achieves every stated aim, but the ensuing chemical reaction consumes a quarter of the oxygen in the atmosphere. “We all die slowly and painfully,” Russell concluded. “If we put the wrong objective into a superintelligent machine, we create a conflict that we are bound to lose.”