If no one builds it, you're never born.
Not building AGI is a risky thing…
Wilbur Wright, co-inventor of the airplane, died in 1912 at the age of 45 from typhoid fever, because antibiotics did not yet exist. Just imagine how our world would be without the medical and technological advancements of the past century! Actually, you wouldn’t have to imagine, because you wouldn’t exist!
Inventing technology and advancing science is how we overcame our challenges and managed to support 8 billion human souls on this planet, escaping the Malthusian trap of famines, diseases, and conflicts.
Automation of knowledge acquisition and thought is the next step, and the best tool humanity can build. The risk of not building AGI is that we won’t be prepared for the challenges the world throws at us, some of which our own existence itself creates.
A(G)I safety is important, and here are my thoughts about it.
1. Scaling up current techniques is not going to lead to AGI. It will lead to powerful AI systems, but these will be supported by a lot of engineered scaffolding. In these cases, making AI work usefully is almost exactly the same as making AI safe. Since scaling has already proven to be useful, we are naturally on the path to exploiting it to the maximum, and we should.
2. We will eventually figure out how to build and scale AI that uses principles of human intelligence. These systems will learn causal structure and reliable world models that can be used for counterfactual thinking. This will lead to much more capable AI systems and AGI. But for these kinds of systems, increasing capability can also come with increasing controllability. (See my blog on questionable beliefs behind existential risk scenarios).
Where powerful AI and AGI are going to help us
Earthquakes, wildfires, hurricanes, floods: Despite all our technological advances, we are still at the mercy of nature when it comes to these disasters. Where is the army of robots digging people out of collapsed buildings? Where are the ones managing fires and helping people? Having AGI means we will have robots that help us in these situations, saving lives and speeding recovery.
Health: Antibiotic resistance, pandemics, … we don’t know what challenges we will face in the future, and it would be great to have powerful tools when we do. More generally, a much better understanding of how our bodies and minds work, and cures for diseases.
Flora and Fauna: Instead of conforming to the requirements of dumb machines, we might finally be able to do more organic multi-crop agriculture, reduce the amount of pesticides we use, and abolish factory farming. Intelligent machines will free us from the economic necessity of these practices.
Climate change, Energy, Materials, Education, Transportation, Space … examples like these abound in each of those areas. Things that we accomplished crudely with dumb machinery will be done with more finesse by intelligent machines, and that will be important for humanity to thrive at scale.
Balancing the risks…
Of course, the title is a play on the Yudkowsky and Soares book “If Anyone Builds It, Everyone Dies”. While I disagree with many things that the book asserts, their work has brought attention to the important problem of AI safety. Smart people working on AI safety is a good thing, and it is important that the work continue, even if the specific x-risk scenarios in the book can be taken apart. Amid all the talk about the risks of AGI, it is important to realize that not building AGI has risks as well.

Well articulated, and I completely agree with Point 1; can't be more succinct than this.
However, for point 2, there seems to be a presumption that AGI must follow the principles of human intelligence. Is that a given? Doesn't history show us instances where we have not followed the equivalents possessed by humans or, to generalise, by nature, and instead found completely new ways to build something (say, the first working prototype of the aeroplane by the Wright brothers)?
While I am all for human-intelligence-aligned AGI, proposals from Dr Sutton such as the Era of Experience/OAK point to a possible path to AGI that need not follow the principles of human intelligence.
What is your take on such a possibility?