Date of publication: 2017-07-09 08:07
It's hard to say exactly how much investment to place in AI/futurism issues versus broader academic exploration, but it seems clear that on the margin, society as a whole pays too little attention to AI and other future risks.
“This is Part 1—Part 2 will go up next week” Hmm. I'm going to go out on a limb and say Part 2 will go out in three or more weeks. And now to actually go back and read the article 🙂
This post was perfectly timed, coming one day after Microsoft announced its augmented-reality device, which could be a milestone for humanity. The hype for technology is all around.
Case in point: “Humanity is in the existential danger zone, study confirms”; see http:///humanity-is-in-the-existential-danger-zone-study-confirms-86857
9) Another strong damping factor is that Moore's Law won't hold forever. In fact, it has already slowed down for half a decade and will probably come to a full stop around 2020, limited by quantum-mechanical effects. Non-silicon-based computers are currently objects of research and are still a long way from real-world applications. It's not looking much better on the software front: computers are literally as bad at human activities as humans are at long division (there is an interesting numerical analysis of this in what-if-xkcd). So while the best supercomputer can match a human brain in raw processing power, it has the pattern-recognition capability of a cockroach.
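The steady-doubling assumption behind Moore's Law that this comment questions can be made concrete with a small sketch (the function name and the sample numbers are illustrative, not from the original comment):

```python
# A minimal sketch of the exponential scaling behind Moore's Law:
# transistor counts double roughly every two years while the law holds.
def projected_count(start_count: int, start_year: int, year: int,
                    doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward assuming steady doubling."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Ten doubling periods compound into a 1024x increase:
print(projected_count(1_000_000, 2000, 2020))  # 1024000000.0
```

The point of the sketch is how quickly the compounding runs away, which is also why any slowdown in the doubling period acts as such a strong damping factor on extrapolated forecasts.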
Great article! It is interesting that change is exponential. But actually we've seen this before, in the Cambrian explosion. In hindsight it looks like an explosion in the number of new species in evolution, but it was the same buildup of change on change until it became the hockey-stick graph at the top of this post.
Nonetheless, I incline toward thinking that the transition from human-level AI to an AI significantly smarter than all of humanity combined would be somewhat gradual (requiring at least years if not decades) because the absolute scale of improvements needed would still be immense and would be limited by hardware capacity. But if hardware becomes many orders of magnitude more efficient than it is today, then things could indeed move more rapidly.
Well, we ourselves improve our intelligence by learning; now we are teaching machines to learn too, and discovering other ways of “hardware” improvement, like genetic modification. So a machine without human biological limits could learn many things a human being still can't.
William, I think you make an excellent point that we cannot assume an SAI will initially have motivation. But we can program that in, and we will do so. When I hit the “equals” sign, my calculator is motivated to give me the answer. And that's programming from the 1960s.
I agree that our achievements are very impressive when compared with our own past, or with the abilities of any other creature, as far as we know, in the universe. But I do think that we overestimate some of them, particularly in the realm of organics: biology, medicine, etc. So much of what we know in terms of medical advances, for example, is based on lucky things we've stumbled into and then strategically made the most of.
Language. The second technological revolution was also the most recent great biological advance on Earth: the development of language by 50 Kya. The development of language, watercraft, and weaving combined to allow early modern humans from Africa and SW Asia to master climates and locales throughout the world.
This shows how screwed we all are. In the end, federal governments or superpowers will gain more intelligent AI than others and will f us all in the ass. It will be a battle between countries for the greatest AI, and war will break out. We will be slaves to our countries until then.
If, upon further analysis, it looks like AGI safety would increase expected suffering, then the answer would be clear: Suffering reducers shouldn't contribute toward AGI safety and should worry somewhat about how their messages might incline others in that direction. However, I find it reasonably likely that suffering reducers will conclude that the benefits of AGI safety outweigh the risks. In that case, they would face a question of whether to push on AGI safety or on other projects that also seem valuable.