Three years ago I was telling y'all that there is no "singularity" that would suddenly make humanity obsolete. A couple of years after I wrote that article, I was alarmed to see science and engineering celebrities like Elon Musk, Bill Gates, and Stephen Hawking claiming that superintelligent artificial intelligences might just murder us all if we aren't really, super careful.
I remain unfazed. I find the singularity, that scenario of rapidly self-improving superintelligent AIs, as implausible today as I did three years ago. So it's nice to see this rebuttal from Edward Moore Geist, a "MacArthur Nuclear Security Fellow at Stanford University's Center for International Security and Cooperation (CISAC)." The guy has a lot more knowledge of AI history than I do, so his arguments sound pretty good to me. FWIW.