Richard Price:
I wrote this piece last week about Elon Musk's new company, Neuralink. Musk's idea is that we need to merge with AI in order to ward off the threat of superintelligence. In this piece, I discuss this strategy, and argue that merging with AI doesn't make AI safer.
Any thoughts, comments, or questions would be much appreciated!
https://www.academia.edu/s/0c5db7bb12/does-neuralink-solve-the-control-problem?source=link
*
Hi Richard, and All,
Have folks read "Why This Robot Ethicist Trusts Technology More Than Humans"? In it, MIT's Kate Darling, who writes the rules of human-robot interaction, says an AI-enabled apocalypse should be the least of our concerns -
https://magenta.as/why-this-robot-ethicist-trusts-technology-more-than-humans-8969d0b5f0a0#.7zzkfn2m1 (See, too -
http://scott-macleod.blogspot.com/2017/03/lake-michigan-stanford-law-codex-court.html).
As a follow-up to my observation two days ago here - that "Does Neuralink Solve The Control Problem" may be a straw man - my hope, philosophically, is that the extended computer science / brain and cognitive science departments and coders (over decades) of the Stanford/MITs, the Oxbridges, the Univ Tokyo+, and South Korean universities (Seoul National Univ+) ... plus those working in each of all countries' official languages (how best to come into conversation with the best universities in the Chinese, Arabic, and Persian languages, for example, seems to me an important question here) ... and especially with their cultures (the Don Knuths - Knuth being the great Stanford Professor Emeritus of Computer Science) emerging from specific ethical traditions (e.g. Christianity, Buddhism), will be able to successfully inform the control problem concerning non-benevolent AI. "Identity" questions re anthropology (I'm an anthropologist) and language/coding questions seem central here too.
Perhaps CC World University and School -
http://worlduniversityandschool.org - which is like CC MIT OCW (currently in its 7 languages) -
https://ocw.mit.edu/courses/translated-courses/ - together with CC Wikipedia (in its 358 languages), will successfully help create ethical online CC universities in all countries' official languages. (World University and School is also Quaker-Friendly-informed, in part.)
Best,
Scott
scottmacleod.com
*
Thanks, Richard and all - this is interesting.
The most promising modeling of a fly brain or a mouse brain that I've seen comes from Stanford's/Google's research head Tom Dean. See this Stanford Neuroscience talk from October 2016 - https://www.youtube.com/watch?v=HazJ7LHihG8 - (and see the "brain" label in my blog - http://scott-macleod.blogspot.com/search/label/Brain - as well). With this modeling as a kind of AI, and given Google's and Stanford's digital and brain-science resources, I think it will be a long time before such an AI - a pragmatic example, useful for STEM research too - develops anything like a control problem. Until then, "Does Neuralink Solve The Control Problem" may be a straw man. I find it fascinating, though, that we may be able to model a fly's brain, and even a mouse's brain, by 2020 - something Tom Dean may have been calling for in this talk - and possibly address the threat of AI in new ways.
Best, Scott
scottmacleod.com
*
Monday, May 15, 2017
In response to a new thread from Michael Oghia and Richard Price:
Hi Michael and Richard,
Thanks, too, for your thoughtful responses.
I wonder, too, however, whether the development of Neuralink will be limited not only by bandwidth, but also by philosophers', computer scientists', and brain scientists' inability to understand consciousness itself - for example, to synthesize the 1st- and 3rd-person accounts of awareness (a central problem in the philosophy of mind for the 'New Mysterians,' in whose sphere I sometimes find myself) - that is, to understand the unity of consciousness. Until this question is understood, especially in terms of coding/AI, a neural link - and the control issue itself of a malevolent developing AI super-intelligence - will remain problematic. On this approach to consciousness and these philosophy-of-mind questions,
Richard writes:
"I do agree that it is reasonable to suppose that there would be a unified consciousness between the limbic system, the cortex, and the external computer that is connected to the neurons in the brain. We suppose that there is a unified consciousness between the limbic system and the cortex, so it seems reasonable to suppose that you if you add a third brain, and connect it in the right way, all three brain parts would join together in supporting a unified consciousness."
I'd suggest we need to understand how consciousness in humans / other species (how far along the phylogenetic tree?) is unified and works before any successful attempt at developing a neural link to an external-to-the-human-bodymind AI/supercomputer is possible (see, too: http://worlduniversity.wikia.com/wiki/Brain_and_Cognitive_Sciences#World_University_and_School_Links and http://scott-macleod.blogspot.com/search/label/consciousness).
I agree with your last sentence's conclusion, Richard, and would add that Neuralink can't deliver parity - not because of bandwidth, but because of unsolvable questions about human and other species' consciousness itself.
Best,
Scott
scottmacleod.com
*
...