Thursday, May 26, 2016

Red lionfish: Introductions - Aldis is Head of Virtual Reality & Game Design for IBM Research, and Ed is a neuroengineer and Associate Professor at the MIT Media Lab and MIT McGovern Institute. Dr. Gordon Pipa: thanks for a fascinating IBM CSIG talk this morning on “Cognitive Computing” and for your reply to my question about natural language processing and neuromorphic clusters of 1,000 neurons. Let's explore further developing room-scale VR visualization for precise brain research at the neuronal, nano and Street View levels - and with CC in all ~8,000 languages


Dear MIT Prof. Ed Boyden and Aldis Sipolins (Virtual Reality at IBM), 

I'm writing to introduce you to one another. Aldis is Head of Virtual Reality & Game Design for IBM Research (https://twitter.com/AldisSipolins), and Ed is a neuroengineer and Associate Professor at the MIT Media Lab and MIT McGovern Institute. Aldis and World University and School (which is MIT OCW-centric in its 7+ languages) have been emailing about exploring a partnership around developing online classrooms, and Aldis recently wrote me, "I'd be happy to start a dialogue with MIT if you have connections there you could introduce me to, as well" - hence this email introducing you both. 

Aldis, Ed speaks as an expert in this YouTube video - "Explorations with Bryan Johnson | Neurotechnology" - which gives an idea of one of his current focuses: https://www.youtube.com/watch?time_continue=5&v=qT67rpURAgE - and it's at the top of Ed's Twitter feed (https://twitter.com/eboyden3). Ed, Aldis recently gave a fascinating IBM CSIG talk, which I posted to my blog - http://scott-macleod.blogspot.com/2016/05/white-tailed-tropicbird-fascinating.html - and I've also given a recent IBM CSIG talk - http://scott-macleod.blogspot.com/2016/05/fertilisation-ibm-csig-talk-05-may.html - on WUaS's plans to develop a Universal Translator in all 7,943 languages, and for brain research too. 

World University and School, which is like CC Wikipedia (in ~300 languages) with best STEM CC OpenCourseWare, is developing along two tracks: free, accrediting online universities in all countries' main languages (offering free CC MIT OCW-centric - in 7 languages - and CC Yale OYC-centric BA/BS, Ph.D., Law, M.D. and I.B. degrees), on the one hand, and wiki schools in all 7,943+ languages for open teaching and learning, on the other. WUaS would like to develop its platform and information technologies by planning for all-languages' brain science research and classrooms, and plans further to develop in CC Wikidata/Wikibase/Wikipedia with artificial intelligence, machine learning and machine translation. 

Having just attended a Stanford talk last week by Jeff Dean, a Senior Fellow and head of engineering at Google, on their Google Brain project, artificial intelligence and language - "Deep Learning for Text Understanding and Information Retrieval" - WUaS would like to explore developing our all-languages' approach to A.I. while modeling neurons computationally in a direct, co-constituting relationship with biological neuronal processes. In his talk, Jeff Dean showed the computational model of a neuron, which Google Brain is iterating on only very gradually (among many key AI foci for Google). Further developing computational neuronal modeling in a co-constituting relationship with actual neurons doesn't appear to be a main focus for Google yet, but it could make A.I. and machine learning remarkably robust and generative in multiple ways. And this co-constituting relationship is where Virtual Reality could be an amazing bridge between computational neurons and biological neurons, especially in room-size classrooms (Aldis's focus). 
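As a back-of-the-envelope illustration of the computational model of a neuron Jeff Dean showed (my own minimal sketch in Python, not Google Brain's code), the standard artificial neuron is just a weighted sum of inputs passed through a nonlinearity - a very coarse abstraction of a biological neuron's firing rate:

```python
import math

def neuron(inputs, weights, bias):
    """Minimal computational model of a neuron: a weighted sum of
    inputs plus a bias, passed through a sigmoid nonlinearity (a
    rough stand-in for a biological neuron's firing rate)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Example: three inputs with hand-picked (illustrative) weights.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```

The gap between this abstraction and actual neurons is exactly where a co-constituting, bidirectional modeling relationship could add so much.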

Jeff Dean's research is fascinating for WUaS/me regarding Google's relatively open, excellent, free, best-value ecosystem (e.g. TensorFlow, its new chip, Street View/Maps/Earth, Google Translate in ~100 languages, G+ social media, YouTube, Google Cardboard, Android, Brain, Search ... and all their A.I./machine learning/machine translation work beyond Brain) and its licensing. Because of Google's relative openness licensing-wise, the possibility for Creative Commons-licensed WUaS to build upon this (and with CC MIT OCW in 7 languages), and WUaS's plans to be in all 7,943+ languages in Virtual Reality especially - for ethno-wiki-virtual-world-graphy in Google Street View/OpenSim with a time slider, specifically for all languages (http://scott-macleod.blogspot.com/search/label/ethno-wiki-virtual-world-graphy), and for developing a comparative actual-virtual Harbin Hot Springs ethnographic field site for my anthropological research and as a classroom too - WUaS sees much potential, planning-wise, in thinking in terms of Google's ecosystem. In developing World University's artificial intelligence room-size classrooms in something like Google group video Hangouts - in a developing, ever more engineering-centric virtual earth such as Google Street View with a time slider, with Maps/Earth and OpenSim/Second Life - WUaS would like to develop our Virtual Reality both building on MIT OCW and via courses like yours, Ed, e.g. http://ocw.mit.edu/courses/brain-and-cognitive-sciences/9-123-neurotechnology-in-action-fall-2014/

Furthermore, CC WUaS's brain research and courses will draw, for example, on http://ocw.mit.edu/courses/brain-and-cognitive-sciences/ - and again are planned in all 7,943+ languages. WUaS would also like to head toward developing online teaching-hospital classrooms (e.g. http://worlduniversity.wikia.com/wiki/Hospital) and toward clinically engaging brain-related medical issues digitally (for our online medical students in all ~204 countries' main languages, with a poorest-country focus, to aid these countries medically) - again in all ~204 countries' main languages and in VR (e.g. something like Google Cardboard) - as well as for brain research in the Google Street View ecosystem, with group-buildable OpenSim as WUaS's own project, as we grow. 

Brainwave headsets, such as Tan Le's or Brainfingers' - http://scott-macleod.blogspot.com/2013/01/flower-coral-brainfingers-hands-free.html - would be an integral part of WUaS's all-languages' brain research platform and our room-size classrooms - in both directions, brain neuron > computational neuron and vice versa - as brains learn, as WUaS learns about learning via these technologies, and as faculty members teach about this in classrooms. 
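As a minimal sketch of the brain neuron > computational neuron direction (entirely hypothetical - the function names and weights below are my own illustrations, not any headset vendor's API), a headset's band-power readings could feed a simple thresholded unit:

```python
import random  # stands in here for a real headset driver/SDK

def read_brainwave_sample():
    """Hypothetical stand-in for one headset sample: returns
    (alpha, beta, gamma) band-power estimates in [0, 1]."""
    return [random.random() for _ in range(3)]

def detect_engagement(sample, threshold=0.6):
    """Toy brain-neuron -> computational-neuron step: map band
    powers through fixed weights and 'fire' when the weighted
    sum crosses a threshold (weights are illustrative only)."""
    weights = [0.2, 0.5, 0.3]
    score = sum(x * w for x, w in zip(sample, weights))
    return score > threshold

for _ in range(5):
    print(detect_engagement(read_brainwave_sample()))
```

The reverse direction - computational neuron > brain neuron - would run through the classroom's displays and VR feedback rather than through code this simple.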

Given that Aldis is "exclusively interested in developing for room-scale virtual reality with the HTC Vive," and "given that the system costs about $2000 all told ($800 for the headset, $1200 for the computer to run it), [and that] it'll be a while before the hardware is cheap enough to scale," and as a way for me to contribute to furthering this dialogue: Aldis, a) in what ways do you envision brain research being taught in room-size classrooms, and b) in what ways will classroom dialogue itself (e.g. between MIT students in group video) emerge in the classrooms you envision - such as moving Google Hangouts themselves into a virtual earth informed by Google Street View/Maps/Earth (http://scott-macleod.blogspot.com/2015/03/parnassius-wuas-holding-conversation.html - https://www.youtube.com/watch?v=3PpOy64y9O4)? (By the way, Stanford's incoming president in June is neuroscience pioneer Marc Tessier-Lavigne). 

Ed, I'm including Jim and Dianne in this email since both are heads of IBM Global University and have been involved in Aldis's and my correspondence. (Jim Spohrer also has a BS from MIT in physics and a Ph.D. from Yale in CS). 

Looking forward to further communication about this. Thank you, Ed and Aldis. 

All the best, 
Scott


*
Thanks for your email, Aldis Sipolins.

*
Hi, Aldis and Ed, 

Thanks, Aldis. It will indeed be interesting to see how gaming technologies (see the MIT OCW-centric, beginning "Gaming - Digital" wiki subject at WUaS - http://worlduniversity.wikia.com/wiki/Gaming_-_Digital - as a kind of blueprint) emerge within a film-realistic, interactive, 3D, wiki, group-buildable, all-languages' virtual earth, with both realistic and fantastic avatars with brains (see too the MIT OCW-centric, beginning "Virtual Worlds" subject here - http://worlduniversity.wikia.com/wiki/Virtual_Worlds - as a preamble to CC WUaS moving into CC Wikidata and adding CC MIT OCW in 7 languages to Wikidata too), scaled to room-size classrooms in physical space - for virtual, multilingual teaching-hospital classrooms and also for remote digital clinical brain care therein. The degrees and ways in which the co-constituting relationships of brain neurons to/from computational neurons will inform versions and systems of AI, and gaming for learning, are enormous. 

How CC World University and School will teach CC MIT OCW about this in countries' main languages (e.g. by hiring MIT graduate students who are learning to become faculty members) - regarding the systems you design with "brain-gaming" information technologies and Watson, Aldis, as well as in a realistic virtual earth such as Google Street View / Google Brain systems with their word focus (Brainfingers is a brainwave headset I've tried which lets one pick letters from a keyboard with "brainwaves," without spoken words or gestures), and in new ways emerging from MIT - will create many new courses in many large languages, as well as much new virtual brain research potential.
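As a rough sketch of how such letter-picking can work (my own toy example - Brainfingers' actual signal processing is proprietary and not shown here), a scanning keyboard highlights letters one at a time and selects whichever letter is highlighted when the headset's trigger channel fires:

```python
import itertools
import random

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def brainwave_trigger():
    """Hypothetical stand-in for a headset channel crossing its
    selection threshold (simulated here with random noise)."""
    return random.random() > 0.9

def scan_and_select():
    """Toy scanning keyboard: letters are highlighted one at a
    time; the highlighted letter is picked when the trigger fires."""
    for letter in itertools.cycle(LETTERS):
        if brainwave_trigger():
            return letter

print(scan_and_select())
```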

Ed, where do you see brain research and virtual reality, and teaching about this, finding a generative future beyond the video you shared on Twitter - in ways, as Aldis wrote, that "capitalize on the cutting edge of VR tech that's hit the market in the past few weeks to make uniquely immersive, engaging lessons" - and particularly regarding university/company partnerships?

Best, Scott

Thanks for your email, Ed Boyden. 

*
Thanks for your email again, Aldis.


*
Hi, Aldis, Ed and Gordon (Jim and Dianne), 

Greetings, Gordon! Thanks for a fascinating IBM CSIG talk this morning on “Cognitive Computing” and for your reply to my question about natural language processing and neuromorphic clusters of 1,000 neurons. (Both IBM's Aldis and I have given recent IBM CSIG talks - http://cognitive-science.info/community/weekly-update/ - on VR and a universal translator). Ed Boyden here is a neuroengineer and MIT professor - I saw that you've worked there, Gordon (you're a neuroscientist with IBM) - and Aldis is the head of Virtual Reality at IBM. I'd like to introduce all of us to one another, Gordon, and invite you into this developing Virtual Reality / neural processing and engineering email conversation.

Aldis and Ed, in a parallel way, part of my long-term actual-virtual Harbin Hot Springs anthropological project is to digitally model the fluid environment of the Harbin warm pool - which is roughly 15 feet by 30 feet - for ongoing actual-virtual comparisons in all ~8,000 languages (film-realistic, interactive, 3D, wiki, buildable, with realistic and fantastic avatars) - at the Street View level, as well as at the neuronal and nano levels. Not only would there be aqueous parallels, digitally/computationally, between modeling and visualizing the brain at both the neuronal and nano levels/scales, but I'm also interested in research on the effects on the brain of visualizing in the Harbin warm pool - and on a variety of ground-level, specific mental/cognitive processes. And again in terms of AI, I'm further interested in the co-constituting relationship between computational neurons and biological neurons (see previous email).
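As a first, very coarse step toward modeling the warm pool digitally (a toy sketch of my own, under the assumptions of a 1-foot grid and a simple explicit diffusion scheme - not a production fluid solver), one could diffuse temperature across a 15 x 30 cell grid:

```python
def diffuse(grid, rate=0.1):
    """One explicit diffusion step: each interior cell moves
    toward the mean of its four neighbors."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbors = (grid[i-1][j] + grid[i+1][j] +
                         grid[i][j-1] + grid[i][j+1]) / 4.0
            new[i][j] = grid[i][j] + rate * (neighbors - grid[i][j])
    return new

pool = [[37.0] * 30 for _ in range(15)]  # ~37 C pool, 15 ft x 30 ft at 1-ft cells
pool[7][15] = 40.0                       # a hypothetical warm inflow point
pool = diffuse(pool)
print(round(pool[7][15], 2), round(pool[7][14], 2))
```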

In addition to Jeff Dean's recent Stanford talk on Google Brain - https://research.google.com/teams/brain/ - and words, which I mentioned attending earlier in this email thread, I just came across Google's "galaxy of images" with the Cloud Vision API - https://cloud.google.com/blog/big-data/2016/05/explore-the-galaxy-of-images-with-cloud-vision-api - (and Google Sky - https://www.google.com/sky/ - and Liquid Galaxy - http://www.google.com/earth/explore/showcase/liquidgalaxy.html) as digital instantiations of some possible adaptable fluid/neuronal/nano processing environments. How best to make these precise for engineering at all levels - and for 15-foot-by-15-foot room-scale visualizing of the brain from within? 
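To make the room-scale question concrete, here is my own back-of-the-envelope arithmetic (approximate figures): an adult brain is roughly 0.15 m across, so filling a 15-foot room puts the model at about 30x life size - which also shows why neuronal- and nano-level views need their own, much larger scale factors:

```python
BRAIN_WIDTH_M = 0.15   # approximate adult brain width
ROOM_WIDTH_M = 4.57    # 15 feet in meters
NEURON_SOMA_M = 10e-6  # ~10-micrometer neuron cell body

scale = ROOM_WIDTH_M / BRAIN_WIDTH_M
print(f"Room-scale factor: ~{scale:.0f}x life size")
# A neuron soma at room scale is still well under a millimeter:
print(f"Soma at room scale: {NEURON_SOMA_M * scale * 1000:.2f} mm")
```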

And Watson Virtual Reality, Google Brain environments, your work, Gordon, and your team's "Multiplexed Neural Recording Down a Single Optical Fiber via Optical Reflectometry with Capacitive Signal Enhancement" work, Ed, will likely be taught via MIT OpenCourseWare - and in multiple languages. (Did you happen to watch, Aldis, the very relevant IBM CSIG talk this morning by IBM's Dr. Gordon Pipa - http://cognitive-science.info/community/weekly-update/ - which touched on neuronal processing and cognitive computing?). Engaging brainwave headsets would be a very important part of this research and of precise VR neural/nano developments. 

The important aspect of the methodology I'm developing, which I'm calling ethno-wiki-virtual-world-graphy (here again are the slides from my UC Berkeley talk on this from last November - http://scott-macleod.blogspot.com/2015/11/waters-36-slides-from-uc-berkeley-talk.html), relating to your very precise engineering approach at the nano level, Ed, and to your precise VR engineering approach for brain research (and learning about the brain through gaming approaches), Aldis, is that World University and School would like to make very precise STEM building/engineering processes accessible to potentially all researchers, from a very wide variety of disciplines, in all languages. (Again, CC WUaS is like CC Wikipedia in ~300 languages with CC MIT OCW in 7 languages and CC Yale OYC, and plans to develop in CC Wikidata/Wikibase eventually in all 7,943+ languages).

I'm planning to apply for this MIT Media Lab junior faculty position - http://www.media.mit.edu/about/faculty-search (application due July 15) - regarding World University and School, which again is planned in all 7,943 languages as wiki schools, and which is developing major accrediting online CC universities (CC MIT OpenCourseWare-centric in 7 languages and CC Yale OYC-centric) in ~204 countries (for free online CC BS/BA, Ph.D., Law, M.D. and I.B. degrees). I think databases at the nano and neuronal levels in all these languages, which WUaS plans to facilitate, will be key to room-scale VR and nano-level modeling. 

Let's explore further developing room-scale VR visualization for precise brain research at the neuronal, nano and Street View levels, as this could "be an exciting area for collaboration."

Best, Scott




*



...


