Friday, February 22, 2019

Whooper swan: Stanford Medicine - Harvard's Jukka-Pekka Onnela, BEIWE smartphone research - "Beiwe Research Platform" - and "digital phenotyping" - re the 6 billion + people who may have access to smartphones in some years * * * An example of a realistic virtual earth re the TIME SLIDER - "This #VR transports you to a classic '80s / '90s bedroom with playable versions of your favorite retro games" - yet with avatar bots, and with much more realism (and at the cellular and atomic levels) ... and in this #VR are playable versions of old video games; I'm seeking too a realistic virtual earth VR (for everything) to make the design of such old, and new, video games, as well as of computer chips themselves, possible. Am thinking Beiwe may be able to stream much data about individuals / body minds into this single realistic virtual earth, which might then inhabit such an '80s / '90s bedroom


Dear Jukka-Pekka,

Thanks for your far-reaching Stanford Medicine talk yesterday -

Psychiatry and Behavioral Sciences Grand Rounds: Smartphone-based Digital Phenotyping

https://events.stanford.edu/events/826/82650/ - and very nice to meet and talk with you afterward. 

Apart from medical anthropology questions with regard to diagnosing schizophrenia, I asked 1) how Beiwe might be used to source data for developing a realistic virtual earth, and about new approaches to representing data in psychiatry - for example, in the form of avatar bots. Afterward, up front, I also asked about 2) your possible or inherent models of consciousness, which might offer approaches to data bringing together, for example, first- and third-person accounts (where consciousness is a potentially intractable question, making it fascinating - see, e.g., David Chalmers's work - yet possibly worth having some hypothetical models of in "digital phenotyping" of the brain / mind ahead - if, for example, the unconscious is structured like a language, a possibly helpful starting place thanks to Lacan). Am very appreciative of Beiwe's smartphone approach to collecting movement data, as well as language data, with regard to both of my questions. 
In terms of developing a realistic virtual earth, I have in mind, conceptually, Google Streetview with TIME SLIDER / Maps / Earth / TensorFlow / Brain / Translate in 100+ languages and the rest of the Google ecosystem, together with avatar bots of species and individuals vis-a-vis avatars in Second Life, Sansar, and OpenSimulator, but realistic, not cartoon-esque, and also at the cellular and atomic levels. I also have in mind Stanford and Duke Medicine's / Google's Project Baseline (think A) physical samples from 10,000 people who come into clinics, B) translated into data, into C) a pathway to health ... and then, potentially as I see it, into avatar bots in some years, and even for tele-robotic surgery, for example). 

It was exciting too to hear your talk about digital phenotyping, especially re an unfolding actual-virtual Harbin Hot Springs ethnographic project I'm working on, and for a number of reasons. 

One main question I have in these regards is how we might best explore applying the methods / software and ideas in your talk re Beiwe to studying, and even creating, a realistic virtual Harbin Hot Springs / Earth for actual-virtual, physical-digital comparison and STEM research (and for ethno-wiki-virtual-world-graphy, which I'll explain below), and even with A) a robotics component (Lego robotics too), B) A.I., as well as C) future-of-education aspects. And how best to collaborate with Harvard, possibly even as an academic course (only in part)? This project would involve creative collaboration on a very large scale, across languages and countries, and via innovative digital platforms, including potentially re quantum computing in the future - https://scott-macleod.blogspot.com/2019/01/british-robin-quantum-computing-and-in.html. By realistic virtual Harbin / Earth, am thinking here again, conceptually, of Google Streetview with TIME SLIDER / Maps / Earth with TensorFlow re AI and machine learning, and at the cellular and atomic levels too (a ginormous data project), with realistic AVATAR BOTS of both individuals and species - and for 1-to-1 actual-virtual, physical-digital robotics design innovation too.

Re psychiatry and brain and cognitive science questions too (e.g. https://ocw.mit.edu/courses/brain-and-cognitive-sciences/), I was excited to hear mention of brain headsets, re people in studies viewing physical images and scientists then seeing those images in pictures of neural processing. Am interested in transmitting this - these images' data - into this realistic virtual earth (think again of Google Streetview with TIME SLIDER / Maps / Earth / TensorFlow / Brain / Translate with avatar bots of individuals and species, and brains - and eventually for tele-robotic surgery, for example). More under the 'brain' label in my blog - https://scott-macleod.blogspot.com/search/label/Brain. Am curious further how we might explore this with brain wave headsets, even such as https://scott-macleod.blogspot.com/2013/01/flower-coral-brainfingers-hands-free.html and https://scott-macleod.blogspot.com/2010/11/human-brain-music-brainwave-device-tan.html - and newly with smartphones, Beiwe and "digital phenotyping" - re the 6 billion + people who may have access to smartphones in some years. 

Re my physical-digital Harbin Hot Springs ethnographic project, I'm excited too to learn of fNIRS technologies, which measure the same blood-oxygen signal as fMRI but via a wearable cap - so, no claustrophobia - and which can be used in the real world (both technologies measure blood-oxygen levels in a specific part of the brain) - re the potential here too for smartphones to be developed to do something similar, and for beginning to measure / study aspects of the brain. 

Re this realistic virtual Harbin Hot Springs project - I'm also developing a new set of digital methods I'm calling 'ethno-wiki-virtual-world-graphy' - https://scott-macleod.blogspot.com/search/label/ethno-wiki-virtual-world-graphy - which is briefly characterized here: https://scott-macleod.blogspot.com/2018/11/pacific-yew-world-univ-and-sch-medicine.html - and I'd think that data from Beiwe experiments could inform such co-building of a realistic virtual earth in remarkable and rigorous STEM ways.

So, with regard to this beginning Google-centric realistic virtual earth, for example: visit the Harbin gate in Google Street View here ~ http://tinyurl.com/p62rpcg ~ https://twitter.com/HarbinBook ~ where you can "walk" down the road "4 miles" to Middletown, California, and "amble" around the streets there, if inclined. And add some photos or videos if you have them - re this new social science method I'm developing (again - https://scott-macleod.blogspot.com/search/label/ethno-wiki-virtual-world-graphy - think ethnography as interpretive social science practices, wiki (think fast adding / curating, as on Wikipedia), and co-building of a virtual world (as in Sansar / Second Life / OpenSimulator with avatar bots, but not cartoon-esque - rather, realistic)).

(Check out too the A.I. wiki page at World University and School, of which I'm the founder - https://wiki.worlduniversityandschool.org/wiki/Artificial_Intelligence - and some of the other Subjects here, including, for example, "brain and cognitive science" - https://wiki.worlduniversityandschool.org/wiki/Subjects - for the MIT OpenCourseWare courses that WUaS will eventually offer for credit online toward free-to-students' accrediting degrees, in these 5 languages to start - https://ocw.mit.edu/courses/translated-courses/ - but then in all ~200 countries' languages, and with a focus on developing wiki schools for open teaching and learning in all 7,097 living languages too. Here too is the main Nation States' wiki page - https://wiki.worlduniversityandschool.org/wiki/Nation_States - from which major online universities will emerge, in countries' main languages - https://wiki.worlduniversityandschool.org/wiki/Languages.) Eventually students and people around the world will be helping to co-build this realistic virtual earth, probably Google-centric. 

And all of this presents, I think, collaborative potential for Beiwe, including possibly with regard to clinical trials - https://wiki.worlduniversityandschool.org/wiki/Clinical_Trials_at_WUaS_(for_all_languages) - and planned with machine learning for each of all 7,097 living languages, and with Wikidata / Wikibase in (Wikipedia's) 300 languages as a "back end."

To close, re your Beiwe 'digital phenotyping' smartphone developments, and re psychiatry / psychology questions too, am interested in how to study, for example, eliciting loving bliss neurophysiology / brain biochemistry in the Harbin warm pool, as well as in bath tubs at home (for the relaxation response meditation in warm water - e.g. how to measure or study this with smartphones, re Herbert Benson MD's related work in Boston?), while visiting virtual Harbin, as STEM research - by wearing brain wave headsets / using smartphones, and transmitting data into a realistic virtual Harbin / Earth with avatar bots at the cellular and atomic levels, to study how loving bliss brain chemistry works, and for developing sophisticated modeling of brains with real-time, real-world data. What would you suggest to develop this further? So not mental illness, rather mental flourishing, and how to generate this. In what ways could digital phenotyping, smartphones and Beiwe help? 

Here, by the way, is the beginning of CC-4 MIT OCW-centric Finland World University and School - https://wiki.worlduniversityandschool.org/wiki/Finland - planned in the Finnish language, and for free-to-students' online degrees (Bachelor, Ph.D., Law and M.D., as well as I.B. high school). I noticed you went to an I.B. school in Wales. I spent one year of high school at Fettes College in Edinburgh, Scotland, which coincidentally later became I.B., and it's an interesting question for me how to develop online I.B. schools building on all of MIT OpenCourseWare's high school offerings - https://ocw.mit.edu/high-school/ - as well as incorporating even Lego robotics with the Scratch drag-and-drop programming language, toward engineering concentrations from the home.

Thank you so much for your fascinating talk, Jukka-Pekka, and I very much look forward to further communication about all of this with you. 




https://www.hsph.harvard.edu/jukka-pekka-onnela/
https://en.wikipedia.org/wiki/Beaivi


-- 
- Scott MacLeod - Founder, President & Professor

- World University and School

- 415 480 4577

- CC World University and School - like CC Wikipedia with best STEM-centric CC OpenCourseWare - is incorporated as a nonprofit university and school in California, and is a U.S. 501(c)(3) tax-exempt educational organization. 


*

Harvard's @JPONNELA on BEIWE Research Platform for SMARTPHONES https://youtu.be/aI8VAvqbvg0 per his far-reaching Whooper swan: Stanford Medicine talk on "Digital Phenotyping" re the 6 billion + people projected to have phone access https://events.stanford.edu/events/826/82650 - https://scott-macleod.blogspot.com/2019/02/whooper-swan-stanford-medicine-harvards.html  -

https://twitter.com/WorldUnivAndSch/status/1099424445083529216


* *

Jukka-Pekka,

This is what I have in mind by a realistic virtual earth re the TIME SLIDER idea (and am thinking in terms of Google Streetview with TIME SLIDER) ...

This transports you to a classic '80s / '90s bedroom with playable versions of your favorite retro games

https://twitter.com/rajat_shrimal/status/1098063044977008646 -

yet with realistic avatar bots, and with much more realism (and at the cellular and atomic levels) than in this #VR, in which are playable versions of old video games - and I'm seeking too to develop VR for a SINGLE realistic virtual earth, e.g. to make the design of such old, and new, video games, as well as of computer chips themselves, possible (and for everything, actually). Am thinking Beiwe may be able to stream much data about individuals / body minds into this single realistic virtual earth, for example in May 2020 and in October 2021 (and with video from YouTube too, for inhabiting such '80s / '90s bedrooms) ... as researchers' engagement with Beiwe grows.

Scott


* *
JP,

Thanks again for your Stanford talk and :

Beiwe Research Platform

https://www.hsph.harvard.edu/onnela-lab/beiwe-research-platform/

*

Beiwe for the Onnela Lab at Harvard

Helping health researchers collect unprecedented data

http://www.rocketfarmstudios.com/portfolio/beiwe-onnela-lab-harvard/


*

Jukka-Pekka "JP" Onnela "Smartphone Based Digital Phenotyping"


https://youtu.be/aI8VAvqbvg0

*

Beiwe - Wiki Main Page

Digital Phenotyping

"Digital phenotyping is the “moment-by-moment quantification of the individual-level human phenotype in situ using data from personal digital devices,” in particular smartphones. This is our definition of the concept and it highlights some of the important aspects of digital phenotyping, such as using existing personal devices rather than introducing additional instrumentation. To truly leverage moment-by-moment data collected in situ, in the wild, one must rely on the use of passive data, i.e., smartphone sensor and usage data"
http://wiki.beiwe.org/wiki/Main_Page
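To illustrate the passive-data idea above with a hypothetical sketch (this is not Beiwe's actual API - Beiwe exports raw sensor files, such as timestamped GPS fixes, which researchers then summarize with their own code): one of the simplest individual-level phenotype measures derivable from passive smartphone data is total distance traveled per day, computed here with the haversine formula over made-up GPS fixes.

```python
# Hypothetical sketch of digital-phenotyping feature extraction:
# turning raw passive GPS fixes into a daily-mobility summary.
# Function names and data are illustrative, not from Beiwe itself.
import math
from collections import defaultdict
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def daily_distance_km(samples):
    """samples: list of (iso_timestamp, lat, lon), assumed time-ordered.
    Returns {date: total km traveled}, summing distances between
    consecutive GPS fixes and attributing each leg to the day it ends."""
    totals = defaultdict(float)
    for (t1, la1, lo1), (t2, la2, lo2) in zip(samples, samples[1:]):
        day = datetime.fromisoformat(t2).date()
        totals[day] += haversine_km(la1, lo1, la2, lo2)
    return dict(totals)

# Toy example: three fixes on one day (out to Palo Alto and back)
fixes = [
    ("2019-02-22T09:00:00", 37.4275, -122.1697),
    ("2019-02-22T12:00:00", 37.4419, -122.1430),
    ("2019-02-22T18:00:00", 37.4275, -122.1697),
]
print(daily_distance_km(fixes))
```

Real digital-phenotyping pipelines go much further (handling gaps in sampling, imputing missing fixes, and deriving measures like home time or radius of gyration), but the shape is the same: raw in-situ sensor streams in, individual-level summaries out.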



*









...


