Thursday, December 3, 2015

Reproduction: Artificial Intelligence overview, Curious where brain wave headsets, e.g. see - "Flower Coral Brainfingers" blog post - and AI are "heading" re your presentation, and how they will potentially transform the AI conversation, If CC World University and School (which is like CC Wikipedia in 300 languages with CC MIT OpenCourseWare in 7 languages and CC Yale OYC) had 100 million dollars to hire researchers to work on these questions, WUaS would focus on World University and School's goals in all 7,938 languages and "ethno-wiki-virtual-world-graphy", Wikispecies and MediaWiki now have data access to Wikidata, WUaS has an invitation to make another presentation at FAHE in England in 2016, WUaS Artificial Intelligence research program


Hi Pat, Jim and Diane,

Pat, I was listening to your interesting talk and Artificial Intelligence overview 


ISSIP COI CSIG Pat Langley Dec 3, 2015, 10:31:12 AM


https://www.youtube.com/watch?v=OfZvQ6V47ZI
(http://www.slideshare.net/diannepatricia/progress-and-challenges-in-interactive-cognitive-systems from http://cognitive-science.info/community/weekly-update/), and pressed unmute on my phone at the very end to ask a question about brain wave headsets, AI, and your knowledge of new research in these regards, but my phone didn't unmute.

I'm curious where brain wave head sets, e.g. see -

http://scott-macleod.blogspot.com/2013/01/flower-coral-brainfingers-hands-free.html -

and AI are "heading" re your presentation, and how they will potentially transform the AI conversation.

(For open source brain–computer interfaces too, see - https://en.wikipedia.org/wiki/Comparison_of_consumer_brain–computer_interfaces).

Pat and Jim, if CC World University and School (which is like CC Wikipedia in 300 languages with CC MIT OpenCourseWare in 7 languages and CC Yale OYC) had 100 million dollars to hire researchers to work on these questions, WUaS would focus on World University and School's goals in all 7,938 languages and on "ethno-wiki-virtual-world-graphy" - http://scott-macleod.blogspot.com/search/label/ethno-wiki-virtual-world-graphy.

Thank you for an edifying talk,
Scott

https://twitter.com/WorldUnivAndSch

*

Hi,

Thanks for your emails.

When I used a brain wave headset with just 3 sensors on the forehead in a headband in Greece about 10 years ago, made by an inventor from Ohio who was also there for a visit, I (and any end user) could pick letters from a keyboard on a computer screen to spell words (e.g. "HELLO WORLD"). When I used it, I remember picking letters to spell a word of my choice, and this video - https://www.youtube.com/watch?v=L9OZQ7X590I - says "The technology can use facial muscles and brain waves to control a computer keyboard and a mouse" (see, too: https://www.youtube.com/watch?v=Vh6UbFdysiY - though I can't find a current YouTube video that shows someone actually picking letters). The end user also needed to be able to relax fully to use the device. This suggests to me the beginnings of high-level cognition as a brain-wave output channel, and therefore of "developing systems that reason, plan, and use language," because this letter picking involved all of the above - and that this could eventually interface with cognitive systems. How far we are from such interfacing, and in what ways, seems to be an opportunity for academic brain and AI researchers.
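As a rough illustration, the letter-picking interaction described above can be modeled as a scanning keyboard driven by a single attention/relaxation signal. This is my own sketch, not the Brainfingers software: the scanning keyboard, the threshold value, and the dwell count are all assumptions.

```python
# Minimal sketch of threshold-based letter selection: a highlight steps
# across an on-screen keyboard one key per frame, and when a single
# scalar signal (e.g. a forehead-sensor reading) stays above a
# threshold for `dwell` consecutive frames, the highlighted key is
# selected. (Hypothetical model, not the actual Brainfingers design.)

KEYBOARD = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def pick_letters(signal_frames, threshold=0.7, dwell=3):
    """Return the letters selected over a sequence of signal readings."""
    selected = []
    streak = 0
    for i, level in enumerate(signal_frames):
        key = KEYBOARD[i % len(KEYBOARD)]  # highlight advances each frame
        if level >= threshold:
            streak += 1
            if streak == dwell:            # held long enough: select key
                selected.append(key)
                streak = 0
        else:
            streak = 0                     # relaxing resets the dwell timer
    return "".join(selected)
```

Even this toy loop shows why the end user needed to relax fully: any sustained above-threshold reading, intentional or not, selects whatever key happens to be highlighted.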

What I have in mind by ethno-wiki-virtual-world-graphy - http://scott-macleod.blogspot.com/search/label/ethno-wiki-virtual-world-graphy - is a film-realistic, interactive, group-buildable, 3D virtual earth - think Google Street View / Maps / Earth with OpenSim and Second Life - in all languages, and for all of us to add, do and model our STEM research there, also in all languages - including brain-linguistics research. I think some avatars these days are already kinds of digital robots, and they will become very much more sophisticated.

What's interesting about this realistic virtual earth approach is that Google has much of the infrastructure to integrate all of this, including Google Translate and the new TensorFlow AI software (and World University and School - WUaS - is Google-verified and has Google Classroom, for example). I also see this developing realistic virtual earth as a classroom for CC WUaS's free CC MIT OCW-centric degree courses, moving toward free bachelor, Ph.D., Law, M.D. and I.B. degrees in most countries' main languages. So WUaS will have lots of high-achieving and creative students to help explore related research questions digitally and build out such technologies.

It would be great to talk further about this in person some time.

Best wishes,
Scott


*
Dear Larry and sporadic Universitians, 

Very nice to talk with you just now, Larry.

Three further things came to mind to add to our phone conversation: 

1. Wikispecies and MediaWiki now have data access to Wikidata.

This is momentous because WUaS MediaWiki will now be able to build in Wikidata, though we haven't yet heard back from Romaine Wiki or the other core group of Wikidata developers whom I hope are beginning to build this. Perhaps when I talk in person with the Wikimedia Foundation's Legal Director on Monday, we can further this communication. 
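By way of illustration, here is a minimal sketch of how a WUaS tool might read item labels from Wikidata over its public API. The `wbgetentities` module is a real Wikidata API action; the helper names and the use of Q1 as an example item are my own assumptions for the sketch.

```python
# Sketch: building and (optionally) fetching a Wikidata wbgetentities
# request for one item's labels. Helper names are illustrative.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def entity_request_url(qid, languages="en"):
    """Build a wbgetentities request URL for one item's labels."""
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels",
        "languages": languages,
        "format": "json",
    }
    return WIKIDATA_API + "?" + urlencode(params)

def fetch_label(qid, language="en"):
    """Fetch an item's label in one language (requires network access)."""
    with urlopen(entity_request_url(qid, language)) as resp:
        data = json.load(resp)
    return data["entities"][qid]["labels"][language]["value"]

if __name__ == "__main__":
    # Q1 is Wikidata's item for the universe.
    print(entity_request_url("Q1"))
```

The same data now reachable by Wikispecies and MediaWiki through arbitrary access could, in principle, feed WUaS MediaWiki pages in this way, in any of Wikidata's languages.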

2. WUaS has an invitation to make another presentation at the Friends' Association of Higher Education at Woodbrooke Quaker Centre in England in June of 2016, but FAHE also got back to me saying they'd like me to focus on standards, and internationally, I think, rather than giving an overview of WUaS in a new way, as in my presentation this past June. This may also involve exploring accrediting additionally with a Quaker organization.  

3. Concerning picking letters from a computer-screen keyboard using the Brainfingers brain wave headset WITHOUT language or gestures: I did this about 10 years ago, which is significant for developing Artificial Intelligence/Cognitive Systems and for how WUaS would build an all-languages research project for this re ethno-wiki-virtual-world-graphy. How close this may be, in multiple ways, to informing how we build cognitive systems with AI is something for researchers and WUaS students to continue to focus on. 

Sincerely, 
Scott
-- 
- Scott MacLeod - Founder & President
http://worlduniversityandschool.org


*
Pat, Jim and Dianne,

Interesting ... when choosing to spell a word which then gets transposed by the brain-wave headset into on-screen keyboard action, I assume some high-level cognition is involved. Language use and choice with brain-wave headsets, for me, entails high-level cognition as well as the involvement of a system that reasons, plans and, of course, uses language, i.e. the brain. Here's the OCW courseware -
http://ocw.mit.edu/courses/brain-and-cognitive-sciences/ - which I hope AI learners, both matriculated and open-ended at WUaS, will engage to further explore these questions and hypotheses.

I think, too, that the hypothesis that one could use a brain-wave headset (such as Brainfingers) to play with DragonBox Algebra, as a cognitive system, is testable, given my experiences with picking letters on a digital keyboard about 10 years ago without words or gestures, see -
http://scott-macleod.blogspot.com/2015/12/colorado-hairstreak-butterfly-expanding.html
- but neither I nor WUaS has access to these technologies or approaches to set up such an experiment easily.

Scott


*
Pat and Jim,

Thanks again for your interesting ISSIP Cognitive Systems talk - https://www.linkedin.com/groups/6729452/6729452-6078213294865342468 - on Thursday, which I just re-heard. I added the new URL for the YouTube video with your slides to my blog entry about this -
http://scott-macleod.blogspot.com/2015/12/reproduction-artificial-intelligence.html - with related ideas. Also, in terms of the later slides in your talk, Pat, about intelligent cognitive actors in gaming and similar, please see the links in my blog about avatar agency -
http://scott-macleod.blogspot.com/search/label/avatar%20agency - and particularly vis-a-vis a virtual Richard Rorty (Stanford philosophy professor) -
http://scott-macleod.blogspot.com/2008/09/web-avatar-agency-talking-richard-rorty.html
(http://scott-macleod.blogspot.com/search/label/Richard%20Rorty). And it's such avatars - 1) "real" (as in OpenSim and Second Life), 2) fantastical, and 3) as virtual robots - that I have in mind for this planned STEM-centric virtual earth for ethno-wiki-virtual-world-graphy.

Scott

*
December 5, 2015

I found this Stanford University brain wave headset talk from 2008 - 

Lecture 15: Demonstration of Brain Computer Interface Using the Emotiv EPOC




*




...

