Hi Juan-Pedro, Richard, Adrian, and All,
Brainfingers.com is the most relevant brainwave headset to your observations that I've come across (and the only one I've tried), and it focuses on generating language on a screen without words or gestures. Brainfingers (http://brainfingers.com) is a fairly recent, innovative human-computer interface: a headband device that lets an end user pick letters from an on-screen keyboard with only her 'mind' and spell words letter by letter ("hello world," the example Junker and others have used, or any words), without spoken, written, or typed language or gestures. To function well, it requires the end user to relax, to become chill. I was able to spell out a word or two using Brainfingers in 2007 (possibly 2004), as well as play some simple games; one first has to be able to relax. According to the inventor, Andrew Junker (personal communication, 2007), when he brought the device to the British physicist and cosmologist Stephen Hawking, who has ALS/Lou Gehrig's disease and is quite disabled and agitated by his condition, Hawking couldn't relax enough to make the Brainfingers device function properly; Hawking uses other technologies. Junker also tested it at the US National Institutes of Health (NIH) (http://scott-macleod.blogspot.com/2010/12/deep-star-field-ethnographic-field-work.html - and there are a few more posts in my blog about Brainfingers and other brainwave devices). Although the potential for research is remarkable, I don't think the research monies combined with clear scientific outcomes have emerged yet for Brainfingers, for example. I hope World University and School can perhaps explore this further, possibly even in collaboration with Stanford/MIT. 
From what I've seen, the MIT Media Lab's "Dreadlocks" brainwave headset and Tan Le's brainwave headset don't focus on language, that is, on what they enable an end user to do on a computer screen without words or gestures, or in other ways.
Further research needed,
Ocean sunfish: Co-generating such an AI super-intelligence? World University and School seeks to develop and research questions of AI-informed super-intelligence in 8,444 languages (all), with multiple participants potentially in subsequent studies, and with the MIT Media Lab's "Dreadlocks" brainwave headset, Tan Le's brainwave headset, and Brainfingers.com. In what ways could such studies be carried out in collaboration with researchers at Stanford University? http://scott-macleod.blogspot.com/2017/05/ocean-sunfish-co-generating-such-ai.html
Iaraka River leaf chameleon: "'Does Neuralink Solve The Control Problem' may be a straw man" philosophically; "Why This Robot Ethicist Trusts Technology More Than Humans: MIT’s Kate Darling ..."; modeling of a fly brain or a mouse brain; extended computer science / brain and cognitive science departments/coders (over decades) of the Stanford/MITs, the Oxbridges, the Univ Tokyo+. Conversation with the best universities in the Chinese, Arabic, and Persian languages, for example, seems to me to be important here: http://scott-macleod.blogspot.com/2017/05/iaraka-river-leaf-chameleon-does.html
How can we best explore coding for consciousness/awareness from within a realistic virtual earth, something like Google Street View / Maps / Earth with a time slider ... and in Google/Stanford's Tom Dean's modeling of fly and mouse brains? ... And even, furthermore, regarding virtual world avatar robotics ... http://scott-macleod.blogspot.com/2017/05/costas-hummingbird-avatar-robotics.html ?