Daniel, Gerard, Lydia, and Wikidatans,
I have long been interested in a Creative Commons-licensed Universal Translator for all 7,929+ languages, one significantly informed by CC Wikidata/Wikipedia content across its 288 languages, and extensible, especially as machine learning, voice, and related technologies develop. This Wikidata conversation is GREAT because aliases of properties can be coded for the unique multiplicity of connotations/denotations, almost infinitely, in any given word (or even meme, as a replicating cultural unit), and then be recombined. In what ways can this discussion further anticipate a large translator for all of Wikipedia's 288 languages 5, 10, 20+ years ahead, I wonder, and with developing CC artificial intelligence? (Interspecies communication and related coding systems? Genetics' link to language use? Brain neuron firing patterns and language use? Reflexivity and subjectivity questions of consciousness down the phylogenetic tree, etc.? ...each a Q-item, and all vis-a-vis CC AI?)
In what ways would a far-reaching Wikidata translator facilitate much greater usage of Wikipedia and its sister projects?