Dear Sebastian, Markus, Peter, Denny, Wikidatans, All,
Much appreciated. Will reply inline with > :
"Dear Scott,
you are describing a Linked Data idea. If you claim to create 7.5 Billion Wikidata Q's, please do so. I couldn't even create 2.2 million for all German companies.
> CC-4 MIT OCW-centric wiki World University and School - which donated itself to Wikidata in 2015 for co-development, and received the WUaS Miraheze MediaWiki in 2017 as a consequence - would like to lay "claim to create 7.5 Billion Wikidata Q's" - thank you - and re Linked Open(?) Data.
The integration paradigm with Wikidata does not scale. It is basically saying to copy all data in one place. The free hosting is nice, though.
> Am wondering how to build and scale from Markus's "DBpedia EN has 32 people educated at the University of Leipzig, whereas Wikidata has 1217" - and with regard to World University and School's seeking to begin building toward 7.5 billion people as Wikidata Q items, with registered / matriculated students and "Universitians" - WUaS wiki learners and teachers, editors and volunteers - and focusing on 500-2000 undergraduates matriculating in English in 2020 for our 2nd undergraduate class, as we proceed with licensing (BPPE) and accreditation (WASC).
DBpedia can do an ad-hoc integration over Linked Data, i.e. you take out the data that you need for your use case from the cloud, stored decentrally.
> Sounds good (https://wiki.dbpedia.org/publications/blog & https://wiki.dbpedia.org/why-is-dbpedia-so-important)
With the Databus, we are building tools where you can do exactly this type of integration with a few command lines based on downloadable files client-side, but still using Linked Data principles as integration paradigm.
> Great (https://wiki.dbpedia.org/blog/dbpedia-databus-–-transforming-linked-data-networked-data-economy)
All that you need are: dump downloads, stable URIs, sameAs links and ontology mappings - the latter two not necessarily created by yourself, but by other users.
> Sounds do-able
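> To picture that client-side integration concretely, here is a minimal sketch of my own (not DBpedia Databus tooling itself), assuming the rdflib Python library and hypothetical dump file names: download the dumps you need, then join them locally via owl:sameAs links instead of issuing billions of HTTP requests:

```python
# Minimal sketch of client-side Linked Data integration: parse two locally
# downloaded dumps (file names are hypothetical) into one graph, then use
# owl:sameAs links to relate the identifiers across datasets.
from rdflib import Graph
from rdflib.namespace import OWL

merged = Graph()
merged.parse("dbpedia_people_dump.ttl", format="turtle")    # hypothetical dump
merged.parse("wikidata_people_dump.ttl", format="turtle")   # hypothetical dump

# Collect owl:sameAs pairs so statements about either URI can be read together.
same_as = {s: o for s, _, o in merged.triples((None, OWL.sameAs, None))}
print(f"{len(same_as)} sameAs links available for local integration")
```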
Regarding stable URIs: yes, if you want to follow the enumeration pattern like Wikidata's Q's that is fine.
> Good re "Regarding stable URIs" esp.
Regarding Google's involvement: LOD needs some improvements in tooling, like what the Databus is building; people don't want to do 7.5 billion HTTP requests, they want to just download all of it, or the part they need. Hence I would like to compare the integration power of Linked Data via DBpedia and the Databus to the Google Knowledge Graph. My concern with Wikidata is that each person thinks about their Q's and looks at tiny snippets of data, preventing the formation of a larger thing, i.e. a worldwide open knowledge graph, so it is majorly derailing and slowing the Linked Data movement. Google keeps the branding.
> Re Google and planning for 7.5 billion living people, each a Wikidata Q item: WUaS is seeking to begin this in Wikidata for 3 initial reasons:
1) registering / matriculating students for free, highest-quality universal education, with major online MIT OCW-centric universities (in its 5 languages) in ~200 countries' main languages, and wiki schools for open teaching in each of all 7,111 known living languages (plus wiki schools for all extinct languages as well) - where Wikipedia, by comparison, is in ~300 languages
2) 'Avatar bot electronic medical records' for all 7.5 billion people.
3) Exploring facilitating a single cryptocurrency with blockchain, distributed potentially via Universal Basic Income (UBI) experiments (partly to alleviate poverty) ... to all 7.5 billion people
In exploring coding for this in Wikidata, and re DBpedia re Linked Data:
Building out in a modular way - e.g. 10,000 patients overnight in a hospital over the course of a year, or Markus's example of 1217 Leipzig degrees - and eventually aggregating, seems feasible, but I'm not a database coder or expert ... WUaS would seek to build out, matriculating class by class, for all 5 degrees - Bachelor, Ph.D., Law, and M.D., as well as I.B. high school/similar (https://ocw.mit.edu/high-school/) - and also, as planned, in each of all ~200 countries as major online universities, and in each of their languages (and indeed in each of all 7,111 known living languages, e.g. for subjects in online clinical trials - https://wiki.worlduniversityandschool.org/wiki/Clinical_Trials_at_WUaS_(for_all_languages)). The 7.5 billion and the 2.2 million German companies' numbers may seem so large because the coding for scaling Wikidata & DBpedia for this isn't yet written in full (or begun).
Re UBI EXPERIMENTS for some of the 7.5 billion people initially, Google's Quantum Supremacy - https://news.bitcoin.com/what-googles-quantum-breakthrough-means-for-blockchain-cryptography/ - seems to be an advantage here in Google's stake in Wikidata, as does potentially planning for similar UBI experiments, brainstorming-wise, with a new cryptocurrency called Pi, created by some Stanford Ph.D.s - see https://scott-macleod.blogspot.com/2019/09/aquatic-plant-mine-pi-from-faqs-what-is.html ... Google's and Stanford's collaborations run deep knowledge-wise (and these are networks WUaS seeks to collaborate with further, in far-reaching ways, as well) ... and with regard to building up, modular-wise, to 7.5 billion people for Universal Basic Income experiments, also to distribute a SINGLE cryptocurrency backed by some number of ~200 countries' central banks ... to alleviate poverty, for one.
Seems like Google's Knowledge Graph could play an integral role here as well.
All the best, Scott
*
Kingsley, Sebastian, Markus, Houcemeddine and All,
Am also thinking of making World University and School's "front end" WUaS Miraheze MediaWiki interoperable with our "back end" Wikidata / Wikibase (a minimal API sketch follows below) -
7.5 billion people here -
Planning ~200 online MIT OCW-centric undergraduate colleges (thinking Oxbridge / MIT too) -
Planning ~200 online MIT OCW-centric Medical Schools in countries' official languages -
https://wiki.worlduniversityandschool.org/wiki/World_University_Medical_School - with degrees and graduates.
Could create datasets for all published Science Fiction writers and wiki add them -
re Wikidata_databases_and_ecosystems -
Accessible from -
the main /Subjects' page
And re the main SUBJECT TEMPLATE - https://wiki.worlduniversityandschool.org/wiki/SUBJECT_TEMPLATE - which may become editable too in a visual editor, but less editable than the above perhaps.
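As a minimal sketch of that front-end / back-end interoperability - my own illustration, not an existing WUaS integration - a MediaWiki front end (or any script) can pull structured data from Wikidata / Wikibase with the standard wbgetentities API call; Q937 (Albert Einstein) is used here only as a placeholder item ID:

```python
# Minimal sketch: fetch one item's label and sitelinks from the Wikidata
# (Wikibase) API, as a "front end" wiki or script might do. Q937 is a
# placeholder example item (Albert Einstein).
import requests

resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbgetentities",
        "ids": "Q937",
        "props": "labels|sitelinks",
        "languages": "en",
        "format": "json",
    },
)
entity = resp.json()["entities"]["Q937"]
print(entity["labels"]["en"]["value"])             # English label of the item
print(len(entity.get("sitelinks", {})), "sitelinks")
```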
Cheers, Scott
* *
Hi Denny, Markus, Wikidatans,
Markus wrote re DBpedia:
Of course, it [a methodological comparison between Wikidata and DBpedia] has to be fair, taking into account that DBpedia editions are based on a Wikipedia in one language (hence always missing entities that Wikidata has). For example, I recently computed the difference between the following two:
(1) The set of all pairs of ancestors that one can find by following (paths of) parent relations on EN DBpedia.
(2) The set of all pairs of ancestors that one can find by following (paths of) mother/father relations on Wikidata, but visiting only items that are present in English Wikipedia.
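To make the Wikidata half of that comparison concrete, here is a rough Python sketch - my own illustration, not Markus's code - which follows mother (P25) / father (P22) property paths on the public query.wikidata.org SPARQL endpoint, restricted to items that have an English Wikipedia article; Q937 (Albert Einstein) is just an arbitrary starting item:

```python
# Sketch of (2): ancestors reachable from one Wikidata item via mother (P25)
# or father (P22) statements, keeping only items with an English Wikipedia
# article. Q937 (Albert Einstein) is an arbitrary example starting point.
import requests

QUERY = """
SELECT ?ancestor ?ancestorLabel WHERE {
  wd:Q937 (wdt:P22|wdt:P25)+ ?ancestor .
  ?article schema:about ?ancestor ;
           schema:isPartOf <https://en.wikipedia.org/> .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
for row in resp.json()["results"]["bindings"]:
    print(row["ancestor"]["value"], row.get("ancestorLabel", {}).get("value", ""))
```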
Am curious how Wikidata could plan, in its ~300 languages, re DBpedia being "based on a Wikipedia in one language" - and, using your genetics example, Markus, for the genes of all 7.5 billion people on the planet (where every person could become a Wikidata Q item), and other species' individuals as well, and then for the genetics of pairs going back as far as the evidence people could add and query would take us. Am thinking in particular of another wiki project, the WikiTree project, which is developing potentially for all 7.5 billion people and related pairings -
The Free Family Tree, growing stronger since 2008 ...
Together we're growing an accurate single family tree using DNA and traditional genealogical sources.
(e.g. https://www.wikitree.com/wiki/MacLeod-2524)
Scott
I think bringing such DBpedia data into a Wikidata-like environment could be invaluable to medicine, as well as especially to the genetic engineering revolution taking place in universities around the world (and in so many other ways). Wikidata might benefit greatly from planning further for this.
And since Google has a stake in Wikidata per this thread, such genetic data could be vital even to projects like Alphabet's / Google's / Stanford University's / Duke University's Project Baseline, which has a genetics focus too - https://www.nytimes.com/2018/10/18/well/live/project-baseline-cancer-prevention-heart-disease-illness.html - and especially if and as this may be extended to all 7.5 billion people, each a Wikidata Q-item, and for genetic health questions.
Am not emailing the Wikidata list here, since I seem not to be able to post to it.
Cheers, Scott
- https://meta.wikimedia.org/wiki/User:Scott_WUaS -
*
Sep 21, 2019, 4:02 PM
Hi Kingsley, Sebastian, Markus, Wikidatans and All,
Since I can't seem to post to the Wikidata list, I'm emailing all of you directly (please see the email correspondence below thus far, if interested - and re the MIT OCW-centric wiki World University and School in Wikidata, and planning for all 7.5 billion people on the planet, each with a Wikidata Q-item).
Kingsley (or Sebastian), can you say a little more re this thread about how you might envision Google's stake playing a role coding-wise re Linked Open Data in Wikidata re DBPedia? Using Markus's example of
(1) The set of all pairs of ancestors that one can find by following (paths of) parent relations on EN DBpedia.
(2) The set of all pairs of ancestors that one can find by following (paths of) mother/father relations on Wikidata, but visiting only items that are present in English Wikipedia.
Am curious how Wikidata could plan, in its ~300 languages, re DBpedia being "based on a Wikipedia in one language" - and, using your genetics example, Markus, for the genes of all 7.5 billion people on the planet (where every person could become a Wikidata Q item), and other species' individuals as well, and then for the genetics of pairs going back as far as the evidence people could add and query would take us. Am thinking in particular of another wiki project, the WikiTree project, which is developing potentially for all 7.5 billion people and related pairings -
The Free Family Tree, growing stronger since 2008 ...
- Together we're growing an accurate single family tree using DNA and traditional genealogical sources.
Scott
I think bringing such DBpedia data into a Wikidata-like environment could be invaluable to medicine, as well as especially to the genetic engineering revolution taking place in universities around the world (and in so many other ways). Wikidata might benefit greatly from planning further for this.
And since Google has a stake in Wikidata per this thread, such genetic data could be vital even to projects like Alphabet's / Google's / Stanford University's / Duke University's Project Baseline, which has a genetics focus too - https://www.nytimes.com/2018/10/18/well/live/project-baseline-cancer-prevention-heart-disease-illness.html - and especially if and as this may be extended to all 7.5 billion people, each a Wikidata Q-item, and for genetic health questions.
Thanks.
Scott
* *
Tuesday, September 24, 2019
Hi Léa, Markus, Houcemeddine, Kingsley, Denny, Larry, Peter, Lydia, Katherine, Stephen, Gerard, Sebastian, Magnus and Wikidatans,
Re Wikidata strategy - https://meta.wikimedia.org/wiki/Wikidata/Strategy/2019 - and "[Wikidata] Google's stake in Wikidata and Wikipedia," I wonder what role Google TensorFlow, "an end-to-end open source platform for machine learning" (https://www.tensorflow.org/), is playing in these discussions? And especially in planning for medicine in a variety of ways and in a variety of languages, as well as genetics, in both Wikidata and DBpedia? (And since I curiously can't post to the Wikidata email list, or to Wikimedia/Wikipedia sites, I'm emailing you all here.) Google TensorFlow could be significant to both the 'strategy papers' as well as the 'language' questions. (See, too, my comments re these strategy questions in "[Wikidata] Google's stake in Wikidata and Wikipedia," esp. re planning for 7.5 billion people each a Wikidata Q-item, and in the 3 ways WUaS is planning for initially - https://scott-macleod.blogspot.com/2019/09/lake-mashu-japan-wikidata-googles-stake.html.)
What role is TensorFlow playing in these strategy questions? I don't recall seeing it mentioned in the papers I looked at a few weeks ago.
TensorFlow seems to be 'low-hanging fruit' :) with regard to strategizing.
Cheers, Scott
*
...