End-of-week "fun" with #SPARQL & #tidyverse: Checking whether #CoVID had/has an impact on the life expectancy of (famous/relevant) people listed in #Wikidata.
https://codeberg.org/gittaca/life-expectancy-at-wikidata#life-expectancy-at-wikidata

(Solved thanks to the great Mastodon crew!)
HELP needed ASAP in #SPARQL & #Wikidata for a research project, as I haven't used either in years and fail miserably. Please boost, and thanks for your help.
The query should list all German federal ministries and the federal agencies (or otherwise supervised bodies) associated with them. The list should make visible which federal ministry is "in charge" of each entity.
Any ideas?
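A rough starting point might look like the untested sketch below. It makes assumptions about the modeling: ministries found as instances of (subclasses of) the generic class "ministry" (Q192350) with country Germany (P17 = Q183), and supervision expressed as "parent organization" (P749) — so it will also catch state-level ministries and miss agencies linked via other properties.

```sparql
# Untested sketch: German ministries and the agencies whose parent organization (P749) they are.
# Assumptions: Q192350 = ministry, Q183 = Germany; supervision modeled as P749. Refine as needed.
SELECT ?ministry ?ministryLabel ?agency ?agencyLabel WHERE {
  ?ministry wdt:P31/wdt:P279* wd:Q192350 ;   # instance of (a subclass of) ministry
            wdt:P17 wd:Q183 .                # country: Germany
  ?agency wdt:P749 ?ministry .               # agency's parent organization is the ministry
  SERVICE wikibase:label { bd:serviceParam wikibase:language "de,en". }
}
ORDER BY ?ministryLabel ?agencyLabel
```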
I've blogged about a new after-work project: "Jumelages - Find twin towns with Wikidata and OpenStreetMap (and vibe coding)"
Ever visited somewhere and wondered what the local politics are? Well, over at my place you can select a map location to ask Wikidata! https://joeldn.srht.site/site/post/local-government-political-party-finder.html #map #mistral #maplibre #wikidata #sparql @mapsmania
I didn't manage to focus on preparing a workshop I'll be teaching on Friday, so instead I wrote a #SPARQL query to map the global distribution of periodicals published in languages of the Eastern Mediterranean (Arabic, Ottoman Turkish, Greek, Ladino, Coptic …).
https://query-chest.toolforge.org/redirect/rdblWlDg4OqEQKa2yg2aSgeU8G2soiG4QaweemeeGWn
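The gist of such a query, as a simplified, untested sketch — the Q-IDs for the languages and for "periodical", and the choice of P291/P625 for the map, are my assumptions, not the linked query:

```sparql
#defaultView:Map
# Untested sketch: map periodicals (Q1002697) by place of publication (P291),
# filtered by language of work (P407). Language Q-IDs are from memory; verify them.
SELECT ?periodical ?periodicalLabel ?place ?coord WHERE {
  VALUES ?lang { wd:Q13955 wd:Q36730 wd:Q36510 wd:Q36196 wd:Q36155 }  # Arabic, Ottoman Turkish, Greek, Ladino, Coptic
  ?periodical wdt:P31/wdt:P279* wd:Q1002697 ;  # instance of (a subclass of) periodical
              wdt:P407 ?lang ;                 # language of work or name
              wdt:P291 ?place .                # place of publication
  ?place wdt:P625 ?coord .                     # coordinate location, drives the map view
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
```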
Last April, @fabien_gandon highlighted how @w3c #LinkedData standards (such as RDF, SHACL, and #SPARQL) enable knowledge extraction, sharing, and machine learning across domains like #robotics, culture, medicine, and chemistry. These standards support interoperability, agent collaboration, and distributed #AI. He concluded his talk with a call to address AI's impact on user attention and to encourage ethical dialogue within the W3C community.
Watch 'LLM & Linked Data': https://youtu.be/CVFhPYTVBlI
The church districts of the #Nordkirche on a #Karte (map), based on the data in @wikidata
Here is the #SPARQL query behind it: https://w.wiki/Dm8f
At the Academy of Sciences and Literature in #Mainz, work was done on #girlsday, in collaboration with #NFDI4Culture, on a web application called "In meiner Nähe/Around me".
Wikidata entries for museums, schools and parks were queried for #Georeferenzierung (georeferencing) around a map marker. Along the way, the girls got a look at #wikidata #SPARQL queries, #html, #CSS and #javascript. And above all, they had lots of fun!
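A minimal version of such an "around me" query, assuming WDQS's wikibase:around service; the center point (near Mainz), radius, and museum class Q33506 are illustrative placeholders:

```sparql
# Untested sketch: museums (Q33506) within 5 km of a point near Mainz.
SELECT ?item ?itemLabel ?coord WHERE {
  SERVICE wikibase:around {
    ?item wdt:P625 ?coord .
    bd:serviceParam wikibase:center "Point(8.27 50.00)"^^geo:wktLiteral ;
                    wikibase:radius "5" .      # kilometres
  }
  ?item wdt:P31/wdt:P279* wd:Q33506 .          # instance of (a subclass of) museum
  SERVICE wikibase:label { bd:serviceParam wikibase:language "de,en". }
}
```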
New blog post! I've started taking regular snapshots of Library of Congress linked open data (as part of a much broader data rescue and monitoring project) and took the opportunity to finally get to grips with #SPARQL, #RDF and all that comes along with it. I've started small though...
On fooling around with triples
https://erambler.co.uk/blog/on-fooling-around-with-triples/
Here are my sessions between now and the end (and a little beyond) of my last #Wikimédia residency at a #URFIST. https://fr.wikipedia.org/wiki/Projet:Wikifier_la_science/Nice #Wikipedia #SPARQL #OpenRefine #Wikidata #WikiCommons #LicenceLibre
Looking for some #SPARQL help, as I'm a newbie to this and my mental model of how triple stores work is incomplete.
I have two versions of the same set of terms in two Turtle files, and I want to load them up into a triple store and then compare them to see what changed: terms updated, added or deleted.
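One common approach, sketched below: load each file into its own named graph, then ask for triples present in one graph but not the other. The graph IRIs are placeholders, and note that blank nodes will make a naive triple diff noisy.

```sparql
# Untested sketch: triples in the new version (<urn:g:v2>) missing from the old (<urn:g:v1>),
# i.e. additions; swap the two graph IRIs to get deletions. An "updated" term then shows up
# as one deletion plus one addition on the same subject.
SELECT ?s ?p ?o WHERE {
  GRAPH <urn:g:v2> { ?s ?p ?o }
  FILTER NOT EXISTS { GRAPH <urn:g:v1> { ?s ?p ?o } }
}
```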
I'd be interested in the #SPARQL query too.
It's fascinating how well the SPINACH tool works in many cases. It generates #SPARQL queries for #Wikidata from natural-language questions using an #LLM.
Try it for instance with a question like “Give me all current members of the Bundestag”. https://spinach.genie.stanford.edu/
Here’s the paper on the technology behind it: https://arxiv.org/abs/2407.11417
One of the more convincing use cases for an LLM, if you ask me.
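For comparison, a hand-written query for that example question might look roughly like this — not SPINACH's actual output, and both Q1939555 ("member of the German Bundestag") and the no-end-time filter are my assumptions:

```sparql
# Untested sketch: people whose "position held" (P39) statement for member of the
# German Bundestag (Q1939555, verify) carries no end time (P582) qualifier yet.
SELECT DISTINCT ?mp ?mpLabel WHERE {
  ?mp p:P39 ?stmt .
  ?stmt ps:P39 wd:Q1939555 .
  FILTER NOT EXISTS { ?stmt pq:P582 ?end . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en,de". }
}
```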
Now that's an interesting article. From the example #SPARQL query for #Wikidata items archived in the Staatsarchiv Leipzig, I learned that my home town once had a royal court.
And if you adapt the query to the Staatsarchiv #Dresden, you find out, among other things, that a company called "Vereinigte Windturbinenwerke" (United Wind Turbine Works) was founded in Reick in 1911 (!).
→ #Energiewende (energy transition) in the imperial era.
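A sketch of the underlying pattern, under my assumptions (P485 = "archives at", P131 = located in the administrative territorial entity, Q2079 = Leipzig), generalized here to any archive in Leipzig rather than the specific institution:

```sparql
# Untested sketch: items whose records are archived (P485) at an institution
# located (P131) in Leipzig (Q2079). Change the location to switch archives.
SELECT ?item ?itemLabel ?archiveLabel WHERE {
  ?item wdt:P485 ?archive .
  ?archive wdt:P131 wd:Q2079 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "de,en". }
}
```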
It took me quite a while to understand that the ominous `wikibase:directClaim` in #Wikidata is basically just string replacement: `?dc wikibase:directClaim ?prop .` stands for `FILTER( STRSTARTS( STR(?prop), "http://www.wikidata.org/prop/direct/" ) ) BIND( IRI( REPLACE( STR(?prop), "prop/direct/", "entity/" ) ) AS ?dc )` in plain #SPARQL (https://w.wiki/CSc4).
Is there any documentation on these Wikidata-specific predicates? They are important when queries must be adapted to #qlever.
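Spelled out as two untested sketches against WDQS (run separately; Q42 is just an arbitrary example item):

```sparql
# Variant 1, with the magic predicate: pair each direct-claim predicate
# used on Douglas Adams (Q42) with its property entity.
SELECT ?propEntity ?directProp WHERE {
  wd:Q42 ?directProp ?o .
  ?propEntity wikibase:directClaim ?directProp .
}

# Variant 2, the portable rewrite via string manipulation:
SELECT ?propEntity ?directProp WHERE {
  wd:Q42 ?directProp ?o .
  FILTER( STRSTARTS( STR(?directProp), "http://www.wikidata.org/prop/direct/" ) )
  BIND( IRI( REPLACE( STR(?directProp), "prop/direct/", "entity/" ) ) AS ?propEntity )
}
```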
Anyone have experiences they're willing to share in managing a public-facing #sparql endpoint? I'm concerned specifically about unbounded compute.
#rdf #knowledgegraph users: do you know how prevalent the OGC #geosparql ontology is, and the #sparql engines that support it?
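For anyone unfamiliar, GeoSPARQL adds geometry terms and spatial filter functions on top of SPARQL. A minimal sketch, with a made-up bounding polygon:

```sparql
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
# Untested sketch: features whose WKT geometry lies within an illustrative polygon.
# Needs an engine that implements the GeoSPARQL functions (the subject of the question).
SELECT ?feature WHERE {
  ?feature geo:hasGeometry/geo:asWKT ?wkt .
  FILTER( geof:sfWithin(?wkt,
    "POLYGON((7.0 50.0, 7.2 50.0, 7.2 50.2, 7.0 50.2, 7.0 50.0))"^^geo:wktLiteral) )
}
```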
We introduce the new blog series "Who's using OC?" with a post dedicated to @dblp, a reference for bibliographic information on major computer science publications, which directly ingests the open citation data released by OpenCitations. Using the linkage provided by #OMID, dblp users can perform citation analyses via its #SPARQL query service. More at: https://opencitations.hypotheses.org/3677
Thank you #dblp for reusing our data!