http://info216.wiki.uib.no/index.php?title=Special:NewPages&feed=atom&hideredirs=1&limit=50&offset=&namespace=0&username=&tagfilter=&size-mode=max&size=0info216 - New pages [en]2024-03-29T15:02:46ZFrom info216MediaWiki 1.39.6http://info216.wiki.uib.no/Solution_examples_2021Solution examples 20212024-03-13T15:36:27Z<p>Sinoa: Created page with "<pre> *** Examples related to the "OWL in TTL" task from 2021: A country has one or more regions. @prefix : <http://ex.org/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . :Country rdfs:subClassOf [ a owl:Restriction ; owl:onProperty :hasRegion ; owl:someValuesFrom :Region ] . A city is located in exactly one country. @prefix : <http://ex.org/> . @prefix owl: <http://www.w3.org..."</p>
<hr />
<div><pre><br />
<br />
*** Examples related to the "OWL in TTL" task from 2021:<br />
<br />
<br />
A country has one or more regions.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
<br />
:Country rdfs:subClassOf [ a owl:Restriction ;<br />
owl:onProperty :hasRegion ;<br />
owl:someValuesFrom :Region ] .<br />
<br />
<br />
<br />
A city is located in exactly one country.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .<br />
<br />
:City rdfs:subClassOf [ a owl:Restriction ;<br />
owl:cardinality "1"^^xsd:nonNegativeInteger ;<br />
owl:onProperty :inCountry ] .<br />
<br />
<br />
<br />
A capital city is a city.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
<br />
:CapitalCity rdfs:subClassOf :City .<br />
<br />
<br />
<br />
A country has only one capital.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .<br />
<br />
:Country rdfs:subClassOf [ a owl:Restriction ;<br />
owl:onClass :CapitalCity ;<br />
owl:onProperty [ owl:inverseOf :inCountry ] ;<br />
owl:qualifiedCardinality "1"^^xsd:nonNegativeInteger ] .<br />
<br />
<br />
<br />
A division is either a country or a region.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .<br />
<br />
:Division owl:equivalentClass [ owl:unionOf ( :Country :Region ) ] .<br />
<br />
<br />
<br />
Anything that is adjacent to something is a division.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
<br />
:adjacentTo rdfs:domain :Division ;<br />
rdfs:range :Division .<br />
<br />
<br />
<br />
A division cannot be adjacent to itself.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
<br />
:adjacentTo a owl:IrreflexiveProperty .<br />
<br />
<br />
<br />
A city is located in at most one region.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .<br />
<br />
:City rdfs:subClassOf [ a owl:Restriction ;<br />
owl:maxCardinality "1"^^xsd:nonNegativeInteger ;<br />
owl:onProperty :inRegion ] .<br />
<br />
<br />
<br />
A capital region is a region that has a capital city.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .<br />
<br />
:CapitalRegion owl:equivalentClass [ owl:intersectionOf ( :Region [ a owl:Restriction ;<br />
owl:onProperty [ owl:inverseOf :inRegion ] ;<br />
owl:someValuesFrom :CapitalCity ] ) ] .<br />
<br />
<br />
<br />
If a city is in a region, it must be in the country of that region.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .<br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .<br />
<br />
:inCountry owl:propertyChainAxiom ( :inRegion [ owl:inverseOf :hasRegion ] ) .<br />
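Seen operationally, a property chain infers a new link by composing two existing relations. A plain-Python sketch of that composition (the city/region/country data here is made up for illustration):

```python
# Hypothetical facts: city -> region, and country -> its regions.
in_region = {"Oslo": "OsloRegion"}
has_region = {"Norway": ["OsloRegion", "Vestland"]}

# Invert hasRegion so we can go region -> country, then compose:
# inRegion followed by the inverse of hasRegion yields inCountry.
region_to_country = {r: c for c, rs in has_region.items() for r in rs}
in_country = {city: region_to_country[region]
              for city, region in in_region.items()
              if region in region_to_country}
print(in_country)
```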
<br />
<br />
<br />
An island state is a country that is next to no (other) country.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .<br />
<br />
:IslandState owl:equivalentClass [ owl:intersectionOf ( :Country [ owl:complementOf [ a owl:Restriction ;<br />
owl:onProperty :adjacentTo ;<br />
owl:someValuesFrom :Country ] ] ) ] .<br />
<br />
<br />
<br />
A country with only one city and at most one region is a city state.<br />
<br />
<br />
@prefix : <http://ex.org/> .<br />
@prefix owl: <http://www.w3.org/2002/07/owl#> .<br />
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .<br />
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .<br />
<br />
:CityState owl:equivalentClass [ owl:intersectionOf ( :Country [ a owl:Restriction ;<br />
owl:onClass :City ;<br />
owl:onProperty [ owl:inverseOf :inCountry ] ;<br />
owl:qualifiedCardinality "1"^^xsd:nonNegativeInteger ] [ a owl:Restriction ;<br />
owl:maxQualifiedCardinality "1"^^xsd:nonNegativeInteger ;<br />
owl:onClass :Region ;<br />
owl:onProperty :hasRegion ] ) ] .<br />
<br />
<br />
<br />
*** Examples related to the "SPARQL" task from 2021:<br />
<br />
<br />
<br />
PREFIX : <http://ex.org/><br />
<br />
INSERT DATA {<br />
:Norway :hasRegion :OsloRegion, :Vestland, :Trondelag, :Rogaland, :Viken .<br />
:OsloRegion :hasCity :Oslo .<br />
}<br />
<br />
<br />
<br />
PREFIX : <http://ex.org/><br />
<br />
INSERT DATA {<br />
:Norway :citiesByPopulation ( :Oslo :Bergen :Trondheim :Stavanger :Drammen ) .<br />
}<br />
<br />
<br />
<br />
PREFIX : <http://ex.org/><br />
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#><br />
<br />
SELECT ?city WHERE {<br />
:Norway (:citiesByPopulation / rdf:rest* / rdf:first) ?city .<br />
}<br />
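The path works because a Turtle collection such as :citiesByPopulation is stored as a chain of rdf:first/rdf:rest pairs ending in rdf:nil, which rdf:rest*/rdf:first walks. A stdlib sketch of that cons-pair structure:

```python
NIL = "rdf:nil"

def as_cons_chain(items):
    """Encode a Python list the way Turtle encodes an RDF collection."""
    node = NIL
    for item in reversed(items):
        node = {"rdf:first": item, "rdf:rest": node}
    return node

def members(node):
    """Follow rdf:rest links (rdf:rest*) and collect rdf:first values."""
    out = []
    while node != NIL:
        out.append(node["rdf:first"])
        node = node["rdf:rest"]
    return out

chain = as_cons_chain([":Oslo", ":Bergen", ":Trondheim", ":Stavanger", ":Drammen"])
print(members(chain))
```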
<br />
<br />
<br />
PREFIX : <http://ex.org/><br />
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#><br />
<br />
INSERT {<br />
:Norway :hasCity ?city .<br />
} WHERE {<br />
:Norway (:citiesByPopulation / rdf:rest* / rdf:first) ?city .<br />
}<br />
<br />
<br />
<br />
PREFIX : <http://ex.org/><br />
<br />
INSERT DATA {<br />
:Norway :hasCity :Os, :Voss, :Sandnes, :Fredrikstad, :Sarpsborg .<br />
<br />
:OsloRegion :regionalCity :Oslo .<br />
:Vestland :regionalCity :Bergen, :Os, :Voss .<br />
:Trondelag :regionalCity :Trondheim .<br />
:Rogaland :regionalCity :Stavanger, :Sandnes .<br />
:Viken :regionalCity :Drammen, :Fredrikstad, :Sarpsborg .<br />
<br />
:Oslo :hasPopulation 580000 .<br />
:Bergen :hasPopulation 213585 .<br />
:Os :hasPopulation 14046 .<br />
:Voss :hasPopulation 6043 .<br />
:Trondheim :hasPopulation 147139 .<br />
:Stavanger :hasPopulation 121610 .<br />
:Drammen :hasPopulation 90722 .<br />
:Fredrikstad :hasPopulation 72760 .<br />
:Sandnes :hasPopulation 63032 .<br />
:Sarpsborg :hasPopulation 52159 .<br />
}<br />
<br />
<br />
<br />
PREFIX : <http://ex.org/><br />
<br />
SELECT ?region (SUM(?pop) AS ?cityPop) WHERE {<br />
?region :regionalCity / :hasPopulation ?pop .<br />
}<br />
GROUP BY ?region<br />
ORDER BY DESC(?cityPop)<br />
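For reference, the grouping and summing this query performs can be reproduced in plain Python over the data inserted above (regional-city assignments and populations copied from the INSERT DATA block):

```python
regional_city = {
    "OsloRegion": ["Oslo"],
    "Vestland": ["Bergen", "Os", "Voss"],
    "Trondelag": ["Trondheim"],
    "Rogaland": ["Stavanger", "Sandnes"],
    "Viken": ["Drammen", "Fredrikstad", "Sarpsborg"],
}
population = {
    "Oslo": 580000, "Bergen": 213585, "Os": 14046, "Voss": 6043,
    "Trondheim": 147139, "Stavanger": 121610, "Drammen": 90722,
    "Fredrikstad": 72760, "Sandnes": 63032, "Sarpsborg": 52159,
}

# GROUP BY ?region with SUM(?pop), then ORDER BY DESC(?cityPop)
city_pop = {region: sum(population[c] for c in cities)
            for region, cities in regional_city.items()}
for region, total in sorted(city_pop.items(), key=lambda kv: -kv[1]):
    print(region, total)
```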
<br />
</pre></div>Sinoahttp://info216.wiki.uib.no/Solution_examples_2022Solution examples 20222024-03-13T15:35:45Z<p>Sinoa: Created page with "<pre> *** Examples related to the Part 2 - Programming task from 2022: The examples only show the triples that should be in the solution, not the actual programming code, so they are not sufficient as answers on their own. See Robin's suggestions for how to program this. Question 26: :Agent rdf:type owl:Class . :Author rdfs:subClassOf :Agent . :Organization rdfs:subClassOf :Agent . :Country rdfs:subClassOf :Agent . :Publication rd..."</p>
<hr />
<div><pre><br />
<br />
<br />
*** Examples related to the Part 2 - Programming task from 2022:<br />
<br />
The examples only show the triples that should be in the solution, not the actual programming code, so they are not sufficient as answers on their own. See Robin's suggestions for how to program this.<br />
<br />
Question 26:<br />
<br />
:Agent rdf:type owl:Class .<br />
:Author rdfs:subClassOf :Agent .<br />
:Organization rdfs:subClassOf :Agent .<br />
:Country rdfs:subClassOf :Agent .<br />
:Publication rdf:type owl:Class .<br />
:Paper rdfs:subClassOf :Publication .<br />
<br />
:name rdfs:domain :Agent ;<br />
rdfs:range xsd:string .<br />
:affiliation rdfs:domain :Author ;<br />
rdfs:range :Organization .<br />
:country rdfs:domain :Author ;<br />
rdfs:range :Country .<br />
:title rdfs:domain :Publication ;<br />
rdfs:range xsd:string .<br />
:author rdfs:domain :Publication ;<br />
rdfs:range :Author .<br />
:publication rdfs:domain :Paper ;<br />
rdfs:range :Publication .<br />
:publisher rdfs:domain :Publication ;<br />
rdfs:range :Organization .<br />
:year rdfs:domain :Publication ;<br />
rdfs:range xsd:int .<br />
<br />
<br />
Question 27:<br />
<br />
:DBpedia_A_nucleus a :Paper ;<br />
:author :Christian_Bizer,<br />
:Soren_Auer ;<br />
:publication :The_semantic_web_book ;<br />
:publisher :Springer_Nature ;<br />
:title "DBpedia A nucleus" ;<br />
:year 2007 .<br />
<br />
:Linked_data_The_story_so_far a :Paper ;<br />
:author :Christian_Bizer,<br />
:Tim_Berners-Lee ;<br />
:publication :Semantic_services_interoperability_and_web_applications ;<br />
:publisher :IGI_Global ;<br />
:title "Linked data The story so far" ;<br />
:year 2011 .<br />
<br />
:The_semantic_web a :Paper ;<br />
:author :James_Hendler,<br />
:Tim_Berners-Lee ;<br />
:publication :Scientific_American ;<br />
:publisher :Springer_Nature ;<br />
:title "The semantic web" ;<br />
:year 2001 .<br />
<br />
:James_Hendler a :Author ;<br />
:affiliation :Rensselaer_Polytechnic_Institute ;<br />
:country :United_States ;<br />
:name "James Hendler" .<br />
<br />
:Soren_Auer a :Author ;<br />
:affiliation :Leibniz_University_Hannover ;<br />
:country :Germany ;<br />
:name "Soren Auer" .<br />
<br />
:IGI_Global a :Organization ;<br />
:name "IGI Global" .<br />
<br />
:Leibniz_University_Hannover a :Organization ;<br />
:name "Leibniz University Hannover" .<br />
<br />
:Massachusetts_Institute_of_Technology a :Organization ;<br />
:name "Massachusetts Institute of Technology" .<br />
<br />
:Rensselaer_Polytechnic_Institute a :Organization ;<br />
:name "Rensselaer Polytechnic Institute" .<br />
<br />
:University_of_Mannheim a :Organization ;<br />
:name "University of Mannheim" .<br />
<br />
:Scientific_American a :Publication ;<br />
:title "Scientific American" .<br />
<br />
:Semantic_services_interoperability_and_web_applications a :Publication ;<br />
:title "Semantic services interoperability and web applications" .<br />
<br />
:The_semantic_web_book a :Publication ;<br />
:title "The semantic web book" .<br />
<br />
:Christian_Bizer a :Author ;<br />
:affiliation :University_of_Mannheim ;<br />
:country :Germany ;<br />
:name "Christian Bizer" .<br />
<br />
:Tim_Berners-Lee a :Author ;<br />
:affiliation :Massachusetts_Institute_of_Technology ;<br />
:country :United_States ;<br />
:name "Tim Berners-Lee" .<br />
<br />
:Germany a :Country ;<br />
:name "Germany" .<br />
<br />
:United_States a :Country ;<br />
:name "United States" .<br />
<br />
:Springer_Nature a :Organization ;<br />
:name "Springer Nature" .<br />
<br />
<br />
*** Examples related to the Part 4 - Restrictions and reasoning task from 2022:<br />
<br />
Question 40:<br />
:Organization rdfs:subClassOf :Agent .<br />
<br />
Question 41:<br />
:affiliation rdfs:domain :Author .<br />
<br />
Question 42:<br />
:affiliation rdfs:range :Organization .<br />
<br />
Question 43:<br />
:publication rdf:type owl:FunctionalProperty .<br />
<br />
Question 44:<br />
:year rdf:type owl:FunctionalProperty .<br />
<br />
Question 45:<br />
:publication rdf:type owl:IrreflexiveProperty .<br />
<br />
Question 46:<br />
:publication rdf:type owl:TransitiveProperty .<br />
<br />
Question 47:<br />
:name rdf:type owl:InverseFunctionalProperty .<br />
<br />
Question 48:<br />
:Author owl:disjointWith :Organization .<br />
<br />
Question 49:<br />
:title rdfs:range xsd:string .<br />
<br />
<br />
<br />
Question 50:<br />
:Paper rdfs:subClassOf [<br />
a owl:Restriction ;<br />
owl:someValuesFrom :Author ;<br />
owl:onProperty :author<br />
] .<br />
<br />
Question 51:<br />
:Paper rdfs:subClassOf [<br />
a owl:Restriction ;<br />
owl:cardinality 1 ;<br />
owl:onProperty :author<br />
] .<br />
<br />
Question 52:<br />
:year rdfs:range [<br />
a rdfs:Datatype ;<br />
owl:onDatatype xsd:int ;<br />
owl:withRestrictions ( <br />
[xsd:minInclusive 1900]<br />
[xsd:maxInclusive 2050]<br />
)<br />
] .<br />
<br />
Question 53:<br />
[<br />
rdf:type owl:AllDisjointClasses ;<br />
owl:members (<br />
:Author :Organization :Country <br />
)<br />
] .<br />
<br />
Question 54:<br />
:Publisher owl:equivalentClass [ a owl:Class ;<br />
owl:oneOf ( :ACM :IEEE_CS :Springer_Nature :IGI_Global )<br />
] .<br />
<br />
<br />
*** Examples related to Part 6 - SPARQL from 2022:<br />
<br />
Question 62:<br />
SELECT ?title WHERE {<br />
?paper rdf:type :Paper ;<br />
:title ?title .<br />
}<br />
<br />
Question 63:<br />
SELECT DISTINCT ?name WHERE {<br />
?publ rdf:type :Publication ;<br />
:publisher / :name ?name<br />
}<br />
ORDER BY ?name<br />
<br />
Question 64:<br />
SELECT ?author ?title WHERE {<br />
?author ^:name / ^:author / :title ?title<br />
}<br />
<br />
Question 65:<br />
SELECT ?country (COUNT(?paper) AS ?number) WHERE {<br />
?paper rdf:type :Paper ;<br />
:author / :country / :name ?country<br />
}<br />
GROUP BY ?country<br />
<br />
Question 66:<br />
SELECT ?author (COUNT(?paper) AS ?number) WHERE {<br />
?author ^:name / ^:author ?paper<br />
}<br />
GROUP BY ?author<br />
<br />
Question 67:<br />
SELECT ?name (MIN(?year) AS ?min) (MAX(?year) AS ?max) WHERE {<br />
?author rdf:type :Author ;<br />
:name ?name ;<br />
^:author / :year ?year <br />
}<br />
GROUP BY ?name<br />
<br />
Question 68:<br />
SELECT ?name WHERE {<br />
?author rdf:type :Author ;<br />
:name ?name<br />
MINUS {<br />
?author :country / :name "Germany"<br />
}<br />
}<br />
<br />
Question 69:<br />
ASK {<br />
"James Hendler" ^:name / ^:author ?paper1 ;<br />
^:name / ^:author ?paper2 <br />
FILTER (?paper1 != ?paper2)<br />
}<br />
<br />
Question 70:<br />
CONSTRUCT {<br />
?author rdf:type :Author ;<br />
:name ?name ;<br />
:affiliation ?affiliation ;<br />
:country ?country<br />
} WHERE {<br />
?author rdf:type :Author ;<br />
:name ?name ;<br />
:affiliation ?affiliation ;<br />
:country ?country ;<br />
^:author / :author / :name "Christian Bizer" <br />
}<br />
<br />
Question 71:<br />
CONSTRUCT {<br />
?author rdf:type :Author ;<br />
:name ?name ;<br />
:affiliation ?affiliation ;<br />
:country ?country .<br />
?paper rdf:type :Paper ;<br />
:author ?author ;<br />
:title ?title<br />
} WHERE {<br />
?author rdf:type :Author ;<br />
:name ?name ;<br />
:affiliation ?affiliation ;<br />
:country ?country ;<br />
^:author / :author / :name "Christian Bizer" ;<br />
^:author ?paper .<br />
?paper :title ?title<br />
}<br />
<br />
<br />
Question 72:<br />
INSERT {<br />
?org rdf:type :Institution<br />
} WHERE {<br />
?author rdf:type :Author ;<br />
:affiliation ?org<br />
}<br />
<br />
Question 73:<br />
INSERT {<br />
?org :locatedIn ?country<br />
} WHERE {<br />
?author rdf:type :Author ;<br />
:affiliation ?org ;<br />
:country ?country<br />
}<br />
<br />
Question 74:<br />
INSERT {<br />
?paper :producedBy ?org<br />
} WHERE {<br />
?paper :author / :affiliation ?org<br />
}<br />
<br />
Question 75:<br />
INSERT {<br />
?paper :producedIn ?country<br />
} WHERE {<br />
?paper :author / :country ?country<br />
}<br />
<br />
Question 76:<br />
DELETE {<br />
?author :country ?country<br />
} WHERE {<br />
?author :country ?country<br />
}<br />
<br />
Question 77:<br />
DELETE {<br />
?paper :year ?year<br />
} WHERE {<br />
?paper :year ?year ;<br />
:year ?earlier .<br />
FILTER(?year > ?earlier)<br />
}<br />
<br />
<br />
*** Example related to Task 78 - Error detection from 2022:<br />
<br />
The errors are:<br />
<br />
Graph() without assignment<br />
BASE not a Namespace<br />
Parse 'owl' format<br />
No tuples in add<br />
No Literal()<br />
No owlrl import<br />
No closure<br />
DELETE without WHERE<br />
<br />
<br />
</pre></div>Sinoahttp://info216.wiki.uib.no/Solution_examples_2023Solution examples 20232024-03-13T15:31:56Z<p>Sinoa: Created page with "<pre> *** SHACL examples - includes answers to the exam questions @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix dc: <http://purl.org/dc/elements/1.1/> . @prefix foaf: <http://xmlns.com/foaf/0.1/> . @prefix sh: <http://www.w3.org/ns/shacl#> . @prefix : <http://info216.uib.no/movies/> . :Di..."</p>
<hr />
<div><pre><br />
<br />
*** SHACL examples - includes answers to the exam questions<br />
<br />
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . <br />
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . <br />
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> . <br />
@prefix owl: <http://www.w3.org/2002/07/owl#> . <br />
@prefix dc: <http://purl.org/dc/elements/1.1/> .<br />
@prefix foaf: <http://xmlns.com/foaf/0.1/> .<br />
@prefix sh: <http://www.w3.org/ns/shacl#> .<br />
@prefix : <http://info216.uib.no/movies/> .<br />
<br />
<br />
:DirectorShape a sh:NodeShape ;<br />
sh:targetClass :Director ;<br />
# A Director must have exactly one foaf:name of type xsd:string.<br />
sh:property [<br />
sh:path foaf:name ;<br />
sh:minCount 1 ;<br />
sh:maxCount 1 ;<br />
sh:datatype xsd:string<br />
] ;<br />
# A Director must be the director of at least one Movie.<br />
sh:property [<br />
sh:path :director_of ;<br />
sh:minCount 1 ;<br />
sh:class :Movie<br />
] .<br />
<br />
:ActorShape a sh:NodeShape ;<br />
sh:targetClass :Actor ;<br />
sh:property [<br />
sh:path foaf:name ;<br />
sh:minCount 1 ;<br />
sh:maxCount 1 ;<br />
sh:datatype xsd:string<br />
] ;<br />
# If an actor is an actor in a resource, that resource must be a movie.<br />
sh:property [<br />
sh:path :actor_in ;<br />
sh:minCount 1 ;<br />
sh:class :Movie<br />
] ;<br />
sh:property [<br />
sh:path :plays_role ;<br />
sh:class :Role ;<br />
] ;<br />
# If an actor plays a role that is a role in some resource, that resource must be a movie.<br />
sh:property [<br />
sh:path ( :plays_role :role_in ) ;<br />
sh:class :Movie<br />
] .<br />
<br />
:MovieShape a sh:NodeShape ;<br />
sh:targetClass :Movie ;<br />
sh:property [<br />
sh:path dc:title ;<br />
sh:minCount 1 ;<br />
sh:maxCount 1 ;<br />
sh:datatype xsd:string<br />
] ;<br />
# A movie must be directed by at least one director or acted in by at least one actor.<br />
sh:or (<br />
[ sh:property [<br />
sh:path [ sh:inversePath :actor_in ] ;<br />
sh:minCount 1 ;<br />
] ]<br />
[ sh:property [<br />
sh:path [ sh:inversePath :director_of ] ;<br />
sh:minCount 1 ;<br />
] ] <br />
) ;<br />
sh:property [<br />
sh:path :year ;<br />
sh:minCount 1 ;<br />
sh:maxCount 1 ;<br />
sh:datatype xsd:gYear<br />
] .<br />
<br />
:RoleShape a sh:NodeShape ;<br />
sh:targetClass :Role ;<br />
sh:property [<br />
sh:path foaf:name ;<br />
sh:minCount 1 ;<br />
sh:maxCount 1 ;<br />
sh:datatype xsd:string<br />
] ;<br />
sh:property [<br />
sh:path :role_in ;<br />
sh:minCount 1 ;<br />
sh:class :Movie<br />
] .<br />
<br />
:LeadRoleShape a sh:NodeShape ;<br />
sh:node :RoleShape ;<br />
sh:targetClass :LeadRole .<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
*** RDFS rules<br />
<br />
A resource that is a director_of something is a director.<br />
<br />
:director_of rdfs:domain :Director .<br />
<br />
A resource that something else is a director_of is a movie.<br />
<br />
:director_of rdfs:range :Movie .<br />
<br />
The year of something has type xsd:gYear.<br />
<br />
:year rdfs:range xsd:gYear .<br />
<br />
An actor is a Person.<br />
<br />
:Actor rdfs:subClassOf foaf:Person .<br />
<br />
A director is a person.<br />
<br />
:Director rdfs:subClassOf foaf:Person .<br />
<br />
<br />
*** OWL axioms<br />
<br />
Nothing can be both a person and a movie.<br />
<br />
:Person owl:disjointWith :Movie .<br />
<br />
<br />
Nothing can be more than one of a person, a role, or a movie.<br />
<br />
[] a owl:AllDisjointClasses ;<br />
owl:members ( :Person :Role :Movie ) .<br />
<br />
Something that plays in at least one Movie is an Actor.<br />
<br />
[ a owl:Restriction ;<br />
owl:onProperty :plays_in ;<br />
owl:someValuesFrom :Movie<br />
] rdfs:subClassOf :Actor .<br />
<br />
A LeadActor is an Actor that plays at least one LeadRole.<br />
<br />
:LeadActor rdfs:subClassOf :Actor, [<br />
a owl:Restriction ;<br />
owl:onProperty :plays_role ;<br />
owl:someValuesFrom :LeadRole<br />
] .<br />
<br />
<br />
*** SPARQL queries<br />
<br />
Count the number of movies that are represented in the graph.<br />
<br />
SELECT (COUNT(?movie) AS ?count) WHERE {<br />
?movie rdf:type :Movie<br />
}<br />
<br />
List the titles and years of all movies.<br />
<br />
SELECT ?title ?year WHERE {<br />
?movie rdf:type :Movie ;<br />
dc:title ?title ;<br />
:year ?year<br />
}<br />
<br />
List the titles and years of all movies since 2000.<br />
<br />
SELECT ?title ?year WHERE {<br />
?movie rdf:type :Movie ;<br />
dc:title ?title ;<br />
:year ?year<br />
FILTER (xsd:integer(STR(?year)) >= 2000)<br />
}<br />
<br />
SELECT ?title ?year WHERE {<br />
?movie rdf:type :Movie ;<br />
dc:title ?title ;<br />
:year ?year<br />
FILTER (?year >= "2000"^^xsd:gYear)<br />
}<br />
<br />
List the titles and years of all movies sorted first by year, then by name.<br />
<br />
SELECT ?title ?year WHERE {<br />
?movie rdf:type :Movie ;<br />
dc:title ?title ;<br />
:year ?year<br />
}<br />
ORDER BY ?year ?title<br />
<br />
Count the number of movies for each year with more than one movie.<br />
<br />
SELECT ?year (COUNT(?movie) AS ?count) WHERE {<br />
?movie rdf:type :Movie ;<br />
:year ?year<br />
}<br />
GROUP BY ?year<br />
HAVING (COUNT(?movie) > 1)<br />
<br />
List the names of all persons that are both directors and actors.<br />
<br />
SELECT ?name WHERE {<br />
?person :plays_in / rdf:type :Movie ;<br />
:director_of / rdf:type :Movie ;<br />
foaf:name ?name<br />
}<br />
<br />
List the actor name and movie title for all lead roles.<br />
<br />
SELECT ?name ?title WHERE {<br />
?role rdf:type :LeadRole ;<br />
^:plays_role / foaf:name ?name ;<br />
:role_in / dc:title ?title<br />
}<br />
<br />
List all distinct pairs of actor names that have played lead roles in the same movies.<br />
<br />
SELECT ?name1 ?name2 WHERE {<br />
?movie rdf:type :Movie ;<br />
^:role_in ?role1, ?role2 .<br />
?role1 rdf:type :LeadRole ;<br />
^:plays_role / foaf:name ?name1 .<br />
?role2 rdf:type :LeadRole ;<br />
^:plays_role / foaf:name ?name2 .<br />
FILTER (STR(?name1) < STR(?name2))<br />
}<br />
<br />
<br />
*** Examples related to the programming task<br />
<br />
from owlrl import DeductiveClosure, OWLRL_Semantics<br />
import pandas as pd<br />
from pyshacl import validate<br />
from rdflib import Namespace, Graph, Literal, RDF, DC, FOAF, XSD<br />
<br />
<br />
ONTOLOGY_FILE = './movie-ontology.ttl'<br />
SHACL_FILE = './movie-shacl.ttl'<br />
DIRECTOR_FILE = './movie-director-year.csv'<br />
LEAD_ROLE_FILE = './movie-actor-lead-role.csv'<br />
OTHER_ROLE_FILE = './movie-actor-other-role.csv'<br />
<br />
BASE_URI = 'http://example.org/'<br />
MOVIE = Namespace(BASE_URI)<br />
<br />
<br />
def add_movie_triples(g, row):<br />
movie = row.to_dict()<br />
# example dict:<br />
# {'Movie': 'Pulp_Fiction', 'Director': 'Quentin_Tarantino', 'Year': 1994}<br />
movie_name = movie['Movie']<br />
director_name = movie['Director']<br />
movie_year = movie['Year']<br />
# update g with a set of triples that represent the movie and its director<br />
g.add((MOVIE[director_name], RDF.type, MOVIE.Director))<br />
g.add((MOVIE[director_name], FOAF.name, Literal(director_name)))<br />
g.add((MOVIE[director_name], MOVIE.director_of, MOVIE[movie_name]))<br />
g.add((MOVIE[movie_name], RDF.type, MOVIE.Movie))<br />
g.add((MOVIE[movie_name], DC.title, Literal(movie_name)))<br />
g.add((MOVIE[movie_name], MOVIE.year, Literal(movie_year, datatype=XSD.gYear)))<br />
<br />
<br />
def add_lead_role_triples(g, row):<br />
movie = row.to_dict()<br />
# example dict:<br />
# {'Movie': ..., 'Actor': ..., 'LeadRole': ...}<br />
movie_name = movie['Movie']<br />
actor_name = movie['Actor']<br />
role_name = movie_name+'-role-'+movie['LeadRole']<br />
# update g with a set of triples that represent the actor, the lead role, and the movie<br />
g.add((MOVIE[actor_name], RDF.type, MOVIE.Actor))<br />
g.add((MOVIE[actor_name], FOAF.name, Literal(actor_name)))<br />
g.add((MOVIE[actor_name], MOVIE.actor_in, MOVIE[movie_name]))<br />
g.add((MOVIE[actor_name], MOVIE.plays_role, MOVIE[role_name]))<br />
g.add((MOVIE[role_name], RDF.type, MOVIE.LeadRole))<br />
g.add((MOVIE[role_name], FOAF.name, Literal(movie['LeadRole'])))<br />
g.add((MOVIE[role_name], MOVIE.role_in, MOVIE[movie_name]))<br />
g.add((MOVIE[movie_name], RDF.type, MOVIE.Movie))<br />
<br />
<br />
def add_other_role_triples(g, row):<br />
movie = row.to_dict()<br />
# example dict:<br />
# {'Movie': ..., 'Actor': ..., 'Role': ...}<br />
movie_name = movie['Movie']<br />
actor_name = movie['Actor']<br />
role_name = movie_name+'-role-'+movie['Role']<br />
# update g with a set of triples that represent the actor, the role, and the movie<br />
g.add((MOVIE[actor_name], RDF.type, MOVIE.Actor))<br />
g.add((MOVIE[actor_name], FOAF.name, Literal(actor_name)))<br />
g.add((MOVIE[actor_name], MOVIE.actor_in, MOVIE[movie_name]))<br />
g.add((MOVIE[actor_name], MOVIE.plays_role, MOVIE[role_name]))<br />
g.add((MOVIE[role_name], RDF.type, MOVIE.Role))<br />
g.add((MOVIE[role_name], FOAF.name, Literal(movie['Role'])))<br />
g.add((MOVIE[role_name], MOVIE.role_in, MOVIE[movie_name]))<br />
g.add((MOVIE[movie_name], RDF.type, MOVIE.Movie))<br />
<br />
<br />
def load_movie_triples(g, fn):<br />
df = pd.read_csv(fn)<br />
df.apply(lambda row: add_movie_triples(g, row), axis=1)<br />
<br />
<br />
def load_lead_role_triples(g, fn):<br />
df = pd.read_csv(fn)<br />
df.apply(lambda row: add_lead_role_triples(g, row), axis=1)<br />
<br />
<br />
def load_other_role_triples(g, fn):<br />
df = pd.read_csv(fn)<br />
df.apply(lambda row: add_other_role_triples(g, row), axis=1)<br />
<br />
<br />
g = Graph()<br />
g.bind('', MOVIE)<br />
load_movie_triples(g, DIRECTOR_FILE)<br />
load_lead_role_triples(g, LEAD_ROLE_FILE)<br />
load_other_role_triples(g, OTHER_ROLE_FILE)<br />
print(g.serialize(format='ttl'))<br />
<br />
<br />
sg = Graph()<br />
sg.parse(SHACL_FILE, format='ttl')<br />
r = validate(g,<br />
shacl_graph=sg,<br />
# ont_graph=og,<br />
inference='rdfs'<br />
)<br />
val, rg, rep = r<br />
print(rep)<br />
<br />
<br />
g.parse(ONTOLOGY_FILE)<br />
DeductiveClosure(OWLRL_Semantics).expand(g)<br />
<br />
print(g.serialize(format='ttl'))<br />
</pre></div>Sinoahttp://info216.wiki.uib.no/Lab:_JSON-LDLab: JSON-LD2024-03-06T10:55:21Z<p>Rbo027: </p>
<hr />
<div><br />
==Topics==<br />
JSON-LD @context and processing in the JSON-LD Playground.<br />
<br />
Using a Web API to retrieve JSON-LD data from ConceptNet, parsing it programmatically, and using JSON-LD to turn it into RDF.<br />
<br />
==Useful Reading==<br />
* [https://json-ld.org/ JSON for Linking Data]<br />
* [https://github.com/commonsense/conceptnet5/wiki/API ConceptNet API]<br />
* [https://www.w3.org/TR/json-ld11/#basic-concepts Guide on JSON-LD]<br />
* [https://docs.google.com/presentation/d/1pRuO-6FZJbq3fAdXVfOU0vIP0VoAKG6kbmosf0qaK8Y/edit?usp=sharing JSON-LD Lab Presentation]<br />
<br />
Imports:<br />
* import json<br />
* import rdflib (JSON-LD support comes from the ''rdflib-jsonld'' plugin, which is registered automatically and needs no explicit import)<br />
* import requests<br />
<br />
==Tasks==<br />
<br />
===Part 1: Basic JSON-LD===<br />
<br />
In the first part of the lab you will start with an existing JSON-LD document:<br />
<pre><br />
{<br />
"@context": {<br />
"@base": "http://example.org/",<br />
"edges": "http://example.org/triple",<br />
"start": "http://example.org/source",<br />
"rel": "http://example.org/predicate",<br />
"end": "http://example.org/object",<br />
"Person" : "http://example.org/Person",<br />
"birthday" : {<br />
"@id" : "http://example.org/birthday",<br />
"@type" : "xsd:date"<br />
},<br />
"nameEng" : {<br />
"@id" : "http://example.org/en/name",<br />
"@language" : "en"<br />
}<br />
},<br />
"@graph": [<br />
{<br />
"@id": "people/Jeremy",<br />
"@type": "Person",<br />
"birthday" : "1987.1.1",<br />
"nameEng" : "Jeremy"<br />
},<br />
{<br />
"@id": "people/Tom",<br />
"@type": "Person"<br />
},<br />
{"edges" : [<br />
{<br />
"start" : "people/Jeremy",<br />
"rel" : "knows",<br />
"end" : "people/Tom"<br />
<br />
}<br />
]}<br />
]<br />
}<br />
</pre><br />
<br />
'''Task:'''<br />
Using the JSON-LD document as a starting point, complete the following tasks:<br />
<br />
* Create a new property named age that has an integer type, and add it to the people already in the graph.<br />
* Add the following two people to the graph, making sure that the languages of their names match their nationalities:<br />
** Ju: Chinese, 22 years old, likes to play basketball<br />
** Louis: French, 45 years old, and has black hair<br />
* Add the following edges to the graph:<br />
** Tom knows Louis<br />
** Louis teaches Ju<br />
** Ju plays basketball with Jeremy and Tom<br />
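A sketch of how the first sub-task might look, built with Python's json module (the age IRI and the sample value are made-up examples):

```python
import json

context = {
    "@base": "http://example.org/",
    "Person": "http://example.org/Person",
    # a typed term for the new age property (IRI is a made-up example)
    "age": {
        "@id": "http://example.org/age",
        "@type": "http://www.w3.org/2001/XMLSchema#integer",
    },
}
person = {"@id": "people/Jeremy", "@type": "Person", "age": 36}
print(json.dumps({"@context": context, "@graph": [person]}, indent=2))
```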
<br />
'''Task:'''<br />
In a web browser, go to [http://json-ld.org http://json-ld.org] and continue to the ''Playground''. Copy the JSON-LD document into the ''JSON-LD Input'' form.<br />
<br />
In the ''Compacted'' form below the input form you will see a processed version of the JSON-LD input. Compare the information about Jeremy in the Input and Compacted output. Many of the keys and values in the output have been expanded according to mappings defined in the ''@context'' object at the beginning of the document.<br />
<br />
[http://json-ld.org http://json-ld.org] provides many other processed variants of the JSON-LD, but ''Compacted'' may be easiest to start with.<br />
<br />
===Part 2: Retrieving JSON-LD from ConceptNet===<br />
<br />
'''Task:'''<br />
In a web browser, go to [http://conceptnet.io http://conceptnet.io] and search for a term you are interested in. (It is good to take a concept related to the Mueller investigation, for example 'indictment'.)<br />
<br />
The same URL, but with ''https://api.conceptnet.io/'' instead of just ''https://conceptnet.io/'', returns the data as JSON-LD. It looks awfully detailed, but it is easy to simplify!<br />
<br />
'''Task:'''<br />
In another web browser tab, go to [http://json-ld.org http://json-ld.org] again and continue to the ''Playground''. Copy your JSON-LD data from the ''ConceptNet'' tab to the ''JSON-LD Input'' form.<br />
<br />
In the ''Expanded'' form you again will see a processed version of the JSON-LD input. This time the keys and values in the output have been expanded according to mappings defined in the ''@context'' file [http://api.conceptnet.io/ld/conceptnet5.7/context.ld.json http://api.conceptnet.io/ld/conceptnet5.7/context.ld.json], as specified in the beginning of the JSON-LD input:<br />
<br />
"@context": [<br />
"http://api.conceptnet.io/ld/conceptnet5.7/context.ld.json"<br />
],<br />
<br />
'''Task:'''<br />
Instead of a file, we will write our own simpler ''@context'' object into the JSON-LD Input. It should look like this instead:<br />
<br />
"@context": {<br />
"current_key": "url_we_want_the_key_mapped_to",<br />
...<br />
},<br />
<br />
We are interested in these keys: edges, start, rel, end. Map them to simple URLs, like ''http://ex.org/t'' (for triple), ''http://ex.org/s'', ''http://ex.org/p'' and ''http://ex.org/o''. These are the basic triples we are most interested in!<br />
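With the example URLs above, the filled-in ''@context'' object could look like this (the exact mappings are up to you):<br />

```json
"@context": {
  "edges": "http://ex.org/t",
  "start": "http://ex.org/s",
  "rel": "http://ex.org/p",
  "end": "http://ex.org/o"
},
```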
<br />
Look at the Expanded version again. It is much simpler now: the JSON-LD processor ignores regular keys that are not mapped (but the special keys with ''@'' are still there.)<br />
<br />
'''Task:'''<br />
Remove the line that maps the ''edges'' key. What happens and why? Put the ''edges'' mapping back in again.<br />
<br />
'''Task:'''<br />
In addition to the ''Expanded'' tab, the Playground can show ''Compacted'' and ''Flattened'' versions of the JSON-LD Input too. They are different ways of processing the same data, each of them useful for different purposes.<br />
<br />
Which one do you prefer for reading? Which one would be easiest to program as JSON?<br />
<br />
'''Task:'''<br />
We have lost the labels again!<br />
<br />
Map ''label'' to ''http://www.w3.org/2000/01/rdf-schema#label'' and see what happens.<br />
<br />
===Part 3: Programming JSON-LD in Python===<br />
<br />
'''Task:'''<br />
Install the ''rdflib-jsonld'' package in the same environment as you have ''rdflib'' installed. (In newer versions of ''rdflib'', from 6.0 onwards, JSON-LD support is built in and the separate package is no longer needed.)<br />
<br />
Create a graph object and parse the ''https://api.conceptnet.io/...'' URL you used to download JSON-LD data earlier. You need to add the argument ''format="json-ld"'' when you call ''parse(...)'', but you should not need to ''import'' more than ''rdflib'' as before.<br />
<br />
'''Task:'''<br />
Inspect the graph object using simple SPARQL queries to find the distinct predicates and types used.<br />
<br />
You can also count the number of triples in the graph:<br />
print(len(g))<br />
or iterate through all the triples<br />
for s, p, o in g:<br />
    print(s, p, o)<br />
<br />
'''Task:'''<br />
Unfortunately, the graph is much more complex than we need and it is not easy to pick out the triples we want. We want to add our own context object like we did in the Playground. Instead of parsing a graph directly from a URL, we first download it as a JSON object, for example:<br />
<br />
import json<br />
import requests<br />
<br />
CN_BASE = 'http://api.conceptnet.io/c/en/'<br />
<br />
json_obj = requests.get(CN_BASE+'indictment').json()<br />
<br />
Now, ''json_obj['@context']'' contains the @context object. Define your own context object in Python similar to the one you used in the Playground, and assign it to ''json_obj['@context']''.<br />
<br />
First serialise the modified JSON object into a JSON string (''import json'' and ''json.dumps(...)''). Then create another graph object and parse the JSON string. You need to add the argument ''data=...'' in addition to ''format="json-ld"'' when you call ''parse(...)'' because you are no longer parsing from a file or URL, but from a string.<br />
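As a sketch, with a made-up miniature ''json_obj'' (a real ConceptNet response has many more keys) and the example ''http://ex.org/'' mappings from Part 2, the replacement and serialisation could look like this:<br />

```python
import json

# made-up miniature ConceptNet-style response; a real one has many more keys
json_obj = {
    "@context": ["http://api.conceptnet.io/ld/conceptnet5.7/context.ld.json"],
    "edges": [
        {"start": {"@id": "/c/en/indictment"},
         "rel": {"@id": "/r/IsA"},
         "end": {"@id": "/c/en/accusation"}}
    ],
}

# swap the heavyweight remote context for our own simple mappings
json_obj["@context"] = {
    "edges": "http://ex.org/t",
    "start": "http://ex.org/s",
    "rel": "http://ex.org/p",
    "end": "http://ex.org/o",
}

# serialise back to a JSON string
json_str = json.dumps(json_obj)
```

The resulting string can then be parsed into a fresh graph with ''parse(data=json_str, format="json-ld")'', as the task describes.<br />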
<br />
Save the JSON string for later, so you do not have to retrieve the same data over and over from ConceptNet.<br />
<br />
'''Task:'''<br />
Create a new SPARQL SELECT query that lists all the ''(s, p, o)'' triples in your graph.<br />
<br />
The URLs for predicates should be fine now, but the URLs of subjects and objects can be improved by mapping the special ''@base'' key in the ''@context'' object to a simple URL like ''http://ex.org/''.<br />
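Continuing the example mappings from earlier, the extended ''@context'' might look like this (only the ''@base'' line is new):<br />

```json
"@context": {
  "@base": "http://ex.org/",
  "edges": "http://ex.org/t",
  "start": "http://ex.org/s",
  "rel": "http://ex.org/p",
  "end": "http://ex.org/o"
},
```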
<br />
'''Task:'''<br />
Extend the SELECT query so that it also lists all the labels of subjects and objects.<br />
<br />
'''Task:'''<br />
Change the SELECT query into a CONSTRUCT query that returns a new graph of all the basic triples in the original JSON-LD data. Save it to a file and look at it in a visualiser you like.<br />
<br />
==If you have more time...==<br />
<br />
'''Task:'''<br />
Merge the new triples with your existing graph if they fit there.<br />
<br />
'''Task:'''<br />
Wrap the code you have written into a function ''describe_concept(...)'' that takes a concept name as argument (e.g., 'indictment') and returns a ConceptNet subgraph that describes the concept.<br />
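As a much simplified, plain-Python analogue (the ''edges''/''start''/''rel''/''end'' structure below mirrors the ConceptNet response; the function name and example data are our own), the triple extraction at the heart of such a function might look like:<br />

```python
def basic_triples(json_obj):
    """Extract the basic (s, p, o) triples from the 'edges' of a
    ConceptNet-style JSON object."""
    return [(edge["start"]["@id"], edge["rel"]["@id"], edge["end"]["@id"])
            for edge in json_obj.get("edges", [])]

# tiny made-up object with the structure the function expects
example = {"edges": [{"start": {"@id": "/c/en/indictment"},
                      "rel": {"@id": "/r/IsA"},
                      "end": {"@id": "/c/en/accusation"}}]}
triples = basic_triples(example)
```

A full ''describe_concept(...)'' would additionally download the JSON, replace the context and build an rdflib graph, as in the earlier tasks.<br />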
<br />
'''Task:'''<br />
The original JSON-LD data from ''https://api.conceptnet.io/...'' contains a ''view'' object at the end. Check it out!<br />
<br />
By default, the API only returns 20 edges at a time. You can modify that by adding a ''?limit=...'' argument to your URL.<br />
<br />
Modify your ''describe_concept(...)'' function to take an extra argument that controls how many edges are downloaded.<br />
<br />
'''Task:'''<br />
You still have way too many triples. Use ''FILTER'' and ''STRENDS'' to ignore some predicates like ''Synonym'' and general ''RelatedTo''.<br />
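In plain Python terms, the ''FILTER''/''STRENDS'' combination does something like the following (the URIs here are made up to match the ''http://ex.org/'' base from earlier tasks; adjust the ignored names to your own data):<br />

```python
# relation names to drop, mirroring FILTER(!STRENDS(STR(?p), "Synonym")) etc.
IGNORED = ("Synonym", "RelatedTo")

def keep(triple):
    s, p, o = triple
    # str.endswith accepts a tuple of suffixes
    return not p.endswith(IGNORED)

# made-up triples for illustration
triples = [
    ("http://ex.org/c/en/indictment", "http://ex.org/r/IsA", "http://ex.org/c/en/accusation"),
    ("http://ex.org/c/en/indictment", "http://ex.org/r/Synonym", "http://ex.org/c/en/charge"),
]
kept = [t for t in triples if keep(t)]
```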
<br />
'''Task:'''<br />
Modify your ''@context'' and query so you can remove triples with concepts that are not in English language (''en'').</div>Sinoahttp://info216.wiki.uib.no/Lab:_DBpedia_SpotlightLab: DBpedia Spotlight2024-03-03T18:47:23Z<p>Sinoa: Created page with "== If you have more time == '''Task:''' If you have more time, you can use DBpedia Spotlight to try to link the people (and other "named entities") mentioned in the dataset to DBpedia resources. pip install pyspotlight You can start with the code example below, but you will need exception-handling when DBpedia is unable to find a match. For instance: <syntaxhighlight> import spotlight ENDPOINT = 'https://api.dbpedia-spotlight.org/en/annotate' CONFIDENCE = 0.5 # filt..."</p>
<hr />
<div>== If you have more time ==<br />
<br />
<br />
'''Task:'''<br />
If you have more time, you can use DBpedia Spotlight to try to link the people (and other "named entities") mentioned in the dataset to DBpedia resources.<br />
pip install pyspotlight<br />
You can start with the code example below, but you will need exception-handling when DBpedia is unable to find a match. For instance:<br />
<syntaxhighlight><br />
import spotlight<br />
<br />
ENDPOINT = 'https://api.dbpedia-spotlight.org/en/annotate'<br />
CONFIDENCE = 0.5 # filter out results with lower confidence<br />
<br />
def annotate_entity(entity_name, filters={'types': 'DBpedia:Person'}):<br />
    annotations = []<br />
    try:<br />
        annotations = spotlight.annotate(ENDPOINT, entity_name, confidence=CONFIDENCE, filters=filters)<br />
    except spotlight.SpotlightException as e:<br />
        # catch exceptions thrown from Spotlight, for example when no DBpedia resource is found<br />
        print(e)<br />
        # handle exceptions here<br />
    return annotations<br />
</syntaxhighlight><br />
The example uses the types filter with ''DBpedia:Person'', because we only want it to match people. You can choose to use only the URIs in the response, or the types as well.<br />
<br />
Useful materials:<br />
* [https://www.dbpedia-spotlight.org/api Spotlight Documentation]<br />
* [https://pypi.org/project/pyspotlight/ pyspotlight 0.7.2 at PyPi.org]<br />
</div>Sinoa
<hr />
<div>==Topics==<br />
Wikidata in RDF: <br />
* retrieve primary triples about a Wikidata entity<br />
* load the semantic data and metadata into GraphDB<br />
* visualise the semantic data and metadata<br />
<br />
''Motivation:'' So far you have built your own knowledge graph and worked on a small graph you were given. This week we will look at how to retrieve knowledge graphs from Wikidata, which can then be merged with your own graph to provide additional context. This is not a trivial problem, because Wikidata most likely contains a lot more data - and in particular metadata - than you need. <br />
<br />
==Useful materials==<br />
* [https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/A_gentle_introduction_to_the_Wikidata_Query_Service A gentle introduction to the Wikidata Query Service] (Simple)<br />
* [https://www.mediawiki.org/wiki/Wikibase/Indexing/RDF_Dump_Format RDF Dump Format]<br />
* [https://en.wikibooks.org/wiki/SPARQL/Expressions_and_Functions SPARQL Expressions and Functions] - you will need this a lot<br />
<br />
==Tasks==<br />
'''Getting ready:'''<br />
In a web browser, go to [http://query.wikidata.org Wikidata's Query Service (WDQS)]. Be careful to ''always use a limit like LIMIT 100 when you test things''. Otherwise, you risk being blocked from the query service or, worse, you risk blocking out a whole subdomain.<br />
<br />
''Emergency data:'' <br />
If Wikidata's Query Service is unavailable, you can load [[:File:Q42-extended.tar | this Turtle file]] into GraphDB instead, and continue there using ''Q42'' as your example entity. (Remember to rename the file from ''.tar'' to ''.ttl'' - it is not a ''.tar''-file.)<br />
<br />
'''Task:'''<br />
From [https://www.wikidata.org/ Wikidata's ordinary UI], find the Q-code of one of the people or entities involved in the Mueller investigation. Use that entity as your reference in the rest of this lab. (The Q-code should look like this ''https://www.wikidata.org/entity/Q42'' or ''wd:Q42''.)<br />
<br />
'''Task:'''<br />
Use a DESCRIBE query to retrieve some triples about your entity (remember LIMIT 100, although it is less critical on DESCRIBE queries). <br />
<br />
'''Task:'''<br />
Use a SELECT query to retrieve the first 100 triples about your entity. <br />
<br />
''Tip:'' Always save your queries and updates as soon as they succeed. You may need to go back to them later.<br />
<br />
'''Task:'''<br />
Start GraphDB on your local machine. Create a new repository (''No inference'' needed), and activate it. Write a local SELECT query that embeds a <https://query.wikidata.org/bigdata/namespace/wdq/sparql> SERVICE query to retrieve the first 100 triples about your entity ''to your local machine''. <br />
<br />
''Tip:'' ''wd:'' is a PREFIX for ''<http://www.wikidata.org/entity/>''.<br />
<br />
''Tip:'' To make LIMIT work inside a SERVICE query, you have to add another SELECT inside it, like this:<br />
<br />
SELECT ... {             # the local query<br />
    SERVICE ... {        # the remote service<br />
        SELECT ... {<br />
            ...<br />
        } LIMIT 100      # this limit works on the remote service<br />
    }<br />
}                        # a limit here would work on your local service,<br />
                         # but is not strictly necessary when you already have an inner LIMIT<br />
<br />
'''Task:'''<br />
Change the SELECT query to an INSERT query that adds the Wikidata triples to your local repository. Use a local ASK and/or SELECT query to check that the triples have actually been added.<br />
<br />
'''Task:'''<br />
Go back to the [http://query.wikidata.org Wikidata Query Service (WDQS)]. (You can run the rest of the lab using a remote SERVICE inside GraphDB, but using WDQS might give you better error messages etc.)<br />
<br />
Primary Wikidata statements use the prefix ''wd:'' for resources and ''wdt:'' for predicates. Use a FILTER statement to only SELECT primary triples in this sense.<br />
<br />
These PREFIXes are built into WDQS, but you will need them if you run inside GraphDB:<br />
PREFIX wd: <http://www.wikidata.org/entity/><br />
PREFIX wdt: <http://www.wikidata.org/prop/direct/><br />
<br />
'''Task:'''<br />
Use Wikidata's in-built ''SERVICE wikibase:label'' to get labels for all the object resources. (Autocompletion with Ctrl-Space will help you set up the service.)<br />
<br />
Afterwards, instead of ''SELECT *'' or ''SELECT ?p ?o'', you can write ''SELECT ?p ?o ?oLabel'' to see the labels for all resource objects.<br />
<br />
'''Task:'''<br />
You now have labels for all the resource objects, but you have no primary triples with literal values. Edit your query (by relaxing the FILTER expression) so it also returns triples where the object has DATATYPE ''xsd:string''.<br />
<br />
'''Task:'''<br />
You still do not have the "fingerprint" triples, i.e., the label, aliases and description of your reference entity. Wikidata uses special properties like ''rdfs:label'', ''skos:altLabel'' and ''schema:description'' for these. Relax the FILTER expression again so it also returns triples with these three predicates.<br />
<br />
PREFIXes you may need:<br />
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#><br />
PREFIX skos: <http://www.w3.org/2004/02/skos/core#><br />
PREFIX schema: <http://schema.org/><br />
<br />
'''Task:'''<br />
Now you may have too many fingerprint triples! Try to restrict the FILTER expression again so that, when the predicate is ''rdfs:label'', ''skos:altLabel'' or ''schema:description'', the object must have LANG "en".<br />
<br />
'''Task:'''<br />
Go back to your local GraphDB, and run your Wikidata SELECT query inside a SERVICE statement like you did before. (You must declare all PREFIXes now.)<br />
<br />
'''Task:'''<br />
Change the SELECT query to an INSERT query that adds the Wikidata triples to your local repository. Go to ''Explorer -> Graph view'' and enter the URI (Q-code) of your reference entity. You can click the cogwheel to extend the number of relations shown for the reference node.<br />
<br />
==If you have more time...==<br />
<br />
'''Task:'''<br />
An earlier task returned labels for all the resource objects, but predicates have labels too. Unfortunately, the wikibase:label SERVICE only provides labels for entities with a ''wd:'' prefix. You must therefore REPLACE all ''wdt:'' prefixes of properties with ''wd:'' prefixes and BIND the new URI AS a new variable, for example ''?pw''.<br />
<br />
''Tip:'' Use the STR(...) and IRI(...) functions carefully: URIs and prefixes are not strings, and strings are not URIs.<br />
<br />
Afterwards, instead of ''SELECT ?p ?o ?oLabel'', you can write ''SELECT ?p ?pwLabel ?o ?oLabel'' to see the labels for both predicates and resource objects.<br />
<br />
'''Task:'''<br />
Now you can go back to the SELECT statement that returned primary triples with only resource objects (not literal objects or fingerprints). Extend it so it also includes primary triples "one step out", i.e., triples where the ''subjects'' are ''objects'' of triples involving your reference entity.<br />
<br />
In your local GraphDB, use an INSERT statement to add the triples to your local repository. Use ''Explorer'' and ''Graph view'' to visualise the extended graph. You can click on nodes and ''Extend'' them to see their neighbouring nodes.</div>Sinoahttp://info216.wiki.uib.no/Lab:_SPARQL_2Lab: SPARQL 22024-02-11T10:52:37Z<p>Bamos3003: Created page with "==Topics== * SPARQL updates * SPARQL insertions * SPARQL deletions * DESCRIBE and CONSTRUCT ==Useful materials== GraphDB documentation: * [https://graphdb.ontotext.com/docume..."</p>
<hr />
<div>==Topics==<br />
* SPARQL updates<br />
* SPARQL insertions<br />
* SPARQL deletions<br />
* DESCRIBE and CONSTRUCT<br />
<br />
==Useful materials==<br />
GraphDB documentation:<br />
* [https://graphdb.ontotext.com/documentation/10.0/quick-start-guide.html GraphDB 10.0 Quick Start Guide]<br />
* [https://graphdb.ontotext.com/documentation/10.5/ GraphDB 10.5 Documentation]<br />
<br />
SPARQL reference:<br />
* [https://www.w3.org/TR/sparql11-query/ SPARQL Query Documentation]<br />
* [http://www.w3.org/TR/sparql11-update/ SPARQL Update Documentation]<br />
* [https://en.wikibooks.org/wiki/SPARQL/Expressions_and_Functions SPARQL Expressions and Functions]<br />
<br />
==Tasks==<br />
<br />
'''Task:'''<br />
Write the following SPARQL updates:<br />
* The ''muellerkg:name'' property is misnamed, because the object in those triples is always a resource. Rename it to something like ''muellerkg:person''.<br />
* Update the graph so all the investigated person and president nodes (such as ''muellerkg:G._Gordon_Liddy'' and ''muellerkg:Richard_Nixon'') become the subjects in ''foaf:name'' triples with the corresponding strings (''G. Gordon Liddy'' and ''Richard Nixon'') as the literals. (''Tip:'' Use ''STR(muellerkg:)'' inside a REPLACE in a BIND statement to remove the URI path.)<br />
<br />
'''Task:'''<br />
Load the RDF graph you created in exercises 1 and 2. (Maybe you want to create a new namespace in GraphDB first.) Use INSERT DATA updates to add these triples to your graph:<br />
* George Papadopoulos was adviser to the Trump campaign.<br />
** He pleaded guilty to lying to the FBI.<br />
** He was sentenced to prison. <br />
* Roger Stone is a Republican.<br />
** He was adviser to Trump.<br />
** He was an official in the Trump campaign.<br />
** He interacted with Wikileaks.<br />
** He made a testimony for the House Intelligence Committee.<br />
** He was cleared of all charges.<br />
<br />
'''Task:'''<br />
Use DELETE DATA and then INSERT DATA updates to correct that Roger Stone was cleared of all charges. Actually,<br />
* He was indicted for making false statements, witness tampering, and obstruction of justice.<br />
<br />
'''Task:'''<br />
* Use a DESCRIBE query to show the updated information about Roger Stone.<br />
* Use a CONSTRUCT query to create a new RDF graph with triples only about Roger Stone (in other words, having Roger Stone as the subject).<br />
<br />
==If you have more time==<br />
<br />
'''Task:'''<br />
In the ''russia_investigation_kg.ttl'' dataset, the ''muellerkg:name'' property used as predicate is already covered by a standard term from an established vocabulary in the LOD cloud: ''foaf:name'', where ''foaf:'' is ''http://xmlns.com/foaf/0.1/''. <br />
* If you have not done so already: write a SPARQL DELETE/INSERT update to change every ''muellerkg:name'' predicate in your graph to ''foaf:name''. (It is easy to destroy your RDF graph when you do this, so it is good you saved a copy in the previous task.)<br />
* Otherwise: find another resource to rename everywhere. For example, you can change your local URI for a public person to a standard [https://wikidata.org Wikidata] URI.<br />
<br />
'''Task:''' Write a DELETE/INSERT statement to change one of the prefixes in your graph, renaming all the resources that use that prefix.<br />
<br />
'''Task:''' Write an INSERT statement to add at least one significant date to the Mueller investigation, with literal type xsd:date. Write a DELETE/INSERT statement to change the date to a string, and a new DELETE/INSERT statement to change it back to xsd:date.</div>Bamos3003