Lab Solutions

From info216
Revision as of 12:39, 19 April 2020

This page will be updated with Python examples related to the lectures and labs. We will add more examples after each lab has ended. The first examples will use Python's RDFlib. We will introduce other relevant libraries later.


Lecture 1: Python, RDFlib, and PyCharm

Printing the triples of the Graph in a readable way

# The Turtle format is designed to be more readable for humans.
print(g.serialize(format="turtle").decode())

Coding Tasks Lab 1

from rdflib import Graph, Namespace, URIRef, BNode, Literal
from rdflib.namespace import RDF, FOAF, XSD

g = Graph()
ex = Namespace("http://example.org/")

g.add((ex.Cade, ex.married, ex.Mary))
g.add((ex.France, ex.capital, ex.Paris))
g.add((ex.Cade, ex.age, Literal("27", datatype=XSD.integer)))
g.add((ex.Mary, ex.age, Literal("26", datatype=XSD.integer)))
g.add((ex.Mary, ex.interest, ex.Hiking))
g.add((ex.Mary, ex.interest, ex.Chocolate))
g.add((ex.Mary, ex.interest, ex.Biology))
g.add((ex.Mary, RDF.type, ex.Student))
g.add((ex.Paris, RDF.type, ex.City))
g.add((ex.Paris, ex.locatedIn, ex.France))
g.add((ex.Cade, ex.characteristic, ex.Kind))
g.add((ex.Mary, ex.characteristic, ex.Kind))
g.add((ex.Mary, RDF.type, FOAF.Person))
g.add((ex.Cade, RDF.type, FOAF.Person))

Lecture 2: RDF programming

Different ways to create an address

from rdflib import Graph, Namespace, URIRef, BNode, Literal
from rdflib.namespace import RDF, FOAF, XSD

g = Graph()
ex = Namespace("http://example.org/")


# How to represent the address of Cade Tracey. From probably the worst solution to the best.

# Solution 1 -
# Make the entire address into one Literal. However, generally we want to separate each part of an address into its own triple. This is useful, for instance, if we want to find only the streets where people live.

g.add((ex.Cade_Tracey, ex.livesIn, Literal("1516_Henry_Street, Berkeley, California 94709, USA")))


# Solution 2 - 
# Separate the different pieces of information into their own triples

g.add((ex.Cade_tracey, ex.street, Literal("1516_Henry_Street")))
g.add((ex.Cade_tracey, ex.city, Literal("Berkeley")))
g.add((ex.Cade_tracey, ex.state, Literal("California")))
g.add((ex.Cade_tracey, ex.zipcode, Literal("94709")))
g.add((ex.Cade_tracey, ex.country, Literal("USA")))


# Solution 3 - Some parts of the address can make more sense as resources than as Literals.
# Larger concepts like a city or state are typically represented as resources rather than Literals, but this is not necessarily a requirement if you don't intend to say more about them.

g.add((ex.Cade_tracey, ex.street, Literal("1516_Henry_Street")))
g.add((ex.Cade_tracey, ex.city, ex.Berkeley))
g.add((ex.Cade_tracey, ex.state, ex.California))
g.add((ex.Cade_tracey, ex.zipcode, Literal("94709")))
g.add((ex.Cade_tracey, ex.country, ex.USA))


# Solution 4 
# Grouping the information into an Address. We can represent the address concept with its own URI OR with a blank node.
# One advantage of this is that we can easily remove the entire address, instead of removing each individual part of the address.
# Solution 4 or 5 is how I would recommend representing addresses. Here, ex.CadeAddress could also be called something like ex.address1 and so on, if you want to give each address a unique ID.

# Address URI - CadeAddress

g.add((ex.Cade_Tracey, ex.address, ex.CadeAddress))
g.add((ex.CadeAddress, RDF.type, ex.Address))
g.add((ex.CadeAddress, ex.street, Literal("1516 Henry Street")))
g.add((ex.CadeAddress, ex.city, ex.Berkeley))
g.add((ex.CadeAddress, ex.state, ex.California))
g.add((ex.CadeAddress, ex.postalCode, Literal("94709")))
g.add((ex.CadeAddress, ex.country, ex.USA))

# OR

# Blank node for Address.  
address = BNode()
g.add((ex.Cade_Tracey, ex.address, address))
g.add((address, RDF.type, ex.Address))
g.add((address, ex.street, Literal("1516 Henry Street", datatype=XSD.string)))
g.add((address, ex.city, ex.Berkeley))
g.add((address, ex.state, ex.California))
g.add((address, ex.postalCode, Literal("94709", datatype=XSD.string)))
g.add((address, ex.country, ex.USA))


# Solution 5 using existing vocabularies for address 

# (in this case https://schema.org/PostalAddress from schema.org).
# Also using existing resources for places like California (e.g. http://dbpedia.org/resource/California from DBpedia).

schema = Namespace("https://schema.org/")
dbp = Namespace("http://dbpedia.org/resource/")

g.add((ex.Cade_Tracey, schema.address, ex.CadeAddress))
g.add((ex.CadeAddress, RDF.type, schema.PostalAddress))
g.add((ex.CadeAddress, schema.streetAddress, Literal("1516 Henry Street")))
g.add((ex.CadeAddress, schema.addressLocality, dbp.Berkeley))
g.add((ex.CadeAddress, schema.addressRegion, dbp.California))
g.add((ex.CadeAddress, schema.postalCode, Literal("94709")))
g.add((ex.CadeAddress, schema.addressCountry, dbp.United_States))

Typed Literals

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, XSD
g = Graph()
ex = Namespace("http://example.org/")

g.add((ex.Cade, ex.age, Literal(27, datatype=XSD.integer)))
g.add((ex.Cade, ex.gpa, Literal(3.3, datatype=XSD.float)))
g.add((ex.Cade, FOAF.name, Literal("Cade Tracey", datatype=XSD.string)))
g.add((ex.Cade, ex.birthday, Literal("2006-01-01", datatype=XSD.date)))


Writing and reading graphs/files

# Writing the graph to a file on your system. Possible formats: turtle, n3, xml, nt.
g.serialize(destination="triples.txt", format="turtle")

# Parsing a local file
parsed_graph = g.parse(location="triples.txt", format="turtle")

# Parsing a remote endpoint like DBpedia
dbpedia_graph = g.parse("http://dbpedia.org/resource/Pluto")


Collection Example

from rdflib import Graph, Namespace, BNode
from rdflib.collection import Collection

g = Graph()
ex = Namespace("http://example.org/")


# Sometimes we want to add many objects or subjects for the same predicate at once. 
# In these cases we can use Collection() to save some time.
# In this case I want to add all countries that Emma has visited at once.

b = BNode()
g.add((ex.Emma, ex.visit, b))
Collection(g, b,
    [ex.Portugal, ex.Italy, ex.France, ex.Germany, ex.Denmark, ex.Sweden])

# OR

g.add((ex.Emma, ex.visit, ex.EmmaVisits))
Collection(g, ex.EmmaVisits,
    [ex.Portugal, ex.Italy, ex.France, ex.Germany, ex.Denmark, ex.Sweden])

Lecture 3: SPARQL

SPARQL queries from the lecture

SELECT DISTINCT ?p WHERE {
    ?s ?p ?o .
}
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT DISTINCT ?t WHERE {
    ?s rdf:type ?t .
}
PREFIX owl: <http://www.w3.org/2002/07/owl#>
CONSTRUCT { 
    ?s owl:sameAs ?o2 . 
} WHERE {
    ?s owl:sameAs ?o .
    FILTER(REGEX(STR(?o), "^http://www\\.", "s"))
    BIND(URI(REPLACE(STR(?o), "^http://www\\.", "http://", "s")) AS ?o2)
}

Select all contents of lists (rdflib.Collection)

# rdflib.Collection has a different internal structure, so it requires a slightly more advanced query. Here I am selecting all places that Emma has visited.

PREFIX ex:   <http://example.org/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?visit
WHERE {
  ex:Emma ex:visit/rdf:rest*/rdf:first ?visit
}

Lecture 4: SPARQL Programming

SELECTING data from Blazegraph via Python

from SPARQLWrapper import SPARQLWrapper, JSON

# This creates a server connection to the same URL that hosts Blazegraph's graphical interface.
# You also need to add "sparql" to the end of the URL, as below.

sparql = SPARQLWrapper("http://localhost:9999/blazegraph/sparql")

# SELECT all distinct predicates in the database.

sparql.setQuery("""
    SELECT DISTINCT ?p WHERE {
    ?s ?p ?o.
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for result in results["results"]["bindings"]:
    print(result["p"]["value"])

# SELECT all interests of Cade

sparql.setQuery("""
    PREFIX ex: <http://example.org/>
    SELECT DISTINCT ?interest WHERE {
    ex:Cade ex:interest ?interest.
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for result in results["results"]["bindings"]:
    print(result["interest"]["value"])

Updating data from Blazegraph via Python

from SPARQLWrapper import SPARQLWrapper, POST, DIGEST

namespace = "kb"
sparql = SPARQLWrapper("http://localhost:9999/blazegraph/namespace/"+ namespace + "/sparql")

sparql.setMethod(POST)
sparql.setQuery("""
    PREFIX ex: <http://example.org/>
    INSERT DATA{
    ex:Cade ex:interest ex:Mathematics.
    }
""")

results = sparql.query()
print(results.response.read())

Lecture 5: RDFS

RDFS inference with RDFLib

You can use the OWL-RL package to add inference capabilities to RDFLib. Download it from GitHub and copy the owlrl subfolder into your project folder, next to your Python files.

OWL-RL documentation.

Example program to get started:

import rdflib.plugins.sparql.update
import owlrl.RDFSClosure

g = rdflib.Graph()

ex = rdflib.Namespace('http://example.org#')
g.bind('', ex)

g.update("""
PREFIX ex: <http://example.org#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
INSERT DATA {
    ex:Socrates rdf:type ex:Man .
    ex:Man rdfs:subClassOf ex:Mortal .
}""")

# The next three lines add inferred triples to g.
rdfs = owlrl.RDFSClosure.RDFS_Semantics(g, False, False, False)
rdfs.closure()
rdfs.flush_stored_triples()

b = g.query("""
PREFIX ex: <http://example.org#>
ASK {
    ex:Socrates rdf:type ex:Mortal .
} 
""")
print('Result: ' + str(bool(b)))

Language-tagged RDFS labels

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDFS

g = Graph()
ex = Namespace("http://example.org/")

g.add((ex.France, RDFS.label, Literal("Frankrike", lang="no")))
g.add((ex.France, RDFS.label, Literal("France", lang="en")))
g.add((ex.France, RDFS.label, Literal("Francia", lang="es")))

Lecture 6: RDFS Plus / OWL

RDFS Plus / OWL inference with RDFLib

You can use the OWL-RL package again as for Lecture 5.

Instead of:

# The next three lines add inferred triples to g.
rdfs = owlrl.RDFSClosure.RDFS_Semantics(g, False, False, False)
rdfs.closure()
rdfs.flush_stored_triples()

you can write this to get both RDFS and basic RDFS Plus / OWL inference:

# The next three lines add inferred triples to g.
owl = owlrl.CombinedClosure.RDFS_OWLRL_Semantics(g, False, False, False)
owl.closure()
owl.flush_stored_triples()

Example updates and queries:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX ex: <http://example.org#>

INSERT DATA {
    ex:Socrates ex:hasWife ex:Xanthippe .
    ex:hasHusband owl:inverseOf ex:hasWife .
}
ASK {
   ex:Xanthippe ex:hasHusband ex:Socrates .
 }
ASK {
   ex:Socrates ^ex:hasHusband ex:Xanthippe .
 }
INSERT DATA {
    ex:hasWife rdfs:subPropertyOf ex:hasSpouse .
    ex:hasSpouse a owl:SymmetricProperty .
}
ASK {
   ex:Socrates ex:hasSpouse ex:Xanthippe .
 }
ASK {
   ex:Socrates ^ex:hasSpouse ex:Xanthippe .
 }

Lab 9

Download from BlazeGraph

"""
Dumps a database to a local RDF file.
You need to install the SPARQLWrapper package first...
"""

import datetime
from SPARQLWrapper import SPARQLWrapper, RDFXML

# your namespace, the default is 'kb'
ns = 'kb'

# the SPARQL endpoint
endpoint = 'http://info216.i2s.uib.no/bigdata/namespace/' + ns + '/sparql'

# - the endpoint just moved, the old one was:
# endpoint = 'http://i2s.uib.no:8888/bigdata/namespace/' + ns + '/sparql'

# create wrapper
wrapper = SPARQLWrapper(endpoint)

# prepare the SPARQL query
wrapper.setQuery('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')
wrapper.setReturnFormat(RDFXML)

# execute the SPARQL query and convert the result to an rdflib.Graph
graph = wrapper.query().convert()

# the destination file, with code to make it timestamped
destfile = 'rdf_dumps/slr-kg4news-' + datetime.datetime.now().strftime('%Y%m%d-%H%M') + '.ttl'

# serialize the result to file as Turtle
graph.serialize(destination=destfile, format='turtle')

# report and quit
print('Wrote %u triples to file %s .' %
      (len(graph), destfile))

Semantic Lifting - CSV

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF, RDFS, OWL
import pandas as pd

g = Graph()
ex = Namespace("http://example.org/")
g.bind("ex", ex)

# Load the CSV data as a pandas Dataframe.
csv_data = pd.read_csv("task1.csv")

# Here I deal with spaces (" ") in the data. I replace them with "_" so that the URIs become valid.
csv_data = csv_data.replace(to_replace=" ", value="_", regex=True)

# Here I mark all missing/empty data as "unknown". This makes it easy to delete triples containing this later.
csv_data = csv_data.fillna("unknown")

# Loop through the CSV data, and then make RDF triples.
for index, row in csv_data.iterrows():
    # The names of the people act as subjects.
    subject = row['Name']
    # Create triples: e.g. "Cade_Tracey - age - 27"
    g.add((URIRef(ex + subject), URIRef(ex + "age"), Literal(row["Age"])))
    g.add((URIRef(ex + subject), URIRef(ex + "married"), URIRef(ex + row["Spouse"])))
    g.add((URIRef(ex + subject), URIRef(ex + "country"), URIRef(ex + row["Country"])))

    # If we want, we can add additional RDF/RDFS/OWL information, e.g.
    g.add((URIRef(ex + subject), RDF.type, FOAF.Person))

# I remove triples that I marked as unknown earlier.
g.remove((None, None, URIRef("http://example.org/unknown")))

# Clean printing of the graph.
print(g.serialize(format="turtle").decode())

CSV file for above example

"Name","Age","Spouse","Country"
"Cade Tracey","26","Mary Jackson","US"
"Bob Johnson","21","","Canada"
"Mary Jackson","25","","France"
"Phil Philips","32","Catherine Smith","Japan"

Semantic Lifting - XML

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD, RDFS
import xml.etree.ElementTree as ET

g = Graph()
ex = Namespace("http://example.org/TV/")
prov = Namespace("http://www.w3.org/ns/prov#")
g.bind("ex", ex)
g.bind("prov", prov)

tree = ET.parse("tv_shows.xml")
root = tree.getroot()

for tv_show in root.findall('tv_show'):
    show_id = tv_show.attrib["id"]
    title = tv_show.find("title").text

    g.add((URIRef(ex + show_id), ex.title, Literal(title, datatype=XSD.string)))
    g.add((URIRef(ex + show_id), RDF.type, ex.TV_Show))

    for actor in tv_show.findall("actor"):
        first_name = actor.find("firstname").text
        last_name = actor.find("lastname").text
        full_name = first_name + "_" + last_name
        
        g.add((URIRef(ex + show_id), ex.stars, URIRef(ex + full_name)))
        g.add((URIRef(ex + full_name), ex.starsIn, URIRef(ex + show_id)))
        g.add((URIRef(ex + full_name), RDF.type, ex.Actor))

print(g.serialize(format="turtle").decode())


XML Data for above example

<data>
    <tv_show id="1050">
        <title>The_Sopranos</title>
        <actor>
            <firstname>James</firstname>
            <lastname>Gandolfini</lastname>
        </actor>
    </tv_show>
    <tv_show id="1066">
        <title>Seinfeld</title>
        <actor>
            <firstname>Jerry</firstname>
            <lastname>Seinfeld</lastname>
        </actor>
        <actor>
            <firstname>Julia</firstname>
            <lastname>Louis-dreyfus</lastname>
        </actor>
        <actor>
            <firstname>Jason</firstname>
            <lastname>Alexander</lastname>
        </actor>
    </tv_show>
</data>

Semantic Lifting - HTML

from bs4 import BeautifulSoup as bs, NavigableString
from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import RDF

g = Graph()
ex = Namespace("http://example.org/")
g.bind("ex", ex)

html = open("tv_shows.html").read()
html = bs(html, features="html.parser")

shows = html.find_all('li', attrs={'class': 'show'})
for show in shows:
    title = show.find("h3").text
    actors = show.find('ul', attrs={'class': 'actor_list'})
    for actor in actors:
        if isinstance(actor, NavigableString):
            continue
        else:
            actor = actor.text.replace(" ", "_")
            g.add((URIRef(ex + title), ex.stars, URIRef(ex + actor)))
            g.add((URIRef(ex + actor), RDF.type, ex.Actor))

    g.add((URIRef(ex + title), RDF.type, ex.TV_Show))


print(g.serialize(format="turtle").decode())

HTML code for the example above

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title></title>
</head>
<body>
    <div class="tv_shows">
        <ul>
            <li class="show">
                <h3>The_Sopranos</h3>
                <div class="irrelevant_data"></div>
                <ul class="actor_list">
                    <li>James Gandolfini</li>
                </ul>
            </li>
            <li class="show">
                <h3>Seinfeld</h3>
                <div class="irrelevant_data"></div>
                <ul class="actor_list">
                    <li>Jerry Seinfeld</li>
                    <li>Jason Alexander</li>
                    <li>Julia Louis-Dreyfus</li>
                </ul>
            </li>
        </ul>
    </div>
</body>
</html>

WEB API Calls (In this case JSON)

import requests
import json
import pprint

# Retrieve JSON data from API service URL. Then load it with the json library as a json object.
url = "http://api.geonames.org/postalCodeLookupJSON?postalcode=46020&country=ES&username=demo"
data = requests.get(url).content.decode("utf-8")
data = json.loads(data)
pprint.pprint(data)


JSON-LD

import rdflib

g = rdflib.Graph()

example = """
{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  },
  "@id": "http://me.markus-lanthaler.com/",
  "name": "Markus Lanthaler",
  "homepage": "http://www.markus-lanthaler.com/"
}
"""

# json-ld parsing automatically deals with @contexts
g.parse(data=example, format='json-ld')

# serialisation does expansion by default
for line in g.serialize(format='json-ld').decode().splitlines():
    print(line)

# by supplying a context object, serialisation can do compaction
context = {
    "foaf": "http://xmlns.com/foaf/0.1/"
}
for line in g.serialize(format='json-ld', context=context).decode().splitlines():
    print(line)


INFO216, UiB, 2017-2020. All code examples are CC0.