
Matching authors against VIAF identities

At Ghent University Library we enrich catalog records with VIAF identities to enhance the search experience in the catalog. When searching for all the books about ‘Chekhov’ we want to match all name variants of this author. Consult VIAF http://viaf.org/viaf/95216565/#Chekhov,_Anton_Pavlovich,_1860-1904 and you will see many of them:

  • Chekhov
  • Čehov
  • Tsjechof
  • Txékhov
  • etc

Any of these name variants can appear in the catalog data if authority control is not in place (or not maintained). Searching for any of these names should return results for all the variants. In the past, maintaining an authority file was a labor-intensive, manual job for catalogers. Using results from Linked Data Fragments research by Ruben Verborgh (iMinds), the Catmandu-RDF tools created by Jakob Voss (GBV) and RDF-LDF by Patrick Hochstenbach, Ghent University started an experiment to automatically enrich authors with VIAF identities. In this blog post we report on the setup and results of this experiment, which will also be presented at ELAG2015.

Context

Three ingredients are needed to create a web of data:

  1. A scalable way to produce data.
  2. The infrastructure to publish data.
  3. Clients accessing the data and reusing them in new contexts.

On the production side there doesn’t seem to be any problem: libraries have no trouble creating huge datasets. Any transformation of library data to linked data will quickly generate an enormous number of RDF triples, as the size of publicly available datasets shows.

Also for accessing data, from a consumer’s perspective the “easy” part seems to be covered. Instead of thousands of APIs and many document formats for every dataset, SPARQL and RDF provide the programmer with a single protocol and document model.

The claim of the Linked Data Fragments researchers is that on the publication side, reliable queryable access to public Linked Data datasets largely remains problematic due to the low availability percentages of public SPARQL endpoints [Ref]. This is confirmed by a 2013 study by researchers from Pontificia Universidad Católica in Chile and the National University of Ireland, in which more than half of the public SPARQL endpoints were found to be offline 1.5 days per month, which amounts to an availability of less than 95% [Ref].

The source of this high rate of unavailability can be traced back to the service model of Linked Data, in which two extremes exist for publishing data (see image below).

On one side, data dumps (or dereferencing of URLs) can be made available, which requires a simple HTTP server and lots of processing power on the client side. On the other side, an open SPARQL endpoint can be provided, which requires a lot of processing power (hence, hardware investment) on the server side. With SPARQL endpoints, clients can demand the execution of arbitrarily complicated queries. Furthermore, since each client requests unique, highly specific queries, regular caching mechanisms are ineffective, because they can only be optimized for repeated identical requests.

This situation can be compared with giving end users either an SQL dump of a database or an open database connection on which any possible SQL statement can be executed. To a lesser extent, libraries are well aware of the different modes of operation between running OAI-PMH services and Z39.50/SRU services.

Linked Data Fragments researchers provide a third way, Triple Pattern Fragments, to publish data, which tries to offer the best of both worlds: access to a full dump of the dataset combined with a queryable and cacheable interface. For more information on the scalability of this solution I refer to the report presented at the 5th International USEWOD Workshop.

The experiment

VIAF doesn’t provide a public SPARQL endpoint, but a complete dump of the data is available at http://viaf.org/viaf/data/. In our experiments we used the VIAF (Virtual International Authority File) dump, which is made available under the ODC Attribution License. From this dump we created an HDT database. HDT provides a very efficient format to compress RDF data while maintaining browse and search functionality. Using command line tools, RDF/XML, Turtle and NTriples files can be compressed into an HDT file with an index. This standalone file can be used to query huge datasets without the need for a database. A VIAF conversion to HDT results in a 7 GB file and a 4 GB index.
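
As a rough sketch, such a conversion can be done with the hdt-cpp command line tools (the file names below are illustrative, and the exact options may differ between HDT tool versions):

$ rdf2hdt viaf.nt viaf.hdt    # compress the N-Triples dump into a single HDT file
$ hdtSearch viaf.hdt          # querying the file once also generates the accompanying index file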

Using the Linked Data Fragments server by Ruben Verborgh, available at https://github.com/LinkedDataFragments/Server.js, this HDT file can be published as a Node.js application.
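
A minimal Server.js configuration could look like the snippet below; the title, the datasource name "viaf" and the file path are illustrative assumptions, and the full set of options is documented in the Server.js README:

{
  "title": "VIAF Linked Data Fragments server",
  "datasources": {
    "viaf": {
      "title": "VIAF",
      "type": "HdtDatasource",
      "settings": { "file": "data/viaf.hdt" }
    }
  }
}

The server is then started with something like:

$ ldf-server config.json 5000 4    # listen on port 5000 with 4 worker processes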

For a demonstration of this server visit the iMinds experimental setup at: http://data.linkeddatafragments.org/viaf

Using Triple Pattern Fragments a simple REST protocol is available to query this dataset. For instance it is possible to download the complete dataset using this query:


$ curl -H "Accept: text/turtle" http://data.linkeddatafragments.org/viaf

If we only want the triples concerning Chekhov (http://viaf.org/viaf/95216565) we can provide a query parameter:


$ curl -H "Accept: text/turtle" http://data.linkeddatafragments.org/viaf?subject=http://viaf.org/viaf/95216565

Likewise, using the predicate and object query parameters, any combination of triple patterns can be requested from the server.


$ curl -H "Accept: text/turtle" http://data.linkeddatafragments.org/viaf?object="Chekhov"

The memory requirements of this server are small enough to run a copy of the VIAF database on a MacBook Air laptop with 8GB RAM.

Using specialised Triple Pattern Fragments clients, SPARQL queries can be executed against this server. For the Catmandu project we created a Perl client RDF::LDF which is integrated into Catmandu-RDF.

To request all triples from the endpoint use:


$ catmandu convert RDF --url http://data.linkeddatafragments.org/viaf --sparql 'SELECT * {?s ?p ?o}'

Or, only those triples that are about “Chekhov”:


$ catmandu convert RDF --url http://data.linkeddatafragments.org/viaf --sparql 'SELECT * {?s ?p "Chekhov"}'

In the Ghent University experiment a more direct approach was taken to match authors to VIAF. First, a MARC dump from the catalog is streamed into a Perl program using a Catmandu iterator. Then, we extract the 100 and 700 fields, which contain $a (name) and $d (date) subfields. These two subfields are combined into a search query, as if we would search:


Chekhov, Anton Pavlovich, 1860-1904
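
The core of the matching step is then a lookup of this combined string in our local VIAF copy. A minimal sketch with the Catmandu command line client is shown below; the schema.org name property is an assumption for illustration, and the actual gist may match on a different property or use the LDF interface directly:

$ catmandu convert RDF --url http://data.linkeddatafragments.org/viaf \
    --sparql 'SELECT ?s {?s <http://schema.org/name> "Chekhov, Anton Pavlovich, 1860-1904"}'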

If there is exactly one hit in our local VIAF copy, then the result is reported. A complete script to process MARC files this way is available at a GitHub gist. To run the program against a MARC dump execute the import_viaf.pl command:


$ ./import_viaf.pl --type USMARC file.mrc
000000089-2 7001  L $$aEdwards, Everett Eugene,$$d1900- http://viaf.org/viaf/110156902
000000122-8 1001  L $$aClelland, Marjorie Bolton,$$d1912-   http://viaf.org/viaf/24253418
000000124-4 7001  L $$aSchein, Edgar H.
000000124-4 7001  L $$aKilbridge, Maurice D.,$$d1920-   http://viaf.org/viaf/29125668
000000124-4 7001  L $$aWiseman, Frederick.
000000221-6 1001  L $$aMiller, Wilhelm,$$d1869- http://viaf.org/viaf/104464511
000000256-9 1001  L $$aHazlett, Thomas C.,$$d1928-  http://viaf.org/viaf/65541341

[edit: 2017-05-18 an updated version of the code is available as a Git project https://github.com/LibreCat/MARC2RDF ]

All the authors in the MARC dump will be exported. If there is exactly one match against VIAF, it will be added to the author field. We ran this command for one night in a single thread against 338.426 authors containing a date and found 135.257 exact matches in VIAF (=40%).

In a quite recent follow-up of our experiments, we investigated how LDF clients can be used in a federated setup. By combining in the LDF algorithm the triple results from many LDF servers, one SPARQL query can be run over many machines. This is demonstrated at the iMinds demo site, where a single SPARQL query can be executed over the combined VIAF and DBpedia datasets. A Perl implementation of this federated search is available in the latest version of RDF-LDF at GitHub.

We strongly believe in the success of this setup and the scalability of this solution, as demonstrated by Ruben Verborgh at the USEWOD Workshop. Using Linked Data Fragments, a range of solutions is available to publish data on the web. From simple data dumps to a full SPARQL endpoint, any service level can be provided given the resources available. For more than half a year DBpedia has been running an LDF server with 99.9994% availability on an 8 CPU, 15 GB RAM Amazon server, handling 4.5 million requests. Scaling out, services such as the LOD Laundromat clean 650.000 datasets and provide access to them using a single fat LDF server (256 GB RAM).

For more information on federated searches with Linked Data Fragments, visit the blog post by Ruben Verborgh at: http://ruben.verborgh.org/blog/2015/06/09/federated-sparql-queries-in-your-browser/

Day 17: Exporting RDF data with Catmandu

Yesterday we learned how to import RDF data with Catmandu. Exporting RDF can be as easy as this:

catmandu convert RDF --url http://d-nb.info/1001703464 to RDF

By default, the RDF exporter Catmandu::Exporter::RDF emits RDF/XML, an ugly and verbose serialization format of RDF. Let’s configure catmandu to use the also verbose but less ugly NTriples. This can either be done by appending --type ntriple on the command line or by adding the following to the config file catmandu.yml:

exporter:
  RDF:
    package: RDF
    options:
      type: ntriples

The NTriples format illustrates the “true” nature of RDF data as a set of RDF triples or statements, each consisting of three parts (subject, predicate, object).
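
For illustration, a single NTriples statement looks like this (the subject URI and the title value are made up, not taken from the record above):

<http://example.org/book/1> <http://purl.org/dc/terms/title> "An example title" .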

Catmandu can be used for converting from one RDF serialization format to another, but more specialized RDF tools, such as rapper, are more performant, especially for large data sets. Catmandu can better help to convert RDF data to JSON, YAML, CSV etc. and vice versa.
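
For example, a conversion from RDF/XML to NTriples with rapper might look like this (a sketch with placeholder file names; consult rapper's help output for the parser and serializer names supported by your installation):

$ rapper -i rdfxml -o ntriples rdf.xml > rdf.nt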

Let’s proceed with a more complex workflow and apply what we’ve learned at day 13 about OAI-PMH to another popular repository: http://arxiv.org. There is a dedicated Catmandu module Catmandu::ArXiv for searching the repository, but arXiv also supports OAI-PMH for bulk download. We could specify all options at the command line, but putting the following into catmandu.yml will simplify each call:

importer:
  arxiv-cs:
    package: OAI
    options:
      url: http://export.arxiv.org/oai2
      metadataPrefix: oai_dc
      set: cs

Now we can harvest all computer science papers (set: cs) for a selected day (e.g. 2014-12-19):

$ catmandu convert arxiv-cs --from 2014-12-19 --to 2014-12-19 to YAML

The repository may impose a delay of 20 seconds, so be patient. For more precise data, we’d better use the original data format from arXiv:

$ catmandu convert arxiv-cs --set cs --from 2014-12-19 --to 2014-12-19 --metadataPrefix arXiv to YAML > arxiv.yaml

The resulting format is based on XML. Have a look at the original data (requires module Catmandu::XML):

$ catmandu convert YAML to XML --field _metadata --pretty 1 < arxiv.yaml
$ catmandu convert YAML --fix 'xml_simple(_metadata)' to YAML < arxiv.yaml

Now we’ll transform this XML data to RDF. This is done with the following fix script, saved in file arxiv2rdf.fix:

xml_simple(_metadata)
retain_field(_metadata)
move_field(_metadata,m)

move_field(m.id,_id)
prepend(_id,"http://arxiv.org/abs/")

move_field(m.title,dc_title)
remove_field(m)

The following command generates one RDF triple per record, consisting of an arXiv article identifier, the property http://purl.org/dc/elements/1.1/title and the article title:

$ catmandu convert YAML to RDF --fix arxiv2rdf.fix < arxiv.yaml

To better understand what’s going on, convert to YAML instead of RDF, so the internal aREF data structure is shown:

$ catmandu convert YAML to YAML --fix arxiv2rdf.fix < arxiv.yaml

_id: http://arxiv.org/abs/1201.1733
dc_title: On Conditional Decomposability

This record looks similar to the records imported from RDF at day 16. The special field _id refers to the subject in RDF triples: a handy feature for small RDF graphs that share the same subject in all RDF triples. Nevertheless, the same RDF graph could have been encoded like this:

---
http://arxiv.org/abs/1201.1733:
  dc_title: On Conditional Decomposability
...

To transform more parts of the original record to RDF, we only need to map field names to prefixed RDF property names. Here is a more complete version of arxiv2rdf.fix:


xml_simple(_metadata)
retain_field(_metadata)
move_field(_metadata,m)
    
move_field(m.id,_id)
prepend(_id,"http://arxiv.org/abs/")
    
move_field(m.title,dc_title)
move_field(m.abstract,bibo_abstract)
    
move_field(m.doi,bibo_doi)
copy_field(bibo_doi,owl_sameAs)
prepend(owl_sameAs,"http://dx.doi.org/")
            
move_field(m.license,cc_license)
          
move_field(m.authors.author,dc_creator)
unless exists(dc_creator.0)
  move_field(dc_creator,dc_creator.0)
end         
            
do list(path=>dc_creator)
  add_field(a,foaf_Person)
  copy_field(forenames,foaf_name.0)
  copy_field(keyname,foaf_name.$append)
  join_field(foaf_name,' ')
  move_field(forenames,foaf_givenName)
  move_field(keyname,foaf_familyName)
  move_field(suffix,schema_honorificSuffix)
  remove_field(affiliation)
end 
    
remove_field(m)

The result is one big RDF graph for all records:

$ catmandu convert YAML to RDF --fix arxiv2rdf.fix < arxiv.yaml
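
A different serialization can be requested by setting the exporter type, analogous to the NTriples example earlier; the exact serializer name accepted by --type (e.g. turtle or ttl) may depend on your installation:

$ catmandu convert YAML to RDF --type turtle --fix arxiv2rdf.fix < arxiv.yaml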

Have a look at the internal aREF format by using the same fix with convert to YAML and try conversion to other RDF serialization forms. The most important part of transformation to RDF is to find matching RDF properties from existing ontologies. The example above uses properties from Dublin Core, Creative Commons, Friend of a Friend, Schema.org, and Bibliographic Ontology.

Continue to Day 18: Merry Christmas! >>

Day 16: Importing RDF data with Catmandu

A common problem of data processing is the large number of data formats, dialects, and conceptions. For instance the author field in one record format may differ from a similar field in another format in its meaning or name. As shown in the previous articles, Catmandu can help to bridge such differences, but it can also help to map from and to data structured in a completely different paradigm. This article will show how to process data expressed in RDF, the language of the Semantic Web and Linked Open Data.

RDF differs from previous formats, such as JSON, YAML, MARC, or CSV, in two important aspects:

  1. There are no records and fields: RDF data instead is a graph structure, built of nodes (“resources” or “values”) and directed links.
  2. Link types (“properties”) are identified by URIs and defined in “ontologies”. In theory this removes the common data processing problem mentioned above (a small example follows below).
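
To illustrate both points, here are two statements that form a tiny graph; all URIs below are made up for the example:

<http://example.org/book/1> <http://purl.org/dc/terms/creator> <http://example.org/person/42> .
<http://example.org/person/42> <http://xmlns.com/foaf/0.1/name> "Anton Chekhov" .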

Because graph structures are fundamentally different from record structures, there is no obvious mapping between RDF and records in Catmandu. For this reason you had better use dedicated RDF technology as long as your data is RDF. Catmandu, however, can help to process from RDF and to RDF, as shown today and tomorrow, respectively. Let’s first install the Catmandu module Catmandu::RDF for RDF processing:

$ cpanm --sudo Catmandu::RDF

If you happen to use this on a virtual machine from the Catmandu USB stick, you may first have to update another module to remove a nasty bug (the password is “catmandu”):

$ cpanm --sudo List::Util

You can now retrieve RDF data from any Linked Open Data URI like this:

$ catmandu convert RDF --url http://dx.doi.org/10.2474/trol.7.147 to YAML

We could also download RDF data into a file and parse the file with Catmandu afterwards:

$ curl -L -H 'Accept: application/rdf+xml' http://dx.doi.org/10.2474/trol.7.147 > rdf.xml
$ catmandu convert RDF --type rdfxml to YAML < rdf.xml
$ catmandu convert RDF --file rdf.xml to YAML # alternatively

Downloading RDF with the Catmandu::RDF option --url, however, is shorter and adds an _url field that contains the original source. The RDF data converted to YAML with Catmandu looks like this (I removed some parts to keep it shorter). The format is called “another RDF Encoding Form” (aREF) because it can be transformed from and to other RDF encodings:

---
_url: http://dx.doi.org/10.2474/trol.7.147
http://dx.doi.org/10.2474/trol.7.147:
  dct_title: Frictional Coefficient under Banana Skin@
  dct_creator:
  - <http://id.crossref.org/contributor/daichi-uchijima-y2ol1uygjx72>
  - <http://id.crossref.org/contributor/kensei-tanaka-y2ol1uygjx72>
  - <http://id.crossref.org/contributor/kiyoshi-mabuchi-y2ol1uygjx72>
  - <http://id.crossref.org/contributor/rina-sakai-y2ol1uygjx72>
  dct_date:
  - 2012^xs_gYear
  dct_isPartOf: <http://id.crossref.org/issn/1881-2198>
http://id.crossref.org/issn/1881-2198:
  a: bibo_Journal
  bibo_issn: 1881-2198@
  dct_title: Tribology Online@
http://id.crossref.org/contributor/daichi-uchijima-y2ol1uygjx72:
  a: foaf_Person
  foaf_name: Daichi Uchijima@
http://id.crossref.org/contributor/kensei-tanaka-y2ol1uygjx72:
  foaf_name: Kensei Tanaka@
http://id.crossref.org/contributor/kiyoshi-mabuchi-y2ol1uygjx72:
  foaf_name: Kiyoshi Mabuchi@
http://id.crossref.org/contributor/rina-sakai-y2ol1uygjx72:
  foaf_name: Rina Sakai@
...

The sample record contains a special field _url with the original source URL and six fields with URLs (or URIs), each corresponding to an RDF resource. The field with the original source URL (http://dx.doi.org/10.2474/trol.7.147) can be used as a starting point. Each subfield (dct_title, dct_creator, dct_date, dct_isPartOf) corresponds to an RDF property, abbreviated with a namespace prefix. To fetch data from these fields, we could use normal fix functions and JSON path expressions, as shown at day 7, but there is a better way:

Catmandu::RDF provides the fix function aref_query to map selected parts of the RDF graph to another field. Try to get the title field with this command:

$ catmandu convert RDF --url http://dx.doi.org/10.2474/trol.7.147 --fix 'aref_query(dct_title,title)' to YAML

More complex transformations should better be put into a fix file, so create file rdf.fix with the following content:

aref_query(dct_title,title)
aref_query(dct_date,date)
aref_query(dct_creator.foaf_name,author)
aref_query(dct_isPartOf.dct_title,journal)

If you apply the fix, there are four additional fields with data extracted from the RDF graph:

$ catmandu convert RDF --url http://dx.doi.org/10.2474/trol.7.147 --fix rdf.fix to YAML
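
Applied to the record above, the extracted fields should look roughly like this (abbreviated; the original aREF fields and the _url field remain in the record, the order of the author names may differ, and literal values are shown as plain strings):

title: Frictional Coefficient under Banana Skin
date: '2012'
author:
- Daichi Uchijima
- Kensei Tanaka
- Kiyoshi Mabuchi
- Rina Sakai
journal: Tribology Online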

The aref_query function also accepts a query language, similar to JSON path, but the path is applied to an RDF graph instead of a simple hierarchy. Moreover, one can limit results to plain strings or to URIs. For instance the author URIs can be accessed with aref_query(dct_creator.,author). This feature is especially useful if RDF data contains a property with multiple types of objects: literal strings and other resources. We can aggregate both with the following fixes:

aref_query(dct_creator@, authors)
aref_query(dct_creator.foaf_name@, authors)

Before proceeding you should add the following option to the config file catmandu.yml:

importer:
  RDF:
    package: RDF
    options:
      ns: 2014091

This makes sure that RDF properties are always abbreviated with the same prefixes, for instance dct for http://purl.org/dc/terms/.

Continue to Day 17: Exporting RDF data with Catmandu >>

Day 15: MARC to Dublin Core

Today we will look a bit further into MARC processing with Catmandu. By now you should already know how to start up the Virtual Catmandu (hint: see day 1) and the UNIX command prompt (hint: see day 2). We already saw a bit of MARC processing in day 9, and today we will show you how to transform MARC records into Dublin Core, as a preparation for creating RDF and Linked Data in later posts.

First I’m going to teach you how to process different types of MARC files. On the Virtual Catmandu system we provided five example MARC files. You can find them in your Documents folder:

  • Documents/camel.mrk
  • Documents/camel.usmarc
  • Documents/marc.xml
  • Documents/rug01.aleph
  • Documents/rug01.sample

When you examine these files with the UNIX less command you will see that the files all have slightly different formats:

$ less Documents/camel.mrk
$ less Documents/camel.usmarc
$ less Documents/marc.xml
$ less Documents/rug01.sample

There are many ways in which MARC data can be written into a file. Every vendor likes to use its own format. You can compare this with the different ways a text document can be stored: as Word, as Open Office, as PDF or as plain text. If we are going to process these files with catmandu, then we need to tell the system what the exact format is.

We will work today with the last example, rug01.sample, which is a small export out of the Aleph catalog of Ghent University Library. Ex Libris uses a special MARC format to structure their data, which is called Aleph sequential. We need to tell catmandu not only that our input file is in MARC but also that it is in this special Aleph format. Let’s try to create YAML to see what it gives:

$ catmandu convert MARC --type ALEPHSEQ to YAML < Documents/rug01.sample
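
The output represents each MARC record as an _id plus a record array, in which every entry holds a tag, two indicators and subfield code/value pairs. Abbreviated, and with illustrative indicator and subfield values, a record looks roughly like this:

_id: '000000002'
record:
- - '100'
  - '1'
  - ' '
  - a
  - Katz, Jerrold J.
- - '245'
  - '1'
  - '0'
  - a
  - 'Propositional structure and illocutionary force :'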

To transform this MARC file into Dublin Core we need to create a fix file. You can use the UNIX command nano for this (hint: see day 5 how to create files with nano). Create a file dublin.fix:

$ nano dublin.fix

And type into nano the following fixes:

marc_map(245,title)

marc_map(100,creator.$append)
marc_map(700,creator.$append)

marc_map(020a,isbn.$append)
marc_map(022a,issn.$append)

marc_map(260b,publisher)
marc_map(260c,date)

marc_map(650a,subject.$append)

remove_field(record)

Every MARC record contains in the 245-field the title of the record. In the first line we map the MARC-245 field to a new field in the record called title:

marc_map(245,title)

In the second and third line we map authors to a field creator. In the rug01.sample file the authors are stored in the MARC-100 and MARC-700 fields. Because there is usually more than one author in a record, we need to $append them to create an array (a list) of one or more creators.

In line 4 and line 5 we do the same trick to filter the ISBN and ISSN numbers out of the record, which we store in the separate fields isbn and issn (indeed these are not Dublin Core fields, we will process them later).

In line 6 and line 7 we read the MARC-260 field which contains publisher and date information. Here we don’t need the $append trick because there is usually only one 260-field in a MARC record.

In line 8 the subjects are extracted from the 650-field using the same $append trick as above. Notice that we only extracted the $a subfield? If you want to add more subfields you can list them as in marc_map(650abcdefgh,subject.$append)

Given the dublin.fix file above we can execute the filtering command like this:

$ catmandu convert MARC --type ALEPHSEQ to YAML --fix dublin.fix < Documents/rug01.sample

As always you can type | less at the end of this command to slow down the screen output, or store the results into a file with > results.txt. Hint:

$ catmandu convert MARC --type ALEPHSEQ to YAML --fix dublin.fix < Documents/rug01.sample | less
$ catmandu convert MARC --type ALEPHSEQ to YAML --fix dublin.fix < Documents/rug01.sample > results.txt

The results should look like this:

_id: '000000002'
creator:
- Katz, Jerrold J.
date: '1977.'
isbn:
- '0855275103 :'
publisher: Harvester press,
subject:
- Semantics.
- Proposition (Logic)
- Speech acts (Linguistics)
- Generative grammar.
- Competence and performance (Linguistics)
title: Propositional structure and illocutionary force :a study of the contribution of sentence meaning to speech acts /Jerrold J. Katz.
...

Congratulations, you’ve created your first mapping file to transform library data from MARC to Dublin Core! We still need to add a bit more cleaning to delete some periods and commas here and there, but as is we already have our first mapping.

Below you’ll find a complete example. You can read more about our Fix language online.

marc_map(245,title, -join => " ")

marc_map(100,creator.$append)
marc_map(700,creator.$append)

marc_map(020a,isbn.$append)
marc_map(022a,issn.$append)

replace_all(isbn.*," .","")
replace_all(issn.*," .","")

marc_map(260b,publisher)
replace_all(publisher,",$","")

marc_map(260c,date)
replace_all(date,"\D+","")

marc_map(650a,subject.$append)
remove_field(record)

Continue to Day 16: Importing RDF data with Catmandu >>