CovidPubGraph: A FAIR Information Graph of COVID-19 Publications

RDF data model design

The ontology behind our knowledge graph was derived from the source from which it was extracted, i.e., the full texts of publications provided as part of the CORD-19 dataset. The ontology was designed to support search, question answering and machine learning. At the time of writing, our dataset is based on CORD-19 version 2021-11-08. Our conversion process is implemented in Python 3.6 with RDFLib 5.0.0. We make our source code publicly available to ensure the reproducibility of our results and the rapid conversion of new CORD-19 versions. One version of the generated RDF dataset can be found at Zenodo22.

RDF namespaces

To facilitate the reusability of our knowledge graph, we represent our data using widely used vocabularies and namespaces, as shown in Listing 1.

RDF data model

Figure 1 shows important classes (e.g., papers, authors, sections, bibliographic entries, and named entities) as well as predicates (e.g., first name, last name, license).

Fig. 1: UML class diagram of the CovidPubGraph Ontology.


We represent bibliographic information of papers using four vocabularies: bibo, bibtex, fabio, and schema (see namespaces above). Important attributes include the title, PMID, DOI, publication date, publisher, publisher URI, license and authors. For each paper, we store provenance information. In particular, our code records the reference to the original CORD-19 raw data as well as the time at which we generated the resource. The URIs of our generated Paper resources follow the format resource/<paperId>, where <paperId> is the unique paper id within the CORD-19 dataset. An example resource is given in Listing 2.
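As a minimal sketch of this URI scheme, the snippet below mints a Paper URI from a CORD-19 paperId and serializes a couple of bibliographic statements as N-Triples lines. The base URI, the example paperId, and the predicate IRIs are illustrative placeholders, not necessarily the exact terms used by CovidPubGraph.

```python
# Sketch: minting a Paper URI from a CORD-19 paperId and emitting a few
# bibliographic statements as N-Triples lines. Base URI and predicate
# IRIs are illustrative placeholders.
BASE = "http://example.org/resource/"

def paper_uri(paper_id: str) -> str:
    """Paper resource URIs follow the format resource/<paperId>."""
    return BASE + paper_id

def triple(s: str, p: str, o: str, literal: bool = False) -> str:
    """Serialize one statement; objects are IRIs unless literal=True."""
    obj = f'"{o}"' if literal else f"<{o}>"
    return f"<{s}> <{p}> {obj} ."

paper = paper_uri("0a00a6df208e068e7aa369fb94641434ea0e6070")  # made-up id
lines = [
    triple(paper, "http://purl.org/ontology/bibo/doi", "10.1000/example", literal=True),
    triple(paper, "http://purl.org/dc/terms/title", "An example title", literal=True),
]
print("\n".join(lines))
```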


Authors are represented in FOAF. Important attributes include the first, middle, and last names as well as email addresses and institutions.
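A hypothetical sketch of mapping an author record onto FOAF terms; the input dict shape and helper are made up for illustration, and only the property local names come from the FOAF vocabulary itself.

```python
# Sketch: mapping an author record onto FOAF-style properties.
# The record shape is hypothetical; property names follow FOAF.
FOAF = "http://xmlns.com/foaf/0.1/"

def author_triples(author_uri: str, rec: dict) -> list:
    mapping = {
        "first": FOAF + "firstName",
        "last": FOAF + "lastName",
        "email": FOAF + "mbox",
    }
    # Emit one (subject, predicate, object) tuple per known attribute.
    return [(author_uri, mapping[k], v) for k, v in rec.items() if k in mapping]

triples = author_triples("ex:author1", {"first": "Ada", "last": "Lovelace"})
```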


Papers are further subdivided by section and the corresponding information is expressed in the SALT ontology23. We keep track of a set of predefined sections including Abstract, Introduction, Background, Related Work, Preliminaries, Conclusion, Experiment and Discussion. In case another section heading appears in the paper, we assign it to the default section Body. We further subdivide a section using cvdo:hasSection. An example is given in Listing 3.
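The section-normalization rule described above (predefined headings keep their own class, everything else falls back to Body) can be sketched as:

```python
# Sketch of the section-assignment rule: headings from the predefined set
# map to their own section class; any other heading becomes "Body".
PREDEFINED = {
    "abstract", "introduction", "background", "related work",
    "preliminaries", "conclusion", "experiment", "discussion",
}

def section_class(heading: str) -> str:
    h = heading.strip().lower()
    return h.title() if h in PREDEFINED else "Body"
```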


References to other sections, figures and tables in the text are resolved and stored as RDF using Bibref. Important attributes are the anchor of the reference (e.g., the number of the section, figure, or table), its source string in the text (nif:referenceContext) together with its position in the text (nif:beginIndex, nif:endIndex), as well as the referenced object (its:taIdentRef), which can be a paper (BibEntry), a figure (Figure), or a table (Table).

Named entities

As machine learning and question answering often rely on named entities and their locations in texts, we annotate CORD-19 papers accordingly and represent this information with the NIF 2.0 Core Ontology. Further details of our entity linking process are described in the Linking Section.

RDF example resources

Listing 2 provides an example of a paper represented as an RDF resource. Listing 3 shows an example of a section resource. Each section is linked to its text string via nif:isString and its title via bibtex:hasTitle. If a section contains references to other papers, figures or tables (e.g., (1-3), (4,5), Figure 1A, Fig. 1, etc.), we represent a reference in RDF as follows: we represent the anchor of the reference with nif:anchorOf (e.g., the number of a figure), the start position of the reference with nif:beginIndex, the end position of the reference with nif:endIndex, the source section of the reference with nif:referenceContext, and the referenced target with its:taIdentRef (e.g., a bibtex entry, figure or table). An example is shown in Listing 4. Listing 5 shows an example of provenance information.
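To make the offset bookkeeping concrete, here is a small stdlib-only sketch that locates an in-text reference such as "Fig. 1" and records the attributes named above. The predicate names match the listing; the dict representation and the example URIs are only for illustration.

```python
def reference_annotation(section_text: str, anchor: str, target: str,
                         section_uri: str) -> dict:
    """Locate `anchor` in the section text and record its NIF attributes.

    nif:beginIndex / nif:endIndex are character offsets into the section
    string; the dict form here is illustrative, not the RDF serialization.
    """
    begin = section_text.index(anchor)
    return {
        "nif:anchorOf": anchor,
        "nif:beginIndex": begin,
        "nif:endIndex": begin + len(anchor),
        "nif:referenceContext": section_uri,
        "its:taIdentRef": target,
    }

ann = reference_annotation("As shown in Fig. 1, ...", "Fig. 1",
                           "ex:Figure1", "ex:paper1_Body")
```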


We link our dataset to other data sources to ensure its reusability and integrability, as well as to improve its use for search, question answering and structured machine learning. We generate links from our paper and author resources to publicly available related knowledge bases. Moreover, we extract named entities related to diseases, genes, and cells from all converted papers and link them to three external knowledge bases.

Linking publications, authors and institutes

We link publications in our knowledge graph to six other datasets using the owl:sameAs and rdfs:seeAlso predicates (see top six rows of Table 2). To the best of our knowledge, these six datasets are the most relevant RDF datasets that deal with the same publication data. We leave it to future work to link our dataset to non-RDF datasets such as Covid19-KG12 and Wikidata Scholia24.

Cord19-NEKG and our dataset use the same CORD-19 paperId, making the linking process straightforward. For LitCovid, we use the PubMed Central Id (PMC-id) that is provided as part of CORD-19. For Covid-19-Literature and Cord-19-on-FHIR, we make use of sha hash values from CORD-19. Moreover, we link our dataset to the publications' JSON files in Cord-19-on-FHIR with the predicate rdfs:seeAlso. Listing 6 shows an example of linked publications from our dataset CovidPubGraph to Cord19-NEKG and LitCovid.
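Because these links rest on shared identifiers (paperId, PMC-id, or sha hash), generating them reduces to a join on the common key. A minimal sketch, in which the base URIs and the {identifier: resourceURI} map shape are assumptions:

```python
# Sketch: generating owl:sameAs links via a shared identifier such as the
# CORD-19 paperId. Dataset URIs below are illustrative placeholders.
OWL_SAME_AS = "http://www.w3.org/2002/07/owl#sameAs"

def same_as_links(our_papers: dict, other_papers: dict) -> list:
    """Join two {identifier: resourceURI} maps on their common keys."""
    return [(our_papers[k], OWL_SAME_AS, other_papers[k])
            for k in our_papers.keys() & other_papers.keys()]

links = same_as_links(
    {"id1": "ex:ours/id1", "id2": "ex:ours/id2"},
    {"id2": "ex:nekg/id2", "id3": "ex:nekg/id3"},
)
```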

We link the resources of both our authors and institutes to the Microsoft Academic Knowledge Graph (MAKG)25 using the latest version of our link discovery framework LIMES26. For linking the authors, LIMES is configured to discover owl:sameAs links between our instances of foaf:Person and Microsoft's makg:Author. For linking the institutes, we look for links between instances of type dbo:EducationalInstitution from our knowledge graph and MAKG's resources of type makg:Affiliation. LIMES configuration files for linking authors and institutes are available as part of our source code.

Linking named entities

We apply entity linking to connect entities derived from the sections of papers to other knowledge bases. This process involves two steps: (1) entity extraction and (2) entity linking. For the extraction step, we use Scispacy27 in version 0.2.4 in conjunction with the model en_ner_bionlp13cg_md, which allows the extraction of biomedical entities such as diseases, genes and cells. Scispacy is a specialized NLP library based on the spaCy library. The NER model in spaCy is a transition-based chunking model that represents tokens as hashed embedded representations of the prefix, suffix, shape and lemmatized features of individual words27.

For the linking step, we adapt the entity linking framework MAG28 to link our extracted resources to the three knowledge bases Sider19, Kegg20 and DrugBank18, using their RDF versions provided by the Bio2RDF project. We adapt MAG by creating a search index for each of the external knowledge bases and running MAG once per knowledge base. The output is a set of entities in the NLP Interchange Format (NIF). In Listing 7, we provide an example for the named entity "folic acid".
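The shape of such a NIF entity annotation can be sketched as follows, using the char=begin,end fragment convention of NIF context URIs; the context URI, sample sentence, and target IRI are made-up placeholders, and the real pipeline output is what Listing 7 shows.

```python
# Sketch: a NIF-style annotation for the entity "folic acid", with a
# char=begin,end fragment identifier on the context URI. All URIs and
# the sample sentence are illustrative.
def nif_entity(context_uri: str, text: str, surface: str, target: str) -> dict:
    begin = text.index(surface)
    end = begin + len(surface)
    return {
        "uri": f"{context_uri}#char={begin},{end}",
        "nif:anchorOf": surface,
        "nif:beginIndex": begin,
        "nif:endIndex": end,
        "its:taIdentRef": target,
    }

ent = nif_entity("ex:paper1_Body", "Patients received folic acid daily.",
                 "folic acid", "ex:drugbank/folic-acid")
```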

Automated generation of CovidPubGraph

CORD-19 uploaded new data almost every day during the second half of 2020. Due to this fact, we have to automate the process of updating our knowledge graph. To this end, we developed a pipeline that automates the entire process, which can be found in Fig. 2. This pipeline comprises several steps:

  1. Crawling. We start by crawling the latest version as a zip file from the CORD-19 website, which includes a CSV metadata file and JSON parsed full texts of scientific papers about the coronavirus.

  2. RDF conversion. Then, we convert the CORD-19 data into an RDF knowledge graph with a Python script using the RDFLib library.

  3. Linking. We integrate the AGDISTIS library into the generation process to extract and link the named entities from abstracts of the scholarly articles. Moreover, we carry out the entity linking tasks (i.e., linking publications and authors to other datasets) by applying the link discovery framework LIMES.

  4. KG Update. We upload the new version of the CovidPubGraph dumps to the HOBBIT server as well as to the Virtuoso triple store.

Fig. 2

Starting from 2021, CORD-19 publishes new data only every two weeks. Therefore, we keep our KG up to date by crawling the new version of the CORD-19 dataset biweekly. Then, we follow the KG creation procedure presented in Fig. 2. Since the dataset is still not too large to be regenerated, we regenerate the entire dataset biweekly. Nevertheless, an automated incremental update is part of our future plans.
