Semantic Web Challenges
Bio2RDF and Kibio federated query in Life Science challenge
Organizers: François Belleau, Mickael Leclerc, Régis Ongaro-Carcy, Arnaud Droit
How can we run the Wikidata FAIR paper query without using the Wikidata SPARQL endpoint (https://tinyurl.com/53ra32bp)? Participants must obtain the same results using federated query techniques.
Two different tasks are proposed; participants select one:
(1) Running the Wikidata query using a federated SPARQL query, or another programming approach, that consumes Bio2RDF's endpoints.
(2) Running the Wikidata query using any programming approach that consumes Kibio's Elasticsearch indexes.
Both endpoints, Bio2RDF and Kibio, store the same datasets extracted from the Wikidata JSON dump: Bio2RDF exposes SPARQL endpoints and RDF triples, while Kibio.science exposes the standard Elasticsearch API and JSON-LD documents.
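To make the two access patterns concrete, the sketch below builds, but does not send, an HTTP request for a federated SPARQL query (task 1) and an Elasticsearch query body (task 2). The endpoint URLs, the graph pattern, and the index field name are placeholders for illustration, not the actual Bio2RDF or Kibio deployments or the FAIR paper query itself.

```python
import json
import urllib.parse
import urllib.request


def build_federated_sparql_request(endpoint_url: str,
                                   remote_endpoint: str) -> urllib.request.Request:
    """Build (but do not send) a POST request for a federated SPARQL query.

    The SERVICE clause delegates part of the query to a remote endpoint,
    which is the core mechanism of SPARQL 1.1 federation. The graph
    pattern here is a placeholder, not the real FAIR-paper query.
    """
    query = f"""
    SELECT ?item ?label WHERE {{
      SERVICE <{remote_endpoint}> {{
        ?item rdfs:label ?label .
      }}
    }} LIMIT 10
    """
    body = urllib.parse.urlencode({"query": query}).encode("utf-8")
    return urllib.request.Request(
        endpoint_url,
        data=body,
        headers={"Accept": "application/sparql-results+json",
                 "Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )


def build_elasticsearch_query(label: str) -> str:
    """Build an Elasticsearch query body matching a label field.

    The field name 'labels.en' is an assumption about the index mapping,
    not Kibio's documented schema.
    """
    return json.dumps({"query": {"match": {"labels.en": label}}, "size": 10})
```

Sending the first request to a real SPARQL endpoint (e.g. with `urllib.request.urlopen`) would return a JSON result set; the second body would be POSTed to an index's `_search` route.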
Knowledge Base Construction from Pre-trained Language Models (LM-KBC)
Pre-trained language models (LMs) have advanced a range of semantic tasks and have also shown promise for extracting knowledge from the models themselves. Although several works have explored this ability in a setting called probing or prompting, the viability of knowledge base construction from LMs has not yet been explored. In this challenge, participants are asked to build actual knowledge bases from LMs, for given subjects and relations. In a crucial difference from existing probing benchmarks like LAMA (Petroni et al., 2019), we make no simplifying assumptions on relation cardinalities, i.e., a subject-entity can stand in relation with zero, one, or many object-entities. Furthermore, submissions need to go beyond merely ranking predictions and must materialize outputs, which are evaluated with the established KB metrics of precision and recall. The challenge comes with two tracks: (i) a BERT-type LM track with low computational requirements, and (ii) an open track, where participants can use any LM of their choice.
Semantic Answer Type, Entity, and Relation Linking Task (SMART2022)
The SMART 2022 challenge consists of three tasks: semantic answer type prediction, entity linking, and relation linking. Given a question in natural language, the tasks are to predict the answer type of the question and to identify the entities and relations in the question using a target ontology (e.g., DBpedia or Wikidata). This is a continuation of the SMART 2020 and SMART 2021 challenges.
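As a toy illustration of the entity linking task's input/output shape, the sketch below matches question text against a tiny hand-made label-to-IRI table. The table entries are invented for illustration; real SMART systems use trained linkers and disambiguation rather than lowercase substring matching.

```python
# Toy label-to-IRI table standing in for a target ontology such as DBpedia;
# these two entries are illustrative, not a real lookup service.
LABEL_TO_IRI = {
    "berlin": "http://dbpedia.org/resource/Berlin",
    "angela merkel": "http://dbpedia.org/resource/Angela_Merkel",
}


def link_entities(question: str, table=LABEL_TO_IRI):
    """Return (surface form, IRI) pairs for table labels found in the question.

    Exact lowercase substring matching is a deliberate simplification that
    only demonstrates what an entity-linking output looks like.
    """
    q = question.lower()
    return [(label, iri) for label, iri in table.items() if label in q]
```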
Semantic Reasoning Evaluation Challenge (SemREC 2022)
Despite the development of several ontology reasoning optimizations, traditional methods either do not scale well or cover only a subset of OWL 2 language constructs. As an alternative, neuro-symbolic approaches are gaining significant attention; however, the existing methods cannot deal with very expressive ontology languages. In addition, some SPARQL query engines support reasoning, but their performance is likewise limited. To find and remove these performance bottlenecks in the reasoners, we ideally need several real-world ontologies that span a broad spectrum in terms of size and expressivity. However, that is often not the case. One potential reason ontology developers do not build ontologies that vary in size and expressivity is precisely the performance bottleneck of the reasoners. SemREC aims to address this chicken-and-egg problem.
The second edition of this challenge includes the following tasks:
Submit a real-world ontology that is challenging in terms of reasoning time or memory consumed during reasoning.
Submit an ontology/RDFS reasoner that uses neuro-symbolic techniques, or a SPARQL query engine that supports reasoning.
Semantic Web Challenge on Tabular Data to Knowledge Graph Matching (SemTab 2022)
This challenge aims at benchmarking systems dealing with the tabular data to KG matching problem, so as to facilitate their comparison on the same basis and the reproducibility of the results.
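A minimal sketch of the cell-entity annotation subtask: match a table cell string to the best candidate KG entity by normalized string similarity, abstaining below a cutoff. The candidate entities and the cutoff are illustrative, and competitive SemTab systems combine such lexical matching with context from the rest of the table.

```python
from difflib import SequenceMatcher


def match_cell(cell: str, candidates: dict, min_score: float = 0.8):
    """Return (best entity id, similarity) for a table cell, or (None, 0.0).

    `candidates` maps entity ids to labels; similarity is a plain
    normalized-string ratio, a deliberate simplification of real matchers.
    """
    def norm(s):
        # Lowercase and collapse whitespace before comparing.
        return " ".join(s.lower().split())

    best_id, best_score = None, 0.0
    for entity_id, label in candidates.items():
        score = SequenceMatcher(None, norm(cell), norm(label)).ratio()
        if score > best_score:
            best_id, best_score = entity_id, score
    if best_score < min_score:
        return None, 0.0  # abstain rather than emit a low-confidence match
    return best_id, best_score
```

Abstention matters for the benchmark: annotating a cell with a wrong entity typically costs more than leaving it unannotated.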