Create, Structure, Enrich, Annotate and Publish:


FAIR data from one Suite

Create

Wikibase4Research

Your main tool to create, store and manage the Linked Open Data of your institution.


Features

  • Collaborative editing
  • User and access-management
  • Version control
  • Multilingual
  • SPARQL endpoint and query editor
  • Reconciliation with OpenRefine
  • Mappings to standard ontologies
  • Links to Wikidata, GND, Iconclass, and more
  • Custom frontend options
  • Fully open source

What does it do?

Wikibase4Research is your institution's central tool to create, store, and manage Linked Open Data collaboratively, with user and access management and full version control.


How does it integrate?

Semantic Kompakkt and the CPS pipeline can be configured to use Wikibase4Research instances as data sources. Antelope can be integrated via a dedicated plug-in. Import/export routes to Wikidata, the NFDI knowledge graphs, and other European knowledge graph resources enable further data re-use.

Wikibase4Research Integration

Architecture

Software Architecture of the WikiBase for Research Service

Wikibase4Research builds on existing open source tools and libraries. It also features an OpenRefine reconciliation endpoint and multiple possibilities for frontend customization based on native MediaWiki or static site generation technologies.
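Like any stock Wikibase, such an instance exposes its data through a SPARQL endpoint reachable over plain HTTP. A minimal sketch in Python of how a query request is assembled; the endpoint URL is a placeholder, and the `P31`/`Q1` identifiers stand in for whatever property and item IDs a given instance defines:

```python
from urllib.parse import urlencode

# Hypothetical endpoint of a Wikibase4Research instance (placeholder, not a real URL).
ENDPOINT = "https://example.org/query/sparql"

# List ten items with their English labels. The wdt:/wd: prefixes refer to the
# instance's own property and item namespaces, as in any stock Wikibase;
# P31/Q1 below are placeholders for instance-specific identifiers.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q1 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

# Wikibase SPARQL endpoints accept a GET request with `query` and `format` parameters.
request_url = ENDPOINT + "?" + urlencode({"query": query, "format": "json"})
print(request_url[:60])
```

The same URL works from any HTTP client or from OpenRefine, which is how the reconciliation and bulk-enrichment workflows described above connect to the instance.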

Case Studies

Collections from Antwerp’s Foto Museum

The FOMU uses Wikibase4Research to organise various data projects focused on the museum's collection. The Gevaert Paper Project hosts images and metadata on objects from the historic Agfa-Gevaert photo paper collection and features a custom frontend built on the Snowman static-site framework. Other data collections focus on individual photographers' archives, such as the Marie-Françoise Plissart archive, or on the biographies of historic Belgian photographers.

Documenting Ukrainian Cultural Heritage

This project organises photographic documentation and associated metadata of culturally significant buildings in Ukraine that are threatened by the ongoing war. Data on building damage and destruction events can be searched and visualised via the SPARQL endpoint. Due to the sensitive nature of the information, the project is not accessible on the public web without an account and password login.

Sharing Knowledge in the Arts

The project, coordinated by the Critical Media Lab in Basel, focuses on “THEswissTHING”, a ’90s Bulletin Board System (BBS), website, and media lab for artists and cultural producers. Digital and analogue documents from the archive of artist Barbara Strebel are organised and structured into a Wikibase4Research archive. An events-oriented data model prioritises connections between agents and activities over classical archival records to describe the socio-technical infrastructure.

Partners

Visualize

Semantic Kompakkt

View and annotate 2D images, 3D models, and other AV media within a Linked Open Data environment.



Features

  • Collaborative annotation
  • User and access-management
  • Version control
  • Multilingual
  • Wide range of file formats, including 3D meshes and point clouds
  • SPARQL endpoint and query editor
  • IIIF 3D compliant
  • Links to Wikidata, GND, Iconclass, and more
  • Host it yourself or use the free public instance
  • Fully open source

What does it do?

Research data is increasingly multi-modal. Semantic Kompakkt allows you to upload, view and annotate 2D, 3D and AV media files while benefiting from the data management capabilities of Wikibase4Research.

How does it integrate?

Semantic Kompakkt includes all integration capabilities of Wikibase4Research and adds the Kompakkt viewer environment, which allows 3D data and annotations to be exchanged with other IIIF 3D-compliant viewers. Media from Semantic Kompakkt can easily be integrated into publications generated by CPS.
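Annotation exchange between IIIF-compliant viewers builds on the W3C Web Annotation Data Model. A minimal sketch in Python of what such an annotation looks like; the media URL and selector value are illustrative placeholders, and the exact selector mechanism for 3D regions is still being worked out by the IIIF 3D community:

```python
import json

# A minimal W3C Web Annotation (https://www.w3.org/TR/annotation-model/).
# The media URL and selector value below are illustrative placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "describing",
    "body": {
        "type": "TextualBody",
        "value": "Detail of the ceiling fresco",
        "language": "en",
    },
    "target": {
        "source": "https://example.org/media/model.glb",
        # How regions of a 3D model are selected is still under discussion
        # in the IIIF 3D community group; a fragment selector is one option.
        "selector": {"type": "FragmentSelector", "value": "xyz=1,2,3"},
    },
}

serialized = json.dumps(annotation, indent=2)
print(serialized)
```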

Semantic Kompakkt Integrations

Architecture

Software Architecture of the Semantic Kompakkt Service

Semantic Kompakkt integrates Kompakkt's open source Babylon.js viewer technology with Wikibase4Research as its LOD metadata management solution. The OpenRefine reconciliation endpoint allows for bulk metadata upload and enrichment.
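Reconciliation endpoints of this kind follow the OpenRefine/W3C Reconciliation Service API, in which candidate matches are requested as a JSON batch of named queries. A sketch in Python of such a batch; the entity names and the `Q5` type ID are illustrative, not necessarily valid identifiers on a given instance:

```python
import json

# A reconciliation batch as sent by OpenRefine to a reconciliation endpoint:
# each key names one query; `type` narrows candidates to items of that class.
# "Q5" here is an illustrative type ID for the target instance.
queries = {
    "q0": {"query": "Marie-Françoise Plissart", "type": "Q5", "limit": 3},
    "q1": {"query": "Agfa-Gevaert", "limit": 3},
}

# OpenRefine submits the batch as a form field named `queries`
# containing the JSON-serialised object.
form_data = {"queries": json.dumps(queries)}
print(form_data["queries"])
```

The endpoint answers with a matching JSON object of candidate entities per query key, which OpenRefine then presents for bulk confirmation.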

Case Studies

NFDI4Culture

The programme ‘Give your Data More Meaning in all Dimensions’, organised by NFDI4Culture, works with eight projects from institutions in Germany, Switzerland, and the Netherlands to test the capacity of Semantic Kompakkt to hold diverse sets of 3D objects and contextual metadata and to make that data available for further reuse, querying, and visualisation.

Corpus of Baroque Ceiling Paintings in Germany

3D models of Baroque castles in Germany and associated metadata about the physical architecture of the castles and the paintings within them have been uploaded to Semantic Kompakkt and its Wikibase backend. The models are annotated with art historical texts and additional 2D images. The project is coordinated by the Institute of Art History at the Ludwig Maximilian University of Munich.

Manor Houses in the Baltic Sea Region

Historical research, 3D models, and scans of a range of manor houses from the Baltic Sea countries are uploaded to a custom instance of Semantic Kompakkt and Wikibase4Research. Combining a custom-designed interface running on Semantic MediaWiki with the 3D viewing environment, the project offers multiple interactive data visualisations to browse and explore the complex architecture and history of the manor houses.

Partners

Annotate

Antelope

Annotate different types of data with terminology search and entity linking and integrate it anywhere.



Features

  • Easy-to-use graphical interface
  • Terminology search across Wikidata, GND, Iconclass, TIB Terminology Service and more
  • Built-in visualizations of search results
  • Entity linking with standard and custom vocabularies
  • Image recognition and annotation with iArt
  • Integration in third-party data management services via API or iFrame
  • Host it yourself or use the free public instance
  • Fully open source

What does it do?

Antelope supports your data annotation with terminology search, entity linking, and image recognition.

How does it integrate?

The service framework is accessible both via a frontend web portal and as a separate data service. It can be integrated into third-party research data management (RDM) systems via API or iFrame embedding.

Antelope Integration

Architecture

Software Architecture of the Antelope Service

Antelope extends the functionalities of existing tools (Falcon 2.0; iArt; TIB Terminology Service) into a common framework with the goal of introducing automation in assisted data curation and annotation workflows.
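In practice, a terminology lookup against such a service reduces to a single HTTP search request. A hedged sketch in Python, assuming an OLS-style search API like the one underlying the TIB Terminology Service; the base URL and parameter names are assumptions for illustration, not a documented Antelope endpoint:

```python
from urllib.parse import urlencode

# Assumed OLS-style search endpoint; check the service documentation
# for the actual base URL and parameters of your instance.
BASE_URL = "https://example.org/terminology/api/search"

params = {
    "q": "baroque ceiling painting",  # free-text search term
    "rows": 5,                        # limit the number of candidate terms
}

search_url = BASE_URL + "?" + urlencode(params)
print(search_url)
```

The same request shape is what an RDM system would issue when embedding Antelope via its API rather than the graphical frontend.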

Case Studies

NFDI4Culture

Antelope provides terminology search and lookup services designed to meet the needs of the NFDI4Culture community. Terminology sources include GND, Iconclass, Getty Vocabularies, among others. New sources can be added on demand. Image recognition via iArt includes the Iconclass terminology by default, and users can upload custom dictionaries, too.

PhiWiki

PhiWiki is a software application that enables philosophers and other humanities scholars to semantically capture data on the ideas and concepts of their discipline and to discover new connections within this data. Using Antelope, philosophical texts are tagged with terms from a domain-specific standard vocabulary, and the results are presented interactively via a Semantic Wikibase.

Partners

Publish

Computational Publishing

Publish using LOD directly from any Wikibase as multi-format outputs – web, print, or hybrid.



Features

  • Collaborative
  • Use data from Wikidata and other sources
  • Embed code and media
  • Output as multi-format and LOD (RDFa, JSON)
  • Interactive outputs: Jupyter Notebooks
  • Customizable pipelines
  • Open-source software

What does it do?

CPS enables publishing directly from digital collections and data. Users can create scripts to retrieve text, media, code, or data and output them as multi-format publications, including LOD versions.

Computational Publishing Service

How does it integrate?

The core pipeline uses Wikibase4Research for LOD storage, Jupyter Notebooks for authoring, a render engine for multi-format outputs, and Git for publication storage. Antelope and Semantic Kompakkt have pipeline integration options.
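The authoring step of such a pipeline can be as small as a notebook cell that turns retrieved records into markdown for the render engine. A minimal sketch in Python; the inline sample records stand in for data fetched from a Wikibase and are illustrative, not real collection data:

```python
# Sample records standing in for items retrieved from a Wikibase SPARQL endpoint.
records = [
    {"label": "Ceiling fresco, Great Hall", "creator": "Unknown", "year": "1715"},
    {"label": "Stucco ornament, Chapel", "creator": "Workshop", "year": "1722"},
]

def to_markdown(items):
    """Render records as a markdown section for a Quarto/Jupyter publication."""
    lines = ["## Catalogue", ""]
    for item in items:
        lines.append(f"- **{item['label']}** ({item['year']}), {item['creator']}")
    return "\n".join(lines)

page = to_markdown(records)
print(page)
```

Quarto then renders the notebook output to web, PDF, and other formats, and Git versions both the scripts and the generated publication.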

Architecture

Software Architecture of the Computational Publishing Service

The pipeline architecture connects best-in-class technologies: Wikibase4Research for LOD, Jupyter Notebooks for runtime scripting, Quarto for rendering, Git for versioning, GitHub Codespaces for runtime environments, and W3C standards throughout.

Case Studies

Baroque Ceiling Painting in Germany

The project uses media and text from the records of the buildings, rooms, and artworks of the online collection Baroque Ceiling Painting in Germany. The goal was to demonstrate automated multi-format publishing from Linked Open Data (LOD) in Wikibase as web, PDF, ePub, and DOCX outputs, plus an interactive Jupyter Notebook.

Learning Unit: Collections Publishing

The project is an OER package for teaching computational publishing from digital collections. The learning unit is taught as part of the class ‘OpenKnowledge’ at Hanover University of Applied Sciences and Arts. The challenge for the unit is to work with a digital cultural collection that has an openly licensed image bank and curatorial texts, and to create an exhibition catalogue from it.

Co-Site: Semantic Glossary Co-Creation

The Co-Site project, carried out for TH Köln – University of Applied Sciences, demonstrates a workflow for co-creating semantic glossaries: storing Linked Open Data, outputting in multiple formats, and using the glossaries for data analysis, for example when searching open literature repositories.

Partners

About us

The Open Science Lab at TIB develops open source software services for FAIR research data management in close cooperation with national and international initiatives and third-party-funded project partners.

All software tools are developed iteratively, following best practices of user-centred design in order to meet concrete needs and user requirements.

Our tools are hosted in public GitLab and GitHub repositories that are always open to external contributions. The Lab plays a leading role in international open source community projects and working groups for open ontologies and standards (e.g. within NFDI e.V., Europeana, IIIF).

Wikibase4Research@tib.eu