A Neural Approach for Text Extraction from Scholarly Figures

This is the readme for the supplemental data for our ICDAR 2019 paper.

You can read our paper via IEEE here: https://ieeexplore.ieee.org/document/8978202

If you found this dataset useful, please consider citing our paper:

@inproceedings{DBLP:conf/icdar/MorrisTE19,
  author    = {David Morris and
               Peichen Tang and
               Ralph Ewerth},
  title     = {A Neural Approach for Text Extraction from Scholarly Figures},
  booktitle = {2019 International Conference on Document Analysis and Recognition,
               {ICDAR} 2019, Sydney, Australia, September 20-25, 2019},
  pages     = {1438--1443},
  publisher = {{IEEE}},
  year      = {2019},
  url       = {https://doi.org/10.1109/ICDAR.2019.00231},
  doi       = {10.1109/ICDAR.2019.00231},
  timestamp = {Tue, 04 Feb 2020 13:28:39 +0100},
  biburl    = {https://dblp.org/rec/conf/icdar/MorrisTE19.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

This work was financially supported by the German Federal Ministry of Education and Research (BMBF) and European Social Fund (ESF) (InclusiveOCW project, no. 01PE17004).

Datasets

We used different sources of data for testing, validation, and training. Our testing set was assembled from the datasets used in the work by Böschen et al. that we cite below. We excluded the DeGruyter dataset from testing and use it as our validation dataset instead.

Testing

These datasets contain a readme with license information. Further information about the associated project can be found in the authors' published work, which we cited: https://doi.org/10.1007/978-3-319-51811-4_2

Validation

The DeGruyter dataset does not include the labeled images due to license restrictions. As of writing, the images can still be downloaded from DeGruyter via the links in the readme. Note that, depending on which program you use to extract the images from the PDFs they are provided in, you may have to re-number the images.
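
If re-numbering is needed, a short script along the lines of the following sketch can do it. The "figure-%03d.png" file-name pattern is an assumption; adjust it to the numbering described in the readme.

import os
import sys

# Minimal sketch: re-number images extracted from a PDF so they match
# the numbering used by the labels. The "figure-%03d.png" pattern is an
# assumption; adjust it to the scheme described in the readme.
def renumber(src_dir, dst_dir, pattern="figure-%03d.png", start=1):
    os.makedirs(dst_dir, exist_ok=True)
    files = sorted(f for f in os.listdir(src_dir)
                   if f.lower().endswith(".png"))
    for i, name in enumerate(files, start=start):
        os.rename(os.path.join(src_dir, name),
                  os.path.join(dst_dir, pattern % i))

if __name__ == "__main__":
    renumber(sys.argv[1], sys.argv[2])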

Training

We used label_generator's generated dataset, which the author made available in a requester-pays Amazon S3 bucket. We also used the Multi-Type Web Images dataset, which is mirrored here.
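
Downloads from a requester-pays bucket are billed to your own AWS account and must be flagged as such. A minimal boto3 sketch; the bucket and key names below are placeholders, not the actual bucket:

import boto3

# Minimal sketch for fetching an object from a requester-pays S3 bucket.
# Downloads are billed to *your* AWS account. Bucket and key names are
# placeholders; use the ones given in label_generator's documentation.
s3 = boto3.client("s3")
s3.download_file(
    Bucket="example-label-generator-bucket",   # placeholder
    Key="training-data.tar.gz",                # placeholder
    Filename="training-data.tar.gz",
    ExtraArgs={"RequestPayer": "requester"},   # required for requester-pays
)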

Code

We have made our code available in code.zip. We will upload code, announce further news, and field questions via the GitHub repo.

Our text detection network is adapted from argman's EAST implementation. The EAST/checkpoints/ours subdirectory contains the trained weights we used in the paper.
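
Assuming the layout of argman's EAST repository, detection with these weights can be invoked along the following lines. The flag names follow that repository's eval.py; the input and output paths are examples.

import subprocess

# Minimal sketch: run text detection with the trained weights from
# EAST/checkpoints/ours. Flag names follow eval.py in argman's EAST
# repository; the paths are examples.
subprocess.run(
    [
        "python", "eval.py",
        "--test_data_path=figures/",            # directory of figure images
        "--checkpoint_path=checkpoints/ours/",  # weights shipped with this dataset
        "--output_dir=detections/",             # predicted text-row boxes
        "--gpu_list=0",
    ],
    cwd="EAST",   # run from inside the EAST directory
    check=True,
)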

We used a Tesseract-based script to extract text from the detected text rows. It is included in code.zip as text_recognition_multipro.py.
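
The following is a rough sketch of the idea, not the script itself: OCR each cropped text row in parallel with Tesseract via pytesseract.

import glob
from multiprocessing import Pool

import pytesseract
from PIL import Image

# Rough sketch of the idea behind text_recognition_multipro.py: run
# Tesseract over each cropped text row in parallel. "--psm 7" tells
# Tesseract to treat the image as a single text line.
def recognize_row(path):
    text = pytesseract.image_to_string(Image.open(path), config="--psm 7")
    return path, text.strip()

if __name__ == "__main__":
    with Pool() as pool:
        for path, text in pool.imap(recognize_row, glob.glob("rows/*.png")):
            print(path, text)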

We used a Java evaluation tool provided by Falk Böschen, which we adapted to our file structure. It is included as evaluator.jar.

Parameter sweeps are automated by param_sweep.rb, which also shows how to invoke all of the components above.
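
For illustration only (param_sweep.rb is the authoritative driver), a sweep has roughly the following shape. The threshold flags passed to eval.py here are hypothetical, not flags we confirm exist.

import itertools
import subprocess

# Illustrative sketch of a parameter sweep: try a grid of detector
# thresholds, writing each run's detections to its own directory for
# scoring with evaluator.jar afterwards. The two threshold flags below
# are hypothetical placeholders.
score_threshs = [0.6, 0.7, 0.8]
nms_threshs = [0.1, 0.2]

for score_t, nms_t in itertools.product(score_threshs, nms_threshs):
    out_dir = f"sweeps/score{score_t}_nms{nms_t}"
    subprocess.run(
        [
            "python", "eval.py",
            "--test_data_path=figures/",
            "--checkpoint_path=checkpoints/ours/",
            f"--output_dir={out_dir}",
            f"--score_map_thresh={score_t}",  # hypothetical flag
            f"--nms_thresh={nms_t}",          # hypothetical flag
        ],
        cwd="EAST",
        check=True,
    )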

Cite this as

David Morris, Peichen Tang, and Ralph Ewerth (2019). Dataset: A Neural Approach for Text Extraction from Scholarly Figures. https://doi.org/10.25835/0030443

DOI retrieved: June 27, 2019

Additional Info

Field            Value
Imported on      October 14, 2021
Last update      August 4, 2023
License          CC-BY-3.0
Source           https://data.uni-hannover.de/dataset/a-neural-approach-for-text-extraction-from-scholarly-figures
Author           David Morris
More Authors     Peichen Tang and Ralph Ewerth
Maintainer       David Morris, Peichen Tang, and Ralph Ewerth
Source Creation  27 June 2019, 16:29 (UTC+0000)
Source Modified  20 January 2022, 13:48 (UTC+0000)