Pre-trained Language Models in Biomedical Domain: A Systematic Survey

This paper surveys recent progress on pre-trained language models in the biomedical domain and their applications to downstream biomedical tasks.

Data and Resources

Cite this as

Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, Jie Fu (2024). Dataset: Pre-trained Language Models in Biomedical Domain: A Systematic Survey. https://doi.org/10.57702/kobufw7x

DOI retrieved: December 3, 2024

Additional Info

Field: Value
Created: December 3, 2024
Last update: December 3, 2024
Defined in: https://doi.org/10.48550/arXiv.2110.05006
Author: Benyou Wang (SRIBD & SDS, The Chinese University of Hong Kong, Shenzhen, China)
More authors:
Qianqian Xie
Jiahuan Pei (University of Amsterdam, Netherlands)
Zhihong Chen (SRIBD & SSE, The Chinese University of Hong Kong, Shenzhen, China)
Prayag Tiwari (School of Information Technology, Halmstad University, Sweden)
Zhao Li (The University of Texas Health Science Center at Houston, USA)
Jie Fu (Mila, University of Montreal, Canada)