
LV-BERT: Exploiting Layer Variety for BERT

Modern pre-trained language models are mostly built on backbones that stack self-attention and feed-forward layers in a fixed interleaved order. This paper aims to improve pre-trained models by exploiting layer variety from two aspects: the layer type set and the layer order.
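The abstract contrasts the fixed interleaved self-attention/feed-forward stack with a backbone whose layer types and layer order can vary. The sketch below (Python/PyTorch, not the authors' code) shows one hypothetical way to express such a configurable backbone: the layer order is a plain list over a layer type set. The inclusion of a convolution layer, the module names, and all hyperparameters are illustrative assumptions, not details taken from this record.

import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    # Standard self-attention sub-layer with a residual connection.
    def __init__(self, dim, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

class FeedForward(nn.Module):
    # Position-wise feed-forward sub-layer with a residual connection.
    def __init__(self, dim):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm = nn.LayerNorm(dim)
    def forward(self, x):
        return self.norm(x + self.ffn(x))

class ConvLayer(nn.Module):
    # Length-preserving 1-D convolution sub-layer; an assumed extra layer type.
    def __init__(self, dim, kernel=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)
        self.norm = nn.LayerNorm(dim)
    def forward(self, x):                        # x: (batch, seq_len, dim)
        out = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm(x + out)

LAYER_TYPES = {"att": SelfAttention, "ffn": FeedForward, "conv": ConvLayer}

def build_backbone(order, dim=256, heads=4):
    # Instantiate a backbone from an explicit layer-order specification.
    layers = []
    for kind in order:
        layers.append(SelfAttention(dim, heads) if kind == "att" else LAYER_TYPES[kind](dim))
    return nn.Sequential(*layers)

# A conventional BERT-style backbone interleaves "att" and "ffn";
# a searched layer order may be any sequence over the layer type set.
baseline = build_backbone(["att", "ffn"] * 6)
varied = build_backbone(["conv", "att", "ffn", "att", "conv", "ffn"])  # hypothetical order
tokens = torch.randn(2, 16, 256)                 # (batch, seq_len, hidden_dim)
print(varied(tokens).shape)                      # torch.Size([2, 16, 256])

In this framing, the baseline interleaved order is just one point in the space of possible layer sequences; varying the order or adding layer types only changes the specification list, not the surrounding training code.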

Data and Resources

Cite this as

Weihao Yu, Zihang Jiang, Fei Chen, Qibin Hou, Jiashi Feng (2024). Dataset: LV-BERT: Exploiting Layer Variety for BERT. https://doi.org/10.57702/5hahv5rs

DOI retrieved: December 2, 2024

Additional Info

Field          Value
Created        December 2, 2024
Last update    December 2, 2024
Author         Weihao Yu
More Authors   Zihang Jiang, Fei Chen, Qibin Hou, Jiashi Feng