
LV-BERT: Exploiting Layer Variety for BERT

Modern pre-trained language models are mostly built upon backbones that stack self-attention and feed-forward layers in an interleaved order. This paper aims to improve pre-trained models by exploiting layer variety in two respects: the layer type set and the layer order.
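To make the idea concrete, the sketch below assembles a backbone from a layer-order string over a configurable layer type set, rather than hard-coding the interleaved self-attention/feed-forward order. This is a minimal illustrative sketch, not the paper's released code: the depth-wise convolution as a third layer type, the "A"/"F"/"C" layer codes, and all class names, function names, and dimensions here are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Multi-head self-attention sub-layer with residual + LayerNorm."""
    def __init__(self, dim, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)


class FeedForward(nn.Module):
    """Position-wise feed-forward sub-layer with residual + LayerNorm."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim), nn.GELU(), nn.Linear(hidden_dim, dim)
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return self.norm(x + self.net(x))


class Convolution(nn.Module):
    """Depth-wise 1-D convolution sub-layer (an assumed third layer type)."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # Conv1d expects (batch, channels, seq); transpose around it.
        out = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm(x + out)


def build_backbone(order, dim=256, heads=4, hidden_dim=1024):
    """Build a backbone from a layer-order string.

    "AF" repeated gives the standard interleaved Transformer;
    other strings mix the layer types in arbitrary orders.
    """
    make = {
        "A": lambda: SelfAttention(dim, heads),
        "F": lambda: FeedForward(dim, hidden_dim),
        "C": lambda: Convolution(dim),
    }
    return nn.Sequential(*(make[code]() for code in order))


if __name__ == "__main__":
    x = torch.randn(2, 16, 256)          # (batch, seq_len, dim)
    standard = build_backbone("AFAF")    # conventional interleaved order
    varied = build_backbone("CAFFCA")    # a varied type set and order
    print(standard(x).shape, varied(x).shape)
```

Under this framing, searching over such order strings (and over which layer types are allowed in them) is one way to explore the layer-variety design space the abstract describes.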

Data and Resources

This dataset has no data

Cite this as

Weihao Yu, Zihang Jiang, Fei Chen, Qibin Hou, Jiashi Feng (2024). Dataset: LV-BERT: Exploiting Layer Variety for BERT. https://doi.org/10.57702/5hahv5rs

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Author        Weihao Yu
More Authors  Zihang Jiang, Fei Chen, Qibin Hou, Jiashi Feng