
Multi-label Transformer

The proposed Multi-label Transformer (MlTr) architecture is designed for multi-label image classification, combining in-window pixel attention with cross-window attention to better exploit the transformer's representational capacity.
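The combination described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (which is linked under Homepage below); it is a simplified illustration assuming single-head attention without learned projections: pixels attend to each other inside each window, then mean-pooled window summaries attend to each other, and the mixed window context is added back to the pixels.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (n, d) tokens; single head, no learned Q/K/V projections
    # (a simplification for illustration only).
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def mltr_block(feat, win=2):
    # feat: (H, W, d) feature map; H and W assumed divisible by `win`.
    H, W, d = feat.shape
    out = feat.copy()
    # 1) In-window pixel attention: pixels inside each win x win window
    #    attend to one another.
    for i in range(0, H, win):
        for j in range(0, W, win):
            tokens = feat[i:i + win, j:j + win].reshape(-1, d)
            out[i:i + win, j:j + win] = self_attention(tokens).reshape(win, win, d)
    # 2) Cross-window attention: mean-pool each window to one token,
    #    let window tokens attend to one another.
    nh, nw = H // win, W // win
    wins = out.reshape(nh, win, nw, win, d).mean(axis=(1, 3)).reshape(-1, d)
    mixed = self_attention(wins).reshape(nh, nw, d)
    # Broadcast window-level context back to pixels (residual add).
    return out + np.repeat(np.repeat(mixed, win, axis=0), win, axis=1)
```

The block keeps the feature map's shape, so it can be stacked like a standard transformer layer; the real model additionally uses learned projections, multiple heads, and normalization.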

Data and Resources

This dataset has no data

Cite this as

Xing Cheng, Hezheng Lin, Xiangyu Wu, Fan Yang, Dong Shen, Zhongyuan Wang, Nian Shi, Honglin Liu (2024). Dataset: Multi-label Transformer. https://doi.org/10.57702/g8r721y2

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 3, 2024
Last update: December 3, 2024
Author: Xing Cheng
More authors:
Hezheng Lin
Xiangyu Wu
Fan Yang
Dong Shen
Zhongyuan Wang
Nian Shi
Honglin Liu
Homepage: https://github.com/starmemda/MlTr/