Multi-label Transformer

The proposed Multi-label Transformer (MlTr) architecture is designed for multi-label image classification, combining in-window pixel attention with cross-window attention to better exploit the transformer's representational capacity.
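The authoritative definition of the architecture is in the linked repository; the following is only a minimal NumPy sketch of the two attention stages named above. The window size, the mean-pooling of each window into a single token for the cross-window step, and the residual broadcast back into the windows are all illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention over the last two axes
    scale = q.shape[-1] ** -0.5
    return softmax(q @ k.swapaxes(-2, -1) * scale) @ v

def window_partition(x, ws):
    # (H, W, C) feature map -> (num_windows, ws*ws, C) token groups
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def mltr_block_sketch(x, ws=2):
    # 1) in-window pixel attention: pixels attend within each window
    wins = window_partition(x, ws)          # (nW, ws*ws, C)
    wins = attention(wins, wins, wins)
    # 2) cross-window attention: one mean-pooled token per window
    #    attends across all windows (assumed simplification)
    pooled = wins.mean(axis=1)              # (nW, C)
    mixed = attention(pooled[None], pooled[None], pooled[None])[0]
    # broadcast the cross-window context back into every window
    return wins + mixed[:, None, :]

x = np.random.default_rng(0).normal(size=(4, 4, 8))
out = mltr_block_sketch(x, ws=2)
print(out.shape)  # (4, 4, 8): 4 windows of 2x2 pixels, 8 channels
```

A multi-label classification head would then pool these window tokens and apply a per-class sigmoid rather than a softmax, since labels are not mutually exclusive.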

Data and Resources

Cite this as

Xing Cheng, Hezheng Lin, Xiangyu Wu, Fan Yang, Dong Shen, Zhongyuan Wang, Nian Shi, Honglin Liu (2024). Dataset: Multi-label Transformer. https://doi.org/10.57702/g8r721y2

DOI retrieved: December 3, 2024

Additional Info

Created:      December 3, 2024
Last update:  December 3, 2024
Author:       Xing Cheng
More authors: Hezheng Lin, Xiangyu Wu, Fan Yang, Dong Shen, Zhongyuan Wang, Nian Shi, Honglin Liu
Homepage:     https://github.com/starmemda/MlTr/