- Prototypical contrastive learning of unsupervised representations.
- A simple framework for contrastive learning of visual representations.
- Prototypical Alignment, Uniformity and Correlation. Contrastive self-supervised learning (CSL) with a prototypical regularization has been introduced to learn meaningful representations for downstream tasks that require strong...
- Self-Distillation Prototypes Network: Learning Robust Speaker Representations... Training speaker-discriminative and robust speaker verification systems without explicit speaker labels remains a persistent challenge. In this paper, we propose a new...
- Self-supervised learning: Generative or contrastive.
- Footpath Segmentation using Remote Sensing Data and Self-supervised Learning. Footpath mapping, modeling, and analysis can provide important geospatial insights to many fields of study, including transport, health, environment and urban planning.
- A simple data mixing prior for improving self-supervised learning.
- EquiMod: An Equivariance Module to Improve Visual Instance Discrimination. Recent self-supervised visual representation methods are closing the gap with supervised learning performance. Most of these successful methods rely on maximizing the similarity... (a minimal sketch of this similarity-maximization loss follows this list).
- SSL-MAE dataset for TransUNet. A self-supervised learning dataset for TransUNet pretraining.
- MST: Masked Self-Supervised Transformer for Visual Representation. A self-supervised learning approach for visual representation learning, which can explicitly capture the local context of an image while preserving the...
- SSL4EO-S12: A large-scale, globally distributed, multi-temporal and multi-sensor dataset for self-supervised learning in Earth observation.
- Mono-ViFI: A Unified Framework for Self-supervised Monocular Depth Estimation. Self-supervised monocular depth estimation has gathered notable interest since it can liberate training from dependence on depth annotations. In the monocular video training case, ...
- S5Mars: Self-supervised and semi-supervised learning for Mars segmentation. A dataset for self-supervised and semi-supervised Mars segmentation.
- MOCA: Masked Online Codebook Assignments prediction. Self-supervised representation learning for Vision Transformers (ViT), aimed at mitigating the greedy need of ViT networks for very large fully annotated datasets.
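
Several entries above (e.g., the SimCLR and EquiMod items) refer to maximizing the similarity between augmented views of the same image. For reference, here is a minimal sketch of an NT-Xent-style contrastive loss of the kind used by SimCLR-style methods. It is an illustrative PyTorch re-implementation, not code released with any of the listed papers; the batch size, embedding dimension, and temperature below are arbitrary assumptions.

```python
# Minimal sketch of an NT-Xent (normalized temperature-scaled cross-entropy)
# contrastive loss, as popularized by SimCLR-style methods.
# Illustrative only; hyperparameters are placeholder assumptions.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # an embedding cannot be its own positive
    # For row i, the positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)   # toy embeddings for two views
    print(nt_xent_loss(z1, z2).item())
```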