- Sub-sentence encoder
The sub-sentence encoder is a contrastive learning framework for learning contextual embeddings of semantic units at the sub-sentence level.
- Decoupled Contrastive Learning
Contrastive learning is one of the most successful paradigms for self-supervised learning (SSL). In a principled way, it considers two augmented views of the same image as...
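The decoupled variant of this objective drops the positive pair from the denominator of the standard InfoNCE loss. Below is a minimal PyTorch sketch of that idea, assuming a SimCLR-style two-view batch and only cross-view negatives; the function name, temperature, and shapes are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def decoupled_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # cross-view cosine similarities
    pos = torch.diag(logits)                    # positive pairs on the diagonal
    # "Decoupling": the positive pair is excluded from the denominator,
    # so each anchor is contrasted against negatives only.
    eye = torch.eye(len(z1), dtype=torch.bool, device=z1.device)
    neg = torch.logsumexp(logits.masked_fill(eye, float("-inf")), dim=1)
    return (neg - pos).mean()

# Toy usage with random features standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(decoupled_contrastive_loss(z1, z2))
```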
- Contrastive Visual-Linguistic Pretraining
Contrastive Visual-Linguistic Pretraining (CVLP) is a novel approach to visual-linguistic pretraining that solves the domain bias and noisy label problems encountered with...
- Multimodal Contrastive Learning
The dataset used in the paper is a collection of pairs of observations (x_i, x̃_i) from two modalities, where x_i ∈ R^{d1} and x̃_i ∈ R^{d2}. The dataset is used to evaluate the...
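For concreteness, a hedged sketch of this paired two-modality setup: each pair (x_i, x̃_i) is projected into a shared space and trained with a symmetric contrastive objective. The projection modules, dimensions, and temperature below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d1, d2, d_shared, n = 32, 64, 16, 8
proj_1 = nn.Linear(d1, d_shared)        # projection for modality 1 (x_i in R^d1)
proj_2 = nn.Linear(d2, d_shared)        # projection for modality 2 (x̃_i in R^d2)

x1 = torch.randn(n, d1)                 # observations from modality 1
x2 = torch.randn(n, d2)                 # paired observations from modality 2

z1 = F.normalize(proj_1(x1), dim=1)
z2 = F.normalize(proj_2(x2), dim=1)
logits = z1 @ z2.t() / 0.07             # (n, n) cross-modal similarities
targets = torch.arange(n)               # pair i should match pair i
loss = 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
print(loss)
```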
- RANKCLIP: Ranking-Consistent Language-Image Pretraining
Self-supervised contrastive learning models, such as CLIP, have set new benchmarks for vision-language models in many downstream tasks. However, their dependency on rigid...
- Adaptive Multi-head Contrastive Learning
The Adaptive Multi-head Contrastive Learning (AMCL) framework models both intra- and inter-sample similarity and uses an adaptive temperature mechanism to re-weight each similarity pair.
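The blurb only names the mechanism, so here is a generic sketch of re-weighting each similarity pair with its own adaptive temperature. This illustrates the general idea only, not AMCL's exact multi-head formulation; all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def per_pair_temperature_logits(z1, z2, log_tau):
    """log_tau: (N, N) learnable per-pair log-temperatures (an assumption here;
    in practice they could be produced by a small head over the pair features)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sims = z1 @ z2.t()
    return sims / log_tau.exp()          # each similarity re-weighted by its own temperature

z1, z2 = torch.randn(8, 64), torch.randn(8, 64)
log_tau = torch.zeros(8, 8, requires_grad=True)
logits = per_pair_temperature_logits(z1, z2, log_tau)
loss = F.cross_entropy(logits, torch.arange(8))
loss.backward()                          # gradients also flow into the temperatures
print(log_tau.grad.shape)
```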
- Relational Self-Supervised Learning
A self-supervised learning framework that maintains relational consistency between instances under different augmentations.
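A hedged sketch of what relational consistency can look like in code: the distribution of similarities from one augmented view to a set of other instances is used as the target for the other view. The temperatures, the memory bank, and the function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def relational_consistency_loss(z_a, z_b, anchors, t_target=0.04, t_online=0.1):
    """Match the similarity distributions of two augmented views (z_a, z_b)
    over a set of other instances ("anchors")."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    anchors = F.normalize(anchors, dim=1)
    target = F.softmax(z_a @ anchors.t() / t_target, dim=1)       # relation of view A
    online = F.log_softmax(z_b @ anchors.t() / t_online, dim=1)   # relation of view B
    return F.kl_div(online, target, reduction="batchmean")

z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
bank = torch.randn(256, 128)     # embeddings of other instances, e.g. a memory bank
print(relational_consistency_loss(z_a, z_b, bank))
```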
- Inter-Instance Similarity Modeling for Contrastive Learning
Existing contrastive learning methods widely adopt one-hot instance discrimination as the pretext task for self-supervised learning, which inevitably neglects rich...
- Contrastive learning for compact single image dehazing
- Contrastive multiview coding
- A simple framework for contrastive learning of visual representations
- Prototypical Alignment, Uniformity and Correlation
Contrastive self-supervised learning (CSL) with a prototypical regularization has been introduced for learning meaningful representations for downstream tasks that require strong...
- RECLIP: Resource-efficient CLIP by Training with Small Images
A simple method that minimizes the computational resource footprint of CLIP (Contrastive Language-Image Pretraining).
- Dual-Granularity Contrastive Learning for Session-based Recommendation
The data encountered by a Session-based Recommendation System (SBRS) is typically highly sparse, which is one of the bottlenecks limiting recommendation accuracy.
- CLIP dataset
The CLIP dataset is used to train a contrastive learning model.
- DisCo-CLIP: A Distributed Contrastive Loss for Memory Efficient CLIP Training
We propose DisCo-CLIP, a distributed memory-efficient CLIP training approach, to reduce the memory consumption of the contrastive loss when training contrastive learning models.
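To make the memory argument concrete, here is a single-process sketch (not DisCo-CLIP's actual distributed implementation) of decomposing the contrastive loss into per-worker blocks, so each worker only materialises its own rows of the global similarity matrix; only the image-to-text direction is shown, and all sizes are illustrative.

```python
import torch
import torch.nn.functional as F

world_size, local_bs, dim = 4, 8, 64
global_bs = world_size * local_bs
# Globally gathered (already all-gathered) image and text features.
img = F.normalize(torch.randn(global_bs, dim), dim=1)
txt = F.normalize(torch.randn(global_bs, dim), dim=1)

total = 0.0
for rank in range(world_size):                     # each iteration stands in for one GPU
    rows = slice(rank * local_bs, (rank + 1) * local_bs)
    # Only the local rows of the global similarity matrix are materialised,
    # i.e. a (local_bs x global_bs) block instead of (global_bs x global_bs).
    logits = img[rows] @ txt.t() / 0.07
    targets = torch.arange(rank * local_bs, (rank + 1) * local_bs)
    total = total + F.cross_entropy(logits, targets, reduction="sum")

loss = total / global_bs                           # equals the full-matrix mean loss
print(loss)
```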