- BSDS500 dataset
  The dataset used in this paper is the BSDS500 dataset, which contains 200 natural images with over 1,000 ground-truth labellings.
- GTA5 and SYNTHIA
  The paper uses the GTA5 and SYNTHIA datasets for domain-adaptive semantic segmentation (DASS).
- Semantic Segmentation for Partially Occluded Apple Trees Based on Deep Learning
  This is the dataset used in the paper for segmenting partially occluded apple trees.
- COCO-20i, FSS-1000, and LVIS-92i
  COCO-20i, FSS-1000, and LVIS-92i are the datasets used for one-shot semantic segmentation.
- Gaofen dataset
  The Gaofen dataset is a high-quality PolSAR semantic segmentation dataset.
- Osteoarthritis Initiative (OAI) dataset
  A knee osteoarthritis (KOA) dataset used for early detection of KOA (KL-0 vs. KL-2) with a Vision Transformer (ViT) model using selective shuffled position embedding and key-patch...
- CREMI dataset
  The CREMI dataset consists of brain electron microscopy (EM) images, and the ultimate goal is to reconstruct neurons at the micro scale.
- COCO-Stuff 164K
  Semantic segmentation is one of the most fundamental vision tasks, aiming to classify every pixel of a given image into a specific class; it is widely applied in many applications... A minimal sketch of this per-pixel classification setup appears after this list.
- MoNuSeg dataset
  The MoNuSeg dataset was published for the Multi-Organ Nuclei Segmentation challenge at MICCAI 2018. The training set consists of 30 images acquired from multiple organs...
- Image dataset
  The dataset used in the paper is a set of images that the authors use to train and test their ladder network model.
- Internal Dataset
  The internal dataset contains 6 million real-world driving scenarios from Las Vegas (LV), Seattle (SEA), San Francisco (SF), and the campus of the Stanford Linear Accelerator...
- Segmentation dataset for jet flames
  This dataset is used to train and test the UNet and Attention UNet models that segment radiation zones within jet flames.
- Caltech-UCSD Birds-200-2011 Dataset
  The Caltech-UCSD Birds-200-2011 dataset consists of 11,169 bird images from 200 categories, with about 60 images per category on average.
- SA-1B dataset
  The SA-1B dataset is used to train the SAM model; it contains 11M images and roughly 1.1 billion segmentation masks.
- COCONut-B [6], EntitySeg [49], and DIS5K [51] datasets
  The combined dataset used for training the RWKV-SAM model, containing 242K images: COCO labeled and unlabeled images, the EntitySeg dataset with 30K high-resolution images...
- BraTS and MVTec AD datasets
  The dataset used in the paper is a combination of medical images, including T1, T2, and FLAIR MRI scans from BraTS, and industrial anomaly-detection images from MVTec AD.
- Pascal VOC, Pascal Context, COCO-Object, Cityscapes, and ADE20k datasets
  The Pascal VOC, Pascal Context, COCO-Object, Cityscapes, and ADE20k datasets are used to evaluate the proposed method (see the mIoU sketch after this list).
- EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM
  EdgeSAM is an accelerated variant of the Segment Anything Model (SAM), optimized for efficient execution on edge devices with minimal compromise in performance.
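Since the SA-1B and EdgeSAM entries both revolve around SAM-style promptable segmentation, a minimal inference sketch may help. It uses the reference segment-anything package; the checkpoint path, the random stand-in image, and the click coordinates are placeholders rather than values from any of the papers, and EdgeSAM is assumed to expose a comparable prompt-to-mask interface.

```python
# Minimal promptable-inference sketch with the reference segment-anything
# package (pip install segment-anything). Checkpoint path and point prompt
# are placeholders; the random array stands in for a real RGB photo.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="path/to/sam_vit_b.pth")
sam.to("cuda")                       # or "cpu"
predictor = SamPredictor(sam)

image = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # HWC uint8 RGB
predictor.set_image(image)           # runs the heavy image encoder once

# One positive point prompt (label 1 = foreground); SAM returns up to three
# candidate masks together with its own quality estimate for each.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean (H, W) mask
```

EdgeSAM's prompt-in-the-loop distillation targets exactly this prompt-to-mask behaviour while replacing SAM's heavy image encoder, so the calling pattern carries over conceptually even though the underlying model is much lighter.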
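The COCO-Stuff entry above describes semantic segmentation as classifying every pixel into a class. The sketch below shows that per-pixel classification setup in PyTorch; the 1x1-convolution "model", the 171-class count, the ignore index of 255, and the random tensors are illustrative assumptions, not details taken from any of the papers.

```python
# Per-pixel classification sketch: a segmentation network outputs one score
# per class at every pixel, trained with ordinary cross-entropy.
import torch
import torch.nn as nn

num_classes = 171                                        # e.g. a COCO-Stuff-like class count
images = torch.randn(2, 3, 128, 128)                     # (N, 3, H, W)
labels = torch.randint(0, num_classes, (2, 128, 128))    # (N, H, W), one class id per pixel

# Any segmentation network fits here; a 1x1 convolution stands in for a real model.
model = nn.Conv2d(3, num_classes, kernel_size=1)

logits = model(images)                                   # (N, num_classes, H, W)
loss = nn.CrossEntropyLoss(ignore_index=255)(logits, labels)
loss.backward()
print(float(loss))
```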
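The evaluation entry above (Pascal VOC, Pascal Context, COCO-Object, Cityscapes, ADE20k) measures segmentation quality, which on these benchmarks is usually reported as mean Intersection-over-Union (mIoU). Below is a minimal sketch that computes mIoU from integer label maps; the ignore value of 255 and the 21-class toy example are assumptions, not values taken from the papers.

```python
# mIoU from a class confusion matrix accumulated over predicted/ground-truth label maps.
import numpy as np

def mean_iou(preds, gts, num_classes, ignore_index=255):
    """preds, gts: iterables of (H, W) integer class-id maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        valid = gt != ignore_index
        conf += np.bincount(
            num_classes * gt[valid].astype(np.int64) + pred[valid],
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    intersection = np.diag(conf)
    union = conf.sum(0) + conf.sum(1) - intersection
    iou = intersection / np.maximum(union, 1)   # guard against empty classes
    return float(iou.mean()), iou

# Toy check with random 21-class label maps (Pascal VOC uses 21 classes incl. background).
rng = np.random.default_rng(0)
gts = [rng.integers(0, 21, (64, 64)) for _ in range(2)]
preds = [rng.integers(0, 21, (64, 64)) for _ in range(2)]
print(mean_iou(preds, gts, num_classes=21)[0])
```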