Hardware-Aware Latency Pruning
This paper proposes the hardware-aware latency pruning (HALP) paradigm. Considering both performance and latency contributions, HALP formulates global structural pruning as a global...
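As a rough illustration of such a latency-constrained formulation, here is a minimal sketch (not the paper's actual solver) that greedily keeps the neurons with the best importance-per-latency ratio under a latency budget; the importance and latency arrays are hypothetical placeholders.

```python
# Hypothetical sketch: greedy latency-constrained neuron selection.
import numpy as np

def greedy_latency_pruning(importance, latency, budget):
    """Keep the neurons with the best importance-per-latency ratio
    until the latency budget is exhausted (a greedy knapsack heuristic)."""
    order = np.argsort(-importance / latency)   # best ratio first
    keep, used = [], 0.0
    for i in order:
        if used + latency[i] <= budget:
            keep.append(i)
            used += latency[i]
    return sorted(keep)

importance = np.array([0.9, 0.1, 0.5, 0.7])  # per-neuron accuracy contribution (made up)
latency    = np.array([2.0, 1.0, 1.5, 2.5])  # per-neuron latency cost in ms (made up)
print(greedy_latency_pruning(importance, latency, budget=4.0))
```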
PERMUTOHEDRAL LATTICE CONVOLUTION
The permutohedral lattice convolution is used to process sparse input features, allowing for efficient filtering of signals that do not lie on a dense grid.
ResNet18
ResNet18 is a convolutional neural network architecture, not a dataset; the paper uses it as a model rather than as a data source.
Tied Block Convolution: Leaner and Better CNNs with Shared Thinner Filters
Convolution is the main building block of convolutional neural networks (CNNs). We observe that an optimized CNN often has highly correlated filters as the number of channels...
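The core idea of tied block convolution (TBC) can be sketched as follows: split the input channels into B equal blocks and reuse a single thin filter bank across all of them. A minimal PyTorch sketch under that reading of the abstract; the class name and the B=2 default are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class TiedBlockConv2d(nn.Module):
    """Sketch of tied block convolution: input channels are split into B
    equal blocks and one thin filter bank is shared across all blocks."""
    def __init__(self, in_ch, out_ch, kernel_size, B=2, **kw):
        super().__init__()
        assert in_ch % B == 0 and out_ch % B == 0
        self.B = B
        # One shared conv, B times thinner than a full convolution.
        self.conv = nn.Conv2d(in_ch // B, out_ch // B, kernel_size,
                              padding=kernel_size // 2, **kw)

    def forward(self, x):
        blocks = torch.chunk(x, self.B, dim=1)        # split channels into B blocks
        return torch.cat([self.conv(b) for b in blocks], dim=1)

x = torch.randn(1, 64, 32, 32)
print(TiedBlockConv2d(64, 64, 3, B=2)(x).shape)  # torch.Size([1, 64, 32, 32])
```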
Deep Geometric Moment (DGM) Model
The proposed model consists of three components: 1) Coordinate base computation: uses a 2D coordinate grid as input and generates the bases; 2) Image feature computation:...
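For context, the classical geometric moments behind the coordinate bases are m_pq = sum_{x,y} x^p y^q I(x, y). A minimal sketch of the base computation, assuming normalized coordinates in [-1, 1]; DGM learns deep feature maps in place of the raw intensities I, so this only illustrates the first component.

```python
import torch

def geometric_moment_bases(h, w, max_order=2):
    """Sketch of 'coordinate base computation': build monomial bases
    x^p * y^q on a normalized 2D coordinate grid."""
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    y, x = torch.meshgrid(ys, xs, indexing="ij")
    bases = [x**p * y**q for p in range(max_order + 1)
                         for q in range(max_order + 1 - p)]
    return torch.stack(bases)                 # (num_bases, h, w)

def geometric_moments(img, bases):
    """Moment m_pq = sum over pixels of base_pq(x, y) * I(x, y)."""
    return (bases * img).sum(dim=(-2, -1))

img = torch.rand(28, 28)
print(geometric_moments(img, geometric_moment_bases(28, 28)))
```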
Improving Shape Awareness and Interpretability in Deep Networks Using Geometr...
Deep networks for image classification often rely more on texture information than object shape. This paper presents a deep-learning model inspired by geometric moments, a...
CIFAR10, CIFAR100, SVHN, ImageNet
The datasets are not described in detail, but the authors state that they use four widely used benchmarks: CIFAR10, CIFAR100, SVHN, and ImageNet.
A Deep Neural Network for Multiclass Bridge Element Parsing in Inspection Ima...
Aerial robots such as drones have been leveraged to perform bridge inspections. Inspection images with both recognizable structural elements and apparent surface defects can be...
Tied-Augment: Controlling Representation Similarity Improves Data Augmentation
Data augmentation methods have played an important role in the recent advance of deep learning models, and have become an indispensable component of state-of-the-art models in...
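One plausible reading of "controlling representation similarity" is a loss that classifies two augmented views of each image and adds a term tying their representations together. A minimal sketch under that assumption; the hypothetical tied_augment_loss below applies cosine similarity to the model outputs, whereas the paper's exact similarity term and feature choice may differ.

```python
import torch
import torch.nn.functional as F

def tied_augment_loss(model, x1, x2, y, lam=1.0):
    """Sketch of a Tied-Augment-style objective (assumed form):
    classify both augmented views and reward similar representations."""
    f1, f2 = model(x1), model(x2)                    # outputs for each view
    ce = F.cross_entropy(f1, y) + F.cross_entropy(f2, y)
    sim = F.cosine_similarity(f1, f2, dim=1).mean()  # view agreement
    return ce - lam * sim                            # higher sim lowers the loss

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x1, x2 = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)  # two augmented views
y = torch.randint(0, 10, (4,))
print(tied_augment_loss(model, x1, x2, y))
```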
Transform Quantization for CNN Compression
The dataset used in this paper is a collection of convolutional neural network (CNN) weights, which are compressed using transform quantization.
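As a hedged illustration of the general idea (decorrelate the weights with a transform, then quantize the transformed coefficients), here is a simplified NumPy sketch using an SVD-based transform and uniform quantization; the paper's actual transform and bit allocation are more sophisticated.

```python
import numpy as np

def transform_quantize(W, bits=4):
    """Simplified sketch of transform quantization: decorrelate filters
    with an SVD transform, uniformly quantize the coefficients, invert."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    coeffs = np.diag(s) @ Vt                  # transformed weights
    step = (coeffs.max() - coeffs.min()) / (2**bits - 1)
    q = np.round((coeffs - coeffs.min()) / step)
    deq = q * step + coeffs.min()             # dequantized coefficients
    return U @ deq                            # reconstructed weights

W = np.random.randn(64, 9)                    # e.g. 64 flattened 3x3 filters
W_hat = transform_quantize(W, bits=4)
print(np.abs(W - W_hat).mean())               # mean reconstruction error
```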
Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss
Supervised-contrastive loss (SCL) is an alternative to cross-entropy (CE) for classification tasks that makes use of similarities in the embedding space to allow for richer...
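For reference, the baseline supervised-contrastive loss that the paper builds on can be sketched as follows (a simplified Khosla-et-al.-style formulation, not the paper's geometry-engineering variant).

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, y, tau=0.1):
    """Minimal SCL sketch: pull together embeddings that share a label,
    push apart the rest."""
    z = F.normalize(z, dim=1)
    eye = torch.eye(len(z), dtype=torch.bool)
    logits = (z @ z.t() / tau).masked_fill(eye, -1e9)   # exclude self-pairs
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)).float().masked_fill(eye, 0.0)
    denom = pos.sum(1).clamp(min=1)                     # positives per anchor
    return -(pos * log_prob).sum(1).div(denom).mean()

z = torch.randn(8, 16)            # embeddings
y = torch.randint(0, 3, (8,))     # labels
print(supervised_contrastive_loss(z, y))
```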
ACCO: Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators
Sparse-MLP
Keywords: Mixture-of-Experts (MoE) architecture, conditional computation, cross-token modeling, Sparse-MLP model
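To make the MoE/conditional-computation idea concrete, here is a minimal top-1-gated mixture-of-experts layer in PyTorch; the routing scheme and sizes are illustrative assumptions, not Sparse-MLP's exact design (which also routes across tokens, not just channels).

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Sketch of conditional computation: a router picks one expert MLP
    per token, so only a fraction of parameters is active per input."""
    def __init__(self, dim, num_experts=4, hidden=64):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts))

    def forward(self, x):                      # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)
        top = gates.argmax(dim=-1)             # chosen expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top == i
            if mask.any():                     # run each expert on its tokens
                out[mask] = gates[mask, i].unsqueeze(1) * expert(x[mask])
        return out

x = torch.randn(10, 32)
print(Top1MoE(32)(x).shape)   # torch.Size([10, 32])
```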
FSCNN: A Fast Sparse Convolution Neural Network Inference System
Convolutional neural networks (CNNs) have demonstrated success in a wide range of computer vision applications, but typically come with high computation cost and numerous redundant...
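A common way to exploit weight sparsity at inference time is to run convolution as a sparse-matrix-times-im2col product. A minimal NumPy/SciPy sketch of that idea, assuming a single-channel valid convolution; FSCNN's actual kernels are far more optimized.

```python
import numpy as np
from scipy.sparse import csr_matrix

def sparse_conv2d(x, w, threshold=0.0):
    """Sketch: 2D convolution as (sparse weight row) x (im2col patches),
    the kind of computation a sparse inference system can exploit."""
    k = w.shape[-1]
    h, w_out = x.shape[0] - k + 1, x.shape[1] - k + 1
    # im2col: row r holds the pixels at kernel offset r for every patch.
    cols = np.stack([x[i:i + h, j:j + w_out].ravel()
                     for i in range(k) for j in range(k)])
    w_sparse = csr_matrix(np.where(np.abs(w) > threshold, w, 0.0).reshape(1, -1))
    return (w_sparse @ cols).reshape(h, w_out)

x = np.random.rand(6, 6)
w = np.random.rand(3, 3) * (np.random.rand(3, 3) > 0.7)  # mostly-zero filter
print(sparse_conv2d(x, w).shape)   # (4, 4)
```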
Transformer-Based Attention Networks for Continuous Pixel-Wise Prediction
The paper proposes the TransDepth framework for pixel-wise prediction problems involving continuous labels.
ExplainFix: Explainable Spatially Fixed Deep Networks
ExplainFix adopts two design principles: the “fixed filters” principle that all spatial filter weights of convolutional neural networks can be fixed at initialization and never...
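The "fixed filters" principle can be illustrated by freezing every spatial (k x k, k > 1) convolution so that only pointwise convolutions and the remaining layers train. A minimal PyTorch sketch; note that ExplainFix additionally initializes the frozen filters to structured values, which this simplification omits.

```python
import torch.nn as nn

def fix_spatial_filters(model):
    """Sketch of the 'fixed filters' principle: freeze every spatial
    convolution at its initial values; 1x1 convs stay trainable."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d) and m.kernel_size != (1, 1):
            m.weight.requires_grad_(False)
            if m.bias is not None:
                m.bias.requires_grad_(False)
    return model

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 1))
fix_spatial_filters(net)
print([p.requires_grad for p in net.parameters()])  # [False, False, True, True]
```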
Rectified Binary Convolutional Networks for Enhancing the Performance of 1-bit...
The proposed rectified binary convolutional networks (RBCNs) are used to improve the performance of 1-bit DCNNs for applications on mobile devices and AI chips.
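For background, a generic 1-bit convolution binarizes its weights with sign() in the forward pass and uses a straight-through estimator (STE) in the backward pass. The sketch below shows only that generic mechanism; RBCN's rectification and adversarial training are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator: forward
    uses sign(w); backward passes gradients through unchanged."""
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, g):
        return g

class BinaryConv2d(nn.Conv2d):
    """Sketch of a generic 1-bit convolution layer (not RBCN's variant)."""
    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)        # 1-bit weights
        return F.conv2d(x, wb, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

x = torch.randn(1, 3, 8, 8)
print(BinaryConv2d(3, 4, 3)(x).shape)   # torch.Size([1, 4, 6, 6])
```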
Occluded CIFAR
The dataset used in the paper is Occluded CIFAR.
Cluttered MNIST and CIFAR-10
The datasets used in the paper are Cluttered MNIST and CIFAR-10.