ImageNet + ResNet101 and WT103 + TransformerXL models
The paper pairs the ImageNet dataset with a ResNet-101 model and the WikiText-103 (WT103) dataset with a Transformer-XL model.
Monadic Deep Learning
Rather than a fixed dataset, the paper uses simple dynamic neural networks to demonstrate the capabilities of the DeepLearning.scala framework.
Alternating optimization method based on nonnegative matrix factorizations for deep neural networks
The proposed method is evaluated on the MNIST and CIFAR-10 datasets with fully-connected DNNs.
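As background for the alternating scheme, here is a minimal sketch of the classical multiplicative-update NMF iteration (Lee and Seung) that such factorization-based methods build on; this is a generic illustration, not the paper's exact update rule for DNN training.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((64, 32)))   # nonnegative data matrix
r = 8                                        # factorization rank
W = np.abs(rng.standard_normal((64, r)))
H = np.abs(rng.standard_normal((r, 32)))

eps = 1e-9
for _ in range(200):
    # Alternate: update H with W fixed, then W with H fixed; the
    # multiplicative rules keep both factors nonnegative throughout.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative residual
```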
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Training deep neural networks is complicated by the fact that the distribution of each layer's inputs changes during training as the parameters of the previous layers change. The paper calls this internal covariate shift and addresses it by normalizing layer inputs.
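To make the transform concrete, a minimal NumPy sketch of the batch-norm forward pass (training mode only; the running statistics used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then restore
    # representational power with a learned scale and shift.
    mu = x.mean(axis=0)                      # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalized activations
    return gamma * x_hat + beta

x = np.random.randn(32, 4)                   # mini-batch of 32, 4 features
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
```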
lfads-torch: A modular and extensible implementation of latent factor analysis via dynamical systems
Latent factor analysis via dynamical systems (LFADS) is an RNN-based variational sequential autoencoder that achieves state-of-the-art performance in denoising high-dimensional neural activity recordings.
Generative Adversarial Networks
Generative Adversarial Networks (GANs) consist of two networks: a generator G(z) and a discriminator D(x). The discriminator tries to distinguish real samples from samples produced by the generator, while the generator is trained to fool it.
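For reference, the two-player minimax objective from the original formulation, in which D maximizes and G minimizes the value function:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$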
CirCNN: Accelerating and Compressing Deep Neural Networks using Block-Circulant Weight Matrices
CirCNN represents neural network weight matrices as block-circulant matrices, which reduces both parameters and computation: each circulant block is defined by a single vector and can be multiplied using the FFT.
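A small NumPy sketch of the core trick (illustrative, not the paper's hardware implementation): an n-by-n circulant block needs only n stored values, and its matrix-vector product reduces to an FFT-based circular convolution in O(n log n).

```python
import numpy as np

def circulant_matvec(c, x):
    # Multiply the circulant matrix with first column c by x via the
    # convolution theorem: C @ x == ifft(fft(c) * fft(x)).
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c, x = rng.standard_normal(n), rng.standard_normal(n)

# Explicit circulant matrix for verification: column k is roll(c, k),
# so n parameters stand in for an n x n weight block.
C = np.column_stack([np.roll(c, k) for k in range(n)])
assert np.allclose(C @ x, circulant_matvec(c, x))
```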
Building Efficient Deep Neural Networks with Unitary Group Convolutions
Unitary group convolutions (UGConvs) are a neural network building block that combines group convolution with unitary transforms in feature space.
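An illustrative toy sketch of the pattern in NumPy (an assumption on my part, not the paper's exact architecture): a block-diagonal group convolution mixes channels within groups, and a unitary transform, here the normalized DFT, then spreads information across groups, the role channel shuffles play in ShuffleNet.

```python
import numpy as np

rng = np.random.default_rng(0)
channels, groups = 8, 2
gsize = channels // groups
W = rng.standard_normal((groups, gsize, gsize))    # per-group 1x1 weights
x = rng.standard_normal(channels)                  # features at one spatial position

# Group convolution: each channel group is mixed only within itself
y = np.concatenate([W[g] @ x[g * gsize:(g + 1) * gsize] for g in range(groups)])

# Unitary transform across all channels (normalized DFT, F F* = I)
F = np.fft.fft(np.eye(channels)) / np.sqrt(channels)
z = F @ y.astype(complex)
assert np.allclose(np.linalg.norm(z), np.linalg.norm(y))   # unitarity preserves norm
```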
MNIST and CIFAR-10 datasets
The MNIST and CIFAR-10 datasets are used to test the theory that high-dimensional loss functions contain many saddle points.
VGG-16 and ResNet-50 DNNs
The VGG-16 and ResNet-50 DNNs are used as the victim models in the attack.
VGG and ResNet DNNs
The VGG and ResNet DNN families are used as the victim models in the attack.
Tensor Regression Networks with various Low-Rank Tensor Approximations
Tensor regression networks achieve high compression rates for neural networks while having only a slight impact on performance. They do so by imposing a low-rank tensor structure on the regression weight tensor.
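As one concrete instance (a sketch using a rank-R CP factorization; the paper surveys several low-rank forms, including Tucker and tensor-train), the full weight tensor is never materialized:

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, out, R = 16, 16, 10, 4
A = rng.standard_normal((d1, R))    # CP factors of the weight tensor W:
B = rng.standard_normal((d2, R))    # W[i,j,o] = sum_r A[i,r] B[j,r] C[o,r]
C = rng.standard_normal((out, R))   # R*(d1+d2+out) params vs d1*d2*out

X = rng.standard_normal((d1, d2))   # one input activation map

# Regress through the factors: y[o] = sum_{i,j,r} X[i,j] A[i,r] B[j,r] C[o,r]
y = C @ np.einsum('ir,ij,jr->r', A, X, B)

# Sanity check against the explicitly reconstructed weight tensor
W_full = np.einsum('ir,jr,kr->ijk', A, B, C)
assert np.allclose(y, np.einsum('ij,ijk->k', X, W_full))
```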
Deep Neural Networks
Deep Neural Networks (DNNs) are universal function approximators providing state-of-the-art solutions on a wide range of applications, including common perceptual tasks such as speech recognition and image classification.
Lookahead Pruning
This entry describes models rather than a dataset: the authors prune trained neural networks to evaluate the performance of their lookahead pruning method.
MobileNetV2
MobileNetV2 is a deep neural network model rather than a dataset; it serves as the workload for evaluating the performance of the proposed heterogeneous system.
LUT-NN: Empower Efficient Neural Network Inference with Centroid Learning and Table Lookup
The authors evaluate on a range of datasets, including CIFAR-10, GTSRB, and Google Speech Commands, among others.
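A toy sketch of the centroid-plus-lookup idea (hypothetical dimensions and a plain nearest-centroid match; the actual system learns the centroids end-to-end): input sub-vectors are snapped to centroids, and the layer output is assembled from precomputed centroid-times-weight tables instead of a matrix multiplication.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, sub, K = 16, 4, 4, 8                  # K centroids per sub-space
n_sub = d_in // sub
W = rng.standard_normal((d_in, d_out))
centroids = rng.standard_normal((n_sub, K, sub))   # assumed already learned

# Offline: precompute each centroid's contribution through its weight slice
tables = np.stack([centroids[s] @ W[s * sub:(s + 1) * sub] for s in range(n_sub)])

def lut_forward(x):
    # Online: replace the matmul with nearest-centroid lookups and adds
    y = np.zeros(d_out)
    for s in range(n_sub):
        xs = x[s * sub:(s + 1) * sub]
        k = np.argmin(((centroids[s] - xs) ** 2).sum(axis=1))
        y += tables[s, k]
    return y

x = rng.standard_normal(d_in)
print(lut_forward(x))    # approximates the exact product x @ W
```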
Training Dataset
The training dataset is a collection of publicly available Arabic corpora, including the unshuffled OSCAR corpus (Ortiz Suárez et al., 2020) and the Arabic Wikipedia dump, among others.
Exploring the Limits of Large Scale Pre-training
An empirical study exploring the limits of large-scale pre-training, rather than a new dataset.
Broken Neural Scaling Laws
A smoothly broken power law functional form that accurately models and extrapolates the scaling behaviors of deep neural networks for various architectures and tasks.
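For reference, the smoothly broken power law takes the general form below (notation may differ slightly from the paper): x is the quantity being scaled, such as compute, parameters, or data; y is the evaluation metric; and each of the n breaks occurs near scale d_i with sharpness f_i.

$$y = a + b\,x^{-c_0} \prod_{i=1}^{n} \left(1 + \left(\frac{x}{d_i}\right)^{1/f_i}\right)^{-c_i f_i}$$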