Leapfrogging for parallelism in deep neural networks
The paper considers a neural network with L layers numbered 1, ..., L, in which each of the hidden layers has N neurons.
NITI: INTEGER TRAINING
The datasets used in this paper are MNIST, CIFAR10, and ImageNet.
Training Over-Parameterized Deep Neural Networks
The paper concerns the training of over-parameterized deep neural networks rather than a specific benchmark dataset.
TensorQuant
The TensorQuant toolbox is used to apply fixed-point quantization to DNNs. The simulations focus on popular CNN topologies such as Inception V1, Inception V3, ResNet 50 and...
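As a rough illustration of the fixed-point quantization such simulations apply (a minimal sketch; the bit-widths and function name here are illustrative assumptions, not TensorQuant's API):

```python
def quantize_fixed_point(x, int_bits=4, frac_bits=8):
    # Round x to the nearest multiple of 2**-frac_bits, then saturate
    # to the range of a signed (int_bits + frac_bits)-bit value.
    scale = 1 << frac_bits
    q = round(x * scale)
    q_max = (1 << (int_bits + frac_bits - 1)) - 1
    q_min = -(1 << (int_bits + frac_bits - 1))
    return max(q_min, min(q_max, q)) / scale
```

Applying this to every weight and activation mimics the rounding and saturation error a fixed-point hardware datapath would introduce.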
Transparent FPGA Acceleration with TensorFlow
The paper describes accelerating neural networks with TensorFlow on an FPGA rather than introducing a dataset.
Anomalous diffusion dynamics of learning in deep neural networks
The dataset is not explicitly described, but the authors used ResNet-14, ResNet-20, ResNet-56, and ResNet-110 networks, as well as...
Transformations between deep neural networks
The paper studies neural networks trained on different tasks, including scalar functions, two-dimensional vector fields, and images of a rotating...
Progressive Feedforward Collapse of ResNet Training
The paper trains ResNets on various datasets, including MNIST, Fashion MNIST, CIFAR10, STL10, and CIFAR100.
Breast Cancer
A neural network with a single hidden layer of 64 units and ReLU activations. A prior precision of ε = 1, a minibatch size of 128, and 16 Monte-Carlo samples are used for...
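A minimal sketch of the Monte-Carlo prediction scheme such a setup implies, assuming a scalar input and output for brevity (the sampler and weight shapes are illustrative, not the paper's code):

```python
def forward(x, w1, b1, w2, b2):
    # Single hidden layer with ReLU activations, scalar input and output.
    hidden = [max(0.0, w * x + b) for w, b in zip(w1, b1)]
    return sum(v * h for v, h in zip(w2, hidden)) + b2

def mc_predict(x, sample_weights, n_samples=16):
    # Average the network output over Monte-Carlo samples of the weights,
    # e.g. drawn from an approximate posterior over the parameters.
    total = 0.0
    for _ in range(n_samples):
        w1, b1, w2, b2 = sample_weights()
        total += forward(x, w1, b1, w2, b2)
    return total / n_samples
```

With a stochastic `sample_weights`, the spread of the sampled outputs also gives a crude predictive-uncertainty estimate.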
Batch Normalization: Accelerating Deep Network Training by Reducing Internal ...
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. -
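Batch normalization addresses this by standardizing each layer's inputs using mini-batch statistics. A minimal sketch for a batch of scalar activations (the full method also tracks running statistics for inference):

```python
def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize the mini-batch to zero mean and unit variance, then apply
    # the learned scale (gamma) and shift (beta).
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in xs]
```

Because gamma and beta are learned, the layer can still recover the identity transform if normalization turns out to be harmful for that layer.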
lfads-torch: A modular and extensible implementation of latent factor analysi...
Latent factor analysis via dynamical systems (LFADS) is an RNN-based variational sequential autoencoder that achieves state-of-the-art performance in denoising high-dimensional...
Generative Adversarial Networks
Generative Adversarial Networks (GANs) consist of two networks: a generator G(z) and a discriminator D(x). The discriminator tries to distinguish real objects from objects...
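The two-player game is the minimax objective V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))]. A per-sample sketch of that value (the function name is illustrative):

```python
import math

def gan_value(d_real, d_fake):
    # Per-sample GAN objective: log D(x) + log(1 - D(G(z))).
    # The discriminator maximizes this; the generator minimizes
    # the second term.
    return math.log(d_real) + math.log(1.0 - d_fake)
```

At D(x) = D(G(z)) = 1/2, i.e. a fully confused discriminator, the value is −2 log 2, which is the equilibrium value in the original GAN analysis.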
MNIST and CIFAR-10 datasets
The MNIST and CIFAR-10 datasets are used to test the theory suggesting the existence of many saddle points in high-dimensional functions.
Deep Neural Networks
Deep Neural Networks (DNNs) are universal function approximators providing state-of-the-art solutions on a wide range of applications. Common perceptual tasks such as speech...
Lookahead Pruning
The paper evaluates the performance of the proposed lookahead pruning method on neural networks.
MobileNetV2
A MobileNetV2 model, a type of deep neural network, is used to evaluate the performance of the proposed heterogeneous system.
LUT-NN: Empower Efficient Neural Network Inference with Centroid Learning and...
The dataset is not explicitly described, but the authors used a range of datasets, including CIFAR-10, GTSRB, Google Speech Command,...