Leapfrogging for parallelism in deep neural networks
The setting studied in the paper is a neural network with L layers, numbered 1,..., L, in which each hidden layer has N neurons. -
NITI: INTEGER TRAINING
The datasets used in this paper are MNIST, CIFAR10, and ImageNet. -
Training Over-Parameterized Deep Neural Networks
The paper addresses the training of over-parameterized deep neural networks rather than a particular dataset. -
NN-EMD: Efficiently Training Neural Networks using Encrypted Multi-sourced Da...
Training complex neural network models on third-party cloud-based infrastructure with multiple data sources is a promising approach in existing machine learning... -
Neural Network Training on In-memory-computing Hardware with Radix-4 Gradients
The paper studies neural network training on in-memory-computing hardware using radix-4 gradients rather than a particular dataset. -
TensorQuant
The TensorQuant toolbox is used to apply fixed-point quantization to DNNs. The simulations focus on popular CNN topologies, such as Inception V1, Inception V3, ResNet 50 and... -
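Fixed-point quantization of the kind TensorQuant simulates can be sketched in a few lines of numpy. The function name and the Q-format parameters below are illustrative, not TensorQuant's actual API:

```python
import numpy as np

def quantize_fixed_point(x, int_bits=2, frac_bits=6):
    """Simulate signed fixed-point quantization of a tensor.

    A sketch of what a toolbox like TensorQuant does internally;
    the exact API and default format here are hypothetical.
    """
    scale = 2.0 ** frac_bits
    # Round each value to the nearest representable fixed-point step.
    q = np.round(x * scale) / scale
    # Saturate to the range of a signed Q(int_bits).(frac_bits) format.
    limit = 2.0 ** int_bits
    return np.clip(q, -limit, limit - 1.0 / scale)

weights = np.array([-3.7, -0.126, 0.0, 0.1234, 1.5, 9.9])
print(quantize_fixed_point(weights))  # 9.9 saturates to 3.984375
```

Simulating quantization in floating point like this lets one measure the accuracy impact on a trained network before committing to fixed-point hardware.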
Transparent FPGA Acceleration with TensorFlow
The paper concerns neural network acceleration with TensorFlow on FPGAs rather than a specific dataset. -
Anomalous diffusion dynamics of learning in deep neural networks
The dataset used in the paper is not explicitly described, but the authors mention using ResNet-14, ResNet-20, ResNet-56, and ResNet-110 networks, as well as... -
Transformations between deep neural networks
The dataset used in the paper is a collection of neural networks trained on different tasks, including scalar functions, two-dimensional vector fields, and images of a rotating... -
Deep Learning Models
The paper uses a set of 20 well-known deep-learning models, including AlexNet, ResNet, VGG, DenseNet, etc. -
Progressive Feedforward Collapse of ResNet Training
The paper trains ResNets on various datasets, including MNIST, Fashion MNIST, CIFAR10, STL10, and CIFAR100. -
Im2win: An Efficient Convolution Paradigm on GPU
The convolutional neural network (CNN) is an important network model widely used in computer vision, image processing, and scientific computing. A CNN consists of an input layer, an... -
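Im2win refines the classic im2col lowering, in which input patches are copied into a matrix so the whole convolution becomes one matrix multiply. Since the entry does not describe Im2win's window layout, the sketch below shows plain im2col, not Im2win itself:

```python
import numpy as np

def im2col_conv2d(x, w):
    """2-D convolution via im2col lowering (valid padding, stride 1).

    Each input patch becomes a column; one GEMM then computes every
    output dot product. This is the baseline that Im2win improves on.
    """
    C, H, W = x.shape
    K, _, R, S = w.shape              # K filters of shape C x R x S
    Ho, Wo = H - R + 1, W - S + 1
    cols = np.empty((C * R * S, Ho * Wo))
    for i in range(Ho):
        for j in range(Wo):
            cols[:, i * Wo + j] = x[:, i:i + R, j:j + S].ravel()
    out = w.reshape(K, -1) @ cols     # one matrix multiply does all the work
    return out.reshape(K, Ho, Wo)

x = np.arange(16, dtype=float).reshape(1, 4, 4)
w = np.ones((1, 1, 2, 2))             # a single 2x2 summing filter
out = im2col_conv2d(x, w)
print(out)                            # top-left window 0+1+4+5 -> 10.0
```

The price of im2col is the R*S-fold data duplication in `cols`; window-reuse schemes like Im2win aim to cut exactly that memory overhead.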
Generative Adversarial Networks
Generative Adversarial Networks (GANs) consist of two networks: a generator G(z) and a discriminator D(x). The discriminator tries to distinguish real objects from objects... -
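The GAN game can be made concrete with the standard minimax value V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], which D maximizes and G minimizes. The toy 1-D parameterizations below are illustrative assumptions, not any paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x, theta):                     # discriminator: logistic unit on scalars
    return 1.0 / (1.0 + np.exp(-(theta[0] * x + theta[1])))

def G(z, phi):                       # generator: affine map of the noise
    return phi[0] * z + phi[1]

def gan_value(theta, phi, real, z):
    """Monte Carlo estimate of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]."""
    return np.mean(np.log(D(real, theta))) + np.mean(np.log(1.0 - D(G(z, phi), theta)))

real = rng.normal(4.0, 1.0, size=256)   # "real" data: N(4, 1)
z = rng.normal(0.0, 1.0, size=256)      # noise fed to the generator
bad_G = (1.0, 0.0)                      # generator still outputs N(0, 1)
good_G = (1.0, 4.0)                     # generator matches the real mean
theta = (2.0, -4.0)                     # discriminator thresholding near x = 2
print(gan_value(theta, bad_G, real, z), gan_value(theta, good_G, real, z))
```

A generator that matches the real distribution drives the value down (it fools this fixed discriminator), which is exactly the direction G's gradient steps push during training.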
Building Efficient Deep Neural Networks with Unitary Group Convolutions
Unitary group convolutions (UGConvs) are a building block for neural networks that combines a group convolution with unitary transforms in feature space. -
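The UGConv idea of pairing a unitary channel transform with a group convolution can be sketched as below. The choice of a Hadamard matrix as the unitary transform and the grouped 1x1 convolution are loose assumptions for illustration, not the paper's exact block:

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix (a real unitary transform), built recursively."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def ugconv_block(x, weights, groups):
    """Unitary transform across channels, then a grouped 1x1 convolution.

    Sketch of the UGConv pattern: the unitary mixing lets each group see
    information from all input channels despite the grouped convolution.
    """
    C, H, W = x.shape
    U = hadamard(C)
    mixed = np.einsum('dc,chw->dhw', U, x)        # unitary channel mixing
    g = C // groups
    out = np.empty_like(mixed)
    for i in range(groups):                        # grouped pointwise conv
        sl = slice(i * g, (i + 1) * g)
        out[sl] = np.einsum('dc,chw->dhw', weights[i], mixed[sl])
    return out

x = np.random.default_rng(1).normal(size=(8, 4, 4))
w = np.stack([np.eye(4) for _ in range(2)])        # identity filters per group
y = ugconv_block(x, w, groups=2)
# With identity group filters only the unitary mixing acts, which
# preserves the total energy (Frobenius norm) of the input.
print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))
```

Norm preservation is the practical appeal of unitary transforms here: they redistribute information across channels without amplifying or attenuating the signal.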
MNIST and CIFAR-10 datasets
The MNIST and CIFAR-10 datasets are used to test the theory suggesting the existence of many saddle points in high-dimensional functions. -
Tensor Regression Networks with various Low-Rank Tensor Approximations
Tensor regression networks achieve a high compression rate of neural networks while having only a slight impact on performance. They do so by imposing a low-tensor-rank structure on the... -
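The compression-versus-accuracy trade-off behind low-rank structure is easy to demonstrate. Tensor regression networks impose low *tensor* rank (e.g. Tucker or CP factorizations); the truncated-SVD matrix version below is a simplified stand-in, not the paper's method:

```python
import numpy as np

def low_rank_compress(W, rank):
    """Replace a weight matrix W by two thin factors A (m x r) and B (r x n)
    via truncated SVD, so W is approximated by A @ B with far fewer parameters."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]        # absorb singular values into the left factor
    B = Vt[:rank]
    return A, B

rng = np.random.default_rng(0)
# A weight matrix that is approximately rank 5 plus small noise.
W = rng.normal(size=(64, 5)) @ rng.normal(size=(5, 64)) + 0.01 * rng.normal(size=(64, 64))
A, B = low_rank_compress(W, rank=5)
params_full = W.size                  # 4096 parameters
params_lr = A.size + B.size           # 640 parameters
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(params_full, params_lr, round(rel_err, 3))  # big saving, tiny error
```

When the underlying weights are close to low rank, as here, the factorized form cuts parameters by more than 6x while reconstructing the matrix almost exactly; higher-order tensor factorizations extend the same idea to convolutional kernels.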
Deep Neural Networks
Deep Neural Networks (DNNs) are universal function approximators providing state-of-the-art solutions on a wide range of applications. Common perceptual tasks such as speech... -
Lookahead Pruning
The paper evaluates the proposed lookahead pruning method on neural networks; it does not introduce a dataset of its own.