GalliformeSpectra: A Hen Breed Dataset
A comprehensive dataset featuring ten distinct hen breeds, capturing unique characteristics and traits of each breed.
Memory Association Networks
Memory Association Networks (MANs) memorize and recall arbitrary data. The network has two memories; one is a queue-structured short-term memory intended to solve the...
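The queue-structured short-term memory mentioned in this entry can be sketched as a bounded FIFO buffer. The class name, capacity, and API below are illustrative assumptions, not the paper's actual implementation:

```python
from collections import deque

class ShortTermMemory:
    """Minimal sketch of a queue-structured short-term memory.

    Fixed capacity with FIFO eviction: when the queue is full, the
    oldest entry is dropped to make room for the newest one.
    """

    def __init__(self, capacity=4):
        self.queue = deque(maxlen=capacity)  # evicts oldest when full

    def write(self, item):
        self.queue.append(item)  # newest item enters at the tail

    def recall(self):
        return list(self.queue)  # contents, oldest to newest
```

With capacity 2, writing 1, 2, then 3 leaves the memory holding [2, 3]: the oldest item was evicted to admit the newest.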
Discriminative Deep Forest (DisDF)
The paper proposes the Discriminative Deep Forest (DisDF), a metric learning algorithm.
Authentication of Copy Detection Patterns under Machine Learning Attacks: A S...
Copy detection patterns (CDP) are an attractive technology that allows manufacturers to defend their products against counterfeiting. The main assumption behind the protection...
Kaggle Dataset
The dataset used in the paper is a publicly available dataset from Kaggle, used for demonstrating the effectiveness of the Lai loss function.
Machine Learning and Deep Learning Methods for Cybersecurity
A survey of machine learning and deep learning methods applied to cybersecurity.
A Data-Centric Optimization Framework for Machine Learning
DaCeML is a Data-Centric Machine Learning framework that provides a simple, flexible, and customizable pipeline for optimizing training of arbitrary deep neural networks.
Accelerating Deep Learning with Shrinkage and Recall
Deep learning is a very powerful machine learning model. It trains a large number of parameters across multiple layers and is very slow when the data is large-scale and...
BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion
The paper proposes a Bagging Deep Learning Training Framework (BEND) based on efficient neural network diffusion.
Adam: A method for stochastic optimization
Adam is an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments.
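A single Adam update can be sketched as follows; the default hyperparameters match those recommended in the paper, while the function name and calling convention are illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step for parameter theta given gradient grad."""
    m = beta1 * m + (1 - beta1) * grad       # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2    # biased second-moment estimate
    m_hat = m / (1 - beta1**t)               # bias correction (t starts at 1)
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

For example, repeatedly applying `adam_step` to minimize f(x) = x², whose gradient is 2x, drives x toward 0.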
Part VI: combining compressions
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
Low-rank compression of neural nets: Learning the rank of each layer
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
Part V: combining compressions
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
Model compression as constrained optimization
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
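The low-rank approximation route mentioned in these compression entries can be sketched with a truncated SVD. The helper below is an illustrative sketch, not the specific algorithm proposed in any of these papers:

```python
import numpy as np

def low_rank_compress(W, rank):
    """Replace an m x n weight matrix with a rank-r factorization A @ B.

    Truncated SVD keeps the top `rank` singular values, cutting the
    parameter count from m*n down to rank*(m + n).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # m x rank factor, singular values folded in
    B = Vt[:rank, :]             # rank x n factor
    return A, B
```

In a network, a compressed dense layer then applies `x @ A @ B` instead of `x @ W`; when `W` is exactly rank-r, the factorization reconstructs it up to floating-point error.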
MNIST and CIFAR-10 datasets
The MNIST and CIFAR-10 datasets are used to test the theory suggesting the existence of many saddle points in high-dimensional functions.
BERT: Pre-training of deep bidirectional transformers for language understanding
This paper proposes BERT, a pre-trained deep bidirectional transformer for language understanding.