Gradient Adversarial Training
The dataset used for gradient adversarial training of neural networks.
FP6-LLM: Efficiently Serving Large Language Models Through FP6-Centric Algori...
Six-bit quantization can effectively reduce the size of large language models and preserve the model quality consistently across varied applications.
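FP6-LLM's actual format is a 6-bit floating-point representation with a custom GPU kernel; as a much simpler stand-in, the sketch below shows plain symmetric 6-bit *integer* quantization of a weight tensor, which illustrates the same size-versus-fidelity trade-off. All function names here are illustrative, not from the paper.

```python
import numpy as np

def quantize_int6(w, n_bits=6):
    """Symmetric per-tensor quantization to n_bits signed integers (illustrative)."""
    qmax = 2 ** (n_bits - 1) - 1                      # 31 for 6 bits
    scale = np.abs(w).max() / qmax                    # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map stored integers back to approximate float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)    # toy "weight matrix"
q, s = quantize_int6(w)
w_hat = dequantize(q, s)                              # reconstruction error is at most s/2
```

Each weight now needs 6 bits of payload instead of 32, at the cost of a bounded rounding error per element.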
Multi-Labelled Value Networks for Computer Go
A new approach to a value network architecture for the game of Go, called a multi-labelled (ML) value network. The ML value network has three advantages, offering different...
GalliformeSpectra: A Hen Breed Dataset
A comprehensive dataset featuring ten distinct hen breeds, capturing unique characteristics and traits of each breed.
Memory Association Networks
Memory Association Networks (MANs) that memorize and remember any data. This neural network has two memories. One consists of a queue-structured short-term memory to solve the...
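The snippet above does not specify the MAN architecture, but the queue-structured short-term memory it mentions can be illustrated with a fixed-capacity FIFO buffer: new items push out the oldest ones. This is a minimal sketch under that assumption, not the paper's implementation.

```python
from collections import deque

class ShortTermMemory:
    """Illustrative queue-structured short-term memory with a fixed capacity."""

    def __init__(self, capacity):
        # deque with maxlen silently evicts the oldest entry when full (FIFO)
        self.buf = deque(maxlen=capacity)

    def write(self, item):
        self.buf.append(item)

    def recall(self):
        """Return the currently remembered items, oldest first."""
        return list(self.buf)

m = ShortTermMemory(capacity=3)
for x in [1, 2, 3, 4]:
    m.write(x)
# m.recall() == [2, 3, 4]  (item 1 was evicted when capacity was exceeded)
```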
Discriminative Deep Forest (DisDF)
The paper proposes the Discriminative Deep Forest (DisDF), a metric-learning algorithm.
Authentication of Copy Detection Patterns under Machine Learning Attacks: A S...
Copy detection patterns (CDP) are an attractive technology that allows manufacturers to defend their products against counterfeiting. The main assumption behind the protection...
Kaggle Dataset
The dataset used in the paper is a publicly available dataset from Kaggle, used for demonstrating the effectiveness of the Lai loss function.
Machine Learning and Deep Learning Methods for Cybersecurity
Machine learning and deep learning methods for cybersecurity
A Data-Centric Optimization Framework for Machine Learning
DaCeML is a Data-Centric Machine Learning framework that provides a simple, flexible, and customizable pipeline for optimizing training of arbitrary deep neural networks.
Accelerating Deep Learning with Shrinkage and Recall
Deep learning is a very powerful machine learning model. It trains a large number of parameters across multiple layers and is very slow when the data is large-scale and...
BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion
The paper proposes a Bagging Deep Learning Training Framework (BEND) based on efficient neural network diffusion.
Adam: A method for stochastic optimization
This dataset is used to test the robustness of watermarking methods against adaptive attacks.
Part VI: combining compressions
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
Low-rank compression of neural nets: Learning the rank of each layer
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
Part V: combining compressions
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
Model compression as constrained optimization
Model compression is generally performed by using quantization, low-rank approximation or pruning, for which various algorithms have been researched in recent years.
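The low-rank approximation mentioned in the entries above can be sketched in a few lines: factor a weight matrix W (m×n) into A (m×r) and B (r×n) via truncated SVD, storing (m+n)·r numbers instead of m·n. This is a generic illustration of the technique, not any of these papers' specific algorithms (which, e.g., also learn the rank per layer).

```python
import numpy as np

def low_rank_compress(w, rank):
    """Truncated-SVD low-rank approximation of a weight matrix (illustrative)."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]      # (m, r): left factor with singular values folded in
    b = vt[:rank]                   # (r, n): right factor
    return a, b

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 6))
a, b = low_rank_compress(w, rank=3)
w_hat = a @ b                       # best rank-3 approximation in Frobenius norm
# storing a and b costs (8 + 6) * 3 = 42 numbers versus 48 for w
```

At inference time the dense layer `x @ w` is replaced by `(x @ a) @ b`, which is also cheaper whenever r is small relative to m and n.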