Visual Context-Aware Convolution Filters for Transformation-Invariant Neural Networks

The proposed framework generates a unique, context-dependent set of convolution filters for each input image and combines their responses with max-pooling to produce transformation-invariant feature representations.
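The summary above describes the idea in a single sentence; the PyTorch snippet below is a minimal sketch of one plausible reading, assuming (a) a small generator network that maps a globally pooled context vector of each input image to a per-image bank of convolution kernels, and (b) max-pooling taken over the responses of 90-degree-rotated copies of each generated kernel. All names and architectural choices here (ContextAwareConv, the generator MLP, the four rotations) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareConv(nn.Module):
    """Hypothetical layer: generate input-conditioned filters, then
    max-pool over rotated copies of each filter so the response is
    invariant to those rotations (a sketch, not the paper's method)."""

    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        # Small "filter generator": global image context -> kernel weights.
        self.generator = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # B x in_ch x 1 x 1 context vector
            nn.Flatten(),              # B x in_ch
            nn.Linear(in_ch, 64), nn.ReLU(),
            nn.Linear(64, out_ch * in_ch * k * k),
        )

    def forward(self, x):
        b = x.size(0)
        # One unique filter bank per input image.
        w = self.generator(x).reshape(b * self.out_ch, self.in_ch,
                                      self.k, self.k)
        outs = []
        for r in range(4):
            # 90-degree-rotated copy of every generated filter.
            wr = torch.rot90(w, r, dims=(2, 3))
            # Grouped conv applies each image's own filters to that image.
            y = F.conv2d(x.reshape(1, b * self.in_ch, *x.shape[2:]),
                         wr, padding=self.k // 2, groups=b)
            outs.append(y.reshape(b, self.out_ch, *x.shape[2:]))
        # Max-pool across the transformation axis -> invariant response.
        return torch.stack(outs, dim=0).max(dim=0).values


if __name__ == "__main__":
    x = torch.randn(2, 3, 32, 32)
    layer = ContextAwareConv()
    print(layer(x).shape)  # torch.Size([2, 16, 32, 32])
```

Max-pooling over the transformation axis makes the layer's output identical for any of the pooled orientations of its filters, which is the mechanism the one-line summary points at; the choice of four 90-degree rotations is only one example of a transformation set.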

Data and Resources

Cite this as

Suraj Tripathi, Abhay Kumar, Chirag Singh (2024). Dataset: Visual Context-Aware Convolution Filters for Transformation-Invariant Neural Networks. https://doi.org/10.57702/1d8ex6mq

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Defined In    https://doi.org/10.48550/arXiv.1906.09986
Author        Suraj Tripathi
More authors  Abhay Kumar, Chirag Singh