Visual Context-Aware Convolution Filters for Transformation-Invariant Neural Networks
The proposed framework generates an input-specific set of context-dependent filters from the input image and combines their responses with max-pooling to produce transformation-invariant feature representations.
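A minimal NumPy sketch of this idea, under stated assumptions: the filter generator below is a hypothetical stand-in (it scales a fixed random bank by a global image statistic) for the paper's learned filter-generating network, and invariance is illustrated only for 90-degree rotations by max-pooling each filter's responses over spatial positions and over the four rotated copies of the filter.

```python
import numpy as np

def generate_filters(image, num_filters=4, size=3, seed=0):
    # Hypothetical context-dependent generator: a stand-in for a learned
    # filter-generating network. The bank is scaled by a global image
    # statistic (std), so the filters depend on the input's context.
    rng = np.random.default_rng(seed)
    base = rng.standard_normal((num_filters, size, size))
    return base * (image.std() + 1e-8)

def corr2d_valid(image, kernel):
    # Plain 'valid'-mode 2-D cross-correlation.
    k = kernel.shape[0]
    H, W = image.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

def invariant_features(image, num_filters=4, size=3, seed=0):
    # Max over all spatial positions and all four 90-degree rotations of
    # each filter; this per-filter scalar is invariant to 90-degree
    # rotations of the input image.
    filters = generate_filters(image, num_filters, size, seed)
    feats = []
    for f in filters:
        responses = [corr2d_valid(image, np.rot90(f, r)) for r in range(4)]
        feats.append(np.stack(responses).max())
    return np.array(feats)
```

Because the filter bank depends only on rotation-invariant statistics of the input and the pooling covers all filter orientations, `invariant_features(img)` and `invariant_features(np.rot90(img))` agree.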
BibTeX: