Revisiting Dynamic Convolution via Matrix Decomposition

Yunsheng Li1

Yinpeng Chen2

Xiyang Dai2

Mengchen Liu2

Dongdong Chen2

Lu Yuan2

Zicheng Liu2


UC San Diego1, Microsoft2

Overview


Recent research on dynamic convolution shows a substantial performance boost for efficient CNNs, due to the adaptive aggregation of K static convolution kernels. However, it has two limitations: (a) it increases the number of convolutional weights by K times, and (b) the joint optimization of dynamic attention and static convolution kernels is challenging. In this project, we revisit dynamic convolution from a new perspective of matrix decomposition and reveal that the key issue is that dynamic convolution applies dynamic attention over channel groups after projecting into a higher-dimensional latent space. To address this issue, we propose dynamic channel fusion to replace dynamic attention over channel groups. Dynamic channel fusion not only enables a significant dimension reduction of the latent space, but also mitigates the joint optimization difficulty. As a result, our method is easier to train and requires significantly fewer parameters without sacrificing accuracy.
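Concretely, the decomposition replaces the K aggregated kernels with one static kernel plus a low-rank dynamic residual, W(x) = W0 + P Φ(x) Qᵀ, where Φ(x) is an L×L dynamic channel-fusion matrix with L ≪ C. The PyTorch snippet below is a minimal sketch of this idea for a 1×1 convolution; the module name, the initialization, and the pooling-based predictor for Φ(x) are illustrative assumptions, not the reference implementation.

```python
import torch
import torch.nn as nn

class DynamicConvDecomposition(nn.Module):
    """Sketch of dynamic convolution decomposition for a 1x1 conv:
    W(x) = W0 + P @ Phi(x) @ Q^T, with Phi(x) an L x L dynamic
    channel-fusion matrix (L << C) predicted from global context."""

    def __init__(self, channels: int, latent_dim: int):
        super().__init__()
        self.L = latent_dim
        self.W0 = nn.Conv2d(channels, channels, kernel_size=1, bias=False)  # static kernel
        self.Q = nn.Parameter(0.01 * torch.randn(channels, latent_dim))  # compress C -> L
        self.P = nn.Parameter(0.01 * torch.randn(channels, latent_dim))  # expand L -> C
        # Lightweight head predicting Phi(x) from globally pooled features
        # (an assumed design, analogous to a squeeze-and-excitation branch).
        self.phi = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, latent_dim * latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = x.shape[0]
        phi = self.phi(x).view(b, self.L, self.L)  # dynamic channel fusion, per sample
        # Dynamic residual P @ Phi(x) @ Q^T: a (C x C) matrix per sample.
        dyn_w = torch.einsum('cl,blm,dm->bcd', self.P, phi, self.Q)
        # y = W0 x + (P Phi(x) Q^T) x
        return self.W0(x) + torch.einsum('bcd,bdhw->bchw', dyn_w, x)

layer = DynamicConvDecomposition(channels=64, latent_dim=8)
out = layer(torch.randn(2, 64, 32, 32))  # -> shape (2, 64, 32, 32)
```

Because the dynamic part lives in the L-dimensional latent space, shrinking L trades dynamic capacity for parameters without touching the static kernel W0.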

Paper

Published in International Conference on Learning Representations (ICLR), 2021.

arXiv

Repository

Bibtex

Models


Architecture: Dynamic convolution decomposition on a tensor.

Architecture: Dynamic convolution decomposition embedded in the main network branch.

Highlights

Highly efficient dynamic operations in a low-dimensional latent space.

Benefits

  1. A more compact model with fewer parameters (a rough parameter count is sketched below).
  2. Faster convergence during training and improved recognition accuracy.
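To make the compactness claim concrete, here is a back-of-the-envelope parameter count for a single 1×1 layer with C input/output channels. The sizes C = 256, K = 4, L = 16 are illustrative assumptions, not values from the paper, and the small attention head of dynamic convolution is omitted for simplicity.

```python
C, K, L = 256, 4, 16                 # illustrative sizes only (L << C)
dy_conv = K * C * C                  # K static kernels to aggregate
                                     # (small attention head omitted)
dcd = C * C + 2 * C * L + C * L * L  # W0 + P + Q + Phi(x) predictor
print(dy_conv, dcd)                  # 262144 vs 139264: roughly half
```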


Authors



Yunsheng Li

UC San Diego