Our paper on deep virtual networks has been accepted to CVPR 2019


The following paper has been accepted to the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019):

  • Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks by Eunwoo Kim, Chanho Ahn, Philip Torr, and Songhwai Oh
    • Abstract: In this work, we address the problem of learning multiple tasks under varying memory requirements and propose a novel deep neural network architecture containing multiple virtual networks. Each virtual network is specialized for a single task and hierarchically structured. The hierarchical structure of a virtual network allows multiple different inference paths for different memory requirements. The building block of a virtual network is a disjoint collection of parameters of a network, which we call a unit. The lowest level of hierarchy in a virtual network is a unit, and higher levels of hierarchy contain the lower levels’ units along with additional units. Based on the provided memory requirement, a different level of hierarchy of a virtual network can be chosen to perform the task. A unit can be shared by different virtual networks, allowing multiple virtual networks in a single physical network to perform multiple tasks. In addition, shared units provide assistance to the target task with additional knowledge learned from other tasks, and this cooperative configuration of virtual networks makes it possible to handle multiple tasks using a single physical network in a memory-aware manner. Our experiments show that the proposed method performs competitively compared to existing approaches in several image classification tasks under the same memory requirements. Notably, our approach is highly efficient as it allows diverse inference paths for different memory constraints.
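
The core idea described in the abstract — virtual networks built from shared, disjoint parameter units, with a hierarchy level chosen to fit a memory budget — can be sketched in a few lines. This is only an illustrative toy model, not the authors' implementation; the names `VirtualNetwork` and `select_path` and the integer "memory costs" are assumptions for the sake of the example.

```python
# Illustrative sketch (not the paper's code): units are disjoint parameter
# groups with a memory cost, and a virtual network is a nested hierarchy of
# unit indices, where each higher level adds units on top of the lower ones.

class VirtualNetwork:
    def __init__(self, unit_sizes, hierarchy):
        # unit_sizes: memory cost (e.g., parameter count) of each unit
        # hierarchy: list of levels; each level lists the unit indices it
        # uses, and higher levels are supersets of lower levels
        self.unit_sizes = unit_sizes
        self.hierarchy = hierarchy

    def memory_cost(self, level):
        # Total memory of all units used at the given hierarchy level.
        return sum(self.unit_sizes[u] for u in self.hierarchy[level])

    def select_path(self, memory_budget):
        # Pick the deepest hierarchy level whose units fit the budget;
        # returns None if even the lowest level does not fit.
        best = None
        for level in range(len(self.hierarchy)):
            if self.memory_cost(level) <= memory_budget:
                best = level
        return best

# Two virtual networks for two tasks sharing unit 0 in one physical network:
sizes = [4, 3, 5, 2]                                   # memory cost per unit
task_a = VirtualNetwork(sizes, [[0], [0, 1], [0, 1, 2]])
task_b = VirtualNetwork(sizes, [[0], [0, 3]])

print(task_a.select_path(8))   # -> 1 (units 0+1 fit; adding unit 2 does not)
print(task_b.select_path(10))  # -> 1
```

Because unit 0 appears in both hierarchies, the two tasks reuse the same parameters, which is the mechanism the abstract credits for both memory savings and cross-task knowledge sharing.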
    • Abstract: In this work, we address the problem of learning multiple tasks under varying memory requirements and propose a novel deep neural network architecture containing multiple virtual networks. Each virtual network is specialized for a single task and hierarchically structured. The hierarchical structure of a virtual network allows multiple different inference paths for different memory requirements. The building block of a virtual network is a disjoint collection of parameters of a network, which we call a unit. The lowest level of hierarchy in a virtual network is a unit and higher levels of hierarchy contain lower levels’ units and other additional units. Based on the provided memory requirement, a different level of hierarchy of a virtual network can be chosen to perform the task. A unit can be shared by different virtual networks, allowing multiple virtual networks in a single physical network to perform multiple tasks. In addition, shared units provide an assistance to the target task with additional knowledge learned from another tasks and this cooperative configuration of virtual networks makes it possible to handle multiple tasks using a single physical network in a memory-aware manner. Our experiments show that the proposed method performs competitively compared to existing approaches in several image classification tasks, under the same memory requirements. Notably, ours is highly efficient as it allows diverse inference paths for different memory constraints.