This work formulates a temporal preprocessing scheme that makes artificial neural networks robust to variations in input video speed, a robustness they previously lacked. It also analyzes and classifies existing state-of-the-art models based on their response to extreme variations in input video speed.
Creation of a TensorFlow-based activity recognition framework to aid reproducible research and quick prototyping, and to reduce time spent on unnecessary pipeline development.
This project focused on the development of a PyTorch-based video platform that can handle any image- or video-based application with minimal changes. It includes strong bookkeeping, mimics large mini-batch computations on low-memory systems, and provides a large suite of video-specific preprocessing functions.
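Mimicking large mini-batch computations on low-memory hardware is commonly done via gradient accumulation: gradients from several small "micro-batches" are summed before a single parameter update, reproducing the update a full batch would have produced. The sketch below illustrates the idea on a toy scalar least-squares model; the function names and the micro-batch size are illustrative, not taken from the platform itself.

```python
def grad(w, x, y):
    # Gradient of squared error 0.5 * (w*x - y)^2 w.r.t. scalar weight w.
    return (w * x - y) * x

def sgd_full_batch(w, batch, lr):
    # One SGD step using the mean gradient over the whole batch.
    g = sum(grad(w, x, y) for x, y in batch) / len(batch)
    return w - lr * g

def sgd_accumulated(w, batch, lr, micro_size):
    # Accumulate micro-batch gradients; apply a single update at the end,
    # so peak memory only needs to hold one micro-batch at a time.
    acc = 0.0
    for i in range(0, len(batch), micro_size):
        micro = batch[i:i + micro_size]
        acc += sum(grad(w, x, y) for x, y in micro)
    return w - lr * acc / len(batch)

batch = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 7.0)]
w_full = sgd_full_batch(0.0, batch, lr=0.1)
w_acc = sgd_accumulated(0.0, batch, lr=0.1, micro_size=2)
# Both paths produce the same parameter update.
```

Because the accumulated sum over micro-batches equals the full-batch sum, the two updates match exactly (up to floating-point associativity), which is what lets small-memory systems train as if they used large batches.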
This work revolves around the design of a state-of-the-art online streaming feature selection algorithm called Geometric Online Approach, which remains fully functional when both features and samples are streaming simultaneously.
This work proposes a novel label-based curriculum learning algorithm called Learning with Incremental Labels and Adaptive Compensation. It emphasizes sample equality while incrementally learning labels, and regularizes learning by adaptively modifying the target label vector. It performs label-based curriculum learning while surpassing the performance of standard batch learning techniques.
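One way to adaptively modify the target label vector is to soften the one-hot target only for samples the model currently misclassifies, in the spirit of label smoothing. The sketch below is a hedged illustration of that general idea, not the authors' exact compensation rule; the function name, the `eps` parameter, and the trigger condition are all assumptions.

```python
def compensated_target(label, num_classes, predicted, eps=0.2):
    # Illustrative adaptive compensation (assumed rule, not the paper's exact one):
    # correctly classified samples keep a hard one-hot target,
    # misclassified samples get a softened target that moves eps of the
    # probability mass uniformly onto the non-target classes.
    if predicted == label:
        return [1.0 if c == label else 0.0 for c in range(num_classes)]
    off = eps / (num_classes - 1)
    return [1.0 - eps if c == label else off for c in range(num_classes)]

soft = compensated_target(label=1, num_classes=4, predicted=3)  # misclassified
hard = compensated_target(label=1, num_classes=4, predicted=1)  # correct
```

Softening only the hard cases regularizes the loss where the model is struggling while leaving confident, correct predictions untouched.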
This project introduces the notion of using conditional mutual information as a measure of dependence between neurons. The method, titled MINT, focuses on passing through a majority of information while retaining only a small percentage of neurons between layers. Using only a single train-prune-retrain step, MINT is extremely competitive with commonly used DNN pruning baselines.
SNACS advances the state of the art in single-shot neural network pruning by focusing on three key aspects of the pruning pipeline: 1) faster computation of connectivity scores, which determine the importance of a weight; 2) guidelines that automate the definition of upper pruning-percentage limits for every layer of a neural network; and 3) identification of sensitivity as a priority measure to determine which weights are protected or pruned.
We all know that larger and deeper neural network models are the modus operandi in tackling real-world problems. However, the focus on large-capacity DNNs runs counter to the requirements of their hardware implementation. Neural network compression via pruning has emerged as a popular approach that can help bridge the gap between exorbitantly large theoretical models and their slimmer hardware counterparts, while maintaining a desired level of performance. Most approaches to neural network pruning focus on deterministic constraints on the learned weight matrices, either by evaluating a filter’s importance using appropriate norms or by modifying the objective function with sparsity constraints. While these offer a useful way to approximate contributions from filters, they either ignore the dependency between layers or solve a needlessly difficult optimization objective. In this talk, I propose an alternative approach to neural network pruning, using the power of Conditional Mutual Information (CMI) under a probabilistic framework. In this work, I use CMI as a measure of connectivity between filters of adjacent layers across the entire DNN, which can then be used to prune filters that offer less information to subsequent layers. Expanding on this, I show how we can leverage ideas from the original weight-based approaches and our newly proposed probabilistic framework to offer a hybrid solution to pruning that is extremely effective.
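The connectivity idea above can be sketched in simplified form. The toy below scores each filter in a layer by the mutual information between its (discretized) activations and those of the next layer's filters, then keeps the highest-scoring filters; this uses plain pairwise mutual information as a stand-in for the full conditional mutual information the method actually employs, and all names and the discretization are illustrative assumptions.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    # MI (in bits) between two equal-length sequences of discrete symbols.
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def prune_least_informative(acts_l, acts_next, keep):
    # acts_l[i]: discretized activations of filter i over a batch.
    # Score each filter by total information shared with the next layer,
    # then keep only the `keep` most informative filters.
    scores = [sum(mutual_information(a, b) for b in acts_next)
              for a in acts_l]
    ranked = sorted(range(len(acts_l)), key=lambda i: -scores[i])
    return sorted(ranked[:keep])

# Filter 0 tracks the next layer exactly; filter 1 is constant (zero MI).
kept = prune_least_informative([[0, 1, 0, 1], [2, 2, 2, 2]],
                               [[0, 1, 0, 1]], keep=1)
```

A constant filter carries no information about downstream activations, so it is the first to be pruned; conditioning on other filters, as in the full CMI formulation, additionally removes redundant filters whose information is already carried by their neighbors.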