Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
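
For reference, the relevant setting in a standard Jekyll _config.yml looks like this (a minimal excerpt, assuming the default setup):

    # _config.yml
    future: false   # do not publish posts dated in the future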

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Publications

ViP: Video Platform for PyTorch

Published in arXiv as Technical Report, 2019

This project focused on the development of a PyTorch-based video platform that can handle any image- or video-based application with minimal changes. It includes strong bookkeeping, mimics large mini-batch computations on low-memory systems, and provides a large suite of video-specific preprocessing functions.
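
As an illustration of the low-memory trick, here is a minimal gradient-accumulation sketch in PyTorch; it shows the general idea only, and the names are illustrative rather than taken from ViP:

    import torch

    def train_epoch(model, loss_fn, optimizer, loader, accum_steps=8):
        # Accumulate gradients over several small batches so that one
        # optimizer update behaves like a single large mini-batch on a
        # low-memory device.
        optimizer.zero_grad()
        for i, (clips, labels) in enumerate(loader):
            # Scale each loss so the accumulated sum matches the mean
            # over the equivalent large mini-batch.
            loss = loss_fn(model(clips), labels) / accum_steps
            loss.backward()  # gradients sum into .grad across iterations
            if (i + 1) % accum_steps == 0:
                optimizer.step()
                optimizer.zero_grad()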

A Geometric Approach to Online Streaming Feature Selection

Published in arXiv as Technical Report, 2020

This work revolves around the design of a state-of-the-art online streaming feature selection algorithm, called the Geometric Online Approach, which remains fully functional when both features and samples are streaming simultaneously.
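
For context, the generic online streaming protocol such an algorithm operates under can be sketched as below; score_fn and budget are purely hypothetical stand-ins for the paper's geometric criterion:

    def select_streaming_features(feature_stream, score_fn, budget):
        # Generic online streaming feature selection skeleton: each
        # feature arrives once and must be kept or discarded on the
        # spot, without access to the full feature set.
        selected = []
        for feature in feature_stream:
            if score_fn(feature, selected) > 0 and len(selected) < budget:
                selected.append(feature)
        return selected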

Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation

Published in BMVC, 2020

This work proposes a novel label-based curriculum learning algorithm called Learning with Incremental Labels and Adaptive Compensation. It emphasizes sample equality while incrementally learning labels, and regularizes learning by adaptively modifying the target label vector. The result is a label-based curriculum that surpasses the performance of standard batch learning techniques.
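
A minimal sketch of the adaptive-compensation idea, assuming a simple uniform smoothing scheme (the exact rule and the eps value here are illustrative, not the paper's):

    import torch

    def adaptive_targets(labels, misclassified, num_classes, eps=0.3):
        # One-hot targets for every sample.
        targets = torch.nn.functional.one_hot(labels, num_classes).float()
        # For samples the model previously got wrong, soften the target:
        # keep most of the mass on the true class, spread the rest evenly.
        soft = targets * (1.0 - eps) + eps / num_classes
        targets[misclassified] = soft[misclassified]
        return targets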

MINT: Deep Network Compression via Mutual Information-based Neuron Trimming

Published in ICPR, 2021

This project introduces the notion of using conditional mutual information as a measure of dependence between neurons. The method, titled MINT, focuses on passing through a majority of information while retaining only a small percentage of neurons between layers. Using only a single train-prune-retrain step, MINT is extremely competitive with commonly used DNN pruning baselines.
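
A rough sketch of the trimming step is below. Estimating conditional mutual information is the involved part; absolute correlation between recorded activations stands in for it here, and the interface (a layer's (out, in) weight matrix plus activation batches) is illustrative:

    import torch

    def trim_neurons(weight, acts_in, acts_out, keep_ratio=0.2):
        # weight: (out, in) matrix of a layer; acts_in: (N, in) and
        # acts_out: (N, out) activations recorded on a held-out batch.
        # MINT scores neuron pairs with conditional mutual information;
        # absolute correlation is a simple stand-in proxy here.
        x = (acts_in - acts_in.mean(0)) / (acts_in.std(0) + 1e-8)
        y = (acts_out - acts_out.mean(0)) / (acts_out.std(0) + 1e-8)
        scores = (y.T @ x).abs() / acts_in.shape[0]   # shape: (out, in)
        # Retain only the most informative fraction of connections.
        k = max(1, int(keep_ratio * scores.numel()))
        threshold = scores.flatten().topk(k).values.min()
        return weight * (scores >= threshold).float()

A single fine-tuning pass after masking corresponds to the train-prune-retrain step mentioned above.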

SNACS: Slimming Neural Networks Using Adaptive Connectivity Scores

Published in TNNLS (Under Review), 2021

SNACS advances the state of the art in single-shot neural network pruning by focusing on three key aspects of the pruning pipeline: 1) faster computation of connectivity scores, which determine the importance of a weight; 2) guidelines that automate the choice of upper pruning-percentage limits for every layer of a neural network; and 3) the identification of sensitivity as a priority measure for deciding which weights are protected or pruned.
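
As an illustration of point 2, here is a sketch of how per-layer upper pruning limits might be enforced once connectivity scores exist; the names and thresholding rule are hypothetical, not SNACS's exact procedure:

    import torch

    def masks_with_limits(scores_per_layer, upper_limits):
        # Remove at most upper_limits[name] (a fraction) of the weights
        # in each layer, dropping the lowest-scoring connections first,
        # so that sensitive layers with small limits stay protected.
        masks = {}
        for name, scores in scores_per_layer.items():
            k = int(upper_limits[name] * scores.numel())
            if k == 0:
                masks[name] = torch.ones_like(scores)
                continue
            cutoff = scores.flatten().kthvalue(k).values
            masks[name] = (scores > cutoff).float()
        return masks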

Talks

Modelling Connectivity: An Alternative Approach to Neural Network Compression

Published:

We all know that larger and deeper neural network models are the modus operandi for tackling real-world problems. However, the focus on large-capacity DNNs runs counter to the requirements for their hardware implementation. Neural network compression via pruning has emerged as a popular approach that can help bridge the gap between exorbitantly large theoretical models and their slimmer hardware counterparts while maintaining a desired level of performance. Most approaches to neural network pruning apply deterministic constraints to the learned weight matrices, either by evaluating a filter's importance using appropriate norms or by modifying the objective function with sparsity constraints. While these offer a useful way to approximate contributions from filters, they either ignore the dependency between layers or solve a needlessly more difficult optimization problem.

In this talk, I propose an alternative approach to neural network pruning that uses the power of Conditional Mutual Information (CMI) under a probabilistic framework. In this work, I use CMI as a measure of connectivity between filters of adjacent layers across the entire DNN, which can then be used to prune filters that offer less information to subsequent layers. Expanding on this, I show how we can leverage ideas from the original weight-based approaches and our newly proposed probabilistic framework to offer a hybrid solution to pruning that is extremely effective.
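
To make the hybrid idea concrete, one possible way to blend the two criteria is sketched below; the blending rule and the alpha value are illustrative, not the talk's exact formulation:

    import torch

    def hybrid_scores(weight, connectivity, alpha=0.5):
        # Blend a weight-magnitude criterion with a CMI-style
        # connectivity criterion of the same shape; alpha trades off
        # the two, and 0.5 is an arbitrary illustrative choice.
        mag = weight.abs() / weight.abs().max()
        con = connectivity / connectivity.max()
        return alpha * mag + (1.0 - alpha) * con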