About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published in arXiv as Technical Report, 2018
This work formulates a temporal preprocessing scheme that makes artificial neural networks robust to variations in input video speed, a robustness they previously lacked. It also analyzes and classifies existing state-of-the-art models by their response to extreme variations in the speed of input videos.
Published in arXiv as Technical Report, 2018
Creation of a tensorflow-based activity recognition framework to aid reproducible research and quick prototyping, and to reduce the time consumed by unnecessary pipeline development.
Published in arXiv as Technical Report, 2019
This project focused on the development of a pytorch-based video platform that can handle any image- or video-based application with minimal changes. It includes strong bookkeeping, mimics large mini-batch computations on low-memory systems, and offers a large suite of video-specific preprocessing functions.
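One standard way to mimic large mini-batch computations on low-memory systems is gradient accumulation: process the batch in small micro-batches, average their gradients, and apply a single update. The platform's actual mechanism is not detailed here, so this is a minimal plain-Python sketch of the idea using an illustrative 1-D linear model; all names are hypothetical.

```python
# Hedged sketch: emulate a large effective batch by accumulating gradients
# over micro-batches. Model: y = w * x with squared-error loss.

def grad_w(w, xs, ys):
    """Mean gradient of 0.5 * (w*x - y)**2 over a micro-batch."""
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def accumulated_step(w, batch_x, batch_y, micro_size, lr=0.1):
    """One optimizer step whose effective batch is the whole batch,
    processed only micro_size samples at a time."""
    total, n_micro = 0.0, 0
    for i in range(0, len(batch_x), micro_size):
        total += grad_w(w, batch_x[i:i + micro_size], batch_y[i:i + micro_size])
        n_micro += 1
    return w - lr * (total / n_micro)   # average the gradients, update once

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]                              # true w = 2
w_full = accumulated_step(1.0, xs, ys, micro_size=4)   # one full batch
w_accum = accumulated_step(1.0, xs, ys, micro_size=2)  # two micro-batches
```

With equally sized micro-batches the accumulated step matches the full-batch step exactly, which is what lets a small-memory machine reproduce large-batch training dynamics.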
Published in arXiv as Technical Report, 2020
This work revolves around the design of a state-of-the-art online streaming feature selection algorithm, called the Geometric Online Approach, which remains fully functional when both features and samples are streaming simultaneously.
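The Geometric Online Approach itself is not specified here, so the following is only a generic skeleton of streaming feature selection: each arriving feature is kept if it is relevant to the target and not redundant with an already-selected feature. The correlation-threshold rule and all names are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of a streaming feature selection loop (not the actual
# Geometric Online Approach): features arrive one at a time and are
# accepted or discarded immediately.
import math

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def stream_select(feature_stream, target, relevance=0.5, redundancy=0.95):
    """Keep each arriving feature iff it is relevant to the target and
    not (nearly) redundant with a feature already selected."""
    selected = {}
    for name, values in feature_stream:
        if abs(corr(values, target)) < relevance:
            continue    # irrelevant: discard on arrival
        if any(abs(corr(values, kept)) >= redundancy
               for kept in selected.values()):
            continue    # redundant with a kept feature
        selected[name] = values
    return selected

y = [0.0, 1.0, 2.0, 3.0]
stream = [
    ("f1", [0.1, 1.1, 2.0, 3.2]),   # tracks the target: relevant
    ("f2", [0.1, 1.1, 2.0, 3.2]),   # duplicate of f1: redundant
    ("f3", [5.0, -1.0, 4.0, 0.0]),  # noise: irrelevant
]
kept = stream_select(stream, y)
```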
Published in BMVC, 2020
This work proposes a novel label-based curriculum learning algorithm called Learning with Incremental Labels and Adaptive Compensation. It emphasizes sample equality while incrementally learning labels, and regularizes learning by adaptively modifying the target label vector. It performs label-based curriculum learning while surpassing the performance of standard batch learning techniques.
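"Adaptively modifying the target label vector" can be pictured as smoothing the one-hot target for samples the model currently misclassifies, while leaving correctly classified samples with hard targets. The paper's exact compensation rule is not given here; this plain-Python sketch uses simple uniform smoothing as an illustrative stand-in, and all names are hypothetical.

```python
# Hedged sketch: adaptive target-vector modification. Misclassified
# samples get a smoothed (softened) one-hot target as regularization.

def soft_target(num_classes, true_label, epsilon):
    """Smoothed one-hot vector: (1 - epsilon) on the true class,
    the remaining epsilon spread uniformly over the other classes."""
    t = [epsilon / (num_classes - 1)] * num_classes
    t[true_label] = 1.0 - epsilon
    return t

def adaptive_targets(labels, correct_mask, num_classes, epsilon=0.3):
    """Hard one-hot targets for correctly classified samples, softened
    targets for the ones the model currently gets wrong."""
    return [soft_target(num_classes, y, 0.0 if ok else epsilon)
            for y, ok in zip(labels, correct_mask)]

# Sample 0 is classified correctly, sample 1 is not.
targets = adaptive_targets([0, 2], [True, False], num_classes=3)
```

Each target still sums to one, so it remains a valid distribution for a cross-entropy loss.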
Published in ICPR, 2021
This project introduces the notion of using conditional mutual information as a measure of dependence between neurons. The method, titled MINT, focuses on passing through a majority of information while retaining only a small percentage of neurons between layers. Using only a single train-prune-retrain step, MINT is extremely competitive with commonly used DNN pruning baselines.
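The building block behind such a dependence measure is mutual information between (quantized) neuron activations: an upstream neuron that shares little information with the next layer is a pruning candidate. MINT itself uses the *conditional* variant (conditioning on the remaining neurons); the sketch below shows only the plain MI estimate from empirical counts, with made-up activation data.

```python
# Hedged sketch: empirical mutual information (in bits) between two
# discrete sequences, e.g. quantized neuron activations.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """MI in bits between two equal-length discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Binarized activations of two upstream neurons and one downstream neuron.
up1  = [0, 0, 1, 1, 0, 1, 0, 1]   # mirrors the downstream neuron
up2  = [0, 1, 0, 1, 1, 0, 1, 1]   # largely unrelated
down = [0, 0, 1, 1, 0, 1, 0, 1]

score1 = mutual_information(up1, down)   # strong dependence: keep
score2 = mutual_information(up2, down)   # weak dependence: prune candidate
```

Ranking upstream neurons by such scores, and dropping the lowest, is the intuition behind pruning connections that "offer lesser information" to subsequent layers.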
Published in TNNLS (Under Review), 2021
SNACS advances the state-of-the-art in single-shot neural network pruning by focusing on three key aspects of the pruning pipeline: 1) faster computation of connectivity scores, which determine the importance of a weight; 2) guidelines that automate the definition of upper pruning percentage limits for every layer of a neural network; and 3) the use of sensitivity as a priority measure to determine which weights are protected or pruned.
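The second aspect, per-layer upper pruning limits, can be illustrated independently of SNACS's actual connectivity scores: whatever the global pruning target, no layer is allowed to lose more than its cap. The sketch below uses weight magnitude as a stand-in importance score (SNACS uses connectivity/sensitivity scores, not magnitude), and all names are illustrative.

```python
# Hedged sketch: prune a layer's smallest-magnitude weights, capped by a
# per-layer upper pruning limit. Magnitude stands in for the real score.

def prune_layer(weights, target_fraction, upper_limit):
    """Zero out the lowest-importance weights, but never remove more than
    the layer's upper pruning limit allows."""
    frac = min(target_fraction, upper_limit)   # enforce the per-layer cap
    k = int(len(weights) * frac)               # weights to remove
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else None
    if cutoff is None:
        return list(weights)
    pruned, removed = [], 0
    for w in weights:
        if removed < k and abs(w) <= cutoff:
            pruned.append(0.0)
            removed += 1
        else:
            pruned.append(w)
    return pruned

layer = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
# Global target of 50%, but this layer's cap is one third.
pruned = prune_layer(layer, target_fraction=0.5, upper_limit=1 / 3)
```

Only the cap-limited number of weights is removed, protecting sensitive layers from being over-pruned by an aggressive global target.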
Published:
We all know that larger and deeper neural network models are the modus operandi in tackling real-world problems. However, the focus on large-capacity DNNs runs counter to the requirements for their hardware implementation. Neural network compression-via-pruning has emerged as a popular approach that can help bridge the gap between exorbitantly large theoretical models and their slimmer hardware counterparts, while maintaining a desired level of performance. Most approaches to neural network pruning focus on deterministic constraints on the learned weight matrices, either evaluating a filter’s importance using appropriate norms or modifying the objective function with sparsity constraints. While they offer a useful way to approximate contributions from filters, they either ignore the dependency between layers or solve a needlessly harder optimization objective. In this talk, I propose an alternative approach to neural network pruning, using the power of Conditional Mutual Information (CMI) under a probabilistic framework. In this work, I use CMI as a measure of connectivity between filters of adjacent layers across the entire DNN, which can then be used to prune filters that offer less information to subsequent layers. Expanding on this, I show how we can leverage ideas from the original weight-based approaches and our newly proposed probabilistic framework to offer a hybrid solution to pruning that is extremely effective.
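One simple way to picture a hybrid of weight-based and probabilistic criteria is to combine the two scores per filter and rank by the product. The talk does not specify the combination rule, so the sketch below is an assumed illustration only: `mags` stands for a norm-based importance and `conn` for an information-based connectivity score, both hypothetical values.

```python
# Hedged sketch: hybrid filter ranking from two normalized importance
# scores (weight-norm-based and information-based). Illustrative only.

def normalize(scores):
    """Scale scores so the largest becomes 1.0."""
    top = max(scores)
    return [s / top for s in scores] if top else list(scores)

def hybrid_prune(magnitudes, connectivity, keep_fraction):
    """Rank filters by the product of the two normalized scores and keep
    the top fraction; returns indices of surviving filters, sorted."""
    combined = [m * c for m, c in zip(normalize(magnitudes),
                                      normalize(connectivity))]
    k = max(1, int(len(combined) * keep_fraction))
    ranked = sorted(range(len(combined)),
                    key=lambda i: combined[i], reverse=True)
    return sorted(ranked[:k])

mags = [0.9, 0.1, 0.8, 0.3]   # norm-based importance per filter (made up)
conn = [0.2, 0.9, 0.7, 0.1]   # connectivity to the next layer (made up)
kept = hybrid_prune(mags, conn, keep_fraction=0.5)
```

A filter that scores high on only one criterion (like filter 1 here, weak norm but strong connectivity) can still be outranked by filters that are moderately strong on both, which is the point of combining the views.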