Selected Publications
Conferences
- [DATE 2022] S. Kundu, S. Wang, Q. Sun, P. A. Beerel, M. Pedram, “BMPQ: Bit-Gradient Sensitivity Driven Mixed Precision Quantization of DNNs from Scratch”.
- [NeurIPS 2021] S. Kundu, Q. Sun, Y. Fu, M. Pedram, P. A. Beerel, “Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation”.
- [ICCV 2021] S. Kundu, M. Pedram, P. A. Beerel, “HIRE-SNN: Harnessing the Inherent Robustness of Deep Spiking Neural Networks by Training with Crafted Input Noise”.
- [ICASSP 2021] S. Kundu, S. Sundaresan, “AttentionLite: Towards Efficient Self-Attention Models for Vision”.
- [WACV 2021] S. Kundu, G. Datta, M. Pedram, P. A. Beerel, “Spike-Thrift: Towards Energy-Efficient Deep Spiking Neural Networks by Limiting Spiking Activity via Attention-Guided Compression”.
- [ASP-DAC 2021] S. Kundu, M. Nazemi, P. A. Beerel, M. Pedram, “DNR: A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs”.
- [Allerton 2019] S. Kundu, S. Prakash, H. Akrami, P. A. Beerel, K. M. Chugg, “pSConv: A Pre-defined Sparse Kernel Based Convolution for Deep CNNs”.
- [IEEE ISEC 2019] S. Kundu, G. Datta, P. A. Beerel, M. Pedram, “qBSA: Logic Design of a 32-bit Block-Skewed RSFQ Arithmetic Logic Unit”.
- [IEEE ISEC 2019] G. Datta, H. Cong, S. Kundu, P. A. Beerel, “qCDC: Metastability-Resilient Synchronization FIFO for SFQ Logic”.
- [ISVLSI 2019] S. Kundu, A. Fayyazi, S. Nazarian, P. A. Beerel, M. Pedram, “CSrram: Area-Efficient Low-Power Ex-Situ Training Framework for Memristive Neuromorphic Circuits Based on Clustered Sparsity”.
Journals
- [ACM Transactions on Embedded Computing Systems 2022] S. Kundu, Y. Fu, Q. Sun, B. Ye, P. A. Beerel, M. Pedram, “Towards Adversary-aware Non-Iterative Model Pruning Through Dynamic Network Rewiring of DNNs”.
- [Frontiers in Neuroscience 2022] G. Datta, S. Kundu, A. Jaiswal, P. A. Beerel, “ACE-SNN: Algorithm-Hardware Co-design of Energy-Efficient & Low-Latency Deep Spiking Neural Networks for 3D Image Recognition”.
- [IEEE Transactions on Computers 2020] S. Kundu, M. Nazemi, M. Pedram, K. M. Chugg, P. A. Beerel, “Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks”.
- S. Kundu, G. Datta, M. Pedram, P. A. Beerel, “Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression”, under review.
Patents
- [US Patent] D. J. Cummings, J. P. Munoz, S. Kundu, S. N. Sridhar, M. Szankin, “Machine Learning Model Scaling System with Energy Efficient Network Data Transfer for Power Aware Hardware”.
- [US Patent] S. Sundaresan, S. Kundu, “Deep Neural Network Optimization System for Machine Learning Model Scaling”.