Parameswaran Kamalaruban
About Me: Since October 2020, I have been a Senior Research Associate in Safe and Ethical AI at The Alan Turing Institute, working with Adrian Weller. Before that, I was a postdoctoral researcher at EPFL, working with Volkan Cevher. I completed my Ph.D. in Computer Science at the Australian National University under Bob Williamson, during which I was also attached to the Analytics program of Data61. I obtained my B.Sc. (Hons.) in Electronics and Telecommunication Engineering from the University of Moratuwa.
Research Interests: (see Research Statement)
Sequential Decision Making: Experts, Bandits, and Reinforcement Learning.
Real-World Reinforcement Learning: Challenges and Opportunities.
Trustworthy Machine Learning (Robustness, Privacy, and Safety).
E-mail: kparameswaran [at] turing [dot] ac [dot] uk
Thesis:
[1] Parameswaran Kamalaruban.
Ph.D. thesis, Australian National University, 2017.
Magazine / Survey Articles:
[1] A.T.D. Perera, and Parameswaran Kamalaruban.
Applications of reinforcement learning in energy systems.
In Renewable and Sustainable Energy Reviews, 2021.
[2] Donghwan Lee, Niao He, Parameswaran Kamalaruban, and Volkan Cevher.
Optimization for Reinforcement Learning: From Single Agent to Cooperative Agents.
In IEEE Signal Processing Magazine, 2020.
Conference Proceedings / Journals:
[1] Rati Devidze, Parameswaran Kamalaruban, and Adish Singla.
Exploration-Guided Reward Shaping for Reinforcement Learning under Sparse Rewards.
In Proceedings of The 36th Conference on Neural Information Processing Systems, 2022.
[2] Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Craig Innes, Subramanian Ramamoorthy, and Adrian Weller.
Robust Learning from Observation with Model Misspecification.
In Proceedings of The 21st International Conference on Autonomous Agents and Multiagent Systems, 2022.
[3] Rati Devidze, Goran Radanovic, Parameswaran Kamalaruban, and Adish Singla.
Explicable Reward Design for Reinforcement Learning Agents.
In Proceedings of The 35th Conference on Neural Information Processing Systems, 2021.
[4] Gaurav Yengera, Rati Devidze, Parameswaran Kamalaruban, and Adish Singla.
Curriculum Design for Teaching via Demonstrations: Theory and Applications.
In Proceedings of The 35th Conference on Neural Information Processing Systems, 2021.
[5] Luca Viano, Yu-Ting Huang, Parameswaran Kamalaruban, Adrian Weller, and Volkan Cevher.
Robust Inverse Reinforcement Learning under Transition Dynamics Mismatch.
In Proceedings of The 35th Conference on Neural Information Processing Systems, 2021.
[6] Parameswaran Kamalaruban, Yu-Ting Huang, Ya-Ping Hsieh, Paul Rolland, Cheng Shi, and Volkan Cevher.
Robust Reinforcement Learning via Adversarial training with Langevin Dynamics.
In Proceedings of The 34th Conference on Neural Information Processing Systems, 2020.
[7] Parameswaran Kamalaruban, Victor Perrier, Hassan Jameel Asghar, and Mohamed Ali Kaafar.
Not All Attributes are Created Equal: dX-Private Mechanisms for Linear Queries.
In Proceedings on Privacy Enhancing Technologies, 2020.
[8] Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, and Adish Singla.
Interactive Teaching Algorithms for Inverse Reinforcement Learning.
In Proceedings of The 28th International Joint Conference on Artificial Intelligence, 2019.
[9] Teresa Yeo, Parameswaran Kamalaruban, Adish Singla, Arpit Merchant, Thibault Asselborn, Louis Faucon, Pierre Dillenbourg, and Volkan Cevher.
In Proceedings of The 33rd AAAI Conference on Artificial Intelligence, 2019.
[10] Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, and Purushottam Kar.
In Proceedings of The 31st Conference on Neural Information Processing Systems, 2017.
[11] Parameswaran Kamalaruban, Robert C. Williamson, and Xinhua Zhang.
In Proceedings of The 28th Conference on Learning Theory, 2015.
[12] Thalaiyasingam Ajanthan, Parameswaran Kamalaruban, and Ranga Rodrigo.
Automatic Number Plate Recognition in Low-Quality Videos.
In IEEE 8th International Conference on Industrial and Information Systems, 2013.
Workshop Papers:
[1] Dishanika Denipitiyage, Thalaiyasingam Ajanthan, Parameswaran Kamalaruban, and Adrian Weller.
Provable Defense Against Clustering Attacks on 3D Point Clouds.
In Workshop on Adversarial Machine Learning and Beyond, AAAI, 2022.
[2] Parameswaran Kamalaruban, Rati Devidze, Volkan Cevher, and Adish Singla.
Environment Shaping in Reinforcement Learning using State Abstraction.
In Workshop on Challenges of Real World Reinforcement Learning, NeurIPS 2020.
[3] Parameswaran Kamalaruban, Doga Tekin, Paul Rolland, and Volkan Cevher.
Reinforcement Learning with Langevin Dynamics.
In Workshop on Optimization Foundation for Reinforcement Learning, NeurIPS 2019.
[4] Parameswaran Kamalaruban, Teresa Yeo, Trisha Mittal, Volkan Cevher, and Adish Singla.
Assisted Inverse Reinforcement Learning.
In Learning by Instruction Workshop, NeurIPS 2018.
Preprints:
[1] Parameswaran Kamalaruban.
arXiv, 2016.
[2] Parameswaran Kamalaruban, and Robert C. Williamson.
arXiv, 2018.
[3] Martin Troussard, Emmanuel Pignat, Parameswaran Kamalaruban, Sylvain Calinon, and Volkan Cevher.
Interaction-limited Inverse Reinforcement Learning.
arXiv, 2020.