I’m mainly interested in developing methodologies that improve the interpretability of deep learning approaches. Currently, I’m working on intrinsic methods, embedded directly inside the architecture of deep neural networks, that enrich them with elements useful for explainability purposes.
Specifically, my recent work is closely related to the following domains:
- Memory-Augmented Neural Networks.
- Attention mechanisms.
- Image Classification.
- Explanation by Example and Counterfactuals.
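As a small illustration of the intrinsic, architecture-embedded perspective above, the sketch below shows generic scaled dot-product attention, whose weights can be read out directly as an interpretability signal. This is a minimal NumPy example of the standard mechanism, not code from any specific project of mine; all names are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return attended outputs and the attention weights.

    The weights form a distribution over inputs for each query,
    so they can be inspected as an intrinsic explanation of
    which inputs the model relied on.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # numerically stable softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries
K = rng.normal(size=(3, 4))  # 3 keys
V = rng.normal(size=(3, 4))  # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
# each row of w sums to 1 and shows where each query attended
```

Inspecting `w` per query is one of the simplest ways an attention-based architecture exposes its own reasoning without a post-hoc explainer.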
I would also be very happy to work on, and deepen my knowledge of, the following topics:
- Network Pruning.
- Explainable Reinforcement Learning.
- Machine Learning for Healthcare.
- Brain-inspired neural networks.
If you are a graduate student or a PhD student and want to collaborate with me, please send me an email with the subject [collaboration], indicating your advisor/laboratory. I usually respond within one day.