I’m mainly interested in developing methodologies to improve the interpretability
of deep learning approaches. Currently, I’m working on intrinsic methods embedded directly in the architecture of deep neural networks, enriching them with elements useful for explainability purposes.
Specifically, my recent work is closely related to the following domains:
- Memory-Augmented Neural Networks
- Attention mechanisms
- Image classification
- Explanation by example and counterfactuals
- Concept-based explanations
If you want to collaborate with me or be supervised by me, please send me an email with the subject [collaboration], indicating your advisor/laboratory. My response time is usually less than one day. I have many projects available on several XAI topics beyond those listed above, such as:
- Explainable Reinforcement Learning
- XAI methods to improve the training process
- Interpretable latent spaces