I’m broadly interested in eXplainable Artificial Intelligence (XAI). I currently work on developing methodologies to improve the interpretability of deep learning approaches. My latest works focus on using the internals of deep models to build explanations. This can be done either by exploiting the already existing internals (e.g., Paper DNC or Paper Compositional) or by embedding new elements directly inside the architecture of deep neural networks, enriching them with components useful for explainability purposes (e.g., Memory Wrap).
Specifically, my recent works are closely related to the following topics:
- Memory-Augmented Neural Networks
- Attention mechanisms
- Image Classification
- Explanation by Example and Counterfactuals
- Concept-based explanations
- Learned Features
- Feature Attribution
Please reach out if you have a cool idea and want to talk to me! I would love to collaborate. I also look forward to mentoring students who are interested in research. If you are interested, send me an email with the subject [collaboration], including some info about you and your work. I usually respond within a day.
I have many projects available on several XAI topics beyond those listed above, such as:
- Explainable Reinforcement Learning
- XAI methods to improve the training process
- Interpretable latent spaces
- Explainable Neural Networks
- Fairness in Deep Learning