I’m broadly interested in eXplainable Artificial Intelligence (XAI). Currently, I work on developing methodologies to improve the interpretability of deep learning approaches. My latest works focus on using the internals of deep models to build explanations. This can be done either by exploiting the existing internals (e.g., Paper DNC or Paper Compositional) or by embedding new elements directly inside the architecture of deep neural networks, enriching them with components useful for explainability purposes (e.g., Memory Wrap).
Specifically, my recent works are closely related to the following topics:
- Memory-Augmented Neural Networks
- Attention mechanisms
- Image classification
- Explanation by example
- Concept-based explanations
- Learned features
- Feature attribution
- Graph Neural Networks
- Network dissection and related methods
- Debiasing techniques
Supervision: I am available for supervision/mentoring or co-supervision of undergraduate or recently graduated students (e.g., Master's students) on topics related to eXplainable Artificial Intelligence. Both international (remote meetings only) and local students are welcome. Several projects are available, especially extensions or applications of Memory Wrap, but I would be happy to help you with your own idea if you have one.
If you are interested, send me an email with the subject [supervision], including some information about yourself and your interests. My response time is usually less than one day.
Connecting with me: If you are a researcher working on similar topics and you want to start a collaboration, ask questions about my papers, or simply have a conversation, please reach out to me on LinkedIn.