
Biagio (Mattia) La Rosa

I am a PhD student in Computer Engineering @ Sapienza. My research focuses on methodologies to improve the interpretability of deep learning techniques. I am also passionate about neuroscience, psychology, games, and astronomy.

Twitter · G. Scholar · LinkedIn · Github · e-Mail

Current and Past Projects

Active Projects

This list includes all the open research paths I am involved in. For these projects, several extensions are available for theses or collaborations, both at the local and international level. If you are interested, drop me an email with some information about yourself and your interests. My response time is usually less than one day.

Self-Interpretable Deep Neural Networks

This research aims at proposing novel self-interpretable neural networks. Self-interpretable means that the network returns explanations by itself, or that its components are easy to inspect to get a hint about its decision process. We are active in developing self-interpretable deep networks based on memory, concepts, or prototypes, applied to several domains such as vision, sequential data, and chemistry.
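As a concrete illustration of the prototype-based flavor of these models, a network can score inputs by their distance to a small set of learned prototype vectors, which can then be inspected as evidence for a decision. The sketch below is a minimal, hypothetical PyTorch example, not one of our published architectures; the class name and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Scores latent vectors by their distance to learned prototypes."""

    def __init__(self, num_prototypes: int, dim: int):
        super().__init__()
        # Each prototype is a learnable point in the latent space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance between each latent vector and each
        # prototype; a small distance means "this input resembles prototype k".
        dists = torch.cdist(z, self.prototypes) ** 2
        # Negated distances act as similarity scores that a linear classifier
        # can consume, while remaining inspectable as interpretable evidence.
        return -dists

# Usage: latent codes from any encoder can be scored against the prototypes.
layer = PrototypeLayer(num_prototypes=5, dim=16)
scores = layer(torch.randn(8, 16))  # shape: (batch, num_prototypes)
```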

Available projects include applying already developed networks to novel domains and problems (e.g., continual learning or reinforcement learning), improving the design of the current ones, or developing novel architectures.

Bringing Explanations to the Users

Visual Analytics (VA) systems have been widely used to help domain experts (e.g., doctors, programmers, lawyers) understand machine learning models by visualizing several aspects of data and models and letting the user interact with and analyze them. Recently, more and more VA systems employ explanation methods to aid users in understanding deep learning models. Our research aims at exploring and studying how XAI methods can be embedded inside VA systems, increasing the awareness of XAI within the VA research community and vice versa.

Available projects include the development of a common interface (i.e., a library) that can bridge the gap between tools used in the deep learning field (e.g., PyTorch, TensorFlow) and technologies used to build VA systems (e.g., React, D3.js), or the design of a novel VA system to support Deep Learning through XAI methods (e.g., LLMs like ChatGPT). A rough sketch of such a bridge is shown below. Most of these projects will be jointly supervised by Prof. Marco Angelini, an expert on VA systems. Students from VA, XAI, and DL backgrounds are welcome; expertise in all the topics is not required.
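To give an idea of what the bridge could look like, a thin web service can expose PyTorch model outputs as JSON that a React/D3.js frontend consumes. The sketch below is a hypothetical minimal example using Flask; the endpoint, the dummy model, and the port are assumptions for illustration, not an existing library.

```python
import torch
import torch.nn as nn
from flask import Flask, jsonify

app = Flask(__name__)
# A stand-in model; a real bridge would load a trained network instead.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3)).eval()

@app.route("/predict")
def predict():
    # A fixed dummy input; a real service would accept data from the client.
    x = torch.randn(1, 4)
    with torch.no_grad():
        logits = model(x)
    # Plain JSON is what browser-side libraries like D3.js consume directly.
    return jsonify(logits=logits.squeeze(0).tolist())

if __name__ == "__main__":
    app.run(port=5000)
```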

Understanding Black-box Deep Learning

This research path aims at understanding black-box deep learning models (e.g., already-trained models like GPT). In particular, we aim at exploiting components of the network itself (e.g., attention or activations) to probe its behavior and extract insights about the decision process. For example, we can probe neurons’ activations to extract rules that explain their behavior.
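For instance, in PyTorch a forward hook can record the activations of an internal layer during a forward pass, so they can be analyzed offline (e.g., to mine rules that correlate units with input properties). The sketch below is a minimal, hypothetical example; the choice of model and layer is illustrative.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Record the layer's output so it can be inspected after the pass.
        activations[name] = output.detach()
    return hook

# Attach the hook to one internal layer and run a forward pass.
model.layer4.register_forward_hook(save_activation("layer4"))
with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

print(activations["layer4"].shape)  # e.g., torch.Size([1, 512, 7, 7])
```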

Available projects in this direction include the development of novel heuristics for search-based algorithms, the study of the optimality of these kinds of explanations, the optimization of alternative metrics, and the development of novel algorithms that relax current assumptions. We are especially interested in exploring the vision and NLP domains for this task.

Past Projects

Data-Augmentation for Word Sense Disambiguation

The objective of this project was to improve the reliability of word-sense disambiguation systems. The task of word-sense disambiguation is to assign a sense to each word based on its context (e.g., disambiguating between the animal “mouse” and the tool “mouse”). The project exploited the combination of Wikipedia, BabelNet, and a hierarchical categorization to enlarge the training dataset.

Knowledge-Graphs Merging

The goal of this project was to merge two different knowledge graphs of relations between words. The project consisted of the analysis, pruning, and merging of the knowledge bases. They are now embedded inside BabelNet.