
Biagio Mattia La Rosa

I am a PhD student in Computer Engineering @ Sapienza. I study methodologies to improve the interpretability of deep learning techniques. I am also passionate about neuroscience, psychology, games, and astronomy.

Twitter · Google Scholar · LinkedIn · GitHub · Email

Current and Past Projects

Active Projects

This list includes all the current projects I am involved in; for each of them, I am available to supervise students for theses or to collaborate with them. If you are interested in talking about or working on one of these projects, drop me an email with some information about yourself and your interest. My response time is usually less than one day.

Self-Interpretable Deep Neural Networks

The project aims to propose novel interpretable neural networks. So far, it has studied novel ways to exploit deep neural networks based on external memories and prototypes. In the first case, the network stores information in an external memory during the inference process and uses it to compute the prediction. In the second, the prediction is obtained by comparing the representation of a given sample to a set of learned prototypes. Our goal is to develop models that achieve the same (or better) performance as current SOTA models while, at the same time, providing explanations of their outputs. The project is domain-agnostic and has been tested so far on image classification, text classification, and reinforcement learning. Possible future extensions include applying these networks to bias mitigation, self-supervision, and novel domains, as well as further studies on RL.
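To make the prototype-based idea concrete, here is a minimal PyTorch sketch; the class name and architecture are illustrative assumptions, not one of our actual models:

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Toy prototype-based classifier: predictions are driven by the
    similarity between a sample's embedding and learned prototypes."""

    def __init__(self, encoder: nn.Module, n_prototypes: int,
                 embed_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        # Prototypes are learned vectors in the encoder's embedding space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, embed_dim))
        # A linear layer maps similarity scores to class logits.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                      # (batch, embed_dim)
        dists = torch.cdist(z, self.prototypes)  # (batch, n_prototypes)
        similarities = -dists ** 2               # closer prototype -> higher score
        logits = self.classifier(similarities)
        # Returning the similarities exposes which prototypes supported
        # the prediction, i.e., the model's explanation.
        return logits, similarities
```

Explanations can then be rendered by showing, for the most influential prototypes, the training samples closest to them.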

Visual Analytics + Explainable Deep Learning

Visual analytics (VA) systems have been widely used to help domain experts (e.g., doctors, programmers, lawyers) understand machine learning models by visualizing several aspects of the data and the models, and by letting the user interact with and analyze them. Recently, more and more VA systems employ explanation methods to help users understand deep learning models. The goal of the project is to develop a common interface (i.e., a library) that bridges the gap between the tools used in the deep learning field (e.g., PyTorch, TensorFlow) and the technologies used to build VA systems (e.g., React, D3.js).
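As a rough illustration of the kind of bridge such a library could provide, the sketch below captures a layer's output in PyTorch and serializes it as JSON for a D3.js/React frontend; the function name and the JSON layout are assumptions, not an existing API:

```python
import json
import torch

def export_activations(model, layer, inputs, path="activations.json"):
    """Run the model once, capture one layer's output via a forward hook,
    and dump it as JSON for a web-based visualization to consume."""
    captured = {}

    def hook(module, inp, out):
        captured["activations"] = out.detach().cpu().tolist()

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(inputs)
    handle.remove()

    with open(path, "w") as f:
        json.dump(captured, f)
```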

Extracting Neuron’s Knowledge

The focus of this project is to understand what neurons learn during the training process. This is usually done by checking which concepts activate a neuron the most. In this project, we are ideally interested in building a precise mapping between a given activation (i.e., the neuron's behavior) and the concepts recognized by the neuron for the given sample.
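A common starting point is retrieving the inputs that most activate a neuron and inspecting the concepts they share. A minimal PyTorch sketch of that step (the helper name and the spatial-averaging choice are assumptions):

```python
import torch

def top_activating_inputs(model, layer, batches, neuron_idx, k=5):
    """Return indices and scores of the k inputs that most activate a
    given neuron: a first step toward mapping activations to concepts."""
    captured = {}

    def hook(module, inp, out):
        act = out[:, neuron_idx]  # the neuron's channel/unit activations
        # Average over spatial dimensions for convolutional layers.
        captured["score"] = act.flatten(1).mean(1) if act.dim() > 1 else act

    handle = layer.register_forward_hook(hook)
    scores = []
    with torch.no_grad():
        for batch in batches:
            model(batch)
            scores.append(captured["score"])
    handle.remove()

    top = torch.topk(torch.cat(scores), k)
    return top.indices, top.values  # indices refer to the concatenated batches
```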

Topics

Specifically, my recent work is closely related to the following topics:

  • Memory-Augmented Neural Networks
  • Prototype-based Neural Networks
  • Attention mechanisms
  • Image Classification
  • Explanation by Example
  • Counterfactuals
  • Concept-based explanations
  • Learned Features
  • Feature Attribution
  • Graph Neural Networks
  • Network Dissection
  • Debiasing techniques

Past Projects

Data Augmentation for Word Sense Disambiguation

The objective of this project was to improve the reliability of word-sense disambiguation systems. The task of word-sense disambiguation is to assign a sense to each word based on its context (e.g., disambiguating between the animal “mouse” and the tool “mouse”). The project exploited the combination of Wikipedia, BabelNet, and a hierarchical categorization to enlarge the training dataset.

Knowledge-Graphs Merging

The goal of this project was to merge two different knowledge graphs of relations between words. The project consisted of analyzing, pruning, and merging the knowledge bases, which are now embedded inside BabelNet.