Experience


ML Engineer, Recommendations

Twitter

Jan 2021 – Present
London, UK

Software Engineer

Bloomberg

Sep 2020 – Dec 2020
London, UK

AI Resident

IBM Research

Aug 2019 – Aug 2020
Yorktown Heights, New York

One of the first 10 AI Residents at IBM, based at the Yorktown Heights lab. The AI Residency program provides an opportunity to conduct research and engineering on current topics in Artificial Intelligence. As a Resident, I was involved in two lines of work:

Transfer Learning in Visual Reasoning:

Transfer Learning is a sample-efficient method to boost performance on tasks where data is scarce or difficult to access. However, it remains unclear what constitutes an effective methodology for Transfer Learning. Building on my previous work on Visual Question Answering, and extending it to Video Reasoning, we propose a new taxonomy along three axes: temporal transfer, feature transfer, and reasoning transfer.
I took an active role in setting up the experiments and collecting the results, reusing the MI-Prometheus library that I co-developed.

Compositional Generalization:

People excel at imagining or recognizing complicated objects or scenes they have never seen before. We are able to recognize a new car by the composition of its parts: the windshield in front, doors on the sides, supported on four wheels. Even if the car is missing a part (say it is an autonomous vehicle without a steering wheel), we are still able to recognize it as a car. This is because our view of the world is fundamentally compositional.

It is natural to wonder whether neural networks, in the absence of any specific encouragement (bias) to generalize in this way, would do so. To address this, we analyzed whether neural networks are able to discriminate among different classes of visual objects.

I had several duties for this publication:

* I helped design the experiments and the hypotheses they test.
* I co-wrote the generation code to create the image samples we used in the experiments.
* I wrote the code for training the models, tracking the statistics and saving the results to file.
* Finally, I helped analyze the results and discuss them in the paper.


Machine Learning Software Engineering Intern

IBM Research

May 2018 – Dec 2018
San Jose, California

I was part of the Machine Intelligence team, where I played a central role in two projects. Both resulted in workshop publications at the NeurIPS 2018 conference (see publications below).

MI-Prometheus:

Reproducibility is of paramount importance in science, and Machine Learning is no exception.

At IBM, I co-developed a Python library to enable reproducible Machine Learning experiments. The framework relies on PyTorch and makes extensive use of its mechanisms for distributing computations across CPUs and GPUs.

Developing this library required defining core concepts (such as model, dataset and experiment), which sharpened my reasoning when developing new ML models. I played a major part in developing the core APIs as well as the documentation for the entire library. I also ensured that the research done with this library had a public reproducibility page, guaranteeing that our results could be reproduced (see for instance this page).

This library has allowed our team to develop new state-of-the-art models, all of which have since been published.

Visual Question Answering:

Visual Question Answering (VQA) is defined as answering a question by using information contained in an image. This task mirrors real-world scenarios, such as helping the visually impaired, and is thus of interest. Moreover, VQA models require a detailed understanding of the image as the questions can selectively target various regions of the image.

For this project, I reimplemented an existing state-of-the-art model and reduced its training time by 10% without impacting its performance. This was achieved by simplifying its mathematical structure without reducing its expressiveness.

The associated paper also shows the use of Transfer Learning to better investigate a model and its cases of failure.


Research Intern

University of Luxembourg - Legato team

Jul 2017 – Aug 2017
Belval, Luxembourg

As part of the team, I worked with Dr. Jack S. Hale to assess the performance per watt of the FEniCS computational simulations platform on an ARM architecture.

After investigation, I achieved a 24% reduction in runtime by load-balancing work across the different CPU cores: ARM architectures support heterogeneous computing, which can be exploited to optimize certain tasks.

See the slides summarizing the work for more details.


Software Engineer Intern

Siemens (Digital Factory division)

Sep 2016 – Feb 2017
Karlsruhe, Germany

My main task was to evaluate the capabilities and use cases of the MindSphere platform, Siemens’ cloud-based open IoT operating system aimed at industry. MindSphere can be used to connect plants and systems to the cloud, allowing data to be collected and analyzed to improve profitability. It is also a platform on which developers can offer industrial apps to everyone.

After analyzing the capabilities of MindSphere by deploying web applications, I compared its offerings against GE’s Predix platform. This allowed me to identify the competition’s strong points, such as a more open platform, and to collaborate with other teams on new Data Analytics features (notably based on Node-RED) within MindSphere.

Projects


The Transformer Model

Learning about the Attention Mechanism and the Transformer Model.

MI-Prometheus

Enabling reproducible Machine Learning research.

Contact