I am a Computer Scientist and Senior Researcher at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken, Germany.
I work on many national and international research projects as a researcher, project manager, and development team leader.
I also tutor Master's and Bachelor's students.
Since January 2025, I have been part of the new Cognitive and Affective AI (SCAAI) group.
From 2020 to 2024, I was part of the Affective Computing group.
In 2019-2020, I collaborated with the Interactive Machine Learning (IML) group.
I joined DFKI in 2012 as a member of the Sign Language Synthesis and Interaction (SLSI) research group, where I worked on the use of Natural User Interfaces for the authoring of sign language animations. In parallel, I worked on the connection between the aesthetics of virtual characters and the perception of personality, and on the development of the YALLAH framework for the generation of real-time interactive virtual humans.
From 2005 to 2012, before moving to Germany, I worked as a research programmer at the Virtual Reality and Multimedia Park, Torino, Italy. There, I worked on the development of interactive 3D applications for cultural heritage, entertainment, and sign language synthesis.
SignReality - Development of an Augmented Reality (AR) app for the synthesis of Sign Language through a context-aware 3D avatar.
BIGEKO - Development of corpora and technologies for sign language synthesis, with a focus on facial expressions.
SocialWear - Development of corpora and technologies for sign language recognition and synthesis, in both desktop and AR environments.
AVASAG - Development of corpora and technologies for sign language recognition and synthesis.
EASIER - Sentiment analysis on text and video to enhance translation between text and sign languages.
MindBot - Generation and animation of virtual characters for mediating the interaction between humans and robots in industry.
Skincare - Using Deep Learning for the development of a mobile application for patients and health professionals in the context of skin cancer diagnosis and treatment.
MMS-Player is the implementation of an MMS realizer. MMS stands for "Multimodal SignStream", a machine- and human-readable format for representing sign languages. The MMS-Player reads an MMS file and produces sign language animations in different formats (MP4, FBX, BVH, JSON animation data, Blender scene) [Blender, Python].
Transfer BlendShapes via UV - A Blender add-on able to transfer Shape Keys (aka blend shapes) between geometries, using UV maps as bridging information [Blender, Python].
RecSyncNG - a tool for video recording with wireless frame-level synchronization between Android cameras. Enhanced with a remote desktop GUI [Android Studio, Java, Python, FFmpeg, PyQT].
SL-Videotools - Sign Language Video Processing Tools is an aggregation of procedures for analysing human body movement with the goal of extracting relevant information for sign language analysis [Python, FFmpeg, Pillow].
Visual Scene Maker (VSM) is a visual tool for configuring the behaviour of interactive social agents [IntelliJ IDEA, Java].
YALLAH (Yet Another Low Level Agent Handler), a framework for the generation of real-time interactive virtual characters, agents, and avatars [Blender, Unity, Python, C#].
TIML (a Toolkit for Interactive Machine Learning) provides a set of command line tools and a web server to facilitate the training and usage of Deep Convolutional Neural Networks for image classification and analysis through eXplainable Artificial Intelligence (XAI) techniques [Python, Keras, Tensorflow, Flask].
DeEvA, a platform for the generation of virtual characters from personality traits [Blender, Python, Django, R].
BlenderProjectTemplate, a file organization strategy for working, on the same computer, on many different projects based on different versions of Blender and its plugins. Includes PyCharm support scripts to inspect the Blender `bpy` namespace and enable debugging.