Laboratories

NEAL lab

The NEAL laboratory specializes in the design and optimization of advanced computer network systems and applications. Leveraging Software Defined Networking (SDN) and high-performance programmable switches powered by the Tofino 2 chipset (32 ports at 100 Gbps), the lab develops sophisticated control logic and algorithms to enhance network performance across various environments, including data centers and Internet routing infrastructures. Additionally, NEAL employs rigorous analytical modeling to study social networks, supported by rich datasets available to its researchers.

The NEAL Lab also focuses on the pervasive integration of energy efficiency and resilience in future ICT and telecommunication systems. In particular, it envisions the design and modeling of energy- and resilience-aware solutions and resource management strategies across future terrestrial and non-terrestrial networks. Research activities span from green communication networks to sustainable and edge AI, grounded in analytical modeling, simulation frameworks, and data-driven evaluation tools.

Topics

  • Software Defined Networking (SDN) and P4
  • Traffic monitoring and cybersecurity applications
  • Green networking
  • Network modeling
  • Social networks

People

  • Prof. Andrea Bianco
  • Prof. Paolo Giaccone
  • Prof. Emilio Leonardi
  • Prof. Michela Meo
  • Prof. Daniela Renga

Resources

  • SUP4RNET platform
  • Programmable switches
  • GPU-equipped servers with smart NICs
  • Network emulators and simulators

Project

SUPER/RESTART PNRR: https://fondazione-restart.it/it/progetti/s2-super/


SONIC lab

The SONIC Lab (Sound, Networking, and Interactive Computing Laboratory) focuses on cutting-edge research at the intersection of music, technology, and human interaction. Our activities explore innovative ways to create, experience, and study music and artistic performance, both locally and across networks. Key areas of research include:

  • Networked Music Performances: Designing systems that allow musicians to perform together in real time across distances, overcoming technical and latency challenges.
  • Inclusive Technologies for Remote Musical Education and Practices: Developing accessible tools and platforms that enable learners and performers of all abilities to engage in musical practice remotely.
  • Musical Interactions in the Metaverse: Investigating immersive, interactive musical experiences in virtual and augmented reality environments.
  • Music Informatics: Analyzing, modeling, and computing musical data to enhance understanding, composition, and creative expression.
  • Music and Art Computing: Exploring computational approaches to integrate music with other artistic media, fostering new forms of creative expression.
  • Human-Machine Interaction for Music and Artistic Performances: Creating interfaces and intelligent systems that enhance collaboration between performers, audiences, and machines.

Through these activities, the lab aims to push the boundaries of how music and art are created, shared, and experienced in both physical and virtual spaces.

People

Faculty

Post-doc

PhD

Projects

Ongoing projects

MUSMET – Musical Metaverse made in Europe – EU EIC Pathfinder Open

The MUSMET project proposes a vision and cutting-edge technological innovation for the future classes of Musical Metaverse devices, networking systems, and services, capable of catering to the needs and expectations of musicians and audiences. The implementation of this vision will spur the creation of radically new ecosystems of interoperable devices and communities utilising them, with significant benefits for society, the economy, and the arts.

TIVOM – Tactile Integration for Visually-impaired Orchestra Musicians – POpS Social Impact Project

The project introduces a wearable vibrotactile device that translates the conductor’s gestures and facial expressions into real-time tactile feedback perceivable by musicians with visual impairments. Leveraging motion tracking technologies, multimodal data acquisition, and machine learning algorithms optimized for low-cost embedded platforms (e.g., Raspberry Pi), TIVOM aims to recognize conducting gestures and convert them into structured vibrotactile signals with minimal latency.

By combining accessible hardware design, real-time gesture recognition, and user-centered validation with visually-impaired musicians, the project seeks to remove sensory barriers that limit participation in orchestral contexts such as rehearsals, performances, and auditions.

The resulting system will promote inclusive musical practices, expand educational opportunities for music institutions, and stimulate innovation within the Internet of Musical Things (IoMusT) ecosystem, generating measurable social, scientific, and economic impact.

Past Projects

HiFiReM – High Fidelity Remote Music Platform – FISR Project

The project envisions a web-based and hardware-augmented communication system capable of overcoming the audio quality degradation and synchronization limitations of conventional videoconferencing platforms. By prioritizing pristine audio transmission and minimizing mouth-to-ear latency below perceptual thresholds, HiFiReM enables geographically distributed musicians to interact “as if” they were co-present in the same acoustic space.

The platform combines innovative WebRTC-based software architecture, optimized audio transmission protocols, and a low-cost dedicated hardware unit (“music box”) to achieve high-quality, synchronized audio streams across standard broadband networks. Advanced features include real-time collaborative performance modes, distributed synchronization mechanisms, and remote audio mixing capabilities suitable for concerts, rehearsals, and music education.

HiFiReM opens new opportunities for remote artistic collaboration, expands access to music education, supports musicians with mobility constraints, and fosters the development of innovative digital music services. Its implementation contributes to strengthening the resilience and sustainability of the music sector in both emergency and non-emergency contexts.

Musical Metaverse (PRIN 2022 Project) – Italian MUR PRIN 2022

The project envisions a new generation of interoperable Musical Metaverse ecosystems enabling musicians to compose, perform, and teach in immersive shared XR environments, overcoming geographical, physical, and social barriers. By integrating advanced human–computer interaction, low-latency networking architectures, machine learning–based traffic prediction and packet loss concealment, and ethical-by-design methodologies, the project aims to enable real-time synchronous musical collaboration between geographically distributed performers.

Through an iterative Design–Develop–Evaluate methodology, the project will deliver progressively refined prototypes capable of supporting collaborative composition, live performance, and music education across classical, pop, rock, and experimental genres. Special attention is devoted to inclusivity, accessibility, gender balance, and participation of visually- and motor-impaired musicians.

The implementation of this vision will lay the scientific and technological foundations of the Musical Metaverse field, fostering new socio-technical ecosystems, artistic practices, and digital communities, with significant impact on culture, research, industry, and society.

Real-Time MIDI Error Recovery for Ultra-Low Latency Networked Music Performance (PoC Project)

The project addresses a key challenge in Networked Music Performance (NMP): maintaining synchronous musical interaction between geographically distributed musicians under strict latency constraints (below 25–30 ms). Because such low latency prevents the use of retransmission protocols and large buffers, packet loss must be handled directly at the receiver without introducing additional delay.
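As a back-of-the-envelope illustration of why such budgets exclude retransmission, a rough one-way latency budget can be sketched as follows (all stage names and figures are illustrative assumptions, not measurements from the project):

```python
# Rough mouth-to-ear latency budget for a networked music performance link.
# All figures below are illustrative assumptions, not project measurements.

FIBER_KM_PER_MS = 200.0  # light travels ~200 km/ms in optical fiber (~2/3 of c)

def latency_budget_ms(distance_km, frame_samples=64, sample_rate=48_000,
                      codec_ms=0.5, jitter_buffer_frames=2):
    """Return per-stage one-way delays (milliseconds) for an audio link."""
    frame_ms = 1000.0 * frame_samples / sample_rate      # audio capture blocking
    stages = {
        "capture frame": frame_ms,
        "codec": codec_ms,
        "propagation": distance_km / FIBER_KM_PER_MS,
        "jitter buffer": jitter_buffer_frames * frame_ms,  # receiver de-jitter
        "playback frame": frame_ms,
    }
    stages["total"] = sum(stages.values())
    return stages

# Example: two musicians ~600 km apart (hypothetical fiber path length).
budget = latency_budget_ms(distance_km=600)
for stage, ms in budget.items():
    print(f"{stage:>14}: {ms:5.2f} ms")

# Recovering a lost packet by retransmission would add at least one extra
# round trip (2 x propagation) plus loss-detection time, which is untenable
# within a 25-30 ms end-to-end budget.
```

Note how propagation and buffering alone consume a large share of the budget even at moderate distances, which is why loss must be concealed at the receiver rather than repaired by retransmission.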

The proposed solution integrates a patented real-time MIDI packet loss recovery mechanism that periodically transmits absolute system state information (active MIDI events), enabling reconstruction of missing data without retransmissions or acknowledgment mechanisms.
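The general idea of state-based recovery can be sketched as follows: alongside incremental note-on/note-off events, the sender periodically piggybacks an absolute snapshot of all currently active notes, so a receiver that detects a sequence gap resynchronizes from the next snapshot instead of requesting retransmission. This is a minimal illustrative sketch with hypothetical class names and packet format, not the project's patented protocol:

```python
# Sketch of state-based MIDI loss recovery (hypothetical names and packet
# format; not the patented mechanism). Every `snapshot_every` packets, the
# sender attaches the full set of active notes as absolute state.

class MidiSender:
    def __init__(self, snapshot_every=8):
        self.active = set()              # currently sounding (channel, note)
        self.seq = 0
        self.snapshot_every = snapshot_every

    def event(self, kind, channel, note):
        if kind == "note_on":
            self.active.add((channel, note))
        elif kind == "note_off":
            self.active.discard((channel, note))
        self.seq += 1
        packet = {"seq": self.seq, "event": (kind, channel, note)}
        if self.seq % self.snapshot_every == 0:
            packet["state"] = frozenset(self.active)  # absolute state snapshot
        return packet

class MidiReceiver:
    def __init__(self):
        self.active = set()
        self.last_seq = 0
        self.synced = True

    def receive(self, packet):
        if packet["seq"] != self.last_seq + 1:
            self.synced = False          # gap: stop trusting incremental events
        self.last_seq = packet["seq"]
        if "state" in packet:            # snapshot: resynchronize unconditionally
            self.active = set(packet["state"])
            self.synced = True
        elif self.synced:                # in sync: apply the incremental event
            kind, channel, note = packet["event"]
            if kind == "note_on":
                self.active.add((channel, note))
            elif kind == "note_off":
                self.active.discard((channel, note))
```

Under this scheme the receiver never waits for a retransmission: after a loss, its state is wrong for at most one snapshot interval, which bounds artifacts such as stuck notes without adding any buffering delay.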

The prototype will be implemented on a low-cost embedded architecture based on FPGA and microcontroller technology, supporting both MIDI and raw audio streaming via RTP/UDP/IP in peer-to-peer configurations. It will ensure deterministic ultra-low-latency processing (<5 ms internal delay) and include real-time audio mixing capabilities controlled locally or remotely.

The demonstrator will be validated through objective measurements and perceptual testing with musicians under controlled latency and packet loss conditions. The technology opens opportunities for remote rehearsals, distributed live performances, music education, and professional studio collaboration, contributing to the advancement of Networked Music Performance and Internet of Musical Things ecosystems.

Publications

Journals

Conferences