Event: MARVEL: Audio-Visual Machine Learning Models and Applications in Transport
  Date: Wednesday 20 March 2024
  Time: 12:00 - 13:00
  Venue: Room 206, Building O, Campus Hub or online - in-person participation is encouraged.
Smart city environments generate large amounts of data from multimodal sources such as video cameras and microphones installed across the city. Most of this data is underutilised and eventually deleted, mainly because of engineering and technology limitations.
In an attempt to narrow this gap, MARVEL, an EU Horizon 2020 RIA project, developed an experimental framework to manage the flow and processing of multimodal data over an Edge-to-Fog-to-Cloud (E2F2C) infrastructure, allowing end-users (e.g., researchers, engineers, managers or policy-makers) to extract useful information from the raw data via a graphical user interface (GUI).
The platform has been demonstrated across three experimental pilots carried out in Trento (Italy), Malta and at the University of Novi Sad (Serbia). The use cases are designed from a user-centric perspective and address a number of societal challenges in urban mobility and personal security. Following an overview of the MARVEL framework, we will discuss some of the AI models implemented and their evaluation in the wild. We will conclude the talk with a discussion on the framework's perceived impact on society.
Registration info is available.