inEvent - Accessing Dynamic Networked Multimedia Events
Several partners of the inEvent consortium have access to (and continuously generate) such large multimedia repositories, which are enriched every day by new recordings as well as social network data. The resulting resources often share common or related information, or are highly complementary, but they also come from different sources, in different formats, and with different types of metadata (if any). Hence, it is still impossible to search properly across these very rich multimedia resources based on metadata alone.
Exploiting, and going beyond, the current state of the art in audio, video, and multimedia processing and indexing, the present project proposes research and development towards a system that addresses the above problem by breaking our multimedia recordings into interconnected "hyper-events" (as opposed to hypertext), consisting of a particular structure of simpler "facets" that are easier to search, retrieve, and share. Building and adaptively linking such "hyper-events", as a means to search and link networked multimedia archives, will result in a more efficient search system, in which information can be retrieved based on "insights" and "experiences" (in addition to the usual metadata).
Reaching the aforementioned goal requires challenging RTD efforts going much beyond the current state of the art in the fields of knowledge representation, audio processing, video analysis, semantics of information, and exploitation of social network information. Ultimately, the main goal of inEvent can thus be summarized as developing new ways to replace the usual "hypertext" links (linking bits of "information") with multi-faceted "hyper-events" (linking different "experiences/insights" related to dynamic multimedia recordings).
Keywords: networked multimedia events; multimedia indexing, retrieval, searching, and sharing; social network data exploitation; hyper-events.