
Title: Scene understanding for Activity Monitoring

Speaker: François Bremond, Research Director at Inria, head of the STARS project team at Inria Sophia Antipolis - Méditerranée

Scene understanding is the process, often in real time, of perceiving, analysing and interpreting a 3D dynamic scene observed through a network of sensors (e.g. video cameras). This process consists mainly in matching the signal information coming from the sensors observing the scene with the models humans use to understand the scene. Scene understanding therefore both adds semantics to and extracts semantics from the sensor data characterizing a scene. The scene can contain physical objects of various types (e.g. people, vehicles) interacting with each other or with their more or less structured environment (e.g. equipment). It can last a few instants (e.g. the fall of a person) or a few months (e.g. the depression of a person), and it can be limited to a laboratory slide observed through a microscope or extend beyond the size of a city. The sensors are usually cameras (e.g. omni-directional, infrared), but may also include microphones and other sensors (e.g. optical cells, contact sensors, physiological sensors, radars, smoke detectors). Scene understanding is influenced by cognitive vision, and it requires the melding of at least three areas: computer vision, cognition and software engineering. Scene understanding can achieve five levels of generic computer vision functionality: detection, localization, tracking, recognition and understanding. But scene understanding systems go beyond the detection of visual features such as corners, edges and moving regions: they extract information about the physical world that is meaningful for human operators. The aim is also to achieve more robust, resilient and adaptable computer vision functionalities by endowing them with a cognitive faculty: the ability to learn, adapt, weigh alternative solutions, and develop new strategies for analysis and interpretation. In this talk, we will discuss how scene understanding can be applied to Home Care Monitoring.
As the population of older persons grows rapidly, improving their quality of life at home is of great importance. This can be achieved by developing technologies for monitoring their activities at home. In this context, we propose activity monitoring approaches that analyse older persons' behaviors by combining heterogeneous sensor data to recognize critical activities at home. In particular, these approaches combine data provided by video cameras with data provided by environmental sensors attached to house furnishings. There are three categories of critical human activities:

  • Activities which can be well described or modeled by users
  • Activities which can be specified by users and illustrated by positive/negative samples representative of the targeted activities
  • Rare activities which are unknown to the users and which can be defined only with respect to frequent activities, requiring large datasets
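To make the idea of combining heterogeneous sensor data concrete, here is a minimal sketch of fusing camera events with environmental sensor events to flag a candidate critical activity. All event labels, the 30-second window, and the fall heuristic are illustrative assumptions, not details of the STARS system described in the talk.

```python
from dataclasses import dataclass

# Hypothetical event model: labels and threshold are illustrative only,
# not taken from the actual monitoring system presented in the talk.
@dataclass
class Event:
    source: str       # "camera" or an environmental sensor id
    label: str        # e.g. "person_on_floor", "fridge_door_open"
    timestamp: float  # seconds since start of observation

def detect_fall(events, window=30.0):
    """Flag a candidate fall: the camera reports a person on the floor
    and no environmental sensor fires within `window` seconds after."""
    falls = []
    for e in events:
        if e.source == "camera" and e.label == "person_on_floor":
            activity_after = any(
                o.source != "camera" and 0 < o.timestamp - e.timestamp <= window
                for o in events
            )
            if not activity_after:
                falls.append(e.timestamp)
    return falls

events = [
    Event("camera", "person_walking", 0.0),
    Event("camera", "person_on_floor", 10.0),
    # no environmental sensor activity after t=10 -> candidate fall
]
print(detect_fall(events))  # -> [10.0]
```

The point of the fusion is that the environmental sensors provide negative evidence: normal furniture interactions after the camera event would suppress the alarm.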

In this talk, we will then present several techniques for detecting people and recognizing human activities, using in particular 2D or 3D video cameras combined with other sensors. More specifically, there are three categories of algorithms for recognizing human activities:

  • A recognition engine using hand-crafted ontologies based on a priori knowledge (e.g. rules) predefined by users. This activity recognition engine is easily extensible and allows the later integration of additional sensor information when needed [Robert 2012, Sacco 2013].
  • Supervised learning methods based on positive/negative samples, representative of the targeted activities, which have to be specified by users. These methods are usually based on Bag-of-Words models computing a large variety of spatio-temporal descriptors [Bilinski 2012, 2013].
  • Unsupervised (fully automated) learning methods based on clustering frequent activity patterns in large datasets, which can generate/discover new activity models [Pusiol 2012].

We will also discuss important issues such as the processing and use of large amounts of video data. We will illustrate the proposed activity monitoring approaches on several home care application datasets:

  • Ger’Home
  • Demonstrations
  • Activity Recognition
  • CMRR Nice


news/journee_labex_esante/resume_bremond.txt · Last modified: 2014/10/28 11:56 by tigli