Plenary Speakers

University Gustave Eiffel, France
Wearable devices embed inertial sensors whose records are processed for navigation instructions, health assessment, sports training or changes in mobility behaviour. These applications process inertial or telecommunication signals sensed in our clothes, shoes and glasses. Complex methods, increasingly based on artificial intelligence, are developed to process these data, but they sometimes overlook the fact that human behaviour defies the methods developed. Defining the minimum performance requirements for a targeted application, calibrating embedded sensors and accounting for the hardware constraints of wearables are classical R&D steps. However, the influence that humans can have on the quality of measurements (signal attenuation by the human body, changes of behaviour, ageing, etc.) is often forgotten. In this presentation, we will analyse the observability of gait parameters and navigation data from signals sensed by devices worn on different body parts (upper/lower body). We will also observe how the gait of the same person varies in different kinematic contexts (visually impaired people guided by a cane or a dog). The analysis will be supported by a theoretical and experimental approach using inertial signals and GNSS carrier-phase and pseudo-range data collected by pedestrians.
Istituto Italiano di Tecnologia (IIT), Italy
In this talk I discuss the development of a large-area electronic skin and its application to creating a biologically motivated model of peripersonal space in a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. Experiments on the iCub humanoid robot show that the peripersonal space representation (i) can be learned efficiently and in real time via simple interaction between the robot and the environment, (ii) can lead to the generation of behaviors like avoidance and reaching, and (iii) can contribute to the understanding of the biological principle of motor equivalence. More specifically, with respect to (i), the present model contributes a hypothesis for a learning mechanism of peripersonal space. In relation to point (ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance or reaching of an incoming stimulus, and for (iii) we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
Indian Institute of Science, Bangalore
Multiplexed sensing platforms will be the key enablers of smart electronic systems of the future. Such platforms will require the integration of miniaturized sensor arrays at the system/chip level, using heterogeneous technologies. While we have made substantial progress in vision, tactile and auditory sensing applications, an equivalent of Moore’s law is missing in biological and chemical sensing applications. With phenomenal advances in semiconductor nanotechnology and printed/flexible electronics, the stage is now set for a new wave of sensor systems equipped with massive sensory functions, specifically with biological and chemical sensor arrays. In this talk, I will present two case studies from our research: (i) biosensor systems for point-of-care diagnostics, the story of sensing multiple analytes in blood and urine, with the eventual goal of realizing a “Lab on Palm”; and (ii) gas sensor systems for environmental monitoring, breath analysis and hazardous gas leakage detection, with the eventual goal of realizing an “Electronic Nose”. With this backdrop, I will end my talk with some thoughts on future challenges in achieving highly complex and intelligent nanoscale sensory systems.