Collision avoidance behavior between walkers: global and local motion cues

Sean Lynch 1,2, Richard Kulpa 1,2, Laurentius Meerhoff 1,2, Julien Pettré 3, Armel Crétual 1,2, Anne-Hélène Olivier 1,2
2 MIMETIC - Analysis-Synthesis Approach for Virtual Human Simulation (UR2 - Université de Rennes 2, Inria Rennes – Bretagne Atlantique, IRISA_D6 - MEDIA ET INTERACTIONS)
3 RAINBOW - Sensor-based and interactive robotics (Inria Rennes – Bretagne Atlantique, IRISA_D5 - SIGNAUX ET IMAGES NUMÉRIQUES, ROBOTIQUE)
Abstract: Daily activities require agents to interact with each other, such as during collision avoidance. The nature of the visual information that is used for a collision-free interaction requires further understanding. We aim to manipulate the nature of visual information in two forms: global and local information appearances. Sixteen healthy participants navigated towards a target in an immersive computer-assisted virtual environment (CAVE) using a joystick. A moving passive obstacle crossed the participant's trajectory perpendicularly at various pre-defined risks of collision distances. The obstacle was presented with one of five virtual appearances, associated with global motion cues (i.e., a cylinder or a sphere) or local motion cues (i.e., only the legs or the trunk). A full-body virtual walker, showing both local and global motion cues, was used as a reference condition. The final crossing distance was affected by the global motion appearances; however, appearance had no qualitative effect on motion adaptations. These findings contribute towards further understanding of what information people use when interacting with others.

1 INTRODUCTION

Many ordinary situations require humans to coordinate their movements with others. For example, when walking through a public place, agents regulate interpersonal interactions to avoid collision. Such interactions have been extensively studied, for example during locomotion, focusing on following behavior [1], [2], head-on collision avoidance [3], [4], [5], meeting and reaching [6], [7], or collision avoidance [8], [9], [10], [11], [12].

This paper focuses on collision avoidance between walkers. Cutting et al. [13] proposed that the ability to recognize the likelihood of an upcoming collision and to estimate its timing are two fundamental issues that need to be resolved to avoid potential contact.
The ecological approach to visual perception recognizes behavior as being shaped by the relationship between an agent and their environment [14], suggesting that an agent's behavior is formed by their affordances, i.e., opportunities for action that depend on both the agent and the environment [15]. Perception and action form a dynamic coupling between an agent and their environment, with the environment containing objects that repel or attract an agent depending on their relative configurations (distance and angle with respect to heading) [16], [17]. An obstacle's position is perceived in terms of world coordinates as opposed to agent coordinates, and determining obstacle position and heading is essential for guiding locomotion and interaction [18]. Answering the two fundamental issues posed by Cutting et al. [13] is aided through this dynamic coupling of perception and action using perceptual variables such as tau [19] and the bearing angle [20]. These allow agents to determine time-to-contact, time-to-interaction, time-to-passage, and gap closure [21]. In this context, it has been shown that control laws based only on visual perception of changes in the bearing angle are sufficient for the emergence of self-organized walking patterns among groups of pedestrians in crowd simulations [22].

This work aims at answering the following question: what are the visual cues from body motion that convey relevant information and allow humans to accurately estimate others' motion and the corresponding risk of collision? Are these visual cues extracted from local body parts, such as limbs, or from a global perception of the body? To answer this question, we manipulate the visual aspect of an obstacle and observe the effect of appearance on collision avoidance strategies.
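The bearing-angle cue mentioned above can be sketched in a few lines of code. This is an illustrative sketch only, not code from the paper, and the function names are hypothetical: the bearing angle is the angle between a walker's heading and the line of sight to an obstacle, and a bearing angle that stays roughly constant while the distance shrinks indicates a collision course [20].

```python
import math

def bearing_angle(pos, heading, obstacle_pos):
    """Angle between the walker's heading and the line of sight to the obstacle.

    pos, obstacle_pos: planar (x, y) positions; heading: walker heading in radians.
    Returns a signed angle wrapped to (-pi, pi].
    """
    # Line of sight from the walker to the obstacle
    los = (obstacle_pos[0] - pos[0], obstacle_pos[1] - pos[1])
    los_dir = math.atan2(los[1], los[0])
    # Wrap the angular difference to (-pi, pi]
    diff = los_dir - heading
    return math.atan2(math.sin(diff), math.cos(diff))
```

A roughly constant bearing angle over successive samples, combined with a decreasing distance, is the classic signal of an impending collision; a drifting bearing angle indicates the obstacle will pass in front of or behind the walker.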
In the present work, we define as local cues all kinematic information of human body parts, such as the position and orientation of the limbs, or the orientation of the trunk. On the other hand, global cues represent the global kinematic information that is independent of limb motion, such as the global displacement of a human that defines the global trajectory. Based on these definitions, local visual cues are the information that can be picked up by a human from the local motions of others' limbs or trunk, and global visual cues refer to information that is conveyed by others' global motion.

2 RELATED WORK

Olivier and colleagues [8] proposed a variable, named Minimal Predicted Distance (MPD), to describe combined interpersonal interactions during collision avoidance.
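MPD is defined formally in [8]; as a rough illustration only, a closely related quantity is the closest-approach distance between two agents under constant-velocity extrapolation of their current motion. The sketch below assumes planar positions and velocities and uses hypothetical names; it is not the authors' implementation.

```python
import math

def predicted_min_distance(p1, v1, p2, v2):
    """Closest-approach distance if both agents keep their current velocities.

    p1, p2: current positions (x, y); v1, v2: current velocities (vx, vy).
    """
    # Relative position and velocity of agent 2 with respect to agent 1
    dp = (p2[0] - p1[0], p2[1] - p1[1])
    dv = (v2[0] - v1[0], v2[1] - v1[1])
    dv2 = dv[0] ** 2 + dv[1] ** 2
    if dv2 == 0.0:  # identical velocities: the distance never changes
        return math.hypot(dp[0], dp[1])
    # Time of closest approach, clamped so only future positions count
    t = max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    return math.hypot(dp[0] + dv[0] * t, dp[1] + dv[1] * t)
```

Re-evaluating such a quantity at every instant of an interaction yields a time series whose evolution can reveal when and how walkers adapt their motion to increase the predicted clearance.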
Document type:
Journal article
IEEE Transactions on Visualization and Computer Graphics, Institute of Electrical and Electronics Engineers, 2017, 23, pp.1-10. ⟨10.1109/TVCG.2017.2718514⟩

Cited literature: 18 references

https://hal.inria.fr/hal-01557763
Contributor: Sean Lynch
Submitted on: Thursday, November 23, 2017 - 10:37:41
Last modified on: Monday, June 4, 2018 - 10:39:20


Citation

Sean Lynch, Richard Kulpa, Laurentius Meerhoff, Julien Pettré, Armel Crétual, et al. Collision avoidance behavior between walkers: global and local motion cues. IEEE Transactions on Visualization and Computer Graphics, Institute of Electrical and Electronics Engineers, 2017, 23, pp.1-10. ⟨10.1109/TVCG.2017.2718514⟩. ⟨hal-01557763⟩
