Conference paper, 2017

HoME: a Household Multimodal Environment

Abstract

We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
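Because HoME exposes an OpenAI Gym-compatible interface, an agent can interact with it through the standard Gym loop. The sketch below is illustrative only: the environment id "Home-v0" is hypothetical, and the actual registration name, action space, and observation contents are defined by the HoME platform itself.

```python
# Minimal sketch of the standard Gym interaction loop with a
# Gym-compatible environment such as HoME. The id "Home-v0" is
# hypothetical; consult the HoME repository for the real name.
import gym

env = gym.make("Home-v0")  # hypothetical id, shown to illustrate the Gym API
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy as a placeholder
    # obs may bundle several modalities (vision, audio, semantics)
    obs, reward, done, info = env.step(action)
env.close()
```

Gym compatibility of this kind means existing reinforcement learning agents written against the Gym API can, in principle, be pointed at HoME's household scenes without changes to their training loop.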

Dates and versions

hal-01653037, version 1 (01-12-2017)

Identifiers

Cite

Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, et al. HoME: a Household Multimodal Environment. NIPS 2017 Visually-Grounded Interaction and Language Workshop, Dec 2017, Long Beach, United States. ⟨hal-01653037⟩