HoME: a Household Multimodal Environment

Abstract: We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
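
Note on the "OpenAI Gym-compatible" claim: Gym compatibility implies the standard reset/step interaction loop. The sketch below illustrates that loop against a hypothetical HoME environment. The package name home_platform and the environment id "Home-v0" are assumptions for illustration, not confirmed by this record; the loop follows the classic (pre-0.26) Gym API, where step() returns (observation, reward, done, info).

    import gym
    import home_platform  # assumed package name; importing would register the env (hypothetical)

    # "Home-v0" is a hypothetical id used only to illustrate the standard
    # OpenAI Gym loop that HoME is stated to be compatible with.
    env = gym.make("Home-v0")

    observation = env.reset()
    done = False
    while not done:
        # A random policy stands in for a learning agent.
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)

    env.close()

In a multimodal setting such as HoME, the observation would presumably bundle several modalities (e.g., RGB frames, audio, semantic annotations), though the exact observation structure is not specified in this record.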
Document type:
Conference paper
NIPS 2017's Visually-Grounded Interaction and Language Workshop, Dec 2017, Long Beach, United States

https://hal.inria.fr/hal-01653037
Contributor: Florian Strub
Submitted on: Friday, December 1, 2017 - 04:34:48
Last modified on: Tuesday, July 3, 2018 - 11:34:56

Identifiers

  • HAL Id: hal-01653037, version 1
  • arXiv: 1711.11017

Citation

Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, et al. HoME: a Household Multimodal Environment. NIPS 2017's Visually-Grounded Interaction and Language Workshop, Dec 2017, Long Beach, United States. 〈hal-01653037〉
