HoME: a Household Multimodal Environment

Abstract: We introduce HoME: a Household Multimodal Environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more. We hope HoME better enables artificial agents to learn as humans do: in an interactive, multimodal, and richly contextualized setting.
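Since the abstract describes HoME as OpenAI Gym-compatible, a minimal sketch of the standard Gym interaction loop is shown below. The environment id "HoME-Navigation-v0" and the random-action policy are illustrative assumptions, not names from the HoME release; consult the HoME repository for the actual registered environment ids and observation contents.

    import gym

    # Hypothetical environment id for illustration; the ids actually
    # registered by HoME may differ.
    env = gym.make("HoME-Navigation-v0")

    obs = env.reset()  # initial observation (e.g. RGB frame, audio, semantics)
    done = False
    total_reward = 0.0
    while not done:
        # Placeholder random policy; a learned agent would go here.
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        total_reward += reward
    env.close()

Any environment exposing this reset/step interface can be dropped into existing reinforcement learning pipelines, which is the practical point of Gym compatibility.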

https://hal.inria.fr/hal-01653037
Contributor: Florian Strub
Submitted on: Friday, December 1, 2017 - 4:34:48 AM
Last modification on: Tuesday, September 17, 2019 - 11:02:02 AM

Identifiers

  • HAL Id: hal-01653037, version 1
  • arXiv: 1711.11017

Citation

Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, et al. HoME: a Household Multimodal Environment. Visually-Grounded Interaction and Language Workshop at NIPS 2017, Dec 2017, Long Beach, United States. ⟨hal-01653037⟩
