
Improving memory efficiency for processing large-scale models

Gwendal Daniel (1), Gerson Sunyé (1, 2), Amine Benelallam (1, 2), Massimo Tisi (1)
(1) ATLANMOD - Modeling Technologies for Software Production, Operation, and Evolution
(2) LINA - Laboratoire d'Informatique de Nantes Atlantique, Département informatique - EMN, Inria Rennes – Bretagne Atlantique
Abstract: Scalability is a major obstacle to applying Model-Driven Engineering to reverse engineering, or to any other activity that manipulates large models. Existing solutions for persisting and querying large models are currently inefficient and strongly tied to memory availability. In this paper, we propose a memory unload strategy for Neo4EMF, a persistence layer built on top of the Eclipse Modeling Framework and backed by a Neo4j database. Our solution partially unloads a model during the execution of a query by combining a periodic dirty-saving mechanism with transparent reloading. Our experiments show that this approach makes it possible to query large models within a restricted amount of memory with acceptable performance.
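The mechanism sketched in the abstract, periodically saving dirty elements to the backend, unloading them from memory, and reloading them transparently on the next access, can be illustrated with a minimal, hypothetical Java sketch. This is not the Neo4EMF API; the class and method names are invented, and a plain in-memory `Map` stands in for the Neo4j store:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch of a dirty-saving unload strategy:
 * modified elements are marked dirty, persisted to a backend
 * once the in-memory set grows past a threshold, dropped from
 * memory, and reloaded transparently on later access.
 */
public class DirtySavingCache {
    private final Map<String, String> backend = new HashMap<>(); // stand-in for the Neo4j store
    private final Map<String, String> loaded  = new LinkedHashMap<>(); // in-memory model elements
    private final Map<String, String> dirty   = new HashMap<>(); // changes not yet persisted
    private final int maxLoaded;

    public DirtySavingCache(int maxLoaded) {
        this.maxLoaded = maxLoaded;
    }

    /** Modify an element in memory and mark it dirty. */
    public void put(String id, String value) {
        loaded.put(id, value);
        dirty.put(id, value);
        if (loaded.size() > maxLoaded) {
            unload(); // memory pressure: trigger a dirty save + unload
        }
    }

    /** Dirty saving: persist pending changes, then free the in-memory copies. */
    private void unload() {
        backend.putAll(dirty); // save dirty elements before discarding them
        dirty.clear();
        loaded.clear();        // elements survive in the backend only
    }

    /** Transparent reloading: fetch from the backend if no longer in memory. */
    public String get(String id) {
        String v = loaded.get(id);
        if (v == null && (v = backend.get(id)) != null) {
            loaded.put(id, v); // reload on demand, invisibly to the caller
        }
        return v;
    }
}
```

In the paper's setting the save step would write to Neo4j rather than a map, and the unload decision would be driven by the periodic mechanism described above rather than a simple size threshold.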
Document type :
Conference papers

Cited literature: 10 references
Contributor: Amine Benelallam
Submitted on: Tuesday, July 22, 2014 - 6:16:16 PM
Last modification on: Wednesday, April 27, 2022 - 3:50:11 AM
Long-term archiving on: Tuesday, November 25, 2014 - 11:35:37 AM


Files produced by the author(s)


  • HAL Id: hal-01033188, version 1


Gwendal Daniel, Gerson Sunyé, Amine Benelallam, Massimo Tisi. Improving memory efficiency for processing large-scale models. BigMDE, University of York, Jul 2014, York, United Kingdom. ⟨hal-01033188⟩


