Improving memory efficiency for processing large-scale models

Gwendal Daniel (1), Gerson Sunyé (1, 2), Amine Benelallam (1, 2), Massimo Tisi (1)
1. ATLANMOD - Modeling Technologies for Software Production, Operation, and Evolution
2. LINA - Laboratoire d'Informatique de Nantes Atlantique, Département informatique - EMN, Inria Rennes – Bretagne Atlantique
Abstract: Scalability is a major obstacle to applying Model-Driven Engineering to reverse engineering, or to any other activity that manipulates large models. Existing solutions to persist and query large models are currently inefficient and strongly tied to memory availability. In this paper, we propose a memory unload strategy for Neo4EMF, a persistence layer built on top of the Eclipse Modeling Framework and backed by a Neo4j database. Our solution allows us to partially unload a model during the execution of a query by using a periodic dirty saving mechanism and transparent reloading. Our experiments show that this approach makes it possible to query large models within a restricted amount of memory with acceptable performance.
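
To make the strategy described in the abstract concrete, here is a minimal, hypothetical sketch of a dirty-saving and unload mechanism in plain Java. It is not the Neo4EMF implementation: the class name DirtyTrackingStore, the methods modify, resolve, and saveDirtyAndUnload, and the in-memory Map standing in for the Neo4j backend are all assumptions made for illustration. The sketch only shows the three ideas named in the abstract: tracking modified (dirty) objects, saving them and unloading the model once a memory budget is exceeded, and transparently reloading unloaded objects when they are accessed again.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch, not the Neo4EMF API: names and structure are assumptions.
public class DirtyTrackingStore {

    // Objects currently held in memory, keyed by a backend identifier.
    private final Map<String, Object> loaded = new HashMap<>();
    // Identifiers of loaded objects modified since the last save.
    private final Set<String> dirty = new HashSet<>();
    // Stand-in for the persistent backend (a Neo4j database in the paper).
    private final Map<String, Object> backend = new HashMap<>();
    // Unload threshold: flush once this many objects are held in memory.
    private final int maxLoaded;

    public DirtyTrackingStore(int maxLoaded) {
        this.maxLoaded = maxLoaded;
    }

    // Record a modification; the object stays "dirty" until the next save.
    public void modify(String id, Object newState) {
        loaded.put(id, newState);
        dirty.add(id);
        if (loaded.size() > maxLoaded) {
            saveDirtyAndUnload();
        }
    }

    // Transparent reloading: return the in-memory copy, or fetch it back
    // from the backend if it was unloaded earlier.
    public Object resolve(String id) {
        Object o = loaded.get(id);
        if (o == null) {
            o = backend.get(id);
            loaded.put(id, o);
        }
        return o;
    }

    // Dirty saving: persist modified objects, then drop everything from
    // memory so the garbage collector can reclaim it.
    private void saveDirtyAndUnload() {
        for (String id : dirty) {
            backend.put(id, loaded.get(id));
        }
        dirty.clear();
        loaded.clear();
    }
}

In the paper's setting, the backend map would be a Neo4j database and the unload decision would be triggered periodically during query execution rather than by a simple object count; the sketch only illustrates the control flow.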
Document type: Conference papers


https://hal.inria.fr/hal-01033188
Contributor: Amine Benelallam
Submitted on: Tuesday, July 22, 2014 - 6:16:16 PM
Last modification on: Wednesday, December 5, 2018 - 1:22:14 AM
Long-term archiving on: Tuesday, November 25, 2014 - 11:35:37 AM

File: bigmde14_submission_6_1_.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-01033188, version 1


Citation

Gwendal Daniel, Gerson Sunyé, Amine Benelallam, Massimo Tisi. Improving memory efficiency for processing large-scale models. BigMDE, University of York, Jul 2014, York, United Kingdom. ⟨hal-01033188⟩
