Caching optimization for speeding up Xtext serialization #317

Open
NicolasRouquette opened this issue Jul 12, 2019 · 0 comments

Xtext serialization is very slow because it invokes org.eclipse.xtext.serializer.tokens.CrossReferenceSerializer.serializeCrossRef whenever it needs to serialize a reference to an element. This in turn calls an implementation of org.eclipse.xtext.scoping.IScopeProvider.getScope, which scans for candidates for resolving the reference. When loading a model, calls to getScope cannot be optimized because the model is being modified. When saving a model, calls to getScope could be optimized with caching techniques.

Converting large OMLZip models to OML is extremely slow because of these excessive, expensive getScope lookups. When saving a model, we know that the objects and references will not change, so these calls are ripe for optimization via caching.

This will not address the problem of parsing very large OML models: during parsing the model is being modified, so it is harder to distinguish calls to getScope that cannot be optimized (because the resource is still being loaded and modified) from calls that could be optimized (because the scope involved has already been read). Even in the latter case, there is no guarantee in general that the loaded resource will not be modified further.

For saving, how can an implementation of getScope determine whether it is being called as part of a save operation such that caching optimization can be enabled?

There are several ways to achieve this, including but not limited to the following:

  1. Add a temporary flag to ResourceSet.getLoadOptions()

Unfortunately, there is no ResourceSet.getSaveOptions() -- save options are specified on the call to Resource.save(...) but are not stored explicitly on the Resource or ResourceSet being saved. Despite that, one could consider abusing the EMF API and using ResourceSet.getLoadOptions() to indicate whether we are saving resources.

Roughly,

try {
  // Signal to getScope implementations that a save is in progress.
  rs.getLoadOptions().put("OMLSAVE", "true");
  r.save(...); // whatever save options apply
} finally {
  rs.getLoadOptions().put("OMLSAVE", "false");
}

In the getScope implementation, check the OMLSAVE flag on the resource set to decide whether to use or bypass the caching optimization.
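A minimal sketch of such a provider, assuming a hypothetical OmlScopeProvider derived from Xtext's AbstractDeclarativeScopeProvider; the class name, the "OMLSAVE" key, and the cache layout are illustrative, not existing OML code:

import java.util.HashMap;
import java.util.Map;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.xtext.scoping.IScope;
import org.eclipse.xtext.scoping.impl.AbstractDeclarativeScopeProvider;
import org.eclipse.xtext.util.Pair;
import org.eclipse.xtext.util.Tuples;

// Hypothetical scope provider: caches scopes per (context, reference) only while saving.
public class OmlScopeProvider extends AbstractDeclarativeScopeProvider {

  private final Map<Pair<EObject, EReference>, IScope> saveScopeCache = new HashMap<>();

  @Override
  public IScope getScope(EObject context, EReference reference) {
    ResourceSet rs = context.eResource() == null ? null : context.eResource().getResourceSet();
    boolean saving = rs != null && "true".equals(rs.getLoadOptions().get("OMLSAVE"));
    if (!saving) {
      // Loading or editing: the model may still change, so do not cache.
      return super.getScope(context, reference);
    }
    // Saving: objects and references are stable, so a (context, reference) cache is safe.
    Pair<EObject, EReference> key = Tuples.create(context, reference);
    IScope cached = saveScopeCache.get(key);
    if (cached == null) {
      cached = super.getScope(context, reference);
      saveScopeCache.put(key, cached);
    }
    return cached;
  }

  // Hypothetical hook: clear the cache from the same try/finally block that resets the flag.
  public void clearSaveCache() {
    saveScopeCache.clear();
  }
}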

  2. Use the AspectJ Wormhole pattern to determine whether getScope is called in the context of a call to an implementation of org.eclipse.emf.ecore.resource.Resource.save()

For example: https://github.com/JPL-IMCE/imce.magicdraw.library.enhanced_api/blob/master/src/main/aspectj/gov/nasa/jpl/magicdraw/advice/ui/browser/EnhanceBrowserContextConfiguratorWithShowPopupMenuAdvice.java

Alternative (2) seems cleaner than (1).
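
A minimal sketch of alternative (2) in annotation-style AspectJ (plain Java syntax); the aspect name, the cache key, and the clear-on-entry policy are assumptions for illustration, not existing OML code:

import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

// Hypothetical aspect: caches getScope results only while inside Resource.save(..).
@Aspect
public class ScopeCachingDuringSaveAspect {

  // Naive cache keyed by the getScope arguments (context object + reference).
  private final Map<Object, Object> scopeCache = new ConcurrentHashMap<>();

  // Any execution of an implementation of Resource.save(..).
  @Pointcut("execution(* org.eclipse.emf.ecore.resource.Resource+.save(..))")
  public void resourceSave() {}

  // getScope executions that occur somewhere below a save() on the call stack (the "wormhole").
  @Pointcut("execution(* org.eclipse.xtext.scoping.IScopeProvider+.getScope(..)) && cflow(resourceSave())")
  public void getScopeDuringSave() {}

  // Start each save with an empty cache so stale scopes never leak across saves.
  // A real implementation might restrict this to the outermost save via !cflowbelow(resourceSave()).
  @Before("resourceSave()")
  public void clearCache() {
    scopeCache.clear();
  }

  @Around("getScopeDuringSave()")
  public Object cacheGetScope(ProceedingJoinPoint pjp) throws Throwable {
    Object key = Arrays.asList(pjp.getArgs());
    Object cached = scopeCache.get(key);
    if (cached != null) {
      return cached;
    }
    Object scope = pjp.proceed();
    if (scope != null) {
      scopeCache.put(key, scope);
    }
    return scope;
  }
}

Compared to alternative (1), this keeps the caching concern entirely out of the scope provider and the save call sites, at the cost of adding AspectJ weaving to the build.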
