Benchmark Performance #7

Closed
penghzhang opened this issue Mar 6, 2017 · 20 comments

@penghzhang

I have installed the CPS benchmark projects in Eclipse, but the performance is not as good as described in the performance evaluation, and I don't know why. Could you share a specific user manual with me to help find out where my problem is, or give me any suggestions? Hoping for your reply.

@abelhegedus
Member

Hi,

are you referring to the contents of https://github.com/viatra/viatra-cps-benchmark/wiki/Performance-evaluation ? Those results are quite outdated and also specific to a given hardware-software configuration.

You can see up-to-date results here: https://build.incquerylabs.com/jenkins/job/viatra-cps-benchmark/lastSuccessfulBuild/artifact/benchmark/cpsBenchmarkReport.html
Note that this is running on a dedicated computer with Linux Mint 17.2, SATA3 SSD, 16 GB RAM, Core i5-4590 CPU. The measurements are done by executing a runtime Eclipse application separately for each configuration, with a warmup phase to ensure class loading does not affect the measured runtime, and with enough memory to avoid GC thrashing. The Eclipse application is prepared by Maven and is called from a Python script.
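
To illustrate the setup (this is not the project's actual driver; the real one is the Python script in the repository), each configuration runs in a fresh JVM, roughly like this sketch. The eclipse path and memory settings are placeholders, and the -case/-scale option values are just examples:

```java
public class BenchmarkDriver {
    public static void main(String[] args) throws Exception {
        for (int scale : new int[] {1, 2, 4, 8}) {
            ProcessBuilder pb = new ProcessBuilder(
                "./eclipse",
                "-application", "com.incquerylabs.examples.cps.rcpapplication.headless.application",
                "-case", "STATISTICS_BASED",
                "-scale", Integer.toString(scale),
                "-vmargs", "-Xms4G", "-Xmx8G");
            pb.inheritIO();                  // stream the benchmark log to this console
            int exit = pb.start().waitFor(); // one JVM per configuration, so no state leaks between runs
            System.out.println("scale " + scale + " finished with exit code " + exit);
        }
    }
}
```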

If you are running the benchmark in a different way, then I cannot guess what the issue is, since I don't know your hardware setup, software versions, or what you run and how.

If you need help running the benchmark the same way we do, I would be happy to assist. You can check the build output and the benchmark and reporting scripts.

@penghzhang
Author

Thank you for your reply.
I still can't run the benchmark with the expected performance. My computer is Linux Red Hat, 256 RAM, Core E5-2670 CPU. The steps I used to install the CPS benchmark are as follows:

  1. Download Eclipse (Neon) and install the VIATRA plugins;
  2. Download the CPS examples project from https://github.com/viatra/org.eclipse.viatra.examples/tree/master/cps and the benchmark project from https://github.com/viatra/viatra-cps-benchmark;
  3. Run Eclipse and import the CPS examples project;
  4. Start a runtime Eclipse (by right-clicking the CPS example project and choosing Run As -> Eclipse Application) and then import the benchmark project;
  5. Run the JUnit test case defined in CPSDemonstratorIntegrationTest.xtend;
  6. Fetch the elapsed time from the Eclipse console.

Could you help me check my steps and give your suggestions? And if possible, could you offer me a guide to the benchmark to help me run it with the expected performance? Thanks in advance!

@abelhegedus
Member

Linux Red Hat, 256 RAM, Core E5-2670 CPU

Please clarify: does your computer have 256 MB RAM?

Download Eclipse (Neon) and install the VIATRA plugins ...

The recommended way for getting the benchmark up is described on the main page: https://github.com/viatra/viatra-cps-benchmark#getting-started

What you are doing adds two additional levels of runtimes and a huge number of unnecessary plugins.

If you just want to run the benchmark, you can download the prepared products and execute them the same way the Python script does.

Run the JUnit test case defined in CPSDemonstratorIntegrationTest.xtend;

That is not a benchmark test; as its name says, it is an integration test to ensure the toolchain works before actually running benchmarks.

If you really want to run benchmarks from the runtime Eclipse, either run the com.incquerylabs.examples.cps.rcpapplication.headless.application Eclipse application or the ToolchainPerformanceStatisticsBasedTest with a properties file set up as described in its header comment.

Also, what is the "expected performance" that you are referring to?

@penghzhang
Author

Thanks for your reply.

  1. My computer has 256 GB RAM;
  2. I have tried the recommended way, but failed. I will try it again later;
  3. The "expected performance" means that the speedup of the incremental transformation can be clearly observed;
  4. I am sorry, I cannot understand the usage of CPSDemonstratorIntegrationTest.xtend;
  5. I want to not only run the benchmark but also do research on the implementation of VIATRA.

I will try the benchmark with your suggestions; I may run into questions and ask you for help again, so please forgive the disturbance. Thanks in advance!

@abelhegedus
Member

My computer has 256 GB RAM;

Yes, sorry, that would have been a more sensible guess. Did you also make sure to set the memory limits of the JVMs with -Xmx, at least in the run configuration where you execute the tests?

I have tried the recommended way, but failed. I will try it again later;

If you describe where you get stuck, I may be able to help.

The "expected performance" means incremental transformation can be seed apparently;

I am sorry, I cannot understand the usage of CPSDemonstratorIntegrationTest.xtend;

No problem. It is simply used during the Maven build of the benchmark code to execute, on small inputs, all the steps that will be executed in the real benchmark through the prepared product. This can catch errors in the tool implementations that may not surface when running the benchmark.

I want to not only run the benchmark but also do research on the implementation of VIATRA.

Great, if you have VIATRA specific questions, it may be best to ask them on the Eclipse Forums: https://www.eclipse.org/forums/index.php/f/147/

I will try the benchmark with your suggestions; I may run into questions and ask you for help again, so please forgive the disturbance. Thanks in advance!

The way you have set it up is also OK, as long as everything compiles. You can import the benchmark projects into the same Eclipse where you have the CPS example projects imported.
That way, you only need to execute a Run Configuration to execute the tests or the benchmark.
The following is an example launch config of running the product as an Eclipse Application from the Eclipse IDE:
m2m_benchmark.zip

Simply unzip it and copy the .launch file to the com.incquerylabs.examples.cps.rcpapplication.headless project, then right-click and choose Run As... → m2m_benchmark. You may have to open the Run Configurations and Validate the Plug-ins, or even Add Required Plugins, if your setup is a bit different from mine. For the various argument options, this page should provide some help: https://github.com/viatra/viatra-cps-benchmark/wiki/Benchmark-specification
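
For the program arguments, options such as -case and -scale select the generated case and the model size (e.g. -case CLIENT_SERVER -scale 4).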

@abelhegedus
Member

abelhegedus commented Mar 8, 2017 via email

@abelhegedus
Member

Can I copy-paste your e-mail on the issue or could you do that yourself?

Well, that's a moot question now 😄

I will answer your question soon.

@abelhegedus
Member

I tried to change the case to "CLIENT_SERVER" by setting the "-case" option in the run configuration parameters to "CLIENT_SERVER". After execution, I saw that EObjects in the log file is 196 and EReferences is 340, and when I changed the "-scale" option to 4, 16, 32, the EObjects and EReferences didn't change. Is my operation right?

You are probably seeing the statistics for the "warmup" phase, which always generates with scale 1; later in the log you should have a second "CPS Stats:" part that lists the size for the scale you entered.
See for example in this log file:

EObjects: 43831
EReferences: 4426283
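
As a convenience, here is a small sketch of extracting the second "CPS Stats:" block from a log file; the exact log layout is an assumption based on the snippet above, so adjust the markers to your output:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CpsStatsReader {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        int statsSeen = 0;
        for (int i = 0; i < lines.size(); i++) {
            if (lines.get(i).contains("CPS Stats:") && ++statsSeen == 2) {
                // Print a few lines after the second marker; these should include
                // the EObjects/EReferences counts for the scale you requested.
                for (String line : lines.subList(i + 1, Math.min(i + 11, lines.size()))) {
                    System.out.println(line);
                }
                break;
            }
        }
    }
}
```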

Have you run the benchmark with a large scale (like EObjects > 1,000,000)? In your benchmark performance report, I just see the "STATISTICS_BASED" case; have you tested other cases (like "CLIENT_SERVER", "LOW_SYNCH")?

We are running the statistics based case only in our CI, although it would be trivial to run it with the other options. If you want, I can perform a comparative run on our server with one of the other cases. Similarly, we would be very interested to receive the JSON or CSV results from a run on your server.

If you didn't conduct 2, could you share a guide with me to help me run the benchmark with a large-scale model (like EObjects > 1,000,000)?

We should do some digging to see which case would be most suitable. As you can see, the number of references (I'm not sure if it includes attribute values as well) scales more steeply than the EObjects in the statistics-based case, so some other case would probably be better. I will look around to see if there is a test to generate CPS stats more easily without running the full benchmark.

Looking at https://github.com/viatra/viatra-cps-benchmark/wiki/Performance-evaluation running the Low-synch scenario with scales higher than 512 already results in the numbers you want to get.

As for running a benchmark on these sizes:

  • Set both -Xms and -Xmx appropriately to avoid memory problems (I assume that if you have a server with 256GB RAM, you are aware of JVM memory specifics). I would suggest -Xms12G and -Xmx16G at least.
  • We usually run the benchmark with timeouts, but the 1000 second timeout may be too low (see the sketch after this list).
  • Some variants cannot scale to these sizes, but I guess you are running the variants one-by-one for now.
  • You may want to try other JVMs, like https://www.azul.com/products/zing/ (free for 30 days) which is optimized for heaps above 10GB and to run on many-core systems.
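
A sketch of such a timeout wrapper (the path and option values are placeholders, as in the sketch above):

```java
import java.util.concurrent.TimeUnit;

public class TimedBenchmarkRun {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder(
                "./eclipse",
                "-application", "com.incquerylabs.examples.cps.rcpapplication.headless.application",
                "-case", "LOW_SYNCH",
                "-scale", "512",
                "-vmargs", "-Xms12G", "-Xmx16G")
            .inheritIO()
            .start();
        // Use a budget well above the usual 1000 s for models of this size.
        if (!p.waitFor(2, TimeUnit.HOURS)) {
            p.destroyForcibly(); // kill runs that exceed the time budget
            System.err.println("benchmark run timed out");
        }
    }
}
```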

We would be very interested in running some comparisons with the same set of benchmarks on your machine and ours.
Would you be open to doing some shared experiments (essentially running the same product with the same input configurations on both machines and sharing the raw results)? We are happy to answer questions even if you are not interested in this, but we would be willing to offer more feedback and collaboration if you are.

@penghzhang
Author

I am sorry I didn't answer you in time; I was running the benchmark per your suggestions with different cases, different scales and different scenarios, and there is also a time difference between the two of us. I have now run several scenarios on my Linux server, and part of the report is as follows (the unit of consumed time is ns):
Test 1: type=INCR_VIATRA_TRANSFORMATION, case=LOW_SYNCH, scale=256, consumed time: InitializationPhase=7240759457, M2M1=102168809371, EmptyModification=1196, M2M2=81030
Test 2: based on Test 1, I set the scale to 512; consumed time: InitializationPhase=18326172850, M2M1=375126942876, EmptyModification=887, M2M2=87759
Could you help me check whether the results above are reasonable? And in the log file I found output like "Non-unique ApplicationInstance identifier: simple.cps.app.AC1665.inst0"; is that normal?

Besides the test results, I still have a question:

  1. You list four incremental transformation types ("INCR_VIATRA_QUERY_RESULT_TRACEABILITY", "INCR_VIATRA_EXPLICIT_TRACEABILITY", "INCR_VIATRA_AGGREGATED" and "INCR_VIATRA_TRANSFORMATION"). Is each of them used for a specific scenario and scale, or are they equivalent and usable for any case and any scale?

@abelhegedus
Member

InitializationPhase=7240759457, M2M1=102168809371, EmptyModification=1196, M2M2=81030
Could you help me check whether the results above are reasonable?

You can see that it says EmptyModification, which means it does not modify the model; therefore the M2M2 phase will do less work than if the model were modified. I have opened #9 to track this.

And in the log file I found output like "Non-unique ApplicationInstance identifier: simple.cps.app.AC1665.inst0"; is that normal?

It is possible that the Low-synch scenario generator implementation creates duplicate identifiers. I will have to check that (opened #8). However, this should not cause problems in the built-in transformation variants.

You list four incremental transformation types ("INCR_VIATRA_QUERY_RESULT_TRACEABILITY", "INCR_VIATRA_EXPLICIT_TRACEABILITY", "INCR_VIATRA_AGGREGATED" and "INCR_VIATRA_TRANSFORMATION"). Is each of them used for a specific scenario and scale, or are they equivalent and usable for any case and any scale?

These are different implementations of the same transformation (see https://github.com/viatra/viatra-docs/blob/master/cps/Alternative-transformation-methods.adoc ) and can be used for any case and any scale (though they scale differently).

I have also opened #10 to track the need to make model size statistics generation easier.

abelhegedus added a commit that referenced this issue Mar 9, 2017
@penghzhang
Author

Yeah, I just scanned the benchmark source code and saw there were only two modification phase files, for the CLIENT_SERVER and STATISTICS_BASED cases. So for now, I think I can only run these two cases, and if I want to run a large-scale model (EObjects > 1,000,000), I should change the "scale" parameter based on the existing two cases. Am I right?

@abelhegedus
Member

In the meantime, I have added the modification phases to all cases, so you can use LOW_SYNCH as well after pulling the changes.

Also, yes, unfortunately you will have to change the scales for some cases, as the generator rules are not synchronised between cases.

@penghzhang
Author

penghzhang commented Mar 10, 2017

Thanks for updating the benchmark source code so quickly; I have downloaded it. The latest report is as follows (time unit is ns):

scale | Initialization | M2MTransformation1 | Modification | M2MTransformation2
----- | -------------- | ------------------ | ------------ | ------------------
128 | 4121197437 | 38276939661 | 10073802 | 5303
256 | 8839844135 | 104213357352 | 13028806 | 6442
512 | 17417958360 | 341063492940 | 20576541 | 6293

Could you help me check whether the data is reasonable?
In the meantime, the modification in the benchmark is just adding a new instance. Have you ever tried updating an instance or deleting an instance? Is the performance trend the same as for the adding operation?

@abelhegedus
Member

Could you help me check whether the data is reasonable?

Although I don't know exactly which output you are looking at, the values look reasonable.
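
For readability (1 s = 10^9 ns): the M2MTransformation1 column corresponds to roughly 38 s, 104 s and 341 s at scales 128, 256 and 512, so the first transformation grows by about 2.7-3.3x per doubling of the scale, while the incremental second transformation stays in the microsecond range.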

In the meantime, the modification in the benchmark is just adding a new instance. Have you ever tried updating an instance or deleting an instance? Is the performance trend the same as for the adding operation?

We haven't explicitly measured other modifications, but in general the type of the change is not relevant for most variants. For the incremental variants, usually the size of the change matters (deleting an app instance would still be a rather small change; deleting an app type with a big state machine and lots of instances is another matter).

@penghzhang
Author

Thanks for your reply; I understand what you said.
To complete the report above (extracted from the files under the json folder): the XformType is INCR_VIATRA_EXPLICIT_TRACEABILITY and the case is LOW_SYNCH (I forgot to attach them before).
Thanks again!

@abelhegedus
Member

For your information, I have run a benchmark with the Low-synch case with some of the transformations on our build server: https://build.incquerylabs.com/jenkins/job/viatra-cps-benchmark/76/artifact/benchmark/cpsBenchmarkReport.html

As you can see, with the -Xmx10G limit, we cannot run the incremental transformation on the 1024 scale, although the Local-search based variant can still complete due to its lower memory requirements (of course it is not incremental).

@penghzhang
Author

Yeah. At the same time, I have several questions about CPS:

  1. When using VIATRA for incremental M2M transformation, must I build the traceability model?
  2. In the CPS domain model (https://github.com/viatra/viatra-docs/blob/master/cps/Domains.adoc), I see two abstract objects, 'Identifiable' and 'DeploymentElement'. What are they used for? Are they necessary?

@abelhegedus
Copy link
Member

When using VIATRA for incremental M2M transformation, must I build the traceability model?

In the CPS-to-deployment transformation, defined here, building the traceability model is required. However, there have been previous discussions about this (see #4 as a result), since some transformation tools build internal traceability models that can be used instead of this explicit one.

However, you can definitely write M2M incremental transformations in VIATRA without an explicit traceability model. One option is to link the target model and the source model directly, the other is to use some inherent data for correspondence (e.g. in the CPS example, it would be possible to use the id, ip address, and other mapped data to pair CPS and deployment elements, although maybe not entirely).

Finally, while the CPS-to-deployment functional test suite enforces the traceability model, the performance benchmarks do not check transformation correctness (since we assume the transformation passed the functional tests). Of course, directly comparing approaches that build the traceability model to approaches that don't is not really correct from a benchmarking point of view, since the performed tasks are different.

In the CPS domain model (https://github.com/viatra/viatra-docs/blob/master/cps/Domains.adoc), I see two abstract objects, 'Identifiable' and 'DeploymentElement'. What are they used for? Are they necessary?

Well, they are necessary to define a simple traceability metamodel (see on the same page). Additionally, they contain common features (id and description) that many other types inherit.
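
To make the idea concrete, here is a minimal sketch using the plain EMF Ecore API of this kind of traceability metamodel; the class and feature names follow the CPS documentation, but the metamodel on the linked page is the authoritative definition:

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.ETypedElement;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class TraceabilitySketch {
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        // Abstract supertype carrying the common 'id' attribute that many CPS types inherit.
        EClass identifiable = f.createEClass();
        identifiable.setName("Identifiable");
        identifiable.setAbstract(true);
        EAttribute id = f.createEAttribute();
        id.setName("id");
        id.setEType(EcorePackage.Literals.ESTRING);
        identifiable.getEStructuralFeatures().add(id);

        // Abstract supertype on the deployment side.
        EClass deploymentElement = f.createEClass();
        deploymentElement.setName("DeploymentElement");
        deploymentElement.setAbstract(true);

        // A single trace type can then reference any CPS element and any deployment
        // element through the two abstract types, without listing every concrete type.
        EClass trace = f.createEClass();
        trace.setName("CPS2DeploymentTrace");

        EReference cpsElements = f.createEReference();
        cpsElements.setName("cpsElements");
        cpsElements.setEType(identifiable);
        cpsElements.setUpperBound(ETypedElement.UNBOUNDED_MULTIPLICITY);
        trace.getEStructuralFeatures().add(cpsElements);

        EReference deploymentElements = f.createEReference();
        deploymentElements.setName("deploymentElements");
        deploymentElements.setEType(deploymentElement);
        deploymentElements.setUpperBound(ETypedElement.UNBOUNDED_MULTIPLICITY);
        trace.getEStructuralFeatures().add(deploymentElements);

        System.out.println("Built trace type: " + trace.getName());
    }
}
```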

@penghzhang
Author

penghzhang commented Mar 14, 2017

  • One option is to link the target model and the source model directly
    How should I understand this? Could you explain it in detail or give me an example?
  • the other is to use some inherent data for correspondence.
    For example, in the action parts (including CREATED, UPDATED and DELETED) of the HostInstance operation rule of the CPS demonstrator, I can use the key attribute (nodeIp) of the matched HostInstance in the CPS model to find the DeploymentHost (ip == nodeIp) in the deployment model, then finish the operations (like updating or deleting the element in the deployment model). Am I right?

And now, from my point of view: if I want to use VIATRA to implement an incremental transformation with a model defined by myself, the traceability model and the two abstract objects are all optional. Am I right?

@abelhegedus
Member

One option is to link the target model and the source model directly
How should I understand this? Could you explain it in detail or give me an example?

You could change the DeploymentHost EClass to have a cpsHostInstance EReference of the CPS HostInstance type and set it during the transformation.
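
At the Ecore level, that could look like the following sketch (the generated package accessors are assumptions; in practice you would add the reference in the deployment .ecore model rather than at runtime):

```java
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EcoreFactory;

public class DirectLinkSketch {
    public static void main(String[] args) {
        // Assumed generated EPackage accessors for the deployment and CPS metamodels.
        EClass deploymentHost = DeploymentPackage.eINSTANCE.getDeploymentHost();
        EClass hostInstance = CyberPhysicalSystemPackage.eINSTANCE.getHostInstance();

        EReference ref = EcoreFactory.eINSTANCE.createEReference();
        ref.setName("cpsHostInstance"); // direct link from the target element back to its source
        ref.setEType(hostInstance);
        deploymentHost.getEStructuralFeatures().add(ref);
    }
}
```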

the other is to use some inherent data for correspondence.
For example, in the action parts (including CREATED, UPDATED and DELETED) of the HostInstance operation rule of the CPS demonstrator, I can use the key attribute (nodeIp) of the matched HostInstance in the CPS model to find the DeploymentHost (ip == nodeIp) in the deployment model, then finish the operations (like updating or deleting the element in the deployment model). Am I right?

Yes.
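
A minimal sketch of such a lookup, assuming the generated metamodel code provides Deployment.getHosts(), DeploymentHost.getIp() and HostInstance.getNodeIp() (only the attribute names come from the discussion above):

```java
static DeploymentHost findCorrespondingHost(HostInstance source, Deployment deployment) {
    return deployment.getHosts().stream()
            .filter(host -> source.getNodeIp().equals(host.getIp()))
            .findFirst()
            .orElse(null); // null means no pair exists yet, i.e. the CREATED action should build one
}
```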

And now, from my point of view: if I want to use VIATRA to implement an incremental transformation with a model defined by myself, the traceability model and the two abstract objects are all optional. Am I right?

Yes, you can implement incremental transformations with VIATRA on arbitrary models.

If you need more detailed information, we should set up a Skype call or similar teleconference to discuss your use cases and any future industrial or academic collaboration.
