Hello,
First off, I want to say great work on EvoMaster. I'm just a bachelor's student who stumbled upon this, and I successfully managed to generate tests and then (successfully) get those tests to fail after changing the schema.
My question is: For whitebox testing, how are the tests generated (on the most basic level - i.e. where are the files created, where are the files being written to)? What is the general flow? My guess, from skimming through the different classes & functions in core is: HTTP Call -> Solution -> TestCase -> TestSuites -> <written to files somehow>
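To make my guess concrete, here is a minimal sketch of the pipeline I imagine: a "Solution" holding the test cases the search kept, which gets rendered into a test-suite source file and written to disk. All class and method names here are my own illustrative inventions, not EvoMaster's actual API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch of the guessed flow:
// search results -> test cases -> suite source -> file on disk.
// Names are illustrative, NOT EvoMaster's real classes.
public class PipelineSketch {

    // One generated test: a name plus the body to emit.
    public record TestCase(String name, String body) {}

    // A "Solution" here is just the set of test cases the search kept.
    public record Solution(List<TestCase> tests) {}

    // Render the whole suite as Java source text.
    public static String renderSuite(String suiteName, Solution solution) {
        StringBuilder sb = new StringBuilder();
        sb.append("public class ").append(suiteName).append(" {\n");
        for (TestCase tc : solution.tests()) {
            sb.append("  @Test public void ").append(tc.name()).append("() {\n");
            sb.append("    ").append(tc.body()).append("\n");
            sb.append("  }\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    // Write the rendered suite into the output directory.
    public static Path writeSuite(Path outputDir, String suiteName, String source)
            throws IOException {
        Files.createDirectories(outputDir);
        Path file = outputDir.resolve(suiteName + ".java");
        Files.writeString(file, source);
        return file;
    }

    public static void main(String[] args) throws IOException {
        Solution solution = new Solution(List.of(
            new TestCase("test_getUsers_200",
                "given().get(\"/users\").then().statusCode(200);")
        ));
        String source = renderSuite("GeneratedSuite", solution);
        Path file = writeSuite(Path.of("generated-tests"), "GeneratedSuite", source);
        System.out.println("wrote " + file);
    }
}
```

Is this roughly the shape of what happens, or does the real flow differ significantly?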
I've read through the docs; although they didn't have exactly what I needed, they pointed me in the right direction. However, most of the useful information I've found came from digging through very old issues and closed PRs that seemed to target the core functionality of EvoMaster.
Sorry if my thoughts are all over the place. I guess in a broader sense what I'm asking for is something similar to a use case diagram with the core functionalities of EvoMaster. If you just pointed me in the right direction, that would be great.
Thank you so much, and sorry for the wall of text.
What I feel would help me a lot is understanding the flow of the very first few iterations of EvoMaster, e.g. tag v0.0.2, here: https://github.com/EMResearch/EvoMaster/tree/v0.0.2 (how are the tests generated? how does the "mutator" work?)
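For what it's worth, my current mental model of a "mutator" in search-based test generation is something like a (1+1) evolutionary loop: mutate a candidate, keep the mutant only if it is at least as fit. The sketch below is a generic toy version of that idea, not EvoMaster's actual algorithm; the fitness function is a made-up branch-distance example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Generic sketch of a mutate-and-select loop, as used in search-based
// software testing. Illustrative only, NOT EvoMaster's real mutator.
public class MutatorSketch {

    // A candidate "test" here is just a list of integer parameters.
    public static List<Integer> mutate(List<Integer> parent, Random rnd) {
        List<Integer> child = new ArrayList<>(parent);
        int i = rnd.nextInt(child.size());
        // Small local change: nudge one parameter by +/-1.
        child.set(i, child.get(i) + (rnd.nextBoolean() ? 1 : -1));
        return child;
    }

    // Toy fitness: distance to a hypothetical branch condition x == 42.
    // Higher is better (0 means the branch is covered).
    public static int fitness(List<Integer> candidate) {
        return -Math.abs(candidate.get(0) - 42);
    }

    public static void main(String[] args) {
        Random rnd = new Random(0);
        List<Integer> best = new ArrayList<>(List.of(0));
        for (int gen = 0; gen < 1000; gen++) {
            List<Integer> child = mutate(best, rnd);
            // (1+1) EA: keep the mutant only if it is at least as fit.
            if (fitness(child) >= fitness(best)) {
                best = child;
            }
        }
        System.out.println("best = " + best);
    }
}
```

Is the early v0.0.2 code doing something along these lines, with real HTTP-call genes instead of plain integers?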
Sorry if I'm being a bit overbearing here; if you feel this issue doesn't add anything to EvoMaster, feel free to close it.
Thanks for your interest in EM.
More detailed documentation of the internal aspects of EM would be useful for new developers (or people who want to make PRs), but it is not something we have at the moment.
We might do it in the future but, unfortunately, likely not any time soon, as we have a long backlog of tasks.