Run a C++ model from Python? #927
-
Is there a way to define a model in C++ and then run that model in a Python CUDA ensemble? We have a model already created in C++, but we want to wrap an evolutionary algorithm around it in Python.
-
Officially via FLAMEGPU, no. All our Python bindings are generated via SWIG; we tell it what we want wrapped in `swig/python/flamegpu.i`.

If you wanted to achieve this, you'd probably want to create a C++ method that returns (or fills) a `ModelDescription` with your model (I expect this would need to be part of the main `flamegpu` project). Then extend `swig/python/flamegpu.i` so that SWIG additionally wraps that method, adding it to the Python interface. If all is done correctly, when you rebuild `pyflamegpu` your method should be available in the `pyflamegpu` module.

It's not something we've tried or required, and SWIG can be a bit unpredictable at times, so I'm not sure whether any unexpected issues would occur whilst trying to get SWIG to play nicely.
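If that worked, the Python side might look something like the minimal sketch below. `buildMyModel` is a hypothetical name for the wrapped C++ method; the surrounding calls follow the FLAME GPU 2 Python ensemble API as documented, but none of this has been tried here:

```python
import pyflamegpu

# Hypothetical: assumes a C++ builder such as
#   void buildMyModel(flamegpu::ModelDescription &model);
# has been added to the main flamegpu project and declared in
# swig/python/flamegpu.i so that SWIG wraps it into pyflamegpu.
model = pyflamegpu.ModelDescription("my_cpp_model")
pyflamegpu.buildMyModel(model)  # hypothetical wrapped C++ method

# The filled-in description can then drive a CUDA ensemble from Python,
# e.g. one run per candidate produced by the evolutionary algorithm.
runs = pyflamegpu.RunPlanVector(model, 64)
runs.setSteps(100)
ensemble = pyflamegpu.CUDAEnsemble(model)
ensemble.simulate(runs)
```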
-
Alternatively, translating the model to Python (using C++ style agent functions) should be fairly painless, as there is a one-to-one mapping of all API calls. This obviously depends on whether you have done anything with any other C++ libraries, in which case Rob's answer is sensible. It would also be possible to have Python drive a C++ ensemble indirectly through file IO.
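For the translation route, here is a minimal sketch of how a hand-translated model might look in `pyflamegpu`, with the agent function kept in C++ form as a runtime-compiled (RTC) string. The model, agent, and variable names are illustrative, and exact identifiers (e.g. `MessageNone`) can differ between FLAME GPU 2 versions:

```python
import pyflamegpu

# C++-style agent function supplied as a string and compiled at runtime (RTC).
DRIFT_SRC = r"""
FLAMEGPU_AGENT_FUNCTION(drift, flamegpu::MessageNone, flamegpu::MessageNone) {
    FLAMEGPU->setVariable<float>("x", FLAMEGPU->getVariable<float>("x") + 1.0f);
    return flamegpu::ALIVE;
}
"""

model = pyflamegpu.ModelDescription("translated_model")  # illustrative name
agent = model.newAgent("point")
agent.newVariableFloat("x")
agent.newRTCFunction("drift", DRIFT_SRC)
model.newLayer().addAgentFunction("point", "drift")

# Single simulation shown for brevity; a CUDAEnsemble works the same way.
simulation = pyflamegpu.CUDASimulation(model)
simulation.SimulationConfig().steps = 10
simulation.simulate()
```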