JIT compilation in MathInterpreter? #10
Comments
Been doing some reading on this, and it looks like this might be a bit simpler to achieve if the interpreted language were lowered through LLVM. EDIT: see http://llvm.org/docs/tutorial/LangImpl1.html, though by the looks of it you've already done the first couple of chapters' worth of work (lexer and AST generation). EDIT 2: There's also http://luajit.org/dynasm.html, though I'm not sure it would work quite as well as LLVM.
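To make the Kaleidoscope comparison concrete, here is a minimal sketch of the "first couple of chapters" stage (lexer plus AST construction) in Python. The token kinds, grammar, and nested-tuple AST shape are illustrative assumptions, not MathInterpreter's actual types:

```python
import re

# Tiny lexer and recursive-descent parser producing a nested-tuple AST.
# AST shape: ("num", value) | ("var", name) | (op, lhs, rhs).

def tokenize(src):
    spec = r"(?P<num>\d+(?:\.\d+)?)|(?P<name>[A-Za-z_]\w*)|(?P<op>[+\-*/()])"
    return [(m.lastgroup, m.group()) for m in re.finditer(spec, src)]

def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else (None, None)
    def expr():                       # expr := term (('+'|'-') term)*
        nonlocal pos
        node = term()
        while peek()[1] in ("+", "-"):
            op = tokens[pos][1]; pos += 1
            node = (op, node, term())
        return node
    def term():                       # term := factor (('*'|'/') factor)*
        nonlocal pos
        node = factor()
        while peek()[1] in ("*", "/"):
            op = tokens[pos][1]; pos += 1
            node = (op, node, factor())
        return node
    def factor():                     # factor := num | name | '(' expr ')'
        nonlocal pos
        kind, text = tokens[pos]; pos += 1
        if kind == "num":
            return ("num", float(text))
        if kind == "name":
            return ("var", text)
        node = expr()                 # we just consumed '(' above
        pos += 1                      # consume ')'
        return node
    return expr()

ast = parse(tokenize("1 + 2 * x"))
print(ast)  # ('+', ('num', 1.0), ('*', ('num', 2.0), ('var', 'x')))
```

Everything from this point on (code generation, optimization, JITing) is the part the tutorial's later chapters cover.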
In principle we are talking about perhaps a week of work, since, as you said, the AST generation is already implemented. One just has to rewrite the different virtual instructions to map each one to the corresponding instruction-generating function in the JIT framework.
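The rewrite described here is essentially a dispatch table from virtual instructions to emitter functions. A hedged Python sketch, emitting textual LLVM-like IR instead of calling a real JIT framework (the opcode names and `Emitter` class are invented for illustration; with the LLVM C++ API or llvmlite, each lambda would call the corresponding `IRBuilder` method instead):

```python
# Map each virtual instruction to an "instruction-generating function".

class Emitter:
    def __init__(self):
        self.ir = []
        self.counter = 0

    def fresh(self):
        self.counter += 1
        return f"%t{self.counter}"

    def emit_binop(self, llvm_op, lhs, rhs):
        dst = self.fresh()
        self.ir.append(f"{dst} = {llvm_op} double {lhs}, {rhs}")
        return dst

# One row per virtual instruction; adding an opcode means adding a row.
OPCODE_TO_EMITTER = {
    "ADD": lambda e, a, b: e.emit_binop("fadd", a, b),
    "SUB": lambda e, a, b: e.emit_binop("fsub", a, b),
    "MUL": lambda e, a, b: e.emit_binop("fmul", a, b),
    "DIV": lambda e, a, b: e.emit_binop("fdiv", a, b),
}

def compile_ast(node, emitter):
    kind = node[0]
    if kind == "num":
        return repr(node[1])          # immediate operand
    if kind == "var":
        return f"%{node[1]}"          # named SSA value
    op = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}[kind]
    lhs = compile_ast(node[1], emitter)
    rhs = compile_ast(node[2], emitter)
    return OPCODE_TO_EMITTER[op](emitter, lhs, rhs)

e = Emitter()
compile_ast(("+", ("num", 1.0), ("*", ("num", 2.0), ("var", "x"))), e)
print("\n".join(e.ir))
# %t1 = fmul double 2.0, %x
# %t2 = fadd double 1.0, %t1
```

The point of the table is that the AST walk stays unchanged; only the right-hand side of each row changes when you swap interpreters for a JIT backend.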
All good points. I envisaged some sort of preprocessor/autoconf/template magic that would detect which hardware platform the library was being compiled on and declare the correct instructions for that platform, or even disable the JIT compiler if the platform is unknown. This would of course mean more work, but it would at least overcome the problem of cross-platform compatibility.
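The detect-or-disable idea can be sketched as a simple backend selector. This is a hedged illustration, not the library's mechanism: in C/C++ the check would live in the build system or behind `#if defined(__x86_64__)` / `#if defined(__i386__)` guards, and the supported set below is assumed:

```python
import platform

# Enable the JIT only on architectures we have emitters for; otherwise
# fall back to the always-correct interpreter.
JIT_BACKENDS = {"x86_64", "amd64", "i386", "i686"}  # assumed supported set

def select_backend(machine=None):
    machine = (machine or platform.machine()).lower()
    return "jit" if machine in JIT_BACKENDS else "interpreter"

print(select_backend("x86_64"))   # jit
print(select_backend("sparc64"))  # interpreter
```

The interpreter fallback is what keeps the library correct on unknown platforms; the JIT is purely an optimization layered on top.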
The math interpreter is obviously very powerful. Performance, however, is currently (and understandably) worse than native machine code. So my question: how hard would it be to add JIT compilation to speed up the interpreted code? I don't expect this to be a trivial task, but are we talking weeks, months, or years? I wouldn't expect a wide variety of instruction sets to be supported, but x86 and perhaps x86_64 might be a nice feature.
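The interpreted-versus-compiled gap being asked about can be illustrated without any native codegen, using Python's built-in `compile()` as a stand-in for a JIT: both paths evaluate the same expression, but the tree walk pays per-node dispatch overhead on every call. The AST shape here is an assumption for illustration:

```python
import time

# Tree-walking evaluator: recurses over the AST on every evaluation.
def eval_ast(node, env):
    kind = node[0]
    if kind == "num":
        return node[1]
    if kind == "var":
        return env[node[1]]
    a, b = eval_ast(node[1], env), eval_ast(node[2], env)
    return {"+": a + b, "*": a * b}[kind]

ast = ("+", ("num", 1.0), ("*", ("num", 2.0), ("var", "x")))
compiled = compile("1.0 + 2.0 * x", "<expr>", "eval")  # "JIT" stand-in

# Both strategies must agree on the result.
assert eval_ast(ast, {"x": 3.0}) == eval(compiled, {"x": 3.0}) == 7.0

for label, run in [("tree walk", lambda: eval_ast(ast, {"x": 3.0})),
                   ("compiled ", lambda: eval(compiled, {"x": 3.0}))]:
    t0 = time.perf_counter()
    for _ in range(100_000):
        run()
    print(label, time.perf_counter() - t0)
```

A real JIT (LLVM, DynASM) removes even the bytecode-dispatch cost this stand-in still pays, which is where the bulk of the speedup over an interpreter comes from.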