WIP: Proper tco optimization #121
base: master
Conversation
Builtins can be implemented this way too, e.g. by utilizing generators/async-await. In cpp-jsonnet, there is a PR to implement manifestation inside of the interpreter loop, and it looks like a cool idea.
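For reference, a hedged sketch of the shape this could take (all names here are hypothetical placeholders, not the actual jrsonnet or cpp-jsonnet API):

```rust
// Hypothetical sketch (not the jrsonnet API): a builtin written as a resumable
// state machine. Instead of recursing into the evaluator and growing the native
// stack, it returns a request that the main interpreter loop fulfils before
// resuming the builtin with the result. An async/generator-based builtin would
// desugar to roughly this shape, with every `.await`/yield point becoming a
// `NeedEval` request.
struct Val;   // placeholder for the evaluator's value type
struct Thunk; // placeholder for a lazily-evaluated expression

enum BuiltinStep {
    /// Ask the interpreter loop to evaluate this thunk and resume us with the value.
    NeedEval(Thunk),
    /// The builtin has produced its final value.
    Done(Val),
}

trait ResumableBuiltin {
    /// Called first with `None`, then repeatedly with the value produced
    /// for the previously requested thunk, until `Done` is returned.
    fn resume(&mut self, input: Option<Val>) -> BuiltinStep;
}
```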
pub(crate) struct TcVM {
    pub(crate) vals: Fifo<Val>,
    pub(crate) ctxs: Fifo<Context>,
    pub(crate) apply: Fifo<TailCallApply>,
Right here I have three stacks, where TailCallApply is an enum with variants of varying size that contain references to the other stacks.
This enum value is pretty big, with no way to shrink it without moving some things to a heap allocation.
rsjsonnet did a funny thing here: it has nine stacks instead (https://github.com/eduardosm/rsjsonnet/blob/main/rsjsonnet-lang/src/program/eval/mod.rs#L32-L40), and it keeps allocations even, because all the variable-size data lives in the other stacks. This is pretty verbose code with some implicit relations, though...
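To make the size issue concrete, here is a tiny standalone illustration (hypothetical types, not jrsonnet's actual TailCallApply) of why a single enum-of-everything stack wastes space: every element pays for the largest variant, while per-type stacks only store what each variant actually needs.

```rust
// An enum occupies as much space as its largest variant plus the discriminant,
// so a Vec<Apply> pays the worst-case size for every element pushed.
#[allow(dead_code)]
enum Apply {
    Small(u32),
    Big([u64; 8]),
}

fn main() {
    // On a typical 64-bit target this prints 72 and 4: even a stream of
    // Apply::Small values costs 72 bytes per element in a Vec<Apply>,
    // while a dedicated Vec<u32> would cost 4 bytes per element.
    println!("{}", std::mem::size_of::<Apply>());
    println!("{}", std::mem::size_of::<u32>());
}
```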
But I think it is possible to extend this approach using a proc macro:
#[derive(FifoData)]
enum Op {
    A { a: u32, b: u32 },
    B { v: String },
    C { a: Context, b: Val, c: u32, d: String },
    // There should be a way to opt out of moving data to the per-type stacks;
    // the remaining data might be attached to... the OpTag enum?
}
Which would expand into something like this:
enum OpTag {
    A,
    B,
    C,
}

struct OpFifo {
    ops: Vec<OpTag>,
    u32s: Vec<u32>,
    strings: Vec<String>,
    // Val could also be inlined to some extent together with its children,
    // so this proc macro should allow composition of its elements.
    vals: Vec<Val>,
    contexts: Vec<Context>,
}
impl Op {
    fn pop(fifo: &mut OpFifo) -> Self {
        // Fields are popped in reverse order of how they were pushed.
        match fifo.ops.pop().unwrap() {
            OpTag::A => Self::A { b: fifo.u32s.pop().unwrap(), a: fifo.u32s.pop().unwrap() },
            OpTag::B => Self::B { v: fifo.strings.pop().unwrap() },
            OpTag::C => Self::C {
                d: fifo.strings.pop().unwrap(),
                c: fifo.u32s.pop().unwrap(),
                b: fifo.vals.pop().unwrap(),
                a: fifo.contexts.pop().unwrap(),
            },
        }
    }

    fn push(self, fifo: &mut OpFifo) {
        match self {
            Self::A { a, b } => {
                fifo.ops.push(OpTag::A);
                fifo.u32s.push(a);
                fifo.u32s.push(b);
            }
            _ => todo!(),
        }
    }
}
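A quick usage sketch of how the generated push/pop pair would behave, assuming the expansion above is in scope (only variant A is used, since it needs no Val/Context values):

```rust
// Illustrative fragment: field values round-trip because pop reverses push order.
let mut fifo = OpFifo {
    ops: Vec::new(),
    u32s: Vec::new(),
    strings: Vec::new(),
    vals: Vec::new(),
    contexts: Vec::new(),
};
Op::A { a: 1, b: 2 }.push(&mut fifo);
Op::A { a: 3, b: 4 }.push(&mut fifo);
// LIFO order: the last pushed op comes back first, with its original fields.
assert!(matches!(Op::pop(&mut fifo), Op::A { a: 3, b: 4 }));
assert!(matches!(Op::pop(&mut fifo), Op::A { a: 1, b: 2 }));
```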
This structure is kinda dumb for the provided enum, but might make sense for the whole jrsonnet?
C++ jsonnet does have proper TCO optimization.
However, I haven't used this approach before, since it is hard to maintain in some cases, especially when it is spread across most of the code.
But now I have an idea: why not implement a hybrid approach, where TCO exists only in the main interpreter loop and not in the rest of the codebase, e.g. builtins?
In this PR I explore this possibility; it may also allow for async builtins, std.parallelMap, huge optimizations in recursive functions, and better GC root management strategies.
Stack traces and some other things are broken for now, so this is not ready for use.
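To make the hybrid idea concrete, here is a minimal, self-contained sketch (illustrative placeholder types, not the actual jrsonnet evaluator) of a main loop that replaces nested evaluation with an explicit work stack, while leaf operations stay ordinary Rust calls, the way builtins would:

```rust
// Minimal trampoline sketch: only the driver loop is "TCO'd" via an explicit
// stack of pending work; leaf operations (standing in for builtins) still run
// as ordinary, non-reentrant Rust functions.
enum Work {
    /// Evaluate an expression; may push more work instead of recursing.
    Eval(Expr),
    /// Run a leaf operation on the current value (stands in for a builtin).
    Apply(fn(i64) -> i64),
}

enum Expr {
    Lit(i64),
    /// A "call": apply `f` to the result of evaluating the inner expression.
    Call(fn(i64) -> i64, Box<Expr>),
}

fn run(expr: Expr) -> i64 {
    let mut work = vec![Work::Eval(expr)];
    let mut acc = 0i64;
    // Expression nesting no longer translates into native stack depth:
    // nested calls become pushed Work items instead of nested run() frames.
    while let Some(item) = work.pop() {
        match item {
            Work::Eval(Expr::Lit(n)) => acc = n,
            Work::Eval(Expr::Call(f, inner)) => {
                work.push(Work::Apply(f));
                work.push(Work::Eval(*inner));
            }
            Work::Apply(f) => acc = f(acc),
        }
    }
    acc
}

fn inc(n: i64) -> i64 {
    n + 1
}

fn main() {
    // ((1 + 1) + 1) evaluated without native recursion; prints 3.
    let expr = Expr::Call(inc, Box::new(Expr::Call(inc, Box::new(Expr::Lit(1)))));
    println!("{}", run(expr));
}
```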