
About the usage of Lazy #2

AndrasKovacs opened this issue Apr 6, 2020 · 4 comments

@AndrasKovacs
Am I correct that the reason Lazy is used in evaluation is that evaluation returns in the strict M monad, so returned values would otherwise always be forced?

As far as I can see, M only does fresh variable generation. What do you think about using de Bruijn indices and levels? In that case, would eval be pure in your setup, so that Lazy could be dropped?
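For reference, this is the kind of pure setup I have in mind: a minimal, untyped sketch (hypothetical names, not this repo's actual Syntax/Domain types) where terms carry de Bruijn indices, values carry de Bruijn levels in neutral heads, and quoting converts between the two, so no fresh-name monad is needed anywhere.

```haskell
-- Terms use de Bruijn indices; values use de Bruijn levels, so
-- evaluation needs no fresh-variable supply and stays pure.
data Term
  = Var Int             -- de Bruijn index
  | Lam Term
  | App Term Term
  deriving (Eq, Show)

data Value
  = VLam [Value] Term     -- closure: captured environment + body
  | VNeutral Int [Value]  -- de Bruijn level + spine of arguments

eval :: [Value] -> Term -> Value
eval env (Var i)   = env !! i
eval env (Lam t)   = VLam env t
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Value -> Value -> Value
apply (VLam env t)    v = eval (v : env) t
apply (VNeutral l sp) v = VNeutral l (sp ++ [v])

-- quote reads a value back into a term at binder depth d,
-- converting levels to indices: index = d - level - 1.
quote :: Int -> Value -> Term
quote d (VLam env t)    = Lam (quote (d + 1) (eval (VNeutral d [] : env) t))
quote d (VNeutral l sp) = foldl (\f v -> App f (quote d v)) (Var (d - l - 1)) sp

normalize :: Term -> Term
normalize = quote 0 . eval []
```

Because levels name binders from the outside in, values never need shifting when passed under a binder; quote only does arithmetic at the leaves.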

@ollef (Owner) commented Apr 6, 2020

M is also a Task, which allows the evaluator to make queries such as asking for the definition or type of top-level entities. Task is used by the compiler driver to do fine-grained dependency tracking for incremental compilation.

If it weren't for that, I would probably try to go for a pure solution with levels in the domain, but since evaluation is already impure, I chose (locally) unique integers because they're more convenient when you can afford them.

I wonder if it'd be possible to implement a new kind of mutable reference that doesn't have the performance problems of IORef, but limits usage to situations like Lazy. What do you think?

@AndrasKovacs (Author) commented Apr 6, 2020

I see the Task now; that's where the IO actually comes from. As for the IORef issue, you could try unsafeInterleaveIO or its dupable cousin, which is supposedly the standard solution for laziness in IO.

I haven't tried interleaving yet; I've always just blithely used unsafePerformIO. Basically, unsafeInterleaveIO-ing is roughly the same as creating a thunk via unsafePerformIO. However, if we go with this, then in the case of evaluation we're writing an IO function whose only actual IO effect happens via unsafePerformIO, which is a bit silly, so why not drop IO altogether? That has been my reasoning so far. I think side-effecting lazy evaluation is unavoidable in a high-performance implementation. Of course, it should be possible to make this safer with some phantom-typed API.
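To illustrate the "thunk via unsafePerform" idea, here is a hypothetical sketch (not the API of this repo): because constructor fields are non-strict, wrapping unsafeDupablePerformIO in one gives an ordinary GHC thunk that runs the action when first forced and shares the result afterwards.

```haskell
import System.IO.Unsafe (unsafeDupablePerformIO)

-- A lazy value backed by a suspended IO action.  The field is
-- non-strict, so the unsafeDupablePerformIO call becomes an ordinary
-- GHC thunk: it runs when first forced and the result is then shared.
newtype Lazy a = Lazy a

-- Only safe when the action is observably pure, e.g. it reads data
-- that is already "committed" and will never be mutated or rolled back.
lazy :: IO a -> Lazy a
lazy act = Lazy (unsafeDupablePerformIO act)
{-# NOINLINE lazy #-}

force :: Lazy a -> a
force (Lazy a) = a
```

Note that with unsafeDupablePerformIO the action may in principle run twice under contention from multiple threads, so this is only reasonable when the suspended action is idempotent.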

In my planned implementations, the main complication of unsafe IO comes up when we want to backtrack, e.g. undo meta solutions. Here, we have to make sure that evaluation only reads data that is "committed" and can no longer be rolled back. AFAIK something like this is implemented in Agda.

@AndrasKovacs (Author) commented Apr 6, 2020

BTW, I'm warming up to your version of Glued, and I plan to benchmark an optimized version of it against the older smalltt version. Without benchmarking I can only guess, but I now think there is a reasonable chance that it could actually be faster than my version. It seems better in terms of sharing computation, and a bit worse in terms of thunk creation and indirections. However, if it has comparable performance, then I'll probably switch to it, because it makes the code much simpler.

ollef added a commit that referenced this issue Apr 6, 2020
This might be faster than using an IORef, though I don't notice any
difference when measuring. The API should be safe even though it's
implemented using unsafe functions.

Ref. #2.
@ollef (Owner) commented Apr 6, 2020

I've just switched to unsafeDupablePerformIO wrapped in (what I believe is) a safe API. It doesn't make any noticeable performance difference in my tests, but it might show up in other workloads. Either way it seems like a good idea, so I'm gonna keep it.

Interesting remarks about Glued. The main motivation for my version was ease of implementation, but I'd be delighted if it turns out to actually be more performant as well. I'll be eagerly awaiting the results of your benchmarks and any other insights you might have around them.
