
Automatic wrapping of specialized methods, solve many ambiguities #813

Draft · wants to merge 3 commits into master

Conversation

manuelbb-upb (Contributor):

This is a WIP/proposal to solve some ambiguity issues, like #575 or #635.
These arise because the methods defined with the `@wrapped` macro compete with very specialized methods, possibly from sub-modules.

In contrast to #811, this approach does not need a dedicated dispatch context.
Instead, the function `specialize_methods` looks up all methods currently available for a specified function and iteratively defines corresponding methods for the symbolically wrapped types.
The behavior of `specialize_methods` is therefore load-order dependent.
We could call it ourselves or leave it up to the user.
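
As a rough illustration of the lookup step (a sketch under my own assumptions, not the PR's actual code):

```julia
using LinearAlgebra  # brings the stdlib matrix-vector `*` methods into play

# Enumerate the existing methods of `*` whose signatures match
# (AbstractMatrix, AbstractVector) and inspect their argument types.
# The PR itself parses this information via ExprTools.jl.
for m in methods(*, (AbstractMatrix, AbstractVector))
    sig = Base.unwrap_unionall(m.sig)          # strip `where` wrappers
    argtypes = collect(sig.parameters)[2:end]  # drop typeof(*) itself
    # ... a pass like `specialize_methods` would emit one wrapped
    # method per signature here ...
end
```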

In this first draft:

  • specialize_methods only creates definitions for matrix-vector and matrix-matrix products and delegates the work to `_matvec` and `_matmul`, respectively (see the sketch after this list). The examples from the issues above seem to work now.
  • ExprTools.jl is used to parse method information. I started using ExprTools for convenience and to leverage the existing code for `@wrapped`, which is only slightly modified.
  • However, the current approach does not allow for incremental precompilation. I am not sure whether that is feasible at all, but I think we could work with the method signatures directly to access argument types and avoid the hacky evaluation I currently use, which requires evaluating into sub-modules. This would require a bit of rewriting.
  • `specialize_methods()` is executed in the module's `__init__` function, and the tests appear to pass locally.
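
To make the first point concrete, here is the shape of definition such a scan would emit for a matrix-vector product. This is a sketch, not code from the PR; `SomeMatrixType` stands in for a concrete type found in a scanned signature:

```julia
# Hypothetical output of `specialize_methods` for one scanned signature.
# `Symbolics._matvec` is the internal fallback this PR delegates to, and
# `Symbolics.Arr{Num, 1}` is the wrapped symbolic vector type.
Base.:(*)(A::SomeMatrixType, x::Symbolics.Arr{Num, 1}) = Symbolics._matvec(A, x)
```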

Review thread on the diff (excerpt of the changed code):

```julia
#=
`specialize_methods`,
apparently working for matrix-vector and matrix-matrix products.
If a method takes an `AbstractMatrix` and a `VecOrMat` (type union), then we
accidentally redirect a matrix-vector product to `_matmul` without `typeintersect`.
=#
typeintersect($(atype), $(arg.args[2]) where {$(wparams...)})
```
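
For context on why the `typeintersect` call matters, a small self-contained check (my illustration, not part of the diff):

```julia
# `VecOrMat` is Base's alias for Union{Vector{T}, Matrix{T}} where T.
# A wrapped signature built from the raw `VecOrMat` annotation would also
# match vectors and could send matrix-vector products to `_matmul`.
# Intersecting with the intended abstract type strips the unwanted part:
@assert typeintersect(AbstractVector, VecOrMat) == Vector
@assert typeintersect(AbstractMatrix, VecOrMat) == Matrix
```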
Member:

What happens if methods are added by a third party package after `@wrapped` has executed?

manuelbb-upb (Contributor, Author):

Then `specialize_methods` would have to be called again. This sort of load-order dependence is also observed in Hyperspecialize, and the only way around it seems to be the Cassette approach.
Actually, the code in this pull request is not too different from what Hyperspecialize can do, except that some of the Matrix or Vector types occurring in method definitions can be very specialized, so I had to either traverse the whole type tree and gather every possible subtype for Hyperspecialize's `@concretize`, or inspect the method signatures (which is what is done now).

However, there are a few things we can do with respect to the points in your other comment:

> All relevant types in Base/stdlib should be made to work with the wrapped types defined in this package

`specialize_methods` accepts a list of modules via the argument `mods`. When calling it in `__init__()` we can restrict ourselves to LinearAlgebra and other relevant standard modules.
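
A sketch of that restricted call (I am assuming modules are passed positionally, matching the usage shown further down):

```julia
using LinearAlgebra

function __init__()
    # Only scan methods defined for types from LinearAlgebra at load time.
    specialize_methods(LinearAlgebra)
end
```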

> Taking this further, if a new package wants to use its array or number types with Symbolics, wherever necessary, it should return the specializations manually or using a different macro.

> What happens if methods are added by a third party package after `@wrapped` has executed?

Whenever a user has custom array types defined in a module, or there is a package with custom array types, `specialize_methods` can be called with `mods` set accordingly to automatically “widen”/wrap the relevant definitions and provide a generic fallback. This way, the developer does not have to know that `Base.:(*)(::AbstractMatrix, ::AbstractVector)` should redirect to `_matvec` most of the time.
If, for example, I get a StaticMatrix somewhere in my computations, then I cannot use it with symbolic arrays:

```julia
using Symbolics
using StaticArrays

A = @SMatrix(rand(2, 2))  # imagine this is obtained somewhere else
@variables (x::Real)[1:2]
A * x  # ERROR: MethodError: *(::SMatrix{2, 2, Float64, 4}, ::Symbolics.Arr{Num, 1}) is ambiguous.
```

But after `Symbolics.specialize_methods(StaticArrays)`, it just works.
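
Presumably, after that call a definition along these lines exists and resolves the ambiguity. This is my illustration of the effect, not code from the PR; `StaticMatrix` is StaticArrays' abstract matrix type:

```julia
# Illustrative only: the kind of method the scan over StaticArrays would
# generate, delegating to the `_matvec` fallback named earlier in this PR.
Base.:(*)(A::StaticMatrix, x::Symbolics.Arr{Num, 1}) = Symbolics._matvec(A, x)
```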

For some widely used packages (such as StaticArrays) we could even do this ourselves with `@require`. Conversely, a package author could opt to place `Symbolics.specialize_methods(@__MODULE__)` in their `__init__`, though I have to think about scopes and export statements a bit more.
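
For the `@require` route, a sketch of what that could look like inside Symbolics' own `__init__` (the Requires.jl dependency and the exact placement are my assumptions; the UUID is StaticArrays.jl's registered package UUID):

```julia
using Requires  # assumed dependency for the @require mechanism

function __init__()
    @require StaticArrays="90137ffa-7385-5640-81b9-e52037218182" begin
        # Wrap StaticArrays' method signatures once the package is loaded.
        specialize_methods(StaticArrays)
    end
end
```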

Going forward, I can try to benchmark the impact of `specialize_methods` in `__init__` (for different values of `mods`) and maybe devise some tests.

manuelbb-upb (Contributor, Author):

Quick update for anyone following along: there are precompilation issues. I'll try to work them out, maybe by relying less on `eval`/`@eval`.
