[docs] add Transitioning from MATLAB tutorial #3698
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##           master    #3698   +/- ##
=======================================
  Coverage   98.35%   98.35%
=======================================
  Files          43       43
  Lines        5707     5707
=======================================
  Hits         5613     5613
  Misses         94       94

☔ View full report in Codecov by Sentry.
I don't have write access anymore since this PR is now in your branch.
I've added the commands for querying the solution status to the JuMP, YALMIP, and CVX code. Also, note that in MATLAB the main function needs to come first, so I changed the order back.
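For reference, the status-querying pattern on the JuMP side looks roughly like this (a minimal sketch; the trivial model and the SCS solver are placeholders, not the tutorial's actual example):

using JuMP
import SCS

model = Model(SCS.Optimizer)
@variable(model, x >= 0)
@objective(model, Min, x)
optimize!(model)
# Check the solution status before querying results.
if is_solved_and_feasible(model)
    println(value(x))
else
    println(termination_status(model))
end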
Yeah. I changed it here so that we can get a preview: https://jump.dev/JuMP.jl/previews/PR3698/tutorials/getting_started/transitioning_from_matlab/
Maybe it is worth adding a paragraph explaining
It's also worth mentioning the difference between YALMIP and JuMP regarding dualization. YALMIP always uses the image form / geometric conic form, while in JuMP it depends on the solver. See section 2.1 of the MOI paper, for instance. You can refer to https://jump.dev/JuMP.jl/stable/tutorials/conic/dualization/
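For concreteness, the Dualization.jl route described in that tutorial looks like this (a minimal sketch; SCS is a placeholder solver):

using JuMP
import Dualization
import SCS

# Passing dual_optimizer makes JuMP build and solve the dual program,
# which matches the geometric (image) conic form that YALMIP always uses.
model = Model(Dualization.dual_optimizer(SCS.Optimizer))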
Co-authored-by: Mateus Araújo <[email protected]>
I'd like to keep this tutorial pretty tightly scoped to focus on JuMP/YALMIP/CVX. It is not the place for this tutorial to be a cheatsheet of every difference between Julia and MATLAB. `Symmetric` and `Hermitian` should be reasonably self-evident. If they aren't, users can always look up the Julia documentation.
Is this necessary for users transitioning from MATLAB? It seems in the weeds. I'd prefer we left it out and got feedback first.
The difference is that with YALMIP and CVX one needs to do
This is the n°1 comment people have when coming from YALMIP (see for instance https://discourse.julialang.org/t/yalmip-vs-jump/30776/10 and https://discourse.julialang.org/t/jump-sumofsquares-generate-sdps-with-18x-more-constraints-than-yalmip/65377/15). It can just be a small note referring to the relevant part of the docs. (Sorry I closed; misclick.)
I also think such a remark would be worthwhile; it has a large impact on performance.
Co-authored-by: Mateus Araújo <[email protected]>
optimize!(model)
if is_solved_and_feasible(model)
    WT = dual(PPT)
    return value(λ), real(LinearAlgebra.dot(WT, rhoT))
Why did you add `real` here? It's redundant; `dot` already returns a real number from Julia 1.11 onwards.
Because the docs currently build with Julia 1.10 and we support Julia 1.6. Also, v1.11 isn't released yet.
What's the problem? It doesn't cause any error with older versions of Julia; the output is just slightly less neat.
`dot` currently returns a `Complex{Float64}` instead of a `Float64`. Your MATLAB examples return a `Float64`.
julia> import LinearAlgebra
julia> A = LinearAlgebra.Hermitian(rand(ComplexF64, (3, 3)))
3×3 LinearAlgebra.Hermitian{ComplexF64, Matrix{ComplexF64}}:
0.468242+0.0im 0.296799+0.49972im 0.621408+0.148878im
0.296799-0.49972im 0.413853+0.0im 0.677598+0.875476im
0.621408-0.148878im 0.677598-0.875476im 0.59341+0.0im
julia> LinearAlgebra.dot(A, A)
4.6860981128420285 + 0.0im
julia> real(LinearAlgebra.dot(A, A))
4.6860981128420285
I know. This is not an error, and will change anyway in Julia 1.11.
> I know

I know because I saw your PR 😄
My point is that I personally would argue that changing from `ComplexF64` to `Float64` is a breaking change, and the fact it will change in Julia v1.11 is not compelling because we still have many JuMP users on a range of old versions. I want something that works across all versions, not just some upcoming one. We cannot assume that new users from MATLAB will use a recent version of Julia. See, for example, a discourse user on Julia v1.5.
I really don't understand the problem. It works. It's just slightly uglier. And moreover it's just example code in a tutorial. Nothing depends on whether its return value is `Float64` or `ComplexF64`.
It doesn't work. One returns a `ComplexF64`, and the other returns a `Float64`.
Why is this a problem?
# The same applies for Hermitian matrices, using `LinearAlgebra.Hermitian` and
# `LinearAlgebra.ishermitian`.
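For illustration, the wrapping mentioned in that excerpt (a small sketch, not the tutorial's own example):

import LinearAlgebra

# Hermitian reads one triangle of the matrix, so the wrapped result is
# exactly Hermitian by construction; ishermitian tests the property.
H = LinearAlgebra.Hermitian([1 im; -im 2])
LinearAlgebra.ishermitian(H)  # true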
What I wanted to point out here is that with MATLAB you have to be paranoid about making your matrices exactly Hermitian, in order to not get subtle errors, whereas with JuMP you just need to wrap them and you're good to go.
But now I've tested it, and it turns out that it's not true. If you try to constrain a non-Hermitian matrix to be in `HermitianPSDCone()` you get an error message, as I expected, but if you try to constrain a non-symmetric matrix to be in `PSDCone()` JuMP will let you (and fail afterwards, of course). Is that intentional?
For non-symmetric matrices, JuMP adds equality constraints for the off-diagonal elements. It lets you write

model = Model()
@variable(model, X[1:2, 1:2])
@constraint(model, X in PSDCone())

We don't have the same for Hermitian because there is no `HermitianPositiveSemidefiniteConeSquare` (I'm sure we've discussed, but I can't find the link. Edit: found it: #3369 (comment)).
Why? Is there any case where you would need that? As far as I can see, this can either generate less efficient code or give you an error.
Regardless of the need, it exists in JuMP 1.0 so we are not changing it.

Someone could also write `[x y; 1 z] in PSDCone()` and we would add a constraint that `y == 1`. It may have been sensible to support only the triangular form of PSD, but we didn't do that, so it's no longer up for discussion.

We added Hermitian support after 1.0, which is why it supports only the triangular form.
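A sketch of that square-form behavior (hypothetical scalar variables):

using JuMP

model = Model()
@variable(model, x)
@variable(model, y)
@variable(model, z)
# Constraining the square, non-symmetric form adds the symmetry
# equality y == 1 alongside the PSD requirement.
@constraint(model, [x y; 1 z] in PSDCone())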
I was just curious; I wasn't suggesting changing this, as it would clearly be a breaking change.
Although now that you brought it up, I will definitely suggest removing it when the time comes for JuMP 2.0.
> I will definitely suggest removing it when the time comes for JuMP 2.0
My goal is not to release JuMP 2.0 if at all possible. If we did, it would be for a minimal set of breaking changes, not just to remove some things we don't like. The only candidate was the nonlinear changes, but we seem to have managed to sneak them in with minimal disruption. The JuMP developers have no plans for JuMP 2.0.
Merging because this is good enough for now. If needed, we can make tweaks in smaller PRs, instead of hashing everything out here. (There are already 56 comments!)
Cherry-picked from #3697
Preview: https://jump.dev/JuMP.jl/previews/PR3698/tutorials/getting_started/transitioning_from_matlab/
Preview: https://jump.dev/JuMP.jl/previews/PR3698/tutorials/transitioning/transitioning_from_matlab/