We welcome contributions, which can come in many forms:
If you think you found a bug in any of our packages, feel free to open an issue at the specific GitHub repo. For our packages, it should be a link like https://github.com/JuliaSmoothOptimizers/PACKAGE.jl. For our tutorials, you can open an issue here or on the specific tutorial page.
Focused suggestions and requests can also be opened as issues. Please start an issue or a discussion on the topic before opening a pull request.
If you want to ask a question that is not suited for a bug report, feel free to start a discussion here. This forum is for general discussion, so questions about any of our packages are welcome.
Use the template repo and follow the instructions there.
Check the issues for jso-docs.github.io, and open a new one if necessary.
Our packages can be divided into three non-exclusive ecosystems:
Linear Algebra: Packages for dealing with matrices and matrix-like objects, for instance for solving linear systems or linear least-squares problems.
Models: Packages that define and access optimization models.
Solvers: Packages related to our solvers and the development of other solvers.
Inside an optimization method, we frequently have to deal with matrices and linear operations. There are two main linear problems that we need to solve:
Linear systems: Find \(x \in \mathbb R^n\) such that \(Ax = b\), where \(A \in \mathbb R^{m\times n}\) and \(b \in \mathbb R^m\).
Least squares problems: Find \(x \in \mathbb R^n\) that minimizes \(\|Ax - b\|^2\), where \(A \in \mathbb R^{m\times n}\) and \(b \in \mathbb R^m\).
There is a wide variety of methods to solve these problems, usually specialized with respect to additional properties of \(A\) and \(b\). Inside JSO we implement a few of those methods and provide wrappers around some others.
Our main interest lies in large-scale problems, which are dealt with either by
Factorization-free methods: \(A\) itself is not available, but we have access to the result of the product of \(A\) by a vector \(v\). That is, we have a function \(v \to Av\). Possibly we have access to \(v \to A^T v\) and \(v \to A^*v\) as well.
Sparse factorization: \(A\) is factorized into a product of matrices. The challenge here is that, since \(A\) is sparse, the factorization algorithm has to be smart enough not to destroy the sparsity of the problem.
We'll describe in the following sections the main packages of our ecosystem, divided into these two categories.
The first thing we did for this section of the ecosystem was to define a new type for linear operators, with the package LinearOperators. The package is used internally by our models to provide access to Jacobian- and Hessian-vector products, so it is considered a main package for the organization as a whole.
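Here is a minimal sketch of the idea, using LinearOperators' constructor that wraps an existing matrix; the matrix `A` and vector `v` below are made up purely for illustration:

```julia
using LinearOperators, LinearAlgebra

A = Matrix(Diagonal(1.0:5.0))  # hypothetical matrix, for illustration only

opA = LinearOperator(A)        # wrap A as an operator; products are computed lazily

v = ones(5)
opA * v                        # same result as A * v, through the operator interface
opA' * v                       # transpose/adjoint products are also available
```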
The second main package in this part is Krylov, which defines almost 30 methods for linear systems and linear least-squares problems. Krylov implements a few known methods, but brand new methods have been developed and published as well.
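As a hedged sketch of calling Krylov on both problem classes: the matrices and right-hand sides below are random placeholders, and `cg`/`lsmr` are just two of the many available methods:

```julia
using Krylov, SparseArrays, LinearAlgebra

n = 100
A = sprandn(n, n, 0.05)
A = A * A' + I                 # hypothetical symmetric positive-definite system
b = randn(n)
x, stats = cg(A, b)            # conjugate gradient for Ax = b
stats.solved                   # true if the method converged

C = sprandn(2n, n, 0.05)       # hypothetical rectangular least-squares problem
d = randn(2n)
y, stats = lsmr(C, d)          # minimizes ‖Cx − d‖
```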
Factorization methods have been studied for a few decades, and some packages are well-known for doing it right. One of these packages is HSL, which provides methods such as MA57 and MA97. Our HSL wrapper exports both of these methods, which are the main ones used in our context.
Two drawbacks of HSL are that it is proprietary and that it can't handle element types other than 32-bit and 64-bit native floating-point numbers. LDLFactorizations implements a factorization for symmetric matrices that competes with MA57 and solves both of these problems.
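A minimal sketch, following the package's basic `ldl` interface; the small matrix below is a made-up symmetric quasi-definite example:

```julia
using LDLFactorizations, SparseArrays

A = sparse([1.0 1.0; 1.0 -2.0])  # hypothetical symmetric quasi-definite matrix
b = [2.0; -1.0]

F = ldl(A)                       # LDLᵀ factorization; works for generic element types
x = F \ b                        # solve Ax = b using the factorization
```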
In the context of nonlinear optimization, the general form of the optimization problem is
\[\begin{aligned} \min \quad & f(x) \\ & c_i(x) = 0, \quad i \in E, \\ & c_{L_i} \leq c_i(x) \leq c_{U_i}, \quad i \in I, \\ & \ell \leq x \leq u, \end{aligned}\]
where \(f:\mathbb{R}^n\rightarrow\mathbb{R}\), \(c:\mathbb{R}^n\rightarrow\mathbb{R}^m\), \(E\cup I = \{1,2,\dots,m\}\), \(E\cap I = \emptyset\), and \(c_{L_i}, c_{U_i}, \ell_j, u_j \in \mathbb{R}\cup\{\pm\infty\}\) for \(i = 1,\dots,m\) and \(j = 1,\dots,n\).
We define two interested parties in this context:
Users: People who have a problem like the one above and want to write it computationally and pass it to a solver.
Solver developers: People who are writing optimization methods and need information about the problem to provide an approximate solution.
To solve this problem we usually require \(f\) and its derivatives at a given point \(x\). However, since the function \(f\) is nonlinear, determining its derivatives is usually not trivial. This led to the creation of several ways to define and obtain the functions of an optimization problem. Furthermore, during the development of an optimization method, we want to test and compare our solver on a large variety of problems. This means that we need a collection of test problems, and these too will follow their own format.
Therefore, one of the objectives of NLPModels is to allow the creation of different models that follow the same API. This way, everybody is happy.
We can summarize the possibilities in four categories:
Manually pass everything: You usually do this when you're writing your optimization method to test it. It can also be the case when you have special information about the derivatives that can be exploited, or when you really want to squeeze out that last bit of speed.
Modeling languages: A modeling language translates a friendly description of the optimization problem into the functions and their derivatives. This has an extra cost when creating the model, usually to compute the structure of the derivatives.
Automatic differentiation: A great new way of doing things. It's easier for the user, because you can define only \(f\) manually and its derivatives are computed for you. Naturally, that also has some extra cost.
Specialized problem collections: Some packages provide a curated list of problems so you can test your optimization method. To write such a list you can follow any of the approaches above, naturally, but as a developer you want the problems readily available, so someone has already decided how they'll be available to you.
Let's describe some of the packages available.
NLPModels is the base of this ecosystem: it defines an API for all other models to implement. This is done by creating a struct derived from the abstract NLPModel type and defining how each API function behaves for that model. This means that you can define a struct specifically for your problem and describe the functions manually. This is also how we test our API and keep consistency between models, so it can be seen in action.
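To illustrate, here is a hedged sketch of a hand-written model, assuming the current parametric NLPModels API; the type `MyProblem` is invented for the example and only implements the objective:

```julia
using NLPModels

# Hypothetical model for min (x₁ − 1)² + (x₂ − 2)², defined entirely by hand.
mutable struct MyProblem <: AbstractNLPModel{Float64, Vector{Float64}}
  meta::NLPModelMeta{Float64, Vector{Float64}}
  counters::Counters
end

MyProblem() = MyProblem(NLPModelMeta(2, x0 = zeros(2)), Counters())

function NLPModels.obj(nlp::MyProblem, x::AbstractVector)
  increment!(nlp, :neval_obj)    # keep evaluation counters consistent with the API
  return (x[1] - 1)^2 + (x[2] - 2)^2
end

nlp = MyProblem()
obj(nlp, nlp.meta.x0)            # 5.0, through the same API every model implements
```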
This package is dedicated to models that modify existing models. We currently have four modifiers available. Here's a rough description, with a short usage sketch after the list:
FeasibilityFormNLS transforms a nonlinear least-squares problem by moving the residual function into the constraints. This allows a non-specialized solver to handle the objective function much better.
FeasibilityResidual uses the constraints of a model to create a nonlinear least-squares problem. This problem is sometimes called the feasibility violation minimization problem.
LSR1Model and LBFGSModel are models that approximate the Hessian by quasi-Newton operators. They use LinearOperators, so they don't return the full matrix, only operators.
SlackModel transforms inequalities into equalities by adding slack variables to these constraints.
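For instance, a hedged sketch of SlackModel; the small constrained model below is built with ADNLPModels purely for illustration:

```julia
using ADNLPModels, NLPModelsModifiers

# Hypothetical problem: min (x₁ − 1)² subject to x₁ + x₂ ≤ 1.
nlp = ADNLPModel(x -> (x[1] - 1)^2, zeros(2),
                 x -> [x[1] + x[2]], [-Inf], [1.0])

snlp = SlackModel(nlp)  # the inequality becomes an equality with a slack variable
snlp.meta.nvar          # 3: the original two variables plus one slack
```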
This package is only for developers, since it is used to test models. The rationale here is that many of the tests we want to perform used to be copy-pasted from one package to another. With this package, we can add it only as a test dependency and use the prepared functions instead of copying a lot of code. Furthermore, it's easier to keep track of everything.
This package defines models using automatic differentiation. These are probably some of the most useful models, since you can very quickly define a problem.
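A minimal sketch with the classic Rosenbrock function: `ADNLPModel` only needs \(f\) and a starting point, and derivatives come through the usual API:

```julia
using ADNLPModels, NLPModels

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])

x0 = nlp.meta.x0
obj(nlp, x0)            # objective value
grad(nlp, x0)           # gradient, computed by automatic differentiation
```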
As the name implies, for quadratic models.
For linear least-squares problems.
In this section we have models wrapping two main modeling languages. JuMP is an open source modeling language written in Julia. AMPL is an external modeling language that is well-known for its efficiency.
CUTEst is the latest iteration of the Constrained and Unconstrained Testing Environment. It is a package written in Fortran, and we provide a wrapper for it.
It contains around 1500 problems for general nonlinear optimization and has been in use since at least 1995, when the paper describing the first version was released. The problems are written in a specialized format that can almost be considered a modeling language, although it is not friendly to the user. On the other hand, the derivatives are obtained very efficiently.
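A minimal sketch of decoding and using a CUTEst problem; ROSENBR is one of the classic problems in the collection:

```julia
using CUTEst, NLPModels

nlp = CUTEstModel("ROSENBR")   # decode a SIF problem into an NLPModel
obj(nlp, nlp.meta.x0)          # same API as any other model
grad(nlp, nlp.meta.x0)
finalize(nlp)                  # free the decoded problem before opening another
```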
This package provides a list of problems implemented in JuMP.
This package provides a list of nonlinear least-squares problems implemented using NLPModelsJuMP.
The solver ecosystem in JSO is in constant development. We are looking into unifying the solver structure to make solvers, subsolvers, and tools plug-and-play.
Within JSO, our solver development has focused on three types of packages:
New methods, usually accompanied by research papers;
our implementations of (well- or not-so-well-)known methods;
wrappers around noteworthy solvers implemented in other languages.
Below is the list of solvers and the types of problems that they are designed to solve.
Solver | Package | Description | Types of problem |
---|---|---|---|
cannoles | CaNNOLeS.jl | Regularization method for nonlinear least squares | Equality-constrained NLS |
dci | DCISolver.jl | Trust-cylinder similar to two-step SQP | Equality-constrained NLP |
lbfgs | JSOSolvers.jl | Factorization-free linesearch limited-memory inverse BFGS | Unconstrained NLP |
percival | Percival.jl | Factorization-free augmented Lagrangian | Generally-constrained NLP |
ripqp | RipQP.jl | Regularized Interior-Point Quadratic Programming | Convex QP (incl. LP) |
tron | JSOSolvers.jl | Factorization-free trust-region second-order or quasi-Newton method | Bound-constrained NLP |
tronls | JSOSolvers.jl | Least-squares version of tron | Unconstrained NLS |
trunk | JSOSolvers.jl | Factorization-free trust-region second-order or quasi-Newton method | Unconstrained NLP |
trunkls | JSOSolvers.jl | Least-squares version of trunk | Unconstrained NLS |
Our optimization problems can be defined as
\[\text{minimize} \quad f(x) \quad \text{subject to} \quad x \in \Omega.\]
Our constraint types and their meanings are:
Unconstrained: \(\Omega = \mathbb R^n\).
Bound constrained: \(\Omega = \{x \in \mathbb R^n |\ \ell \leq x \leq u\}\).
Equality constrained: \(\Omega = \{x \in \mathbb R^n |\ c(x) = 0\}\).
Generally constrained: \(\Omega = \{x \in \mathbb R^n |\ c_L \leq c(x) \leq c_U \}\), where \(c_{L_i} = c_{U_i}\) is a valid possibility, as are \(c_{L_i} = -\infty\) and \(c_{U_i} = \infty\), but not at the same time.
In addition to differences in the constraints, we also have different objective types:
NLP: General objective. The objective \(f(x)\) of these problems has no special structure to exploit. Access to its derivatives occurs through the default NLPModels.jl API.
NLS: Nonlinear Least-Squares problems. These problems are defined by \(f(x) = \tfrac{1}{2}\Vert F(x)\Vert^2\), with \(F\) and its derivatives available through the API for NLS models, part of NLPModels.jl.
QP: Quadratic Programming. These problems are defined by \(f(x) = \tfrac{1}{2}x^T Q x + g^T x + c\), with the API defined by QuadraticModels.jl.
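As a hedged end-to-end sketch, here is one way to solve an unconstrained NLP with trunk from the table above; the model is a made-up example built with ADNLPModels:

```julia
using ADNLPModels, JSOSolvers

nlp = ADNLPModel(x -> (x[1] - 1)^2 + 4 * (x[2] - x[1]^2)^2, [-1.2; 1.0])

stats = trunk(nlp)      # factorization-free trust-region method
stats.status            # :first_order when a stationary point is found
stats.solution          # approximate minimizer
```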
Julia Smooth Optimizers is an organization on GitHub containing a collection of Julia packages for Nonlinear Optimization software development, testing, and benchmarking.
We provide tools for building models, access to repositories of problems, subproblem solving, linear algebra, and problem solving. This site will serve as the repository of information about JSO and its packages.
Check our tutorials and guides
We are centralizing all JSO tutorials on this page. You can find the full list here.
Our latest tutorial is
The Julia Smooth Optimizers organization has moved to this new website, where we will aggregate all our tutorials and news.
We hope this website helps optimization experts and beginners alike find their way through optimization in Julia.
```julia
stats = dci(nlp)
```
Bielschowsky, R. H., & Gomes, F. A. Dynamic control of infeasibility in equality constrained optimization. SIAM Journal on Optimization, 19(3), 1299-1325 (2008). 10.1137/070679557
Migot, T., Orban, D., & Siqueira, A. S. DCISolver.jl: A Julia Solver for Nonlinear Optimization using Dynamic Control of Infeasibility. Journal of Open Source Software, 7(70), 3991 (2022). 10.21105/joss.03991
We organized a stream of sessions on "Numerical optimization and linear algebra with Julia" at Optimization Days / Journées de l'optimisation 2022, held at HEC Montréal, May 16-18, 2022. The conference, renowned for its optimization expertise and wine & cheese party, was held in person for the first time since 2019! The stream featured 16 talks, including 6 on JSO-related research. The program of the conference is available here, and the Julia sessions:
Iterative Solution of Symmetric Quasi-Definite Linear Systems, D. Orban, M. Arioli, SIAM, 2017., 10.1137/1.9781611974737
DCISolver.jl: A Julia Solver for Nonlinear Optimization using Dynamic Control of Infeasibility, T. Migot, D. Orban, A. S. Siqueira, Journal of Open Source Software, 7(70), 3991, 2022., 10.21105/joss.03991
TriCG and TriMR: Two Iterative Methods for Symmetric Quasi-definite Systems, A. Montoison, D. Orban, SIAM Journal on Scientific Computing, 43(4), A2502–A2525, 2021., 10.1137/20m1363030
Exact linesearch limited-memory quasi-Newton methods for minimizing a quadratic function, D. Ek, A. Forsgren, Computational Optimization and Applications, 79(3), 789–816, 2021., 10.1007/s10589-021-00277-4
Design and implementation of a modular interior-point solver for linear optimization, M. Tanneau, M. F. Anjos, A. Lodi, Mathematical Programming Computation, 13(3), 509–551, 2021., 10.1007/s12532-020-00200-8
A Regularization Method for Constrained Nonlinear Least Squares, D. Orban, A. S. Siqueira, Computational Optimization and Applications, 76, 961–989, 2020., 10.1007/s10589-020-00201-2
BiLQ: An Iterative Method for Nonsymmetric Linear Systems with a Quasi-Minimum Error Property, A. Montoison, D. Orban, SIAM Journal on Matrix Analysis and Applications, 41(3), 1145–1166, 2020., 10.1137/19m1290991
LNLQ: An Iterative Method for Least-Norm Problems with an Error Minimization Property, R. Estrin, D. Orban, M. A. Saunders, SIAM Journal on Matrix Analysis, 40(3), 1102–1124, 2019., 10.1137/18M1194948
A Tridiagonalization Method for Symmetric Saddle-Point Systems, A. Buttari, D. Orban, D. Ruiz, D. Titley-Peloquin, SIAM Journal on Scientific Computing, 41(5), S409–S432, 2019., 10.1137/18M1194900
The Conjugate Residual Method in Linesearch and Trust-Region Methods, M.-A. Dahito, D. Orban, SIAM Journal on Optimization, 29(3), 1988–2025, 2019., 10.1137/18M1204255
LSLQ: An Iterative Method for Linear Least-Squares with an Error Minimization Property, R. Estrin, D. Orban, M. A. Saunders, SIAM Journal on Matrix Analysis, 40(1), 254–275, 2019., 10.1137/17M1113552
A Unified Efficient Implementation of Trust-region Type Algorithms for Unconstrained Optimization, J.-P. Dussault, INFOR: Information Systems and Operational Research, 58(2), 290–309, 2019., 10.1080/03155986.2019.1624490
A roadmap for JuliaSmoothOptimizers and its NLPModels API, Tangi Migot and Alexis Montoison, JuMP nonlinear developers call (online)., 2022-Feb-15
Krylov.jl : une bibliothèque Julia pour l’algèbre linéaire numérique, Alexis Montoison, IVADO Digital October (online)., 2021-Nov-16
Large scale optimization solvers in Julia for data science, Tangi Migot, IVADO Digital October (online)., 2021-Nov-15
Otimização na Linguagem Julia, Abel S. Siqueira, Palestra no Congresso Internacional de Biomassa - 4° Expo Biomassa - Curitiba/PR, Brazil., 2019-Jun-26
JuliaSmoothOptimizers, Abel S. Siqueira, Talk at PPGM - UFPR. Curitiba/PR, Brazil., 2019-May-31
JuliaSmoothOptimizers, Abel S. Siqueira, Tutorial: Modeling and optimization tools in Julia: An introduction to JuMP and JSO. GERAD. Montreal/QC, Canada, 2019-Feb-7
A Regularized Interior-Point Method for Constrained Nonlinear Least Squares, Abel S. Siqueira and Dominique Orban, XII Brazilian Workshop on Continuous Optimization. Iguaçu Falls, Brazil., 2018-Jul-23
Developing new optimization methods with packages from the JuliaSmoothOptimizers organization, Abel S. Siqueira and Dominique Orban, Second Annual JuMP-dev Workshop. Bordeaux, France, 2018-Jun-28
Iterative Methods with an Error Minimization Property, Dominique Orban, Ron Estrin and Michael A. Saunders, LA/Opt Seminar, ICME, Stanford University, 2018-May-31
Otimização Não-Linear na Linguagem Julia, Abel S. Siqueira, Seminários de Análise Convexa e Otimização. UFSC, Florianópolis/SC, Brasil, 2017-Dec-1
A Workflow for Designing Optimization Methods in the Julia Language, Abel S. Siqueira and Dominique Orban, 2016 Optimization Days. Montréal, Canada, 2016-May-4
Algèbre linéaire numérique appliquée, Dominique Orban, Polytechnique Montréal, 2021-Jan-1
Otimização Não Linear em Julia, Abel S. Siqueira, YouTube, 2020-Sep-2
Méthodes d'optimisation et contrôle optimal, Dominique Orban, Polytechnique Montréal, 2019-Jan-1
Otimização I, Abel S. Siqueira, UFPR - Federal University of Paraná, 2018-Jun-19
Advanced Topics in Scientific Computing with Julia, Brad Nelson, Stanford University, 2018-Mar-3
This is a curated list of tutorials.
This is another list of tutorials, from outside sources.
Abel Siqueira's YouTube playlist on JSO Tutorials, Abel Soares Siqueira, 08 April 2020
NLPModels.jl and CUTEst.jl: Constrained Optimization, Abel Soares Siqueira, 17 February 2017
NLPModels.jl, CUTEst.jl and other Nonlinear Optimization Packages on Julia, Abel Soares Siqueira, 07 February 2017