Merge pull request #170 from PyLops/dev
Release v0.8.0
mrava87 authored Mar 11, 2024
2 parents 435403b + 3be6267 commit 8a4fc96
Showing 22 changed files with 534 additions and 93 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/build.yaml
@@ -7,7 +7,7 @@ jobs:
strategy:
matrix:
platform: [ ubuntu-latest, macos-latest ]
python-version: ["3.8", "3.9", "3.10"]
python-version: ["3.8", "3.9", "3.10", "3.11"]

runs-on: ${{ matrix.platform }}
steps:
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,11 @@
+# 0.8.0
+
+* Added ``pyproximal.projection.L01BallProj`` and ``pyproximal.proximal.L01Ball`` operators
+* Added ``eta`` to ``pyproximal.optimization.primal.ProximalGradient``
+* Added ``eta`` and ``weights`` to ``pyproximal.optimization.primal.GeneralizedProximalGradient``
+* Allow ``epsg`` in ``pyproximal.optimization.primal.ProximalGradient`` to be iteration-dependent
+* Switched from ``lsqr`` to ``cg`` in ``pyproximal.projection.AffineSetProj``
+
# 0.7.0

* Added ``pyproximal.proximal.RelaxedMumfordShah`` operator
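To make the new entries concrete, here is a minimal usage sketch of the two headline additions (``eta`` and an iteration-dependent ``epsg``), based on the ``ProximalGradient`` signature visible in the ``pyproximal/optimization/primal.py`` diff below. The operator, data, and parameter values are illustrative assumptions, not part of this commit:

```python
import numpy as np
import pylops
import pyproximal
from pyproximal.optimization.primal import ProximalGradient

np.random.seed(42)

# Illustrative problem: min_x 0.5 * ||x - b||_2^2 + epsg * ||x||_1
n = 100
b = np.random.randn(n)
proxf = pyproximal.L2(Op=pylops.Identity(n), b=b)  # smooth term (grad available)
proxg = pyproximal.L1(sigma=1.)                    # non-smooth term

# New in v0.8.0: eta relaxes the proximal update, and epsg may be an array
# of length niter so that the scaling of g changes from iteration to iteration
niter = 40
xinv = ProximalGradient(proxf, proxg, x0=np.zeros(n),
                        tau=1.,                            # 1/L for this f
                        eta=0.9,                           # relaxed update
                        epsg=np.linspace(1., 0.1, niter),  # iteration-dependent
                        niter=niter)
```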
12 changes: 4 additions & 8 deletions README.md
@@ -82,10 +82,10 @@ operators and/or algorithms, which present some clear overlap with this project.
A (possibly not exhaustive) list of other projects is:

* http://proximity-operator.net
-* https://github.com/ganguli-lab/proxalgs/blob/master/proxalgs/operators.py
+* https://github.com/ganguli-lab/proxalgs
* https://github.com/pmelchior/proxmin
* https://github.com/comp-imaging/ProxImaL
-* https://github.com/matthieumeo/pycsou
+* https://github.com/pyxu-org/pyxu

All of these projects are self-contained, meaning that they implement both proximal
and linear operators as needed to solve a variety of problems in different areas
@@ -115,10 +115,6 @@ You need **Python 3.8 or greater**.
*Note: Versions prior to v0.3.0 work also with Python 3.6 or greater, however they
require scipy version to be lower than v1.8.0.*

-#### From PyPi
-you want to use PyProximal within your codes,
-install it in your Python environment by typing the following command in your terminal:
-
To get the most out of PyProximal straight out of the box, we recommend `conda` to install PyProximal:
```bash
conda install -c conda-forge pyproximal
@@ -127,7 +123,7 @@ conda install -c conda-forge pyproximal
#### From PyPi
You can also install pyproximal with `pip`:
```bash
-pip install pylops
+pip install pyproximal
```

#### From Github
@@ -193,4 +189,4 @@ you are required to rebuild the entire documentation before your changes will be
* Matteo Ravasi, mrava87
* Nick Luiken, NickLuiken
* Eneko Uruñuela, eurunuela
-* Marcus Valtonen Örnhag, marcusvaltonen
+* Marcus Valtonen Örnhag, marcusvaltonen
2 changes: 2 additions & 0 deletions docs/source/api/index.rst
@@ -24,6 +24,7 @@ Orthogonal projections
HyperPlaneBoxProj
IntersectionProj
L0BallProj
+L01BallProj
L1BallProj
NuclearBallProj
SimplexProj
@@ -68,6 +69,7 @@ Convex
Intersection
L0
L0Ball
+L01Ball
L1
L1Ball
L2
15 changes: 13 additions & 2 deletions docs/source/changelog.rst
@@ -3,11 +3,22 @@
Changelog
=========

+Version 0.8.0
+--------------
+*Released on: 11/03/2024*
+
+* Added :py:class:`pyproximal.projection.L01BallProj` and :py:class:`pyproximal.proximal.L01Ball` operators
+* Added ``eta`` to :py:func:`pyproximal.optimization.primal.ProximalGradient`
+* Added ``eta`` and ``weights`` to :py:func:`pyproximal.optimization.primal.GeneralizedProximalGradient`
+* Allow ``epsg`` in :py:func:`pyproximal.optimization.primal.ProximalGradient` to be iteration-dependent
+* Switched from ``lsqr`` to ``cg`` in :py:func:`pyproximal.projection.AffineSetProj`
+
+
Version 0.7.0
--------------
*Released on: 10/11/2023*

-* Added :py:class:`pyproximal.proximal.RelaxedMumfordShah`` operator
+* Added :py:class:`pyproximal.proximal.RelaxedMumfordShah` operator
* Added cuda version to the proximal operator of :py:class:`pyproximal.proximal.Simplex`
* Added bilinear update to :py:func:`pyproximal.optimization.primal.ProximalGradient`
* Modified :py:func:`pyproximal.optimization.pnp.PlugAndPlay` function signature to allow using any proximal solver of choice
@@ -34,7 +45,7 @@ Version 0.5.0
|:vertical_traffic_light:| |:vertical_traffic_light:|

* Added :py:class:`pyproximal.proximal.Log1` operator
-* Allow ``radius`` parameter of :py:func:`pyproximal.optimization.primal.L0` to be a function
+* Allow ``radius`` parameter of :py:func:`pyproximal.proximal.L0` to be a function
* Allow ``tau`` parameter of :py:func:`pyproximal.optimization.primal.HQS` to be a vector
and change over iterations
* Added ``z0`` to :py:func:`pyproximal.optimization.primal.HQS`
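The last 0.8.0 bullet above swaps the inner solver of the affine-set projection. A short sketch of what this projection does in practice; the operator and sizes are illustrative, and ``AffineSetProj`` is assumed to follow its documented ``(Op, b, niter)`` construction:

```python
import numpy as np
import pylops
from pyproximal.projection import AffineSetProj

np.random.seed(0)

# Projection of an arbitrary point onto the affine set {x : Ax = b}
m, n = 4, 10
A = pylops.MatrixMult(np.random.randn(m, n))
b = np.random.randn(m)

proj = AffineSetProj(A, b, niter=20)  # inner system now solved with cg
xp = proj(np.random.randn(n))
print(np.allclose(A @ xp, b))  # constraint satisfied up to solver tolerance
```

``cg`` is a natural fit here because the inner system involves :math:`\mathbf{Op}\,\mathbf{Op}^H`, which is symmetric positive semi-definite, whereas ``lsqr`` is the more general (and typically more expensive) choice.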
4 changes: 2 additions & 2 deletions docs/source/index.rst
@@ -76,10 +76,10 @@ operators and/or algorithms which present some clear overlap with this project.
A (possibly not exhaustive) list of other projects is:

* http://proximity-operator.net
-* https://github.com/ganguli-lab/proxalgs/blob/master/proxalgs/operators.py
+* https://github.com/ganguli-lab/proxalgs
* https://github.com/pmelchior/proxmin
* https://github.com/comp-imaging/ProxImaL
-* https://github.com/matthieumeo/pycsou
+* https://github.com/matthieumeo/pyxu

All of these projects are self-contained, meaning that they implement both proximal
and linear operators as needed to solve a variety of problems in different areas
2 changes: 1 addition & 1 deletion docs/source/installation.rst
@@ -61,7 +61,7 @@ or just clone the repository

.. code-block:: bash
->> git clone https://github.com/mrava87/pyproximal.git
+>> git clone https://github.com/PyLops/pyproximal.git
or download the zip file from the repository (green button in the top right corner of the
main github repo page) and install PyProximal from terminal using the command:
108 changes: 68 additions & 40 deletions pyproximal/optimization/primal.py
@@ -102,8 +102,9 @@ def ProximalPoint(prox, x0, tau, niter=10, callback=None, show=False):
return x


-def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
-epsg=1., niter=10, niterback=100,
+def ProximalGradient(proxf, proxg, x0, epsg=1.,
+tau=None, beta=0.5, eta=1.,
+niter=10, niterback=100,
acceleration=None,
callback=None, show=False):
r"""Proximal gradient (optionally accelerated)
@@ -127,17 +128,19 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
Proximal operator of g function
x0 : :obj:`numpy.ndarray`
Initial vector
+epsg : :obj:`float` or :obj:`np.ndarray`, optional
+Scaling factor of g function
tau : :obj:`float` or :obj:`numpy.ndarray`, optional
Positive scalar weight, which should satisfy the following condition
to guarantee convergence: :math:`\tau \in (0, 1/L]` where ``L`` is
the Lipschitz constant of :math:`\nabla f`. When ``tau=None``,
backtracking is used to adaptively estimate the best tau at each
-iteration. Finally note that :math:`\tau` can be chosen to be a vector
+iteration. Finally, note that :math:`\tau` can be chosen to be a vector
when dealing with problems with multiple right-hand-sides
beta : :obj:`float`, optional
Backtracking parameter (must be between 0 and 1)
-epsg : :obj:`float` or :obj:`np.ndarray`, optional
-Scaling factor of g function
+eta : :obj:`float`, optional
+Relaxation parameter (must be between 0 and 1, 0 excluded).
niter : :obj:`int`, optional
Number of iterations of iterative scheme
niterback : :obj:`int`, optional
@@ -161,9 +164,8 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
.. math::
-\mathbf{x}^{k+1} = \prox_{\tau^k \epsilon g}(\mathbf{y}^{k+1} -
-\tau^k \nabla f(\mathbf{y}^{k+1})) \\
+\mathbf{x}^{k+1} = \mathbf{y}^k + \eta (\prox_{\tau^k \epsilon g}(\mathbf{y}^k -
+\tau^k \nabla f(\mathbf{y}^k)) - \mathbf{y}^k) \\
\mathbf{y}^{k+1} = \mathbf{x}^k + \omega^k
(\mathbf{x}^k - \mathbf{x}^{k-1})
@@ -187,7 +189,7 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
Different accelerations are provided:
- ``acceleration=None``: :math:`\omega^k = 0`;
-- `acceleration=vandenberghe`` [1]_: :math:`\omega^k = k / (k + 3)` for `
+- ``acceleration=vandenberghe`` [1]_: :math:`\omega^k = k / (k + 3)` for `
- ``acceleration=fista``: :math:`\omega^k = (t_{k-1}-1)/t_k` for where
:math:`t_k = (1 + \sqrt{1+4t_{k-1}^{2}}) / 2` [2]_
@@ -197,9 +199,10 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
Imaging Sciences, vol. 2, pp. 183-202. 2009.
"""
-# check if epgs is a ve
+# check if epsg is a vector
if np.asarray(epsg).size == 1.:
-    epsg_print = str(epsg)
+    epsg = epsg * np.ones(niter)
+    epsg_print = str(epsg[0])
else:
epsg_print = 'Multi'

@@ -218,7 +221,7 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
'niterback = %d\tacceleration = %s\n' % (type(proxf), type(proxg),
'Adaptive' if tau is None else str(tau), beta,
epsg_print, niter, niterback, acceleration))
-head = ' Itn x[0] f g J=f+eps*g'
+head = ' Itn x[0] f g J=f+eps*g tau'
print(head)

backtracking = False
@@ -237,10 +240,15 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,

# proximal step
if not backtracking:
-x = proxg.prox(y - tau * proxf.grad(y), epsg * tau)
+if eta == 1.:
+    x = proxg.prox(y - tau * proxf.grad(y), epsg[iiter] * tau)
+else:
+    x = x + eta * (proxg.prox(x - tau * proxf.grad(x), epsg[iiter] * tau) - x)
else:
-x, tau = _backtracking(y, tau, proxf, proxg, epsg,
+x, tau = _backtracking(y, tau, proxf, proxg, epsg[iiter],
beta=beta, niterback=niterback)
+if eta != 1.:
+    x = x + eta * (proxg.prox(x - tau * proxf.grad(x), epsg[iiter] * tau) - x)

# update internal parameters for bilinear operator
if isinstance(proxf, BilinearOperator):
@@ -264,10 +272,11 @@ def ProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
if show:
if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
pf, pg = proxf(x), proxg(x)
-msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
+msg = '%6g %12.5e %10.3e %10.3e %10.3e %10.3e' % \
(iiter + 1, np.real(to_numpy(x[0])) if x.ndim == 1 else np.real(to_numpy(x[0, 0])),
-pf, pg[0] if epsg_print == 'Multi' else pg,
-pf + np.sum(epsg * pg))
+pf, pg,
+pf + np.sum(epsg[iiter] * pg),
+tau)
print(msg)
if show:
print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -296,8 +305,9 @@ def AcceleratedProximalGradient(proxf, proxg, x0, tau=None, beta=0.5,
callback=callback, show=show)


-def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
-epsg=1., niter=10,
+def GeneralizedProximalGradient(proxfs, proxgs, x0, tau,
+epsg=1., weights=None,
+eta=1., niter=10,
acceleration=None,
callback=None, show=False):
r"""Generalized Proximal gradient
@@ -316,24 +326,27 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
Parameters
----------
-proxfs : :obj:`List of pyproximal.ProxOperator`
+proxfs : :obj:`list of pyproximal.ProxOperator`
Proximal operators of the :math:`f_i` functions (must have ``grad`` implemented)
-proxgs : :obj:`List of pyproximal.ProxOperator`
+proxgs : :obj:`list of pyproximal.ProxOperator`
Proximal operators of the :math:`g_j` functions
x0 : :obj:`numpy.ndarray`
Initial vector
-tau : :obj:`float` or :obj:`numpy.ndarray`, optional
+tau : :obj:`float`
Positive scalar weight, which should satisfy the following condition
to guarantee convergence: :math:`\tau \in (0, 1/L]` where ``L`` is
-the Lipschitz constant of :math:`\sum_{i=1}^n \nabla f_i`. When ``tau=None``,
-backtracking is used to adaptively estimate the best tau at each
-iteration.
+the Lipschitz constant of :math:`\sum_{i=1}^n \nabla f_i`.
epsg : :obj:`float` or :obj:`np.ndarray`, optional
Scaling factor(s) of ``g`` function(s)
+weights : :obj:`np.ndarray`, optional
+Weighting factors of ``g`` functions. Must sum to 1.
+eta : :obj:`float`, optional
+Relaxation parameter (must be between 0 and 1, 0 excluded). Note that
+this will only be used when ``acceleration=None``.
niter : :obj:`int`, optional
Number of iterations of iterative scheme
acceleration: :obj:`str`, optional
-Acceleration (``vandenberghe`` or ``fista``)
+Acceleration (``None``, ``vandenberghe`` or ``fista``)
callback : :obj:`callable`, optional
Function with signature (``callback(x)``) to call after each iteration
where ``x`` is the current model vector
@@ -352,16 +365,27 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
.. math::
\text{for } j=1,\cdots,n, \\
-~~~~\mathbf z_j^{k+1} = \mathbf z_j^{k} + \epsilon_j
-\left[prox_{\frac{\tau^k}{\omega_j} g_j}\left(2 \mathbf{x}^{k} - \mathbf{z}_j^{k}
+~~~~\mathbf z_j^{k+1} = \mathbf z_j^{k} + \eta
+\left[prox_{\frac{\tau^k \epsilon_j}{w_j} g_j}\left(2 \mathbf{x}^{k} - \mathbf{z}_j^{k}
- \tau^k \sum_{i=1}^n \nabla f_i(\mathbf{x}^{k})\right) - \mathbf{x}^{k} \right] \\
-\mathbf{x}^{k+1} = \sum_{j=1}^n \omega_j f_j \\
-where :math:`\sum_{j=1}^n \omega_j=1`. In the current implementation :math:`\omega_j=1/n`.
+\mathbf{x}^{k+1} = \sum_{j=1}^n w_j \mathbf z_j^{k+1} \\
+where :math:`\sum_{j=1}^n w_j=1`. In the current implementation, :math:`w_j=1/n` when
+not provided.
"""
+# check if weights sum to 1
+if weights is None:
+    weights = np.ones(len(proxgs)) / len(proxgs)
+if len(weights) != len(proxgs) or np.sum(weights) != 1.:
+    raise ValueError(f'weights={weights} must be an array of size {len(proxgs)} '
+                     f'summing to 1')
+print(weights)

# check if epsg is a vector
if np.asarray(epsg).size == 1.:
    epsg_print = str(epsg)
+    epsg = epsg * np.ones(len(proxgs))
else:
    epsg_print = 'Multi'

@@ -403,9 +427,9 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
x = np.zeros_like(x)
for i, proxg in enumerate(proxgs):
ztmp = 2 * y - zs[i] - tau * grad
-ztmp = proxg.prox(ztmp, tau * len(proxgs))
-zs[i] += epsg * (ztmp - y)
-x += zs[i] / len(proxgs)
+ztmp = proxg.prox(ztmp, tau * epsg[i] / weights[i])
+zs[i] += eta * (ztmp - y)
+x += weights[i] * zs[i]

# update y
if acceleration == 'vandenberghe':
Expand All @@ -416,7 +440,6 @@ def GeneralizedProximalGradient(proxfs, proxgs, x0, tau=None,
omega = ((told - 1.) / t)
else:
omega = 0

y = x + omega * (x - xold)

# run callback
@@ -558,7 +581,8 @@ def HQS(proxf, proxg, x0, tau, niter=10, z0=None, gfirst=True,
if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
pf, pg = proxf(x), proxg(x)
msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-(iiter + 1, x[0], pf, pg, pf + pg)
+(iiter + 1, np.real(to_numpy(x[0])),
+pf, pg, pf + pg)
print(msg)
if show:
print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -683,7 +707,8 @@ def ADMM(proxf, proxg, x0, tau, niter=10, gfirst=False,
if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
pf, pg = proxf(x), proxg(x)
msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-(iiter + 1, x[0], pf, pg, pf + pg)
+(iiter + 1, np.real(to_numpy(x[0])),
+pf, pg, pf + pg)
print(msg)
if show:
print('\nTotal time (s) = %.2f' % (time.time() - tstart))
Expand Down Expand Up @@ -784,7 +809,8 @@ def ADMML2(proxg, Op, b, A, x0, tau, niter=10, callback=None, show=False, **kwar
if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
pf, pg = 0.5 * np.linalg.norm(Op @ x - b) ** 2, proxg(Ax)
msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-(iiter + 1, x[0], pf, pg, pf + pg)
+(iiter + 1, np.real(to_numpy(x[0])),
+pf, pg, pf + pg)
print(msg)
if show:
print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -889,7 +915,8 @@ def LinearizedADMM(proxf, proxg, A, x0, tau, mu, niter=10,
if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
pf, pg = proxf(x), proxg(Ax)
msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-(iiter + 1, x[0], pf, pg, pf + pg)
+(iiter + 1, np.real(to_numpy(x[0])),
+pf, pg, pf + pg)
print(msg)
if show:
print('\nTotal time (s) = %.2f' % (time.time() - tstart))
@@ -1037,7 +1064,8 @@ def TwIST(proxg, A, b, x0, alpha=None, beta=None, eigs=None, niter=10,
if iiter < 10 or niter - iiter < 10 or iiter % (niter // 10) == 0:
pf, pg = proxf(x), proxg(x)
msg = '%6g %12.5e %10.3e %10.3e %10.3e' % \
-(iiter + 1, np.real(to_numpy(x[0])), pf, pg, pf + pg)
+(iiter + 1, np.real(to_numpy(x[0])),
+pf, pg, pf + pg)
print(msg)
if show:
print('\nTotal time (s) = %.2f' % (time.time() - tstart))
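For orientation, a hedged usage sketch of ``GeneralizedProximalGradient`` with the new ``weights`` and ``eta`` arguments, mirroring the updated docstring above; the operators, weights, and step size are illustrative assumptions rather than part of this commit:

```python
import numpy as np
import pylops
import pyproximal
from pyproximal.optimization.primal import GeneralizedProximalGradient

np.random.seed(0)

# Illustrative problem: min_x 0.5 * ||x - b||_2^2 + ||x||_1 + indicator of box [-1, 1]
n = 100
b = np.random.randn(n)
proxfs = [pyproximal.L2(Op=pylops.Identity(n), b=b)]  # smooth terms (with grad)
proxgs = [pyproximal.L1(sigma=1.),                    # non-smooth terms
          pyproximal.Box(lower=-1., upper=1.)]

# tau must lie in (0, 1/L]; L = 1 for the identity-based f above.
# weights must sum exactly to 1; eta relaxes the z_j updates and is only
# used when acceleration=None
xinv = GeneralizedProximalGradient(proxfs, proxgs, x0=np.zeros(n), tau=1.,
                                   weights=np.array([0.75, 0.25]),
                                   eta=0.9, niter=50)
```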