# Optimization Methods

## Overview
### Non-Negative Least Squares

[CB94]

### Pseudo Projected Gradient Descent

[MH12]
### Projected Gradient Descent

Like Pseudo-PGD, but performs an orthogonal projection onto the unit simplex [Con16].
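The two variants differ only in their projection step. The sketch below contrasts them: `project_simplex` is the standard sort-based Euclidean projection onto the unit simplex analyzed in [Con16], while `pseudo_project` shows one common pseudo-projection (clip negatives, then renormalize); the exact pseudo-projection used here is defined in [MH12] and may differ from this assumed form.

```python
import numpy as np

def pseudo_project(v):
    """Assumed pseudo-projection: clip negative entries, then renormalize to sum to 1."""
    w = np.clip(v, 0.0, None)
    return w / w.sum()

def project_simplex(v):
    """Euclidean (orthogonal) projection onto the unit simplex, sort-based algorithm [Con16]."""
    u = np.sort(v)[::-1]                         # sort entries in decreasing order
    css = np.cumsum(u) - 1.0                     # cumulative sums minus the simplex radius
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]   # last index with a positive shifted component
    theta = css[rho] / (rho + 1.0)               # optimal shift
    return np.maximum(v - theta, 0.0)
```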
### Automatic Gradient Descent

## Parameters

### NumPy backend
Non-Negative Least Squares (`nnls`):

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `max_iter_optimizer` | int | 100 | The maximum number of iterations in the NNLS optimization, passed to `scipy.optimize.nnls`. |
| `const` | float | 100.0 | The penalization constant added in the NNLS optimization to enforce the convex-combination (sum-to-one) constraint. |
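For illustration, a minimal sketch of how these two parameters could work together, assuming the constraint is enforced by appending a penalization row weighted by `const` to the least-squares system; the helper name `nnls_simplex` is hypothetical, and only `scipy.optimize.nnls` and its `maxiter` argument come from the table above:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_simplex(A, b, const=100.0, max_iter_optimizer=100):
    """Sketch: NNLS with a soft sum-to-one constraint via a penalization row (assumed mechanism).

    A row of `const` is appended to A with target value `const` in b, so any
    solution whose entries sum far from 1 incurs a large extra residual.
    """
    A_aug = np.vstack([A, const * np.ones(A.shape[1])])
    b_aug = np.append(b, const)
    x, _ = nnls(A_aug, b_aug, maxiter=max_iter_optimizer)
    return x
```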
(Pseudo) Projected Gradient Descent (`pgd` and `pseudo_pgd`):

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `max_iter_optimizer` | int | 10 | The maximum number of iterations for optimizing the learning rate. |
| `beta` | float | 0.5 | The decay factor for the learning rate. |
| `step_size` | float | 1.0 | The initial learning rate at the beginning of optimization. |
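Taken together, these parameters describe a backtracking line search: start at `step_size`, multiply by `beta` on failure, and stop after `max_iter_optimizer` trials. A minimal sketch under that assumption (the acceptance rule, a simple loss decrease, is a guess; an Armijo-type condition is equally plausible):

```python
def pgd_step(x, grad, loss, project, step_size=1.0, beta=0.5, max_iter_optimizer=10):
    """One projected-gradient step with a backtracking line search (sketch)."""
    lr = step_size
    f0 = loss(x)
    for _ in range(max_iter_optimizer):
        x_new = project(x - lr * grad)
        if loss(x_new) < f0:           # accept the first step that improves the loss
            return x_new
        lr *= beta                     # decay the learning rate and retry
    return project(x - lr * grad)      # fall back to the last trial step
```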
### JAX backend
Automatic Gradient Descent (`autogd`):

| Parameter | Type | Description |
| --- | --- | --- |
| `optimizer` | str or `optax.GradientTransformation` | The optimization method to use. See the available optimizers. |
| `optimizer_kwargs` | dict | The keyword arguments used to initialize the optimization method. |
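As a reference point, the snippet below shows standard `optax` usage for a gradient transformation built from `optimizer_kwargs` on a toy loss; how `autogd` wires this into the model internally is an assumption, only the `optax` calls themselves are standard API:

```python
import jax
import jax.numpy as jnp
import optax

optimizer_kwargs = {"learning_rate": 1e-2}
optimizer = optax.adam(**optimizer_kwargs)   # build the gradient transformation

params = jnp.zeros(5)                        # toy parameter vector
opt_state = optimizer.init(params)

def loss_fn(p):
    return jnp.sum((p - 1.0) ** 2)           # toy quadratic loss

for _ in range(100):
    grads = jax.grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
```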
### Torch backend
Automatic Gradient Descent (`autogd`):

| Parameter | Type | Description |
| --- | --- | --- |
| `optimizer` | str or `torch.optim.Optimizer` | The optimization method to use. See the available optimizers. |
| `optimizer_kwargs` | dict | The keyword arguments used to initialize the optimization method. |
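Similarly, standard `torch.optim` usage for an optimizer built from `optimizer_kwargs` on a toy loss; the training-loop details inside `autogd` are an assumption, only the `torch.optim.Adam` calls are standard API:

```python
import torch

params = torch.zeros(5, requires_grad=True)        # toy parameter vector
optimizer_kwargs = {"lr": 1e-2}
optimizer = torch.optim.Adam([params], **optimizer_kwargs)

for _ in range(100):
    optimizer.zero_grad()                          # reset accumulated gradients
    loss = torch.sum((params - 1.0) ** 2)          # toy quadratic loss
    loss.backward()                                # backpropagate
    optimizer.step()                               # apply the update
```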