Home
Welcome to the SPSA wiki!
The purpose of this package is to provide multivariable optimizers using SPSA (simultaneous perturbation stochastic approximation). Although other optimizers exist, not many implement SPSA, which has its own tradeoffs: it is gradient-free, needs only two function evaluations per iteration regardless of dimension, and tolerates noisy objectives. Additionally, `spsa` has few requirements, so you don't have to install large packages like `scipy` just to optimize a function.
Unix/macOS:

```sh
python3 -m pip install spsa
```

Windows:

```sh
py -m pip install spsa
```
```python
import spsa

def f(x):
    return ...

x = spsa.minimize(f, ...)
```
The `sphere` function is a simple test function for optimization. It is the squared norm of the input, which is minimal when `x = 0`.
```python
import numpy as np

def sphere(x: np.ndarray) -> float:
    """Returns the square of the norm."""
    return np.linalg.norm(x) ** 2
```
Trying to find its minimum starting from `[1, 2, 3]` may give something similar to the following output.
```python
>>> import spsa
>>> spsa.minimize(sphere, [1, 2, 3])
array([ 3.23484811e-26, -2.33465190e-24, 1.04123278e-25])
>>> # Results may vary.
```
Let's try it in 100 dimensions and see what the largest component is.
```python
>>> np.abs(spsa.minimize(sphere, range(100))).max()
0.0004725157858277078
>>> # Results may vary.
```
This is pretty good considering the furthest component (the last one) starts at `99` and converges to `~4.7e-4`.
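If you'd rather judge the result by the objective value than by the largest component, you can evaluate `sphere` at the returned point. This is just a quick sanity check using the `sphere` function from above; the exact number will vary from run to run.

```python
import numpy as np
import spsa

def sphere(x: np.ndarray) -> float:
    """Returns the square of the norm."""
    return np.linalg.norm(x) ** 2

# The objective value at the solution is the squared distance
# from the true minimum at the origin, so it should be tiny.
x = spsa.minimize(sphere, range(100))
print(sphere(x))  # exact value varies from run to run
```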
We can do polynomial regression using `numpy.polynomial`'s `Polynomial.fit` on a dataset, but what if we want to apply it to an entire interval of points? Let's try using `spsa` to match a polynomial function to `exp(x)` from `-1` to `1`.
First, we need to define the objective function: the error between the polynomial and `exp(x)`. However, we cannot evaluate it over every point from `-1` to `1`. Instead, we can sample a random point from `-1` to `1` on each call, using an `rng` generator.
```python
from math import exp
from typing import Iterator

import numpy as np
from numpy.polynomial import Polynomial

def error(coef: np.ndarray, rng: Iterator[float]) -> float:
    """Computes the error between `Polynomial(coef)` and `exp` at a random point."""
    poly = Polynomial(coef)
    x = next(rng)
    return (poly(x) - exp(x)) ** 2
```
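Note that `rng` can be any iterator of floats. Before bringing in `spsa`'s helpers, you can sanity-check `error` with a hand-rolled generator of uniform samples; `uniform_points` below is a hypothetical stand-in for illustration, not part of the library (this assumes the imports and `error` defined above).

```python
import random

def uniform_points(a: float, b: float) -> Iterator[float]:
    """Yields uniform random samples from [a, b] forever."""
    while True:
        yield random.uniform(a, b)

# Squared error at one random point for the candidate coefficients [1, 1, 0.5].
rng = uniform_points(-1.0, 1.0)
print(error(np.array([1.0, 1.0, 0.5]), rng))
```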
We can then minimize the error using `spsa.minimize`. To fill in for the `rng`, we can use `spsa.random.iterator.regression(-1, 1)` to generate random points from `-1` to `1`.
Let's try matching a quadratic polynomial.
```python
from functools import partial

import spsa
from spsa.random.iterator import regression

coef = spsa.minimize(partial(error, rng=regression(-1, 1)), [0, 0, 0])
poly = Polynomial(coef)
```
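As a rough sanity check before plotting, you can compare the fitted coefficients against the first three Maclaurin coefficients of `exp(x)`, which are `[1, 1, 0.5]`. A least-squares fit over the whole interval won't reproduce these exactly, but they should be in the same ballpark.

```python
print(coef)  # compare loosely against [1, 1, 0.5]
```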
To see how well we actually did, we will need the `matplotlib` package to graph our results.
```python
import matplotlib.pyplot as plt

def plot(poly: Polynomial) -> None:
    """Plot the results."""
    x = np.linspace(-1, 1)
    plt.plot(x, np.exp(x), label="exp")
    plt.plot(x, poly(x), label="poly")
    plt.legend(loc="upper left")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title("Polynomial Regression for exp(x)")
    plt.show()

plot(poly)
```
You should end up with a graph similar to this:

*(figure: "Polynomial Regression for exp(x)" showing `exp` and the fitted quadratic over `[-1, 1]`)*
Even though we only provided an interval and the error at a single random point per evaluation, the polynomial regression turned out really well.
Let's try bumping up the polynomial degree by one.
```python
coef = spsa.minimize(partial(error, rng=regression(-1, 1)), [0, 0, 0, 0])
poly = Polynomial(coef)

plot(poly)
```
You should end up with a graph similar to this:

*(figure: "Polynomial Regression for exp(x)" showing `exp` and the fitted cubic over `[-1, 1]`)*
Amazing! It turns out a cubic polynomial is a pretty solid approximation of `exp(x)` from `-1` to `1`.
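If you want a number to go with the picture, you can estimate the worst-case gap between the fit and `exp(x)` on a dense grid, using the `poly` fitted above. Your exact value will differ from run to run.

```python
x = np.linspace(-1, 1, 1001)
print(np.max(np.abs(poly(x) - np.exp(x))))  # worst-case absolute error on [-1, 1]
```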