nlspace_solve()

nlspace_solve(problem: Optimizable, params=None, results=None)

Null Space Optimizer solver for the nonlinear constrained optimization problem

(1)\[\begin{aligned} \min_{x\in \mathcal{X}} & \quad J(x) \\ s.t. & \left\{\begin{aligned} g_i(x) & =0 \text{ for all }0 \leqslant i \leqslant p-1,\\ h_j(x) & \leqslant 0 \text{ for all }0\leqslant j \leqslant q-1,\end{aligned}\right. \end{aligned}\]

where \(p\) and \(q\) are the number of equality and inequality constraints. The Null Space Optimizer solves the optimization problem (1) by solving the Ordinary Differential Equation (the so-called “null-space gradient flow”)

(2)\[ \dot{x}(t) =-\alpha_J(t) \xi_J(x(t))-\alpha_C(t)\xi_C(x(t))\]

with adaptive coefficients \(\alpha_J(t)\) and \(\alpha_C(t)\).
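
For intuition, here is a minimal sketch of the forward Euler discretization of (2) on a toy problem with a single equality constraint. With one constraint, \(\xi_J\) reduces to the projection of the gradient of \(J\) onto the null space of the constraint derivative, and \(\xi_C\) to a Gauss-Newton direction. The helper names and fixed coefficients below are illustrative assumptions, not the solver's API; the real solver builds \(\xi_J\) and \(\xi_C\) through its QP machinery and adapts the coefficients.

```python
# Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0 (optimum at (0.5, 0.5)).
# All names here are illustrative placeholders.
J  = lambda x: x[0]**2 + x[1]**2
dJ = lambda x: [2*x[0], 2*x[1]]
g  = lambda x: x[0] + x[1] - 1.0
dg = lambda x: [1.0, 1.0]

def euler_step(x, dt=0.1):
    gJ, gC = dJ(x), dg(x)
    n2  = sum(c*c for c in gC)                      # |dg(x)|^2
    dot = sum(a*b for a, b in zip(gJ, gC))
    xiJ = [a - (dot/n2)*c for a, c in zip(gJ, gC)]  # project dJ onto ker(dg)
    xiC = [(g(x)/n2)*c for c in gC]                 # Gauss-Newton direction
    return [xi - dt*(xj + xc) for xi, xj, xc in zip(x, xiJ, xiC)]

x = [2.0, 0.0]
for _ in range(200):
    x = euler_step(x)
# x is now close to the constrained optimum [0.5, 0.5]
```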

Parameters:
  • problem – An Optimizable object implementing the optimization problem.

  • params – (optional) a dictionary containing algorithm parameters (see below).

  • results – (optional) a previous output of the nlspace_solve function. The optimization will restart from the last entry results['x'][-1] of this dictionary and keep going as if there were no interruption.

Return results:

A dictionary of results containing the following entries:

  • results['it']: iteration numbers [0, 1, ..., n].

  • results['x']: optimization path [x0, x1, ..., xn].

  • results['J']: values [J(x0), ..., J(xn)] of the objective function along the path.

  • results['G']: equality constraint values [G(x0), ..., G(xn)].

  • results['H']: inequality constraint values [H(x0), ..., H(xn)].

  • results['muls']: Lagrange multiplier values at every iteration [mu(x0), ..., mu(xn)]. Each mu(xi) is a vector or list whose length is the total number of constraints.

  • results['normxiJ'] : \(L^\infty\) norms of the nullspace step \(\xi_J\) (without normalization)

  • results['normxiJ_save'] : \(L^\infty\) norms of the nullspace step \(\xi_J\) after every renormalization.

  • results['eps'] : threshold values estimated for the constraints, used to take into account constraints that are close to being saturated or violated (namely those within these eps thresholds).

  • results['tolerance']: estimation of an uncertainty bound on the constraints below which they can be expected to be satisfied. The uncertainty bound is computed thanks to the formula: \(\texttt{tolerance} = ||\texttt{DC}||_1 \texttt{params['dt']}\) where DC is the Jacobian matrix of the constraints.

  • results['s']: the optimization path length [s(x0),s(x1),...,s(xn)] with \(\texttt{s(x(t))}=\int_0^t ||\texttt{x}'(\tau)||_2 d\tau\).

It is possible to save only a limited number of previous iterations, see params[‘save_only_N_iterations’].
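
The entries results['s'] and results['tolerance'] described above can be recomputed from stored data. A plain-Python sketch, assuming \(||\texttt{DC}||_1\) denotes the induced matrix 1-norm (maximum absolute column sum); that convention is an assumption here, not taken from the solver's source:

```python
import math

def path_length(xs):
    """Discrete counterpart of results['s']: cumulative sums of
    ||x_{k+1} - x_k||_2 along the optimization path."""
    s = [0.0]
    for a, b in zip(xs, xs[1:]):
        s.append(s[-1] + math.dist(a, b))
    return s

def constraint_tolerance(DC, dt):
    """tolerance = ||DC||_1 * dt, with ||.||_1 taken as the induced
    matrix 1-norm (max absolute column sum); this convention is an
    assumption."""
    norm1 = max(sum(abs(row[j]) for row in DC) for j in range(len(DC[0])))
    return norm1 * dt

print(path_length([(0, 0), (3, 0), (3, 4)]))  # [0.0, 3.0, 7.0]
print(constraint_tolerance([[1.0, 0.0], [2.0, -1.0]], 0.1))  # ~0.3
```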

Algorithm parameters

  • params['alphaJ']: (default 1) scaling coefficient for the null space step \(\xi_J\) decreasing the objective function. \(\alpha_J(t)\) is computed such that \(||\alpha_J(t)\xi_J(x(t))||_{\infty}\leqslant \texttt{params['alphaJ']}\).

  • params['alphaC'] : (default 1) scaling coefficient for the Gauss Newton direction \(\xi_C\) decreasing the violation of the constraints. \(\alpha_C(t)\) is computed such that \(||\alpha_C(t)\xi_C(x(t))||_{\infty}\leqslant \texttt{params['alphaC']}\).

  • params['alphas']: (default None) a list [alpha1,...,alpha_{p+q}] or a function lambda x : alphas(x) where alphas(x)=[alpha1,...,alpha_{p+q}] is a list of weights scaling the Gauss Newton direction \(\xi_C\) for each of the constraints at the point x. Here p is the number of equality constraints and q the number of inequality constraints. This parameter can be a function to allow for cases where the number of constraints may depend on x (for instance if x is a discretization of an infinite dimensional object) or if it is important to change the weights depending on x.

  • params['debug']: (default 0) tunes the verbosity of the output. The higher this parameter, the more verbose the output. Use params['debug']=-1 to display only the final result, and params['debug']=-2 to remove any output.

  • params['dt']: (default : 0.1). Time-step used for the discretization of (2) with the Forward Euler method.

  • params['itnormalisation']: (default 1) the null space step \(\xi_J\) is normalized until the iteration params['itnormalisation']. More precisely, \(\alpha_J(t)\) is computed such that \(||\alpha_J(t)\xi_J(x(t))||_{\infty}=\texttt{params['alphaJ']}\) until the iteration number params['itnormalisation'] (after this equality becomes an inequality).

    Increasing this parameter allows faster progress during the first iterations but may also result in a too large time step.

  • params['K']: tunes the distance at which inactive inequality constraints are “felt”; constraints are “felt” from a distance K*params['dt'] from the boundary, which avoids oscillations around active constraints.

  • params['maxit']: Maximal number of iterations (default : 4000)

  • params['maxtrials']: (default 3) number of trials between time steps until the finite difference check is acceptable (see params['tol_finite_diff']).

  • params['normalisation_norm'] : the norm used to normalize the descent direction (default is \(L^\infty\) encoded by numpy.inf).

  • params['dual_norm'] : the dual norm of the norm provided by params['normalisation_norm'].

  • params['normalize_tol'] : (default: -1) if >= 0, then \(\xi_J\) is normalized such that \(||\alpha_J(t)\xi_J(x(t))||_{\infty}=\texttt{params['alphaJ']}\) every time the set of active constraints changes (in addition to the first params['itnormalisation'] normalization iterations). This can speed up the optimizer when there is a small number of constraints. The value of this parameter can be set to a strictly positive tolerance (e.g. 1e-7) to normalize only when a substantial discontinuity occurs in the multipliers. If set to a negative value, no normalization is performed when the active set changes.

    Note

    Turning this parameter on is not recommended if there are many constraints as too many normalizations can make the convergence to the local optimum unstable.

  • params['tol']: (default 1e-5). The algorithm stops when \(||x_{n+1}-x_n||_{2}<\texttt{params['tol']}\times \texttt{params['dt']}\) or after params['maxit'] iterations.

  • params['tol_finite_diff'] : (default 0.15) a warning message will be displayed if params['debug']=1 and if one of the finite difference checks

    \[\begin{aligned} |J(x_{n+1})-J(x_n) - \texttt{DJ(dx)}| & \leqslant \texttt{params['tol\_finite\_diff']}\times ||\texttt{dx}||_2 \\ | C(x_{n+1})-C(x_n) - \texttt{DC(dx)} | & \leqslant \texttt{params['tol\_finite\_diff']}\times ||\texttt{dx}||_2 \end{aligned}\]

    fails, where \(C\) is the vector of constraints. Useful for checking the correct implementation of the sensitivities.

  • params['tol_qp'] : (default 1e-10) the tolerance for the qp solver.

  • params['tol_cg'] : (default 1e-15) the tolerance for scipy iterative linear solvers (e.g. lsqr).

  • params['show_progress_qp'] : (default False) If set to True, the output of qp solver will be displayed between iterations.

  • params['qp_solver'] : 'osqp' (default), 'cvxopt' or 'qpalm'. The quadratic programming solver used to solve the dual problem: OSQP, CVXOPT or QPALM.

  • params['qp_solver_options'] : a dictionary of options that can be passed to the quadratic programming solver. Check the documentation of OSQP and CVXOPT.

  • params['save_only_N_iterations'] : (default None) if set to an integer N>0, then the routine will only save the last N iterates in results['x']. Past iterates are set to None to save memory. By default, the routine saves all iterates, which can be costly for design variables carrying a lot of information, such as large arrays.

  • params['save_only_Q_constraints'] : (default None) this parameter is taken into account only if params['save_only_N_iterations'] is set to a positive integer. Then, if set to an integer Q, the routine will save data related to only the first Q equality and Q inequality constraints beyond the last N iterations. This saves memory when the number of constraints is large, e.g. in the case of bound constraints.

  • params['start'] : (default None) If set to an integer, and if a dictionary of previous results is provided, the optimizer will restart at iteration params['start'].

  • params['CFL'] : TODO
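
The finite difference test controlled by params['tol_finite_diff'] can be sketched as follows for the objective function; the callbacks J and DJ are placeholders for the corresponding sensitivities, and DJ(dx) is taken here as a plain dot product:

```python
import math

def finite_diff_ok(J, DJ, x, dx, tol=0.15):
    """Return True when |J(x+dx) - J(x) - DJ(x).dx| <= tol * ||dx||_2,
    i.e. when the provided derivative is consistent with a first-order
    finite difference (placeholder names, not the solver's internals)."""
    x2 = [a + b for a, b in zip(x, dx)]
    predicted = sum(d*h for d, h in zip(DJ(x), dx))
    return abs(J(x2) - J(x) - predicted) <= tol * math.hypot(*dx)

# A correct derivative of J(x) = x1^2 + x2^2 passes the check...
J  = lambda x: x[0]**2 + x[1]**2
DJ = lambda x: [2*x[0], 2*x[1]]
print(finite_diff_ok(J, DJ, [1.0, 1.0], [1e-3, 1e-3]))  # True
# ...while a wrong derivative fails it.
print(finite_diff_ok(J, lambda x: [0.0, 0.0], [1.0, 1.0], [1e-3, 1e-3]))  # False
```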