chop.constraints
Constraints.
This module contains classes representing constraints. The methods of each constraint object operate batch-wise. Reshaping may be required if the constraints are used on the parameters of a model.
This uses an API similar to the one for the COPT project, https://github.com/openopt/copt. Part of this code is adapted from https://github.com/ZIB-IOL.
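As an illustration of the batch-wise convention, the sketch below reshapes a model parameter to a (batch, n) tensor before projecting it and then restores its shape. The linf_ball_projection helper is hand-written for this example and is not part of chop; it only stands in for a constraint method that expects a leading batch dimension.

import torch

def linf_ball_projection(x, radius=1.0):
    # Project each row of a (batch, n) tensor onto an L-infinity ball of the given radius.
    return x.clamp(min=-radius, max=radius)

conv_weight = torch.randn(16, 3, 3, 3)                 # e.g. a Conv2d parameter
batched = conv_weight.reshape(1, -1)                   # flatten to a batch of size 1
projected = linf_ball_projection(batched, radius=0.1)  # batch-wise projection
conv_weight = projected.reshape(conv_weight.shape)     # restore the original shape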
Functions
create_lp_constraints – Create an LpBall constraint for each layer of model; value is either the ball radius or a factor multiplying the layer's average initialization norm, depending on mode.

euclidean_proj_l1ball – Compute the Euclidean projection onto the L1-ball, i.e. solve min_w 0.5 * ||w - v||_2^2 s.t. ||w||_1 <= s.

euclidean_proj_simplex – Compute the Euclidean projection onto the positive simplex, i.e. solve min_w 0.5 * ||w - v||_2^2 s.t. sum_i w_i = s, w_i >= 0.

Compute the average norm of the default layer initialization.
Classes
NuclearNormBall – Nuclear norm constraint, i.e. the sum of the singular values (Schatten-1 norm).
class chop.constraints.NuclearNormBall(alpha)
Nuclear norm constraint, i.e. the sum of the singular values of a matrix, also known as the Schatten-1 norm. The nuclear norm is computed over the last two dimensions of the input.
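For intuition only, the quantity this constraint controls can be computed directly in PyTorch. The snippet below illustrates the norm itself, taken over the last two dimensions as described above; it does not use the constraint object's own methods.

import torch

x = torch.randn(8, 5, 4)                              # batch of 8 matrices of shape (5, 4)
nuclear_norms = torch.linalg.svdvals(x).sum(dim=-1)   # sum of singular values per matrix
print(nuclear_norms.shape)                            # torch.Size([8])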
chop.constraints.create_lp_constraints(model, p=2, value=300, mode='initialization')
Create an LpBall constraint for each layer of model. The interpretation of value depends on mode: it is either the ball radius, or a factor by which the layer's average initialization norm is multiplied.
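A minimal usage sketch: only the call signature is taken from the documentation above; the assumption that the function returns a collection of constraint objects, one per layer, is this example's own.

import torch
from chop import constraints

model = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU(), torch.nn.Linear(20, 2))

# With the default mode='initialization', value scales each layer's average
# initialization norm; the alternative mode uses value directly as the radius.
layer_constraints = constraints.create_lp_constraints(model, p=2, value=100)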
chop.constraints.euclidean_proj_l1ball(v, s=1.0)
Compute the Euclidean projection onto the L1-ball. Solves the following optimization problem (using the algorithm from [1]):

.. math::
    \min_w \; \frac{1}{2} \|w - v\|_2^2 \quad \text{s.t.} \quad \|w\|_1 \le s

Parameters
    v – (n,) numpy array, n-dimensional vector to project
    s – float, optional, default: 1, radius of the L1-ball

Returns
    w – (n,) numpy array, the Euclidean projection of v onto the L1-ball of radius s

Notes
Solves the problem by a reduction to the positive simplex case.

See also
euclidean_proj_simplex
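A short usage sketch of the documented signature: project a vector and verify that the resulting L1 norm does not exceed the radius (up to floating-point tolerance).

import numpy as np
from chop.constraints import euclidean_proj_l1ball

v = np.random.randn(1000)
w = euclidean_proj_l1ball(v, s=1.0)
assert np.abs(w).sum() <= 1.0 + 1e-8   # ||w||_1 <= s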
chop.constraints.euclidean_proj_simplex(v, s=1.0)
Compute the Euclidean projection onto the positive simplex. Solves the following optimization problem (using the algorithm from [1]):

.. math::
    \min_w \; \frac{1}{2} \|w - v\|_2^2 \quad \text{s.t.} \quad \sum_i w_i = s, \; w_i \ge 0

Parameters
    v – (n,) numpy array, n-dimensional vector to project
    s – float, optional, default: 1, radius of the simplex

Returns
    w – (n,) numpy array, the Euclidean projection of v onto the simplex

Notes
The complexity of this algorithm is O(n log n), as it involves sorting v. Better alternatives exist for high-dimensional sparse vectors (cf. [1]). However, this implementation still easily scales to millions of dimensions.
References
[1] Efficient Projections onto the ℓ1-Ball for Learning in High Dimensions. John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. International Conference on Machine Learning (ICML 2008). http://www.cs.berkeley.edu/~jduchi/projects/DuchiSiShCh08.pdf
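For reference, the O(n log n) sort-based rule from [1] described in the Notes above can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not chop's own code, and the helper name project_simplex_sketch is made up for this example.

import numpy as np

def project_simplex_sketch(v, s=1.0):
    # Project v onto {w : sum(w) = s, w >= 0} using the sort-based rule from [1].
    u = np.sort(v)[::-1]                         # entries of v sorted in decreasing order
    css = np.cumsum(u)                           # cumulative sums of the sorted entries
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - s) / ks > 0)[0][-1]  # largest k with u_k - (cumsum_k - s)/k > 0
    theta = (css[rho] - s) / (rho + 1.0)         # shift that enforces sum(w) = s
    return np.maximum(v - theta, 0.0)

w = project_simplex_sketch(np.random.randn(5), s=1.0)
print(w, w.sum())                                # nonnegative entries summing to 1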