## Surrogate “Level-Based” Lagrangian Relaxation for mixed-integer linear programming

### Surrogate “Level-Based” Lagrangian Relaxation

In this subsection, a novel Surrogate “Level-Based” Lagrangian Relaxation (SLBLR) method is developed to determine “level” estimates of $$q(\lambda ^*)$$ within Polyak’s stepsizing formula (9) for fast convergence of multipliers when optimizing the dual function (3). Since $$q(\lambda ^*)$$ is generally unknown, over-estimates of the optimal dual value, if used in its place within formula (9), may lead to oscillation and divergence of the multipliers. Rather than relying on heuristic “oscillation detection” of multipliers to adjust “level” values24, the key idea of SLBLR is decision-based “divergence detection” of multipliers through a novel auxiliary “multiplier-divergence-detection” constraint-satisfaction problem.

#### “Multiplier-Divergence-Detection” problem to obtain the estimate of $$q(\lambda ^*)$$

The premise behind multiplier-divergence detection is the following rendition of a result due to Zhao et al.23:

### Theorem 1

Under the stepsizing formula

$$\begin{aligned}&s^k < \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^k,\tilde{y}^k,\lambda ^k)}{\Vert g(\tilde{x}^k,\tilde{y}^k)\Vert ^2}, \; \gamma < 1, \end{aligned}$$

(10)

such that $$\{\tilde{x}^k,\tilde{y}^k\}$$ satisfy

$$\begin{aligned}&L(\tilde{x}^k,\tilde{y}^k,\lambda ^k) \le L(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda ^k), \end{aligned}$$

(11)

the multipliers move closer to optimal multipliers $$\lambda ^*$$ iteration by iteration:

$$\begin{aligned}&\Vert \lambda ^*-\lambda ^{k+1}\Vert < \Vert \lambda ^*-\lambda ^k\Vert . \end{aligned}$$

(12)
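Theorem 1 can be illustrated numerically. The sketch below is a toy setting, not the paper’s MILP subproblems: it uses an exact one-dimensional dual $$q(\lambda )=-(\lambda -3)^2$$, so $$q(\lambda ^*)=0$$ is known, the “surrogate” dual value and subgradient are exact, and condition (11) holds trivially; the multipliers are then updated with a stepsize satisfying (10).

```python
# Toy illustration of Theorem 1 (assumed 1-D dual, not the paper's MILP):
# q(lam) = -(lam - 3)^2, so lambda* = 3 and q(lambda*) = 0 is known exactly.
q_star, gamma = 0.0, 0.9  # gamma < 1 as required by (10)
lam_star = 3.0

lam = 0.0
dist = [abs(lam_star - lam)]          # distances ||lambda* - lambda^k||
for k in range(5):
    L = -(lam - 3.0) ** 2             # dual value at lam (exact surrogate)
    g = -2.0 * (lam - 3.0)            # subgradient of q at lam
    s = gamma * (q_star - L) / g**2   # stepsize satisfying (10)
    lam = lam + s * g                 # multiplier update
    dist.append(abs(lam_star - lam))

# Per (12), the distance to lambda* shrinks at every iteration.
print(dist)
```

In this toy case each step contracts the distance by a constant factor $$1-\gamma /2$$, making the monotone decrease of (12) visible directly.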

The following Corollary 1 and Theorem 2 are the key results of this paper.

### Corollary 1

If

$$\begin{aligned}&\Vert \lambda ^*-\lambda ^{k+1}\Vert \ge \Vert \lambda ^*-\lambda ^k\Vert , \end{aligned}$$

(13)

then

$$\begin{aligned}&s^k \ge \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^k,\tilde{y}^k,\lambda ^k)}{\Vert g(\tilde{x}^k,\tilde{y}^k)\Vert ^2}. \end{aligned}$$

(14)

### Theorem 2

If the following auxiliary “multiplier-divergence-detection” feasibility problem (with $$\lambda$$ being a continuous decision variable: $$\lambda \in \mathbb R^m$$)

$$\begin{aligned}{\left\{ \begin{array}{ll} \Vert \lambda -\lambda ^{k_j+1}\Vert \le \Vert \lambda -\lambda ^{k_j}\Vert ,\\ \Vert \lambda -\lambda ^{k_j+2}\Vert \le \Vert \lambda -\lambda ^{k_j+1}\Vert ,\\ \dots \\ \Vert \lambda -\lambda ^{k_j+n_j}\Vert \le \Vert \lambda -\lambda ^{k_j+n_j-1}\Vert , \end{array}\right. } \end{aligned}$$

(15)

admits no feasible solution with respect to $$\lambda$$ for some $$k_j$$ and $$n_j$$, then $$\exists \; \kappa \in [k_j,k_j+n_j]$$ such that

$$\begin{aligned}&s^\kappa \ge \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^\kappa ,\tilde{y}^\kappa ,\lambda ^\kappa )}{\Vert g(\tilde{x}^\kappa ,\tilde{y}^\kappa )\Vert ^2}. \end{aligned}$$

(16)
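Although each constraint in (15) involves norms, squaring both sides cancels the quadratic term $$\Vert \lambda \Vert ^2$$: the constraint $$\Vert \lambda -a\Vert \le \Vert \lambda -b\Vert$$ becomes $$2(b-a)^T\lambda \le \Vert b\Vert ^2-\Vert a\Vert ^2$$, which is linear in $$\lambda$$. The feasibility test in (15) therefore reduces to a linear program. A minimal sketch of this reduction, assuming SciPy’s `linprog`; the function name `divergence_detected` and the toy one-dimensional iterates are illustrative, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def divergence_detected(iterates):
    """Return True if system (15) is infeasible for the given multiplier iterates."""
    A, b = [], []
    for prev, curr in zip(iterates, iterates[1:]):
        # ||lam - curr|| <= ||lam - prev||  <=>  2(prev - curr)^T lam <= ||prev||^2 - ||curr||^2
        A.append(2.0 * (prev - curr))
        b.append(float(prev @ prev - curr @ curr))
    res = linprog(c=np.zeros(len(iterates[0])), A_ub=np.array(A),
                  b_ub=np.array(b), bounds=[(None, None)] * len(iterates[0]))
    return res.status == 2  # HiGHS status 2: problem is infeasible

# Converging iterates: every point of [the limit's side] stays feasible.
conv = [np.array([4.0]), np.array([2.0]), np.array([1.0])]
# Diverging iterates: the halfspaces lam >= 1, lam <= 1, lam >= 2 have empty intersection.
div = [np.array([0.0]), np.array([2.0]), np.array([0.0]), np.array([4.0])]
print(divergence_detected(conv))  # False: some lambda approaches all iterates
print(divergence_detected(div))   # True: no such lambda exists, so (16) is triggered
```

When the LP is infeasible, no candidate $$\lambda$$ (in particular no $$\lambda ^*$$) is approached by every recent step, and Theorem 2 licenses resetting the “level” estimate.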

### Proof

Assume the contrary: $$\forall \kappa \in [k_j,k_j+n_j]$$ the following holds:

$$\begin{aligned}&s^\kappa < \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^\kappa ,\tilde{y}^\kappa ,\lambda ^\kappa )}{\Vert g(\tilde{x}^\kappa ,\tilde{y}^\kappa )\Vert ^2}. \end{aligned}$$