Surrogate “Level-Based” Lagrangian Relaxation
In this subsection, a novel Surrogate “Level-Based” Lagrangian Relaxation (SLBLR) method is developed to determine “level” estimates of \(q(\lambda ^*)\) within Polyak’s stepsizing formula (9) for fast convergence of multipliers when optimizing the dual function (3). Since the knowledge of \(q(\lambda ^*)\) is generally unavailable, over-estimates of the optimal dual value, if used in place of \(q(\lambda ^*)\) within the formula (9), may lead to oscillation of multipliers and to divergence. Rather than relying on heuristic “oscillation detection” of multipliers to adjust “level” values24, the key idea of SLBLR is decision-based “divergence detection” of multipliers based on a novel auxiliary “multiplier-divergence-detection” constraint satisfaction problem.
“Multiplier-Divergence-Detection” problem to obtain the estimate of \(q(\lambda ^*)\)
The premise behind the multiplier-divergence detection is the following rendition of the result due to Zhao et al.23:
Theorem 1
Under the stepsizing formula
$$\begin{aligned}&s^k < \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^k,\tilde{y}^k,\lambda ^k)}{\Vert g(\tilde{x}^k,\tilde{y}^k)\Vert ^2}, \quad \gamma < 1, \end{aligned}$$
(10)
such that \(\{\tilde{x}^k,\tilde{y}^k\}\) satisfy
$$\begin{aligned}&L(\tilde{x}^k,\tilde{y}^k,\lambda ^k) \le L(\tilde{x}^{k-1},\tilde{y}^{k-1},\lambda ^k), \end{aligned}$$
(11)
the multipliers move closer to optimal multipliers \(\lambda ^*\) iteration by iteration:
$$\begin{aligned}&\Vert \lambda ^*-\lambda ^{k+1}\Vert < \Vert \lambda ^*-\lambda ^k\Vert . \end{aligned}$$
(12)
The following Corollary and Theorem 2 are the key results of this paper.
Corollary 1
If
$$\begin{aligned}&\Vert \lambda ^*-\lambda ^{k+1}\Vert \ge \Vert \lambda ^*-\lambda ^k\Vert , \end{aligned}$$
(13)
then
$$\begin{aligned}&s^k \ge \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^k,\tilde{y}^k,\lambda ^k)}{\Vert g(\tilde{x}^k,\tilde{y}^k)\Vert ^2}. \end{aligned}$$
(14)
Theorem 2
If the following auxiliary “multiplier-divergence-detection” feasibility problem (with \(\lambda\) being a continuous decision variable: \(\lambda \in \mathbb R^m\))
$$\begin{aligned} {\left\{ \begin{array}{ll} \Vert \lambda -\lambda ^{k_j+1}\Vert \le \Vert \lambda -\lambda ^{k_j}\Vert ,\\ \Vert \lambda -\lambda ^{k_j+2}\Vert \le \Vert \lambda -\lambda ^{k_j+1}\Vert ,\\ \vdots \\ \Vert \lambda -\lambda ^{k_j+n_j}\Vert \le \Vert \lambda -\lambda ^{k_j+n_j-1}\Vert , \end{array}\right. } \end{aligned}$$
(15)
admits no feasible solution with respect to \(\lambda\) for some \(k_j\) and \(n_j\), then \(\exists \; \kappa \in [k_j,k_j+n_j]\) such that
$$\begin{aligned}&s^\kappa \ge \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^\kappa ,\tilde{y}^\kappa ,\lambda ^\kappa )}{\Vert g(\tilde{x}^\kappa ,\tilde{y}^\kappa )\Vert ^2}. \end{aligned}$$
(16)
Proof
Assume the contrary: \(\forall \kappa \in [k_j,k_j+n_j]\) the following holds:
$$\begin{aligned}&s^\kappa < \gamma \cdot \frac{q(\lambda ^*) - L(\tilde{x}^\kappa ,\tilde{y}^\kappa ,\lambda ^\kappa )}{\Vert g(\tilde{x}^\kappa ,\tilde{y}^\kappa )\Vert ^2}. \end{aligned}$$
(17)
By Theorem 1, the multipliers then approach \(\lambda ^*\) at every iteration \(\kappa \in [k_j,k_j+n_j]\), so \(\lambda = \lambda ^*\) satisfies every inequality within (15); therefore, the “multiplier-divergence-detection” problem admits at least one feasible solution: \(\lambda ^*.\) Contradiction. \(\square\)
From (16) it follows that \(\exists \; \overline{q}_{\kappa ,j}\) such that \(\overline{q}_{\kappa ,j} > q(\lambda ^*)\) and the following holds:
$$\begin{aligned}&s^\kappa = \gamma \cdot \frac{\overline{q}_{\kappa ,j} - L(\tilde{x}^\kappa ,\tilde{y}^\kappa ,\lambda ^\kappa )}{\Vert g(\tilde{x}^\kappa ,\tilde{y}^\kappa )\Vert ^2}. \end{aligned}$$
(18)
The equation (18) can equivalently be rewritten as:
$$\begin{aligned}&\overline{q}_{\kappa ,j} = \frac{1}{\gamma } \cdot s^\kappa \cdot \Vert g(\tilde{x}^\kappa ,\tilde{y}^\kappa )\Vert ^2 + L(\tilde{x}^\kappa ,\tilde{y}^\kappa ,\lambda ^\kappa ). \end{aligned}$$
(19)
Therefore,
$$\begin{aligned}&\overline{q}_{j} = \max _{\kappa \in [k_j,k_j+n_j]} \overline{q}_{\kappa ,j} > q(\lambda ^*). \end{aligned}$$
(20)
A brief yet important discussion is in order here. The overestimate \(\overline{q}_{j}\) of the dual value \(q(\lambda ^*)\) is the sought-for “level” value after the \(j^{th}\) update (the \(j^{th}\) time the problem (15) is infeasible). Unlike previous methods, which require heuristic hyperparameter adjustments to set level values, within SLBLR, level values are obtained by using the decision-based principle per (15) precisely when divergence is detected, without any guesswork. In a sense, SLBLR is hyperparameter-adjustment-free. Specifically, neither the “multiplier-divergence-detection” problem (15) nor the computations within (18)–(20) require hyperparameter adjustment. Following Nedić and Bertsekas27, the parameter \(\gamma\) is chosen as a fixed value \(\gamma = \frac{1}{I}\), the inverse of the number of subproblems, and does not require further adjustment.
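For illustration, a minimal sketch of the level-value computation per (19)–(20) is given below, assuming that the stepsizes, surrogate subgradients, and surrogate dual values over the window \([k_j,k_j+n_j]\) have been stored; the function and variable names are illustrative assumptions rather than part of the method’s formal statement.

```python
# A minimal sketch of the level-value computation per (19)-(20); "ss", "gs", and
# "Ls" are assumed to hold the stepsizes s^kappa, the surrogate subgradients
# g(x~^kappa, y~^kappa), and the surrogate dual values L(x~^kappa, y~^kappa, lam^kappa)
# over the window kappa in [k_j, k_j + n_j]; gamma is the parameter from (10).
import numpy as np

def level_estimate(ss, gs, Ls, gamma):
    # (19): q_bar_{kappa,j} = (1/gamma) * s^kappa * ||g^kappa||^2 + L^kappa
    q_bars = [s / gamma * np.dot(g, g) + L for s, g, L in zip(ss, gs, Ls)]
    # (20): the level value is the maximum over the window
    return max(q_bars)
```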
Note that (15) simplifies to an LP constraint satisfaction problem. For example, after squaring both sides of the first inequality \(\Vert \lambda -\lambda ^{k_j+1}\Vert \le \Vert \lambda -\lambda ^{k_j}\Vert\) within (15), using the binomial expansion, and canceling \(\Vert \lambda -\lambda ^{k_j}\Vert ^2\) from both sides, the inequality simplifies to \(2 \cdot (\lambda - \lambda ^{k_j}) \cdot g(\tilde{x}^{k_j},\tilde{y}^{k_j}) \ge s^{k_j} \cdot \Vert g(\tilde{x}^{k_j},\tilde{y}^{k_j}) \Vert ^2,\) which is linear in terms of \(\lambda .\)
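To make the LP reformulation concrete, the following is a minimal sketch of the “multiplier-divergence-detection” check, assuming the multipliers, surrogate subgradients, and stepsizes for iterations \(k_j,\ldots ,k_j+n_j-1\) are available as NumPy arrays; SciPy’s linprog is used here purely as a generic LP feasibility solver (the experiments in this paper use CPLEX), and all names are illustrative.

```python
# A minimal sketch of the "multiplier-divergence-detection" problem (15) posed as an
# LP feasibility problem. "lambdas", "gs", and "ss" are assumed to hold lam^k, the
# surrogate subgradients g(x~^k, y~^k), and the stepsizes s^k for the iterations
# k = k_j, ..., k_j + n_j - 1 accumulated since the last level update.
import numpy as np
from scipy.optimize import linprog

def divergence_detected(lambdas, gs, ss):
    """Return True if (15) admits no feasible lambda, i.e., divergence is detected.

    Each inequality ||lam - lam^{k+1}|| <= ||lam - lam^k|| reduces, after squaring
    and canceling ||lam - lam^k||^2, to the linear constraint
        2 * (lam - lam^k) @ g^k >= s^k * ||g^k||^2,
    which in the A_ub @ lam <= b_ub form expected by linprog becomes
        (-2 * g^k) @ lam <= -(2 * g^k @ lam^k + s^k * ||g^k||^2).
    """
    A_ub, b_ub = [], []
    for lam_k, g_k, s_k in zip(lambdas, gs, ss):
        A_ub.append(-2.0 * g_k)
        b_ub.append(-(2.0 * g_k @ lam_k + s_k * np.dot(g_k, g_k)))
    m = len(lambdas[0])
    res = linprog(c=np.zeros(m), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * m, method="highs")
    return res.status == 2  # linprog status 2 indicates an infeasible problem
```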
To speed up convergence, a hyperparameter \(\zeta < 1\) is introduced to reduce stepsizes as follows:
$$\begin{aligned}&s^k = \zeta \cdot \gamma \cdot \frac{\overline{q}_{j} - L(\tilde{x}^k,\tilde{y}^k,\lambda ^k)}{\Vert g(\tilde{x}^k,\tilde{y}^k)\Vert ^2}, \quad \zeta < 1. \end{aligned}$$
(21)
Subsequently, after iteration \(k_{j+1}\), the problem (15) is sequentially solved again by adding one inequality per multiplier-updating iteration until iteration \(k_{j+1}+n_{j+1}-1\) is reached for some \(n_{j+1}\) such that (15) is infeasible. Then, the stepsize is updated by using \(\overline{q}_{j+1}\) per (21) and is used to update multipliers until the next time it is updated to \(\overline{q}_{j+2}\), when the “multiplier-divergence-detection” problem is infeasible again, and the process repeats. Per (21), SLBLR requires the hyperparameter \(\zeta\); yet, it is set before the algorithm is run and subsequently is not adjusted (see “Numerical testing” section for an empirical demonstration of the robustness of the method with respect to the choice of \(\zeta\)).
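A minimal sketch of how these pieces could be combined into one multiplier-updating loop is given below; it reuses the divergence_detected and level_estimate sketches above, and surrogate_solve is a hypothetical placeholder for solving the subproblems subject to the surrogate optimality condition (11). Keeping the initial stepsize \(s^0\) until the first level estimate becomes available is an assumption made here for illustration.

```python
# A minimal sketch of SLBLR multiplier updates combining (15) and (19)-(21).
# "surrogate_solve" is a hypothetical placeholder returning the surrogate
# subgradient g(x~^k, y~^k) and the surrogate dual value L(x~^k, y~^k, lam^k).
import numpy as np

def slblr_multiplier_loop(lam0, s0, gamma, zeta, surrogate_solve, max_iter=1000):
    lam, s, q_bar = np.asarray(lam0, dtype=float), s0, None
    window = []                      # iterates accumulated since the last level update
    for k in range(max_iter):
        g, L = surrogate_solve(lam)  # subproblem solutions satisfying (11)
        if not np.any(g):            # zero surrogate subgradient: relaxed constraints hold
            break
        if q_bar is not None:
            s = zeta * gamma * (q_bar - L) / np.dot(g, g)   # stepsize per (21)
        window.append((lam.copy(), g, s, L))
        lam = lam + s * g            # multiplier update
        lams, gs, ss, Ls = map(list, zip(*window))
        if divergence_detected(lams, gs, ss):               # problem (15) infeasible
            q_bar = level_estimate(ss, gs, Ls, gamma)       # new level per (19)-(20)
            window = []              # start accumulating a new window of inequalities
    return lam
```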
To summarize the advantage of SLBLR, hyperparameter adjustment is not needed. The guesswork of when to adjust the level value, and by how much, is obviated: after (15) is infeasible, the level value is formulaically recalculated.
On improvement of convergence
To accelerate the detection of multiplier divergence through the “multiplier-divergence-detection” problem, (15) is modified, albeit heuristically, in the following way:
$$\begin{aligned} {\left\{ \begin{array}{ll} \Vert \lambda -\lambda ^{k_j+1}\Vert \le \sqrt{1-2 \cdot \nu \cdot s^{k_j}} \cdot \Vert \lambda -\lambda ^{k_j}\Vert ,\\ \Vert \lambda -\lambda ^{k_j+2}\Vert \le \sqrt{1-2 \cdot \nu \cdot s^{k_j+1}} \cdot \Vert \lambda -\lambda ^{k_j+1}\Vert ,\\ \vdots \\ \Vert \lambda -\lambda ^{k_j+n_j}\Vert \le \sqrt{1-2 \cdot \nu \cdot s^{k_j+n_j-1}} \cdot \Vert \lambda -\lambda ^{k_j+n_j-1}\Vert . \end{array}\right. } \end{aligned}$$
(22)
Unlike the problem (15), the problem (22) no longer simplifies to an LP problem. Nevertheless, the system of inequalities delineates a convex region and can still be handled by commercial software.
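As an illustration of how (22) can be handled by off-the-shelf convex optimization software, the following sketch squares each inequality and expands it into a convex quadratic constraint (convex since \(1-(1-2\cdot \nu \cdot s)=2\cdot \nu \cdot s\ge 0\)); cvxpy is used here purely for illustration, and it is assumed that \(2\cdot \nu \cdot s^k\le 1\) so that the contraction factor in (22) is well defined.

```python
# A minimal sketch of the modified detection problem (22) as a convex feasibility
# check. "lam_pairs" is assumed to hold consecutive multiplier iterates
# (lam^k, lam^{k+1}) and "ss" the stepsizes s^k used between them; "nu" is the
# hyperparameter from (22). Squaring ||lam - a|| <= c * ||lam - b|| with
# c^2 = 1 - 2*nu*s and expanding gives the convex quadratic constraint
#   2*nu*s * ||lam||^2 - 2*(a - c^2*b) @ lam + (a @ a - c^2 * b @ b) <= 0.
import numpy as np
import cvxpy as cp

def slow_convergence_detected(lam_pairs, ss, nu):
    m = len(lam_pairs[0][0])
    lam = cp.Variable(m)
    constraints = []
    for (b, a), s in zip(lam_pairs, ss):       # b = lam^k, a = lam^{k+1}
        c2 = 1.0 - 2.0 * nu * s                # squared contraction factor in (22)
        constraints.append(
            (1.0 - c2) * cp.sum_squares(lam)
            - 2.0 * (a - c2 * b) @ lam
            + (float(a @ a) - c2 * float(b @ b)) <= 0.0
        )
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in ("infeasible", "infeasible_inaccurate")
```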
Discussion of (22)
Equation (22) is developed based on the following principles: 1. Rather than detecting divergence per (15), convergence at a rate slower than \(\sqrt{1-2 \cdot \nu \cdot s}\) is detected. This leads to a faster adjustment of the level values. While the level value may no longer be guaranteed to be an upper bound on \(q(\lambda ^*)\), the merit of the above scheme will be empirically justified in the “Numerical testing” section. 2. While the rate of convergence is unknown, in the “worst-case” scenario \(\sqrt{1-2 \cdot \nu \cdot s}\) is upper bounded by 1 with \(\nu = 0\), thereby reducing (22) to (15). The estimation of \(\sqrt{1-2 \cdot \nu \cdot s}\) is thus much easier than the previously used estimations of \(q(\lambda ^*)\) (as in the Subgradient-Level and Incremental Subgradient approaches). 3. As the stepsize approaches zero, \(\sqrt{1-2 \cdot \nu \cdot s}\) approaches the value of 1 regardless of the value of \(\nu\), once again reducing (22) to (15).
There are three things to note here. 1. The steps in lines 15-16 are optional, since other criteria, such as the number of iterations or the CPU time, can be used; 2. The value of \(q(\lambda ^k)\) is still needed (line 1) to obtain a valid lower bound. To obtain \(q(\lambda ^k)\), all subproblems are solved optimally for a given value of multipliers \(\lambda ^k\). The frequency of the search for the value \(q(\lambda ^k)\) is determined based on the criteria stated in point 1 above; 3. The search for feasible solutions is explained below.
Search for feasible solutions
Due to non-convexities caused by discrete variables, the relaxed constraints are generally not satisfied through coordination, even at convergence. Heuristics are thus inevitable; yet, they are the last step of the feasible-solution search procedure. Throughout all examples considered, following28 (as discussed in Supplementary Information), \(l_1\) absolute-value penalties penalizing constraint violations are used. After the total constraint violation reaches a small threshold value, a few subproblem solutions obtained by the Lagrangian Relaxation method are perturbed (e.g., see the heuristics within the accompanying CPLEX codes within28, which automatically select which subproblem solutions are to be adjusted) to eliminate the constraint violation and obtain a solution feasible with respect to the overall problem.
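For concreteness, one way such \(l_1\) penalties can be incorporated (a sketch following the absolute-value penalty construction of28; the penalty coefficient \(\rho > 0\) and the objective notation \(f\) are introduced here only for illustration) is to augment the relaxed problem as

$$\begin{aligned} \min _{x,y} \; f(x,y) + \lambda ^{\top } g(x,y) + \rho \cdot \sum _i |g_i(x,y)|, \end{aligned}$$

where each absolute value \(|g_i(x,y)|\) is linearized by introducing auxiliary variables \(z_i \ge g_i(x,y)\) and \(z_i \ge -g_i(x,y)\) and penalizing \(\rho \cdot \sum _i z_i\), so that the subproblems remain mixed-integer linear.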
Numerical testing
In this subsection, a series of examples is considered to illustrate different aspects of the SLBLR method. In “Demonstration of convergence of multipliers based on a small example with known optimal multipliers” section, a small example with known optimal Lagrangian multipliers is considered to test the new method as well as to compare how fast Lagrangian multipliers approach their optimal values relative to the Surrogate Lagrangian Relaxation26 and Incremental Subgradient25 methods. In “Generalized assignment problems” section, large-scale instances of generalized assignment problems (GAPs) of types D and E with 20, 40, and 80 machines and 1600 jobs from the OR-library (https://www-or.amp.i.kyoto-u.ac.jp/members/yagiura/gap/) are considered to demonstrate efficiency, scalability, robustness, and competitiveness of the method with respect to the best results available thus far in the literature. In “Stochastic job-shop scheduling with the consideration of scrap and rework” section, a stochastic version of a job-shop scheduling problem instance with 127 jobs and 19 machines based on Hoitomt et al.29 is tested. In “Multi-stage pharmaceutical scheduling” section, two instances of pharmaceutical scheduling with 30 and 60 product orders, 17 processing units, and 6 stages based on Kopanos et al.13 are tested.
For “Demonstration of convergence of multipliers based on a small example with known optimal multipliers” section and “Generalized assignment problems” section, SLBLR is implemented within CPLEX 12.10 by using a Dell Precision laptop with an Intel(R) Xeon(R) E-2286M CPU @ 2.40GHz with 16 cores and installed memory (RAM) of 32.0 GB. For “Stochastic job-shop scheduling with the consideration of scrap and rework” section and “Multi-stage pharmaceutical scheduling” section, SLBLR is implemented within CPLEX 12.10 by using a server with an Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz with 48 cores and installed memory (RAM) of 192.0 GB.
Demonstration of convergence of multipliers based on a small example with known optimal multipliers
To demonstrate convergence of multipliers, consider the following example (due to Bragin et al.30):
$$\begin{aligned}&\min _{x_1, x_2, x_3, x_4, x_5, x_6} \left\{ x_1 + 2x_2 +3x_3 +x_4+2x_5+3x_6\right\} , \end{aligned}$$
(23)
$$\begin{aligned}&s.t. \; x_1 + 3x_2 +5x_3 +x_4+3x_5+5x_6 \ge 26, \end{aligned}$$
(24)
$$\begin{aligned}&2x_1 + 1.5x_2 +5x_3 +2x_4+0.5x_5+x_6 \ge 16. \end{aligned}$$
(25)
As proved by Bragin et al.30, the optimal dual solutions are \(\lambda _1^* = 0.6\) and \(\lambda _2^* = 0.\) Inequality constraints are converted to equality constraints after introducing slack variables. In Fig. 2, the decrease of the corresponding distances from current multipliers to the optimal multipliers (\(\Vert \lambda ^k-\lambda ^*\Vert\)) is shown, and the SLBLR method is compared with the Incremental Subgradient method25 and the Surrogate Lagrangian Relaxation method26.

Figure 2. Results for “Demonstration of convergence of multipliers based on a small example with known optimal multipliers” section: comparison of SLBLR to 1. the Incremental Subgradient method (left) and 2. the Surrogate Lagrangian Relaxation method (right).
Within the SLBLR method, the equation (15) is used to detect divergence, and \(\zeta = \frac{1}{2}\) is used to set stepsizes within (21). In essence, only one hyperparameter was required, which has a quite simple explanation: “when the stepsize is ‘too large,’ cut the stepsize in half.” As demonstrated in Fig. 2, the SLBLR method converges fast, with \(\Vert \lambda ^k-\lambda ^*\Vert\) decreasing roughly along a straight line on a log-scale graph, suggesting that the rate of convergence is likely linear, as expected.
As for the Incremental Subgradient method, two hyperparameters are required: R and \(\delta\) (corresponding values used are shown in parentheses in the legend of Fig. 2 (left)). A trial-and-error analysis indicated that “acceptable” values are \(R=0.25\) and \(\delta = 24.\) Increasing or decreasing R to 0.5 and 0.125, respectively, does not lead to improvements. Likewise, increasing or decreasing \(\delta\) to 48 and 12, respectively, does not lead to improvements either. “Plateau” regions in the figure are caused by the fact that as stepsizes get smaller, a larger number of iterations is required for multipliers to travel the predetermined distance R; during these iterations, stepsizes are not updated and multipliers may oscillate around a neighborhood of the optimum without getting closer. While the above difficulty can be alleviated and convergence can be improved by using hyperparameters \(\tau\), \(\beta\), and \(R_l\) as reviewed in Supplementary Information, an even larger number of hyperparameters would be required.
As for the Surrogate Lagrangian Relaxation method, several pairs of hyperparameters (M and r) have been used as well (corresponding values used are shown in parentheses in the legend of Fig. 2 (right)); yet, the performance of Surrogate Lagrangian Relaxation does not exceed the performance of the SLBLR method.
Herein lies the advantage of the novel SLBLR method: the decision-based principle behind computing the “level” values. This is in contrast to the problem-dependent choice of hyperparameters R and \(\delta\) within the Subgradient-Level24 and Incremental Subgradient25 methods, and the choice of M and r within Surrogate Lagrangian Relaxation26,28 (see “Introduction” section and Supplementary Information for more detail).
Even after obtaining “appropriate” values of the aforementioned hyperparameters by using a trial-and-error procedure that entails effort, results obtained by Surrogate Lagrangian Relaxation26 and the Incremental Subgradient method25 do not match or beat those obtained by the SLBLR method. The specific reasons are 1. Heuristic adjustments of the “level” values are required24,25 based on multiplier “oscillation detection” or “significant descent” (for minimization of non-smooth functions). However, these rules do not detect whether multipliers “start diverging.” Moreover, oscillation of multipliers is a natural phenomenon when optimizing non-smooth functions, as discussed in “Introduction” section, since multipliers may zigzag/oscillate across ridges of the function, so multiplier “oscillation detection” may not necessarily warrant the adjustment of level values. On the other hand, multiplier “oscillation” is detected by checking whether multipliers traveled a (heuristically) predetermined distance R, so the divergence of multipliers can go undetected for a significant number of iterations (hence the “plateau” regions shown in Fig. 2 (left)), depending on the value of R. To the best of the authors’ knowledge, the subgradient- and surrogate-subgradient-based methods using Polyak’s stepsizes with the intention of achieving the geometric/linear convergence rate either require \(q(\lambda ^*)\), which is unavailable, or require multipliers to travel an infinite distance to guarantee convergence to the optimum \(\lambda ^*\)24. 2. While SLR avoids the need to estimate \(q(\lambda ^*)\), the geometric/linear convergence is only possible outside of a neighborhood of \(\lambda ^*\)26. Precisely for this reason, the convergence of multipliers within SLR with the corresponding stepsizing parameters \(M=30\) and \(r=0.01\) (as shown in Fig. 2 (right)) appears to follow closely the convergence within SLBLR up until iteration 50, after which the improvement tapers off.
Generalized assignment problems
To demonstrate the computational capability of the new method as well as to determine appropriate values for key hyperparameters \(\zeta\) and \(\nu\) while using standard benchmark instances, large-scale instances of GAPs are considered (formulation is available in subsection 4.2 of Supplementary Information). We consider 20, 40, and 80 machines with 1600 jobs (https://www-or.amp.i.kyoto-u.ac.jp/members/yagiura/gap/).
To determine values for \(\zeta\) within (21) and \(\nu\) within (22) to be used throughout the examples, several values are tested using GAP instance d201600. In Table 1, with fixed values of \(\nu = 2\) and \(s^0 = 0.02\), the best result (both in terms of the cost and the CPU time) is obtained with \(\zeta = 1/1.5\). With the value of \(\zeta = 1/4,\) the stepsize decreases “too fast,” thereby leading to a larger number of iterations and a much-increased CPU time. Likewise, in Table 2, with fixed values of \(\zeta = 1/1.5\) and \(s^0 = 0.02\), it is demonstrated that the best result (both in terms of the cost and the CPU time) is obtained with \(\nu = 2\). Empirical evidence here suggests that the method is stable for other values of \(\nu .\) The robustness with respect to initial stepsizes (\(s^0\)) is tested, and the results are demonstrated in Table 3. Multipliers are initialized by using LP dual solutions. The method’s performance is appreciably stable for the given range of initial stepsizes used (Table 3). SLBLR is also robust with respect to initial multipliers \(\lambda ^0\) (Table 4). For this purpose, the multipliers are initialized randomly by using the uniform distribution U[90, 110]. For this testing, the initial stepsize \(s^0=0.02\) was used. As evidenced from Table 4, the method’s performance is stable, exhibiting only a slight degradation of solution accuracy and an increase of the CPU time as compared to the case with multipliers initialized by using LP dual solutions.
To test the robustness as well as the scalability of the method across several large-scale GAP instances, six instances d201600, d401600, d801600, e201600, e401600, and e801600 are considered. SLBLR is compared with the Depth-First Lagrangian Branch-and-Bound (DFLBnB) method31, Column Generation32, and Very Large Scale Neighborhood Search (VLSNS)33, which, to the best of the authors’ knowledge, are the best methods for at least one of the above instances. For completeness, a comparison against Surrogate Absolute-Value Lagrangian Relaxation (SAVLR)28, which is an improved version of Surrogate Lagrangian Relaxation (SLR)26, is also performed. The latter SLR method26 has previously been demonstrated to be advantageous against other non-smooth optimization methods, as explained in “Introduction” section. Table 5 presents feasible costs and times (in seconds) for each method. The advantage of SLBLR is the ability to obtain optimal results across a wider range of GAP instances as compared to other methods. Even though the comparison in terms of the CPU time is not entirely fair, feasible-cost-wise, SLBLR decisively beats previous methods. For the d201600 instance, the results obtained by SLBLR and the Column Generation method32 are comparable. For instance d401600, SLBLR obtains a better feasible solution, and for instance d801600, the advantage over the existing methods is even more pronounced.
To the best of the authors’ knowledge, no other reported method obtained optimal results for instances d401600 and d801600. SLBLR outperforms SAVLR28 as well, thereby demonstrating that the fast convergence offered by the novel “level-based” stepsizing, with other things being equal, translates into better results as compared to those obtained by SAVLR, which employs the “contraction mapping” stepsizing28. Lastly, the methods developed in31,32,33 specifically target GAPs, whereas the SLBLR method developed in this paper has broader applicability.
Stochastic job-shop scheduling with the consideration of scrap and rework
To demonstrate the computational capability of the method to solve large-scale stochastic MILP problems, a job-shop scheduling problem is considered. Within a job shop, each job requires a specific sequence of operations, each with a given processing time. Operations are performed by a set of eligible machines. To avoid late shipments, expected tardiness is minimized. Limited machine capacity adds a layer of difficulty since multiple “individual-job” subproblems are considered together, competing for limited resources (machines). Another difficulty arises because of uncertainties, including processing times34,35,36,37,38,39 and scrap40,41,42. Re-manufacturing of one part may disrupt the overall schedule within the entire job shop, thereby leading to unexpectedly high delays in production.

Figure 3. The results for “Stochastic job-shop scheduling with the consideration of scrap and rework” section are illustrated. SLBLR performs more than two orders of magnitude faster than branch-and-cut implemented in CPLEX.
In this paper, the data from Hoitomt et al.29 are modified by increasing the number of operations of several jobs (e.g., from 1 to 6) and by decreasing the capacities of a few machines; the data are given in Tables S1 and S2. The stochastic version of the problem with the consideration of scrap and rework is available within the manuscript by Bragin et al.42. With these changes, the running time of CPLEX spans multiple days, as demonstrated in Fig. 3. In contrast, within the new method, a solution of the same quality as that obtained by CPLEX is obtained within roughly 1 hour of CPU time. The new method is operationalized by relaxing machine capacity constraints42 and coordinating the resulting job subproblems; at convergence, the beginning times of several jobs are adjusted by a few time periods to remove the remaining machine capacity constraint violations.
Multi-stage pharmaceutical scheduling
To demonstrate the capability of the method to solve scheduling problems complicated by the presence of sequence-dependent setup times, a multi-stage pharmaceutical scheduling problem proposed by Kopanos et al.13 is considered. Setup times vary based on the sequencing of products on each unit (machine). Scheduling in this context is combinatorial in the number of product orders, units, and stages. The new method is operationalized by relaxing constraints that couple individual processing units, namely, assignment and processing/setup time constraints (constraints (39)–(41) from Supplementary Information). The results obtained by SLBLR and Branch-and-Cut are demonstrated in Fig. 4.
With a relatively small number of product orders (30), an optimal solution with a feasible cost of 54.97 was found by CPLEX within 1057.78 seconds. The optimality is verified by running CPLEX until the gap is 0%; it took 171993.27 seconds to verify optimality. SLBLR takes a slightly longer time to obtain the same solution: 1647.35 seconds (Fig. 4 (left)). In contrast, with 60 product orders, CPLEX no longer obtains good solutions in a computationally efficient manner; a solution with a feasible cost of 55.98 is obtained after 1,000,000 seconds. Within SLBLR, a solution with a feasible cost of 55.69 is obtained within 1978.04 seconds. This constitutes more than two orders of magnitude of improvement over CPLEX, as demonstrated in Fig. 4 (right; log scale). When the number of products is doubled, CPLEX’s performance deteriorates drastically, while the performance of SLBLR scales well.

Figure 4. The results for “Multi-stage pharmaceutical scheduling” section with 30 product orders (left) and 60 product orders (right) are illustrated.