
Mixed Integer Programming Workshop 2023

May 22-25, 2023

University of Southern California

About

The Mixed Integer Programming (MIP) Workshop is a single-track workshop highlighting the latest trends in integer programming and discrete optimization, with speakers chosen by invitation.

The 2023 edition of the workshop will be the twentieth in the MIP series. Links to past editions can be found here.


Program (including abstracts and slides)

Monday, May 22

8:30 - 9:10

Registration and refreshments

9:10 - 9:30

Welcome remarks

9:30 - 10:15

Jeff Linderoth – Matrix completion over GF(2)

We discuss integer-programming-based approaches to low-rank matrix completion over the finite field of two elements. We derive an explicit description of the convex hull of an individual matrix element in the decomposition and use it as the basis of a new formulation. We also derive valid inequalities involving multiple matrix elements. We present computational results showing the superiority of the new formulation over both a natural formulation based on McCormick inequalities with integer-valued variables and an extended disjunctive formulation arising from the parity polytope. Joint work with Jim Luedtke, Daniel Pimentel-Alarcon, and Akhilesh Soni, all of UW-Madison. (slides)
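As a small illustration of the baseline the new formulation is compared against: for 0/1 variables, the McCormick inequalities linearize a product z = x·y exactly at integer points. A minimal sketch (a textbook check, not the paper's GF(2) formulation):

```python
from itertools import product

def mccormick_feasible(x, y, z):
    """McCormick inequalities for z = x*y with x, y in [0, 1]."""
    return z >= x + y - 1 and z <= x and z <= y and z >= 0

# At 0/1 points the McCormick relaxation recovers the product exactly:
for x, y in product((0, 1), repeat=2):
    assert [z for z in (0, 1) if mccormick_feasible(x, y, z)] == [x * y]
```

At fractional points the same inequalities only bound the product, which is why integer-valued variables are needed in the natural formulation.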

10:15 - 10:45

Break

10:45 - 11:20

Silvia Di Gregorio – Partial optimality in cubic correlation clustering

The higher-order correlation clustering problem is an expressive model, and recently local search heuristics have been proposed for several applications. Certifying optimality, however, is NP-hard and practically hampered already by the complexity of the problem statement. Here, we focus on establishing partial optimality conditions for the special case of complete graphs and cubic objective functions. In addition, we define and implement algorithms for testing these conditions and examine their effect numerically on two datasets. (slides)

11:20 - 11:55

Mohit Singh – Determinant maximization and matroid intersection

In the determinant maximization problem, given a collection of vectors, we aim to pick a subset that maximizes the determinant of a natural matrix associated with these vectors. This abstract problem captures problems in multiple areas, including machine learning, statistics, convex geometry, the Nash social welfare problem from algorithmic game theory, and network design. We will survey the known results and techniques for the problem, which include polynomial inequalities, sparse solutions to convex programs, and matroid intersection algorithms.
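For concreteness, on a toy instance the problem can be solved by brute force: pick k vectors maximizing the determinant of their Gram matrix. This is only a hypothetical illustration of the problem statement, not any of the surveyed techniques; `gram_det3` is a helper name introduced here:

```python
import random
from itertools import combinations

def gram_det3(vs):
    """Determinant of the 3x3 Gram matrix of three vectors (cofactor expansion)."""
    g = [[sum(a * b for a, b in zip(vi, vj)) for vj in vs] for vi in vs]
    return (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
            - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
            + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))

random.seed(1)
vectors = [[random.gauss(0, 1) for _ in range(4)] for _ in range(6)]

# Brute force: pick 3 of the 6 vectors maximizing the Gram determinant.
best_S, best_det = max(
    ((S, gram_det3([vectors[i] for i in S])) for S in combinations(range(6), 3)),
    key=lambda t: t[1])
```

Enumeration is exponential in general, which is what motivates the relaxation and matroid-based techniques the talk surveys.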

11:55 - 2:00

Lunch

2:00 - 2:35

Nick Sahinidis – A combined linear and nonlinear presolve algorithm for integer optimization problems

As the experience of many commercial LP solvers demonstrates, presolve methods may lead to orders-of-magnitude speed-ups on many difficult problems. We propose a presolve procedure for general mixed-integer nonlinear optimization problems that aims to reduce model size and the amount of nonlinearity while preserving convexity and global optimality. In particular, we introduce a combined LP/NLP presolve, which iteratively operates on both the linear and nonlinear components of a nonlinear problem. In addition to adapting classical linear presolve techniques to our framework, we propose novel presolve methods for reducing linear inequality systems and fixing nonlinear variables. Computational results demonstrate that the proposed approach reduces solution times over a large collection of benchmarks.

2:35 - 3:10

Jan Kronqvist – Strong and computationally efficient formulations of disjunctive constraints

Disjunctive constraints appear naturally in a large variety of different mixed-integer optimization problems. Such problems include classical OR applications but also applications in AI and ML, e.g., clustering and optimization over ReLU DNNs. The classical modeling approaches for disjunctive constraints can result in very weak continuous relaxations or strong but computationally very challenging relaxations. In some circumstances, it is straightforward to construct a model whose continuous relaxation forms the convex hull over a single disjunctive constraint through so-called extended formulations. However, it is well-known in the MIP community that such extended formulations tend to perform poorly computationally. In this talk we discuss a new hierarchy of formulations, called p-split formulations, which combines the advantages of big-M and convex hull formulations. The p-split formulations can easily be adjusted toward a stronger convex relaxation at the expense of a more complex formulation, or toward a simpler formulation. The advantage of the simpler formulations is mainly that the subproblems in branch-and-bound can be solved much faster, but at the expense of a potentially larger BB tree. Technical details and theoretical properties of the formulations are presented along with computational results.
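A toy illustration of the trade-off between big-M and convex-hull formulations, using the single-variable disjunction x ∈ [0,1] ∪ [4,5]; this is a standard textbook example, not the p-split construction itself:

```python
def bigM_feasible(x, y, M=100.0):
    """Continuous relaxation of a big-M model for x in [0,1] (y=0) or [4,5] (y=1)."""
    return 0 <= y <= 1 and 0 <= x <= 5 and x <= 1 + M * y and x >= 4 - M * (1 - y)

def hull_feasible(x, y):
    """Continuous relaxation of the convex-hull (extended) formulation."""
    return 0 <= y <= 1 and 4 * y <= x <= 1 + 4 * y

# The big-M relaxation admits fractional points that the hull formulation cuts off:
assert bigM_feasible(5.0, 0.5) and not hull_feasible(5.0, 0.5)

# Both formulations agree at the integer points of the disjunction:
assert bigM_feasible(0.5, 0) and hull_feasible(0.5, 0)
assert bigM_feasible(4.5, 1) and hull_feasible(4.5, 1)
```

The hull formulation is tighter but, for general disjunctions, requires extra variables; p-split formulations interpolate between these two extremes.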

3:10 - 3:40

Break

3:40 - 4:15

Phebe Vayanos – Learning optimal classification trees robust to distribution shifts

We consider the problem of learning classification trees that are robust to distribution shifts between training and testing/deployment data. This problem arises frequently in high-stakes settings such as public health and social work, where data is often collected using self-reported surveys that are highly sensitive to, e.g., the framing of the questions, the time and place of the survey, and the level of comfort the interviewee has in sharing information with the interviewer. We propose a method for learning optimal robust classification trees based on mixed-integer robust optimization technology. In particular, we demonstrate that the problem of learning an optimal robust tree can be cast as a single-stage mixed-integer robust optimization problem with a highly nonlinear and discontinuous objective. We reformulate this problem equivalently as a two-stage linear robust optimization problem, for which we devise a tailored solution procedure based on constraint generation. We evaluate the performance of our approach on numerous publicly available datasets and compare it to a regularized, non-robust optimal tree. We show an increase of up to 14.16% in worst-case accuracy and of up to 4.72% in average-case accuracy across several datasets and distribution shifts from using our robust solution in comparison to the non-robust one.

4:15 - 4:50

Ryan Cory-Wright – Optimal low-rank matrix completion: Semidefinite relaxations and eigenvector disjunctions

Low-rank matrix completion consists of computing a matrix of minimal complexity that recovers a given set of observations as accurately as possible and has numerous applications, including product recommendation. Unfortunately, existing methods for solving low-rank matrix completion are heuristics that typically identify high-quality solutions, but without any optimality guarantees. We reexamine matrix completion with an optimality-oriented eye, by reformulating low-rank problems as convex problems over the non-convex set of projection matrices and implementing a disjunctive branch-and-bound scheme that solves them to certifiable optimality. Further, we derive a novel and often tight class of convex relaxations by decomposing a low-rank matrix as a sum of rank-one matrices and incentivizing, via a Shor relaxation, that each two-by-two minor in each rank-one matrix has determinant zero. Across a suite of numerical experiments, our new convex relaxations decrease the optimality gap by two orders of magnitude compared to existing attempts. Moreover, we showcase the performance of our disjunctive branch-and-bound scheme and demonstrate that it solves matrix completion problems over 150x150 matrices to certifiable optimality in hours, constituting an order of magnitude improvement on the state-of-the-art.
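The rank-one decomposition above relies on the classical fact that every 2×2 minor of a rank-one matrix has determinant zero; a quick numerical check (illustrative only, not the Shor relaxation itself):

```python
from itertools import combinations
import random

random.seed(0)
u = [random.gauss(0, 1) for _ in range(4)]
v = [random.gauss(0, 1) for _ in range(5)]
X = [[ui * vj for vj in v] for ui in u]      # a rank-one 4x5 matrix u v^T

# Every 2x2 minor of a rank-one matrix vanishes:
for i1, i2 in combinations(range(4), 2):
    for j1, j2 in combinations(range(5), 2):
        minor = X[i1][j1] * X[i2][j2] - X[i1][j2] * X[i2][j1]
        assert abs(minor) < 1e-10
```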

7:00

Dinner

Tuesday, May 23

8:30 - 9:20

Breakfast

9:20 - 9:30

Initial remarks

9:30 - 10:15

Sophie Huiberts – Smoothed analysis of the simplex method

The simplex method is an important algorithm for solving linear optimization problems. The algorithm is very efficient in practice, but theoretical explanations of this fact are lacking. In this talk, I will describe smoothed analysis, one of the primary theoretical frameworks for analysing the simplex method, and present upper and lower bounds on the running time. (slides)

10:15 - 10:45

Break

10:45 - 11:20

Sourour Elloumi – Solving polynomial unconstrained binary optimization problems through reformulation

We consider the optimization of multivariate polynomials in binary variables. We discuss approaches to solve this problem, based on a reformulation into an equivalent problem, either linear or quadratic in mixed variables. In the linear reformulation approach, the resulting problem is directly submitted to a solver. In the quadratic approach, we further reformulate the quadratic problem into a convex quadratic problem that we submit to a solver. In all cases, the reformulations must be finely tuned to improve the numerical solution. We present our latest results in these approaches. In particular, we show that a strong linear reformulation can be obtained by solving a MILP. We also show that a similar MILP can be used in the quadratic approach. Based on joint work with Amélie Lambert, Arnaud Lazare and Zoé Verchère.
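The linear reformulation mentioned above replaces each monomial by an auxiliary variable. A minimal sketch of the basic step: the standard linearization of a binary monomial is exact at 0/1 points. This is the textbook linearization, not the strengthened MILP-based reformulation of the talk:

```python
from itertools import product

def lin_feasible(xs, y):
    """Standard linearization of y = x1*x2*...*xn for binary variables:
    y <= xi for each i, and y >= sum(xi) - (n - 1), y >= 0."""
    n = len(xs)
    return all(y <= xi for xi in xs) and y >= sum(xs) - (n - 1) and y >= 0

# The linearization is exact at every 0/1 point:
for xs in product((0, 1), repeat=3):
    monomial = xs[0] * xs[1] * xs[2]
    assert [y for y in (0, 1) if lin_feasible(xs, y)] == [monomial]
```

The continuous relaxation of this linearization can be weak, which is why the talk's tuned reformulations matter in practice.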

11:20 - 11:55

Ed Klotz – Graph-based approaches to solving binary quadratic programs

Binary quadratic and quadratically constrained programs (BQPs and BQCPs) have some unique characteristics among integer programs that have motivated various specialized strategies to solve them more efficiently. While they can be reformulated as binary integer programs (BIPs) or unconstrained binary quadratic programs (QUBOs), examination of the original binary quadratic formulation before any such transformations can help improve performance. Graphs can often facilitate this process. The product graph from Padberg's Boolean Quadric Polytope paper in 1989 is probably the best-known example. Padberg's work focused on deriving cuts from the product graph and just the quadratic objective of the BQP. In this talk we will consider incorporating constraints in the BQP or BQCP to derive cuts. We will then consider some other graph-based approaches, and illustrate with examples on some challenging publicly available problems. (slides)

11:55 - 2:00

Lunch

2:00 - 2:35

Margarita Castro – Markov chain-based policies for multi-stage stochastic integer linear programming with an application to disaster relief logistics

We introduce an aggregation framework to address multi-stage stochastic programs with mixed-integer state variables and continuous local variables (MSILPs). Our aggregation framework imposes additional structure to the integer state variables by leveraging the information of the underlying stochastic process, which is modeled as a Markov chain (MC). We present a novel branch-and-cut algorithm integrated with stochastic dual dynamic programming as an exact solution method to the aggregated MSILP, which can also be used in an approximation form to obtain dual bounds and implementable feasible solutions. Moreover, we apply two-stage linear decision rule (2SLDR) approximations and propose MC-based variants to obtain high-quality decision policies with significantly reduced computational effort. We test the proposed methodologies in an MSILP model for hurricane disaster relief logistics planning. Our empirical evaluation compares the effectiveness of the various proposed approaches and analyzes the trade-offs between policy flexibility, solution quality, and computational effort. Specifically, the 2SLDR approximation yields provable high-quality solutions for our test instances supported by the proposed bounding procedure. (slides)

2:35 - 3:10

Dabeen Lee – Non-smooth and robust submodular maximization

We study the problem of maximizing a continuous DR-submodular function that is not necessarily smooth. We prove that the continuous greedy algorithm achieves a $(1-1/e)\,\mathrm{OPT}-\epsilon$ guarantee when the function is monotone and Hölder-smooth, meaning that it admits a Hölder-continuous gradient. For functions that are non-differentiable or non-smooth, we propose a variant of the mirror-prox algorithm that attains a $(1/2)\,\mathrm{OPT}-\epsilon$ guarantee. We apply our algorithmic frameworks to robust submodular maximization and distributionally robust submodular maximization under Wasserstein ambiguity. In particular, the mirror-prox method applies to robust submodular maximization to obtain a single feasible solution whose value is at least $(1/2)\,\mathrm{OPT}-\epsilon$. For distributionally robust maximization under Wasserstein ambiguity, we deduce and work over a submodular-convex maximin reformulation whose objective function is Hölder-smooth, for which we may apply both the continuous greedy and the mirror-prox algorithms. This is joint work with Duksang Lee (KAIST) and Nam Ho-Nguyen (The University of Sydney).

3:10 - 3:40

Break

3:40 - 4:30

Computational competition session

5:00

Poster session

Wednesday, May 24

8:30 - 9:20

Breakfast

9:20 - 9:30

Initial remarks

9:30 - 10:15

Igor Pak – Integer points in polytopes are hard to find

I will give a quick survey of a series of papers (joint with Danny Nguyen) on decision and counting problems in integer programming with multiple quantifiers. I will then discuss applications to two polynomial optimization problems on integer points in polytopes. (slides)

10:15 - 10:45

Break

10:45 - 11:20

Saumya Sinha – Relaxation and duality for multiobjective integer programming

Multiobjective integer programs (MOIPs), as the name suggests, simultaneously optimize multiple objective functions over a set of linear constraints and integer variables. Relaxations of MOIPs provide bounds on their solutions and play an important role in solution algorithms. In this talk, we present a Lagrangian relaxation of an MOIP and discuss its properties. Specifically, we show that there exist sparse Lagrangian multipliers that satisfy a complementary slackness property and generate tight relaxations at supported solutions. At unsupported solutions, where the convex hull relaxation is not tight either, we show that a Lagrangian relaxation improves upon the convex hull bounds. We use the Lagrangian relaxation to define a Lagrangian dual of an MOIP and establish weak duality. The dual is strong at supported solutions under certain conditions on the primal feasible region which are identical to the single-objective case. Finally, we generalize the integer programming value function to MOIPs and use it to formulate a set-valued superadditive dual that is strong at supported solutions.
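For intuition, the weak-duality property discussed above generalizes the single-objective case, where any nonnegative multiplier yields a valid bound. A brute-force check on a toy single-objective knapsack (illustrative only; the talk concerns the multiobjective generalization):

```python
from itertools import product

values, weights, capacity = [6, 5, 4, 3], [5, 4, 3, 2], 7

def lagrangian_bound(lam):
    """Dualize the knapsack constraint with multiplier lam >= 0:
    max over binary x of v.x + lam * (capacity - w.x)."""
    return max(
        sum(v * b for v, b in zip(values, bits))
        + lam * (capacity - sum(w * b for w, b in zip(weights, bits)))
        for bits in product((0, 1), repeat=len(values)))

# Exact optimum by enumeration.
opt = max(
    sum(v * b for v, b in zip(values, bits))
    for bits in product((0, 1), repeat=len(values))
    if sum(w * b for w, b in zip(weights, bits)) <= capacity)

# Weak duality: every nonnegative multiplier yields an upper bound.
for lam in (0.0, 0.5, 1.0, 2.0):
    assert lagrangian_bound(lam) >= opt
```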

11:20 - 11:55

Cheng Guo – Copositive duality for discrete energy markets

Optimization problems with discrete decisions are nonconvex and thus lack strong duality, which limits the usefulness of tools such as shadow prices. Burer (2009) showed that mixed-binary quadratic programs can be written as completely positive programs, which are convex. Completely positive reformulations of discrete optimization problems therefore have strong duality if a constraint qualification is satisfied. We apply this perspective by writing unit commitment in power systems as a completely positive program, and then using the dual copositive program to design new pricing mechanisms. We show that the mechanisms are revenue-adequate and, under certain conditions, support a market equilibrium. To the best of our knowledge, one of our mechanisms is the first equilibrium-supporting scheme that uses only uniform prices. To facilitate implementation, we also design a cutting plane algorithm for solving copositive programs exactly, which we further speed up via a second-order cone programming approximation. We provide numerical examples to illustrate the potential benefits of the new mechanisms and algorithms. (slides)

11:55 - 2:00

Lunch

2:00 - 2:35

Ksenia Bestuzheva – Perspective cuts for generalized on/off constraints

Strong convex relaxations that leverage problem structure are crucial for the success of global mixed-integer nonlinear programming algorithms. We present a method for generating perspective cuts for extended classes of on/off constraints. The first generalization allows the construction of perspective cuts for nonconvex constraints defined by semi-continuous variables, that is, variables whose domain is reduced to a singleton set when a corresponding binary indicator variable is equal to zero. This technique requires a valid linear inequality, which is then strengthened by the addition of a term dependent on the indicator variable. A further generalization extends the method to constraints defined by variables whose domain is a non-singleton interval when the indicator is equal to zero. A detailed computational study evaluates the performance impact of classic perspective cuts, as well as the impact of perspective cuts for generalized structures. This is joint work with Ambros Gleixner and Stefan Vigerske.
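A minimal sketch of a classic perspective cut for the simplest on/off structure, z ≥ x² with x forced to 0 when the indicator y is 0: the cut z ≥ 2x̄x − x̄²y at a point x̄ is valid at every feasible point and tightens the continuous relaxation. This illustrates the classic case, not the generalizations of the talk:

```python
import random

def perspective_cut(xbar):
    """Perspective cut for z >= x^2 with indicator y (x = 0 when y = 0):
    z >= 2*xbar*x - xbar**2 * y (small tolerance for floating point)."""
    return lambda x, y, z: z >= 2 * xbar * x - xbar**2 * y - 1e-9

random.seed(0)
cut = perspective_cut(1.5)

# Validity: the cut holds at every feasible point of the on/off set.
for _ in range(1000):
    y = random.choice((0, 1))
    x = 0.0 if y == 0 else random.uniform(-3.0, 3.0)
    z = x * x + random.uniform(0.0, 1.0)   # any z >= x^2 is feasible
    assert cut(x, y, z)

# Strengthening: (x, y, z) = (1.5, 0.5, 2.25) satisfies the weak constraints
# z >= x^2 and x <= 10*y, but violates the perspective cut.
assert not cut(1.5, 0.5, 2.25)
```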

2:35 - 3:10

Alper Atamtürk – TBD

3:10 - 3:40

Break

3:40 - 4:15

Christian Tjandraatmadja – Network-aware scheduling of large ML workloads

Given the scale of state-of-the-art models in machine learning, they are often trained in parallel across multiple hardware accelerators (GPUs or TPUs) that must communicate with each other. While smaller models can rely on dedicated links, larger models may partially need to use the broader data center network, where congestion can cause the training to stall and waste resources. To reduce the chance of network congestion, we can optimize the assignment of accelerator resources with network conditions in mind, which can be formulated as a generalized quadratic assignment problem with capacity constraints and other requirements. We discuss a MIP-based approach and specialized heuristics to solve this problem from a practical perspective.

4:15 - 4:50

Alfredo Torrico – Two-sided assortment optimization for matching markets

We study the two-sided assortment problem recently introduced by [Rios et al. 2021]. A platform must choose an assortment of profiles for each user in each period. Users can like or dislike as many profiles as they want, and a match occurs if two users see and like each other, potentially in different periods. The platform's goal is to maximize the expected number of matches generated. We show that the problem is NP-hard, even if we consider two periods and only one-directional sequential matches (i.e., users on one side of the market reach out first, and then the ones on the other side respond). Given the complexity of the problem, we focus on the case with two periods and provide algorithms and performance guarantees for different variants of the problem. First, we show that if matches can only be formed sequentially (i.e., users cannot see each other in the same period), there is an approximation guarantee of 1/3, which can be improved to 1−1/e if we restrict to one-directional matches. Second, we show that when we allow simultaneous matches in the first period, there is an approximation guarantee of 1/4, which can be improved to 1/2 if sequential matches are one-directional. Finally, we show that the general problem with simultaneous and sequential matches is more challenging, as the objective function is no longer submodular. Nevertheless, we provide performance guarantees for some specific cases, and we use data from our industry partner (a dating app in the US) to numerically show that the loss from not considering simultaneous matches in the second period when making first-period decisions is negligible.

5:00

Business meeting

Thursday, May 25

8:30 - 9:15

Breakfast

9:15 - 9:30

Initial remarks

9:30 - 10:15

Matthias Walter – Hypergraphs, polyhedra and algorithms for polynomial optimization

Mixed-integer nonlinear optimization problems involving polynomials can be reformulated by creating an auxiliary variable for every monomial and convexifying the product constraint of each such variable. This gives rise to multilinear polytopes. In the talk I will describe this relationship, summarize what is known about these polytopes, and then focus on known classes of inequalities. The emphasis will be on separation algorithms, in order to use these inequalities as cutting planes. Finally, I will present some insights from a prototype implementation. The presented new results are joint work with Alberto Del Pia.

10:15 - 10:45Break
10:45 - 11:20

Bistra Dilkina – Searching large neighborhoods for integer linear programs with contrastive learning

Large Neighborhood Search (LNS) is an effective heuristic algorithm for finding high-quality solutions of combinatorial optimization problems, including large-scale Integer Linear Programs (ILPs). It starts with an initial solution to the problem and iteratively improves it by searching a large neighborhood around the current best solution. LNS relies on destroy heuristics to select neighborhoods to search in, by un-assigning a subset of the variables and then re-optimizing their assignment. We focus on designing effective and efficient destroy heuristics in LNS for ILPs. Local Branching (LB) is a heuristic that selects the neighborhood that leads to the largest improvement over the current solution in each iteration of LNS (the optimal subset of variables to ‘destroy’ within the Hamming ball of the incumbent solution). LB is often slow since it needs to solve an ILP of the same size as the input ILP. We propose a novel ML-based LNS for ILPs, namely CL, that uses contrastive learning to learn to imitate the LB heuristic. We not only use the optimal subsets provided by LB as the expert demonstration but also leverage intermediate solutions and perturbations to collect sets of positive and negative samples. We use graph attention networks and a rich set of features to further improve performance. Empirically, CL outperforms state-of-the-art ML and non-ML approaches at different runtime cutoffs in terms of multiple metrics, including the primal gap, the primal integral, the best-performing rate, and the survival rate, and furthermore achieves good generalization performance. (slides)
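The destroy-and-repair loop described above can be sketched on a toy knapsack, with a random destroy heuristic and exact re-optimization of the destroyed variables (illustrative only; not the LB or CL heuristics of the talk):

```python
import itertools
import random

def lns_knapsack(values, weights, capacity, iters=50, destroy_size=3, seed=0):
    """Toy LNS: repeatedly destroy a few variables at random and
    re-optimize them exactly, keeping the rest of the incumbent fixed."""
    rng = random.Random(seed)
    n = len(values)
    x = [0] * n                                    # trivial feasible start
    for _ in range(iters):
        free = rng.sample(range(n), destroy_size)  # destroy heuristic
        fixed_w = sum(weights[i] * x[i] for i in range(n) if i not in free)
        fixed_v = sum(values[i] * x[i] for i in range(n) if i not in free)
        best = [x[i] for i in free]
        best_v = fixed_v + sum(values[i] * x[i] for i in free)
        for bits in itertools.product((0, 1), repeat=destroy_size):  # repair
            w = fixed_w + sum(weights[i] * b for i, b in zip(free, bits))
            v = fixed_v + sum(values[i] * b for i, b in zip(free, bits))
            if w <= capacity and v > best_v:
                best, best_v = list(bits), v
        for i, b in zip(free, best):
            x[i] = b
    return x, sum(values[i] * x[i] for i in range(n))

values, weights, capacity = [6, 5, 4, 3, 2], [5, 4, 3, 2, 1], 8
x, obj = lns_knapsack(values, weights, capacity)
```

In real LNS for ILPs, the repair step calls a MIP solver on the sub-ILP over the destroyed variables, and the choice of which variables to destroy (here, random) is exactly what LB and CL aim to improve.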

11:20 - 11:55

Bartolomeo Stellato – Learning for decision-making under uncertainty

We present two data-driven methods to learn uncertainty sets in decision-making problems affected by uncertain data. In the first part of this talk, we introduce mean robust optimization (MRO), a general framework that uses machine learning clustering to bridge between robust and Wasserstein distributionally robust optimization. MRO builds uncertainty sets from clustered data rather than directly from observed data points, thereby significantly reducing problem size. We show finite-sample performance guarantees and explicitly control the potential pessimism introduced by any clustering procedure. We illustrate the benefits of our framework on several numerical examples, obtaining multiple orders of magnitude computational speedups with little-to-no effect on the solution quality. In the second part of the talk, we introduce a learning technique to automatically reshape and resize the uncertainty sets in robust optimization. This method relies on differentiating the solution of robust optimization problems with respect to the parameters defining the uncertainty sets. Our approach is very flexible, and it can learn a wide variety of uncertainty sets while preserving tractability. We implemented our work in LROPT, a software package to naturally express decision-making problems affected by uncertain data and to automatically learn the corresponding robust optimization formulations. Numerical experiments in portfolio optimization, optimal control, and inventory management show that our method outperforms traditional approaches in robust optimization in terms of out-of-sample performance and constraint satisfaction guarantees.

11:55 - 12:30

Awards and closing remarks

Poster session

The 2023 MIP Workshop included a poster session reserved for students.

Presented posters

Kristin Braun A Computational Study for Piecewise Linear Relaxations of Mixed-Integer Nonlinear Programs (Honorable Mention)
Justin Dumouchelle Neur2SP: Neural Two-Stage Stochastic Programming
Oscar Guaje Curated Data Generation for Machine Learning-based Cut Selection
Dahye Han Regularized MIP Model for Optimal Power Flow with Battery
Drew Horton MIP with an equity twist
Nan Jiang ALSO-X#: Better Convex Approximations for Distributionally Robust Chance Constrained Programs
Nathan Justin Learning Optimal Classification Trees Robust to Distribution Shifts
Sumin Kang Distributionally ambiguous stochastic dual dynamic integer programming
Aysenur Karagoz On fairness-seeking formulations of the liver transplant allocation problem
Sean Kelley Warm Starting Series of Mixed Integer Linear Programs with Fixed Dimensions via Disjunctive Cuts
Yongchun Li On the Dantzig-Wolfe Relaxation of Rank Constrained Optimization: Exactness, Rank Bounds, and Algorithms
Peijing Liu A feasible directions method for convex quadratic optimization problems with indicators
Sean Lo Optimal Low-Rank Matrix Completion: Semidefinite Relaxations and Eigenvalue Disjunctions
Bardhyl Miftari GBOML: A Modeller for Structured MILP
Sepideh Mohammadi Mixed integer programming formulations for the hop-bounded Steiner tree problem with budget limit
Jai Moondra Which Lp norm is the fairest? Approximations for fair facility location across all “p”
Angela Morrison On Combinatorial Algorithms and Circuit Augmentation for Pseudoflows (Honorable Mention)
Tu Nguyen On Distributed Exact Sparse Linear Regression over Networks
Prachi Shah Learning to Branch from Optimal Trees
Maral Shahmizad Political districting to minimize county splits
Akhilesh Soni Multi-Commodity Network Design using Lagrangian Decomposition
Paul Strang Influence branching for learning to solve mixed integer programs online
Shengding Sun Aggregations of quadratic inequalities and hidden hyperplane convexity
Bo Tang PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for Linear and Integer Programming
Jie Wang Variable Selection for Kernel Two-Sample Tests
Linchuan Wei On the Convex Hull of Convex Quadratic Optimization with Indicators
Noah Weninger A Fast Combinatorial Algorithm for the Bilevel Knapsack Problem with Interdiction Constraints (Most Popular Poster and Best Poster)
Yu Xie Robust solutions to mixed integer bilevel linear optimization with bounded rationality
Jingye Xu Sensitivity analysis of general binary integer programming
Qing Ye Second-Order Conic and Polyhedral Approximations of the Exponential Cone: Application to Mixed-Integer Exponential Conic Programs
Yiran Zhu Integer programming for the generalized envy-free equilibrium pricing problem

Computational competition

The 2023 MIP Workshop hosted a competition for research teams to showcase their practical, computational prowess.

All details can be found here.

Important dates

  • October 2022: Publication of topic and rules
  • November 2022: Completion of public instance series
  • January 31st, 2023: Registration deadline for participation
  • January 31st, 2023: Submission deadline for hidden benchmarks
  • March 15th, 2023 (extended from March 1st): Submission deadline for report and code (10pm Pacific Daylight Time)
  • April 15th, 2023 (extended from April 5th): Notification of results
  • May 22nd, 2023: Presentations of the finalists at the MIP Workshop

Sponsors

USC
SAS
DIMACS
Google
Mosek
AMPL
Fico
Gurobi
The Optimization Firm

Organization

Credits

Logo by © Julia Silbermann (fonts: Cornerstone by Zac Freeland, Oraqle Script by Union Hands). Website hosted on GitHub Pages; layout based on Skeleton, using the Raleway font.