Convex optimization

Convex optimization is a subfield of mathematical optimization. Given a real vector space X together with a convex, real-valued function

f:\mathcal{X}\to \mathbb{R}

defined on a convex subset \mathcal{X} of X, the problem is to find a point x^* in \mathcal{X} at which f attains its smallest value, i.e., a point x^* such that f(x^*) \le f(x) for all x \in \mathcal{X}.
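
For instance, with X = \mathbb{R} and \mathcal{X} = [0,1], consider the convex function f(x) = (x-3)^2. Since f is decreasing on [0,1], its smallest value over \mathcal{X} is attained at the right endpoint:

x^* = 1, \qquad f(x^*) = 4 \le f(x) \text{ for all } x \in [0,1].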

The convexity of \mathcal{X} and f makes the powerful tools of convex analysis applicable: the Hahn–Banach theorem and the theory of subgradients lead to a particularly satisfying and complete theory of necessary and sufficient conditions for optimality, a duality theory comparable in completeness to that for linear programming, and effective computational methods. Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, statistics, and finance. Modern computing power has made convex optimization problems roughly as tractable as linear programs.

Theory

The following statements hold for any convex optimization problem:

  • if a local minimum exists, then it is a global minimum.
  • the set of all (global) minima is convex.
  • if the function is strictly convex, then there is at most one minimizer.

The theoretical framework for convex optimization uses the facts above in conjunction with notions from convex analysis such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas's lemma.
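
To see why the first statement holds, suppose x is a local minimum and some y \in \mathcal{X} had f(y) < f(x). By convexity, for every \theta \in (0,1]

f(\theta y + (1-\theta)x) \le \theta f(y) + (1-\theta) f(x) < f(x),

so feasible points arbitrarily close to x (take \theta small) would have strictly smaller objective value, contradicting local minimality.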

Standard form

Standard form is the usual and most intuitive form of describing a convex optimization problem. It consists of the following three parts:

  • A convex function f: \mathbb{R}^n \to \mathbb{R} to be minimized over the variable x
  • Inequality constraints of the form g_i(x) \leq 0, where the functions g_i are convex
  • Equality constraints of the form h_i(x) = 0, where the functions h_i are affine. In practice, the terms "linear" and "affine" are often used interchangeably. Taken together, such constraints can be expressed in the form Ax = b, where A is a matrix and b is a vector.

A convex optimization problem is thus written as

minimize f(x) subject to

g_i(x) \leq 0, \quad i = 1,\dots,m
h_i(x) = 0, \quad i = 1, \dots,p

Note that every equality constraint h(x) = 0 can be equivalently replaced by a pair of inequality constraints h(x)\leq 0 and -h(x)\leq 0. Therefore, for theoretical purposes, equality constraints are redundant; however, it can be beneficial to treat them specially in practice.
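
As a concrete illustration, the following is a minimal sketch of a standard-form problem written with CVXPY, a Python modeling package that is not among the software listed below; the data, objective, and constraints are arbitrary choices made for the example, not anything prescribed by the standard form itself.

  import cvxpy as cp
  import numpy as np

  # Arbitrary problem data, for illustration only
  np.random.seed(0)
  A = np.random.randn(2, 3)
  b = A @ np.ones(3)              # chosen so the equality constraints are feasible

  x = cp.Variable(3)

  # Convex objective f(x) = ||x||_2^2
  objective = cp.Minimize(cp.sum_squares(x))

  constraints = [
      x >= -1,                    # inequality constraints g_i(x) = -x_i - 1 <= 0
      A @ x == b,                 # affine equality constraints h(x) = Ax - b = 0
  ]

  prob = cp.Problem(objective, constraints)
  prob.solve()
  print("optimal value:", prob.value)
  print("optimal point:", x.value)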

Examples

[Figure: hierarchical representation of common convex optimization problems]

The following problems are all convex optimization problems, or can be transformed into convex optimization problems via a change of variables: least-squares problems, linear programs, convex quadratic programs, second-order cone programs, semidefinite programs, and geometric programs.

Lagrange multipliers

Consider a convex optimization problem given in standard form by a cost function f(x) and inequality constraints g_i(x)\leq 0 for i = 1,\ldots,m. Then the domain \mathcal{X} is:

\mathcal{X} = \{ x\in X \mid g_1(x)\le 0, \ldots, g_m(x)\le 0 \}.

The Lagrange function for the problem is

L(x, \lambda_0, \ldots, \lambda_m) = \lambda_0 f(x) + \lambda_1 g_1(x) + \cdots + \lambda_m g_m(x).

For each point x that minimizes f over \mathcal{X}, there exist real numbers \lambda_0, \ldots, \lambda_m, called Lagrange multipliers, that satisfy these conditions simultaneously:

  1. x minimizes L(y, \lambda_0, \lambda_1, \ldots, \lambda_m) over all y in X,
  2. \lambda_0 \geq 0, \lambda_1 \geq 0, \ldots, \lambda_m \geq 0, with at least one \lambda_k > 0,
  3. \lambda_1 g_1(x) = 0, \ldots, \lambda_m g_m(x) = 0 (complementary slackness).

If there exists a "strictly feasible point", i.e., a point z satisfying

g_1(z) < 0, \ldots, g_m(z) < 0,

then the statement above can be strengthened to assert that \lambda_0 = 1; this hypothesis is known as Slater's condition.

Conversely, if some x in \mathcal{X} satisfies conditions 1-3 for scalars \lambda_0, \ldots, \lambda_m with \lambda_0 = 1, then x is certain to minimize f over \mathcal{X}.
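
As a small worked instance, take f(x) = x^2 with the single constraint g_1(x) = 1 - x \le 0, so that \mathcal{X} = [1,\infty). The point z = 2 is strictly feasible (g_1(2) = -1 < 0), so \lambda_0 may be taken to be 1 and the Lagrange function is

L(x, 1, \lambda_1) = x^2 + \lambda_1 (1 - x).

Minimizing L over y \in \mathbb{R} requires 2x - \lambda_1 = 0, and complementary slackness requires \lambda_1 (1 - x) = 0. The choice x = 1, \lambda_1 = 2 satisfies conditions 1-3, so x^* = 1 minimizes f over \mathcal{X}.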

Methods

Convex optimization problems can be solved by several classes of methods, including subgradient methods, bundle methods, cutting-plane methods, the ellipsoid method, and interior-point methods; a minimal sketch of one simple first-order method follows.
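
As an illustration, here is a minimal sketch of projected gradient descent applied to a convex quadratic over the box [-1,1]^n, written in Python with NumPy; the problem data, dimensions, and iteration count are arbitrary choices made for the example.

  import numpy as np

  # Minimize f(x) = 0.5 * ||Ax - b||^2 over the convex set [-1, 1]^n
  np.random.seed(0)
  A = np.random.randn(10, 5)
  b = np.random.randn(10)

  def grad(x):
      # Gradient of the smooth convex objective: A^T (Ax - b)
      return A.T @ (A @ x - b)

  def project(x):
      # Euclidean projection onto the box [-1, 1]^n
      return np.clip(x, -1.0, 1.0)

  x = np.zeros(5)
  step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
  for _ in range(500):
      x = project(x - step * grad(x))      # gradient step followed by projection

  print("approximate minimizer:", x)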

Software

Although most general-purpose solvers for nonlinear optimization, such as LSSOL, LOQO, MINOS, and Lancelot, handle convex problems well, many software packages dealing exclusively with convex optimization problems are also available:

Convex programming languages

  • CVX (Matlab package)
  • CVXMOD (Python package)
  • YALMIP (Matlab package)

Convex optimization solvers

  • MOSEK (commercial, stand-alone software and Matlab interface)
  • solver.com (commercial)
  • SeDuMi (GPLv2, Matlab package)
  • SDPT3 (GPLv2, Matlab package)
  • OBOE
