This theorem states that if $S$ is a convex set in the topological vector space $X = \mathbb{R}^n$ and $x_0$ is a point on the boundary of $S,$ then there exists a supporting hyperplane containing $x_0.$ If $x^* \in X^* \setminus \{0\}$ (where $X^*$ is the dual space of $X$ and $x^*$ is a nonzero linear functional) is such that $x^*(x_0) \geq x^*(x)$ for all $x \in S,$ then

$$H = \{x \in X : x^*(x) = x^*(x_0)\}$$

defines a supporting hyperplane.[2]
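As a concrete illustration of the theorem (a sketch under the assumption, made here for illustration, that $S$ is the closed unit disk in $\mathbb{R}^2$), the functional $x^*(x) = \langle x_0, x \rangle$ supports the disk at any boundary point $x_0$: by the Cauchy–Schwarz inequality, $\langle x_0, x \rangle \leq \|x_0\|\,\|x\| \leq 1 = \langle x_0, x_0 \rangle$ for every $x$ in the disk. The following Python snippet checks this numerically.

```python
import numpy as np

# Illustrative sketch (assumption, not from the article): S is the closed unit
# disk in R^2 and x0 is a boundary point with ||x0|| = 1.  The linear functional
# x*(x) = <x0, x> satisfies x*(x0) >= x*(x) for all x in S (Cauchy-Schwarz),
# so {x : <x0, x> = 1} is a supporting hyperplane of the disk at x0.

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi)
x0 = np.array([np.cos(theta), np.sin(theta)])   # boundary point, ||x0|| = 1

# Sample points of S and check the defining inequality x*(x) <= x*(x0).
pts = rng.uniform(-1, 1, size=(10000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]   # keep only points inside the disk

assert np.all(pts @ x0 <= x0 @ x0 + 1e-12)
print("supporting hyperplane at x0:", {"normal": x0, "offset": float(x0 @ x0)})
```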
Conversely, if $S$ is a closed set with nonempty interior such that every point on the boundary has a supporting hyperplane, then $S$ is a convex set, and is the intersection of all its supporting closed half-spaces.[3]
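To make the "intersection of supporting closed half-spaces" description concrete, here is a small sketch; the unit square is an assumption chosen here for illustration, not a set from the text. Four supporting half-spaces, one per edge, already recover the square, so membership can be tested purely through the half-space inequalities.

```python
import numpy as np

# Illustrative sketch (assumption): for the unit square S = [0, 1]^2, four
# supporting half-spaces (one per edge) recover S as their intersection.
halfspaces = [  # each pair (normal a, offset c) encodes {x : <a, x> <= c}
    (np.array([ 1.0,  0.0]), 1.0),   # right edge:   x1 <= 1
    (np.array([-1.0,  0.0]), 0.0),   # left edge:   -x1 <= 0
    (np.array([ 0.0,  1.0]), 1.0),   # top edge:     x2 <= 1
    (np.array([ 0.0, -1.0]), 0.0),   # bottom edge: -x2 <= 0
]

def in_intersection(x):
    return all(a @ x <= c for a, c in halfspaces)

def in_square(x):
    return 0.0 <= x[0] <= 1.0 and 0.0 <= x[1] <= 1.0

rng = np.random.default_rng(1)
samples = rng.uniform(-0.5, 1.5, size=(10000, 2))
assert all(in_intersection(x) == in_square(x) for x in samples)
print("intersection of the four supporting half-spaces matches the square on all samples")
```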
The hyperplane in the theorem may not be unique: at a corner point of a convex set (for example, a vertex of a square), there are infinitely many supporting hyperplanes. If the closed set $S$ is not convex, the statement of the theorem is not true at all points on the boundary of $S$; at a boundary point where the set is dented inward, no supporting hyperplane may exist.
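Both phenomena can be checked numerically. The sketch below uses two sets chosen here as assumptions (the unit square and an L-shaped region, neither taken from the text): many distinct hyperplanes support the square at its corner $(1,1)$, while no sampled direction yields a supporting hyperplane at the reentrant corner of the L-shape.

```python
import numpy as np

# Illustrative sketch; the unit square and the L-shaped region are assumptions.
rng = np.random.default_rng(2)

# (1) Non-uniqueness: at the corner x0 = (1, 1) of the unit square [0, 1]^2,
#     every normal a = (cos t, sin t) with t in [0, pi/2] supports the square.
square = rng.uniform(0, 1, size=(5000, 2))
x0 = np.array([1.0, 1.0])
for t in np.linspace(0, np.pi / 2, 7):
    a = np.array([np.cos(t), np.sin(t)])
    assert np.all(square @ a <= a @ x0 + 1e-12)

# (2) Failure without convexity: at the reentrant corner (1, 1) of the L-shaped
#     set L = [0, 2]^2 minus the open square (1, 2] x (1, 2], no sampled
#     direction gives a supporting hyperplane.
pts = rng.uniform(0, 2, size=(20000, 2))
L = pts[~((pts[:, 0] > 1) & (pts[:, 1] > 1))]
supported = any(
    np.all(L @ np.array([np.cos(t), np.sin(t)]) <= np.cos(t) + np.sin(t) + 1e-9)
    for t in np.linspace(0, 2 * np.pi, 720, endpoint=False)
)
print("some direction supports the L-shape at its reentrant corner:", supported)  # False
```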
The supporting hyperplanes of convex sets are also called tac-planes or tac-hyperplanes.[4]
The forward direction can be proved as a special case of the separating hyperplane theorem (see that article for the proof). For the converse direction:
Define $T$ to be the intersection of all the supporting closed half-spaces of $S$. Clearly $S \subset T$. Now let $y \notin S$; we show that $y \notin T$.
Let $x \in \operatorname{int}(S)$ and consider the line segment $[x, y]$. Let $t$ be the largest number such that the segment $[x,\, x + t(y - x)]$ is contained in $S$. Since $x$ is interior to $S$, $y \notin S$, and $S$ is closed, we have $t \in (0, 1)$.
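As a concrete sketch of this step (again under the illustrative assumption that $S$ is the closed unit disk), the largest such $t$ can be computed by solving a quadratic, and the resulting point $x + t(y - x)$ indeed lands on the boundary:

```python
import numpy as np

# Sketch of the "largest t" step, assuming S is the closed unit disk (an
# illustrative choice, not the article's setting): x interior, y exterior.

def largest_t(x, y, radius=1.0):
    """Largest t in [0, 1] with ||x + t*(y - x)|| <= radius, found as the
    positive root of the quadratic ||x + t*(y - x)||^2 = radius^2."""
    d = y - x
    qa = d @ d
    qb = 2.0 * (x @ d)
    qc = x @ x - radius ** 2
    return (-qb + np.sqrt(qb * qb - 4.0 * qa * qc)) / (2.0 * qa)

x = np.array([0.2, -0.1])            # interior point of the unit disk
y = np.array([1.5, 1.0])             # point outside the disk
t = largest_t(x, y)
b = x + t * (y - x)                  # the boundary point used in the proof

print("t =", t)                      # strictly between 0 and 1
print("||b|| =", np.linalg.norm(b))  # equals 1.0, so b lies on the boundary of S
```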
Let $b = x + t(y - x)$; then $b \in \partial S$. Draw a supporting hyperplane through $b$, represented by a nonzero linear functional $f : \mathbb{R}^n \to \mathbb{R}$ such that $f(a) \geq f(b)$ for all $a \in T$. Since $x \in \operatorname{int}(S)$, we have $f(x) > f(b)$. Because $b = (1 - t)x + ty$ and $f$ is linear, $f(b) = (1 - t)f(x) + tf(y)$, hence

$$\frac{f(y) - f(b)}{1 - t} = \frac{f(b) - f(x)}{t} < 0,$$

so $f(y) < f(b)$ and therefore $y \notin T$.
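The following continues the unit-disk sketch with simple numbers (again an illustrative assumption, not the article's own example) and checks the final chain of inequalities numerically:

```python
import numpy as np

# Numerical check of the last step on a concrete instance (an assumption made
# for illustration): S is the closed unit disk, x = (0, 0) is interior,
# y = (2, 0) lies outside S, so t = 1/2 and b = x + t*(y - x) = (1, 0) is a
# boundary point.  f(a) = -<b, a> is a supporting functional at b, i.e.
# f(a) >= f(b) for every a in the disk (and hence on T).

x = np.array([0.0, 0.0])
y = np.array([2.0, 0.0])
t = 0.5
b = x + t * (y - x)                      # boundary point (1, 0)

def f(a):
    return -float(b @ a)                 # supporting functional at b

assert f(x) > f(b)                       # strict, because x is interior
# b = (1 - t)*x + t*y and f is linear, so the two difference quotients agree:
lhs = (f(y) - f(b)) / (1 - t)
rhs = (f(b) - f(x)) / t
assert np.isclose(lhs, rhs) and lhs < 0
assert f(y) < f(b)                       # y falls outside the half-space {f >= f(b)}
print("f(y) =", f(y), "< f(b) =", f(b), "so y is not in T")
```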
Luenberger, David G. (1969). Optimization by Vector Space Methods. New York: John Wiley & Sons. p. 133. ISBN 978-0-471-18117-0.
Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. pp. 50–51. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.
Cassels, John W. S. (1997). An Introduction to the Geometry of Numbers. Springer Classics in Mathematics (reprint of 1959 and 1971 Springer-Verlag ed.). Springer-Verlag.