When searching for a local optimum, there are two issues of interest: first, how to find a local optimum, and second, how long it takes to find one. For many local search algorithms it is not known whether they can find a local optimum in polynomial time.[1] To answer the question of how long it takes to find a local optimum, Johnson, Papadimitriou and Yannakakis[2] introduced the complexity class PLS in their paper "How easy is local search?". It contains local search problems for which local optimality can be verified in polynomial time.
A local search problem is in PLS if the following properties are satisfied:
- The size of every solution is polynomially bounded in the size of the instance.
- Some solution of a problem instance can be found in polynomial time.
- The cost of each solution can be calculated in polynomial time.
- All neighbors of each solution can be found in polynomial time.
With these properties, it is possible to find, for each solution $s$, the best neighboring solution or, if there is no better neighboring solution, to state that $s$ is a local optimum.
Consider the following instance $I$ of the Max-2SAT problem: $(x_1 \vee x_2) \wedge (\neg x_1 \vee x_3) \wedge (\neg x_2 \vee x_3)$. The aim is to find an assignment that maximizes the number of satisfied clauses.
A solution $s$ for that instance is a bit string that assigns every $x_i$ the value 0 or 1. In this case, a solution consists of 3 bits, for example $s = 000$, which stands for the assignment of $x_1$, $x_2$ and $x_3$ with the value 0. The set of solutions $F_L(I)$ is the set of all possible assignments of $x_1$, $x_2$ and $x_3$.
The cost of each solution is the number of satisfied clauses, so $c_L(I, s = 000) = 2$, because the second and the third clause are satisfied.
A Flip-neighbor of a solution $s$ is obtained by flipping one bit of the bit string $s$, so the neighbors of $s$ are $N(I, 000) = \{100, 010, 001\}$, with the following costs:
$c_L(I, 100) = 2$
$c_L(I, 010) = 2$
$c_L(I, 001) = 2$
Since we are looking for a solution with maximum cost, there is no neighbor with better cost than $s$. Even though $s$ is not a global optimum (which, for example, would be the solution $s' = 111$ that satisfies all clauses and has $c_L(I, s') = 3$), $s$ is a local optimum, because none of its neighbors has better cost.
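The local optimality check above can be made concrete with a few lines of code. The following is a minimal sketch, not taken from the sources; the clause encoding and function names are illustrative. It computes the cost of $s = 000$ and verifies that none of its Flip-neighbors has better cost.

```python
# Minimal sketch of the Max-2SAT/Flip example; the clause encoding is illustrative.
# A literal (i, True) means x_i, a literal (i, False) means "not x_i".
CLAUSES = [((1, True), (2, True)),    # (x1 or x2)
           ((1, False), (3, True)),   # (not x1 or x3)
           ((2, False), (3, True))]   # (not x2 or x3)

def cost(s):
    """Number of satisfied clauses for an assignment such as s = {1: 0, 2: 0, 3: 0}."""
    return sum(any(s[i] == (1 if positive else 0) for i, positive in clause)
               for clause in CLAUSES)

def flip_neighbors(s):
    """All assignments that differ from s in exactly one bit (the Flip neighborhood)."""
    for i in s:
        t = dict(s)
        t[i] = 1 - t[i]
        yield t

s = {1: 0, 2: 0, 3: 0}                                     # the solution s = 000
print(cost(s))                                              # 2
print(all(cost(t) <= cost(s) for t in flip_neighbors(s)))   # True: s is a local optimum
```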
Intuitively it can be argued that this problem lies in PLS, because:
- a solution can be found in polynomial time, for example by setting all bits to 0;
- the cost of a solution can be calculated in polynomial time, by going through the whole formula once and counting the satisfied clauses;
- all neighbors of a solution can be found in polynomial time, by taking the set of assignments that differ from $s$ in exactly one bit.
If we are simply counting the number of satisfied clauses, a local optimum can be found in polynomial time, since the number of possible cost values is polynomial and every improvement step increases the cost. However, if we assign each clause a positive integer weight (and seek to locally maximize the sum of the weights of the satisfied clauses), the problem becomes PLS-complete (see below).
A local search problem $L$ has a set $D_L$ of instances which are encoded using strings over a finite alphabet $\Sigma$. For each instance $I$ there exists a finite solution set $F_L(I)$. Let $R$ be the relation that models $L$. The relation $R \subseteq D_L \times F_L(I) := \{(I, s) \mid I \in D_L, s \in F_L(I)\}$ is in PLS[3][4][5] if:
- the size of every solution $s \in F_L(I)$ is polynomially bounded in the size of $I$;
- problem instances $I$ and solutions $s$ are verifiable in polynomial time;
- there is a polynomial-time computable function $A_L$ that returns for each instance $I \in D_L$ some solution $s \in F_L(I)$;
- there is a polynomial-time computable function $B_L$ that returns for each solution $s \in F_L(I)$ of an instance $I$ its cost $c_L(I, s)$;
- there is a polynomial-time computable function $C_L$ that returns for each solution $s \in F_L(I)$ of an instance $I$ a neighboring solution $s' \in N(I, s)$ with better cost, or states that $s$ is locally optimal.
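Stated in code, the three algorithms $A_L$, $B_L$ and $C_L$ can be thought of as the interface a PLS problem has to provide. The following sketch is only illustrative; the class and method names are not from the sources.

```python
# A PLS problem must supply three polynomial-time algorithms (A_L, B_L, C_L above).
from abc import ABC, abstractmethod

class PLSProblem(ABC):
    @abstractmethod
    def initial_solution(self, instance):
        """A_L: return some solution s in F_L(I) for the instance I."""

    @abstractmethod
    def cost(self, instance, solution):
        """B_L: return the cost c_L(I, s) of the solution s."""

    @abstractmethod
    def better_neighbor(self, instance, solution):
        """C_L: return a neighbor with better cost, or None if s is a local optimum."""
```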
Each instance $I \in D_L$ has the structure of an implicit graph (also called its transition graph[7]): the vertices are the solutions, and two solutions $s, s' \in F_L(I)$ are connected by a directed arc iff $s' \in N(I, s)$.
A local optimum is a solution $s$ that has no neighbor with better cost. In the implicit graph, a local optimum is a sink. A neighborhood in which every local optimum is also a global optimum, i.e. a solution with the best possible cost, is called an exact neighborhood.[8][9]
The class PLS is the class containing all problems that can be reduced in polynomial time to the problem Sink-of-DAG[7] (also called Local-Opt[10]): given two integers $n$ and $m$ and two Boolean circuits $S: \{0,1\}^n \rightarrow \{0,1\}^n$ such that $S(0^n) \neq 0^n$, and $V: \{0,1\}^n \rightarrow \{0, 1, \ldots, 2^m - 1\}$, find a vertex $x \in \{0,1\}^n$ such that $S(x) \neq x$ and either $S(S(x)) = S(x)$ or $V(S(x)) \leq V(x)$.
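The following sketch illustrates the Sink-of-DAG problem on a toy instance. The circuits $S$ and $V$ are modelled as plain Python functions, and the search simply walks along $S$, so it may take exponentially many steps; the names are illustrative, not a fixed API.

```python
def sink_of_dag(S, V, n):
    """Follow the successor circuit S from 0^n until the solution condition holds."""
    x = "0" * n
    assert S(x) != x                     # promised: S(0^n) != 0^n
    while True:
        y = S(x)
        if V(y) <= V(x) or S(y) == y:    # V(S(x)) <= V(x)  or  S(S(x)) = S(x)
            return x                     # and S(x) != x holds by the loop invariant
        x = y                            # V strictly increases, so the walk terminates

# Toy instance on 2-bit strings: S walks 00 -> 01 -> 11 (a fixed point), V counts ones.
S = {"00": "01", "01": "11", "11": "11", "10": "00"}.__getitem__
V = lambda x: x.count("1")
print(sink_of_dag(S, V, 2))              # "01", since its successor "11" is a fixed point
```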
Example neighborhood structures for problems with Boolean variables (or bit strings) as solutions:
- Flip: a neighbor is obtained by negating (flipping) one arbitrary bit of the bit string, as in the Max-2SAT example above.
- k-Flip: a neighbor is obtained by flipping at most $k$ bits, i.e. its Hamming distance to the current solution is at most $k$.
Example neighborhood structures for problems on graphs:
- Swap: for problems whose solutions are partitions of the nodes into two sets, a neighbor is obtained by swapping one node of the first set with one node of the second set.
- Kernighan-Lin: a neighbor is obtained by a greedy sequence of swaps in which no node is swapped twice.
Consider the following computational problem: given an instance $I$ of a PLS problem $L$, find a locally optimal solution $s \in F_L(I)$, i.e. a solution such that $c_L(I, s') \geq c_L(I, s)$ for all $s' \in N(I, s)$.
Every local search problem can be solved using the following iterative improvement algorithm,[21] sketched in code below:
1. Use $A_L$ to find an initial solution $s$.
2. Use $C_L$ to find a better solution $s' \in N(I, s)$. If such a solution exists, replace $s$ by $s'$ and repeat step 2; otherwise return $s$.
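A minimal sketch of this standard algorithm, assuming the problem is given by callables playing the roles of $A_L$ and $C_L$; the function names are illustrative.

```python
def standard_algorithm(instance, initial_solution, better_neighbor):
    """Iterative improvement: start from A_L's solution, follow C_L until a local optimum."""
    s = initial_solution(instance)               # A_L
    while True:
        s_better = better_neighbor(instance, s)  # C_L: a better neighbor, or None
        if s_better is None:
            return s                             # s is a local optimum
        s = s_better                             # every step strictly improves the cost
```

Applied to the Max-2SAT/Flip example above with the all-zero initial assignment, the loop stops immediately, since $s = 000$ is already a local optimum.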
Unfortunately, it generally takes an exponential number of improvement steps to find a local optimum, even if the problem $L$ can be solved exactly in polynomial time.[22] It is not always necessary to use the standard algorithm; there may be a different, faster algorithm for a particular problem. For example, a local search algorithm used for linear programming is the simplex algorithm.
The run time of the standard algorithm is pseudo-polynomial in the number of different costs of a solution.[23]
The space the standard algorithm needs is only polynomial: it only has to store the current solution $s$, whose size is polynomially bounded by definition.[24]
A reduction of one problem to another may be used to show that the second problem is at least as difficult as the first. In particular, a PLS-reduction is used to prove that a local search problem that lies in PLS is PLS-complete, by reducing a problem that is already known to be PLS-complete to the one that is to be proven PLS-complete.
A local search problem $L_1$ is PLS-reducible[25] to a local search problem $L_2$ if there are two polynomial-time computable functions $f: D_1 \rightarrow D_2$ and $g: D_1 \times F_2(f(I_1)) \rightarrow F_1(I_1)$ such that:
- if $I_1$ is an instance of $L_1$, then $f(I_1)$ is an instance of $L_2$;
- if $s_2$ is a solution of $f(I_1)$, then $g(I_1, s_2)$ is a solution of $I_1$;
- if $s_2$ is a local optimum of $f(I_1)$, then $g(I_1, s_2)$ is a local optimum of $I_1$.
It is sufficient for $g$ to map only the local optima of $f(I_1)$ to local optima of $I_1$, and to map all other solutions, for example, to the standard solution returned by $A_1$.[26]
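In code, the point of such a reduction is that any algorithm finding local optima for $L_2$ immediately yields one for $L_1$. A minimal sketch, with illustrative function names:

```python
def solve_L1_via_L2(I1, f, g, local_search_L2):
    """Use a PLS-reduction (f, g) from L1 to L2 to find a local optimum of I1."""
    I2 = f(I1)                     # translate the instance (polynomial time)
    s2 = local_search_L2(I2)       # any local optimum of I2 = f(I1)
    return g(I1, s2)               # g maps it back to a local optimum of I1
```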
PLS-reductions are transitive.[27]
The transition graph[28] $T_I$ of an instance $I$ of a problem $L$ is a directed graph. The nodes represent all elements of the finite set of solutions $F_L(I)$, and the edges point from a solution to each of its neighbors with strictly better cost. Therefore $T_I$ is an acyclic graph. A sink, i.e. a node with no outgoing edges, is a local optimum. The height of a vertex $v$ is the length of the shortest path from $v$ to the nearest sink. The height of the transition graph is the largest of the heights of all its vertices, i.e. the length of the longest among the shortest paths from a node to its nearest sink.
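For small instances the transition graph and its height can be computed explicitly. The following sketch (with illustrative names, and with "better" meaning larger cost, as in the Max-2SAT example) builds the edges and finds each vertex's distance to the nearest sink by a backward breadth-first search.

```python
from collections import deque

def transition_graph_height(solutions, neighbors, cost):
    """Height of the transition graph: the largest shortest distance from a node to a sink."""
    edges = {s: [t for t in neighbors[s] if cost[t] > cost[s]] for s in solutions}
    sinks = [s for s in solutions if not edges[s]]            # the local optima
    predecessors = {s: [t for t in solutions if s in edges[t]] for s in solutions}
    distance = {s: 0 for s in sinks}
    queue = deque(sinks)
    while queue:                                              # BFS backwards from all sinks
        s = queue.popleft()
        for t in predecessors[s]:
            if t not in distance:
                distance[t] = distance[s] + 1
                queue.append(t)
    return max(distance.values())
```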
A PLS-reduction $(f, g)$ from a local search problem $L_1$ to a local search problem $L_2$ is a tight PLS-reduction[29] if for any instance $I_1$ of $L_1$ a subset $R$ of the solutions of the instance $I_2 = f(I_1)$ of $L_2$ can be chosen so that the following properties are satisfied:
- $R$ contains all local optima of $I_2$;
- for every solution $p$ of $I_1$, a solution $q \in R$ of $I_2$ with $g(I_1, q) = p$ can be constructed in polynomial time;
- if the transition graph of $I_2$ contains a directed path from $q$ to $q'$ with $q, q' \in R$ whose internal vertices all lie outside $R$, and $p = g(I_1, q)$ and $p' = g(I_1, q')$, then either $p = p'$ or the transition graph of $I_1$ contains an edge from $p$ to $p'$.
PLS lies between the functional versions of P and NP: FP ⊆ PLS ⊆ FNP.[30]
PLS is also a subclass of TFNP,[31] the class of computational problems for which a solution is guaranteed to exist and can be recognized in polynomial time. For a problem in PLS, a solution is guaranteed to exist because the minimum-cost vertex of the entire transition graph is a valid solution, and the validity of a candidate solution can be checked in polynomial time by computing its neighbors and comparing its cost with each of theirs.
It has also been proven that if a PLS problem is NP-hard, then NP = co-NP.[32]
A local search problem $L$ is PLS-complete[33] if
- $L$ is in PLS, and
- every problem in PLS can be PLS-reduced to $L$.
The optimization version of the circuit problem under the Flip neighborhood structure has been shown to be the first PLS-complete problem.[34]
The following is an incomplete list of known PLS-complete problems. The problems listed are the weighted versions; for example, Max-2SAT/Flip is weighted even though Max-2SAT ordinarily refers to the unweighted version.
Notation: Problem / Neighborhood structure
Fearnley, Goldberg, Hollender and Savani[84] proved that the complexity class CLS is equal to the intersection of PPAD and PLS.
Yannakakis, Mihalis (2003). Local search in combinatorial optimization - Computational complexity. Princeton University Press. pp. 19–55. ISBN 9780691115221.
Johnson, David S.; Papadimitriou, Christos H.; Yannakakis, Mihalis (1988). "How easy is local search?". Journal of Computer and System Sciences. 37 (1): 79–100. doi:10.1016/0022-0000(88)90046-3.
Mulzer, Wolfgang; Stein, Yannik (14 March 2018). "Computational Aspects of the Colorful Caratheodory Theorem". Discrete & Computational Geometry. 60 (3): 720–755. arXiv:1412.3347. Bibcode:2014arXiv1412.3347M. doi:10.1007/s00454-018-9979-y. S2CID 254024141.
Borzechowski, Michaela. "The complexity class Polynomial Local Search (PLS) and PLS-complete problems" (PDF). http://www.mi.fu-berlin.de/inf/groups/ag-ti/theses/download/Borzechowski16.pdf
Dumrauf, Dominic; Monien, Burkhard; Tiemann, Karsten (2009). "Multiprocessor Scheduling is PLS-Complete". System Sciences, 2009. HICSS'09. 42nd Hawaii International Conference on: 1–10. https://www.researchgate.net/publication/224373381
Michiels, Wil; Aarts, Emile; Korst, Jan (2010). Theoretical aspects of local search. Springer Science & Business Media. ISBN 9783642071485.
Daskalakis, Constantinos; Papadimitriou, Christos (23 January 2011). "Continuous Local Search". Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms: 790–804. doi:10.1137/1.9781611973082.62. ISBN 978-0-89871-993-2. S2CID 2056144.
Klauck, Hartmut (1996). "On the hardness of global and local approximation". Proceedings of the 5th Scandinavian Workshop on Algorithm Theory: 88–99. https://www.researchgate.net/publication/221209488
Schäffer, Alejandro A.; Yannakakis, Mihalis (February 1991). "Simple Local Search Problems that are Hard to Solve". SIAM Journal on Computing. 20 (1): 56–87. doi:10.1137/0220004.
Fiduccia, C. M.; Mattheyses, R. M. (1982). "A Linear-time Heuristic for Improving Network Partitions". Proceedings of the 19th Design Automation Conference: 175–181. ISBN 9780897910200.
Angel, Eric; Christopoulos, Petros; Zissimopoulos, Vassilis (2014). Paradigms of Combinatorial Optimization: Problems and New Approaches - Local Search: Complexity and Approximation (2 ed.). John Wiley & Sons, Inc., Hoboken. pp. 435–471. doi:10.1002/9781119005353.ch14. ISBN 9781119005353.
Megiddo, Nimrod; Papadimitriou, Christos H. (1991). "On total functions, existence theorems and computational complexity". Theoretical Computer Science. 81 (2): 317–324. CiteSeerX 10.1.1.75.4797. doi:10.1016/0304-3975(91)90200-L.
Krentel, M. (1 August 1990). "On Finding and Verifying Locally Optimal Solutions". SIAM Journal on Computing. 19 (4): 742–749. doi:10.1137/0219052. ISSN 0097-5397.
Krentel, Mark W. (1989). "Structure in locally optimal solutions". 30th Annual Symposium on Foundations of Computer Science. pp. 216–221. doi:10.1109/SFCS.1989.63481. ISBN 0-8186-1982-1. S2CID 32686790.
Shimozono, Shinichi (1997). "Finding optimal subgraphs by local search". Theoretical Computer Science. 172 (1): 265–271. doi:10.1016/S0304-3975(96)00135-1.
Dumrauf, Dominic; Süß, Tim (2010). "On the Complexity of Local Search for Weighted Standard Set Problems". CiE 2010: Programs, Proofs, Processes. Lecture Notes in Computer Science. Vol. 6158. Springer, Berlin, Heidelberg. pp. 132–140. CiteSeerX 10.1.1.762.6801. doi:10.1007/978-3-642-13962-8_15. ISBN 978-3-642-13961-1. S2CID 14099014.
Papadimitriou, C. H.; Schäffer, A. A.; Yannakakis, M. (1990). "On the complexity of local search". Proceedings of the twenty-second annual ACM symposium on Theory of computing - STOC '90. pp. 438–445. doi:10.1145/100216.100274. ISBN 0897913612. S2CID 16877206.
Fabrikant, Alex; Papadimitriou, Christos; Talwar, Kunal (2004). "The complexity of pure Nash equilibria". Proceedings of the thirty-sixth annual ACM symposium on Theory of computing. ACM. pp. 604–612. CiteSeerX 10.1.1.3.7861. doi:10.1145/1007352.1007445. ISBN 978-1581138528. S2CID 1037326.
Ackermann, Heiner; Röglin, Heiko; Vöcking, Berthold (2008). "On the Impact of Combinatorial Structure on Congestion Games". J. ACM. 55 (6): 25:1–25:22. CiteSeerX 10.1.1.634.4913. doi:10.1145/1455248.1455249. ISSN 0004-5411. S2CID 3070710.
Yang, Yichen; Jia, Kai; Rinard, Martin (2022). "On the Impact of Player Capability on Congestion Games". Algorithmic Game Theory. Lecture Notes in Computer Science. Vol. 13584. pp. 311–328. arXiv:2205.09905. doi:10.1007/978-3-031-15714-1_18. ISBN 978-3-031-15713-4.
Dumrauf, Dominic; Monien, Burkhard (2013). "On the PLS-complexity of Maximum Constraint Assignment". Theor. Comput. Sci. 469: 24–52. doi:10.1016/j.tcs.2012.10.044. ISSN 0304-3975.
Kaznatcheev, Artem (2019). "Computational Complexity as an Ultimate Constraint on Evolution". Genetics. 212 (1): 245–265. doi:10.1534/genetics.119.302000. PMC 6499524. PMID 30833289.
Fearnley, John; Goldberg, Paul; Hollender, Alexandros; Savani, Rahul (2022-12-19). "The Complexity of Gradient Descent: CLS = PPAD ∩ PLS". Journal of the ACM. 70 (1): 7:1–7:74. arXiv:2011.01929. doi:10.1145/3568163. ISSN 0004-5411. S2CID 263706261.
Yannakakis, Mihalis (2009-05-01). "Equilibria, fixed points, and complexity classes". Computer Science Review. 3 (2): 71–85. arXiv:0802.2831. doi:10.1016/j.cosrev.2009.03.004. ISSN 1574-0137.