A non-smooth version of the Griewank function has been developed [3] to emulate the characteristics of objective functions frequently encountered in optimization problems from machine learning (ML). These functions often exhibit piecewise smooth or non-smooth behavior due to the presence of regularization terms, activation functions, or constraints in learning models.
The original Griewank function is

    f(x) = 1 + (1/4000) Σᵢ₌₁ⁿ xᵢ² − Πᵢ₌₁ⁿ cos(xᵢ/√i),

and a non-smooth variant can be obtained, for example, by introducing absolute values into its trigonometric terms.
By incorporating non-smooth elements, such as absolute values wrapped around the cosine and sine terms, this function mimics the irregularities present in many ML loss landscapes. It provides a useful benchmark for evaluating optimization algorithms, especially those designed to handle non-convex, non-smooth, or high-dimensional problems, including subgradient, hybrid, and evolutionary methods.
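As an illustrative sketch (not the exact definition from the cited paper, which may differ), the absolute-value construction can be implemented by wrapping each cosine factor of the standard Griewank function in `abs`; the names `griewank` and `griewank_nonsmooth` are mine:

```python
import math

def griewank(x):
    """Standard (smooth) Griewank function (Griewank, 1981)."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
    return 1.0 + s - p

def griewank_nonsmooth(x):
    """Illustrative non-smooth variant: absolute values wrap the cosine
    factors, creating kinks wherever a factor crosses zero.
    This is an assumed form, not necessarily the one in the cited paper."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(abs(math.cos(xi / math.sqrt(i))) for i, xi in enumerate(x, start=1))
    return 1.0 + s - p

print(griewank([0.0, 0.0]))            # → 0.0 (global minimum at the origin)
print(griewank_nonsmooth([0.0, 0.0]))  # → 0.0 (the origin remains a minimizer)
```

Because the absolute-value product lies in [0, 1], the variant stays non-negative everywhere, while its gradient is undefined along the hypersurfaces where any cosine factor vanishes.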
The function's resemblance to practical ML objective functions makes it particularly valuable for testing the robustness and efficiency of algorithms in tasks such as hyperparameter tuning, neural network training, and constrained optimization.
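To illustrate how such a benchmark exercises the algorithm classes mentioned above, the following sketch runs a basic subgradient method on a one-dimensional non-smooth Griewank-style function. The function `f`, the starting point, and the diminishing step schedule are illustrative choices of mine, not taken from the cited work:

```python
import math

def f(x):
    """1-D non-smooth Griewank-style test function (illustrative)."""
    return 1.0 + x * x / 4000.0 - abs(math.cos(x))

def subgrad(x):
    """One valid subgradient of f. At the kinks (cos x = 0) we return the
    choice corresponding to sign(0) = 0, which lies in the subdifferential."""
    sgn = (math.cos(x) > 0) - (math.cos(x) < 0)
    return x / 2000.0 + sgn * math.sin(x)

# Subgradient descent with the classic diminishing step size 1/k.
x = 2.5
for k in range(1, 2001):
    x -= (0.5 / k) * subgrad(x)
```

Starting from x = 2.5, the iterates settle near the local minimizer just below π, where |cos x| attains its maximum; the kinks of f are exactly the points where a plain gradient method would be undefined.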
[1] Griewank, A. O. "Generalized Descent for Global Optimization." Journal of Optimization Theory and Applications 34, 11–39, 1981.
[2] Locatelli, M. "A Note on the Griewank Test Function." Journal of Global Optimization 25, 169–174, 2003.
[3] Bosse, T. F.; Bücker, H. M. "A Piecewise Smooth Version of the Griewank Function." Optimization Methods and Software, 1–11, 2024. doi:10.1080/10556788.2024.2414186.