Publications

Beyond Log-Concavity: Theory and Algorithm for Sum-Log-Concave Optimization

Preprint, 2023

Abstract. This paper extends the classic theory of convex optimization to the minimization of functions equal to the negated logarithm of what we term a “sum-log-concave” function, i.e., a sum of log-concave functions. In particular, we show that such functions are in general not convex but still satisfy generalized convexity inequalities. These inequalities unveil the key role of a certain vector that we call the “cross-gradient” and that is, in general, distinct from the usual gradient. We therefore propose the Cross Gradient Descent (XGD) algorithm, which moves in the direction opposite to the cross-gradient, and we derive a convergence analysis. As an application of our sum-log-concave framework, we introduce the “checkered regression” method, which relies on a sum-log-concave function. This classifier extends (multiclass) logistic regression to non-linearly separable problems, since it can tessellate the feature space with any given number of hyperplanes, creating a checkerboard-like pattern of decision regions.
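Since the abstract fully specifies the objective class, a minimal sketch is possible. The snippet below minimizes $F(\theta) = -\log \sum_i e^{-f_i(\theta)}$ for convex $f_i$, so that each $g_i = e^{-f_i}$ is log-concave and their sum is sum-log-concave. The quadratic components, step size, and all names are illustrative assumptions; crucially, the sketch follows the ordinary gradient rather than the paper's cross-gradient, whose definition is not reproduced in this abstract.

```python
import numpy as np

# Illustrative sketch only: minimize F(theta) = -log(sum_i exp(-f_i(theta))),
# the negated log of a sum of log-concave functions g_i = exp(-f_i).
# NOTE: this uses the ordinary gradient; the paper's XGD algorithm instead
# follows a "cross-gradient", which in general differs from the vector below.

# Hypothetical components: convex quadratics f_i(theta) = 0.5 * ||theta - c_i||^2.
centers = np.array([[0.0, 0.0], [3.0, 1.0], [-1.0, 2.0]])

def F(theta):
    """Negated log of a sum of log-concave functions."""
    f = 0.5 * np.sum((theta - centers) ** 2, axis=1)   # f_i(theta)
    return -np.log(np.sum(np.exp(-f)))

def grad_F(theta):
    """Ordinary gradient: a softmin-weighted average of the grad f_i."""
    f = 0.5 * np.sum((theta - centers) ** 2, axis=1)
    w = np.exp(-f) / np.sum(np.exp(-f))                # weights sum to 1
    grads = theta - centers                            # grad f_i(theta)
    return w @ grads

theta = np.array([2.0, 2.0])
for _ in range(200):
    theta -= 0.1 * grad_F(theta)                       # plain descent step
print(theta, F(theta))
```

Since $F$ is in general non-convex, this plain descent step carries no global guarantee; the generalized convexity inequalities and the cross-gradient direction developed in the paper are aimed precisely at that gap.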

Download here

Checkered Regression

Preprint, 2022

Abstract. This paper introduces the checkered regression model, a nonlinear generalization of logistic regression. More precisely, this new binary classifier relies on the multivariate function $\frac{1}{2}\left( 1 + \tanh(\frac{z_1}{2})\times\dots\times\tanh(\frac{z_m}{2}) \right)$, which coincides with the usual sigmoid function in the univariate case $m=1$. While the decision boundary of logistic regression consists of a single hyperplane, our method is shown to tessellate the feature space with any given number $m\ge 1$ of hyperplanes. To fit the model’s parameters to labeled data, we describe a classic empirical risk minimization framework based on the cross-entropy loss. A multiclass version of our approach is also proposed.
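The probability map is fully specified by the formula above, so a short sketch follows. The affine parameterization $z_k = w_k^\top x + b_k$ (one per hyperplane), the toy data, and the plain gradient loop are assumptions on our part; only the product-of-tanh model and the cross-entropy loss come from the abstract.

```python
import numpy as np

def checkered_proba(X, W, b):
    """P(y=1|x) = (1 + prod_k tanh(z_k/2)) / 2, with z_k = w_k . x + b_k.
    Reduces to the usual sigmoid when m = 1, as stated in the abstract."""
    Z = X @ W.T + b                                    # shape (n, m)
    return 0.5 * (1.0 + np.prod(np.tanh(Z / 2.0), axis=1))

# Toy XOR-style data: not linearly separable, but two hyperplanes suffice.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

m = 2                                                  # number of hyperplanes
W = 0.5 * rng.normal(size=(m, 2))
b = np.zeros(m)
lr = 0.2

for _ in range(5000):
    Z = X @ W.T + b
    T = np.tanh(Z / 2.0)                               # (n, m)
    p = np.clip(0.5 * (1.0 + np.prod(T, axis=1)), 1e-7, 1.0 - 1e-7)
    dp = (p - y) / (p * (1.0 - p)) / len(y)            # dL/dp, cross-entropy
    for k in range(m):                                 # chain rule per hyperplane
        others = np.prod(np.delete(T, k, axis=1), axis=1)
        dz_k = dp * 0.25 * others * (1.0 - T[:, k] ** 2)
        W[k] -= lr * dz_k @ X
        b[k] -= lr * dz_k.sum()

acc = np.mean((checkered_proba(X, W, b) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

With $m = 2$ the learned hyperplanes carve the plane into a checkerboard of decision regions, which is exactly the pattern a single logistic hyperplane cannot express on XOR-style data.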

Download here