- Statistical Learning under Heterogeneous Distribution Shift This paper studies the prediction of a target $z$ from a pair of random variables $(x,y)$, where the ground-truth predictor is additive: $\mathbb{E}[z \mid x,y] = f_\star(x) + g_\star(y)$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in \mathcal{F}$ and $g \in \mathcal{G}$, fit on a given training distribution but evaluated on a test distribution that exhibits covariate shift. We show that, when the class $\mathcal{F}$ is "simpler" than $\mathcal{G}$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to heterogeneous covariate shifts in which the shift in $x$ is much greater than that in $y$. These results rely on a novel Hölder-style inequality for the Dudley integral which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains. 4 authors · Feb 27, 2023
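To make the additive-ERM setup concrete, here is a minimal illustrative sketch (not the authors' code; the feature maps, sample sizes, and shift magnitudes are assumptions chosen for illustration) that fits $\hat f + \hat g$ by least squares on a training distribution and evaluates it under a covariate shift that is much larger in $x$ than in $y$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_star(x):  # "simple" component: linear in x
    return 2.0 * x

def g_star(y):  # "complex" component: nonlinear in y
    return np.sin(3.0 * y) + 0.5 * y**2

# Training data: x, y ~ N(0, 1); z = f_star(x) + g_star(y) + noise
n = 2000
x_tr, y_tr = rng.normal(size=n), rng.normal(size=n)
z_tr = f_star(x_tr) + g_star(y_tr) + 0.1 * rng.normal(size=n)

# Additive model: simple class F (affine in x) plus a richer class G (basis expansion in y).
def features(x, y):
    return np.column_stack([np.ones_like(x), x,              # F: affine in x
                            y, y**2, y**3, np.sin(3 * y)])   # G: basis for y

theta, *_ = np.linalg.lstsq(features(x_tr, y_tr), z_tr, rcond=None)  # ERM = least squares here

# Test under heterogeneous covariate shift: large shift in the "simple" feature x,
# small shift in the "complex" feature y.
x_te = rng.normal(loc=3.0, size=n)
y_te = rng.normal(loc=0.3, size=n)
z_te = f_star(x_te) + g_star(y_te) + 0.1 * rng.normal(size=n)

mse = np.mean((features(x_te, y_te) @ theta - z_te) ** 2)
print(f"test MSE under heterogeneous shift: {mse:.3f}")
```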
- New high-dimensional generalizations of Nesbitt's inequality and relative applications Two kinds of novel generalizations of Nesbitt's inequality are explored in various cases regarding dimensions and parameters in this article. Some other cases are also discussed in detail using the semiconcave-semiconvex theorem. The general inequalities are then employed to deduce some alternative inequalities and mathematical competition questions. Finally, a relation involving Hurwitz–Lerch zeta functions is obtained. 2 authors · Mar 18
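For context (the paper's specific high-dimensional generalizations are not reproduced here), the classical three-variable Nesbitt inequality being generalized reads:

```latex
% Classical Nesbitt inequality: for all positive reals a, b, c,
\[
  \frac{a}{b+c} + \frac{b}{c+a} + \frac{c}{a+b} \;\ge\; \frac{3}{2},
\]
% with equality if and only if a = b = c.
```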
- Quantum Lower Bounds for Finding Stationary Points of Nonconvex Functions Quantum algorithms for optimization problems are of general interest. Despite recent progress in classical lower bounds for nonconvex optimization under different settings and quantum lower bounds for convex optimization, quantum lower bounds for nonconvex optimization are still widely open. In this paper, we conduct a systematic study of quantum query lower bounds on finding $\epsilon$-approximate stationary points of nonconvex functions, and we consider the following two important settings: 1) having access to $p$-th order derivatives; or 2) having access to stochastic gradients. The classical query lower bound is $\Omega\big(\epsilon^{-(p+1)/p}\big)$ in the first setting and $\Omega(\epsilon^{-4})$ in the second (or $\Omega(\epsilon^{-3})$ if the stochastic gradient function is mean-squared smooth). In this paper, we extend all these classical lower bounds to the quantum setting. They match the classical algorithmic results respectively, demonstrating that there is no quantum speedup for finding $\epsilon$-stationary points of nonconvex functions with $p$-th order derivative inputs or stochastic gradient inputs, whether with or without the mean-squared smoothness assumption. Technically, our quantum lower bounds are obtained by showing that the sequential nature of classical hard instances in all these settings also applies to quantum queries, preventing any quantum speedup other than revealing information about the stationary points sequentially. 2 authors · Dec 7, 2022
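Collecting the classical query lower bounds quoted in the abstract (which the paper shows carry over unchanged to quantum queries) in a single display:

```latex
\[
  \underbrace{\Omega\!\big(\epsilon^{-(p+1)/p}\big)}_{\text{$p$-th order derivative access}}
  \qquad
  \underbrace{\Omega\!\big(\epsilon^{-4}\big)}_{\text{stochastic gradients}}
  \qquad
  \underbrace{\Omega\!\big(\epsilon^{-3}\big)}_{\text{mean-squared smooth stochastic gradients}}
\]
```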
- A Quadratic Synchronization Rule for Distributed Deep Learning In distributed deep learning with data parallelism, synchronizing gradients at each training step can cause a huge communication overhead, especially when many nodes work together to train large models. Local gradient methods, such as Local SGD, address this issue by allowing workers to compute locally for $H$ steps without synchronizing with others, hence reducing communication frequency. While $H$ has been viewed as a hyperparameter to trade optimization efficiency for communication cost, recent research indicates that setting a proper $H$ value can lead to generalization improvement. Yet, selecting a proper $H$ is elusive. This work proposes a theory-grounded method for determining $H$, named the Quadratic Synchronization Rule (QSR), which recommends dynamically setting $H$ in proportion to $1/\eta^2$ as the learning rate $\eta$ decays over time. Extensive ImageNet experiments on ResNet and ViT show that local gradient methods with QSR consistently improve the test accuracy over other synchronization strategies. Compared with standard data-parallel training, QSR enables Local AdamW on ViT-B to cut the training time on 16 or 64 GPUs down from 26.7 to 20.2 hours or from 8.6 to 5.5 hours and, at the same time, achieves 1.16% or 0.84% higher top-1 validation accuracy. 5 authors · Oct 22, 2023
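A minimal sketch of how such a rule could be wired into a training loop (the constant `alpha`, the learning-rate schedule, and the cap on `H` below are illustrative assumptions, not the paper's exact recipe): the synchronization period is recomputed as the learning rate decays, growing like $1/\eta^2$.

```python
import math

def qsr_sync_period(eta: float, alpha: float = 1e-4, h_min: int = 2, h_max: int = 512) -> int:
    """Quadratic Synchronization Rule sketch: H proportional to 1/eta^2, clipped to a sane range."""
    h = int(round(alpha / (eta * eta)))
    return max(h_min, min(h, h_max))

# Example: cosine learning-rate decay; H grows as eta shrinks.
eta_max, total_steps = 0.1, 10_000
for step in range(0, total_steps, 1000):
    eta = 0.5 * eta_max * (1 + math.cos(math.pi * step / total_steps))
    # In Local SGD / Local AdamW, workers would run H local steps before the next all-reduce.
    print(f"step {step:5d}  eta={eta:.4f}  H={qsr_sync_period(max(eta, 1e-4))}")
```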
- Regularity for obstacle problems to anisotropic parabolic equations Following DiBenedetto's intrinsic scaling method, we prove local Hölder continuity of weak solutions to obstacle problems related to some anisotropic parabolic equations, assuming only Hölder continuity of the obstacle. 1 authors · Oct 1, 2024
- Solving Inequality Proofs with Large Language Models Inequality proving, crucial across diverse scientific and mathematical fields, tests advanced reasoning skills such as discovering tight bounds and strategic theorem application. This makes it a distinct, demanding frontier for large language models (LLMs), offering insights beyond general mathematical problem-solving. Progress in this area is hampered by existing datasets that are often scarce, synthetic, or rigidly formal. We address this by proposing an informal yet verifiable task formulation, recasting inequality proving into two automatically checkable subtasks: bound estimation and relation prediction. Building on this, we release IneqMath, an expert-curated dataset of Olympiad-level inequalities, including a test set and training corpus enriched with step-wise solutions and theorem annotations. We also develop a novel LLM-as-judge evaluation framework, combining a final-answer judge with four step-wise judges designed to detect common reasoning flaws. A systematic evaluation of 29 leading LLMs on IneqMath reveals a surprising reality: even top models like o1 achieve less than 10% overall accuracy under step-wise scrutiny; this is a drop of up to 65.5% from their accuracy considering only final answer equivalence. This discrepancy exposes fragile deductive chains and a critical gap for current LLMs between merely finding an answer and constructing a rigorous proof. Scaling model size and increasing test-time computation yield limited gains in overall proof correctness. Instead, our findings highlight promising research directions such as theorem-guided reasoning and self-refinement. Code and data are available at https://ineqmath.github.io/. 7 authors · Jun 9
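To illustrate why the two subtasks are automatically checkable, here is a toy sketch under assumed problem encodings (this is not the IneqMath evaluation code): a predicted bound for an inequality over a sampled domain can be falsified numerically before any step-wise judging.

```python
import random

def check_bound(expr, candidate_bound, n_samples=10_000, lo=1e-3, hi=10.0):
    """Crude numeric falsifier: does expr(a, b, c) >= candidate_bound hold on random samples?
    A single counterexample refutes the predicted bound; passing is necessary, not sufficient."""
    rng = random.Random(0)
    for _ in range(n_samples):
        a, b, c = (rng.uniform(lo, hi) for _ in range(3))
        if expr(a, b, c) < candidate_bound - 1e-9:
            return False
    return True

# Toy instance: a Nesbitt-style expression. A model predicting the bound 3/2 passes,
# while an over-tight prediction such as 1.6 is rejected by sampled counterexamples.
nesbitt = lambda a, b, c: a / (b + c) + b / (c + a) + c / (a + b)
print(check_bound(nesbitt, 1.5))   # expected: True
print(check_bound(nesbitt, 1.6))   # expected: False
```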
- Optimal Embeddings of Posets in Hypercubes Given a finite poset $\mathcal{P}$, the hypercube-height, denoted by $h^*(\mathcal{P})$, is defined to be the largest $h$ such that, for any natural number $n$, the subsets of $[n]$ of size less than $h$ do not contain an induced copy of $\mathcal{P}$. The hypercube-width, denoted by $w^*(\mathcal{P})$, is the smallest $w$ such that the subsets of $[w]$ of size at most $h^*(\mathcal{P})$ contain an induced copy of $\mathcal{P}$. In other words, $h^*(\mathcal{P})$ asks how 'low' a poset can be embedded, and $w^*(\mathcal{P})$ asks for the first hypercube in which such an 'optimal' embedding occurs. These notions were introduced by Bastide, Groenland, Ivan and Johnston in connection with upper bounds for poset saturation numbers. While it is not hard to see that $h^*(\mathcal{P}) \leq |\mathcal{P}| - 1$ (and this bound can be tight), the hypercube-width has proved to be much more elusive. It was shown by the authors mentioned above that $w^*(\mathcal{P}) \leq |\mathcal{P}|^2/4$, but they conjectured that in fact $w^*(\mathcal{P}) \leq |\mathcal{P}|$ for any finite poset $\mathcal{P}$. In this paper we prove this conjecture. The proof uses Hall's theorem for bipartite graphs as a precision tool for modifying an existing copy of our poset. 3 authors · Sep 30
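A quick sanity-check example, worked out here directly from the definitions above rather than taken from the paper: for a chain $\mathcal{C}_k$ on $k$ elements, both invariants are easy to compute and show that the bound $h^*(\mathcal{P}) \leq |\mathcal{P}| - 1$ can be tight.

```latex
% For the k-element chain C_k:
\[
  h^*(\mathcal{C}_k) = k - 1 = |\mathcal{C}_k| - 1,
  \qquad
  w^*(\mathcal{C}_k) = k - 1 \le |\mathcal{C}_k|,
\]
% since an induced copy of C_k needs k distinct set sizes (so the levels 0, ..., k-2 avoid it),
% while the chain \emptyset \subset \{1\} \subset \cdots \subset [k-1] realises it inside the
% cube on [k-1] using only sets of size at most k-1, consistent with the conjecture proved here.
```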
- An elementary and unified proof of Grothendieck's inequality We present an elementary, self-contained proof of Grothendieck's inequality that unifies the real and complex cases and yields both the Krivine and Haagerup bounds, the current best-known explicit bounds for the real and complex Grothendieck constants respectively. This article is intended to be pedagogical, combining and streamlining known ideas of Lindenstrauss–Pełczyński, Krivine, and Haagerup into a proof that needs only univariate calculus, basic complex variables, and a modicum of linear algebra as prerequisites. 3 authors · Nov 28, 2017
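For reference, the inequality being proved, stated in its standard bilinear form (the scalars $s_i, t_j$ are signs in the real case and unimodular complex numbers in the complex case): there is a universal constant $K_G$ such that for every $m \times n$ matrix $(a_{ij})$ and all unit vectors $x_i, y_j$ in a Hilbert space,

```latex
\[
  \Big| \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} \, \langle x_i, y_j \rangle \Big|
  \;\le\;
  K_G \cdot \max_{|s_i| = |t_j| = 1} \Big| \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} \, s_i \, t_j \Big| .
\]
% Krivine's bound for the real constant is K_G^{\mathbb{R}} \le \pi / (2 \ln(1 + \sqrt{2})) \approx 1.782;
% Haagerup's argument gives the best known explicit bound in the complex case.
```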
- On the minimal power of q in a Kazhdan-Lusztig polynomial For $w$ in the symmetric group, we provide an exact formula for the smallest positive power $q^{h(w)}$ appearing in the Kazhdan-Lusztig polynomial $P_{e,w}(q)$. We also provide a tight upper bound on $h(w)$ in simply-laced types, resolving a conjecture of Billey-Postnikov from 2002. 2 authors · Mar 23, 2023
- Hardy inequalities for fractional integrals on general domains We prove a sharp Hardy inequality for fractional integrals for functions that are supported on a general domain. The constant is the same as the one for the half-space and hence our result settles a recent conjecture of Bogdan and Dyda. 2 authors · Jul 17, 2009
- Fundamental Tradeoffs in Learning with Prior Information We seek to understand fundamental tradeoffs between the accuracy of prior information that a learner has on a given problem and its learning performance. We introduce the notion of prioritized risk, which differs from traditional notions of minimax and Bayes risk by allowing us to study such fundamental tradeoffs in settings where reality does not necessarily conform to the learner's prior. We present a general reduction-based approach for extending classical minimax lower-bound techniques in order to lower bound the prioritized risk for statistical estimation problems. We also introduce a novel generalization of Fano's inequality (which may be of independent interest) for lower bounding the prioritized risk in more general settings involving unbounded losses. We illustrate the ability of our framework to provide insights into tradeoffs between prior information and learning performance for problems in estimation, regression, and reinforcement learning. 1 authors · Apr 26, 2023
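For orientation, the classical Fano inequality that the paper generalizes, in its standard multiple-hypothesis-testing form: if $V$ is uniform on a finite set of size $M \ge 2$ and $\hat V$ is any estimator of $V$ from an observation $X$, then

```latex
\[
  \mathbb{P}\big(\hat V \neq V\big) \;\ge\; 1 - \frac{I(V; X) + \log 2}{\log M},
\]
% where I(V; X) is the mutual information; classical minimax lower bounds follow by
% reducing estimation over a chosen family of hypotheses to such a testing problem.
```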
- Almost sure bounds for a weighted Steinhaus random multiplicative function We obtain almost sure bounds for the weighted sum $\sum_{n \leq t} \frac{f(n)}{n}$, where $f(n)$ is a Steinhaus random multiplicative function. Specifically, we obtain the bounds predicted by exponentiating the law of the iterated logarithm, giving sharp upper and lower bounds. 1 authors · Jul 2, 2023
- Kernel Density Estimators in Large Dimensions This paper studies kernel density estimation for a high-dimensional distribution $\rho(x)$. Traditional approaches have focused on the limit of a large number of data points $n$ and fixed dimension $d$. We analyze instead the regime where both the number $n$ of data points $y_i$ and their dimensionality $d$ grow with a fixed ratio $\alpha = (\log n)/d$. Our study reveals three distinct statistical regimes for the kernel-based estimate of the density $\hat\rho_h^{D}(x) = \frac{1}{n h^d} \sum_{i=1}^n K\!\left(\frac{x - y_i}{h}\right)$, depending on the bandwidth $h$: a classical regime for large bandwidth where the Central Limit Theorem (CLT) holds, akin to the one found in traditional approaches. Below a certain value of the bandwidth, $h_{\mathrm{CLT}}(\alpha)$, we find that the CLT breaks down: the statistics of $\hat\rho_h^{D}(x)$ for a fixed $x$ drawn from $\rho(x)$ are given by a heavy-tailed distribution (an $\alpha$-stable distribution). In particular, below a value $h_G(\alpha)$, we find that $\hat\rho_h^{D}(x)$ is governed by extreme value statistics: only a few points in the database matter and give the dominant contribution to the density estimator. We provide a detailed analysis for high-dimensional multivariate Gaussian data. We show that the optimal bandwidth threshold based on the Kullback-Leibler divergence lies in the new statistical regime identified in this paper. Our findings reveal limitations of classical approaches, show the relevance of these new statistical regimes, and offer new insights for kernel density estimation in high-dimensional settings. 2 authors · Aug 11, 2024
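A minimal sketch of the estimator analyzed above (a plain Gaussian-kernel implementation for illustration; the dimension, bandwidths, and sample size are arbitrary assumptions, and the heavy-tailed and extreme-value regimes are statements about the estimator's statistics, not something this snippet reproduces):

```python
import numpy as np

def kde(x, data, h):
    """Kernel density estimate rho_hat(x) = (1 / (n h^d)) * sum_i K((x - y_i) / h)
    with a standard Gaussian kernel K."""
    n, d = data.shape
    u = (x - data) / h                                        # (n, d) scaled differences
    k = np.exp(-0.5 * np.sum(u * u, axis=1)) / (2 * np.pi) ** (d / 2)
    return k.sum() / (n * h**d)

# High-dimensional toy setting: n and d grow together, alpha = log(n) / d held roughly fixed.
rng = np.random.default_rng(0)
d, n = 50, 5000                     # alpha = log(5000) / 50 ≈ 0.17 (illustrative only)
data = rng.standard_normal((n, d))  # multivariate Gaussian data, as in the paper's analysis
x = rng.standard_normal(d)          # a fresh point drawn from the same density

for h in (0.5, 1.0, 2.0):           # the interesting phenomena appear as h is lowered
    print(f"h = {h:3.1f}   rho_hat(x) = {kde(x, data, h):.3e}")
```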