Paper: C. Huang, Z. Jiang, and Z. Zhang. Notes on stochastic orders and submartingale-like assumptions for optimization. In preparation, 2026.
Overview
Submartingale-like assumptions of the form $P(Y_k = 1 \mid \mathcal{F}_{k-1}) \geq p$ are ubiquitous in the convergence analysis of randomized optimization methods. This paper establishes a clean comparison principle: any such dependent Bernoulli process is stochastically larger than an i.i.d. Bernoulli($p$) process in the usual stochastic order. This means that probabilistic bounds for all increasing functionals transfer directly from the simpler i.i.d. case.
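As a quick illustrative sketch (not taken from the paper), the following Python snippet simulates a hypothetical dependent Bernoulli chain whose conditional success probability never drops below $p$, alongside a plain i.i.d. Bernoulli($p$) sequence. The `boost` rule is an invented example of history dependence; the comparison principle predicts the dependent running average is stochastically at least as large as the i.i.d. one.

```python
import random

def run_chain(n, p, boost, rng):
    """One sample path of a dependent Bernoulli chain whose conditional
    success probability is p + boost after a success, hence always >= p,
    so the submartingale-like assumption P(Y_k = 1 | past) >= p holds.
    boost = 0 recovers the i.i.d. Bernoulli(p) case."""
    ys, prev = [], 0
    for _ in range(n):
        q = min(1.0, p + boost * prev)  # conditional probability, >= p
        prev = 1 if rng.random() < q else 0
        ys.append(prev)
    return ys

rng = random.Random(0)
n, p, trials = 50, 0.6, 2000
dep_means = [sum(run_chain(n, p, 0.3, rng)) / n for _ in range(trials)]
iid_means = [sum(run_chain(n, p, 0.0, rng)) / n for _ in range(trials)]
# Stochastic dominance predicts the dependent average is at least as
# large in distribution as the i.i.d. one; compare empirical means:
print(f"dependent: {sum(dep_means) / trials:.3f}, "
      f"i.i.d.: {sum(iid_means) / trials:.3f}")
```

Here the dependent chain's average comes out well above $p$, while the i.i.d. average concentrates near $p = 0.6$, consistent with the domination claimed in the overview.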
Motivation
- Submartingale-like assumptions appear in trust-region, line-search, cubic-regularization, and direct-search methods, yet each paper re-derives tail bounds (e.g., Hoeffding-type inequalities) from scratch using ad hoc arguments.
- A natural question: can we systematically relate these dependent sequences to their i.i.d. counterparts, so that classical results apply immediately?
Main Contributions
- Stochastic domination theorem: if $\{Y_k\}$ satisfies $P(Y_k = 1 \mid \mathcal{F}_{k-1}) \geq p$, then $(Y_0, \ldots, Y_{n-1})$ stochastically dominates an i.i.d. Bernoulli($p$) vector $(Z_0, \ldots, Z_{n-1})$ in the usual (multivariate) stochastic order. Two consequences:
- Hoeffding-type bound: $$P\biggl(\frac{1}{n}\sum_{k=0}^{n-1} Y_k \geq p - \varepsilon\biggr) \geq 1 - e^{-2n\varepsilon^2},$$ with the same rate as the i.i.d. case.
- Kolmogorov-type bound: $$P\biggl(\inf_{k \geq n} \frac{1}{k}\sum_{\ell=0}^{k-1} Y_\ell \geq p - \varepsilon\biggr) \geq 1 - e^{-2n\varepsilon^2},$$ providing uniform concentration over all future iterations.
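Because the theorem transfers these bounds from the i.i.d. case, a sanity check on the i.i.d. baseline is easy to run. The sketch below (an assumption-free numerical check, not code from the paper) estimates the left-hand side of the Hoeffding-type bound for i.i.d. Bernoulli($p$) draws and compares it with $1 - e^{-2n\varepsilon^2}$.

```python
import math
import random

rng = random.Random(1)
n, p, eps, trials = 100, 0.6, 0.1, 5000

# Empirical probability that the sample mean of n i.i.d. Bernoulli(p)
# draws stays above p - eps:
hits = 0
for _ in range(trials):
    s = sum(1 for _ in range(n) if rng.random() < p)
    if s / n >= p - eps:
        hits += 1
empirical = hits / trials

# Hoeffding lower bound on that probability:
bound = 1 - math.exp(-2 * n * eps ** 2)
print(f"empirical: {empirical:.3f}, Hoeffding bound: {bound:.3f}")
```

The empirical probability should land comfortably above the bound, since Hoeffding's inequality is not tight for moderate $n$.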
Key Ideas
The proof leverages the multivariate usual stochastic order (defined via upper sets in $\mathbb{R}^n$) and a component-wise comparison lemma: it suffices to show that each $Y_k$, conditioned on any realization of the past, stochastically dominates a Bernoulli($p$) variable, which is exactly what the submartingale-like assumption guarantees. The paper also explores the connection between upper sets, Alexandrov topology, and preorders, providing a unified mathematical framework.
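The component-wise comparison can be made concrete with the classical coupling construction: drive both processes with the same uniform variables. The sketch below is a minimal illustration under an invented history-dependent rule (the `cond_prob` lambda is hypothetical); because the conditional probability always satisfies $q \geq p$, the coupled paths satisfy $Y_k \geq Z_k$ pointwise, which implies stochastic domination.

```python
import random

def coupled_draws(n, p, cond_prob, rng):
    """Coupling behind the comparison lemma: drive both processes with
    the same uniforms U_k.  Y_k = 1{U_k <= q_k} where q_k = cond_prob(past)
    satisfies q_k >= p, and Z_k = 1{U_k <= p} is i.i.d. Bernoulli(p).
    Since q_k >= p, U_k <= p implies U_k <= q_k, so Y_k >= Z_k pointwise."""
    ys, zs = [], []
    for _ in range(n):
        q = cond_prob(ys)   # conditional probability given the past
        assert q >= p       # the submartingale-like assumption
        u = rng.random()
        ys.append(1 if u <= q else 0)
        zs.append(1 if u <= p else 0)
    return ys, zs

rng = random.Random(2)
p = 0.5
# Hypothetical rule: success probability grows with past successes.
ys, zs = coupled_draws(200, p, lambda past: min(1.0, p + 0.002 * sum(past)), rng)
print(all(y >= z for y, z in zip(ys, zs)))  # prints True: pointwise domination
```

Pointwise domination on a common probability space implies domination in the usual stochastic order for every increasing functional, which is how bounds transfer from the i.i.d. case.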