This lecture introduces the concept of almost sure convergence. Consider a sequence of random variables $X_1$, $X_2$, $X_3$, $\cdots$ defined on an underlying sample space $S$. Each $X_n$ is a function from $S$ to the real numbers, so if we fix a sample point $s \in S$, the values $X_1(s), X_2(s), X_3(s), \cdots$ form an ordinary sequence of real numbers, which may or may not converge. Almost sure convergence requires this sequence of real numbers to converge for all sample points $s$, except possibly on a set of sample points having probability zero.

To build intuition, consider the following random experiment: a fair coin is tossed once. Here, the sample space has only two elements, $S=\{H,T\}$. We define a sequence of random variables $X_1$, $X_2$, $X_3$, $\cdots$ on this sample space as follows:
\begin{align}%\label{}
X_n(s) = \begin{cases} \frac{n}{n+1} & \textrm{if } s=H \\ (-1)^n & \textrm{if } s=T \end{cases}
\end{align}
For each of the possible outcomes ($H$ or $T$), we can ask whether the resulting sequence of real numbers converges or not.
If the outcome is $H$, then we have $X_n(H)=\frac{n}{n+1}$, so we obtain the sequence
\begin{align}%\label{}
\frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \frac{4}{5}, \cdots.
\end{align}
This sequence converges to $1$ as $n$ goes to infinity. If the outcome is $T$, then we have $X_n(T)=(-1)^n$, so we obtain the sequence
\begin{align}%\label{}
-1, 1, -1, 1, -1, \cdots.
\end{align}
This sequence does not converge, as it oscillates between $-1$ and $1$ forever. Thus, the sequence $X_n(s)$ converges for the sample point $s=H$ but not for $s=T$.
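To make the two sample paths concrete, here is a small Python sketch (my own illustration; the function name `X` and the printed indices are just for demonstration):

```python
from fractions import Fraction

def X(n, outcome):
    """Value of the n-th random variable at the sample point `outcome`."""
    if outcome == "H":
        return Fraction(n, n + 1)   # n/(n+1): 1/2, 2/3, 3/4, ...
    return (-1) ** n                # (-1)^n: -1, 1, -1, ...

heads_path = [X(n, "H") for n in range(1, 6)]
tails_path = [X(n, "T") for n in range(1, 6)]

print([str(x) for x in heads_path])  # ['1/2', '2/3', '3/4', '4/5', '5/6']
print(tails_path)                    # [-1, 1, -1, 1, -1]
print(float(1 - X(10**6, "H")))      # distance to the limit 1 is about 1e-6
```

The heads path gets arbitrarily close to $1$, while the tails path never settles, exactly as computed above.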
In general, if the probability that the sequence $X_n(s)$ converges to $X(s)$ is equal to $1$, we say that $X_n$ converges to $X$ almost surely and write $X_n \ \xrightarrow{a.s.}\ X$.

Definition. A sequence of random variables $X_1$, $X_2$, $X_3$, $\cdots$ converges almost surely to a random variable $X$, shown by $X_n \ \xrightarrow{a.s.}\ X$, if
\begin{align}%\label{}
P\left(\left\{s \in S: \lim_{n\rightarrow \infty} X_n(s)=X(s)\right\}\right)=1.
\end{align}
Equivalently, $X_n$ converges to $X$ almost surely if and only if the set of sample points for which $X_n(s)$ does not converge to $X(s)$ is a zero-probability event. Note that for almost sure convergence to be meaningful, all the random variables need to be defined on the same probability space. The definition extends to sequences of random vectors in a straightforward manner: a sequence of random vectors converges almost surely if and only if each sequence of components converges almost surely.

In the coin-toss example above, the sequence $X_n(s)$ converges to $1$ if and only if the outcome is $H$, so
\begin{align}%\label{}
P\left( \left\{s_i \in S: \lim_{n\rightarrow \infty} X_n(s_i)=1\right\}\right) =P(H)=\frac{1}{2}.
\end{align}
Since this probability is not equal to $1$, the sequence does not converge almost surely to the constant random variable $1$.
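On a finite sample space the probability computation above can be phrased mechanically: record the limit of the path at each sample point, collect the points whose limit matches the candidate, and add up their probabilities. A minimal sketch (the dictionaries below are my own encoding of the coin example, not notation from the text):

```python
# Sample-point limits of X_n in the coin example: the H-path tends to 1,
# while the T-path oscillates and has no limit (encoded here as None).
limits = {"H": 1.0, "T": None}
P = {"H": 0.5, "T": 0.5}         # fair coin

candidate = 1.0                   # candidate almost-sure limit
conv_set = [s for s in limits if limits[s] == candidate]
prob = sum(P[s] for s in conv_set)

print(conv_set, prob)  # ['H'] 0.5 -- strictly less than 1, so no a.s. convergence to 1
```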
Example. Consider the sample space $S=[0,1]$ with a probability measure that is uniform on this space, i.e., sub-intervals of $[0,1]$ are assigned a probability equal to their length. Define the sequence of random variables
\begin{align}%\label{}
X_n(s) = \begin{cases} 1 & \textrm{if } 0 \leq s < \frac{n+1}{2n} \\ 0 & \textrm{otherwise} \end{cases}
\end{align}
and the random variable
\begin{align}%\label{}
X(s) = \begin{cases} 1 & \textrm{if } 0 \leq s < \frac{1}{2} \\ 0 & \textrm{otherwise} \end{cases}
\end{align}
We claim that $X_n \ \xrightarrow{a.s.}\ X$. Define the set $A$ as follows:
\begin{align}%\label{}
A= \left\{s \in S: \lim_{n\rightarrow \infty} X_n(s)=X(s)\right\}.
\end{align}
We need to prove that $P(A)=1$.
Let's first find $A$. Note that $\frac{n+1}{2n}>\frac{1}{2}$, so for any $s \in [0,\frac{1}{2})$, we have
\begin{align}%\label{}
X_n(s)=X(s)=1, \qquad \textrm{ for all }n.
\end{align}
Therefore, we conclude that $[0,\frac{1}{2}) \subset A$.
Now if $s> \frac{1}{2}$, then $2s-1>0$, and since $\frac{n+1}{2n} \leq s$ whenever $n \geq \frac{1}{2s-1}$, we can write
\begin{align}%\label{}
X_n(s)=0, \qquad \textrm{ for all }n>\frac{1}{2s-1}.
\end{align}
Therefore,
\begin{align}%\label{}
\lim_{n\rightarrow \infty} X_n(s)=0=X(s), \qquad \textrm{ for all }s>\frac{1}{2}.
\end{align}
We conclude $(\frac{1}{2},1] \subset A$. You can check that $s=\frac{1}{2} \notin A$, since
\begin{align}%\label{}
X_n\left(\frac{1}{2}\right)=1, \qquad \textrm{ for all }n,
\end{align}
while $X\left(\frac{1}{2}\right)=0$. Therefore,
\begin{align}%\label{}
A=\left[0,\frac{1}{2}\right) \cup \left(\frac{1}{2}, 1\right]=S-\left\{\frac{1}{2}\right\}.
\end{align}
Since $P(A)=1$, we conclude $ X_n \ \xrightarrow{a.s.}\ X$.
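A quick numerical sanity check of this example (my own sketch; `X_n` and `X` below transcribe the two definitions above):

```python
def X_n(n, s):
    # Indicator of the interval [0, (n+1)/(2n)).
    return 1 if 0 <= s < (n + 1) / (2 * n) else 0

def X(s):
    # Indicator of the interval [0, 1/2).
    return 1 if 0 <= s < 0.5 else 0

large_n = 10**6
# Away from s = 1/2 the path has already settled at X(s) by n = 10**6.
for s in [0.0, 0.1, 0.3, 0.49, 0.51, 0.7, 0.9, 1.0]:
    assert X_n(large_n, s) == X(s)

# At the single bad point s = 1/2 the path is constantly 1 but X(1/2) = 0;
# a single point has probability zero under the uniform measure.
print(X_n(large_n, 0.5), X(0.5))  # 1 0
```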
An important example of almost sure convergence is the strong law of large numbers (SLLN). Here, we state the SLLN without proof; the interested reader can find a proof in [19]. A simpler proof can be obtained if we assume the finiteness of the fourth moment.

Theorem (SLLN). Let $X_1$, $X_2$, $\cdots$, $X_n$ be i.i.d. random variables with a finite expected value $EX_i=\mu < \infty$, and define
\begin{align}%\label{}
M_n=\frac{X_1+X_2+...+X_n}{n}.
\end{align}
Then $M_n \ \xrightarrow{a.s.}\ \mu$.

Almost sure convergence is a type of convergence that is stronger than convergence in probability, which in turn is stronger than convergence in distribution:
\begin{align}%\label{}
[\textrm{almost sure convergence}] \Rightarrow [\textrm{convergence in probability}] \Rightarrow [\textrm{convergence in distribution}].
\end{align}
Convergence in $L^p$ ($p \geq 1$), i.e., $E|X_n-X|^p \rightarrow 0$, also implies convergence in probability. On the other hand, almost-sure and mean-square convergence do not imply each other.
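The SLLN is easy to watch numerically. Below is a sketch under assumed parameters (i.i.d. Bernoulli draws with $\mu=0.3$; the seed and sample size are arbitrary choices of mine, not from the text):

```python
import random

random.seed(1)
mu = 0.3
n = 200_000
draws = [1 if random.random() < mu else 0 for _ in range(n)]  # i.i.d. Bernoulli(0.3)
M_n = sum(draws) / n                                          # the sample mean M_n

# On almost every sample path M_n settles near mu; here the deviation is tiny
# (the standard deviation of M_n is about 0.001 at this sample size).
print(abs(M_n - mu) < 0.01)  # True
```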
In some problems, proving almost sure convergence directly can be difficult, so it is desirable to know some sufficient conditions. Here is a result that is sometimes useful when we would like to prove almost sure convergence.

Theorem. Consider the sequence $X_1$, $X_2$, $X_3$, $\cdots$. For any $\epsilon>0$, define the set of events
\begin{align}%\label{}
A_m=\{|X_n-X|< \epsilon, \qquad \textrm{for all }n \geq m \}.
\end{align}
Then $X_n \ \xrightarrow{a.s.}\ X$ if and only if for any $\epsilon>0$, we have
\begin{align}%\label{}
\lim_{m\rightarrow \infty} P(A_m) =1.
\end{align}

From this criterion and the union bound we obtain a convenient sufficient condition; it is essentially the Borel-Cantelli lemma, which guarantees good behavior outside an event of probability zero.

Theorem. Consider a sequence $\{X_n, n=1,2,3, \cdots \}$. If for all $\epsilon>0$, we have
\begin{align}%\label{eq:union-bound}
\sum_{n=1}^{\infty} P\big(|X_n-X| > \epsilon \big) < \infty,
\end{align}
then $X_n \ \xrightarrow{a.s.}\ X$.
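To see the difference between a summable and a non-summable tail, compare partial sums of $1/n^2$ (summable, so the sufficient condition applies) with $1/n$ (divergent, so the condition gives no conclusion). A small numeric sketch of mine:

```python
def partial_sum(p, N):
    """Partial sum p(1) + p(2) + ... + p(N)."""
    return sum(p(n) for n in range(1, N + 1))

# If P(|X_n - X| > eps) ~ 1/n**2, the partial sums approach pi**2/6 = 1.6449...,
# so the series converges and the theorem yields almost sure convergence.
sq = partial_sum(lambda n: 1.0 / n**2, 10**6)

# If P(|X_n - X| > eps) ~ 1/n, the partial sums grow like log(N) without bound.
harm = partial_sum(lambda n: 1.0 / n, 10**6)

print(sq)    # close to 1.6449
print(harm)  # about log(10**6) + 0.577, i.e. roughly 14.39
```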
Example. Let $X_1$, $X_2$, $X_3$, $\cdots$ be independent random variables, where $X_n \sim Bernoulli\left(\frac{1}{n} \right)$ for $n=2,3, \cdots$. We show that $X_n$ does not converge to $0$ almost surely. For $0<\epsilon<1$, check that
\begin{align}%\label{}
\sum_{n=1}^{\infty} P\big(|X_n| > \epsilon \big) = \infty,
\end{align}
since $P(|X_n| > \epsilon)=P(X_n=1)=\frac{1}{n}$ and the harmonic series diverges. Because the $X_n$ are independent, the second Borel-Cantelli lemma then tells us that, with probability one, $X_n=1$ for infinitely many $n$. Hence $P(\{X_n = 0, \textrm{ for every } n \geq m\})$ does not converge to $1$ as $m \rightarrow \infty$, and $X_n$ does not converge almost surely to $0$. Note, however, that $P(|X_n| > \epsilon)=\frac{1}{n} \rightarrow 0$, so $X_n$ does converge to $0$ in probability: this example shows that convergence in probability does not imply almost sure convergence.
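A simulation sketch (my own; the seed and horizon are arbitrary) shows why such a path cannot settle at $0$: successes $X_n=1$ keep arriving, just more and more rarely:

```python
import random

random.seed(4)
N = 100_000
# Indices n with X_n = 1, where X_n ~ Bernoulli(1/n) independently.
ones = [n for n in range(2, N) if random.random() < 1 / n]

# The expected number of successes up to N is sum(1/n) ~ log(N), so only a
# handful of ones appear, scattered out to arbitrarily large indices.
print(len(ones), ones[:5])
```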
We end this section by stating a version of the continuous mapping theorem, which is sometimes useful when proving the convergence of random variables. Let $h$ be a continuous function, and let the superscripts "d", "p", and "a.s." denote convergence in distribution, convergence in probability, and almost sure convergence, respectively.

Theorem (continuous mapping).
If $X_n \ \xrightarrow{d}\ X$, then $h(X_n) \ \xrightarrow{d}\ h(X)$.
If $X_n \ \xrightarrow{p}\ X$, then $h(X_n) \ \xrightarrow{p}\ h(X)$.
If $X_n \ \xrightarrow{a.s.}\ X$, then $h(X_n) \ \xrightarrow{a.s.}\ h(X)$.

A related result of Slutsky type is also worth recording:
\begin{align}%\label{}
|Y_{n}-X_{n}| \ \xrightarrow{p}\ 0, \quad X_{n}\ \xrightarrow{d}\ X \quad \Rightarrow \quad Y_{n}\ \xrightarrow{d}\ X.
\end{align}
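A tiny deterministic illustration of the almost-sure case (mine, not from the text): on a sample path where $X_n(s)=1+1/n \rightarrow 1$, the continuous map $h(x)=x^2$ sends the path to a path converging to $h(1)=1$:

```python
def h(x):
    return x * x  # a continuous function

# One fixed sample path X_n(s) = 1 + 1/n, evaluated at a few indices.
path = [1 + 1 / n for n in (1, 10, 100, 1000, 10**6)]
mapped = [h(x) for x in path]

# The original path approaches 1 and the mapped path approaches h(1) = 1.
print(abs(path[-1] - 1), abs(mapped[-1] - 1))
```

Since almost sure convergence is exactly pathwise convergence off a null set, applying a continuous $h$ path by path is all the theorem's a.s. statement requires.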
Reference: Taboga, Marco (2017). "Almost sure convergence", Lectures on probability theory and mathematical statistics, Third edition. Kindle Direct Publishing.