Coupon collector's problem

In probability theory, the coupon collector's problem describes "collect all coupons and win" contests. It asks the following question: suppose that there is an urn of n different coupons, from which coupons are drawn uniformly at random with replacement. What is the probability that more than t draws are needed to collect all n coupons? An alternative statement is: given n coupons, how many draws with replacement are expected before each coupon has been drawn at least once? The mathematical analysis of the problem shows that the expected number of trials grows as \Theta(n \log n). For example, when n = 50 it takes about 225 trials on average to collect all 50 coupons.

Understanding the problem

The key to solving the problem is noticing that it takes very little time to collect the first few coupons, but a long time to collect the last few. For 50 coupons, it takes on average 50 draws to collect the very last coupon once the other 49 have been collected. This is why the expected time to collect all coupons is much longer than 50. The idea is to split the total time into 50 intervals, one per new coupon, whose expected lengths can be computed separately.
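To see this concretely, a short Monte Carlo simulation can estimate both the total collection time for n = 50 and the share of it spent on the final coupon. This is a minimal Python sketch; the function name and trial count are illustrative, not from the source:

```python
import random

def collect_all(n):
    """Draw coupons uniformly with replacement until all n are seen.
    Returns (total draws, draws spent waiting for the last coupon)."""
    seen = set()
    draws = draws_for_last = 0
    while len(seen) < n:
        if len(seen) == n - 1:
            draws_for_last += 1  # still hunting the final coupon
        seen.add(random.randrange(n))
        draws += 1
    return draws, draws_for_last

trials = 10_000
totals, lasts = zip(*(collect_all(50) for _ in range(trials)))
print(sum(totals) / trials)  # about 225 on average
print(sum(lasts) / trials)   # about 50 on average, just for the last coupon
```

The second figure confirms that the final coupon alone accounts for roughly 50 of the 225 expected draws.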

Answer

The expected number of draws E(T) = n · H_n (derived below) for selected numbers of coupons n:

  n       1    2    3     10      20      50       100
  E(T)    1    3    5.5   29.29   71.95   224.96   518.74
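These values, and the full table for 1 to 100 coupons, can be regenerated from the closed form derived in the next section; a minimal Python sketch:

```python
from fractions import Fraction

def expected_draws(n):
    # Exact expectation E(T) = n * H_n, computed as a rational number.
    return n * sum(Fraction(1, k) for k in range(1, n + 1))

for n in (1, 2, 3, 10, 20, 50, 100):
    print(n, float(expected_draws(n)))
```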

Solution

Calculating the expectation

Let T be the time to collect all n coupons, and let t_i be the time to collect the i-th new coupon after i − 1 coupons have been collected. Think of T and t_i as random variables. Observe that the probability of drawing a new coupon is p_i = (n − (i − 1))/n. Therefore, t_i has a geometric distribution with expectation 1/p_i. By linearity of expectation we have:

\begin{aligned}
\operatorname{E}(T) &= \operatorname{E}(t_1) + \operatorname{E}(t_2) + \cdots + \operatorname{E}(t_n) = \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n} \\
&= \frac{n}{n} + \frac{n}{n-1} + \cdots + \frac{n}{1} = n \cdot \left( \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} \right) = n \cdot H_n.
\end{aligned}

Here H_n is the n-th harmonic number. Using the asymptotics of the harmonic numbers, we obtain:

\operatorname{E}(T) = n \cdot H_n = n \log n + \gamma n + \frac{1}{2} + o(1), \quad \text{as } n \to \infty,

where \gamma \approx 0.5772156649 is the Euler–Mascheroni constant.
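A quick numerical check of the asymptotic formula against the exact value n · H_n (a minimal Python sketch; the sample sizes are illustrative):

```python
import math

GAMMA = 0.5772156649  # Euler–Mascheroni constant

def exact(n):
    # E(T) = n * H_n
    return n * sum(1 / k for k in range(1, n + 1))

def asymptotic(n):
    # n log n + gamma * n + 1/2
    return n * math.log(n) + GAMMA * n + 0.5

for n in (10, 50, 100, 1000):
    print(n, round(exact(n), 2), round(asymptotic(n), 2))
```

Already at n = 50 the two agree to within a few hundredths of a draw.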

Now one can use the Markov inequality to bound the desired probability:

\operatorname{P}(T \geq c \, n H_n) \leq \frac{1}{c}.
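For example, with c = 2 and n = 50 this gives P(T ≥ 2nH_n) ≤ 1/2: at most half of all runs need more than about 450 draws. This is a rather weak bound; the variance computed next yields a much sharper one.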

Calculating the variance

Using the independence of the random variables t_i, we obtain:

\begin{aligned}
\operatorname{Var}(T) &= \operatorname{Var}(t_1) + \operatorname{Var}(t_2) + \cdots + \operatorname{Var}(t_n) \\
&= \frac{1 - p_1}{p_1^2} + \frac{1 - p_2}{p_2^2} + \cdots + \frac{1 - p_n}{p_n^2} \\
&< \frac{n^2}{n^2} + \frac{n^2}{(n-1)^2} + \cdots + \frac{n^2}{1^2} \\
&= n^2 \cdot \left( \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{n^2} \right) < \frac{\pi^2}{6} n^2.
\end{aligned}

Here \pi^2/6 is the value \zeta(2) of the Riemann zeta function (see the Basel problem).

Now one can use the Chebyshev inequality to bound the desired probability:

\operatorname{P}\left( |T - n H_n| \geq c n \right) \leq \frac{\pi^2}{6 c^2}.
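For example, for n = 50, where nH_n ≈ 225, taking c = 3 gives P(|T − 225| ≥ 150) ≤ \pi^2/54 ≈ 0.18, so with probability above 0.81 all 50 coupons are collected within 75 to 375 draws.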

Tail estimates

A different upper bound can be derived from the following observation. Let Z_i^r denote the event that the i-th coupon was not picked in the first r trials. Then:

P\left[ Z_i^r \right] = \left( 1 - \frac{1}{n} \right)^r \leq e^{-r/n}.

Thus, for r = \beta n \log n, we have P\left[ Z_i^r \right] \leq e^{(-\beta n \log n)/n} = n^{-\beta}. Taking a union bound over all n coupons then gives:

P\left[ T > \beta n \log n \right] = P\left[ \bigcup_i Z_i^{\beta n \log n} \right] \leq n \cdot P\left[ Z_1^{\beta n \log n} \right] \leq n^{-\beta + 1}.
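For instance, taking \beta = 2 gives P[T > 2n \log n] \leq 1/n: for n = 50, after 2 · 50 · ln 50 ≈ 391 draws, some coupon is still missing with probability at most 0.02.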

Extensions and generalizations

  • Paul Erdős and Alfréd Rényi proved the following limit theorem for the distribution of T, a further sharpening of the bounds above:
\operatorname{P}(T < n \log n + c n) \to e^{-e^{-c}}, \quad \text{as } n \to \infty.
  • Donald J. Newman and Lawrence Shepp found a generalization of the coupon collector's problem in which m copies of each coupon need to be collected. Let T_m be the first time m copies of each coupon are collected. They showed that the expectation in this case satisfies:
\operatorname{E}(T_m) = n \log n + (m - 1) \, n \log \log n + O(n), \quad \text{as } n \to \infty.
Here m is fixed. When m = 1 we get the earlier formula for the expectation.
  • A common generalization, also due to Erdős and Rényi:
\operatorname{P}\left( T_m < n \log n + (m - 1) \, n \log \log n + c n \right) \to e^{-e^{-c} / (m - 1)!}, \quad \text{as } n \to \infty.
  • Wolfgang Stadje solved the case where the stickers are bought in packets that contain no duplicates. The results show that for practical applications, e.g. packets of five stickers, the effect of the packets is negligible.
  • Sylvain Sardy and Yvan Velenik used Monte Carlo simulation to derive an optimized strategy that includes swapping and purchasing missing stickers.
  • In the general case of a nonuniform probability distribution p_1, …, p_n, according to Philippe Flajolet (a numerical check appears after this list):
E(T) = \int_0^\infty \left( 1 - \prod_{i=1}^n \left( 1 - e^{-p_i t} \right) \right) dt.
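As an illustration of Flajolet's formula, the following minimal Python sketch (the weights, cutoff, and function names are illustrative, not from the source) evaluates the integral numerically for a small nonuniform distribution and compares it with a direct simulation:

```python
import math
import random

# A small nonuniform distribution (illustrative weights).
p = [0.5, 0.3, 0.2]

def flajolet_expectation(p, t_max=400.0, steps=400_000):
    """Midpoint-rule evaluation of the integral of
    1 - prod_i (1 - exp(-p_i * t)) over [0, t_max].
    The integrand decays like exp(-min(p) * t), so a finite cutoff suffices."""
    dt = t_max / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * dt
        total += (1.0 - math.prod(1.0 - math.exp(-pi * t) for pi in p)) * dt
    return total

def simulate(p, trials=20_000):
    """Direct Monte Carlo estimate of E(T) for comparison."""
    draws_total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < len(p):
            seen.add(random.choices(range(len(p)), weights=p)[0])
            draws += 1
        draws_total += draws
    return draws_total / trials

print(flajolet_expectation(p))  # about 6.65 for these weights
print(simulate(p))              # should agree to within sampling error
```

With uniform weights p_i = 1/n the integral reduces to the familiar n · H_n.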
