Volume 30 - Issue 2 - December 2005

The problem of the broken rod and Ernesto Cesàro's early work in probability
   Eugene Seneta, François Jongmans
   pp. 67–76
A rod of length 1 is broken into n pieces at n – 1 randomly chosen points. What is the probability that k segments are of length greater than x? This problem and its solution go back to 1873 and are closely associated with the foundation of the Société Mathématique de France. The most complete solution was, remarkably, given a little later by Ernesto Cesàro, an Italian writing in French at the very beginning of a research career not generally associated with geometric probability. Various solutions have been rediscovered in the 20th century in a number of important probabilistic-model settings. We review the prehistory of the problem in the late 19th century, to which credit for its solution belongs. We also provide a modern solution to the problem of the broken rod and a verification of Cesàro's expression for the solution.
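The probability described in this abstract is easy to estimate by simulation. The following is a minimal Monte Carlo sketch (illustrative only, not the paper's exact or historical solution): break the unit rod at n – 1 uniform points and count how often exactly k pieces exceed x.

```python
import random

def broken_rod_prob(n, k, x, trials=100_000, seed=0):
    """Monte Carlo estimate of P(exactly k of n pieces exceed length x).

    A rod of length 1 is broken at n - 1 uniformly chosen points;
    we count how often exactly k of the n resulting segments are
    longer than x.  (Illustrative sketch only; the paper treats the
    exact solution.)
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        cuts = sorted(rng.random() for _ in range(n - 1))
        pieces = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        if sum(p > x for p in pieces) == k:
            hits += 1
    return hits / trials
```

As a sanity check, for n = 2 and x = 0.6 exactly one piece exceeds 0.6 precisely when the cut falls outside [0.4, 0.6], so the estimates for k = 1 and k = 0 should be close to 0.8 and 0.2.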

On the golden ratio, strong law, and first passage problem
   Tien-Chung Hu, Andrew Rosalsky, Andrei I. Volodin
   pp. 77–86
For a sequence of correlated square-integrable random variables {Xn, n ≥ 1}, conditions are provided for the strong law of large numbers limn→∞ (Sn – ESn)/n = 0 almost surely to hold, where Sn = ∑i=1n Xi, n ≥ 1. The hypotheses stipulate that two series converge, where the terms of the first series involve both the golden ratio φ = (1 + √5)/2 and bounds on var Xn, and the terms of the second series involve both φ and bounds on cov(Xn, Xn+m). An application to first passage times is provided.

A note on the estimation of the frequency and severity distribution of operational losses
   A. Chernobai, C. Menn, S. T. Rachev, S. Trück
   pp. 87–97
The Basel II Capital Accord requires banks to determine the capital charge to account for operational losses. A compound Poisson process with lognormal losses is suggested for this purpose. The paper examines the impact of possibly censored and/or truncated data on the estimation of loss distributions. A procedure for consistent estimation of the severity and frequency distributions based on incomplete data samples is presented. It is also demonstrated that ignoring the peculiarities of available data samples leads to inaccurate value-at-risk estimates that govern the operational risk capital charge.
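The distortion the abstract warns about can be reproduced in a few lines: naively estimating the lognormal location parameter from losses recorded only above a threshold overstates it. A minimal sketch, with hypothetical parameter values (mu, sigma, and the recording threshold are assumptions, not the paper's data):

```python
import math
import random

def simulate_truncation_bias(mu=10.0, sigma=2.0, threshold=50_000.0,
                             n=100_000, seed=1):
    """Illustrate the bias from ignoring a lower recording threshold.

    Losses are lognormal(mu, sigma); only losses above `threshold`
    are recorded.  The naive estimate of mu (mean of observed
    log-losses) overstates the true mu, because small losses never
    enter the sample.  (Hypothetical parameters; the paper gives a
    consistent estimation procedure for such incomplete samples.)
    """
    rng = random.Random(seed)
    losses = [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]
    observed = [x for x in losses if x > threshold]
    naive_mu = sum(math.log(x) for x in observed) / len(observed)
    return naive_mu  # exceeds the true mu = 10.0
```

Since every observed log-loss exceeds log(50 000) ≈ 10.82, the naive estimate necessarily sits well above the true value 10, which is the inaccuracy that feeds into the value-at-risk figures.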

Hypergeometric functions and birth–death processes
   P. R. Parthasarathy
   pp. 98–111
Several interesting birth and death processes that emanate from hypergeometric functions and their ratios are presented.

Sums and ratios for Marshall and Olkin's bivariate exponential
   Saralees Nadarajah, Samuel Kotz
   pp. 112–119
Motivated by reliability applications, we derive the exact distributions of R = X + Y and W = X / (X + Y) and the corresponding moment properties when X and Y follow Marshall and Olkin's bivariate exponential distribution.
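Samples of R and W are straightforward to generate via the standard shock construction of the Marshall–Olkin distribution, X = min(Z1, Z12), Y = min(Z2, Z12) with independent exponential shocks. A hedged sketch (the rates below are illustrative; the paper derives the exact distributions):

```python
import random

def marshall_olkin_sample(l1, l2, l12, trials=50_000, seed=2):
    """Simulate Marshall-Olkin bivariate exponential pairs via the
    shock construction X = min(Z1, Z12), Y = min(Z2, Z12), where
    Z1, Z2, Z12 are independent exponentials with rates l1, l2, l12.

    Returns samples of R = X + Y and W = X / (X + Y).
    (Illustrative sketch; not the paper's exact derivation.)
    """
    rng = random.Random(seed)
    R, W = [], []
    for _ in range(trials):
        z1 = rng.expovariate(l1)
        z2 = rng.expovariate(l2)
        z12 = rng.expovariate(l12)
        x, y = min(z1, z12), min(z2, z12)
        R.append(x + y)
        W.append(x / (x + y))
    return R, W
```

In the symmetric case l1 = l2 the ratio W has mean 1/2, which gives a quick check on the simulation.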

Measuring inter-rater agreement: how useful is the kappa statistic?
   W. F. Scott
   pp. 120–124
Kappa, and more generally weighted kappa, is a measure of the level of agreement between two raters. This measure, which was introduced by Cohen (1960), is much used in certain fields, particularly psychology. We show that, when the number of people or objects to be rated is large, kappa is a good estimator of the true coefficient of correlation ρ when the (marginal) distributions of the marks awarded by the raters are identical. More generally, we discuss kappa estimates ρ/α, where α ≥ 1 is a measure of the level of disagreement between the marginal distributions. Consequently, kappa may be thought of as a composite measure of agreement between the raters. We derive a formula for the large-sample variance of kappa, and give formulae for confidence intervals. Examples are given from the medical literature.
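Unweighted kappa itself is computed from the observed agreement p_o and the chance-expected agreement p_e derived from the marginal distributions. A minimal sketch of the standard definition (Cohen 1960), not of the paper's variance or confidence-interval formulae:

```python
def cohen_kappa(table):
    """Cohen's (unweighted) kappa from a square contingency table.

    table[i][j] = number of subjects rated category i by rater 1
    and category j by rater 2.  kappa = (p_o - p_e) / (1 - p_e),
    where p_o is the observed proportion of agreement and p_e the
    agreement expected by chance from the marginals.
    """
    n = sum(sum(row) for row in table)
    p_o = sum(table[i][i] for i in range(len(table))) / n
    row_marg = [sum(r) / n for r in table]
    col_marg = [sum(table[i][j] for i in range(len(table))) / n
                for j in range(len(table[0]))]
    p_e = sum(r * c for r, c in zip(row_marg, col_marg))
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement gives kappa = 1, while agreement at exactly the chance level gives kappa = 0; identical marginals correspond to the α = 1 case above, where kappa estimates ρ itself.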

Exact tests and quasi-exact alternatives
   Mary C. Phipps
   pp. 125–133
Exact tests which eliminate a nuisance parameter by conditioning on its sufficient statistic may be simple computationally, but can be very conservative. In fact the size of an exact test is often lower than half the nominal significance level. Corresponding unconditional methods are less conservative, but they pose technical and computational difficulties, whereas large-sample methods, while avoiding these problems, are not exact and often have poor small-sample properties. A compromise is Lancaster's mid-P, an easily calculated quasi-exact measure which has gained some recognition in the literature. A new argument is given in favour of the mid-P, and this argument helps to explain why certain other quasi-exact measures may also deserve serious consideration in the context of exact tests when sample sizes are small.
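Lancaster's mid-P is simple to compute in the basic one-sample binomial case: the observed outcome contributes only half its probability to the tail, which softens the conservatism of the exact P-value. A minimal sketch of that standard definition (the test and parameters are illustrative, not the paper's examples):

```python
from math import comb

def binom_mid_p(k, n, p0=0.5):
    """One-sided mid-P value for testing H0: p = p0 against p > p0
    after observing k successes in n Bernoulli trials.

    The ordinary exact P-value is P(X >= k); Lancaster's mid-P
    counts the observed outcome with weight one half:
        mid-P = P(X > k) + 0.5 * P(X = k).
    """
    pmf = lambda j: comb(n, j) * p0**j * (1 - p0)**(n - j)
    tail = sum(pmf(j) for j in range(k + 1, n + 1))
    return tail + 0.5 * pmf(k)
```

For example, with 8 successes in 10 trials the exact one-sided P-value is 56/1024 ≈ 0.0547 (just above the 5% level), while the mid-P is 33.5/1024 ≈ 0.0327, illustrating how the quasi-exact measure is less conservative.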

Index to Volume 30
   p. 134