Volume 29 - Issue 1 - June 2004
Claude C. Leroy
The purpose of this article is to draw attention to an algorithm for the calculation of π used by the Chinese mathematician Liu Hui in about AD 260, and possibly also about 200 years later by two other Chinese mathematicians, a father and son called Zu. It seems remarkable that this algorithm has, as far as we can ascertain, remained unnoticed for 1000 years. Moreover, the amazing advantage of the algorithm seems never to have been clearly realised. Well-known references do not draw attention to this point, nor has the author been able to trace any other reference where it is fully emphasised.
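The abstract does not reproduce the algorithm itself, but the classical polygon-doubling computation associated with Liu Hui (repeatedly doubling the number of sides of a regular polygon inscribed in a unit circle) can be sketched as follows; the side-length recurrence below is the standard one and is not taken from the article:

```python
import math

def liu_hui_pi(doublings):
    """Approximate pi by the half-perimeter of a regular polygon
    inscribed in a unit circle, starting from a hexagon and doubling
    the number of sides at each step."""
    s = 1.0  # side length of the inscribed hexagon (circle of radius 1)
    n = 6    # current number of sides
    for _ in range(doublings):
        # standard recurrence: side of the 2n-gon from the side of the n-gon
        s = math.sqrt(2.0 - math.sqrt(4.0 - s * s))
        n *= 2
    return n * s / 2.0

print(liu_hui_pi(10))  # close to pi, from a 6144-sided polygon
```

Ten doublings already agree with pi to about seven decimal places; in exact arithmetic the method converges from below as the inscribed polygons fill the circle.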
P. M. Cohn
This brief survey has as its object the description of various types of comma-free codes: prefix codes, maximal codes, complete codes and their relation to complete subsets of free monoids. Its aim is a simple proof of a factorization theorem for maximal codes and a form of the converse.
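As a small illustration of one of the notions surveyed, a prefix code is a set of words in which no word is a proper prefix of another; this property can be checked directly (a toy sketch, not drawn from the survey):

```python
def is_prefix_code(words):
    """Return True if no word in the set is a proper prefix of another,
    i.e. the set is a prefix code over the free monoid of strings."""
    ws = sorted(set(words))
    # In sorted order, any word having another as a prefix appears
    # immediately after some word that is its prefix, so adjacent
    # comparisons suffice.
    for a, b in zip(ws, ws[1:]):
        if b.startswith(a):
            return False
    return True

print(is_prefix_code(["0", "10", "11"]))  # True
print(is_prefix_code(["0", "01"]))        # False
```

Prefix codes are exactly the codes that can be decoded left to right without lookahead, which is why they recur throughout the theory of codes.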
Yvik C. Swan and F. Thomas Bruss
Problems involving stochastic processes frequently involve the computation of hitting probabilities, and a given process often has to be approximated by a Brownian motion. But, in order to obtain explicit answers, it is sometimes necessary to map the Brownian motion and the region in which it is defined in such a way that the transformed process is again a Brownian motion. The desired exit probabilities can then be found by symmetry arguments. For planar Brownian motion, the Schwarz–Christoffel transformation is such a mapping. The goal of this paper is to provide an organized summary of the relevant theory and a step-by-step guide to finding the explicit form of the transformation. We are mainly concerned with exit probability problems. We also draw attention to software developed by Driscoll and Trefethen which was found to be very helpful. Among the new problems we solve is the three players' ruin problem with capital constraints.
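For readers who want a feel for the kind of exit-probability question involved, here is a crude Monte Carlo check, not the conformal-mapping method of the paper: a planar Brownian motion started at the centre of the unit square should, by symmetry, exit through each side with probability 1/4. The step size and domain are our own illustrative choices.

```python
import math
import random

def exit_side_probs(n_paths=4000, dt=1e-3, seed=1):
    """Estimate the probabilities that planar Brownian motion started
    at the centre of the unit square exits through each side, using
    a simple Euler discretization with time step dt."""
    rng = random.Random(seed)
    counts = {"left": 0, "right": 0, "bottom": 0, "top": 0}
    sd = math.sqrt(dt)
    for _ in range(n_paths):
        x, y = 0.5, 0.5
        while 0.0 < x < 1.0 and 0.0 < y < 1.0:
            x += rng.gauss(0.0, sd)
            y += rng.gauss(0.0, sd)
        if x <= 0.0:
            counts["left"] += 1
        elif x >= 1.0:
            counts["right"] += 1
        elif y <= 0.0:
            counts["bottom"] += 1
        else:
            counts["top"] += 1
    return {k: v / n_paths for k, v in counts.items()}

print(exit_side_probs())  # each probability close to 0.25
```

The point of the Schwarz-Christoffel approach is precisely to replace such simulation with an explicit mapping of the polygonal region, after which symmetry arguments give exact answers.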
Marsha M. Young and Dean M. Young
This paper gives a relatively brief constructive proof that characterizes the relationship between the matrices F and G when FFT = GGT. The proof uses the singular value decomposition, rank factorization, and general solution to a linear matrix equation.
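One direction of the relationship is easy to verify numerically: if G = FQ for an orthogonal matrix Q, then GG^T = FQQ^TF^T = FF^T. A small check with arbitrary illustrative matrices (the paper's contribution is the constructive converse):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 5))

# Build a random 5x5 orthogonal matrix via a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
G = F @ Q

# G = F Q with Q orthogonal implies F F^T = G G^T.
print(np.allclose(F @ F.T, G @ G.T))  # True
```

The singular value decomposition mentioned in the abstract is the natural tool for the converse, since FF^T = GG^T forces F and G to share left singular vectors and singular values.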
Saralees Nadarajah and Samuel Kotz
If X and Y are gamma-distributed independent random variables with common scale parameter, then it is well known that the ratio X/(X + Y) has the beta distribution. In this article, we consider the distribution of W = X^c/(X^c + Y^c), c > 0, which we refer to as the generalized beta distribution. We derive various properties of this distribution, including its hazard-rate function and moments. We produce evidence to show that the distribution of W is a more realistic model for failure times than one based on the standard beta distribution.
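Sampling from W is straightforward by construction, which also gives a quick sanity check: for c = 1 the ordinary Beta(a, b) distribution is recovered, with mean a/(a + b). The shape parameters below are illustrative choices, not taken from the paper.

```python
import random

def sample_w(a, b, c, n=20000, seed=0):
    """Draw n samples of W = X^c / (X^c + Y^c) with X ~ Gamma(a),
    Y ~ Gamma(b) independent with common (unit) scale.
    c = 1 recovers the ordinary Beta(a, b) distribution."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = rng.gammavariate(a, 1.0)
        y = rng.gammavariate(b, 1.0)
        out.append(x ** c / (x ** c + y ** c))
    return out

w = sample_w(2.0, 3.0, 1.0)
print(sum(w) / len(w))  # close to 2 / (2 + 3) = 0.4
```

Varying c away from 1 reshapes the density and, as the abstract notes, the hazard-rate function, which is what makes the family attractive for failure-time modelling.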
Lingyun Zhang and C. D. Lai
We obtain the variance of a randomly stopped sum, where the stopping time is a run length from a Shewhart-type control chart. Two proofs are presented. The derived result is useful in the study of quality control charts.
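A simple benchmark, not the paper's derivation: for iid N(0, 1) observations and the Shewhart-type rule that signals when |X_i| exceeds a control limit, the run length N is a stopping time, and since the summands have mean zero, Wald's second identity gives Var(S_N) = E[N] Var(X_1) = E[N]. With limit 2, E[N] = 1/P(|X| > 2), roughly 22. A Monte Carlo check:

```python
import random

def stopped_sum_variance(limit=2.0, n_rep=20000, seed=0):
    """Monte Carlo estimate of Var(S_N), where X_1, X_2, ... are iid
    N(0, 1), N is the first index with |X_i| > limit (a Shewhart-type
    run length, so N depends on the observations), and
    S_N = X_1 + ... + X_N."""
    rng = random.Random(seed)
    sums = []
    for _ in range(n_rep):
        s = 0.0
        while True:
            x = rng.gauss(0.0, 1.0)
            s += x
            if abs(x) > limit:
                break
        sums.append(s)
    m = sum(sums) / n_rep
    return sum((v - m) ** 2 for v in sums) / (n_rep - 1)

print(stopped_sum_variance())  # close to 1 / P(|X| > 2), about 22
```

Note that N here is not independent of the summands, which is exactly why a careful derivation of the stopped-sum variance, as in the paper, is needed rather than the textbook formula for an independent stopping count.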
W. F. Scott
We consider the estimation of the logarithm of the odds ratio in (for example) clinical trials. Peto's estimator, P = (O − E)/V, was described by Yusuf et al. (1985), but their theoretical justification is rather brief. We derive the properties of P by reference to known results on the hazard ratio. We also show that, in meta-analysis, the inverse variance method leads to very convenient formulae for estimating the log-odds ratio when Peto's estimator is used. The results are found to be similar to those obtained by logistic regression.
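For a single 2 x 2 table, O, E and V are the observed number of events in the treatment arm and the hypergeometric mean and variance of that count under the null hypothesis; these are the standard definitions from Yusuf et al. (1985). A small worked example with made-up counts:

```python
def peto_log_odds(a, b, c, d):
    """Peto's one-step estimator (O - E) / V of the log-odds ratio
    for a 2x2 table:
        treatment arm: a events, b non-events
        control arm:   c events, d non-events
    O is the observed events in the treatment arm; E and V are the
    hypergeometric mean and variance of that count under H0."""
    n = a + b + c + d
    O = a
    E = (a + b) * (a + c) / n
    V = (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    return (O - E) / V

# Illustrative table: 10/100 events on treatment, 20/100 on control.
print(peto_log_odds(10, 90, 20, 80))  # about -0.78
```

For comparison, the crude log-odds ratio log((10 x 80)/(90 x 20)) is about -0.81; the two agree closely here, consistent with the abstract's remark that Peto's estimator behaves like the results of logistic regression when effects are modest.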
A. Katsis and B. Toman
In this paper, we address the problem of optimal sample size when misclassification among binomial observations is present. A Bayesian double sampling procedure with two classifiers is considered. The first is an expensive device that classifies the observations correctly, while the second classifier is cheaper but prone to misclassification error. Initially, a sample of units is classified by both devices in order to reduce the posterior variance of the misclassification probabilities. Since this is a pre-experimental process, an additional condition is imposed on the likelihood of the data. In the second stage, a number of observations are classified by either device, according to the rule that the experiment's 'worth' is maximized. Optimal sample sizes are then derived for fixed values of the misclassification cost and the available budget.
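The data-generating side of such a design is easy to simulate; the parametrization below (success probability p, false-positive rate fp, false-negative rate fn for the cheap classifier) is our own illustrative choice and is not taken from the paper:

```python
import random

def double_sample(n_both, n_cheap, p, fp, fn, seed=0):
    """Simulate a two-stage double sampling design.
    Stage 1: n_both units classified by both the infallible device
             (which reports the truth) and the fallible one.
    Stage 2: n_cheap units classified by the fallible device only.
    Each unit is truly positive with probability p; the fallible
    classifier flips a true positive with probability fn and a true
    negative with probability fp."""
    rng = random.Random(seed)

    def fallible(truth):
        if truth:
            return 0 if rng.random() < fn else 1
        return 1 if rng.random() < fp else 0

    stage1 = []
    for _ in range(n_both):
        truth = int(rng.random() < p)
        stage1.append((truth, fallible(truth)))
    stage2 = [fallible(int(rng.random() < p)) for _ in range(n_cheap)]
    return stage1, stage2
```

The stage 1 pairs are what allow the misclassification probabilities to be learned, which is the role the abstract assigns to the initial doubly classified sample.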
Tom L. Bratcher and James D. Stamey
In this paper we develop an exact Bayesian approach to interval estimation for the difference of two Poisson rates. An asymptotic approach is also given for the difference in these rates so that no Monte Carlo integration is necessary. Highest posterior density intervals are utilized for both the difference and the ratio of two Poisson rates. Examples with real data are considered.
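As a rough illustration of the Bayesian setup (though not of the paper's exact or highest-posterior-density intervals), independent Gamma(a, b) priors on two Poisson rates are conjugate, so a Monte Carlo equal-tail credible interval for the difference of rates is a few lines; the prior hyperparameters and data below are invented for the example:

```python
import random

def poisson_rate_diff_ci(x1, t1, x2, t2, a=0.5, b=0.001,
                         n=20000, level=0.95, seed=0):
    """Equal-tail Monte Carlo credible interval for lambda1 - lambda2,
    given x_i events in exposure t_i, under independent Gamma(a, b)
    priors (shape a, rate b), so the posteriors are
    Gamma(a + x_i, b + t_i)."""
    rng = random.Random(seed)
    diffs = sorted(
        rng.gammavariate(a + x1, 1.0 / (b + t1))
        - rng.gammavariate(a + x2, 1.0 / (b + t2))
        for _ in range(n)
    )
    lo = diffs[int(n * (1 - level) / 2)]
    hi = diffs[int(n * (1 + level) / 2) - 1]
    return lo, hi

# 30 events in 10 time units vs 15 events in 10 time units:
print(poisson_rate_diff_ci(30, 10.0, 15, 10.0))  # interval around 1.5
```

Highest posterior density intervals, as used in the paper, shift and shorten these equal-tail limits when the posterior of the difference is skewed.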