Volume 37 - Issue 1 - June 2012
R. Champion, C. T. Lenard and C. Matthews
The aim of this expository paper is to provide an introduction to smoothing methods. The paper describes several problems in applied mathematics in which the need for smoothing arises; in each case, the question of determining an optimal smoothing constant presents itself. The method of cross-validation for estimating smoothing constants is introduced and illustrated with a numerical example on the optimal bin width for a histogram of rainfall data.
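As a sketch of the cross-validation idea the abstract describes (the synthetic data, function names, and grid of candidate widths below are assumptions for illustration, not taken from the paper), the standard leave-one-out cross-validation score for a histogram with bin width h can be minimized numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic right-skewed data standing in for the paper's rainfall series.
data = rng.gamma(shape=2.0, scale=10.0, size=200)

def cv_score(data, h):
    """Leave-one-out cross-validation score for a histogram with bin width h:
    J(h) = 2/((n-1)h) - (n+1)/((n-1)h) * sum_j p_j^2, p_j = n_j/n."""
    n = len(data)
    lo, hi = data.min(), data.max()
    nbins = max(1, int(np.ceil((hi - lo) / h)))
    counts, _ = np.histogram(data, bins=nbins, range=(lo, lo + nbins * h))
    p = counts / n
    return 2.0 / (h * (n - 1)) - (n + 1) / (h * (n - 1)) * np.sum(p ** 2)

# Pick the candidate width with the smallest estimated risk.
widths = np.linspace(1.0, 15.0, 57)
best_h = min(widths, key=lambda h: cv_score(data, h))
```

The score is an unbiased-risk estimate of the integrated squared error of the histogram, so minimizing it over a grid of candidate widths is a direct numerical analogue of the kind of example the paper works through.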
K. R. W. Brewer, Genevieve Hayes, A. N. Gillison
In Parts 1 and 2 of this article (see Math. Scientist 36, pp. 107–116, 117–125) it was shown theoretically that when a statistical test with a precise null hypothesis is used to detect the presence of a significant effect, the values of the two-tailed p required to establish significance are much smaller than the values of p required for that purpose when the null hypothesis is diffuse. In Part 3, the relevant behaviour of an empirical data set is studied, our earlier theoretical findings are confirmed, and some additional conclusions are drawn, including two that were initially unexpected.
Genevieve Hayes and K. R. W. Brewer
It is shown here that, when conducting a Bayesian hypothesis test to estimate the reference posterior odds, it is possible simply to convolve the sample likelihood with the complete ignorance prior (defined in Part 2 of this article (see Math. Scientist 36, pp. 117–125) and implied by an extended version of Benford's law of numbers), thereby obviating the need to specify any reference prior measure at the null hypothesis location. Finally, we consider the likely consequences of a general recognition that, when the implied null hypothesis is diffuse, much smaller values of p than currently envisaged are needed to supply any meaningful false discovery rate.
When studying the mathematical properties of single-elimination knockout tournaments, it is usually assumed that the initial number of participants n is equal to 2k for some positive integer k. In this paper we consider some combinatorial properties of certain types of unseeded single-elimination knockout tournaments for which n is not necessarily restricted to be of this form.
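A minimal sketch of the bracket arithmetic that arises when n is not a power of two (the function name and the convention that the field is padded with first-round byes are my own; the paper's combinatorial questions go well beyond this):

```python
import math

def bracket_shape(n):
    """For an unseeded single-elimination tournament with n entrants,
    return (rounds, first_round_byes): the draw is padded up to the next
    power of two, so 2**rounds - n entrants receive a first-round bye."""
    rounds = math.ceil(math.log2(n))
    byes = 2 ** rounds - n
    return rounds, byes
```

When n = 2^k the second component is zero and the usual k-round bracket is recovered.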
Ilan Adler and Sheldon M. Ross
We consider serve and rally competitions involving two teams, in which the probability that a team wins a rally depends on which team is serving. We give elementary derivations of the final score probabilities both when the match ends as soon as one team reaches a set number of points, and when there is the additional proviso that the winning team must be ahead by at least two points. We consider models in which the winner of a rally always receives a point, and also models in which the winner of a rally receives a point only if it served that rally. In the latter case we also compute the mean number of rallies. Finally, we determine conditions under which a team would prefer to serve first.
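The rally-point case (every rally is worth a point, with the rally winner serving next) lends itself to a short memoized recursion. This is a sketch of the kind of calculation the paper does in closed form: it ignores the two-point-ahead proviso, and all names and parameters are assumptions rather than the paper's notation:

```python
from functools import lru_cache

def match_win_prob(pa, pb, target):
    """Probability that team A wins a race-to-`target` match under
    rally-point scoring, where pa = P(A wins a rally on A's serve),
    pb = P(A wins a rally on B's serve), A serves first, and the
    winner of each rally serves the next one."""
    @lru_cache(maxsize=None)
    def w(a, b, a_serving):
        if a == target:
            return 1.0
        if b == target:
            return 0.0
        p = pa if a_serving else pb
        # Either A wins the rally (scores, serves next) or B does.
        return p * w(a + 1, b, True) + (1 - p) * w(a, b + 1, False)
    return w(0, 0, True)
```

When pa = pb the server is irrelevant and the score path is a simple random walk; the recursion then reduces to a negative-binomial tail calculation.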
David J. Aldous
The Great Filter refers to a highly speculative theory that implicitly claims one can test, via a probability model, whether known aspects of the history of life on Earth are consistent with the hypothesis that emergence of intelligent life was very unlikely. We describe the theory and some of the many objections to it. We give a mathematical argument to show that one objection, namely that it considers only a single possible linear history rather than multitudinous branching potential pathways to intelligence, has no force.
C. W. Lloyd-Smith
This paper shows how a Poisson distribution can be derived from the principle of maximum entropy for a random cluster of 'particles'. The idea is adapted from statistical physics with the help of the order statistics for a sample of size N. Using a large collection of independent particles, we use partition functions and maximum entropy to derive the Poisson distribution. This result is extended to the case of exchangeable random variables, conditioned on a σ-field. We suggest the Poisson as a baseline for modelling the distribution of a large variety of random clusters and social groups.
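One way to see a Poisson distribution emerging from a maximum-entropy calculation (a sketch under my own assumptions, not the paper's derivation): the Lagrange conditions for maximizing entropy subject to a fixed mean, together with the 1/k! weight contributed by indistinguishable particles, give p_k ∝ e^{βk}/k!, which normalizes to the Poisson distribution with rate e^β:

```python
import math

def max_entropy_cluster(mean, kmax=100):
    """Normalize p_k proportional to exp(beta*k)/k! with e^beta = mean.
    Since sum_k mean**k / k! = e**mean, the result is the Poisson(mean)
    pmf, truncated at kmax (the tail beyond kmax is negligible)."""
    beta = math.log(mean)
    weights = [math.exp(beta * k) / math.factorial(k) for k in range(kmax + 1)]
    z = sum(weights)
    return [w / z for w in weights]
```

Checking the normalized weights against the Poisson pmf e^{-λ} λ^k / k! confirms the identification numerically.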
J. Gani and R. J. Swift
We consider the distribution of the time T until a rod of length L is broken up into unit lengths along a given path, when the permutations of the broken parts are taken into account. The inductive method is shown to work for L = 3, 4, 5. This note extends earlier results of Gani and Swift (2011), in which permutations of the broken parts of the rod were condensed into a single state.