Volume 37 - Issue 1 - June 2012

An introduction to smoothing
   R. Champion, C. T. Lenard, C. Matthews
   pp. 1–17
  
  
Abstract
The aim of this expository paper is to provide an introduction to smoothing methods. The paper describes several different problems in applied mathematics that give rise to smoothing problems. In each case, the issue of determining an optimal smoothing constant arises. The method of cross-validation for estimating smoothing constants is introduced and illustrated with a numerical example on the optimal bin-width for a histogram of rainfall data.
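
As a concrete companion to the histogram example mentioned above, the sketch below applies the standard leave-one-out cross-validation score for histogram bin width (Rudemo's estimator) to synthetic data; the data, the grid of candidate widths and all variable names are placeholders for illustration, not the rainfall data or the method details used in the paper.

import numpy as np

def histogram_cv_score(x, h):
    """Leave-one-out cross-validation risk estimate for a histogram with
    bin width h (Rudemo's estimator):
        J(h) = 2/((n-1)h) - (n+1)/((n-1)h) * sum_j p_j^2,
    where p_j is the proportion of observations falling in bin j."""
    n = len(x)
    edges = np.arange(x.min(), x.max() + h, h)
    counts, _ = np.histogram(x, bins=edges)
    p = counts / n
    return 2.0 / ((n - 1) * h) - (n + 1) / ((n - 1) * h) * np.sum(p ** 2)

# Placeholder data standing in for the rainfall observations.
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=10.0, size=200)

candidate_widths = np.linspace(1.0, 20.0, 40)   # assumed grid of bin widths
scores = [histogram_cv_score(x, h) for h in candidate_widths]
best_h = candidate_widths[int(np.argmin(scores))]
print("cross-validated bin width:", best_h)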

Understanding and using Fisher's p. A four-part article
   K. R. W. Brewer, Genevieve Hayes, A. N. Gillison
   pp. 18–19
  
  
Understanding and using Fisher's p. Part 3: Examining an empirical
   data set
   K. R. W. Brewer, Genevieve Hayes, A. N. Gillison
   pp. 20–26
  
  
Abstract
In Parts 1 and 2 of this article (see Math. Scientist 36, pp. 107–116, 117–125) it was shown theoretically that when a statistical test with a precise null hypothesis is used to detect the presence of a significant effect, the values of the two-tailed p required to establish significance are much smaller than the values of p required for that purpose when the null hypothesis is diffuse. In Part 3, the relevant behaviour of an empirical data set is studied, our earlier theoretical findings are confirmed, and some additional conclusions are drawn, including two that were initially unexpected.
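
To make the Part 3 comparison concrete, the short sketch below contrasts a two-tailed p-value with the textbook Bayes factor for a precise (point) normal-mean null against a N(0, tau^2) alternative; this is a generic illustration of the Jeffreys-Lindley effect, with z, tau2, sigma2 and the sample sizes chosen purely for illustration, and it is not the analysis of the empirical data set carried out in the paper.

import math

def bayes_factor_point_null(z, n, tau2=1.0, sigma2=1.0):
    """BF01 for H0: theta = 0 versus H1: theta ~ N(0, tau2), given the
    z-statistic of a sample mean with known variance sigma2.
    Closed form: BF01 = sqrt(1 + r) * exp(-0.5 * z^2 * r / (1 + r)),
    where r = n * tau2 / sigma2."""
    r = n * tau2 / sigma2
    return math.sqrt(1 + r) * math.exp(-0.5 * z ** 2 * r / (1 + r))

def two_tailed_p(z):
    """Two-tailed p-value for a standard normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2.0))

z = 1.96                      # two-tailed p of about 0.05
for n in (10, 100, 10000):    # hypothetical sample sizes
    # As n grows, the same p of 0.05 gives increasing support to the point null.
    print(n, round(two_tailed_p(z), 3), round(bayes_factor_point_null(z, n), 2))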

Understanding and using Fisher's p. Part 4: Do we even need to specify
   a prior measure at H0?
   Genevieve Hayes, K. R. W. Brewer
   pp. 27–33
  
  
Abstract
It is shown here that, when conducting a Bayesian hypothesis test to estimate a reference posterior odds, it is possible simply to convolve the sample likelihood with the complete ignorance prior (defined in Part 2 of this article (see Math. Scientist 36, pp. 117–125) and implied by an extended version of Benford's law of numbers), thereby obviating the need to specify any reference prior measure at the null hypothesis location. Finally we consider the likely consequences of a general recognition that, when the implied null hypothesis is diffuse, much smaller values of p than currently envisaged are needed to supply any meaningful false discovery rate.

Knockout-tournament scenarios accounting for byes
   Martin Griffiths
   pp. 34–46
  
  
Abstract
When studying the mathematical properties of single-elimination knockout tournaments, it is usually assumed that the initial number of participants n is equal to 2^k for some positive integer k. In this paper we consider some combinatorial properties of certain types of unseeded single-elimination knockout tournaments for which n is not necessarily restricted to be of this form.
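
For readers who want to experiment with the non-power-of-two case, the sketch below counts first-round byes under the common pad-to-the-next-power-of-two convention and simulates one unseeded single-elimination draw in which any odd player out receives a bye each round; both conventions, and the fair-coin match outcomes, are assumptions for illustration and not necessarily the tournament structure analysed in the paper.

import math
import random

def byes_needed(n):
    """First-round byes if a field of n players is padded up to the next
    power of two (a common convention, assumed here)."""
    bracket = 1 << math.ceil(math.log2(n))
    return bracket - n

def simulate_unseeded_knockout(players, rng=random):
    """Play one unseeded single-elimination tournament: each round the
    survivors are paired at random, each pairing is decided by a fair
    coin, and an odd player out receives a bye to the next round."""
    survivors = list(players)
    while len(survivors) > 1:
        rng.shuffle(survivors)
        next_round = []
        if len(survivors) % 2 == 1:          # odd field: one random bye
            next_round.append(survivors.pop())
        for a, b in zip(survivors[0::2], survivors[1::2]):
            next_round.append(a if rng.random() < 0.5 else b)
        survivors = next_round
    return survivors[0]

print(byes_needed(11))                        # 5 byes to fill a 16-slot bracket
print(simulate_unseeded_knockout(range(11)))  # winner of one simulated draw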

Score probabilities for serve and rally competition
   Ilan Adler, Sheldon M. Ross
   pp. 47–54
  
  
Abstract
We consider serve and rally competitions involving two teams, in which the probability that a team wins a rally depends on which team is serving. We give elementary derivations of the final score probabilities both when the match ends when one of the teams reaches a set number of points, and when there is an additional proviso that the winning team must be ahead by at least two points. We consider models where the winner of a rally receives a point, and also where the winner of a rally receives a point only if that player was the server of the rally. In the latter case we also compute the mean number of rallies. We also determine conditions under which a player would prefer to be the initial server.
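
The final-score probabilities described above can be computed by a simple dynamic programme over the current score and the identity of the server; the sketch below treats the rally-point case (every rally winner scores and serves next; play stops at a fixed target with no two-point rule), and the target, the two serving-side win probabilities and the first server are illustrative parameters rather than values from the paper.

from collections import defaultdict

def final_score_probs(target, p_serve_a, p_serve_b, first_server="A"):
    """Probability of each final score (a, b) under rally-point scoring.
    p_serve_a = P(A wins a rally served by A),
    p_serve_b = P(B wins a rally served by B)."""
    reach = defaultdict(float)               # P(reach state (a, b, server))
    reach[(0, 0, first_server)] = 1.0
    finals = defaultdict(float)
    for total in range(2 * target - 1):      # process states by points played
        for a in range(min(total, target - 1) + 1):
            b = total - a
            if b >= target:
                continue
            for server in ("A", "B"):
                p = reach[(a, b, server)]
                if p == 0.0:
                    continue
                win_a = p_serve_a if server == "A" else 1.0 - p_serve_b
                for na, nb, nxt, q in ((a + 1, b, "A", win_a),
                                       (a, b + 1, "B", 1.0 - win_a)):
                    if na == target or nb == target:
                        finals[(na, nb)] += p * q
                    else:
                        reach[(na, nb, nxt)] += p * q
    return dict(finals)

probs = final_score_probs(target=11, p_serve_a=0.6, p_serve_b=0.55)
print(round(sum(probs.values()), 6))                               # total probability 1
print(round(sum(p for (a, b), p in probs.items() if a == 11), 4))  # P(A wins the game)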

The Great Filter, branching histories, and unlikely events
   David J. Aldous
   pp. 55–64
  
  
Abstract
The Great Filter refers to a highly speculative theory that implicitly claims one can test, via a probability model, whether known aspects of the history of life on Earth are consistent with the hypothesis that emergence of intelligent life was very unlikely. We describe the theory and some of the many objections to it. We give a mathematical argument to show that one objection, namely that it considers only a single possible linear history rather than multitudinous branching potential pathways to intelligence, has no force.

Maximum entropy and the Poisson distribution
   C. W. Lloyd-Smith
   pp. 65–71
  
  
Abstract
This paper shows how a Poisson distribution can be derived from the principle of maximum entropy for a random cluster of 'particles'. The idea is adapted from statistical physics with the help of the order statistics for a sample of size N. Using a large collection of independent particles, we use partition functions and maximum entropy to derive the Poisson distribution. This result is extended to the case of exchangeable random variables, conditioned on a sigma-field. We suggest the Poisson as a baseline for modelling the distribution of a large variety of random clusters and social groups.
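
A compact way to see the maximum-entropy connection, though not necessarily the partition-function route taken in the paper, is the standard constrained-entropy calculation on the non-negative integers; the extra constraint on E[ln k!] is what singles out the Poisson rather than the geometric distribution:
\[
\max_{\{p_k\}}\; -\sum_{k\ge 0} p_k\ln p_k
\quad\text{subject to}\quad
\sum_k p_k = 1,\qquad \sum_k k\,p_k = \lambda,\qquad \sum_k p_k\ln k!\ \text{fixed}.
\]
Setting the derivative of the Lagrangian to zero gives \(\ln p_k = -1-\alpha-\beta k-\gamma\ln k!\), so
\[
p_k \propto \frac{e^{-\beta k}}{(k!)^{\gamma}},
\]
and the member of this family with \(\gamma = 1\) and \(e^{-\beta} = \lambda\) is the Poisson distribution \(p_k = e^{-\lambda}\lambda^{k}/k!\).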

Breaking a rod into unit lengths along a given path
   J. Gani, R. J. Swift
   pp. 72–76
  
  
Abstract
We consider the distribution of the time T until a rod of length L is broken up into unit lengths along a given path, when the permutations of the broken parts are taken into account. The inductive method is shown to work for L = 3, 4, 5. This note extends earlier results by Gani and Swift (2011) in which permutations of the broken parts of the rod were condensed into a single state.

Letter to the Editor: Factors for the central moments of the log-normal
   distribution
   Christopher S. Withers, Saralees Nadarajah
   pp. 77–79
  
  
Letter to the Editor: Some surprising rationalities of tangent circles
   Nelson M. Blachman
   pp. 80–82