Organizer: Luis Pericchi, Universidad de Puerto Rico, Puerto Rico

Speakers:

Title:

Calibrating the amount of information through Bayesian asymptotics: The adaptive alpha

Abstract:
Paradoxically, the easiest way to alleviate the problems raised by the use of significance
testing with fixed α values may be to calibrate them using Bayesian ideas, in particular
by building bridges between α values and Bayes factors. One such bridge is presented in
Pérez and Pericchi (2014), where the authors put forward an adaptive α, α_n, which changes
with the amount of sample information. This calibration may be interpreted as a Bayes-non-Bayes compromise, and it leads to statistical consistency. The calibration can also be used
to produce confidence intervals whose size takes into consideration the amount of observed
information.
In this talk, this proposal will be discussed and its consequences explored. Additionally, a refined version of the adaptive α for linear models will be presented.
Keywords: Hypothesis testing, p-values, Bayes factors, Bayes-non-Bayes compromise.
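
To make the calibration concrete, here is a minimal Python sketch of a sample-size-adaptive significance level that decays like 1/sqrt(n log n), anchored at a reference sample size n_ref. The constants and the chi-square term below are assumptions in the spirit of BIC-type Bayes factor asymptotics, not a verbatim reproduction of the formula in Pérez and Pericchi (2014).

```python
import numpy as np
from scipy.stats import chi2

def adaptive_alpha(n, alpha_ref=0.05, q=1, n_ref=100):
    """Sample-size-adaptive significance level (illustrative sketch).

    Assumes alpha_n decays like 1/sqrt(n log n), calibrated so that
    alpha_n equals alpha_ref at the reference sample size n_ref; q is
    the number of degrees of freedom of the test. The exact functional
    form in Perez and Pericchi (2014) may differ in its constants.
    """
    c = chi2.ppf(1 - alpha_ref, df=q)  # chi-square quantile, q d.o.f.
    return alpha_ref * np.sqrt(n_ref * (np.log(n_ref) + c) /
                               (n * (np.log(n) + c)))

for n in (100, 1_000, 10_000, 100_000):
    print(n, round(adaptive_alpha(n), 5))
```

The point of the anchoring is visible in the output: at n = n_ref the level is the conventional 0.05, and it shrinks steadily as the sample grows, which is what delivers statistical consistency.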

 

Title:

Pros and cons of changing the default significance threshold from 0.05 to 0.005

Abstract:

In light of the previous talks, and of the alternatives to a fixed significance threshold, we discuss the merits and drawbacks of changing the default threshold from 0.05 to 0.005. We address multiple hypothesis testing, as well as other concerns, including the possibility that changing the significance threshold is merely a distraction from finding a real solution to the problem.

Title:

Blending Bayesian and Fisherian Tools to Build Optimal Significance Tests

Abstract:

The idea of our work is to use a sample-size-dependent significance level, with the test being optimal in the sense of the generalized Neyman-Pearson Lemma. The testing scheme allows any kind of hypothesis, without restrictions on the dimensionalities of the sample space, the parameter space, or the hypothesis spaces. In particular, this includes “sharp” hypotheses, which correspond to subsets of lower dimensionality than the full parameter space. These significance tests are compatible with the Likelihood Principle (LP) and can be interpreted more consistently than tests that are not LP-compliant. The p-value is obtained in the usual way after ordering the sample space by Bayes factors or, alternatively, by likelihood ratios; numerical methods are used to handle high-dimensional spaces. In other words, our work retains the useful concept of statistical significance and the same operational procedures as currently used hypothesis tests, whether Fisherian (p-values), Neyman-Pearson (decision-theoretic), or Bayesian (Bayes factor tests).
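
To make the ordering idea concrete, the toy sketch below computes a p-value for a binomial point null by ranking the entire sample space by Bayes factors. The Beta(1, 1) prior and the specific test are illustrative assumptions, not the authors' construction.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import betaln, comb

def bf_ordered_pvalue(x_obs, n, theta0=0.5):
    """P-value from ordering the sample space by Bayes factors (toy sketch).

    H0: theta = theta0 versus H1: theta ~ Beta(1, 1), for x successes
    in n Bernoulli trials. Outcomes are ranked by BF01 = f(x | H0) /
    m(x | H1); the p-value is the null probability of all outcomes
    whose BF01 does not exceed the observed one, i.e. of the outcomes
    carrying at least as much evidence against H0.
    """
    xs = np.arange(n + 1)
    null = binom.pmf(xs, n, theta0)  # f(x | H0)
    # Beta-binomial(1, 1) marginal under H1: C(n, x) * B(x+1, n-x+1)
    marg = comb(n, xs) * np.exp(betaln(xs + 1, n - xs + 1))
    bf01 = null / marg
    return null[bf01 <= bf01[x_obs]].sum()

print(bf_ordered_pvalue(x_obs=8, n=10))  # e.g. 8 successes in 10 trials
```

With the uniform prior the marginal under H1 is flat, so the Bayes factor ordering reduces to ordering by the null density; with an informative prior the two orderings would differ, which is exactly where the Bayes factor ordering changes the resulting p-value.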

Title:

Converting P-Values into Approximate Posterior Probabilities to Increase the Reproducibility of Scientific “Findings”

Abstract:

We put forward a novel calibration of p-values, the “p Posterior Probabilities” (pPP), which maps p-values into approximations of posterior probabilities, taking into account the effect of sample size and model dimensionality (the number of adjustable parameters). We build on the Robust Lower Bound proposed by Sellke, Bayarri and Berger (2001), but we incorporate a simple power of the sample size and model dimensionality to make the calibration adaptive to different amounts of data and complexity.
We present several illustrations from which it is apparent that the pPP closely approximates exact Bayes factors based on intrinsic priors. In particular, it has the same asymptotics as posterior probabilities while avoiding the problems of the Bayesian Information Criterion (BIC) for samples that are small relative to the number of parameters. P-values are “too familiar to ditch” (Spiegelhalter, 2018), and calibrating them to approximate posterior probabilities may be the fastest route to making posterior probabilities widely available without the complexities of Bayes factors.
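
The Robust Lower Bound itself is explicit: for p < 1/e, the Bayes factor in favor of H0 is bounded below by -e p log p. The sketch below converts that bound to a posterior probability under equal prior odds, and then adds a hypothetical n^{q/2} factor to mimic the sample-size and dimensionality adjustment described above; the authors' exact pPP formula may differ.

```python
import numpy as np

def rlb_posterior(p):
    """Robust Lower Bound of Sellke, Bayarri and Berger (2001).

    For p < 1/e, the Bayes factor in favor of H0 satisfies
    B01 >= -e * p * log(p); with equal prior odds this gives a lower
    bound on the posterior probability of H0.
    """
    b01 = -np.e * p * np.log(p)
    return b01 / (1.0 + b01)

def ppp(p, n, q=1):
    """Hypothetical sketch of an adaptive "p Posterior Probability".

    The n ** (q / 2) factor below mimics BIC-type asymptotics in n (the
    sample size) and q (the model dimensionality); it is our assumption,
    not the authors' exact formula.
    """
    b01 = -np.e * p * np.log(p) * n ** (q / 2)
    return b01 / (1.0 + b01)

for n in (20, 200, 2000):
    print(n, round(rlb_posterior(0.05), 3), round(ppp(0.05, n, q=1), 3))
```

Even this crude adjustment shows the qualitative behavior described in the abstract: for a fixed p = 0.05, the implied posterior probability of H0 grows with n, reproducing the well-known Jeffreys-Lindley-type divergence between fixed-level testing and posterior probabilities.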