The key issue here is to make design decisions that reduce unnecessary complexity in data collection, thereby limiting flexibility during the analysis and evaluation of hypotheses (i.e., confirmatory research).

Including multiple measures of the same variable (predictor or dependent variables) in confirmatory research allows for researcher flexibility during the analysis stage. If multiple measures are used as operationalizations of the same construct, be sure to clearly indicate a priori which one will be used to evaluate the hypothesis. Switching the measure that is used to evaluate a hypothesis negates the validity of the hypothesis test: using a measure that was not specified a priori results in substantially increased Type I error rates. Such an analysis is best considered exploratory rather than an evaluation of the hypothesis. The same reasoning applies to the use of covariates.

A critical aspect of design is determining the sample size that will be used. You may find it helpful to read Maxwell and Kelley (2011) prior to planning your sample size: Maxwell, S. E., & Kelley, K. (2011). Ethics and sample size planning. Handbook of ethics in quantitative methodology, 159-184. Note that, given that most psychology studies have statistical power of less than .50, looking at the sample size of a previous study to set your own sample size is generally discouraged. It can also be challenging to achieve the sample size required to properly power a study; consequently, you might want to consider programs such as StudySwap as a means of obtaining your requisite sample size. There are two general approaches to setting a sample size: 1) dynamically setting the sample size (i.e., optional stopping), and 2) setting the sample size a priori (e.g., on the basis of a power analysis).

Approach 1: Dynamically Setting Sample Size (Optional Stopping)

One approach is to set the sample size dynamically: the researcher periodically examines the data during data collection, and data collection stops once some criterion is achieved (e.g., statistical significance). Historically, this approach has been problematic because it substantially increases Type I errors. Indeed, some authors have noted that, with this form of optional stopping, researchers can always obtain a significant p-value (see Wagenmakers, 2007). Correspondingly, optional stopping (without correction/adjustment) has been classified as a Questionable Research Practice (see Wicherts et al., 2016).

Fortunately, statistical approaches have been devised that allow researchers to use optional stopping (dynamic sample sizes) without engaging in a Questionable Research Practice. There are two common optional-stopping approaches:

1) Use inferential statistics that directly compare the null and alternative hypotheses, such as the Bayes factor (Rouder, 2014, "Optional stopping: No problem for Bayesians"; Schönbrodt & Wagenmakers, 2018; although see de Heide & Grünwald, 2017). The idea here is that you stop data collection as soon as your data provide strong evidence in favour of either the null or the alternative, thus avoiding bias toward one conclusion over the other. One advantage of this approach is that it does not rely on estimating power a priori, which can be difficult to do accurately. Note, however, that power analyses should still be conducted for other reasons, such as assessing the feasibility of your study given time or financial constraints.
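To make the problem with uncorrected optional stopping concrete, here is a minimal simulation sketch in Python (using numpy and scipy; the batch size and stopping rule are illustrative choices, not recommendations). Even though the null hypothesis is true throughout, peeking at a t-test after every batch of participants and stopping at p < .05 yields far more than 5% significant results:

```python
import numpy as np
from scipy import stats

# Simulate many studies in which the null is TRUE (mean = 0), peeking
# at the data every 10 participants and stopping at p < .05.
rng = np.random.default_rng(1)
n_studies = 2000
n_min, n_max, step = 10, 100, 10

false_positives = 0
for _ in range(n_studies):
    data = rng.normal(loc=0.0, scale=1.0, size=n_max)  # H0 is true
    for n in range(n_min, n_max + 1, step):
        if stats.ttest_1samp(data[:n], popmean=0.0).pvalue < 0.05:
            false_positives += 1  # stop at the first "significant" peek
            break

# With ten peeks, the realized Type I error rate is typically ~.15-.20,
# well above the nominal .05 of a fixed-sample-size test.
print(f"Type I error with optional stopping: {false_positives / n_studies:.3f}")
```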
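By contrast, here is a minimal sketch of the Bayes-factor stopping rule described above, assuming the Python pingouin package for the JZS Bayes factor (the batch size, evidence thresholds, and simulated effect size are all illustrative assumptions, not prescriptions):

```python
import numpy as np
from scipy import stats
import pingouin as pg  # provides a JZS Bayes factor for t-tests

# Collect data in small batches; stop once BF10 signals strong evidence
# for H1 (> 10) or for H0 (< 1/10), or when the feasible maximum is hit.
rng = np.random.default_rng(7)
batch, n_max = 10, 200
upper, lower = 10.0, 1.0 / 10.0  # illustrative evidence thresholds

data = np.empty(0)
while data.size < n_max:
    data = np.append(data, rng.normal(loc=0.3, scale=1.0, size=batch))
    t = stats.ttest_1samp(data, popmean=0.0).statistic
    bf10 = pg.bayesfactor_ttest(t, nx=data.size)  # one-sample JZS BF
    if bf10 > upper or bf10 < lower:
        break  # strong evidence for either hypothesis ends collection

verdict = "H1" if bf10 > upper else ("H0" if bf10 < lower else "inconclusive")
print(f"Stopped at n = {data.size}, BF10 = {bf10:.2f} ({verdict})")
```

Because strong evidence for the null can end data collection just as readily as strong evidence for the alternative, this stopping rule is not biased toward one conclusion over the other.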
More generally, Wicherts et al. (2016) identify a number of design-stage Questionable Research Practices to avoid, including:

- Measuring additional constructs that could potentially act as primary outcomes.
- Measuring additional variables that could later enable exclusion of participants from the analyses (e.g., awareness or manipulation checks).
- Failing to conduct a well-founded power analysis (a sketch of such an analysis follows this list).
- Failing to specify the sampling plan and allowing for running (multiple) small studies.
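On the power-analysis point, here is a minimal sketch of an a priori power analysis, assuming the Python statsmodels package; the effect size is a placeholder that you would need to justify for your own design:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n of an independent-samples t-test that gives
# 80% power at alpha = .05, for an assumed effect size of d = 0.4.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,        # assumed Cohen's d (a placeholder here)
    alpha=0.05,
    power=0.80,
    alternative='two-sided',
)
print(f"Required n per group: {n_per_group:.1f}")  # roughly 99 per group
```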