Exploration server on relations between France and Australia

Warning: this site is under development!
Warning: this site was generated automatically from raw corpora.
The information has therefore not been validated.

Internal identifier: 0006089 (Pmc/Corpus); previous: 0006088; next: 0006090



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Bayesian Estimation of Small Effects in Exercise and Sports Science</title>
<author>
<name sortKey="Mengersen, Kerrie L" sort="Mengersen, Kerrie L" uniqKey="Mengersen K" first="Kerrie L." last="Mengersen">Kerrie L. Mengersen</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Science and Engineering Faculty, Mathematical Sciences, and Institute for Future Environments, Queensland University of Technology, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers in Big Data, Big Models and New Insights, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Drovandi, Christopher C" sort="Drovandi, Christopher C" uniqKey="Drovandi C" first="Christopher C." last="Drovandi">Christopher C. Drovandi</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Science and Engineering Faculty, Mathematical Sciences, and Institute for Future Environments, Queensland University of Technology, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers in Big Data, Big Models and New Insights, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Robert, Christian P" sort="Robert, Christian P" uniqKey="Robert C" first="Christian P." last="Robert">Christian P. Robert</name>
<affiliation>
<nlm:aff id="aff003">
<addr-line>Ceremade, Universite Paris Dauphine, Paris, France</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Pyne, David B" sort="Pyne, David B" uniqKey="Pyne D" first="David B." last="Pyne">David B. Pyne</name>
<affiliation>
<nlm:aff id="aff004">
<addr-line>Australian Institute of Sport, Canberra, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff005">
<addr-line>Research Institute for Sport and Exercise, University of Canberra, Bruce, ACT, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Gore, Christopher J" sort="Gore, Christopher J" uniqKey="Gore C" first="Christopher J." last="Gore">Christopher J. Gore</name>
<affiliation>
<nlm:aff id="aff004">
<addr-line>Australian Institute of Sport, Canberra, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff005">
<addr-line>Research Institute for Sport and Exercise, University of Canberra, Bruce, ACT, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff006">
<addr-line>Exercise Physiology Laboratory, Flinders University of South Australia, Bedford Park, South Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">27073897</idno>
<idno type="pmc">4830602</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4830602</idno>
<idno type="RBID">PMC:4830602</idno>
<idno type="doi">10.1371/journal.pone.0147311</idno>
<date when="2016">2016</date>
<idno type="wicri:Area/Pmc/Corpus">000608</idno>
<idno type="wicri:explorRef" wicri:stream="Pmc" wicri:step="Corpus" wicri:corpus="PMC">000608</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Bayesian Estimation of Small Effects in Exercise and Sports Science</title>
<author>
<name sortKey="Mengersen, Kerrie L" sort="Mengersen, Kerrie L" uniqKey="Mengersen K" first="Kerrie L." last="Mengersen">Kerrie L. Mengersen</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Science and Engineering Faculty, Mathematical Sciences, and Institute for Future Environments, Queensland University of Technology, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers in Big Data, Big Models and New Insights, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Drovandi, Christopher C" sort="Drovandi, Christopher C" uniqKey="Drovandi C" first="Christopher C." last="Drovandi">Christopher C. Drovandi</name>
<affiliation>
<nlm:aff id="aff001">
<addr-line>Science and Engineering Faculty, Mathematical Sciences, and Institute for Future Environments, Queensland University of Technology, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff002">
<addr-line>Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers in Big Data, Big Models and New Insights, Brisbane, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Robert, Christian P" sort="Robert, Christian P" uniqKey="Robert C" first="Christian P." last="Robert">Christian P. Robert</name>
<affiliation>
<nlm:aff id="aff003">
<addr-line>Ceremade, Universite Paris Dauphine, Paris, France</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Pyne, David B" sort="Pyne, David B" uniqKey="Pyne D" first="David B." last="Pyne">David B. Pyne</name>
<affiliation>
<nlm:aff id="aff004">
<addr-line>Australian Institute of Sport, Canberra, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff005">
<addr-line>Research Institute for Sport and Exercise, University of Canberra, Bruce, ACT, Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Gore, Christopher J" sort="Gore, Christopher J" uniqKey="Gore C" first="Christopher J." last="Gore">Christopher J. Gore</name>
<affiliation>
<nlm:aff id="aff004">
<addr-line>Australian Institute of Sport, Canberra, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff005">
<addr-line>Research Institute for Sport and Exercise, University of Canberra, Bruce, ACT, Australia</addr-line>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="aff006">
<addr-line>Exercise Physiology Laboratory, Flinders University of South Australia, Bedford Park, South Australia</addr-line>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">PLoS ONE</title>
<idno type="eISSN">1932-6203</idno>
<imprint>
<date when="2016">2016</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Atkinson, G" uniqKey="Atkinson G">G Atkinson</name>
</author>
<author>
<name sortKey="Batterham, Am" uniqKey="Batterham A">AM Batterham</name>
</author>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ploutz Snyder, Rj" uniqKey="Ploutz Snyder R">RJ Ploutz-Snyder</name>
</author>
<author>
<name sortKey="Fiedler, J" uniqKey="Fiedler J">J Fiedler</name>
</author>
<author>
<name sortKey="Feiveson, Ah" uniqKey="Feiveson A">AH Feiveson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bacchetti, P" uniqKey="Bacchetti P">P Bacchetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bacchetti, P" uniqKey="Bacchetti P">P Bacchetti</name>
</author>
<author>
<name sortKey="Deeks, Sg" uniqKey="Deeks S">SG Deeks</name>
</author>
<author>
<name sortKey="Mccune, Jm" uniqKey="Mccune J">JM McCune</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Beck, Tw" uniqKey="Beck T">TW Beck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Batterham, Am" uniqKey="Batterham A">AM Batterham</name>
</author>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barker, Rj" uniqKey="Barker R">RJ Barker</name>
</author>
<author>
<name sortKey="Schofield, Mr" uniqKey="Schofield M">MR Schofield</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Welsh, Ah" uniqKey="Welsh A">AH Welsh</name>
</author>
<author>
<name sortKey="Knight, Ej" uniqKey="Knight E">EJ Knight</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Humberstone Gough, C" uniqKey="Humberstone Gough C">C Humberstone-Gough</name>
</author>
<author>
<name sortKey="Saunders, Pu" uniqKey="Saunders P">PU Saunders</name>
</author>
<author>
<name sortKey="Bonetti, Dl" uniqKey="Bonetti D">DL Bonetti</name>
</author>
<author>
<name sortKey="Stephens, S" uniqKey="Stephens S">S Stephens</name>
</author>
<author>
<name sortKey="Bullock, N" uniqKey="Bullock N">N Bullock</name>
</author>
<author>
<name sortKey="Anson, Jm" uniqKey="Anson J">JM Anson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gelman, A" uniqKey="Gelman A">A Gelman</name>
</author>
<author>
<name sortKey="Carlin, J" uniqKey="Carlin J">J Carlin</name>
</author>
<author>
<name sortKey="Stern, H" uniqKey="Stern H">H Stern</name>
</author>
<author>
<name sortKey="Dunson, D" uniqKey="Dunson D">D Dunson</name>
</author>
<author>
<name sortKey="Vehtari, A" uniqKey="Vehtari A">A Vehtari</name>
</author>
<author>
<name sortKey="Rubin, D" uniqKey="Rubin D">D Rubin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cohen, J" uniqKey="Cohen J">J Cohen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gelman, A" uniqKey="Gelman A">A Gelman</name>
</author>
<author>
<name sortKey="Carlin, J" uniqKey="Carlin J">J Carlin</name>
</author>
<author>
<name sortKey="Stern, H" uniqKey="Stern H">H Stern</name>
</author>
<author>
<name sortKey="Dunson, D" uniqKey="Dunson D">D Dunson</name>
</author>
<author>
<name sortKey="Vehtari, A" uniqKey="Vehtari A">A Vehtari</name>
</author>
<author>
<name sortKey="Rubin, D" uniqKey="Rubin D">D Rubin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Geman, S" uniqKey="Geman S">S Geman</name>
</author>
<author>
<name sortKey="Geman, D" uniqKey="Geman D">D Geman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hedges, Lv" uniqKey="Hedges L">LV Hedges</name>
</author>
<author>
<name sortKey="Olkin, I" uniqKey="Olkin I">I Olkin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lunn, Dj" uniqKey="Lunn D">DJ Lunn</name>
</author>
<author>
<name sortKey="Thomas, A" uniqKey="Thomas A">A Thomas</name>
</author>
<author>
<name sortKey="Best, N" uniqKey="Best N">N Best</name>
</author>
<author>
<name sortKey="Spiegelhalter, D" uniqKey="Spiegelhalter D">D Spiegelhalter</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marin, J" uniqKey="Marin J">J Marin</name>
</author>
<author>
<name sortKey="Robert, C" uniqKey="Robert C">C Robert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Marin, J" uniqKey="Marin J">J Marin</name>
</author>
<author>
<name sortKey="Robert, C" uniqKey="Robert C">C Robert</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
<author>
<name sortKey="Marshall, Sw" uniqKey="Marshall S">SW Marshall</name>
</author>
<author>
<name sortKey="Batterham, Am" uniqKey="Batterham A">AM Batterham</name>
</author>
<author>
<name sortKey="Hanin, J" uniqKey="Hanin J">J Hanin</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Garvican, La" uniqKey="Garvican L">LA Garvican</name>
</author>
<author>
<name sortKey="Martin, Dt" uniqKey="Martin D">DT Martin</name>
</author>
<author>
<name sortKey="Mcdonald, W" uniqKey="Mcdonald W">W McDonald</name>
</author>
<author>
<name sortKey="Gore, Cj" uniqKey="Gore C">CJ Gore</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sturtz, S" uniqKey="Sturtz S">S Sturtz</name>
</author>
<author>
<name sortKey="Liggues, U" uniqKey="Liggues U">U Liggues</name>
</author>
<author>
<name sortKey="Gelman, A" uniqKey="Gelman A">A Gelman</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Thomas, A" uniqKey="Thomas A">A Thomas</name>
</author>
<author>
<name sortKey="O Hara, B" uniqKey="O Hara B">B O'Hara</name>
</author>
<author>
<name sortKey="Ligges, U" uniqKey="Ligges U">U Ligges</name>
</author>
<author>
<name sortKey="Sturtz, S" uniqKey="Sturtz S">S Sturtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Martin, A" uniqKey="Martin A">A Martin</name>
</author>
<author>
<name sortKey="Quinn, K" uniqKey="Quinn K">K Quinn</name>
</author>
<author>
<name sortKey="Park, J H" uniqKey="Park J">J-H Park</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gore, Cj" uniqKey="Gore C">CJ Gore</name>
</author>
<author>
<name sortKey="Sharpe, K" uniqKey="Sharpe K">K Sharpe</name>
</author>
<author>
<name sortKey="Garvican Lewis, La" uniqKey="Garvican Lewis L">LA Garvican-Lewis</name>
</author>
<author>
<name sortKey="Saunders, Pu" uniqKey="Saunders P">PU Saunders</name>
</author>
<author>
<name sortKey="Humberstone, Ce" uniqKey="Humberstone C">CE Humberstone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gore, Cj" uniqKey="Gore C">CJ Gore</name>
</author>
<author>
<name sortKey="Rodriguez, Fa" uniqKey="Rodriguez F">FA Rodriguez</name>
</author>
<author>
<name sortKey="Truijens, Mj" uniqKey="Truijens M">MJ Truijens</name>
</author>
<author>
<name sortKey="Townsend, Ne" uniqKey="Townsend N">NE Townsend</name>
</author>
<author>
<name sortKey="Stray Gundersen, J" uniqKey="Stray Gundersen J">J Stray-Gundersen</name>
</author>
<author>
<name sortKey="Levine, Bd" uniqKey="Levine B">BD Levine</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
<author>
<name sortKey="Hawley, Ja" uniqKey="Hawley J">JA Hawley</name>
</author>
<author>
<name sortKey="Burke, Lm" uniqKey="Burke L">LM Burke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hopkins, W" uniqKey="Hopkins W">W Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
<author>
<name sortKey="Schabort, Ej" uniqKey="Schabort E">EJ Schabort</name>
</author>
<author>
<name sortKey="Hawley, Ja" uniqKey="Hawley J">JA Hawley</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bonetti, Dl" uniqKey="Bonetti D">DL Bonetti</name>
</author>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pyne, Db" uniqKey="Pyne D">DB Pyne</name>
</author>
<author>
<name sortKey="Trewin, C" uniqKey="Trewin C">C Trewin</name>
</author>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Smith, Tb" uniqKey="Smith T">TB Smith</name>
</author>
<author>
<name sortKey="Hopkins, Wg" uniqKey="Hopkins W">WG Hopkins</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmidt, W" uniqKey="Schmidt W">W Schmidt</name>
</author>
<author>
<name sortKey="Prommer, N" uniqKey="Prommer N">N Prommer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lundby, C" uniqKey="Lundby C">C Lundby</name>
</author>
<author>
<name sortKey="Calbert, Ja" uniqKey="Calbert J">JA Calbert</name>
</author>
<author>
<name sortKey="Sander, M" uniqKey="Sander M">M Sander</name>
</author>
<author>
<name sortKey="Van Hall, G" uniqKey="Van Hall G">G van Hall</name>
</author>
<author>
<name sortKey="Mazzeo, Rs" uniqKey="Mazzeo R">RS Mazzeo</name>
</author>
<author>
<name sortKey="Stray Gundersen, J" uniqKey="Stray Gundersen J">J Stray-Gundersen</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Conley, Dl" uniqKey="Conley D">DL Conley</name>
</author>
<author>
<name sortKey="Krahenbuhl, Gs" uniqKey="Krahenbuhl G">GS Krahenbuhl</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Daniels, Jt" uniqKey="Daniels J">JT Daniels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilkinson, M" uniqKey="Wilkinson M">M Wilkinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Burton, Pr" uniqKey="Burton P">PR Burton</name>
</author>
<author>
<name sortKey="Gurrin, Lc" uniqKey="Gurrin L">LC Gurrin</name>
</author>
<author>
<name sortKey="Campbell, Mj" uniqKey="Campbell M">MJ Campbell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stang, A" uniqKey="Stang A">A Stang</name>
</author>
<author>
<name sortKey="Poole, C" uniqKey="Poole C">C Poole</name>
</author>
<author>
<name sortKey="Kuss, O" uniqKey="Kuss O">O Kuss</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stapleton, C" uniqKey="Stapleton C">C Stapleton</name>
</author>
<author>
<name sortKey="S, M A" uniqKey="S M">M.A. S</name>
</author>
<author>
<name sortKey="Atkinson, G" uniqKey="Atkinson G">G Atkinson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Williams, Mn" uniqKey="Williams M">MN Williams</name>
</author>
<author>
<name sortKey="Grajales, Gag" uniqKey="Grajales G">GAG Grajales</name>
</author>
<author>
<name sortKey="Kurkiewiez, D" uniqKey="Kurkiewiez D">D Kurkiewiez</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dienes, Z" uniqKey="Dienes Z">Z Dienes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gelman, A" uniqKey="Gelman A">A Gelman</name>
</author>
<author>
<name sortKey="Stern, H" uniqKey="Stern H">H Stern</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Verhagen, J" uniqKey="Verhagen J">J Verhagen</name>
</author>
<author>
<name sortKey="Wagenmakers, Ej" uniqKey="Wagenmakers E">EJ Wagenmakers</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">PLoS One</journal-id>
<journal-id journal-id-type="iso-abbrev">PLoS ONE</journal-id>
<journal-id journal-id-type="publisher-id">plos</journal-id>
<journal-id journal-id-type="pmc">plosone</journal-id>
<journal-title-group>
<journal-title>PLoS ONE</journal-title>
</journal-title-group>
<issn pub-type="epub">1932-6203</issn>
<publisher>
<publisher-name>Public Library of Science</publisher-name>
<publisher-loc>San Francisco, CA USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">27073897</article-id>
<article-id pub-id-type="pmc">4830602</article-id>
<article-id pub-id-type="doi">10.1371/journal.pone.0147311</article-id>
<article-id pub-id-type="publisher-id">PONE-D-15-37573</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Research Article</subject>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Research and Analysis Methods</subject>
<subj-group>
<subject>Mathematical and Statistical Techniques</subject>
<subj-group>
<subject>Bayesian Method</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Mathematics</subject>
<subj-group>
<subject>Probability Theory</subject>
<subj-group>
<subject>Probability Distribution</subject>
<subj-group>
<subject>Normal Distribution</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Mathematics</subject>
<subj-group>
<subject>Probability Theory</subject>
<subj-group>
<subject>Probability Distribution</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Sports Science</subject>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Body Fluids</subject>
<subj-group>
<subject>Blood</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Anatomy</subject>
<subj-group>
<subject>Body Fluids</subject>
<subj-group>
<subject>Blood</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Physiology</subject>
<subj-group>
<subject>Body Fluids</subject>
<subj-group>
<subject>Blood</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Physiology</subject>
<subj-group>
<subject>Body Fluids</subject>
<subj-group>
<subject>Blood</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Medicine and Health Sciences</subject>
<subj-group>
<subject>Hematology</subject>
<subj-group>
<subject>Blood</subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Mathematics</subject>
<subj-group>
<subject>Statistics (Mathematics)</subject>
<subj-group>
<subject>Confidence Intervals</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Biology and Life Sciences</subject>
<subj-group>
<subject>Biochemistry</subject>
<subj-group>
<subject>Proteins</subject>
<subj-group>
<subject>Hemoglobin</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="Discipline-v3">
<subject>Physical Sciences</subject>
<subj-group>
<subject>Mathematics</subject>
<subj-group>
<subject>Probability Theory</subject>
<subj-group>
<subject>Statistical Distributions</subject>
</subj-group>
</subj-group>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Bayesian Estimation of Small Effects in Exercise and Sports Science</article-title>
<alt-title alt-title-type="running-head">Bayesian Estimation of Small Effects</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Mengersen</surname>
<given-names>Kerrie L.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="cor001">*</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Drovandi</surname>
<given-names>Christopher C.</given-names>
</name>
<xref ref-type="aff" rid="aff001">
<sup>1</sup>
</xref>
<xref ref-type="aff" rid="aff002">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Robert</surname>
<given-names>Christian P.</given-names>
</name>
<xref ref-type="aff" rid="aff003">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pyne</surname>
<given-names>David B.</given-names>
</name>
<xref ref-type="aff" rid="aff004">
<sup>4</sup>
</xref>
<xref ref-type="aff" rid="aff005">
<sup>5</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gore</surname>
<given-names>Christopher J.</given-names>
</name>
<xref ref-type="aff" rid="aff004">
<sup>4</sup>
</xref>
<xref ref-type="aff" rid="aff005">
<sup>5</sup>
</xref>
<xref ref-type="aff" rid="aff006">
<sup>6</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff001">
<label>1</label>
<addr-line>Science and Engineering Faculty, Mathematical Sciences, and Institute for Future Environments, Queensland University of Technology, Brisbane, Australia</addr-line>
</aff>
<aff id="aff002">
<label>2</label>
<addr-line>Australian Research Council Centre of Excellence in Mathematical and Statistical Frontiers in Big Data, Big Models and New Insights, Brisbane, Australia</addr-line>
</aff>
<aff id="aff003">
<label>3</label>
<addr-line>Ceremade, Universite Paris Dauphine, Paris, France</addr-line>
</aff>
<aff id="aff004">
<label>4</label>
<addr-line>Australian Institute of Sport, Canberra, Australia</addr-line>
</aff>
<aff id="aff005">
<label>5</label>
<addr-line>Research Institute for Sport and Exercise, University of Canberra, Bruce, ACT, Australia</addr-line>
</aff>
<aff id="aff006">
<label>6</label>
<addr-line>Exercise Physiology Laboratory, Flinders University of South Australia, Bedford Park, South Australia</addr-line>
</aff>
<contrib-group>
<contrib contrib-type="editor">
<name>
<surname>Chen</surname>
<given-names>Cathy W.S.</given-names>
</name>
<role>Editor</role>
<xref ref-type="aff" rid="edit1"></xref>
</contrib>
</contrib-group>
<aff id="edit1">
<addr-line>Feng Chia University, TAIWAN</addr-line>
</aff>
<author-notes>
<fn fn-type="conflict" id="coi001">
<p>
<bold>Competing Interests: </bold>
The authors have declared that no competing interests exist.</p>
</fn>
<fn fn-type="con" id="contrib001">
<p>Conceived and designed the experiments: KM CD CG. Performed the experiments: KM CD. Analyzed the data: KM CD CR DP CG. Contributed reagents/materials/analysis tools: KM CD. Wrote the paper: KM CD CR DP CG.</p>
</fn>
<corresp id="cor001">* E-mail:
<email>k.mengersen@qut.edu.au</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>13</day>
<month>4</month>
<year>2016</year>
</pub-date>
<pub-date pub-type="collection">
<year>2016</year>
</pub-date>
<volume>11</volume>
<issue>4</issue>
<elocation-id>e0147311</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>8</month>
<year>2015</year>
</date>
<date date-type="accepted">
<day>31</day>
<month>12</month>
<year>2015</year>
</date>
</history>
<permissions>
<copyright-statement>© 2016 Mengersen et al</copyright-statement>
<copyright-year>2016</copyright-year>
<copyright-holder>Mengersen et al</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open access article distributed under the terms of the
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="pone.0147311.pdf"></self-uri>
<abstract>
<p>The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a Placebo. The conclusions are consistent with those obtained using a ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.</p>
</abstract>
<funding-group>
<funding-statement>The authors have no support or funding to report.</funding-statement>
</funding-group>
<counts>
<fig-count count="8"></fig-count>
<table-count count="5"></table-count>
<page-count count="23"></page-count>
</counts>
<custom-meta-group>
<custom-meta id="data-availability">
<meta-name>Data Availability</meta-name>
<meta-value>All relevant data are within the paper and its Supporting Information files.</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<notes>
<title>Data Availability</title>
<p>All relevant data are within the paper and its Supporting Information files.</p>
</notes>
</front>
<body>
<sec sec-type="intro" id="sec001">
<title>Introduction</title>
<p>A key interest in sports science is the estimation and evaluation of small effects, such as the difference in finishing times between world-class athletes, or the impact of exercise training and/or lifestyle interventions such as dietary changes or sleep behaviors on performance [
<xref rid="pone.0147311.ref001" ref-type="bibr">1</xref>
]. While such an interest is not confined to this context [
<xref rid="pone.0147311.ref002" ref-type="bibr">2</xref>
], there are some features of sports science that make accurate and relevant estimation of small effects particularly challenging. Two such challenges are the small sample sizes typical of studies of international-standard, elite-level athletes, and the often small true between-individual differences in competitive performance. The issue of dealing with small sample sizes in studies has drawn comment in the fields of both medicine [
<xref rid="pone.0147311.ref003" ref-type="bibr">3</xref>
,
<xref rid="pone.0147311.ref004" ref-type="bibr">4</xref>
] and sports science [
<xref rid="pone.0147311.ref005" ref-type="bibr">5</xref>
].</p>
<p>These issues have been addressed by a number of sports science researchers. For example, Batterham and Hopkins (2006) challenged the traditional method of making an inference based on a p-value derived from a hypothesis test, arguing that it is confusing, potentially misleading and unnecessarily restrictive in its inferential capability [
<xref rid="pone.0147311.ref006" ref-type="bibr">6</xref>
]. The authors’ suggested alternative is to focus on the confidence interval as a measure of the uncertainty of the estimated effect, and to examine the proportion of this interval that overlaps pre-defined magnitudes that are clinically or mechanistically relevant. As an illustration, Batterham and Hopkins identify ‘substantially positive’, ‘trivial’ and ‘substantially negative’ magnitudes, as well as more finely graded magnitudes. The authors then translate these proportions into a set of likelihood statements about the magnitude of the true effect.</p>
<p>Batterham and Hopkins justify their suggested approach and corresponding inferences by drawing an analogy between their method and a Bayesian construction of the problem. In particular, they claim that their approach is approximately Bayesian based on no prior assumption about the distribution of the true parameter values. This has drawn criticism by a number of authors, such as Barker and Schofield (2008) who–rightly–point out that the approach is
<italic>not</italic>
Bayesian, and that the assumed priors in an analogous Bayesian approach may indeed be informative [
<xref rid="pone.0147311.ref007" ref-type="bibr">7</xref>
]. More recently, Welsh and Knight (2014) further criticised the approach of Batterham and Hopkins and suggested that relevant statistical approaches should use either confidence intervals or a fully Bayesian analysis [
<xref rid="pone.0147311.ref008" ref-type="bibr">8</xref>
].</p>
<p>The aim of this paper is to provide a Bayesian formulation of the method proposed by Batterham and Hopkins (2006) and provide a range of probabilistic statements that parallel their intended magnitude-based inferences. The models described here can be expanded as needed to address other issues. For further exposition, the model is described in the context of a small-scale athlete study authored by Humberstone-Gough and co-workers [
<xref rid="pone.0147311.ref009" ref-type="bibr">9</xref>
], which employed Batterham and Hopkins’ approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes.</p>
</sec>
<sec sec-type="materials|methods" id="sec002">
<title>Methods</title>
<sec id="sec003">
<title>General model</title>
<p>Both Bayesian and frequentist approaches require specification of a statistical model for the observed data, which contains a number of parameters that need to be estimated. Bayesian methods differ from frequentist approaches in that the parameters are treated as random variables. That is, they are considered to have true, but unknown, values and are thus described by a (posterior) probability distribution that reflects the uncertainty associated with how well they are known, based on the data. The posterior distribution is obtained by multiplying the likelihood, which describes the probability of observing the data given specified values of the parameters, by the prior distribution(s), which encapsulate beliefs about the parameter values held independently of the data. These priors may be developed using a range of information sources including previous experiments, historical data and/or expert opinion. Alternatively, they may be so-called uninformative or vague distributions, to allow inferences to be driven by the observed data.</p>
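In symbols (a generic restatement of the preceding paragraph, with θ denoting the parameters and y the observed data, rather than anything specific to this study), Bayes' theorem gives

\[ p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\, p(\theta)}{p(y)} \;\propto\; p(y \mid \theta)\, p(\theta), \]

so the posterior is proportional to the likelihood multiplied by the prior.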
<p>This study describes a simple statistical model that might be considered in the context of examining small effects in sports science, along with some possible prior distributions that might be placed on the parameters of this model. Some extensions to the model are considered in a later section.</p>
<p>Suppose that there are
<italic>G</italic>
treatment groups. For the
<italic>g</italic>
th group (
<italic>g =</italic>
1,…,
<italic>G</italic>
), let
<italic>n</italic>
<sub>
<italic>g</italic>
</sub>
denote the total number of individuals in the group,
<italic>y</italic>
<sub>
<italic>i</italic>
(
<italic>g</italic>
)</sub>
denote an observed effect of interest for the
<italic>i</italic>
th individual in the group (
<italic>i</italic>
= 1,…,
<italic>n</italic>
<sub>
<italic>g</italic>
</sub>
),
<italic>y</italic>
<sub>
<italic>g</italic>
</sub>
denote the set of observations in the group,
<inline-formula id="pone.0147311.e001">
<alternatives>
<graphic xlink:href="pone.0147311.e001.jpg" id="pone.0147311.e001g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M1">
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>-</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</alternatives>
</inline-formula>
and
<inline-formula id="pone.0147311.e002">
<alternatives>
<graphic xlink:href="pone.0147311.e002.jpg" id="pone.0147311.e002g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M2">
<mml:msubsup>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
denote respectively the sample mean and sample variance of all the observed responses from the group, and
<italic>v</italic>
<sub>
<italic>g</italic>
</sub>
=
<italic>n</italic>
<sub>
<italic>g</italic>
</sub>
−1 denote the degrees of freedom. For example, in the following case study, there are
<italic>G</italic>
= 3 groups (training regimens);
<italic>y</italic>
<sub>
<italic>i(g)</italic>
</sub>
is the difference between the post- and pre-treatment measurements for a selected response for the
<italic>i</italic>
th athlete in the
<italic>g</italic>
th training regimen, and
<inline-formula id="pone.0147311.e003">
<alternatives>
<graphic xlink:href="pone.0147311.e003.jpg" id="pone.0147311.e003g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M3">
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>-</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</alternatives>
</inline-formula>
is the average difference for that group.</p>
<p>Assume that an observation
<italic>y</italic>
<sub>
<italic>i</italic>
(
<italic>g</italic>
)</sub>
is Normally distributed around a group mean
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
, with a group-specific variance
<inline-formula id="pone.0147311.e004">
<alternatives>
<graphic xlink:href="pone.0147311.e004.jpg" id="pone.0147311.e004g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M4">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
, i.e.:
<disp-formula id="pone.0147311.e005">
<alternatives>
<graphic xlink:href="pone.0147311.e005.jpg" id="pone.0147311.e005g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M5">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>g</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:mtext>Normal(</mml:mtext>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(1)</label>
</disp-formula>
</p>
<p>A vague prior density is adopted for the pair of parameters
<inline-formula id="pone.0147311.e006">
<alternatives>
<graphic xlink:href="pone.0147311.e006.jpg" id="pone.0147311.e006g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M6">
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>μ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo>)</mml:mo>
</mml:math>
</alternatives>
</inline-formula>
[
<xref rid="pone.0147311.ref010" ref-type="bibr">10</xref>
] so that:
<disp-formula id="pone.0147311.e007">
<alternatives>
<graphic xlink:href="pone.0147311.e007.jpg" id="pone.0147311.e007g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M7">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>∝</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</alternatives>
<label>(2)</label>
</disp-formula>
(where
<italic>∝</italic>
denotes proportional to). Based on [
<xref rid="pone.0147311.ref001" ref-type="bibr">1</xref>
] and [
<xref rid="pone.0147311.ref002" ref-type="bibr">2</xref>
], the posterior conditional distributions for
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
and
<inline-formula id="pone.0147311.e008">
<alternatives>
<graphic xlink:href="pone.0147311.e008.jpg" id="pone.0147311.e008g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M8">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
are given by
<disp-formula id="pone.0147311.e009">
<alternatives>
<graphic xlink:href="pone.0147311.e009.jpg" id="pone.0147311.e009g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M9">
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:mtext>N(</mml:mtext>
<mml:msub>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(3)</label>
</disp-formula>
<disp-formula id="pone.0147311.e010">
<alternatives>
<graphic xlink:href="pone.0147311.e010.jpg" id="pone.0147311.e010g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M10">
<mml:mrow>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:mtext>Inverse</mml:mtext>
<mml:msup>
<mml:mi>χ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>υ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(4)</label>
</disp-formula>
</p>
<p>The marginal posterior distribution for
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
can be shown to have a
<italic>t</italic>
distribution on
<italic>v</italic>
<sub>
<italic>g</italic>
</sub>
degrees of freedom: [
<xref rid="pone.0147311.ref010" ref-type="bibr">10</xref>
]
<disp-formula id="pone.0147311.e011">
<alternatives>
<graphic xlink:href="pone.0147311.e011.jpg" id="pone.0147311.e011g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M11">
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mtext>(</mml:mtext>
<mml:msub>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(5)</label>
</disp-formula>
so that
<disp-formula id="pone.0147311.e012">
<alternatives>
<graphic xlink:href="pone.0147311.e012.jpg" id="pone.0147311.e012g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M12">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>/</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msqrt>
<mml:mo>|</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>υ</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</alternatives>
<label>(6)</label>
</disp-formula>
</p>
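As a minimal numerical sketch of Eq (5), assuming the vague prior of Eq (2) and using purely hypothetical summary statistics (not the study data), the marginal posterior of a group mean is a location-scale t distribution, so credible intervals and tail probabilities can be computed directly:

import numpy as np
from scipy import stats

# Hypothetical group summaries (illustrative only, not the study data)
n_g = 8                  # number of athletes in group g
ybar_g = 2.5             # sample mean of the post-minus-pre differences
s2_g = 4.0               # sample variance of the differences
v_g = n_g - 1            # degrees of freedom

# Eq (5): mu_g | y_g ~ t_{v_g}(ybar_g, s2_g / n_g), a location-scale t
scale = np.sqrt(s2_g / n_g)
post = stats.t(df=v_g, loc=ybar_g, scale=scale)

print("95% credible interval for mu_g:", post.interval(0.95))
print("Pr(mu_g > 1.0 | data):", post.sf(1.0))   # 1.0 is an arbitrary illustrative threshold

Because the marginal posterior has a closed form under this prior, such probabilities require no simulation.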
</sec>
<sec id="sec004">
<title>Relationship with frequentist results</title>
<p>The marginal posterior distributions for
<inline-formula id="pone.0147311.e013">
<alternatives>
<graphic xlink:href="pone.0147311.e013.jpg" id="pone.0147311.e013g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M13">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
and
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
, based on the data, are given by Eqs (
<xref ref-type="disp-formula" rid="pone.0147311.e010">4</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0147311.e011">5</xref>
), respectively. Because of the choice of the vague prior (
<xref ref-type="disp-formula" rid="pone.0147311.e007">Eq (2)</xref>
), these distributions can be shown to be closely related to analogous distributions for the (appropriately scaled) sufficient statistics, given
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
and
<inline-formula id="pone.0147311.e014">
<alternatives>
<graphic xlink:href="pone.0147311.e014.jpg" id="pone.0147311.e014g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M14">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
, based on frequentist sampling theory: [
<xref rid="pone.0147311.ref010" ref-type="bibr">10</xref>
]
<disp-formula id="pone.0147311.e015">
<alternatives>
<graphic xlink:href="pone.0147311.e015.jpg" id="pone.0147311.e015g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M15">
<mml:mrow>
<mml:msub>
<mml:mi>υ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>|</mml:mo>
<mml:msubsup>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>~</mml:mo>
<mml:msubsup>
<mml:mi>χ</mml:mi>
<mml:mrow>
<mml:mi>υ</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
</mml:math>
</alternatives>
<label>(7)</label>
</disp-formula>
<disp-formula id="pone.0147311.e016">
<alternatives>
<graphic xlink:href="pone.0147311.e016.jpg" id="pone.0147311.e016g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M16">
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mi>y</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>−</mml:mo>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>/</mml:mo>
<mml:mo stretchy="false">(</mml:mo>
<mml:msubsup>
<mml:mi>s</mml:mi>
<mml:mi>g</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>/</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>n</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msqrt>
<mml:mo>~</mml:mo>
<mml:msub>
<mml:mi>t</mml:mi>
<mml:mrow>
<mml:mi>υ</mml:mi>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(8)</label>
</disp-formula>
</p>
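Continuing the numerical sketch above (hypothetical summaries, not the study data), this correspondence can be checked directly: under the vague prior, the central 95% credible interval for the group mean from Eq (5) coincides numerically with the classical 95% t confidence interval implied by Eq (8).

# Classical 95% t confidence interval for the group mean (same hypothetical summaries)
t_crit = stats.t.ppf(0.975, df=v_g)
freq_ci = (ybar_g - t_crit * scale, ybar_g + t_crit * scale)
print(np.allclose(post.interval(0.95), freq_ci))   # True: identical endpoints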
</sec>
<sec id="sec005">
<title>Estimation of values of interest</title>
<p>A range of posterior estimates (conditional on the available data) arising from the model may be of interest, including:</p>
<list list-type="order">
<list-item>
<p>the mean and standard deviation for each group (e.g., each training regimen in the study), given by
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
and
<inline-formula id="pone.0147311.e017">
<alternatives>
<graphic xlink:href="pone.0147311.e017.jpg" id="pone.0147311.e017g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M17">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
, respectively</p>
</list-item>
<list-item>
<p>the difference between the group means:
<italic>δ</italic>
<sub>
<italic>kl</italic>
</sub>
=
<italic>μ</italic>
<sub>
<italic>k</italic>
</sub>
<italic>μ</italic>
<sub>
<italic>l</italic>
</sub>
and the associated standard deviation of this difference,
<italic>σ</italic>
<sub>
<italic>kl</italic>
</sub>
</p>
</list-item>
<list-item>
<p>a (1−
<italic>α</italic>
)% credible interval (CI) for a measure of interest,
<italic>θ</italic>
, say, such that there is a posterior probability (1−
<italic>α</italic>
) that
<italic>θ</italic>
lies in this interval (e.g.,
<italic>θ</italic>
could be the mean of group 2, i.e.,
<italic>θ = μ</italic>
<sub>2</sub>
, and a 95% CI of (3.1, 4.2), for instance, indicates that the probability that
<italic>μ</italic>
<sub>2</sub>
is between 3.1 and 4.2, given the data, is 0.95), which is a much more direct statement than is possible under a frequentist approach</p>
</list-item>
<list-item>
<p>Cohen’s
<italic>d</italic>
[
<xref rid="pone.0147311.ref011" ref-type="bibr">11</xref>
] for the difference between two groups, given by
<italic>d</italic>
<sub>
<italic>kl</italic>
</sub>
=
<italic>δ</italic>
<sub>
<italic>kl</italic>
</sub>
/
<italic>σ</italic>
<sub>
<italic>kl</italic>
</sub>
when comparing groups
<italic>k</italic>
and
<italic>l</italic>
,
<italic>k</italic>
≠
<italic>l</italic>
</p>
</list-item>
<list-item>
<p>the probability that Cohen’s
<italic>d</italic>
exceeds a specified threshold such as a ‘smallest worthwhile change’ (
<italic>SWC</italic>
,[
<xref rid="pone.0147311.ref006" ref-type="bibr">6</xref>
]), given by Pr(
<italic>d</italic>
<sub>
<italic>kl</italic>
</sub>
> SWC) or Pr(
<italic>d</italic>
<sub>
<italic>kl</italic>
</sub>
< −SWC), depending on whether
<italic>d</italic>
<sub>
<italic>kl</italic>
</sub>
is positive or negative, respectively</p>
</list-item>
<list-item>
<p>the predicted outcome of each individual under each training regimen, regardless of whether or not they have participated in that training, obtained from
<xref ref-type="disp-formula" rid="pone.0147311.e005">Eq (1)</xref>
, with an estimate of the corresponding uncertainty of this prediction</p>
</list-item>
<list-item>
<p>the ranks of each individual under each training regimen, with corresponding uncertainty in these orderings.</p>
</list-item>
</list>
<p>Given the data
<italic>y</italic>
<sub>
<italic>g</italic>
</sub>
for each group (and hence the sufficient statistics
<inline-formula id="pone.0147311.e018">
<alternatives>
<graphic xlink:href="pone.0147311.e018.jpg" id="pone.0147311.e018g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M18">
<mml:msub>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>-</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</alternatives>
</inline-formula>
and
<inline-formula id="pone.0147311.e019">
<alternatives>
<graphic xlink:href="pone.0147311.e019.jpg" id="pone.0147311.e019g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M19">
<mml:msubsup>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
), it is straightforward to use Eqs (
<xref ref-type="disp-formula" rid="pone.0147311.e010">4</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0147311.e011">5</xref>
) to compute posterior estimates
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
and
<inline-formula id="pone.0147311.e020">
<alternatives>
<graphic xlink:href="pone.0147311.e020.jpg" id="pone.0147311.e020g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M20">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
, and other probabilities of interest. An alternative, simple approach is to simulate values of interest using Eqs (
<xref ref-type="disp-formula" rid="pone.0147311.e009">3</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0147311.e010">4</xref>
) iteratively, employing a form of Markov chain Monte Carlo (MCMC) [
<xref rid="pone.0147311.ref012" ref-type="bibr">12</xref>
]. A more technical explanation of this approach, including the Gibbs sampling technique, is provided by Geman and Geman [
<xref rid="pone.0147311.ref013" ref-type="bibr">13</xref>
]. At each iteration, a value of
<inline-formula id="pone.0147311.e021">
<alternatives>
<graphic xlink:href="pone.0147311.e021.jpg" id="pone.0147311.e021g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M21">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
is simulated from
<xref ref-type="disp-formula" rid="pone.0147311.e010">Eq (4)</xref>
and then a value of
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
given that value of
<inline-formula id="pone.0147311.e022">
<alternatives>
<graphic xlink:href="pone.0147311.e022.jpg" id="pone.0147311.e022g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M22">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
is simulated from
<xref ref-type="disp-formula" rid="pone.0147311.e009">Eq (3)</xref>
. This process is repeated a large number of times. The simulated values can be used to compute other measures (e.g. exp(
<italic>μ</italic>
<sub>1</sub>
−
<italic>μ</italic>
<sub>2</sub>
) if this is of interest), indicators I(
<italic>μ</italic>
<sub>1</sub>
>
<italic>c</italic>
) or I(
<italic>μ</italic>
<sub>1</sub>
>
<italic>μ</italic>
<sub>2</sub>
) and so on. Then E(exp(
<italic>μ</italic>
<sub>1</sub>
−
<italic>μ</italic>
<sub>2</sub>
)), Pr(
<italic>μ</italic>
<sub>1</sub>
>
<italic>c</italic>
) and Pr(
<italic>μ</italic>
<sub>1</sub>
>
<italic>μ</italic>
<sub>2</sub>
) can be estimated (where E denotes expectation) as the respective averages of these values over all of the iterations. Similarly, at each iteration, the simulated parameter values can be input into
<xref ref-type="disp-formula" rid="pone.0147311.e005">Eq (1)</xref>
to obtain predicted values of
<italic>y</italic>
for each individual under each regimen, and the individuals can be ranked with respect to their predicted outcome. The posterior distributions for individual predicted outcomes, and the probability distribution for the ranks, are computed from the respective values obtained from the set of iterations.</p>
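<p>To make this sampling scheme concrete, a minimal R sketch of a Gibbs sampler for a single group is given below. The exact full conditional distributions in Eqs (3) and (4) depend on the prior constants, which are not repeated here; the sketch assumes the standard conjugate choices (a normal prior on the group mean and an inverse-Gamma prior on the group variance) with illustrative hyperparameter values, and is not a reproduction of the code used in the case study.</p>
<preformat>
## Minimal Gibbs sampler for one group: y ~ Normal(mu, sigma2).
## Assumed conjugate priors: mu ~ Normal(m0, v0), sigma2 ~ Inverse-Gamma(a0, b0).
gibbs_group <- function(y, n_iter = 10000, m0 = 0, v0 = 1e6, a0 = 0.001, b0 = 0.001) {
  n    <- length(y)
  ybar <- mean(y)
  mu     <- ybar                      # starting values
  sigma2 <- var(y)
  draws <- matrix(NA_real_, n_iter, 2, dimnames = list(NULL, c("mu", "sigma2")))
  for (t in 1:n_iter) {
    ## full conditional of sigma2 given mu: inverse-Gamma
    sigma2 <- 1 / rgamma(1, shape = a0 + n / 2, rate = b0 + 0.5 * sum((y - mu)^2))
    ## full conditional of mu given sigma2: normal
    prec <- 1 / v0 + n / sigma2
    mu   <- rnorm(1, mean = (m0 / v0 + n * ybar / sigma2) / prec, sd = sqrt(1 / prec))
    draws[t, ] <- c(mu, sigma2)
  }
  draws
}

## Example post-processing for two groups (draws1, draws2 from gibbs_group):
## mean(draws1[, "mu"] - draws2[, "mu"] > 0)   # Pr(mu1 > mu2)
</preformat>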
<p>Cohen’s
<italic>d</italic>
is a standardized effect size estimate, calculated as the difference between two means divided by the corresponding standard deviation. While there are many effect size estimators, Cohen’s
<italic>d</italic>
is one of the most common since it is appropriate for comparing the means of distinctly different groups and it has appealing statistical properties; for example, it has a well-known distribution and is a maximum likelihood estimator [
<xref rid="pone.0147311.ref014" ref-type="bibr">14</xref>
].</p>
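<p>For reference, the classical point estimate of Cohen’s d can be computed directly from the two samples; a minimal R sketch, using hypothetical data vectors y1 and y2, is:</p>
<preformat>
## Classical Cohen's d: difference in means divided by the pooled standard deviation.
cohens_d <- function(y1, y2) {
  n1 <- length(y1); n2 <- length(y2)
  s_pooled <- sqrt(((n1 - 1) * var(y1) + (n2 - 1) * var(y2)) / (n1 + n2 - 2))
  (mean(y1) - mean(y2)) / s_pooled
}

## Illustrative call with simulated data for two groups of eight athletes:
cohens_d(rnorm(8, mean = 1.0, sd = 2), rnorm(8, mean = 0.5, sd = 2))
</preformat>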
</sec>
<sec id="sec006">
<title>Model extensions</title>
<p>The model described above can be easily extended in a range of ways. Three such extensions are considered here. The first extension is that other prior distributions can be considered instead of
<xref ref-type="disp-formula" rid="pone.0147311.e007">Eq (2)</xref>
above. For example, another common approach is to assign a normal distribution for the group means,
<disp-formula id="pone.0147311.e023">
<alternatives>
<graphic xlink:href="pone.0147311.e023.jpg" id="pone.0147311.e023g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M23">
<mml:mrow>
<mml:msub>
<mml:mi>μ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:mtext>Normal(M,V)</mml:mtext>
</mml:mrow>
</mml:math>
</alternatives>
<label>(9)</label>
</disp-formula>
and a Uniform distribution for the standard deviations,
<disp-formula id="pone.0147311.e024">
<alternatives>
<graphic xlink:href="pone.0147311.e024.jpg" id="pone.0147311.e024g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M24">
<mml:mrow>
<mml:msub>
<mml:mi>σ</mml:mi>
<mml:mi>g</mml:mi>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:mtext>Uniform(0,R),</mml:mtext>
</mml:mrow>
</mml:math>
</alternatives>
<label>(10)</label>
</disp-formula>
where M and V denote the mean and variance of the normal distribution, respectively, and R is the upper bound of the uniform distribution. Alternatives to the uniform are the half-normal or half-Cauchy. If the sample sizes within groups are small and little is known
<italic>a priori</italic>
about the comparative variability of measurements within and between the groups, then
<inline-formula id="pone.0147311.e025">
<alternatives>
<graphic xlink:href="pone.0147311.e025.jpg" id="pone.0147311.e025g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M25">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
can be imprecisely estimated; to avoid this, the individual variances can be replaced by a common variance,
<italic>σ</italic>
<sup>2</sup>
say.</p>
<p>There are many ways of setting the values of M, V and R. For example, if there is no prior information about these values and if the groups are considered to be independent, this can be reflected by specifying very large values of V and R, relative to the data. This means that the priors in Eqs (
<xref ref-type="disp-formula" rid="pone.0147311.e023">9</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0147311.e024">10</xref>
) will have negligible weight in the posterior estimates of the group means
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
and variances
<inline-formula id="pone.0147311.e026">
<alternatives>
<graphic xlink:href="pone.0147311.e026.jpg" id="pone.0147311.e026g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M26">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
. If V is sufficiently large, the value of M will not matter, so it is commonly set to 0 in this case. Alternatively, the groups can be perceived as having their own characteristics (described by
<italic>μ</italic>
<sub>
<italic>g</italic>
</sub>
and
<inline-formula id="pone.0147311.e027">
<alternatives>
<graphic xlink:href="pone.0147311.e027.jpg" id="pone.0147311.e027g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M27">
<mml:msubsup>
<mml:mrow>
<mml:mi>σ</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:math>
</alternatives>
</inline-formula>
) but also being part of a larger population with an overall mean M and variance V. This random effects model is very common as it helps to accommodate outliers and improve estimation of small groups. Another alternative is to use other information to set the values of M, V and R. This information can include results of previous similar experiments, published estimates, expert opinion, and so on. Depending on the problem and the available information, different values of M, V and R can be defined for the different groups. The Bayesian framework can be very helpful in providing a mechanism for combining these sources of information in a formal manner.</p>
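<p>One simple way to judge whether candidate values of M, V and R are suitably vague (or suitably informative) is to simulate from the priors in Eqs (9) and (10) and inspect the distribution of hypothetical observations they imply. The following R sketch uses illustrative values of M, V and R only:</p>
<preformat>
## Prior predictive simulation for Eqs (9) and (10):
## mu_g ~ Normal(M, V), sigma_g ~ Uniform(0, R), y ~ Normal(mu_g, sigma_g).
prior_predictive <- function(n_sim = 10000, M = 0, V = 100^2, R = 50) {
  mu    <- rnorm(n_sim, mean = M, sd = sqrt(V))   # group mean drawn from its prior
  sigma <- runif(n_sim, min = 0, max = R)         # group s.d. drawn from its prior
  rnorm(n_sim, mean = mu, sd = sigma)             # hypothetical observation
}

summary(prior_predictive())   # should comfortably cover the plausible range of the data
</preformat>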
<p>The second extension is that the model described in
<xref ref-type="disp-formula" rid="pone.0147311.e005">Eq (1)</xref>
can be expanded to include explanatory variables that can help to improve the explanation or prediction of the response. This is the model that is adopted in the case study described below, where the explanatory variables comprise the group label and a covariate reflecting training-induced changes. For this purpose,
<xref ref-type="disp-formula" rid="pone.0147311.e005">Eq (1)</xref>
is extended as follows:
<disp-formula id="pone.0147311.e028">
<alternatives>
<graphic xlink:href="pone.0147311.e028.jpg" id="pone.0147311.e028g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M28">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msubsup>
<mml:mi>x</mml:mi>
<mml:mi>i</mml:mi>
<mml:mo>′</mml:mo>
</mml:msubsup>
<mml:mi>β</mml:mi>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>ε</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</alternatives>
<label>(11)</label>
</disp-formula>
where the explanatory variables and their regression coefficients are denoted by
<italic>x</italic>
and
<italic>β</italic>
, respectively, and
<italic>ε</italic>
<sub>
<italic>i</italic>
</sub>
describes the residual between the observation
<italic>y</italic>
<sub>
<italic>i</italic>
</sub>
and its predicted value
<inline-formula id="pone.0147311.e029">
<alternatives>
<graphic xlink:href="pone.0147311.e029.jpg" id="pone.0147311.e029g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M29">
<mml:msubsup>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>'</mml:mi>
</mml:mrow>
</mml:msubsup>
<mml:mi>β</mml:mi>
</mml:math>
</alternatives>
</inline-formula>
. Note that the superscript
<italic>′</italic>
denotes the transpose. It is common to assume that
<italic>ε</italic>
<sub>
<italic>i</italic>
</sub>
~Normal(0,
<italic>σ</italic>
<sup>2</sup>
). The following priors are placed on the parameters of this regression model:
<disp-formula id="pone.0147311.e030">
<alternatives>
<graphic xlink:href="pone.0147311.e030.jpg" id="pone.0147311.e030g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M30">
<mml:mrow>
<mml:mi>β</mml:mi>
<mml:mo>~</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mtext>Normal</mml:mtext>
</mml:mrow>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mtext>(</mml:mtext>
<mml:msub>
<mml:mi>b</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msubsup>
<mml:mtext>B</mml:mtext>
<mml:mn>0</mml:mn>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>;</mml:mo>
<mml:msup>
<mml:mi>σ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>~</mml:mo>
<mml:mtext>Gamma(</mml:mtext>
<mml:msub>
<mml:mi>c</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo> </mml:mo>
<mml:msub>
<mml:mi>d</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(12)</label>
</disp-formula>
where
<italic>k</italic>
represents the number of parameters, Normal
<sub>
<italic>k</italic>
</sub>
indicates a
<italic>k</italic>
-dimensional Gaussian distribution and Gamma indicates a Gamma distribution described by shape and scale parameters, in this case given by constants
<italic>c</italic>
<sub>0</sub>
and
<italic>d</italic>
<sub>0</sub>
.</p>
<p>An uninformative prior specification can be defined for
<italic>β</italic>
by setting zero values for the mean vector
<italic>b</italic>
<sub>0</sub>
and precision matrix
<italic>B</italic>
<sub>0</sub>
. Similarly, negligible prior information about the magnitude of the residuals is reflected by setting small values for
<italic>c</italic>
<sub>0</sub>
and
<italic>d</italic>
<sub>0</sub>
in the distribution for
<italic>σ</italic>
<sup>2</sup>
[
<xref rid="pone.0147311.ref015" ref-type="bibr">15</xref>
].</p>
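<p>As an illustration, the MCMCregress function in the MCMCpack package (cited above and used for the case study below) takes prior constants under these same names, following the package documentation. The call below is a sketch only: the data frame and formula are hypothetical, and the small values of c0 and d0 convey negligible prior information about the residual variance.</p>
<preformat>
library(MCMCpack)   # provides MCMCregress, a Bayesian linear regression sampler

## Hypothetical data frame: response y, covariate X and training group for 23 athletes.
dat <- data.frame(y = rnorm(23),
                  X = rnorm(23, mean = 0, sd = 20),
                  group = factor(rep(c("Placebo", "IHE", "LHTL"), times = c(8, 8, 7))))

## Diffuse prior specification: b0 = 0 and B0 = 0 give a flat prior on the coefficients;
## small c0 and d0 correspond to a vague prior on the residual variance.
fit <- MCMCregress(y ~ X + group, data = dat,
                   b0 = 0, B0 = 0, c0 = 0.001, d0 = 0.001,
                   burnin = 1000, mcmc = 10000)
summary(fit)
</preformat>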
<p>An alternative, popular formulation is to use Zellner’s
<italic>g</italic>
-prior, whereby the variance of the prior for
<italic>β</italic>
is defined in terms of the variance of the data. More explicitly, the prior for <italic>β</italic> is specified as a multivariate normal distribution with a covariance matrix that is proportional to the inverse Fisher information matrix for
<italic>β</italic>
, given by
<italic>g</italic>
(
<italic>x</italic>
<sup>
<italic>T</italic>
</sup>
<italic>x</italic>
)
<sup>−1</sup>
. This is an elegant way of specifying the ‘information’ contained in the prior, relative to that contained in the data: the value of
<italic>g</italic>
is analogous to the ‘equivalent number of observations’ that is contributed to the analysis by the prior [
<xref rid="pone.0147311.ref016" ref-type="bibr">16</xref>
,
<xref rid="pone.0147311.ref017" ref-type="bibr">17</xref>
].</p>
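<p>A brief R sketch of how the g-prior covariance can be formed from a design matrix is given below; the matrix x, the residual variance and the choice of g (here the number of observations) are illustrative assumptions rather than values prescribed by the references.</p>
<preformat>
## Zellner's g-prior covariance for beta: g * sigma2 * (x'x)^{-1},
## proportional to the inverse Fisher information for beta.
g_prior_cov <- function(x, sigma2 = 1, g = nrow(x)) {
  g * sigma2 * solve(crossprod(x))   # crossprod(x) computes t(x) %*% x
}

## Illustrative use with a small random design matrix (intercept plus one covariate):
x <- cbind(1, rnorm(23))
g_prior_cov(x)
</preformat>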
<p>The third extension is the choice of the response
<italic>y</italic>
. This depends on the aim of the analysis, biological and other contextual knowledge of the problem, and the available data. The residuals are assumed to have a normal distribution with a mean of zero, and the response can be defined as the difference between an individual’s post-training and pre-training measurements, the difference of the logarithms of these measurements, the relative difference between the pairs of measurements (i.e. (post−pre)/pre) or some other context-relevant transformation.</p>
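<p>These alternative definitions of the response translate directly into code; a minimal R sketch, using hypothetical vectors of paired pre- and post-training measurements, is:</p>
<preformat>
## Hypothetical paired measurements for three athletes (e.g. Hbmass in grams).
pre  <- c(850, 910, 880)
post <- c(865, 905, 900)

y_diff <- post - pre              # unscaled difference
y_log  <- log(post) - log(pre)    # difference on the logarithmic scale
y_rel  <- (post - pre) / pre      # relative (scaled) difference
</preformat>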
</sec>
<sec id="sec007">
<title>Case Study</title>
<p>The Bayesian approach described above was applied to a study by Humberstone-Gough
<italic>et al</italic>
. [
<xref rid="pone.0147311.ref009" ref-type="bibr">9</xref>
] who used a two-period (pre-post) repeated measures design to compare the effects of three training regimens, ‘Live High Train Low’ altitude training (LHTL), ‘Intermittent Hypoxic Exposure’ (IHE) and ‘Placebo’, on running performance and blood characteristics. The study comprised eight subjects (elite male triathletes) in each regimen, and had one dropout in the LHTL group. Although ten running and blood variables were considered in the original study, three variables with the most complete data are selected here for illustration: hemoglobin mass (Hbmass, units of grams), submaximal running economy (RunEcon, units of L O
<sub>2</sub>
.min
<sup>-1</sup>
) and maximum blood lactate concentration (La-max, units of mmol/L). The authors also employed a covariate reflecting training-induced changes, namely the percent change in weekly training load from pre- to during-camp for each individual athlete. The data used for the analyses are shown in
<xref ref-type="supplementary-material" rid="pone.0147311.s001">S1 Table</xref>
(data extracted from the original study of Humberstone-Gough et al. (2013) and provided by co-author Gore).</p>
<p>Casting this study in terms of the models described above, there are
<italic>G = 3</italic>
groups denoting the training regimens (Placebo by
<italic>g</italic>
= 1; IHE by
<italic>g</italic>
= 2; LHTL by
<italic>g</italic>
= 3). Letting pre
<sub>
<italic>i</italic>
</sub>
and post
<sub>
<italic>i</italic>
</sub>
denote respectively the pre- and post-treatment measurements for the
<italic>i</italic>
th individual, an (unscaled) effect of interest,
<italic>y</italic>
<sub>
<italic>i</italic>
</sub>
, was defined in terms of the difference between the pairs of measurements:
<disp-formula id="pone.0147311.e031">
<alternatives>
<graphic xlink:href="pone.0147311.e031.jpg" id="pone.0147311.e031g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M31">
<mml:mrow>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mtext>post</mml:mtext>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mrow>
<mml:mtext>pre</mml:mtext>
</mml:mrow>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:mrow>
</mml:math>
</alternatives>
<label>(13)</label>
</disp-formula>
</p>
<p>A log transformation was adopted in the original analysis by Humberstone-Gough
<italic>et al</italic>
. [
<xref rid="pone.0147311.ref009" ref-type="bibr">9</xref>
] but was not performed in the analysis described below, as there was insufficient information in the observed data to strongly motivate a transformation of the measurements, particularly after adjusting for the covariate reported by Humberstone-Gough
<italic>et al</italic>
. (comparative summary plots not shown). However, it is acknowledged that this decision was based purely on the available data and there may be compelling biological or experimental reasons for choosing the log (or other) scale; for example, under this transformation covariates can be considered to have multiplicative rather than additive effects on the original response. On the one hand, retaining the original scale allows for more direct interpretation of the results. On the other, if the underlying assumptions are not met, the inferences based on the results must be treated with caution. In this study, the premise was adopted of not transforming unless there is a compelling domain-specific or statistical reason to do so. Hence the decision was made not to take a log transformation of the data as other authors have suggested–a statistical decision–and to consider a relative change in performance as well as an absolute difference–a domain-based decision since this measure is of interest to sports scientists. A similar issue arises about the inclusion of covariates in a small sample analysis. In this case, the associated regression parameters may be estimated with substantial uncertainty and the usual model comparison methods are often inadequate in determining any associated improvement in model fit. Again, the decision may be more domain-based than statistical. In the study considered here, results were reported with and without a covariate that was considered to be important for sports scientists, and a deliberate decision was made to avoid formal model comparison. These issues of data transformation and model comparison for small samples merit further research.</p>
<p>As an alternative to the log transformation, we also consider an analogous scaled effect defined in terms of the relative difference between the pairs of measurements:
<disp-formula id="pone.0147311.e032">
<alternatives>
<graphic xlink:href="pone.0147311.e032.jpg" id="pone.0147311.e032g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M32">
<mml:msub>
<mml:mrow>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi> </mml:mi>
<mml:mo>=</mml:mo>
<mml:mi> </mml:mi>
<mml:msub>
<mml:mrow>
<mml:mtext>(post</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mtext>pre</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>)</mml:mo>
<mml:mo>/</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mtext>pre</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>.</mml:mo>
</mml:math>
</alternatives>
<label>(14)</label>
</disp-formula>
</p>
<p>For both the unscaled response given by
<xref ref-type="disp-formula" rid="pone.0147311.e031">Eq (13)</xref>
and the relative response given by
<xref ref-type="disp-formula" rid="pone.0147311.e032">Eq (14)</xref>
, the posterior estimates of interest were:</p>
<list list-type="bullet">
<list-item>
<p>the differences between the two experimental training regimens (IHE, LHTL) and the Placebo group, given by
<italic>δ</italic>
<sub>
<italic>12</italic>
</sub>
and
<italic>δ</italic>
<sub>
<italic>13</italic>
</sub>
, respectively, and the difference between the two training regimens IHE and LHTL, given by
<italic>δ</italic>
<sub>
<italic>23</italic>
</sub>
;</p>
</list-item>
<list-item>
<p>Cohen’s
<italic>d</italic>
for each of the two experimental regimens compared with the Placebo regimen, given by
<italic>d</italic>
<sub>
<italic>12</italic>
</sub>
=
<italic>δ</italic>
<sub>
<italic>12</italic>
</sub>
/
<italic>σ</italic>
<sub>
<italic>12</italic>
</sub>
for IHE and
<italic>d</italic>
<sub>
<italic>13</italic>
</sub>
=
<italic>δ</italic>
<sub>
<italic>13</italic>
</sub>
/
<italic>σ</italic>
<sub>
<italic>13</italic>
</sub>
for LHTL;</p>
</list-item>
<list-item>
<p>Cohen’s
<italic>d</italic>
for the standardized difference between LHTL versus IHE, given by
<italic>d</italic>
<sub>
<italic>23</italic>
</sub>
=
<italic>δ</italic>
<sub>
<italic>23</italic>
</sub>
/
<italic>σ</italic>
<sub>
<italic>23</italic>
</sub>
;</p>
</list-item>
<list-item>
<p>the probabilities that the standardized difference between the IHE training regimen and the Placebo exceeds the ‘smallest worthwhile change’ or falls below its negative (
<italic>SWC</italic>
, specified as a standardised change of 0.2 based on previous recommendations [
<xref rid="pone.0147311.ref018" ref-type="bibr">18</xref>
]), denoted by
<italic>SWCU</italic>
<sub>
<italic>12</italic>
</sub>
= Pr(
<italic>d</italic>
<sub>
<italic>12</italic>
</sub>
> 0.2) and
<italic>SWCL</italic>
<sub>
<italic>12</italic>
</sub>
= Pr(
<italic>d</italic>
<sub>
<italic>12</italic>
</sub>
< -0.2);</p>
</list-item>
<list-item>
<p>analogous probabilistic comparisons with the
<italic>SWC</italic>
for the difference between the LHTL training regimen and the Placebo, and the LHTL and IHE training regimens,</p>
</list-item>
<list-item>
<p>the posterior distributions of the expected outcome E(
<italic>y</italic>
<sub>
<italic>ij</italic>
</sub>
) =
<italic>β</italic>
<sub>0</sub>
+
<italic>β</italic>
<sub>1</sub>
<italic>X</italic>
+
<italic>β</italic>
<sub>2</sub>
<italic>I</italic>
<sub>
<italic>j</italic>
= 1</sub>
+
<italic>β</italic>
<sub>3</sub>
<italic>I</italic>
<sub>
<italic>j</italic>
= 2</sub>
for the
<italic>i</italic>
th individual under the
<italic>j</italic>
th training regimen (where
<italic>I</italic>
<sub>
<italic>j = 1</italic>
</sub>
= 1 if the treatment is IHE and = 0 otherwise, and
<italic>I</italic>
<sub>
<italic>j = 2</italic>
</sub>
= 1 if the treatment is LHTL and = 0 otherwise); these expected outcomes are obtained by substituting the simulated parameter values (
<italic>β</italic>
<sub>0</sub>
,
<italic>β</italic>
<sub>1</sub>
,
<italic>β</italic>
<sub>2</sub>
,
<italic>β</italic>
<sub>3</sub>
) into this equation at each MCMC iteration,</p>
</list-item>
<list-item>
<p>the analogous posterior predicted outcome for each individual under each training regimen, which allows for within-subject variation around the expected outcome, i.e.,
<inline-formula id="pone.0147311.e033">
<alternatives>
<graphic xlink:href="pone.0147311.e033.jpg" id="pone.0147311.e033g" mimetype="image" position="anchor" orientation="portrait"></graphic>
<mml:math id="M33">
<mml:mrow>
<mml:msubsup>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mtext>pred</mml:mtext>
</mml:mrow>
</mml:msubsup>
<mml:mo>=</mml:mo>
<mml:mtext>E</mml:mtext>
<mml:mo stretchy="false">(</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo stretchy="false">)</mml:mo>
<mml:mo>+</mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:msub>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>~</mml:mo>
<mml:mtext>N</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:msup>
<mml:mi>σ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</alternatives>
</inline-formula>
, which is obtained in the same manner as above,</p>
</list-item>
<list-item>
<p>the ranks of the individuals based on their expected and predicted outcomes under each of the treatment regimens; again, these are probability distributions, reflecting the fact that rankings may change depending on the precision of the estimated treatment effects and within-subject variation.</p>
</list-item>
</list>
<p>Note that although the denominator of the Cohen’s
<italic>d</italic>
<sub>
<italic>kl</italic>
</sub>
values can be calculated using the traditional equation, i.e.,
<italic>σ</italic>
<sub>
<italic>kl</italic>
</sub>
= √Var(
<italic>δ</italic>
<sub>
<italic>l</italic>
</sub>
-
<italic>δ</italic>
<sub>
<italic>k</italic>
</sub>
) = √((
<italic>v</italic>
<sub>
<italic>l</italic>
</sub>
Var(
<italic>δ</italic>
<sub>
<italic>l</italic>
</sub>
)+
<italic>v</italic>
<sub>
<italic>k</italic>
</sub>
Var(
<italic>δ</italic>
<sub>
<italic>k</italic>
</sub>
))/(
<italic>v</italic>
<sub>
<italic>l</italic>
</sub>
+
<italic>v</italic>
<sub>
<italic>k</italic>
</sub>
)), this can also be directly calculated using the simulated values of
<italic>δ</italic>
<sub>
<italic>kl</italic>
</sub>
obtained from the MCMC iterations, i.e.,
<italic>σ</italic>
<sub>
<italic>kl</italic>
</sub>
= √Var(
<italic>δ</italic>
<sub>
<italic>kl</italic>
</sub>
).</p>
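<p>In practice these quantities are computed from the stored draws. A minimal R sketch is given below, where delta_kl is assumed to be a vector of posterior draws of the difference between two group effects, and the SWC threshold of 0.2 follows the specification above:</p>
<preformat>
## Placeholder posterior draws of a group difference; in practice these are
## extracted from the fitted model (e.g. the LHTL minus IHE contrast).
delta_kl <- rnorm(10000)

sigma_kl <- sd(delta_kl)            # denominator: posterior s.d. of the difference
d_kl     <- delta_kl / sigma_kl     # draws of the standardised effect (Cohen's d)
swc      <- 0.2                     # smallest worthwhile change, standardised

mean(d_kl >  swc)                   # Pr(d_kl >  SWC)
mean(d_kl < -swc)                   # Pr(d_kl < -SWC)
</preformat>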
<p>Based on exploratory plots of the relationships between the observed pre- and post-training values of Hbmass, RunEcon and La-max among the three groups, and with the covariate, two analyses of the data were undertaken. In the first analysis, the covariate was excluded and the model was fit using Eqs (
<xref ref-type="disp-formula" rid="pone.0147311.e009">3</xref>
) and (
<xref ref-type="disp-formula" rid="pone.0147311.e010">4</xref>
). In the second analysis, the covariate was included given previous work showing that training load can influence the hemopoietic response [
<xref rid="pone.0147311.ref019" ref-type="bibr">19</xref>
] and the model was fit using
<xref ref-type="disp-formula" rid="pone.0147311.e028">Eq (11)</xref>
. The models were implemented using the statistical software R, with packages BRugs and R2WinBUGS, which call WinBUGS [
<xref rid="pone.0147311.ref015" ref-type="bibr">15</xref>
,
<xref rid="pone.0147311.ref020" ref-type="bibr">20</xref>
,
<xref rid="pone.0147311.ref021" ref-type="bibr">21</xref>
], and MCMCregress in the MCMCpack library [
<xref rid="pone.0147311.ref022" ref-type="bibr">22</xref>
]. Estimates were based on 150,000 MCMC iterations, after discarding an initial burn-in of 50,000 iterations. For comparability with Humberstone-Gough
<italic>et al</italic>
[
<xref rid="pone.0147311.ref009" ref-type="bibr">9</xref>
], the results of the second analysis are reported below. The R code for this model is presented as a text file in
<xref ref-type="supplementary-material" rid="pone.0147311.s002">S1 Text</xref>
.</p>
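<p>The posterior summaries reported below (means, standard deviations and 95% and 90% credible intervals) can be computed directly from the stored draws; a brief R sketch, in which fit denotes the object returned by the sampler, is:</p>
<preformat>
## Posterior mean, s.d. and central credible intervals for a vector of MCMC draws.
summarise_effect <- function(draws) {
  c(mean = mean(draws),
    sd   = sd(draws),
    quantile(draws, probs = c(0.025, 0.975)),   # 95% credible interval
    quantile(draws, probs = c(0.050, 0.950)))   # 90% credible interval
}

## Applied to every column (parameter) of the MCMC output, e.g. from MCMCregress:
## t(apply(as.matrix(fit), 2, summarise_effect))
</preformat>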
<p>As described above, the primary analyses for the case study utilized an uninformative prior specification for β in
<xref ref-type="disp-formula" rid="pone.0147311.e030">Eq (12)</xref>
, which was obtained by setting the values of the prior mean vector
<italic>b</italic>
<sub>0</sub>
and prior precision matrix B
<sub>0</sub>
to zero. The impact of informative priors was evaluated by considering a range of non-zero values for these terms, with Hbmass as the response measure. The values were motivated by the results of a recent meta-analysis of training regimens on Hbmass [
<xref rid="pone.0147311.ref023" ref-type="bibr">23</xref>
], which reported a mean response of 1.08% increase in Hbmass per 100 hours of LHTL training. Based on the study of Humberstone-Gough with 240 hours of exposure, the prior expectation is thus that the mean increases for the LHTL and IHE groups would be 2.6% and 0% respectively. The latter figure is also supported by a report that 3 h/day at 4,000–5,500 m was inadequate to increase Hbmass at all [
<xref rid="pone.0147311.ref024" ref-type="bibr">24</xref>
]. This literature also provides a prior expectation of 0% increase in Hbmass of the Placebo group. The 2013 meta-analysis [
<xref rid="pone.0147311.ref023" ref-type="bibr">23</xref>
] also provided an estimate of 2.2% for the within-subject standard deviation of Hbmass.</p>
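<p>Translating these meta-analytic figures into the prior mean vector b0 and precision matrix B0 requires further choices that are not fixed by the studies cited, in particular the prior standard deviation attached to each regression coefficient and the ordering of the coefficients in the design matrix. The following R sketch is therefore purely illustrative:</p>
<preformat>
## Informative prior for the scaled (relative) Hbmass response.
## Assumed coefficient order (depends on how the group factor is coded):
## intercept (Placebo), covariate X, IHE indicator, LHTL indicator.
b0 <- c(0, 0, 0, 0.026)               # prior means: 0% change except +2.6% for LHTL
prior_sd <- c(10, 10, 0.022, 0.022)   # assumed prior s.d.; 2.2% used for the treatment terms
B0 <- diag(1 / prior_sd^2)            # prior precision matrix (inverse prior variances)

## Then, for example (see the model fit sketched above):
## fit_informative <- MCMCregress(y ~ X + group, data = dat, b0 = b0, B0 = B0,
##                                c0 = 0.001, d0 = 0.001, burnin = 50000, mcmc = 150000)
</preformat>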
</sec>
</sec>
<sec sec-type="results" id="sec008">
<title>Results</title>
<p>The distribution of the covariate X (representing the % change in weekly training load from pre- to post-camp) within and among the three training regimens (Placebo, IHE, LHTL) is displayed in
<xref ref-type="fig" rid="pone.0147311.g001">Fig 1</xref>
. The plots show that there is non-negligible variation between individuals within a regimen with respect to this variable and substantive differences between the regimens. It is clear that adjustment needs to be made for X before evaluating the comparative impact of the three regimens. This is accommodated in the regression model described in
<xref ref-type="disp-formula" rid="pone.0147311.e028">Eq (11)</xref>
.</p>
<fig id="pone.0147311.g001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g001</object-id>
<label>Fig 1</label>
<caption>
<title>Exploratory analyses comprising stripcharts (left) and boxplots (right) for the covariate X in the three training regimens (Placebo, Intermittent Hypoxic Exposure (IHE), Live High Train Low (LHTL)), where X is a measure of the percent change in training load for each of the 23 individuals in the study.</title>
<p>(See text for details.).</p>
</caption>
<graphic xlink:href="pone.0147311.g001"></graphic>
</fig>
<p>Scatterplots of the unscaled differences given by
<xref ref-type="disp-formula" rid="pone.0147311.e031">Eq (13)</xref>
and scaled differences given by
<xref ref-type="disp-formula" rid="pone.0147311.e032">Eq (14)</xref>
are presented in Figs
<xref ref-type="fig" rid="pone.0147311.g002">2</xref>
<xref ref-type="fig" rid="pone.0147311.g004">4</xref>
. Based on these plots, there is no clear visual association between the three measurements under consideration in this case study (Hbmass, RunEcon and La-max), or between these measurements and the covariate.</p>
<fig id="pone.0147311.g002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g002</object-id>
<label>Fig 2</label>
<caption>
<title>Three-dimensional scatterplot of the three measurements, Hemoglobin Mass (Hbmass), Running Economy (RunEcon) and maximum blood lactate concentration (La-max), unscaled data (left) and scaled data (right).</title>
<p>Unscaled data are calculated as post
<sub>
<italic>i</italic>
</sub>
−pre
<sub>
<italic>i</italic>
</sub>
, and scaled data are calculated as (post
<sub>
<italic>i</italic>
</sub>
−pre
<sub>
<italic>i</italic>
</sub>
) / pre
<sub>
<italic>i</italic>
</sub>
.</p>
</caption>
<graphic xlink:href="pone.0147311.g002"></graphic>
</fig>
<fig id="pone.0147311.g003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g003</object-id>
<label>Fig 3</label>
<caption>
<title>Two-dimensional scatterplots of the three measurements of Hbmass, RunEcon and La-max, under the three regimens Placebo, Intermittent Hypoxic Exposure (IHE) and Live High Train Low (LHTL), unscaled data.</title>
</caption>
<graphic xlink:href="pone.0147311.g003"></graphic>
</fig>
<fig id="pone.0147311.g004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g004</object-id>
<label>Fig 4</label>
<caption>
<title>Two-dimensional scatterplots of the three measurements of Hbmass, RunEcon and La-max, under the three regimens Placebo, Intermittent Hypoxic Exposure (IHE) and Live High Train Low (LHTL), scaled data.</title>
</caption>
<graphic xlink:href="pone.0147311.g004"></graphic>
</fig>
<p>Plots of the posterior distributions of the differences between the training regimens, IHE vs Placebo, LHTL vs Placebo, LHTL vs IHE, given by
<italic>δ</italic>
<sub>
<italic>12</italic>
</sub>
,
<italic>δ</italic>
<sub>
<italic>13</italic>
</sub>
and
<italic>δ</italic>
<sub>
<italic>23</italic>
</sub>
, respectively, are shown in
<xref ref-type="fig" rid="pone.0147311.g005">Fig 5</xref>
. Corresponding posterior estimates of the effects (mean, s.d., 95% and 90% credible intervals) are given in
<xref ref-type="table" rid="pone.0147311.t001">Table 1</xref>
.</p>
<fig id="pone.0147311.g005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g005</object-id>
<label>Fig 5</label>
<caption>
<title>Posterior densities of the three measurements, Haemoglobin Mass, Running Economy and Running Maximum Lactate, comparing Live High Train Low (LHTL) vs Intermittent Hypoxic Exposure (IHE) (solid line), LHTL vs Placebo (dotted line) and IHE vs Placebo (dashed line), unscaled data.</title>
</caption>
<graphic xlink:href="pone.0147311.g005"></graphic>
</fig>
<table-wrap id="pone.0147311.t001" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.t001</object-id>
<label>Table 1</label>
<caption>
<title>Posterior estimates based on unscaled data.</title>
</caption>
<alternatives>
<graphic id="pone.0147311.t001g" xlink:href="pone.0147311.t001"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<tbody>
<tr>
<td align="left" colspan="5" rowspan="1">
<bold>Hbmass</bold>
</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Posterior parameter estimates (units of grams)</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">0.25</td>
<td align="left" rowspan="1" colspan="1">0.25</td>
<td align="center" rowspan="1" colspan="1">-0.25, 0.74</td>
<td align="center" rowspan="1" colspan="1">-0.17, 0.66</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-1.4</td>
<td align="left" rowspan="1" colspan="1">19.8</td>
<td align="center" rowspan="1" colspan="1">-40.5, 37.8</td>
<td align="center" rowspan="1" colspan="1">-33.8, 30.9</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">30.7</td>
<td align="left" rowspan="1" colspan="1">21.7</td>
<td align="center" rowspan="1" colspan="1">-12.4, 73.4</td>
<td align="center" rowspan="1" colspan="1">-4.9, 66.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">32.0</td>
<td align="left" rowspan="1" colspan="1">15.3</td>
<td align="center" rowspan="1" colspan="1">1.9, 62.2</td>
<td align="center" rowspan="1" colspan="1">7.04, 57.2</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Cohen’s d</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-0.07</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-2.1, 1.9</td>
<td align="center" rowspan="1" colspan="1">-1.7, 1.6</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">1.4</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-0.57, 3.4</td>
<td align="center" rowspan="1" colspan="1">-0.23, 3.0</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">2.1</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">0.12, 4.1</td>
<td align="center" rowspan="1" colspan="1">0.46, 3.7</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Prob. Cohen’s d < -0.2 or > 0.2</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parameter</td>
<td align="left" rowspan="1" colspan="1">Prob.
<italic>d</italic>
<-0.2</td>
<td align="left" colspan="3" rowspan="1">Prob.
<italic>d</italic>
>0.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.45</td>
<td align="left" colspan="3" rowspan="1">0.39</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.052</td>
<td align="left" colspan="3" rowspan="1">0.89</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.013</td>
<td align="left" colspan="3" rowspan="1">0.97</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<bold>RunEcon</bold>
</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Posterior parameter estimates (units of L/min)</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">0.00045</td>
<td align="left" rowspan="1" colspan="1">0.0010</td>
<td align="center" rowspan="1" colspan="1">-0.0016, 0.0025</td>
<td align="center" rowspan="1" colspan="1">-0.0012, 0.0021</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.039</td>
<td align="left" rowspan="1" colspan="1">0.079</td>
<td align="center" rowspan="1" colspan="1">-0.12, 0.20</td>
<td align="center" rowspan="1" colspan="1">-0.09, 0.17</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-0.080</td>
<td align="left" rowspan="1" colspan="1">0.090</td>
<td align="center" rowspan="1" colspan="1">-0.26, 0.097</td>
<td align="center" rowspan="1" colspan="1">-0.23, 0.065</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-0.12</td>
<td align="left" rowspan="1" colspan="1">0.064</td>
<td align="center" rowspan="1" colspan="1">-0.25, 0.0094</td>
<td align="center" rowspan="1" colspan="1">-0.22, -0.014</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Cohen’s d</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.50</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-1.5, 2.5</td>
<td align="center" rowspan="1" colspan="1">-1.1, 2.1</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-0.89</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-2.9, 1.1</td>
<td align="center" rowspan="1" colspan="1">-2.5, 0.73</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-1.85</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-3.8, 0.15</td>
<td align="center" rowspan="1" colspan="1">-3.5, -0.22</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Prob. Cohen’s d < -0.2 or > 0.2</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parameter</td>
<td align="left" rowspan="1" colspan="1">Prob.
<italic>d</italic>
<-0.2</td>
<td align="left" colspan="3" rowspan="1">Prob.
<italic>d</italic>
>0.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.23</td>
<td align="left" colspan="3" rowspan="1">0.62</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.77</td>
<td align="left" colspan="3" rowspan="1">0.13</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.95</td>
<td align="left" colspan="3" rowspan="1">0.023</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<bold>La-max</bold>
</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Posterior parameter estimates (units of mmol/L)</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">-0.018</td>
<td align="left" rowspan="1" colspan="1">0.015</td>
<td align="center" rowspan="1" colspan="1">-0.050, 0.013</td>
<td align="center" rowspan="1" colspan="1">-0.044, 0.0076</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-2.5</td>
<td align="left" rowspan="1" colspan="1">1.3</td>
<td align="center" rowspan="1" colspan="1">-5.0, -0.06</td>
<td align="center" rowspan="1" colspan="1">-4.6, -0.50</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-2.8</td>
<td align="left" rowspan="1" colspan="1">1.3</td>
<td align="center" rowspan="1" colspan="1">-5.6, -0.10</td>
<td align="center" rowspan="1" colspan="1">-5.1, -0.59</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-0.32</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-2.3, 1.66</td>
<td align="center" rowspan="1" colspan="1">-1.2, 1.3</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Cohen’s d</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-2.0</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-4.0, -0.054</td>
<td align="center" rowspan="1" colspan="1">-3.7, -0.40</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-2.1</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-4.0, -0.070</td>
<td align="center" rowspan="1" colspan="1">-3.7, -0.48</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-0.32</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-2.3, 1.7</td>
<td align="center" rowspan="1" colspan="1">-2.0, 1.3</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Prob. Cohen’s d < -0.2 or > 0.2</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parameter</td>
<td align="left" rowspan="1" colspan="1">Prob.
<italic>d</italic>
<-0.2</td>
<td align="left" colspan="3" rowspan="1">Prob.
<italic>d</italic>
>0.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.97</td>
<td align="left" colspan="3" rowspan="1">0.015</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.98</td>
<td align="left" colspan="3" rowspan="1">0.015</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.55</td>
<td align="left" colspan="3" rowspan="1">0.29</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>From
<xref ref-type="fig" rid="pone.0147311.g005">Fig 5</xref>
and
<xref ref-type="table" rid="pone.0147311.t001">Table 1</xref>
, it can be seen that for Hbmass and RunEcon, although there is a slight detrimental effect of IHE and a slight beneficial effect of LHTL compared with the Placebo, these are not substantive: a difference of 0 is reasonably well supported by the posterior distributions. However, this slight differential in response produces a clearer separation between IHE and LHTL: a difference of 0 appears to have less support in the posterior densities; the 90% credible interval does not include 0 and the posterior probability that Cohen’s
<italic>d</italic>
exceeds the SWC in magnitude is 0.96 and 0.93 for Hbmass and RunEcon respectively. These outcomes strongly indicate that LHTL is substantively better than IHE for both of these outcome measures.</p>
<p>In contrast, for La-max, both IHE and LHTL show a substantive beneficial effect compared with the Placebo, with the corresponding 95% (and hence 90%) credible intervals excluding 0 and a probability of 0.97 that Cohen’s
<italic>d</italic>
exceeds the SWC in magnitude. As a consequence, the difference between LHTL and IHE is attenuated for this outcome measure.</p>
<p>Posterior estimates of parameters of interest for the scaled (relative) measures are shown in
<xref ref-type="fig" rid="pone.0147311.g006">Fig 6</xref>
and
<xref ref-type="table" rid="pone.0147311.t002">Table 2</xref>
. The figures and table confirm the above results. Similar to the unscaled effects, there is no clear visual association between two of the measurements under consideration in this case study (Hbmass and RunEcon), or between these measurements and the covariate of change in weekly training load. However, there is a clear difference in the values of the covariate among individuals in the Placebo group compared with the two training regimens (LHTL and IHE). The two training regimens both appear to substantively improve La-max, even after accounting for training-induced changes in the individual athletes. The direct probabilistic comparisons with the SWC provide more complete information about these treatments based on these data.</p>
<fig id="pone.0147311.g006" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g006</object-id>
<label>Fig 6</label>
<caption>
<title>Posterior densities of the three measurements, Haemoglobin Mass, Running Economy and Running Maximum Lactate, comparing Live High Train Low (LHTL) vs Intermittent Hypoxic Exposure (IHE) (solid line), LHTL vs Placebo (dotted line) and IHE vs Placebo (dashed line), scaled data.</title>
</caption>
<graphic xlink:href="pone.0147311.g006"></graphic>
</fig>
<table-wrap id="pone.0147311.t002" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.t002</object-id>
<label>Table 2</label>
<caption>
<title>Posterior estimates based on scaled data.</title>
</caption>
<alternatives>
<graphic id="pone.0147311.t002g" xlink:href="pone.0147311.t002"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<tbody>
<tr>
<td align="left" colspan="5" rowspan="1">
<bold>Hbmass</bold>
</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Posterior parameter estimates (units of percent / 100)</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">0.00020</td>
<td align="left" rowspan="1" colspan="1">0.00029</td>
<td align="center" rowspan="1" colspan="1">-0.00038, 0.00077</td>
<td align="center" rowspan="1" colspan="1">-0.00028, 0.00067</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-0.0075</td>
<td align="left" rowspan="1" colspan="1">0.023</td>
<td align="center" rowspan="1" colspan="1">-0.053, 0.038</td>
<td align="center" rowspan="1" colspan="1">-0.045, 0.030</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.026</td>
<td align="left" rowspan="1" colspan="1">0.025</td>
<td align="center" rowspan="1" colspan="1">-0.023, 0.076</td>
<td align="center" rowspan="1" colspan="1">-0.015, 0.068</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.034</td>
<td align="left" rowspan="1" colspan="1">0.018</td>
<td align="center" rowspan="1" colspan="1">-0.0011, 0.069</td>
<td align="center" rowspan="1" colspan="1">0.0050, 0.063</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Cohen’s d</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-0.33</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-2.3, 1.7</td>
<td align="center" rowspan="1" colspan="1">-1.2, 1.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">1.1</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-0.93, 3.0</td>
<td align="center" rowspan="1" colspan="1">-0.60, 2.7</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">1.9</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-0.059, 3.9</td>
<td align="center" rowspan="1" colspan="1">0.28, 3.6</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Prob. Cohen’s d < -0.2 or > 0.2</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parameter</td>
<td align="left" rowspan="1" colspan="1">Prob.
<italic>d</italic>
<-0.2</td>
<td align="left" colspan="3" rowspan="1">Prob.
<italic>d</italic>
>0.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.55</td>
<td align="left" colspan="3" rowspan="1">0.29</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="left" colspan="3" rowspan="1">0.81</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.019</td>
<td align="left" colspan="3" rowspan="1">0.96</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<bold>RunEcon</bold>
</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Posterior parameter estimates (units of percent / 100)</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">0.00023</td>
<td align="left" rowspan="1" colspan="1">0.00031</td>
<td align="center" rowspan="1" colspan="1">-0.00038, 0.00083</td>
<td align="center" rowspan="1" colspan="1">-0.00027, 0.00072</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.015</td>
<td align="left" rowspan="1" colspan="1">0.023</td>
<td align="center" rowspan="1" colspan="1">-0.032, 0.061</td>
<td align="center" rowspan="1" colspan="1">-0.024, 0.053</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-0.016</td>
<td align="left" rowspan="1" colspan="1">0.027</td>
<td align="center" rowspan="1" colspan="1">-0.069, 0.037</td>
<td align="center" rowspan="1" colspan="1">-0.060, 0.027</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-0.031</td>
<td align="center" rowspan="1" colspan="1">0.019</td>
<td align="center" rowspan="1" colspan="1">-0.069, 0.0071</td>
<td align="center" rowspan="1" colspan="1">-0.062, 0.00016</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Cohen’s d</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.63</td>
<td align="left" rowspan="1" colspan="1">1.00</td>
<td align="center" rowspan="1" colspan="1">-1.4, 2.6</td>
<td align="center" rowspan="1" colspan="1">-1.0, 2.3</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-0.61</td>
<td align="left" rowspan="1" colspan="1">1.00</td>
<td align="center" rowspan="1" colspan="1">-2.6, 1.4</td>
<td align="center" rowspan="1" colspan="1">-2.2, 1.0</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-1.62</td>
<td align="left" rowspan="1" colspan="1">1.00</td>
<td align="center" rowspan="1" colspan="1">-3.6, 0.37</td>
<td align="center" rowspan="1" colspan="1">-3.3, 0.0085</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Prob. Cohen’s d < -0.2 or > 0.2</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parameter</td>
<td align="left" rowspan="1" colspan="1">Prob.
<italic>d</italic>
<-0.2</td>
<td align="left" colspan="3" rowspan="1">Prob.
<italic>d</italic>
>0.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.19</td>
<td align="left" colspan="3" rowspan="1">0.68</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.67</td>
<td align="left" colspan="3" rowspan="1">0.20</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.93</td>
<td align="left" colspan="3" rowspan="1">0.035</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<bold>La-max</bold>
</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Posterior parameter estimates (units of percent / 100)</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">-0.0019</td>
<td align="left" rowspan="1" colspan="1">0.0016</td>
<td align="center" rowspan="1" colspan="1">-0.0051, 0.0013</td>
<td align="center" rowspan="1" colspan="1">-0.0045, 0.00072</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-0.26</td>
<td align="left" rowspan="1" colspan="1">0.13</td>
<td align="center" rowspan="1" colspan="1">-0.51, -0.0094</td>
<td align="center" rowspan="1" colspan="1">-0.47, -0.054</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-0.29</td>
<td align="left" rowspan="1" colspan="1">0.14</td>
<td align="center" rowspan="1" colspan="1">-0.58, -0.014</td>
<td align="center" rowspan="1" colspan="1">-0.52, -0.065</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-0.034</td>
<td align="left" rowspan="1" colspan="1">0.10</td>
<td align="center" rowspan="1" colspan="1">-0.24, 0.17</td>
<td align="center" rowspan="1" colspan="1">-0.20, 0.13</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Cohen’s d</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Effect</td>
<td align="left" rowspan="1" colspan="1">Mean</td>
<td align="left" rowspan="1" colspan="1">s.d.</td>
<td align="center" rowspan="1" colspan="1">95% CI</td>
<td align="center" rowspan="1" colspan="1">90% CI</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-2.6</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-4.0, -0.074</td>
<td align="center" rowspan="1" colspan="1">-3.7, -0.43</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">-2.1</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-4.1, -0.10</td>
<td align="center" rowspan="1" colspan="1">-3.7, -0.46</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">-0.33</td>
<td align="left" rowspan="1" colspan="1">1.0</td>
<td align="center" rowspan="1" colspan="1">-2.3, 1.7</td>
<td align="center" rowspan="1" colspan="1">-2.0, 1.3</td>
</tr>
<tr>
<td align="left" colspan="5" rowspan="1">
<italic>Prob. Cohen’s d &lt; -0.2 or &gt; 0.2</italic>
</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Parameter</td>
<td align="left" rowspan="1" colspan="1">Prob.
<italic>d</italic>
<-0.2</td>
<td align="left" colspan="3" rowspan="1">Prob.
<italic>d</italic>
>0.2</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">0.97</td>
<td align="left" colspan="3" rowspan="1">0.014</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.97</td>
<td align="left" colspan="3" rowspan="1">0.014</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">LHTL-IHE</td>
<td align="left" rowspan="1" colspan="1">0.56</td>
<td align="left" colspan="3" rowspan="1">0.29</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>The posterior expected outcome of Hbmass for each individual under each regimen is illustrated in
<xref ref-type="fig" rid="pone.0147311.g007">Fig 7</xref>
, for the unscaled data. The boxplots indicate the distribution of possible outcomes, with the box corresponding to the middle 50% of values and the limits of the bars corresponding to the minimum and maximum values. The corresponding expected rank and associated interquartile range for the 23 individuals are reported in
<xref ref-type="table" rid="pone.0147311.t003">Table 3</xref>
. It is noted that the predictions and ranks are driven substantially by the covariate values in this model, with comparatively little influence from the effect of the training regimens. Hence
<xref ref-type="table" rid="pone.0147311.t003">Table 3</xref>
displays only a selection of results.</p>
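<p>As an illustration, expected ranks and interquartile ranges of this kind can be obtained from the MCMC output. The following is a minimal sketch in R (not the authors' code; the object pred is a hypothetical iterations-by-individuals matrix of posterior draws of each individual's expected outcome):</p>
<preformat>
rank_draws = t(apply(pred, 1, rank))              # rank the individuals within each posterior draw
expected_rank = colMeans(rank_draws)              # posterior mean rank for each individual
rank_iqr = apply(rank_draws, 2, quantile,
                 probs = c(0.25, 0.75))           # interquartile range of each individual's rank
</preformat>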
<fig id="pone.0147311.g007" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g007</object-id>
<label>Fig 7</label>
<caption>
<title>Boxplots of the posterior expected outcomes for Hbmass for each individual in the study, under each of the two training regimens Intermittent Hypoxic Exposure (left) and Live High Train Low (right).</title>
</caption>
<graphic xlink:href="pone.0147311.g007"></graphic>
</fig>
<table-wrap id="pone.0147311.t003" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.t003</object-id>
<label>Table 3</label>
<caption>
<title>Expected rank and associated interquartile range for the 23 individuals in the study.</title>
</caption>
<alternatives>
<graphic id="pone.0147311.t003g" xlink:href="pone.0147311.t003"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="center" rowspan="1" colspan="1">ID</th>
<th align="center" colspan="2" rowspan="1">Hbmass</th>
<th align="center" colspan="2" rowspan="1">RunEcon</th>
<th align="center" colspan="2" rowspan="1">La-max</th>
</tr>
<tr>
<th align="center" rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">Mean</th>
<th align="center" rowspan="1" colspan="1">IQR</th>
<th align="center" rowspan="1" colspan="1">Mean</th>
<th align="center" rowspan="1" colspan="1">IQR</th>
<th align="center" rowspan="1" colspan="1">Mean</th>
<th align="center" rowspan="1" colspan="1">IQR</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">3</td>
<td align="center" rowspan="1" colspan="1">3–19</td>
<td align="center" rowspan="1" colspan="1">3</td>
<td align="center" rowspan="1" colspan="1">3–19</td>
<td align="center" rowspan="1" colspan="1">19</td>
<td align="center" rowspan="1" colspan="1">3–19</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">3</td>
<td align="center" rowspan="1" colspan="1">10</td>
<td align="center" rowspan="1" colspan="1">10–12</td>
<td align="center" rowspan="1" colspan="1">10</td>
<td align="center" rowspan="1" colspan="1">10–12</td>
<td align="center" rowspan="1" colspan="1">12</td>
<td align="center" rowspan="1" colspan="1">10–12</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">4</td>
<td align="center" rowspan="1" colspan="1">5</td>
<td align="center" rowspan="1" colspan="1">5–17</td>
<td align="center" rowspan="1" colspan="1">5</td>
<td align="center" rowspan="1" colspan="1">5–17</td>
<td align="center" rowspan="1" colspan="1">17</td>
<td align="center" rowspan="1" colspan="1">5–17</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">5</td>
<td align="center" rowspan="1" colspan="1">12</td>
<td align="center" rowspan="1" colspan="1">10–12</td>
<td align="center" rowspan="1" colspan="1">12</td>
<td align="center" rowspan="1" colspan="1">10–12</td>
<td align="center" rowspan="1" colspan="1">10</td>
<td align="center" rowspan="1" colspan="1">10–12</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">15</td>
<td align="center" rowspan="1" colspan="1">7–15</td>
<td align="center" rowspan="1" colspan="1">15</td>
<td align="center" rowspan="1" colspan="1">7–15</td>
<td align="center" rowspan="1" colspan="1">7</td>
<td align="center" rowspan="1" colspan="1">7–15</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">7</td>
<td align="center" rowspan="1" colspan="1">11</td>
<td align="center" rowspan="1" colspan="1">11–11</td>
<td align="center" rowspan="1" colspan="1">11</td>
<td align="center" rowspan="1" colspan="1">11–11</td>
<td align="center" rowspan="1" colspan="1">11</td>
<td align="center" rowspan="1" colspan="1">11–11</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">8</td>
<td align="center" rowspan="1" colspan="1">8</td>
<td align="center" rowspan="1" colspan="1">8–14</td>
<td align="center" rowspan="1" colspan="1">8</td>
<td align="center" rowspan="1" colspan="1">8–14</td>
<td align="center" rowspan="1" colspan="1">14</td>
<td align="center" rowspan="1" colspan="1">8–14</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">9</td>
<td align="center" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">6–16</td>
<td align="center" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">6–16</td>
<td align="center" rowspan="1" colspan="1">16</td>
<td align="center" rowspan="1" colspan="1">6–16</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">10</td>
<td align="center" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">2–20</td>
<td align="center" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">2–20</td>
<td align="center" rowspan="1" colspan="1">20</td>
<td align="center" rowspan="1" colspan="1">2–20</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">11</td>
<td align="center" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">1–21</td>
<td align="center" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">1–21</td>
<td align="center" rowspan="1" colspan="1">21</td>
<td align="center" rowspan="1" colspan="1">1–21</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">12</td>
<td align="center" rowspan="1" colspan="1">17</td>
<td align="center" rowspan="1" colspan="1">5–17</td>
<td align="center" rowspan="1" colspan="1">17</td>
<td align="center" rowspan="1" colspan="1">5–17</td>
<td align="center" rowspan="1" colspan="1">5</td>
<td align="center" rowspan="1" colspan="1">5–17</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">13</td>
<td align="center" rowspan="1" colspan="1">4</td>
<td align="center" rowspan="1" colspan="1">4–18</td>
<td align="center" rowspan="1" colspan="1">4</td>
<td align="center" rowspan="1" colspan="1">4–18</td>
<td align="center" rowspan="1" colspan="1">18</td>
<td align="center" rowspan="1" colspan="1">4–18</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">14</td>
<td align="center" rowspan="1" colspan="1">7</td>
<td align="center" rowspan="1" colspan="1">7–15</td>
<td align="center" rowspan="1" colspan="1">7</td>
<td align="center" rowspan="1" colspan="1">7–15</td>
<td align="center" rowspan="1" colspan="1">15</td>
<td align="center" rowspan="1" colspan="1">7–15</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">15</td>
<td align="center" rowspan="1" colspan="1">9</td>
<td align="center" rowspan="1" colspan="1">9–13</td>
<td align="center" rowspan="1" colspan="1">9</td>
<td align="center" rowspan="1" colspan="1">9–13</td>
<td align="center" rowspan="1" colspan="1">13</td>
<td align="center" rowspan="1" colspan="1">9–13</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">16</td>
<td align="center" rowspan="1" colspan="1">18</td>
<td align="center" rowspan="1" colspan="1">4–18</td>
<td align="center" rowspan="1" colspan="1">18</td>
<td align="center" rowspan="1" colspan="1">4–18</td>
<td align="center" rowspan="1" colspan="1">4</td>
<td align="center" rowspan="1" colspan="1">4–18</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">17</td>
<td align="center" rowspan="1" colspan="1">21</td>
<td align="center" rowspan="1" colspan="1">1–21</td>
<td align="center" rowspan="1" colspan="1">21</td>
<td align="center" rowspan="1" colspan="1">1–21</td>
<td align="center" rowspan="1" colspan="1">1</td>
<td align="center" rowspan="1" colspan="1">1–21</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">18</td>
<td align="center" rowspan="1" colspan="1">13</td>
<td align="center" rowspan="1" colspan="1">9–13</td>
<td align="center" rowspan="1" colspan="1">13</td>
<td align="center" rowspan="1" colspan="1">9–13</td>
<td align="center" rowspan="1" colspan="1">9</td>
<td align="center" rowspan="1" colspan="1">9–13</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">19</td>
<td align="center" rowspan="1" colspan="1">16</td>
<td align="center" rowspan="1" colspan="1">6–16</td>
<td align="center" rowspan="1" colspan="1">16</td>
<td align="center" rowspan="1" colspan="1">6–16</td>
<td align="center" rowspan="1" colspan="1">6</td>
<td align="center" rowspan="1" colspan="1">6–16</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">20</td>
<td align="center" rowspan="1" colspan="1">14</td>
<td align="center" rowspan="1" colspan="1">8–14</td>
<td align="center" rowspan="1" colspan="1">14</td>
<td align="center" rowspan="1" colspan="1">8–14</td>
<td align="center" rowspan="1" colspan="1">8</td>
<td align="center" rowspan="1" colspan="1">8–14</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">21</td>
<td align="center" rowspan="1" colspan="1">20</td>
<td align="center" rowspan="1" colspan="1">2–20</td>
<td align="center" rowspan="1" colspan="1">20</td>
<td align="center" rowspan="1" colspan="1">2–20</td>
<td align="center" rowspan="1" colspan="1">2</td>
<td align="center" rowspan="1" colspan="1">2–20</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">22</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
<td align="center" rowspan="1" colspan="1">NA</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">23</td>
<td align="center" rowspan="1" colspan="1">19</td>
<td align="center" rowspan="1" colspan="1">3–19</td>
<td align="center" rowspan="1" colspan="1">19</td>
<td align="center" rowspan="1" colspan="1">3–19</td>
<td align="center" rowspan="1" colspan="1">3</td>
<td align="center" rowspan="1" colspan="1">3–19</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>A comparison of two of the primary outcome measures, Hbmass and RunEcon, based on the Bayesian and magnitude-based inference approaches is presented in
<xref ref-type="table" rid="pone.0147311.t004">Table 4</xref>
. Note that the two sets of results differ slightly not only because of differences in analytic method, but also because of differences in modelling. For example, the magnitude-based inferences are based on a log-transformed response forecast to a covariate value (a 44% increase in weekly training load), with covariate adjustment undertaken within each treatment group; in contrast, the Bayesian inferences are based on the unadjusted and relative responses forecast to the mean covariate value, with adjustment undertaken using all of the data because of the small sample size. Furthermore, as described above, the denominator of the standardized values is not computed from asymptotic approximations in the Bayesian analysis, which makes a difference for small samples. Notwithstanding these differences, the overall conclusions are similar for the two sets of analyses. For Hbmass, the Bayesian analysis indicated a substantially higher increase for LHTL with both unscaled and scaled data, whereas the magnitude-based analysis indicated a possibly higher increase for LHTL with unscaled data and a likely higher increase with scaled data. The outcomes for RunEcon were also comparable between the analytical approaches: the Bayesian analysis indicated a substantial improvement (lower oxygen cost) with both unscaled and scaled data, while the magnitude-based analysis indicated a possibly lower oxygen cost in both cases.</p>
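<p>Quantities of this kind can be computed directly from the posterior samples. The following minimal sketch in R (hypothetical object names; not the authors' code) standardizes draw by draw, which avoids the asymptotic approximation of the denominator mentioned above. Here delta denotes posterior draws of the LHTL-IHE difference in mean response and sigma the matching draws of the residual standard deviation:</p>
<preformat>
d_draws = delta / sigma                           # Cohen's d for each posterior draw
c(mean = mean(d_draws), sd = sd(d_draws))         # posterior mean and s.d. of d
quantile(d_draws, probs = c(0.05, 0.95))          # 90% credible interval for d
mean(abs(d_draws) > 0.2)                          # posterior probability of a substantial effect
</preformat>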
<table-wrap id="pone.0147311.t004" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.t004</object-id>
<label>Table 4</label>
<caption>
<title>Analysis of pre- to post-training measurements for LHTL vs IHE–outcomes for Bayesian and Magnitude-based Inferences for both unscaled and scaled data.</title>
<p>SD = standard deviation, CL = confidence limits, CI = credible interval.</p>
</caption>
<alternatives>
<graphic id="pone.0147311.t004g" xlink:href="pone.0147311.t004"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="left" rowspan="1" colspan="1">Analysis</th>
<th align="left" rowspan="1" colspan="1">Measure</th>
<th align="center" rowspan="1" colspan="1">Hemoglobin Mass</th>
<th align="center" rowspan="1" colspan="1">Running Economy</th>
</tr>
<tr>
<th align="left" rowspan="1" colspan="1"></th>
<th align="left" rowspan="1" colspan="1"></th>
<th align="center" rowspan="1" colspan="1">(g)</th>
<th align="center" rowspan="1" colspan="1">(L.min
<sup>-1</sup>
)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" rowspan="1" colspan="1">Bayesian Unscaled</td>
<td align="left" rowspan="1" colspan="1">Mean ± SD</td>
<td align="center" rowspan="1" colspan="1">21 ± 17</td>
<td align="center" rowspan="1" colspan="1">-0.17 ± 0.052</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">90% CI</td>
<td align="center" rowspan="1" colspan="1">-6, 48</td>
<td align="center" rowspan="1" colspan="1">-0.25, -0.08</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Cohen’s d; 90% CI</td>
<td align="center" rowspan="1" colspan="1">1.26; -0.37, 2.90</td>
<td align="center" rowspan="1" colspan="1">-3.20; -4.84, -1.57</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Probability |d|>0.2</td>
<td align="center" rowspan="1" colspan="1">0.931</td>
<td align="center" rowspan="1" colspan="1">0.998</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Qualitative inference</td>
<td align="center" rowspan="1" colspan="1">Higher</td>
<td align="center" rowspan="1" colspan="1">Lower</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Magnitude-based Inference</td>
<td align="left" rowspan="1" colspan="1">Mean; 90% CL</td>
<td align="center" rowspan="1" colspan="1">36; -5, 78</td>
<td align="center" rowspan="1" colspan="1">-0.13; -0.22, 0.04</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Cohen’s d; 90% CL</td>
<td align="center" rowspan="1" colspan="1">0.18; -0.02, 0.39</td>
<td align="center" rowspan="1" colspan="1">-0.20; -0.34, -0.07</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Qualitative inference</td>
<td align="center" rowspan="1" colspan="1">Possibly Higher</td>
<td align="center" rowspan="1" colspan="1">Possibly Lower</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Bayesian Scaled</td>
<td align="left" rowspan="1" colspan="1">Mean ± SD (% / 100)</td>
<td align="center" rowspan="1" colspan="1">0.023 ± 0.019</td>
<td align="center" rowspan="1" colspan="1">-0.042 ± 0.017</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">90% CI</td>
<td align="center" rowspan="1" colspan="1">-0.008, 0.054</td>
<td align="center" rowspan="1" colspan="1">-0.069, -0.015</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Cohen’s d; 90% CI</td>
<td align="center" rowspan="1" colspan="1">1.21; -0.42, 2.85</td>
<td align="center" rowspan="1" colspan="1">-2.51; -4.14, -0.88</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Probabilty |d|>0.2</td>
<td align="center" rowspan="1" colspan="1">0.926</td>
<td align="center" rowspan="1" colspan="1">0.993</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Qualitative inference</td>
<td align="center" rowspan="1" colspan="1">Higher</td>
<td align="center" rowspan="1" colspan="1">Lower</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1">Magnitude-based Inference</td>
<td align="left" rowspan="1" colspan="1">Smallest worthwhile difference (% / 100)</td>
<td align="center" rowspan="1" colspan="1">0.016</td>
<td align="center" rowspan="1" colspan="1">0.019</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Difference ± SD</td>
<td align="center" rowspan="1" colspan="1">0.047 ± 0.035</td>
<td align="center" rowspan="1" colspan="1">-0.028 ± 0.044</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Cohen’s d; 90% CL</td>
<td align="center" rowspan="1" colspan="1">0.20; 0.05, 0.35</td>
<td align="center" rowspan="1" colspan="1">-0.14; -0.34, 0.07</td>
</tr>
<tr>
<td align="left" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">Qualitative inference</td>
<td align="center" rowspan="1" colspan="1">Likely Higher</td>
<td align="center" rowspan="1" colspan="1">Possibly Lower</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
<p>Comparison of the expected values of Hbmass and La-max under each of the training regimens is further illustrated in
<xref ref-type="fig" rid="pone.0147311.g008">Fig 8</xref>
. The diagonal line indicates no treatment effect. The cloud of points represents the values obtained from the MCMC simulations in the Bayesian analysis. Displacement of the cloud from the line indicates that there is an expected improvement or decline in the outcome measure associated with the respective treatment, and shows the range of values over which this effect is anticipated.</p>
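<p>A display of this kind can be produced directly from the MCMC output. The following minimal sketch in R assumes hypothetical vectors mu_ihe and mu_lhtl holding posterior draws of the expected outcome under each regimen from the same iterations:</p>
<preformat>
plot(mu_ihe, mu_lhtl, pch = ".",
     xlab = "Expected outcome under IHE",
     ylab = "Expected outcome under LHTL")
abline(0, 1)                                      # diagonal line of no treatment effect
</preformat>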
<fig id="pone.0147311.g008" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.g008</object-id>
<label>Fig 8</label>
<caption>
<title>Comparison of the posterior distributions of the expected measurements of Hbmass (left) and La-max (right) under each of the training regimens Intermittent Hypoxic Exposure (IHE) and Live High Train Low (LHTL), unscaled data.</title>
</caption>
<graphic xlink:href="pone.0147311.g008"></graphic>
</fig>
<p>The alternative priors that were motivated by the available external information are shown in
<xref ref-type="table" rid="pone.0147311.t005">Table 5</xref>
. The consequent changes in the posterior parameter estimates arising from incorporating these priors in the model are also shown in this table. Although the parameter estimates change slightly, the inferences reported above are generally robust to relatively small changes in the priors. However, the posterior estimates start to differ in a natural manner as the priors become more informative with respect to either the mean or the variance. It is also noted that, reassuringly, the original (vague prior) setting yielded a posterior estimate of a relative increase of 2.6% in hemoglobin mass under the LHTL regimen, which matches the value anticipated from the (independent) prior information.</p>
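<p>As a minimal sketch (not the authors' code), prior settings of this kind can be supplied to, for example, the MCMCregress function in MCMCpack [
<xref rid="pone.0147311.ref022" ref-type="bibr">22</xref>
], which uses a (b0, B0, c0, d0) parameterisation matching the caption of
<xref ref-type="table" rid="pone.0147311.t005">Table 5</xref>
; the formula and the data frame dat, with columns y, x, ihe and lhtl, are hypothetical placeholders:</p>
<preformat>
library(MCMCpack)
fit_a = MCMCregress(y ~ x + ihe + lhtl, data = dat,   # setting (a): vague baseline priors
                    b0 = c(0, 0, 0, 0), B0 = 0,
                    c0 = 1e-4, d0 = 1e-4)
fit_c = MCMCregress(y ~ x + ihe + lhtl, data = dat,   # setting (c): informative prior means and precisions
                    b0 = c(0, 0, 0, 2.6), B0 = diag(c(5, 5, 5, 5)),
                    c0 = 1e-4, d0 = 1e-4)
summary(fit_a)                                        # compare posterior means and s.d.
summary(fit_c)                                        # across the two prior settings
</preformat>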
<table-wrap id="pone.0147311.t005" orientation="portrait" position="float">
<object-id pub-id-type="doi">10.1371/journal.pone.0147311.t005</object-id>
<label>Table 5</label>
<caption>
<title>Configurations of hyperparameter values for informative priors in the Bayesian model [Eqs (
<xref ref-type="disp-formula" rid="pone.0147311.e028">11</xref>
and
<xref ref-type="disp-formula" rid="pone.0147311.e030">12</xref>
)].</title>
<p>Here
<italic>b</italic>
<sub>
<italic>0</italic>
</sub>
and B
<sub>0</sub>
denote respectively the prior mean vector and precision matrix for the regression coefficients, and
<italic>c</italic>
<sub>
<italic>0</italic>
</sub>
<italic>/2</italic>
and
<italic>d</italic>
<sub>
<italic>0</italic>
</sub>
<italic>/2</italic>
denote respectively the shape and scale parameters of the inverse Gamma prior on
<italic>σ</italic>
<sup>
<italic>2</italic>
</sup>
(the variance of the residuals). These latter two parameters can be interpreted respectively as the amount of information and the sum of squared errors contributed by c
<sub>0</sub>
pseudo-observations [
<xref rid="pone.0147311.ref016" ref-type="bibr">16</xref>
]. Note that (a) depicts the baseline uninformative priors used in the primary analyses, whereas (b) to (h) illustrate seven alternate priors.</p>
</caption>
<alternatives>
<graphic id="pone.0147311.t005g" xlink:href="pone.0147311.t005"></graphic>
<table frame="hsides" rules="groups">
<colgroup span="1">
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
<col align="left" valign="middle" span="1"></col>
</colgroup>
<thead>
<tr>
<th align="center" rowspan="1" colspan="1">Setting</th>
<th align="center" rowspan="1" colspan="1">(a)</th>
<th align="center" rowspan="1" colspan="1">(b)</th>
<th align="center" rowspan="1" colspan="1">(c)</th>
<th align="center" rowspan="1" colspan="1">(d)</th>
<th align="center" rowspan="1" colspan="1">(e)</th>
<th align="center" rowspan="1" colspan="1">(f)</th>
<th align="center" rowspan="1" colspan="1">(g)</th>
<th align="center" rowspan="1" colspan="1">(h)</th>
</tr>
</thead>
<tbody>
<tr>
<td align="center" rowspan="1" colspan="1">
<italic>b</italic>
<sub>
<italic>0</italic>
</sub>
</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,2.6)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,2.6)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,2.6)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,2.6)</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">diag(B
<sub>0</sub>
)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,.2,.2)</td>
<td align="left" rowspan="1" colspan="1">(5,5,5,5)</td>
<td align="left" rowspan="1" colspan="1">(0,0,5,5)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
<td align="left" rowspan="1" colspan="1">(0,0,5,5)</td>
<td align="left" rowspan="1" colspan="1">(0,0,0,0)</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">
<italic>c</italic>
<sub>
<italic>0</italic>
</sub>
</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">20</td>
<td align="left" rowspan="1" colspan="1">20</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">
<italic>d</italic>
<sub>
<italic>0</italic>
</sub>
</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">0.0001</td>
<td align="left" rowspan="1" colspan="1">100</td>
<td align="left" rowspan="1" colspan="1">5</td>
<td align="left" rowspan="1" colspan="1">100</td>
<td align="left" rowspan="1" colspan="1">100</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">Int.</td>
<td align="left" rowspan="1" colspan="1">0.00047</td>
<td align="left" rowspan="1" colspan="1">0.0044</td>
<td align="left" rowspan="1" colspan="1">-0.0027</td>
<td align="left" rowspan="1" colspan="1">-0.0028</td>
<td align="left" rowspan="1" colspan="1">0.011</td>
<td align="left" rowspan="1" colspan="1">0.0060</td>
<td align="left" rowspan="1" colspan="1">-0.64</td>
<td align="left" rowspan="1" colspan="1">0.0060</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">(0.026)</td>
<td align="left" rowspan="1" colspan="1">(0.026)</td>
<td align="left" rowspan="1" colspan="1">(0.026)</td>
<td align="left" rowspan="1" colspan="1">(0.026)</td>
<td align="left" rowspan="1" colspan="1">(1.4)</td>
<td align="left" rowspan="1" colspan="1">(0.30)</td>
<td align="left" rowspan="1" colspan="1">(0.28)</td>
<td align="left" rowspan="1" colspan="1">(0.30)</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">X</td>
<td align="left" rowspan="1" colspan="1">0.00020</td>
<td align="left" rowspan="1" colspan="1">0.00020</td>
<td align="left" rowspan="1" colspan="1">0.00026</td>
<td align="left" rowspan="1" colspan="1">0.00026</td>
<td align="left" rowspan="1" colspan="1">0.00018</td>
<td align="left" rowspan="1" colspan="1">0.00019</td>
<td align="left" rowspan="1" colspan="1">0.0062</td>
<td align="left" rowspan="1" colspan="1">0.00019</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">(0.00029)</td>
<td align="left" rowspan="1" colspan="1">(0.00029)</td>
<td align="left" rowspan="1" colspan="1">(0.00029)</td>
<td align="left" rowspan="1" colspan="1">(0.00029)</td>
<td align="left" rowspan="1" colspan="1">(0.015)</td>
<td align="left" rowspan="1" colspan="1">(0.0034)</td>
<td align="left" rowspan="1" colspan="1">(0.0033)</td>
<td align="left" rowspan="1" colspan="1">(0.0034)</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">IHE</td>
<td align="left" rowspan="1" colspan="1">-0.0075</td>
<td align="left" rowspan="1" colspan="1">-0.0073</td>
<td align="left" rowspan="1" colspan="1">-0.0021</td>
<td align="left" rowspan="1" colspan="1">-0.0021</td>
<td align="left" rowspan="1" colspan="1">-0.017</td>
<td align="left" rowspan="1" colspan="1">-0.0096</td>
<td align="left" rowspan="1" colspan="1">0.41</td>
<td align="left" rowspan="1" colspan="1">-0.0096</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">(0.023)</td>
<td align="left" rowspan="1" colspan="1">(0.023)</td>
<td align="left" rowspan="1" colspan="1">(0.023)</td>
<td align="left" rowspan="1" colspan="1">(0.023)</td>
<td align="left" rowspan="1" colspan="1">(1.2)</td>
<td align="left" rowspan="1" colspan="1">(0.27)</td>
<td align="left" rowspan="1" colspan="1">(0.23)</td>
<td align="left" rowspan="1" colspan="1">(0.27)</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">LHTL</td>
<td align="left" rowspan="1" colspan="1">0.026</td>
<td align="left" rowspan="1" colspan="1">0.027</td>
<td align="left" rowspan="1" colspan="1">0.035</td>
<td align="left" rowspan="1" colspan="1">0.035</td>
<td align="left" rowspan="1" colspan="1">0.024</td>
<td align="left" rowspan="1" colspan="1">0.026</td>
<td align="left" rowspan="1" colspan="1">0.79</td>
<td align="left" rowspan="1" colspan="1">0.26</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">(0.025)</td>
<td align="left" rowspan="1" colspan="1">(0.025)</td>
<td align="left" rowspan="1" colspan="1">(0.025)</td>
<td align="left" rowspan="1" colspan="1">(0.025)</td>
<td align="left" rowspan="1" colspan="1">(1.3)</td>
<td align="left" rowspan="1" colspan="1">(0.29)</td>
<td align="left" rowspan="1" colspan="1">(0.27)</td>
<td align="left" rowspan="1" colspan="1">(0.29)</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1">σ
<sup>2</sup>
</td>
<td align="left" rowspan="1" colspan="1">0.0011</td>
<td align="left" rowspan="1" colspan="1">0.0011</td>
<td align="left" rowspan="1" colspan="1">0.0011</td>
<td align="left" rowspan="1" colspan="1">0.0011</td>
<td align="left" rowspan="1" colspan="1">2.9</td>
<td align="left" rowspan="1" colspan="1">0.14</td>
<td align="left" rowspan="1" colspan="1">0.17</td>
<td align="left" rowspan="1" colspan="1">0.14</td>
</tr>
<tr>
<td align="center" rowspan="1" colspan="1"></td>
<td align="left" rowspan="1" colspan="1">(0.00042)</td>
<td align="left" rowspan="1" colspan="1">(0.00042)</td>
<td align="left" rowspan="1" colspan="1">(0.00043)</td>
<td align="left" rowspan="1" colspan="1">(0.00043)</td>
<td align="left" rowspan="1" colspan="1">(0.71)</td>
<td align="left" rowspan="1" colspan="1">(0.035)</td>
<td align="left" rowspan="1" colspan="1">(0.047)</td>
<td align="left" rowspan="1" colspan="1">(0.045)</td>
</tr>
</tbody>
</table>
</alternatives>
</table-wrap>
</sec>
<sec sec-type="conclusions" id="sec009">
<title>Discussion</title>
<p>In 2008, Barker and Schofield [
<xref rid="pone.0147311.ref007" ref-type="bibr">7</xref>
] suggested that “to correctly adopt the type of inference advocated by Batterham and Hopkins [
<xref rid="pone.0147311.ref006" ref-type="bibr">6</xref>
], sport scientists need to use fully Bayesian methods of analysis”. They also noted that most sport scientists are not trained in Bayesian methods, likely because this approach has only become commonplace as a statistical technique in approximately the last 20 years. To help make the Bayesian approach more accessible for those working in exercise science and sports medicine, we have provided here both a worked example (using statistical software) and a description of the underlying models. We hope that this template will encourage those who deal with small samples and small effects to explore the full Bayesian method, which is well suited to the analysis of small samples. Other supporting information, where available, can be represented via the prior and hence formally and transparently incorporated with the data. In the absence of such information, the uncertainty induced by small samples is properly reflected in the posterior estimates and inferences. In both situations, the analytical decision-making is enhanced, in support of the ultimate practical/clinical decision-making undertaken by sports practitioners.</p>
<sec id="sec010">
<title>Case study re-interpreted with Bayesian inferences</title>
<p>An experimental study by Humberstone-Gough and colleagues reported changes (mean ± 90% confidence interval) in Hbmass of -1.4 ± 4.5% for IHE compared with Placebo and 3.2 ± 4.8% for LHTL compared with Placebo [
<xref rid="pone.0147311.ref009" ref-type="bibr">9</xref>
]. For RunEcon the authors reported ‘no beneficial changes’ for IHE compared with Placebo, and a change of 2.8 ± 4.4% for LHTL compared with Placebo. Although the analyses were undertaken using different outcome measures and a slightly different analytical model, the conclusions based on the posterior estimates and probabilities obtained from the Bayesian analysis reported above are broadly consistent with those reported by Humberstone-Gough
<italic>et al</italic>
. Importantly, the Bayesian approach allows a much more direct probabilistic interpretation of credible intervals and posterior probabilities; for example, the probability that the mean change in Hbmass after LHTL compared with the change after IHE is greater than the smallest worthwhile change (0.2) is 0.96.</p>
<p>Cohen’s effect size magnitudes are well established [
<xref rid="pone.0147311.ref011" ref-type="bibr">11</xref>
] but the selection of a small effect (
<italic>d = 0</italic>
.
<italic>2</italic>
) as the threshold value for a worthwhile change or difference has been questioned. In the sporting context, worthwhile changes in competition performance, which can alter medal rankings, have been derived [
<xref rid="pone.0147311.ref025" ref-type="bibr">25</xref>
] as approximately 0.3 times the within-subject standard deviation [
<xref rid="pone.0147311.ref026" ref-type="bibr">26</xref>
,
<xref rid="pone.0147311.ref027" ref-type="bibr">27</xref>
], or ~0.3–1% of performance time in a range of sports [
<xref rid="pone.0147311.ref028" ref-type="bibr">28</xref>
<xref rid="pone.0147311.ref030" ref-type="bibr">30</xref>
]. Empirical evidence confirms that small effects (on competitive performance) are worthwhile for elite athletes and of practical relevance for coaches and scientists attempting to understand the likely benefit or harm of training regimen, lifestyle intervention or change in technique. The full Bayesian approach provides a robust and acceptable method of estimating the likelihood of a small effect. For instance, in the Humberstone-Gough et al. case study Hbmass increased ~21 g (or by 2.3%) more in LHTL vs IHE. Given that every gram of hemoglobin can carry ~4 mL O
<sub>2</sub>
[
<xref rid="pone.0147311.ref031" ref-type="bibr">31</xref>
], it is reasonable to infer that this small increase in Hbmass is likely beneficial to overall oxygen transport capacity. The corresponding 95% credible interval for this comparison of absolute change ranged from -11.8 to +53.8 g, but on balance the probability is >0.8 that the true increase in Hbmass is substantial (worthwhile), which should be sufficient encouragement for most scientists and coaches to utilize altitude training to increase Hbmass–a position also supported by a meta-analysis of Hbmass and altitude training [
<xref rid="pone.0147311.ref023" ref-type="bibr">23</xref>
]. Likewise in the Humberstone-Gough et al. case study RunEcon improved (was lower) by ~0.17 L.min
<sup>-1</sup>
(or lower by 4.2%) more in LHTL vs IHE. The associated 95% credible interval for this comparison of relative change ranged from -7.5% to -0.9%, with a probability of ~0.99 that the true decrease in submaximal oxygen consumption is substantial (worthwhile). Although contentious [
<xref rid="pone.0147311.ref032" ref-type="bibr">32</xref>
], an improved running economy after altitude training is advantageous to distance running performance because it reduces the utilization of oxygen at any given steady-state running speed [
<xref rid="pone.0147311.ref033" ref-type="bibr">33</xref>
,
<xref rid="pone.0147311.ref034" ref-type="bibr">34</xref>
].</p>
</sec>
<sec id="sec011">
<title>Limitations of quasi-Bayesian approaches</title>
<p>Batterham and Hopkins (2006) have challenged the frequentist approach as being too conservative, and provided a useful, if somewhat unconventional, framework for interpreting small effects. The so-called magnitude-based approach emerging in sports science [
<xref rid="pone.0147311.ref018" ref-type="bibr">18</xref>
,
<xref rid="pone.0147311.ref026" ref-type="bibr">26</xref>
,
<xref rid="pone.0147311.ref035" ref-type="bibr">35</xref>
] is based on defining and justifying clinically, practically or mechanistically meaningful values of an effect. Confidence intervals are then used to interpret uncertainty in the effect in relation to these reference or threshold values. Much discussion has centred on the legitimacy of using vague priors in the magnitude-based approach and whether prior knowledge is actually useful in all cases [
<xref rid="pone.0147311.ref036" ref-type="bibr">36</xref>
]. There are inferential limitations to their approach [
<xref rid="pone.0147311.ref007" ref-type="bibr">7</xref>
,
<xref rid="pone.0147311.ref008" ref-type="bibr">8</xref>
] which can be circumvented by using the full Bayesian approach that we have elaborated here.</p>
<p>A major criticism of the approach suggested by Batterham and Hopkins (2006) is that, contrary to the authors’ claims, their method is not (even approximately) Bayesian and that a Bayesian formulation of their approach would indeed make prior assumptions about the distribution of the true parameter values. Barker and Schofield (2008) suggest that the underlying prior distribution would be uniform, which makes a clear assumption about the parameter values (that any parameter value in the defined range is equally likely) and which can be influenced by transformations of the parameter. As demonstrated in our paper, a Bayesian formulation of the problem considered by Batterham and Hopkins (2006) can quite easily be constructed, using a reference prior which is arguably vague (often referred to as the Jeffreys prior [
<xref rid="pone.0147311.ref010" ref-type="bibr">10</xref>
]). Moreover, there are clear and natural links between the frequentist distributions based on sampling theory and the Bayesian posterior distributions under these prior assumptions. The use of the reference prior for the estimation and comparison problem considered in this paper is well-founded, theoretically sound and very commonly employed [
<xref rid="pone.0147311.ref010" ref-type="bibr">10</xref>
]. As discussed in the Methods section, however, other priors can also be considered, particularly if there is other information available to complement the analysis.</p>
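<p>For the normal linear model considered here, this reference prior is commonly written as p(β, σ<sup>2</sup>) ∝ 1/σ<sup>2</sup>, i.e., uniform on the regression coefficients and on log σ [
<xref rid="pone.0147311.ref010" ref-type="bibr">10</xref>
]; the vague hyperparameter settings in column (a) of
<xref ref-type="table" rid="pone.0147311.t005">Table 5</xref>
approximate this form.</p>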
<p>Another criticism levelled at Batterham and Hopkins (2006) by Barker and Schofield (2008) concerns their choice and use of an expanded set of categories: the non-standard choice of the thresholds used to define the categories, the use of different thresholds for different problems (e.g., sometimes 0.025 and 0.975 instead of 0.05 and 0.95), and the descriptors used to label the categories, namely ‘almost certainly not’, …, ‘almost certainly’. However, while the expanded set of categories proposed by Batterham and Hopkins is not ‘standard’ in classical statistics, this does not mean that it is wrong, misleading or not useful. Indeed, such categorizations can be very useful
<italic>if</italic>
they are clearly justified, interpreted properly and provide additional decision support for clinical (or, in this case, sporting) interventions. Even in ‘traditional’ statistics, some statisticians suggest that a p-value less than 0.10 indicates ‘substantive’ evidence against the null hypothesis, while other statisticians would not counsel this. Similarly, although a p-value of 0.05 is almost overwhelmingly taken as the ‘significance level’, many statisticians strongly advise against its unconsidered use and suggest that other levels (such as 0.01 or 0.10) may be more appropriate for certain problems and desired inferences. A number of commentators in the sports science field have made similar observations [
<xref rid="pone.0147311.ref001" ref-type="bibr">1</xref>
,
<xref rid="pone.0147311.ref037" ref-type="bibr">37</xref>
,
<xref rid="pone.0147311.ref038" ref-type="bibr">38</xref>
]. The overwhelming advice is that the probabilities obtained as a result of statistical analysis must be useful in providing decision support for the problem at hand, and different probabilities can indeed be used if they are well justified, transparently reported and correctly interpreted.</p>
<p>The technical interpretation of a (frequentist) confidence interval is poorly understood by many practitioners. This has led, and will continue to lead, to clumsy statements about the inferences that can be made on its basis. In contrast, an analogous Bayesian interval is directly interpretable: for example, a 95% credible interval indicates that the true parameter lies within this interval with an estimated probability of 0.95. Moreover, the analysis can be used to obtain other decision support statements such as a set of meaningful probabilities; for example, as demonstrated in the case study, one can obtain the probability that a particular parameter exceeds an objectively derived threshold of clinical/practical/sporting interest. Of course, the particular decisions that are made on the basis of these probabilities remain the prerogative of the decision-maker. For example, the outcome of an intervention to improve athletic performance (e.g. a new experimental therapeutic treatment) may be classified as ‘possible’ in some cases (acceptable probability of improving performance, with minimal adverse effects, low cost, ready availability, and legality in terms of anti-doping regulations), and hence lead to a decision to adopt it, whereas in another context it may be deemed too risky (unacceptable risk of impairing performance, adverse effects on health and well-being, high cost and limited availability, and some uncertainty in meeting anti-doping regulations) and lead to no action. In practice, these decisions may not coincide with the traditional statement of a statistically significant effect at a 5% level [
<xref rid="pone.0147311.ref036" ref-type="bibr">36</xref>
]. In both cases, however, the decisions are enhanced by the richer probabilistic and inferential capability afforded by the Bayesian analysis.</p>
<p>In the context of small samples such as those encountered in this study, it is important to understand the nature and implications of the statistical assumptions underlying the adopted models and inferences. For example, in a standard linear regression model a common assumption is that the residuals (the differences between the observed and predicted values) are normally distributed. Note that this only applies to the residuals, not the explanatory or response variables. This assumption was also adopted in the model and analysis presented in this paper. There is a rich literature about the appropriateness of this assumption for small sample sizes. Importantly, if the residuals are indeed normally distributed then the regression estimates will possess all three desirable statistical characteristics of unbiasedness, consistency, and efficiency among all unbiased estimators; however, even if they are not normally distributed they will still be unbiased (accurate) and consistent (improve with increasing sample size) but will only be most efficient (i.e. have smallest variance) among a smaller class of (linear unbiased) estimators [
<xref rid="pone.0147311.ref039" ref-type="bibr">39</xref>
]. The most obvious implication of non-normal residuals is that the inferences may not be as sharp, but by virtue of the central limit theorem the sampling distribution of the coefficients will approach a normal distribution as the sample size increases, under mild conditions. In our study, this was aided by employing a single residual variance across all groups, which effectively increased the sample size available for estimating it. Feasible alternatives would have been to allow different residual variances for each group or to employ a robust regression approach, for example using a
<italic>t</italic>
distribution for the errors. It is also noted that the Bayesian estimates avoid some of these inferential concerns, since the credible intervals and probabilistic rankings are obtained from the MCMC samples, i.e., from the posterior distributions themselves, rather than relying on the stronger asymptotic assumptions required for frequentist inferences.</p>
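<p>As a minimal sketch (not the authors' code) of a simple, if cruder, variant of the first alternative, the regression could be fitted separately within each training group, giving each group its own residual variance (and its own coefficients); the data frame dat and its columns y, x and group are hypothetical placeholders:</p>
<preformat>
library(MCMCpack)
fits = lapply(split(dat, dat$group), function(d)
  MCMCregress(y ~ x, data = d, b0 = 0, B0 = 0, c0 = 1e-4, d0 = 1e-4))
lapply(fits, function(f) summary(f[, "sigma2"]))      # posterior residual variance within each group
</preformat>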
<p>Another topical issue that has substantive implications for small sample analysis is reproducibility [
<xref rid="pone.0147311.ref040" ref-type="bibr">40</xref>
]. Indeed, the very measure of reproducibility faces challenges similar to those reported here, and a Bayesian approach is arguably preferable to measures based on p-values or confidence intervals [
<xref rid="pone.0147311.ref041" ref-type="bibr">41</xref>
<xref rid="pone.0147311.ref043" ref-type="bibr">43</xref>
]. See also a recent blog article that discusses this topic (
<ext-link ext-link-type="uri" xlink:href="http://alexanderetz.com/2015/08/30/the-bayesian-reproducibility-project/">http://alexanderetz.com/2015/08/30/the-bayesian-reproducibility-project/</ext-link>
). The current debates are often conducted in the context of large samples, so the challenge is much greater for studies such as the one presented here. This is another topic for future research.</p>
</sec>
</sec>
<sec sec-type="conclusions" id="sec012">
<title>Conclusion</title>
<p>We have demonstrated that a Bayesian analysis can be undertaken for small-scale athlete studies and can yield probabilistic outcomes that are comparable to those of the so-called magnitude-based (quasi-Bayesian) approach, but more directly interpretable and better justified theoretically. The model described here is one of the simplest Bayesian formulations, and can be expanded as needed to address other issues. Whether full Bayesian, quasi-Bayesian or frequentist, analytical approaches and decisions for small-sample studies must be well justified, reported transparently and interpreted correctly.</p>
</sec>
<sec sec-type="supplementary-material" id="sec013">
<title>Supporting Information</title>
<supplementary-material content-type="local-data" id="pone.0147311.s001">
<label>S1 Table</label>
<caption>
<title>Data used in the case study.</title>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0147311.s001.docx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
<supplementary-material content-type="local-data" id="pone.0147311.s002">
<label>S1 Text</label>
<caption>
<title>R code used in the analysis of the case study.</title>
<p>(DOCX)</p>
</caption>
<media xlink:href="pone.0147311.s002.docx">
<caption>
<p>Click here for additional data file.</p>
</caption>
</media>
</supplementary-material>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="pone.0147311.ref001">
<label>1</label>
<mixed-citation publication-type="journal">
<name>
<surname>Atkinson</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Batterham</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
.
<article-title>Sports performance research under the spotlight</article-title>
.
<source>Int J Sports Med</source>
.
<year>2012</year>
;
<volume>33</volume>
:
<fpage>949</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1055/s-0032-1327755">10.1055/s-0032-1327755</ext-link>
</comment>
<pub-id pub-id-type="pmid">23165647</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref002">
<label>2</label>
<mixed-citation publication-type="journal">
<name>
<surname>Ploutz-Snyder</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Fiedler</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Feiveson</surname>
<given-names>AH</given-names>
</name>
.
<article-title>Justifying small-n research in scientifically amazing settings: challenging the notion that only "big-n" studies are worthwhile</article-title>
.
<source>J Appl Physiol</source>
.
<year>2014</year>
;
<volume>116</volume>
:
<fpage>1251</fpage>
<lpage>2</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1152/japplphysiol.01335.2013">10.1152/japplphysiol.01335.2013</ext-link>
</comment>
<pub-id pub-id-type="pmid">24408991</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref003">
<label>3</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bacchetti</surname>
<given-names>P</given-names>
</name>
.
<article-title>Current sample size conventions: flaws, harms, and alternatives</article-title>
.
<source>BMC Medicine</source>
.
<year>2010</year>
;
<volume>8</volume>
:
<fpage>17</fpage>
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1186/1741-7015-8-17">10.1186/1741-7015-8-17</ext-link>
</comment>
<pub-id pub-id-type="pmid">20307281</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref004">
<label>4</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bacchetti</surname>
<given-names>P</given-names>
</name>
,
<name>
<surname>Deeks</surname>
<given-names>SG</given-names>
</name>
,
<name>
<surname>McCune</surname>
<given-names>JM</given-names>
</name>
.
<article-title>Breaking free of sample size dogma to perform innovative translational research</article-title>
.
<source>Sci Transl Med</source>
.
<year>2011</year>
;
<volume>3</volume>
(
<issue>87</issue>
):
<fpage>87sp24</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref005">
<label>5</label>
<mixed-citation publication-type="journal">
<name>
<surname>Beck</surname>
<given-names>TW</given-names>
</name>
.
<article-title>The importance of a priori sample size estimation in strength and conditioning research</article-title>
.
<source>J Str Cond Res</source>
.
<year>2013</year>
;
<volume>27</volume>
:
<fpage>2323</fpage>
<lpage>37</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref006">
<label>6</label>
<mixed-citation publication-type="journal">
<name>
<surname>Batterham</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
.
<article-title>Making meaningful inferences about magnitudes</article-title>
.
<source>Int J Sports Physiol Perf</source>
.
<year>2006</year>
;
<volume>1</volume>
:
<fpage>50</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref007">
<label>7</label>
<mixed-citation publication-type="journal">
<name>
<surname>Barker</surname>
<given-names>RJ</given-names>
</name>
,
<name>
<surname>Schofield</surname>
<given-names>MR</given-names>
</name>
.
<article-title>Inference about magnitudes of effects</article-title>
.
<source>Int J Sports Physiol Perf</source>
.
<year>2008</year>
;
<volume>3</volume>
:
<fpage>547</fpage>
<lpage>57</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref008">
<label>8</label>
<mixed-citation publication-type="journal">
<name>
<surname>Welsh</surname>
<given-names>AH</given-names>
</name>
,
<name>
<surname>Knight</surname>
<given-names>EJ</given-names>
</name>
.
<article-title>"Magnitude-Based Inference": A Statistical Review</article-title>
.
<source>Med Sci Sports Exerc</source>
.
<year>2014</year>
;
<volume>47</volume>
:
<fpage>874</fpage>
<lpage>84</lpage>
.
<pub-id pub-id-type="pmid">25051387</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref009">
<label>9</label>
<mixed-citation publication-type="journal">
<name>
<surname>Humberstone-Gough</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Saunders</surname>
<given-names>PU</given-names>
</name>
,
<name>
<surname>Bonetti</surname>
<given-names>DL</given-names>
</name>
,
<name>
<surname>Stephens</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Bullock</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Anson</surname>
<given-names>JM</given-names>
</name>
<etal>et al</etal>
<article-title>Comparison of live high: train low altitude and intermittent hypoxic exposure</article-title>
.
<source>J Sports Sci Med</source>
.
<year>2013</year>
;
<volume>12</volume>
:
<fpage>394</fpage>
<lpage>401</lpage>
.
<pub-id pub-id-type="pmid">24149143</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref010">
<label>10</label>
<mixed-citation publication-type="book">
<name>
<surname>Gelman</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Carlin</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Stern</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Dunson</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Vehtari</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Rubin</surname>
<given-names>D</given-names>
</name>
.
<source>
<italic>Bayesian Data Analysis</italic>
</source>
.
<edition>3rd ed.</edition>
:
<publisher-name>Chapman and Hall.</publisher-name>
;
<year>2013</year>
,
<fpage>64</fpage>
<lpage>9</lpage>
p.</mixed-citation>
</ref>
<ref id="pone.0147311.ref011">
<label>11</label>
<mixed-citation publication-type="book">
<name>
<surname>Cohen</surname>
<given-names>J</given-names>
</name>
.
<source>
<italic>Statistical power analysis for the behavioral sciences</italic>
</source>
.
<publisher-loc>Hillsdale, New Jersey</publisher-loc>
:
<publisher-name>Lawrence Erlbaum Associates</publisher-name>
;
<year>1988</year>
,
<fpage>1</fpage>
<lpage>17</lpage>
p.</mixed-citation>
</ref>
<ref id="pone.0147311.ref012">
<label>12</label>
<mixed-citation publication-type="book">
<name>
<surname>Gelman</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Carlin</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Stern</surname>
<given-names>H</given-names>
</name>
,
<name>
<surname>Dunson</surname>
<given-names>D</given-names>
</name>
,
<name>
<surname>Vehtari</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Rubin</surname>
<given-names>D</given-names>
</name>
.
<source>
<italic>Bayesian Data Analysis</italic>
</source>
.
<edition>3rd ed.</edition>
:
<publisher-name>Chapman and Hall</publisher-name>
;
<year>2013</year>
,
<fpage>275</fpage>
<lpage>92</lpage>
p.</mixed-citation>
</ref>
<ref id="pone.0147311.ref013">
<label>13</label>
<mixed-citation publication-type="journal">
<name>
<surname>Geman</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Geman</surname>
<given-names>D</given-names>
</name>
.
<article-title>Stochastic relaxation, Gibbs distributions and Bayesian restoration of images</article-title>
.
<source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
.
<year>1984</year>
;
<volume>6</volume>
:
<fpage>721</fpage>
<lpage>41</lpage>
.
<pub-id pub-id-type="pmid">22499653</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref014">
<label>14</label>
<mixed-citation publication-type="book">
<name>
<surname>Hedges</surname>
<given-names>LV</given-names>
</name>
,
<name>
<surname>Olkin</surname>
<given-names>I</given-names>
</name>
.
<source>
<italic>Statistical Methods for Meta-Analysis</italic>
</source>
.
<publisher-loc>Orlando</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
;
<year>1985</year>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref015">
<label>15</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lunn</surname>
<given-names>DJ</given-names>
</name>
,
<name>
<surname>Thomas</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Best</surname>
<given-names>N</given-names>
</name>
,
<name>
<surname>Spiegelhalter</surname>
<given-names>D</given-names>
</name>
.
<article-title>WinBugs—a Bayesian modelling framework: concepts, structure and extensibility</article-title>
.
<source>Stats Computing</source>
.
<year>2000</year>
;
<volume>10</volume>
:
<fpage>325</fpage>
<lpage>37</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref016">
<label>16</label>
<mixed-citation publication-type="book">
<name>
<surname>Marin</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Robert</surname>
<given-names>C</given-names>
</name>
.
<source>
<italic>Bayesian Core</italic>
:
<italic>A Practical Approach to Computational Bayesian Statistics</italic>
</source>
.
<publisher-name>Springer</publisher-name>
;
<year>2007</year>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref017">
<label>17</label>
<mixed-citation publication-type="book">
<name>
<surname>Marin</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Robert</surname>
<given-names>C</given-names>
</name>
.
<source>
<italic>Bayesian Essentials with R</italic>
</source>
.
<publisher-name>Springer</publisher-name>
;
<year>2014</year>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref018">
<label>18</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
,
<name>
<surname>Marshall</surname>
<given-names>SW</given-names>
</name>
,
<name>
<surname>Batterham</surname>
<given-names>AM</given-names>
</name>
,
<name>
<surname>Hanin</surname>
<given-names>J</given-names>
</name>
.
<article-title>Progressive statistics for studies in sports medicine and exercise science</article-title>
.
<source>Med Sci Sports Exerc</source>
.
<year>2009</year>
;
<volume>41</volume>
:
<fpage>3</fpage>
<lpage>13</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1249/MSS.0b013e31818cb278">10.1249/MSS.0b013e31818cb278</ext-link>
</comment>
<pub-id pub-id-type="pmid">19092709</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref019">
<label>19</label>
<mixed-citation publication-type="journal">
<name>
<surname>Garvican</surname>
<given-names>LA</given-names>
</name>
,
<name>
<surname>Martin</surname>
<given-names>DT</given-names>
</name>
,
<name>
<surname>McDonald</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Gore</surname>
<given-names>CJ</given-names>
</name>
.
<article-title>Seasonal variation of haemoglobin mass in internationally competitive female road cyclists</article-title>
.
<source>Eur J Appl Physiol</source>
.
<year>2010</year>
;
<volume>109</volume>
:
<fpage>221</fpage>
<lpage>31</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s00421-009-1349-2">10.1007/s00421-009-1349-2</ext-link>
</comment>
<pub-id pub-id-type="pmid">20058020</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref020">
<label>20</label>
<mixed-citation publication-type="journal">
<name>
<surname>Sturtz</surname>
<given-names>S</given-names>
</name>
,
<name>
<surname>Ligges</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Gelman</surname>
<given-names>A</given-names>
</name>
.
<article-title>R2WinBUGS: A package for running WinBUGS from R</article-title>
.
<source>J Stat Softw</source>
.
<year>2005</year>
;
<volume>12</volume>
:
<fpage>1</fpage>
<lpage>16</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref021">
<label>21</label>
<mixed-citation publication-type="journal">
<name>
<surname>Thomas</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>O'Hara</surname>
<given-names>B</given-names>
</name>
,
<name>
<surname>Ligges</surname>
<given-names>U</given-names>
</name>
,
<name>
<surname>Sturtz</surname>
<given-names>S</given-names>
</name>
.
<article-title>Making BUGS Open</article-title>
.
<source>R News</source>
.
<year>2006</year>
;
<volume>6</volume>
:
<fpage>12</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref022">
<label>22</label>
<mixed-citation publication-type="journal">
<name>
<surname>Martin</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Quinn</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Park</surname>
<given-names>J-H</given-names>
</name>
.
<article-title>MCMCpack: Markov chain Monte Carlo in R</article-title>
.
<source>J Stat Softw</source>
.
<year>2011</year>
;
<volume>42</volume>
:
<fpage>1</fpage>
<lpage>21</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref023">
<label>23</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gore</surname>
<given-names>CJ</given-names>
</name>
,
<name>
<surname>Sharpe</surname>
<given-names>K</given-names>
</name>
,
<name>
<surname>Garvican-Lewis</surname>
<given-names>LA</given-names>
</name>
,
<name>
<surname>Saunders</surname>
<given-names>PU</given-names>
</name>
,
<name>
<surname>Humberstone</surname>
<given-names>CE</given-names>
</name>
,
<name>
<surname>Robertson</surname>
<given-names>EY</given-names>
</name>
<etal>et al</etal>
.
<article-title>Altitude training and haemoglobin mass from the optimised carbon monoxide rebreathing method determined by a meta-analysis</article-title>
.
<source>Br J Sports Med</source>
.
<year>2013</year>
;
<volume>47</volume>
:
<fpage>i31</fpage>
<lpage>9</lpage>
.
<pub-id pub-id-type="pmid">24282204</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref024">
<label>24</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gore</surname>
<given-names>CJ</given-names>
</name>
,
<name>
<surname>Rodriguez</surname>
<given-names>FA</given-names>
</name>
,
<name>
<surname>Truijens</surname>
<given-names>MJ</given-names>
</name>
,
<name>
<surname>Townsend</surname>
<given-names>NE</given-names>
</name>
,
<name>
<surname>Stray-Gundersen</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Levine</surname>
<given-names>BD</given-names>
</name>
.
<article-title>Increased serum erythropoietin but not red cell production after 4 wk of intermittent hypobaric hypoxia (4,000–5,500 m)</article-title>
.
<source>J Appl Physiol</source>
.
<year>2006</year>
;
<volume>101</volume>
:
<fpage>1386</fpage>
<lpage>91</lpage>
.
<pub-id pub-id-type="pmid">16794028</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref025">
<label>25</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
,
<name>
<surname>Hawley</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Burke</surname>
<given-names>LM</given-names>
</name>
.
<article-title>Design and analysis of research on sport performance enhancement</article-title>
.
<source>Med Sci Sports Exerc</source>
.
<year>1999</year>
;
<volume>31</volume>
:
<fpage>472</fpage>
<lpage>85</lpage>
.
<pub-id pub-id-type="pmid">10188754</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref026">
<label>26</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hopkins</surname>
<given-names>W</given-names>
</name>
.
<article-title>How to interpret changes in an athletic performance test</article-title>
.
<source>Sportscience</source>
.
<year>2004</year>
;
<volume>8</volume>
:
<fpage>1</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref027">
<label>27</label>
<mixed-citation publication-type="journal">
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
,
<name>
<surname>Schabort</surname>
<given-names>EJ</given-names>
</name>
,
<name>
<surname>Hawley</surname>
<given-names>JA</given-names>
</name>
.
<article-title>Reliability of power in physical performance tests</article-title>
.
<source>Sports Med</source>
.
<year>2001</year>
;
<volume>31</volume>
:
<fpage>211</fpage>
<lpage>34</lpage>
.
<pub-id pub-id-type="pmid">11286357</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref028">
<label>28</label>
<mixed-citation publication-type="journal">
<name>
<surname>Bonetti</surname>
<given-names>DL</given-names>
</name>
,
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
.
<article-title>Variation in performance times of elite flat-water canoeists from race to race</article-title>
.
<source>Int J Sports Physiol Perf</source>
.
<year>2010</year>
;
<volume>5</volume>
:
<fpage>210</fpage>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref029">
<label>29</label>
<mixed-citation publication-type="journal">
<name>
<surname>Pyne</surname>
<given-names>DB</given-names>
</name>
,
<name>
<surname>Trewin</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
.
<article-title>Progression and variability of competitive performance of Olympic swimmers</article-title>
.
<source>J Sports Sci</source>
.
<year>2004</year>
;
<volume>22</volume>
:
<fpage>613</fpage>
<lpage>20</lpage>
.
<pub-id pub-id-type="pmid">15370491</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref030">
<label>30</label>
<mixed-citation publication-type="journal">
<name>
<surname>Smith</surname>
<given-names>TB</given-names>
</name>
,
<name>
<surname>Hopkins</surname>
<given-names>WG</given-names>
</name>
.
<article-title>Variability and predictability of finals times of elite rowers</article-title>
.
<source>Med Sci Sports Exerc</source>
.
<year>2011</year>
;
<volume>43</volume>
:
<fpage>2155</fpage>
<lpage>60</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1249/MSS.0b013e31821d3f8e">10.1249/MSS.0b013e31821d3f8e</ext-link>
</comment>
<pub-id pub-id-type="pmid">21502896</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref031">
<label>31</label>
<mixed-citation publication-type="journal">
<name>
<surname>Schmidt</surname>
<given-names>W</given-names>
</name>
,
<name>
<surname>Prommer</surname>
<given-names>N</given-names>
</name>
.
<article-title>Effects of various training modalities on blood volume</article-title>
.
<source>Scand J Med Sci Sports</source>
.
<year>2008</year>
;
<volume>18</volume>
:
<fpage>57</fpage>
<lpage>69</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1111/j.1600-0838.2008.00833.x">10.1111/j.1600-0838.2008.00833.x</ext-link>
</comment>
<pub-id pub-id-type="pmid">18665953</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref032">
<label>32</label>
<mixed-citation publication-type="journal">
<name>
<surname>Lundby</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Calbert</surname>
<given-names>JA</given-names>
</name>
,
<name>
<surname>Sander</surname>
<given-names>M</given-names>
</name>
,
<name>
<surname>van Hall</surname>
<given-names>G</given-names>
</name>
,
<name>
<surname>Mazzeo</surname>
<given-names>RS</given-names>
</name>
,
<name>
<surname>Stray-Gundersen</surname>
<given-names>J</given-names>
</name>
<etal>et al</etal>
<article-title>Exercise economy does not change after acclimatization to moderate to very high altitude</article-title>
.
<source>Scand J Med Sci Sports</source>
.
<year>2007</year>
;
<volume>17</volume>
:
<fpage>281</fpage>
<lpage>91</lpage>
.
<pub-id pub-id-type="pmid">17501869</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref033">
<label>33</label>
<mixed-citation publication-type="journal">
<name>
<surname>Conley</surname>
<given-names>DL</given-names>
</name>
,
<name>
<surname>Krahenbuhl</surname>
<given-names>GS</given-names>
</name>
.
<article-title>Running economy and distance running performance of highly trained athletes</article-title>
.
<source>Med Sci Sports Exerc</source>
.
<year>1980</year>
;
<volume>12</volume>
:
<fpage>357</fpage>
<lpage>60</lpage>
.
<pub-id pub-id-type="pmid">7453514</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref034">
<label>34</label>
<mixed-citation publication-type="journal">
<name>
<surname>Daniels</surname>
<given-names>JT</given-names>
</name>
.
<article-title>A physiologist's view of running economy</article-title>
.
<source>Med Sci Sports Exerc</source>
.
<year>1985</year>
;
<volume>17</volume>
:
<fpage>332</fpage>
<lpage>8</lpage>
.
<pub-id pub-id-type="pmid">3894870</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref035">
<label>35</label>
<mixed-citation publication-type="journal">
<name>
<surname>Wilkinson</surname>
<given-names>M</given-names>
</name>
.
<article-title>Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference</article-title>
.
<source>Sports Med</source>
.
<year>2014</year>
;
<volume>44</volume>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref036">
<label>36</label>
<mixed-citation publication-type="journal">
<name>
<surname>Burton</surname>
<given-names>PR</given-names>
</name>
,
<name>
<surname>Gurrin</surname>
<given-names>LC</given-names>
</name>
,
<name>
<surname>Campbell</surname>
<given-names>MJ</given-names>
</name>
.
<article-title>Clinical significance not statistical significance: a simple Bayesian alternative to p values</article-title>
.
<source>J Epidemiol Comm Health</source>
.
<year>1998</year>
;
<volume>52</volume>
:
<fpage>318</fpage>
<lpage>23</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref037">
<label>37</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stang</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Poole</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Kuss</surname>
<given-names>O</given-names>
</name>
.
<article-title>The ongoing tyranny of statistical significance testing in biomedical research</article-title>
.
<source>Eur J Epidemiol</source>
.
<year>2010</year>
;
<volume>25</volume>
:
<fpage>225</fpage>
<lpage>30</lpage>
.
<comment>doi:
<ext-link ext-link-type="uri" xlink:href="http://dx.doi.org/10.1007/s10654-010-9440-x">10.1007/s10654-010-9440-x</ext-link>
</comment>
<pub-id pub-id-type="pmid">20339903</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref038">
<label>38</label>
<mixed-citation publication-type="journal">
<name>
<surname>Stapleton</surname>
<given-names>C</given-names>
</name>
,
<name>
<surname>Scott</surname>
<given-names>MA</given-names>
</name>
,
<name>
<surname>Atkinson</surname>
<given-names>G</given-names>
</name>
.
<article-title>The 'so what' factor: statistical versus clinical significance</article-title>
.
<source>Int J Sports Med</source>
.
<year>2009</year>
;
<volume>30</volume>
:
<fpage>773</fpage>
<lpage>4</lpage>
.
<pub-id pub-id-type="pmid">19876796</pub-id>
</mixed-citation>
</ref>
<ref id="pone.0147311.ref039">
<label>39</label>
<mixed-citation publication-type="journal">
<name>
<surname>Williams</surname>
<given-names>MN</given-names>
</name>
,
<name>
<surname>Grajales</surname>
<given-names>GAG</given-names>
</name>
,
<name>
<surname>Kurkiewicz</surname>
<given-names>D</given-names>
</name>
.
<article-title>Assumptions of multiple regression: correcting two misconceptions</article-title>
.
<source>Practical Assessment, Research and Evaluation</source>
.
<year>2013</year>
;
<volume>18</volume>
:
<fpage>1</fpage>
<lpage>14</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref040">
<label>40</label>
<mixed-citation publication-type="journal">
<collab>Open Science Collaboration</collab>
.
<article-title>Estimating the reproducibility of psychological science</article-title>
.
<source>Science</source>
.
<year>2015</year>
;
<volume>349</volume>
:
<issue>6251</issue>
:
<fpage>aac4716</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref041">
<label>41</label>
<mixed-citation publication-type="journal">
<name>
<surname>Dienes</surname>
<given-names>Z</given-names>
</name>
.
<article-title>Using Bayes to get the most out of non-significant results</article-title>
.
<source>Front Psychol</source>
.
<year>2014</year>
;
<volume>5</volume>
:
<fpage>781</fpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref042">
<label>42</label>
<mixed-citation publication-type="journal">
<name>
<surname>Gelman</surname>
<given-names>A</given-names>
</name>
,
<name>
<surname>Stern</surname>
<given-names>H</given-names>
</name>
.
<article-title>The difference between "significant" and "not significant" is not itself statistically significant</article-title>
.
<source>Am Stat</source>
.
<year>2006</year>
;
<volume>60</volume>
:
<fpage>328</fpage>
<lpage>31</lpage>
.</mixed-citation>
</ref>
<ref id="pone.0147311.ref043">
<label>43</label>
<mixed-citation publication-type="journal">
<name>
<surname>Verhagen</surname>
<given-names>J</given-names>
</name>
,
<name>
<surname>Wagenmakers</surname>
<given-names>EJ</given-names>
</name>
.
<article-title>Bayesian tests to quantify the result of a replication attempt</article-title>
.
<source>J Exp Psych Gen</source>
.
<year>2014</year>
;
<volume>143</volume>
:
<fpage>1457</fpage>
<lpage>75</lpage>
.</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

# Select record 0006089 from the corpus index (biblio.hfd) and page through its indented XML:
EXPLOR_STEP=$WICRI_ROOT/Wicri/Asie/explor/AustralieFrV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 0006089 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 0006089 | SxmlIndent | more
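
An illustrative sketch only (not part of Dilib itself): assuming the output of either command above has been redirected to a file named 0006089.xml, and that this file parses as well-formed XML, the record's reference list can be listed with the Python standard library. The element names used (ref, label, mixed-citation, article-title, source) are those visible in the record above.

import xml.etree.ElementTree as ET

# File name is an assumption: the record saved from the HfdSelect command above.
tree = ET.parse("0006089.xml")

for ref in tree.getroot().iter("ref"):
    label = (ref.findtext("label") or "?").strip()
    cit = ref.find("mixed-citation")
    if cit is None:
        continue
    # Journal entries carry an <article-title>; book entries only a <source> title.
    title_el = cit.find("article-title")
    if title_el is None:
        title_el = cit.find("source")
    title = "".join(title_el.itertext()).strip() if title_el is not None else ""
    print("[{}] {}".format(label, title))

For anything beyond quick inspection, a JATS-aware tool is preferable, since mixed-citation content is free-form.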

To link to this page from within the Wicri network

{{Explor lien
   |wiki=    Wicri/Asie
   |area=    AustralieFrV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Tue Dec 5 10:43:12 2017. Site generation: Tue Mar 5 14:07:20 2024