Exploration server on open access in Belgium

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

A framework for streamlining research workflow in neuroscience and psychology

Internal identifier: 000465 (Pmc/Corpus); previous: 000464; next: 000466


Author: Jonas Kubilius

Source:

RBID : PMC:3894454

Abstract

Successful accumulation of knowledge is critically dependent on the ability to verify and replicate every part of scientific conduct. However, such principles are difficult to enact when researchers continue to resort to ad-hoc workflows and poorly maintained code bases. In this paper I examine the needs of the neuroscience and psychology community and introduce psychopy_ext, a unifying framework that seamlessly integrates popular experiment building, analysis, and manuscript preparation tools by choosing reasonable defaults and implementing relatively rigid patterns of workflow. This structure allows for automation of multiple tasks, such as generating user interfaces, unit testing, control analyses of stimuli, single-command access to descriptive statistics, and publication-quality plotting. Taken together, psychopy_ext opens an exciting possibility for faster, more robust code development and collaboration for researchers.


Url:
DOI: 10.3389/fninf.2013.00052
PubMed: 24478691
PubMed Central: 3894454

Links to Exploration step

PMC:3894454

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A framework for streamlining research workflow in neuroscience and psychology</title>
<author>
<name sortKey="Kubilius, Jonas" sort="Kubilius, Jonas" uniqKey="Kubilius J" first="Jonas" last="Kubilius">Jonas Kubilius</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24478691</idno>
<idno type="pmc">3894454</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3894454</idno>
<idno type="RBID">PMC:3894454</idno>
<idno type="doi">10.3389/fninf.2013.00052</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">000465</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">A framework for streamlining research workflow in neuroscience and psychology</title>
<author>
<name sortKey="Kubilius, Jonas" sort="Kubilius, Jonas" uniqKey="Kubilius J" first="Jonas" last="Kubilius">Jonas Kubilius</name>
</author>
</analytic>
<series>
<title level="j">Frontiers in Neuroinformatics</title>
<idno type="eISSN">1662-5196</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Successful accumulation of knowledge is critically dependent on the ability to verify and replicate every part of scientific conduct. However, such principles are difficult to enact when researchers continue to resort to
<italic>ad-hoc</italic>
workflows and poorly maintained code bases. In this paper I examine the needs of the neuroscience and psychology community and introduce
<italic>psychopy_ext</italic>
, a unifying framework that seamlessly integrates popular experiment building, analysis, and manuscript preparation tools by choosing reasonable defaults and implementing relatively rigid patterns of workflow. This structure allows for automation of multiple tasks, such as generating user interfaces, unit testing, control analyses of stimuli, single-command access to descriptive statistics, and publication-quality plotting. Taken together,
<italic>psychopy_ext</italic>
opens an exciting possibility for faster, more robust code development and collaboration for researchers.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Ashburner, J" uniqKey="Ashburner J">J. Ashburner</name>
</author>
<author>
<name sortKey="Friston, K J" uniqKey="Friston K">K. J. Friston</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Barnes, N" uniqKey="Barnes N">N. Barnes</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cox, R W" uniqKey="Cox R">R. W. Cox</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Davison, A" uniqKey="Davison A">A. Davison</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gorgolewski, K" uniqKey="Gorgolewski K">K. Gorgolewski</name>
</author>
<author>
<name sortKey="Madison, C" uniqKey="Madison C">C. Madison</name>
</author>
<author>
<name sortKey="Clark, D" uniqKey="Clark D">D. Clark</name>
</author>
<author>
<name sortKey="Halchenko, Y O" uniqKey="Halchenko Y">Y. O. Halchenko</name>
</author>
<author>
<name sortKey="Waskom, M L" uniqKey="Waskom M">M. L. Waskom</name>
</author>
<author>
<name sortKey="Ghosh, S S" uniqKey="Ghosh S">S. S. Ghosh</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Halchenko, Y O" uniqKey="Halchenko Y">Y. O. Halchenko</name>
</author>
<author>
<name sortKey="Hanke, M" uniqKey="Hanke M">M. Hanke</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hanke, M" uniqKey="Hanke M">M. Hanke</name>
</author>
<author>
<name sortKey="Halchenko, Y" uniqKey="Halchenko Y">Y. Halchenko</name>
</author>
<author>
<name sortKey="Sederberg, P" uniqKey="Sederberg P">P. Sederberg</name>
</author>
<author>
<name sortKey="Hanson, S" uniqKey="Hanson S">S. Hanson</name>
</author>
<author>
<name sortKey="Haxby, J" uniqKey="Haxby J">J. Haxby</name>
</author>
<author>
<name sortKey="Pollmann, S" uniqKey="Pollmann S">S. Pollmann</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ince, D C" uniqKey="Ince D">D. C. Ince</name>
</author>
<author>
<name sortKey="Hatton, L" uniqKey="Hatton L">L. Hatton</name>
</author>
<author>
<name sortKey="Graham Cumming, J" uniqKey="Graham Cumming J">J. Graham-Cumming</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Joppa, L N" uniqKey="Joppa L">L. N. Joppa</name>
</author>
<author>
<name sortKey="Mcinerny, G" uniqKey="Mcinerny G">G. McInerny</name>
</author>
<author>
<name sortKey="Harper, R" uniqKey="Harper R">R. Harper</name>
</author>
<author>
<name sortKey="Salido, L" uniqKey="Salido L">L. Salido</name>
</author>
<author>
<name sortKey="Takeda, K" uniqKey="Takeda K">K. Takeda</name>
</author>
<author>
<name sortKey="O Hara, K" uniqKey="O Hara K">K. O'Hara</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kampstra, P" uniqKey="Kampstra P">P. Kampstra</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kubilius, J" uniqKey="Kubilius J">J. Kubilius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kubilius, J" uniqKey="Kubilius J">J. Kubilius</name>
</author>
<author>
<name sortKey="Wagemans, J" uniqKey="Wagemans J">J. Wagemans</name>
</author>
<author>
<name sortKey="Op De Beeck, H P" uniqKey="Op De Beeck H">H. P. Op de Beeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lades, M" uniqKey="Lades M">M. Lades</name>
</author>
<author>
<name sortKey="Vorbruggen, J C" uniqKey="Vorbruggen J">J. C. Vorbruggen</name>
</author>
<author>
<name sortKey="Buhmann, J" uniqKey="Buhmann J">J. Buhmann</name>
</author>
<author>
<name sortKey="Lange, J" uniqKey="Lange J">J. Lange</name>
</author>
<author>
<name sortKey="Von Der Malsburg, C" uniqKey="Von Der Malsburg C">C. von der Malsburg</name>
</author>
<author>
<name sortKey="Wurtz, R P" uniqKey="Wurtz R">R. P. Wurtz</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Op De Beeck, H" uniqKey="Op De Beeck H">H. Op de Beeck</name>
</author>
<author>
<name sortKey="Wagemans, J" uniqKey="Wagemans J">J. Wagemans</name>
</author>
<author>
<name sortKey="Vogels, R" uniqKey="Vogels R">R. Vogels</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pedregosa, F" uniqKey="Pedregosa F">F. Pedregosa</name>
</author>
<author>
<name sortKey="Varoquaux, G" uniqKey="Varoquaux G">G. Varoquaux</name>
</author>
<author>
<name sortKey="Gramfort, A" uniqKey="Gramfort A">A. Gramfort</name>
</author>
<author>
<name sortKey="Michel, V" uniqKey="Michel V">V. Michel</name>
</author>
<author>
<name sortKey="Thirion, B" uniqKey="Thirion B">B. Thirion</name>
</author>
<author>
<name sortKey="Grisel, O" uniqKey="Grisel O">O. Grisel</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peirce, J W" uniqKey="Peirce J">J. W. Peirce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Peirce, J W" uniqKey="Peirce J">J. W. Peirce</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Perez, F" uniqKey="Perez F">F. Perez</name>
</author>
<author>
<name sortKey="Granger, B E" uniqKey="Granger B">B. E. Granger</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Riesenhuber, M" uniqKey="Riesenhuber M">M. Riesenhuber</name>
</author>
<author>
<name sortKey="Poggio, T" uniqKey="Poggio T">T. Poggio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Stevens, J R" uniqKey="Stevens J">J. R. Stevens</name>
</author>
<author>
<name sortKey="Elver, M" uniqKey="Elver M">M. Elver</name>
</author>
<author>
<name sortKey="Bednar, J A" uniqKey="Bednar J">J. A. Bednar</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wallis, T S" uniqKey="Wallis T">T. S. Wallis</name>
</author>
<author>
<name sortKey="Taylor, C P" uniqKey="Taylor C">C. P. Taylor</name>
</author>
<author>
<name sortKey="Wallis, J" uniqKey="Wallis J">J. Wallis</name>
</author>
<author>
<name sortKey="Jackson, M L" uniqKey="Jackson M">M. L. Jackson</name>
</author>
<author>
<name sortKey="Bex, P J" uniqKey="Bex P">P. J. Bex</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="White, E P" uniqKey="White E">E. P. White</name>
</author>
<author>
<name sortKey="Baldridge, E" uniqKey="Baldridge E">E. Baldridge</name>
</author>
<author>
<name sortKey="Brym, Z T" uniqKey="Brym Z">Z. T. Brym</name>
</author>
<author>
<name sortKey="Locey, K J" uniqKey="Locey K">K. J. Locey</name>
</author>
<author>
<name sortKey="Mcglinn, D J" uniqKey="Mcglinn D">D. J. McGlinn</name>
</author>
<author>
<name sortKey="Supp, S R" uniqKey="Supp S">S. R. Supp</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Wilson, G" uniqKey="Wilson G">G. Wilson</name>
</author>
<author>
<name sortKey="Aruliah, D A" uniqKey="Aruliah D">D. A. Aruliah</name>
</author>
<author>
<name sortKey="Brown, C T" uniqKey="Brown C">C. T. Brown</name>
</author>
<author>
<name sortKey="Hong, N P C" uniqKey="Hong N">N. P. C. Hong</name>
</author>
<author>
<name sortKey="Davis, M" uniqKey="Davis M">M. Davis</name>
</author>
<author>
<name sortKey="Guy, R T" uniqKey="Guy R">R. T. Guy</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Xu, X" uniqKey="Xu X">X. Xu</name>
</author>
<author>
<name sortKey="Yue, X" uniqKey="Yue X">X. Yue</name>
</author>
<author>
<name sortKey="Lescroart, M D" uniqKey="Lescroart M">M. D. Lescroart</name>
</author>
<author>
<name sortKey="Biederman, I" uniqKey="Biederman I">I. Biederman</name>
</author>
<author>
<name sortKey="Kim, J G" uniqKey="Kim J">J. G. Kim</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="review-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Front Neuroinform</journal-id>
<journal-id journal-id-type="iso-abbrev">Front Neuroinform</journal-id>
<journal-id journal-id-type="publisher-id">Front. Neuroinform.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Neuroinformatics</journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5196</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24478691</article-id>
<article-id pub-id-type="pmc">3894454</article-id>
<article-id pub-id-type="doi">10.3389/fninf.2013.00052</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Methods Article</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A framework for streamlining research workflow in neuroscience and psychology</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Kubilius</surname>
<given-names>Jonas</given-names>
</name>
<xref ref-type="author-notes" rid="fn001">
<sup>*</sup>
</xref>
</contrib>
</contrib-group>
<aff>
<institution>Laboratories of Biological and Experimental Psychology, Faculty of Psychology and Educational Sciences, KU Leuven</institution>
<country>Leuven, Belgium</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>Edited by: Yaroslav O. Halchenko, Dartmouth College, USA</p>
</fn>
<fn fn-type="edited-by">
<p>Reviewed by: Michael Hanke, Otto-von-Guericke University, Germany; Fernando Perez, University of California at Berkeley, USA</p>
</fn>
<corresp id="fn001">*Correspondence: Jonas Kubilius, Laboratory of Biological Psychology, Faculty of Psychology and Educational Sciences, KU Leuven, Tiensestraat 102 bus 3711, Leuven e-mail:
<email xlink:type="simple">Jonas.Kubilius@ppw.kuleuven.be</email>
</corresp>
<fn fn-type="other" id="fn002">
<p>This article was submitted to the journal Frontiers in Neuroinformatics.</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>1</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="collection">
<year>2013</year>
</pub-date>
<volume>7</volume>
<elocation-id>52</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>11</month>
<year>2013</year>
</date>
<date date-type="accepted">
<day>30</day>
<month>12</month>
<year>2013</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright © 2014 Kubilius.</copyright-statement>
<copyright-year>2014</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/3.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Successful accumulation of knowledge is critically dependent on the ability to verify and replicate every part of scientific conduct. However, such principles are difficult to enact when researchers continue to resort to
<italic>ad-hoc</italic>
workflows and poorly maintained code bases. In this paper I examine the needs of the neuroscience and psychology community and introduce
<italic>psychopy_ext</italic>
, a unifying framework that seamlessly integrates popular experiment building, analysis, and manuscript preparation tools by choosing reasonable defaults and implementing relatively rigid patterns of workflow. This structure allows for automation of multiple tasks, such as generating user interfaces, unit testing, control analyses of stimuli, single-command access to descriptive statistics, and publication-quality plotting. Taken together,
<italic>psychopy_ext</italic>
opens an exciting possibility for faster, more robust code development and collaboration for researchers.</p>
</abstract>
<kwd-group>
<kwd>python</kwd>
<kwd>neuroscience</kwd>
<kwd>vision</kwd>
<kwd>psychophysics</kwd>
<kwd>fMRI</kwd>
<kwd>MVPA</kwd>
<kwd>reproducibility</kwd>
<kwd>collaboration</kwd>
</kwd-group>
<counts>
<fig-count count="7"></fig-count>
<table-count count="0"></table-count>
<equation-count count="0"></equation-count>
<ref-count count="24"></ref-count>
<page-count count="12"></page-count>
<word-count count="7538"></word-count>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>In recent years, Python and its scientific packages have emerged as a promising platform for researchers in neuroscience and psychology, including
<italic>PsychoPy</italic>
for running experiments (Peirce,
<xref ref-type="bibr" rid="B16">2007</xref>
,
<xref ref-type="bibr" rid="B17">2009</xref>
),
<italic>pandas</italic>
<xref ref-type="fn" rid="fn0001">
<sup>1</sup>
</xref>
and
<italic>statsmodels</italic>
<xref ref-type="fn" rid="fn0002">
<sup>2</sup>
</xref>
for data analysis,
<italic>PyMVPA</italic>
(Hanke et al.,
<xref ref-type="bibr" rid="B7">2009</xref>
) and
<italic>scikit-learn</italic>
(Pedregosa et al.,
<xref ref-type="bibr" rid="B15">2011</xref>
) for machine learning data analyses, and
<italic>NeuroDebian</italic>
(Halchenko and Hanke,
<xref ref-type="bibr" rid="B6">2012</xref>
) as an overarching platform providing easy deployment of these tools. Together, these tools are increasingly opening possibilities for developing, sharing, and building upon experimental and analysis code.</p>
<p>However, with most software focusing on facilitating various parts of the scientific routine, until very recently there were few, if any, options to directly foster the key principles of science, namely, transparency and reproducibility. Even with an increasing interest in Open Science, it is very infrequent that a researcher publishes the entire log of her work that would allow for a perfect reproduction of each and every step of that work. In fact, while open access to publications is largely perceived as desirable, open-sourcing experiment and analysis code is often ignored or met with skepticism, and for a good reason: many publications would be difficult to reproduce from start to end given typically poor coding skills, a lack of version-control habits, and the prevalence of manual implementation of many tasks (such as statistical analyses or plotting) in neuroscience and psychology (Ince et al.,
<xref ref-type="bibr" rid="B8">2012</xref>
). As many practicing scientists know, organizing different research stages together into a clean working copy is a time-consuming and thankless job in the publish-or-perish merit system. Yet these tendencies are troubling because, lacking software-engineering skills, researchers are more likely to produce poor-quality code, and in the absence of code sharing, errors are hard to detect (Joppa et al.,
<xref ref-type="bibr" rid="B9">2013</xref>
), leading to reproducible research in theory but not in practice.</p>
<p>I argue that the primary reason for such irreproducible research is the lack of tools that would seamlessly enact good coding and sharing standards. Here I examine the needs of the neuroscience and psychology community and develop a framework tailored to address these needs. To implement these ideas, I introduce a Python package called
<italic>psychopy_ext</italic>
(
<ext-link ext-link-type="uri" xlink:href="http://psychopy_ext.klab.lt">http://psychopy_ext.klab.lt</ext-link>
) that ties together existing Python packages for project organization, creation of experiments, behavioral, functional magnetic resonance imaging (fMRI), and stimulus analyses, and publication-quality plotting in a unified and relatively rigid interface. Unlike
<italic>PsychoPy</italic>
,
<italic>PyMVPA</italic>
,
<italic>pandas</italic>
, or
<italic>matplotlib</italic>
, which are very flexible and support multiple options to suit everyone's needs, the underlying philosophy of
<italic>psychopy_ext</italic>
is to act as the glue at a higher level of operation by choosing reasonable defaults for these packages and providing patterns for common tasks with minimal user intervention. More specifically, it provides extensive and well-structured wrappers around these packages such that interaction between them becomes seamless.</p>
</sec>
<sec>
<title>Design</title>
<sec>
<title>Philosophy</title>
<p>The overarching philosophical stance taken in
<italic>psychopy_ext</italic>
can be summarized in the following manner:
<italic>Tools must act clever</italic>
. This statement implies several design choices for a software package:</p>
<list list-type="order">
<list-item>
<p>
<italic>Reasonable defaults.</italic>
When a package is designed with the idea that it must act clever, reasonable expectations from an end user can be matched. Unfortunately, many packages lack this quality. For example, while
<italic>matplotlib</italic>
excels at producing plots, by default it lacks the publication-quality polish that a user can reasonably expect.</p>
</list-item>
<list-item>
<p>
<italic>Minimal user intervention (top-down principle).</italic>
A package should be capable of producing a working end product with little effort on a user's part. Importantly, various components in the workflow should be aware of each other and able to transfer information.</p>
</list-item>
<list-item>
<p>
<italic>Intuitive interface.</italic>
A user should not struggle to grasp how to perform a certain task. Rather, as explained in PEP 20
<xref ref-type="fn" rid="fn0003">
<sup>3</sup>
</xref>
, “There should be one—and preferably only one—obvious way to do it.”</p>
</list-item>
<list-item>
<p>
<italic>Encourage good habits.</italic>
In Python, code layout is not left up to a user: it is part of the language specification, resulting in inherently readable code compared to other programming languages. Similarly, I maintain that software should be clever enough to encourage, or even require, such habits
<italic>by design</italic>
.</p>
</list-item>
</list>
</sec>
<sec>
<title>Implementation</title>
<p>The aim of
<italic>psychopy_ext</italic>
is to streamline the typical workflow in psychology and neuroscience research depicted in Figure
<xref ref-type="fig" rid="F1">1</xref>
. In particular, an ideal tool should:</p>
<list list-type="order">
<list-item>
<p>Streamline as many workflow steps as possible (“be clever”).</p>
</list-item>
<list-item>
<p>Seamlessly tie together these workflow steps.</p>
</list-item>
<list-item>
<p>Facilitate reproducibility of the entire workflow.</p>
</list-item>
</list>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption>
<p>
<bold>A typical research workflow in neuroscience and psychology</bold>
. For each task, modules from
<italic>psychopy_ext</italic>
that streamline the particular task are listed. Figure adapted from Kubilius (
<xref ref-type="bibr" rid="B11">2013</xref>
).</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0001"></graphic>
</fig>
<p>To reach this goal,
<italic>psychopy_ext</italic>
aims to abstract common routines encountered in a typical research cycle, and wrap relevant existing packages in a format that makes them easily available to an end user. By adhering to the design philosophy explained above, the goal is to anticipate common user needs and provide a “magic” (or “all taken care of”) experience. This goal is achieved by several means.</p>
<p>First of all,
<italic>psychopy_ext</italic>
makes many choices for a user. For example, while there are many formats to store data collected during an experiment, only a few of them facilitate sharing (White et al.,
<xref ref-type="bibr" rid="B21">2013</xref>
). Thus, unlike many other packages,
<italic>psychopy_ext</italic>
requires that data be saved solely to a comma-delimited .csv file in the long format, which is versatile and widely adopted; it does not support exporting to tab-delimited or Microsoft Excel xls/xlsx files, which can be potentially problematic (White et al.,
<xref ref-type="bibr" rid="B21">2013</xref>
). Such consistency in data output structure both improves project organization and significantly simplifies functions that use this data.</p>
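To make the "long format" concrete, here is a minimal sketch of what such a comma-delimited output might look like, built with Python's standard csv module. The column names (subjid, trialno, cond, rt, accuracy) are hypothetical illustrations, not psychopy_ext's actual output schema.

```python
import csv
import io

# Each row is one observation: one trial for one participant, with every
# factor in its own column -- the "long" format described above.
# (Hypothetical column names, for illustration only.)
rows = [
    {"subjid": "01", "trialno": 1, "cond": "same", "rt": 0.512, "accuracy": 1},
    {"subjid": "01", "trialno": 2, "cond": "diff", "rt": 0.843, "accuracy": 0},
    {"subjid": "02", "trialno": 1, "cond": "same", "rt": 0.477, "accuracy": 1},
]

buf = io.StringIO()  # stands in for an on-disk .csv file
writer = csv.DictWriter(buf, fieldnames=["subjid", "trialno", "cond", "rt", "accuracy"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

Because every analysis function can rely on this one layout, downstream code never needs per-experiment parsing logic.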
<p>Moreover,
<italic>psychopy_ext</italic>
has a large number of built-in functions that ensure that an experiment or an analysis can be up and running with minimal effort on the user's part. Very few things have to be specified by a user to generate working experiments, control stimuli, or produce nice-looking plots. Importantly, unit testing and version control are built-in features of
<italic>psychopy_ext</italic>
, gently encouraging a user to embrace good programming practices. Similarly, access to simple image processing models is provided, allowing researchers to quickly rule out potential confounds in their stimuli prior to conducting a study and resulting in better-controlled research paradigms.</p>
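The kind of control analysis mentioned above can be as simple as checking that a low-level image property does not differ between conditions. The following plain-Python sketch compares mean pixel intensity across two conditions; it only illustrates the idea (psychopy_ext provides more sophisticated image-processing models), and the 2x2 grayscale arrays are made up.

```python
# Toy control analysis: does mean pixel intensity differ between conditions?
# (Illustrative sketch with made-up 2x2 grayscale stimuli; not the actual
# psychopy_ext models.)
def mean_intensity(image):
    """Average pixel value of a 2-D list-of-lists image."""
    return sum(sum(row) for row in image) / sum(len(row) for row in image)

condition_a = [[[10, 20], [30, 40]], [[20, 20], [20, 40]]]  # two stimuli
condition_b = [[[12, 22], [28, 38]], [[18, 22], [22, 38]]]

mean_a = sum(mean_intensity(im) for im in condition_a) / len(condition_a)
mean_b = sum(mean_intensity(im) for im in condition_b) / len(condition_b)
confounded = abs(mean_a - mean_b) > 1.0  # flag a suspicious difference
```

Running such a check before data collection is far cheaper than discovering a luminance confound after the study is complete.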
<p>Finally,
<italic>psychopy_ext</italic>
strives to integrate well with Python in order to improve coding habits. In my experience, experiments are often understood and coded as a sequence of commands. However, this intuitive model quickly breaks down when more sophisticated routines or reuse of parts of the code are necessary, resulting in poor codebase organization overall. Therefore, in
<italic>psychopy_ext</italic>
experiments and analyses are defined as classes, with each method intended for a single task only. Such a design leads users to learn and actively benefit from object-oriented programming (OOP) and modularity. Moreover, the code automatically becomes more readable.</p>
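The class-based structure described above can be sketched in plain Python: each method does exactly one job, so parts can be overridden or unit-tested in isolation. This is an illustrative sketch only; the class and method names below are made up and are not the actual psychopy_ext API.

```python
# Illustrative sketch of an experiment as a class with single-task methods.
# (Hypothetical names; not the actual psychopy_ext API.)
class SimpleExperiment:
    def __init__(self, participant_id):
        self.participant_id = participant_id
        self.data = []

    def create_trials(self):
        # One dict per trial; a subclass could override just this method
        # to change the design without touching the run loop.
        return [{"trialno": i, "cond": c} for i, c in enumerate(["same", "diff"])]

    def run_trial(self, trial):
        # A real experiment would draw stimuli and collect a response here.
        response = {"rt": 0.5, "accuracy": 1}
        self.data.append({**trial, **response})

    def run(self):
        for trial in self.create_trials():
            self.run_trial(trial)
        return self.data

data = SimpleExperiment("01").run()
```

Because the run loop is generic, automation such as unit testing or GUI generation can operate on any experiment that follows the same pattern.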
<p>While adopting a particular workflow might induce a steep learning curve, I maintain that common templates facilitate code clarity, comprehension, and reproducibility (Wilson et al.,
<xref ref-type="bibr" rid="B22">2012</xref>
). In fact, multiple automations featured in
<italic>psychopy_ext</italic>
are possible solely because of this rigid structure. On the other hand, introducing such templates does not impede flexibility because, in the OOP approach, a user is free to customize everything to her own needs.</p>
</sec>
<sec>
<title>Technical details</title>
<sec>
<title>Scope and audience</title>
<p>
<italic>Psychopy_ext</italic>
is a Python package that wraps together other Python tools for streamlining research, including
<italic>PsychoPy</italic>
,
<italic>pandas</italic>
,
<italic>matplotlib</italic>
, and
<italic>pymvpa2</italic>
. As such, it is not a standalone tool; rather a user is expected to have Python with relevant dependencies installed (which is the case for
<italic>PsychoPy</italic>
users, whom
<italic>psychopy_ext</italic>
directly targets). Moreover, users are expected to be at least somewhat familiar with OOP, as
<italic>psychopy_ext</italic>
takes extensive advantage of it.</p>
</sec>
<sec>
<title>Dependencies</title>
<p>
<italic>Psychopy_ext</italic>
depends on
<italic>PsychoPy</italic>
<xref ref-type="fn" rid="fn0004">
<sup>4</sup>
</xref>
(version 1.70+) and
<italic>pandas</italic>
<xref ref-type="fn" rid="fn0005">
<sup>5</sup>
</xref>
(version 0.12+), both of which are provided by the Standalone
<italic>PsychoPy</italic>
distribution. To benefit from automatic docstring conversion to instruction displays during experiments,
<italic>docutils</italic>
<xref ref-type="fn" rid="fn0006">
<sup>6</sup>
</xref>
is required.
<italic>Seaborn</italic>
<xref ref-type="fn" rid="fn0007">
<sup>7</sup>
</xref>
(version 0.1+) is also highly recommended for extremely beautiful plots (otherwise it defaults to good-enough
<italic>pandas</italic>
parameters). For fMRI analyses,
<italic>pymvpa2</italic>
<xref ref-type="fn" rid="fn0008">
<sup>8</sup>
</xref>
(version 2.0+) and
<italic>nibabel</italic>
<xref ref-type="fn" rid="fn0009">
<sup>9</sup>
</xref>
are required.</p>
</sec>
<sec>
<title>Installation</title>
<p>
<italic>Psychopy_ext</italic>
is part of the Standalone PsychoPy distribution. Inexperienced users are encouraged to obtain it by downloading this distribution because it comes packaged with
<italic>psychopy_ext</italic>
dependencies as well as a number of other scientific packages. More advanced users can install
<italic>psychopy_ext</italic>
using the standard
<italic>pip</italic>
installation procedure (
<italic>pip install psychopy_ext</italic>
) provided they have dependencies already installed. However, for maximal flexibility users are encouraged to download the source package of
<italic>psychopy_ext</italic>
and place it together with their experiment projects without ever installing it.</p>
</sec>
<sec>
<title>Documentation</title>
<p>
<italic>Psychopy_ext</italic>
provides an extensive user manual and a growing list of demos, including behavioral and fMRI experiments, single- and multiple-task experiments, and fixed-length and adaptive (staircase) paradigms.</p>
</sec>
<sec>
<title>Creating your own project</title>
<p>The easiest way to get started with
<italic>psychopy_ext</italic>
is to copy the entire
<italic>demos</italic>
folder, choose the demo most closely resembling the user's paradigm, and adjust it accordingly.</p>
</sec>
<sec>
<title>License</title>
<p>
<italic>Psychopy_ext</italic>
is distributed under GNU General Public License v3 or later
<xref ref-type="fn" rid="fn0010">
<sup>10</sup>
</xref>
.</p>
</sec>
<sec>
<title>Stability</title>
<p>
<italic>Psychopy_ext</italic>
has been in development for 4 years and has reached a stable core architecture with the current release, version 0.5. It has been included in the Standalone PsychoPy distribution since version 1.79. All modules in the package except for
<italic>fmri</italic>
(which is provided as a
<italic>beta</italic>
version) are automatically tested with unit tests.</p>
</sec>
</sec>
</sec>
<sec>
<title>Overview of currently available tools</title>
<p>Below I evaluate currently available tools using these criteria and highlight where
<italic>psychopy_ext</italic>
could be used to provide a better user experience in the context of psychology and neuroscience.</p>
<sec>
<title>Streamlining within package</title>
<p>Most currently available tools for researchers excel at providing building blocks for specific tasks but typically lack standard routines (or templates) to easily integrate these blocks together. For example, creating a Gabor stimulus in
<italic>PsychoPy</italic>
is simple and achieved by calling a single command. However, a real experiment is never limited to a mere presentation of a stimulus but rather consists of a series of manipulations on these primitive building blocks. Crucially, many of these manipulations are not predefined in
<italic>PsychoPy</italic>
. For instance, instructions are usually shown at the beginning of the experiment, trials consist of showing several stimuli in a row (e.g., fixation, stimulus, fixation, and recording the participant's response), and data and runtime logs are recorded to data files, yet none of these steps have the same single-command access as the Gabor patch.</p>
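The trial structure described above (fixation, stimulus, fixation, response) can be abstracted into a declarative list of events that a generic loop then presents, instead of hand-coded presentation logic per experiment. This is an illustrative sketch, not psychopy_ext's actual event format.

```python
# A trial as a declarative list of (event name, duration in seconds)
# pairs; a framework can run any such trial with one generic loop.
# (Hypothetical format, for illustration only.)
trial_events = [
    ("fixation", 0.300),
    ("stimulus", 0.150),
    ("fixation", 0.300),
    ("response", None),  # None: wait until the participant responds
]

def trial_duration(events):
    """Total fixed duration of a trial, ignoring response-terminated events."""
    return sum(d for _, d in events if d is not None)

fixed = trial_duration(trial_events)
```

With trials expressed as data rather than code, single-command access to the whole trial sequence becomes possible.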
<p>Presumably, such a limitation is not a shortcoming but rather reflects the widespread philosophy that each experiment might require a different approach and that a user should be free to combine building blocks for the particular task at hand. However, as illustrated above, once certain assumptions are imposed, even complex workflows can often be abstracted and thus streamlined to a large extent, in effect requiring only minimal customization on the user's part.</p>
<p>Many other packages used by researchers suffer from a similar limitation. For example, while
<italic>matplotlib</italic>
can quickly produce plots, with the default settings they are rather unappealing, and a lot of handiwork is required each time to prepare figures for publication. It is possible that publication quality is not the major goal of
<italic>matplotlib</italic>
or, similarly to
<italic>PsychoPy</italic>
, requirements for figures might be thought to vary case-by-case. However, as
<italic>seaborn</italic>
successfully demonstrates, attractive publication-quality plots can be produced by default even for complex analyses, and it is therefore incorporated in
<italic>psychopy_ext</italic>
.</p>
</sec>
<sec>
<title>Integration across packages</title>
<p>Most currently available tools for researchers address only a single step of the typical workflow depicted in Figure
<xref ref-type="fig" rid="F1">1</xref>
. For example,
<italic>PyMVPA</italic>
and
<italic>pandas</italic>
are powerful libraries for data analysis, but they make few or no assumptions about how the data were obtained and what their structure might be. The lack of such assumptions makes these tools very flexible and applicable in nearly all contexts but, unfortunately, at the cost of users having to connect separate workflow steps manually.</p>
<p>Consider, for example,
<italic>pandas'</italic>
generic Split-Apply-Combine routine, which allows users to split data into groups according to a certain criterion, then apply a function to each of those groups, and combine the results into a new data structure. Such a routine is clearly useful for data analysis in general. In practice, many psychologists will end up using it to compute the average response time or accuracy across participants. Implementing this computation on top of the raw Split-Apply-Combine routine is somewhat tedious, yet given its ubiquity a researcher could rightfully expect it to be available out of the box. However,
<italic>pandas</italic>
is not specialized for neuroscience and thus cannot provide such function. Similarly,
<italic>PsychoPy</italic>
, the leading Python package for designing and coding experiments, currently does not provide an interface for conducting data analysis either.</p>
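<p>To make the pattern concrete, the computation in question is a single Split-Apply-Combine call in <italic>pandas</italic>; the following minimal sketch (column names are hypothetical) computes the mean response time per participant:</p>

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial
df = pd.DataFrame({
    'subjid': ['s1', 's1', 's2', 's2'],
    'cond':   ['A',  'B',  'A',  'B'],
    'rt':     [0.52, 0.61, 0.48, 0.70],
})

# Split by participant, apply the mean, combine into a Series
mean_rt = df.groupby('subjid')['rt'].mean()
```

<p>A package aware of typical experimental designs could expose such a call behind a single specialized function, which is the approach <italic>psychopy_ext</italic> takes.</p>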
<p>To the best of my knowledge, there are currently no tools that directly connect experiments, analyses, and simulations. However, there have been several attempts to better integrate the research workflow. One notable effort in the neuroscience community is the
<italic>NeuroDebian</italic>
project (Halchenko and Hanke,
<xref ref-type="bibr" rid="B6">2012</xref>
) that provides a platform where many tools used by neuroscientists are available with a single installation command. Since the entire operating system and its packages can be wrapped in a virtual machine, this project provides a viable solution to the difficult problem of sharing the entire research workflow in such a way that anybody is guaranteed to be able to run the project.</p>
<p>Alternative solutions include research-oriented workflow management systems such as VisTrails
<xref ref-type="fn" rid="fn0011">
<sup>11</sup>
</xref>
, Taverna
<xref ref-type="fn" rid="fn0012">
<sup>12</sup>
</xref>
, Galaxy
<xref ref-type="fn" rid="fn0013">
<sup>13</sup>
</xref>
, and ActivePapers
<xref ref-type="fn" rid="fn0014">
<sup>14</sup>
</xref>
that link separate workflow components into a single pipeline. These systems are very powerful and versatile, yet they might be too elaborate for the typically modest workflows of neuroscientists and psychologists. Moreover, a user nonetheless has to implement much of the communication between nodes in the workflow manually.</p>
<p>There are also a number of tools that integrate analysis output with manuscript production. Most notably,
<italic>Sweave</italic>
and
<italic>knitr</italic>
are popular packages for dynamic report generation that enable embedding R code outputs into text documents (see Open Science Paper
<xref ref-type="fn" rid="fn0015">
<sup>15</sup>
</xref>
and Wallis et al.
<xref ref-type="fn" rid="fn0016">
<sup>16</sup>
</xref>
,
<xref ref-type="bibr" rid="B20a">2014</xref>
, for examples of usage in research). A similar
<italic>Python</italic>
implementation of
<italic>Sweave</italic>
is available via
<italic>Pweave</italic>
. There are also a number of alternatives for incorporating text into Python code, such as IPython Notebook (Perez and Granger,
<xref ref-type="bibr" rid="B18">2007</xref>
),
<italic>pylit</italic>
<xref ref-type="fn" rid="fn0017">
<sup>17</sup>
</xref>
,
<italic>pyreport</italic>
<xref ref-type="fn" rid="fn0018">
<sup>18</sup>
</xref>
, or for incorporating Python code into LaTeX documents (
<italic>pythonTeX</italic>
<xref ref-type="fn" rid="fn0019">
<sup>19</sup>
</xref>
), as well as language-independent solutions like
<italic>dexy.it</italic>
<xref ref-type="fn" rid="fn0020">
<sup>20</sup>
</xref>
. It is not clear at the moment which of these approaches will be adopted by the community at large, but in the future one of them could also be integrated into the
<italic>psychopy_ext</italic>
framework.</p>
</sec>
<sec>
<title>Reproducibility</title>
<p>Research output should be completely reproducible. However, in practice, this is often not the case. Researchers often fail to organize their code base and analysis outputs, do not keep track of changes, neglect to comment code, and usually complete a number of steps in their workflow manually, which makes an exact reproduction of the output hardly possible even for the original author. Unfortunately, few efforts have been put forward to address these issues.</p>
<p>One simple way to improve reproducibility is provided by version control systems such as
<italic>git</italic>
or Mercurial (
<italic>hg</italic>
). These systems document changes in code, making it possible to go back and inspect the parameters that produced a particular output. A similar but more research-oriented approach is implemented in the
<italic>Sumatra</italic>
package (Davison,
<xref ref-type="bibr" rid="B4">2012</xref>
).
<italic>Sumatra</italic>
is meant for keeping records of parameters in projects based on numerical simulations. It records the parameters used at each execution and also allows adding comments about simulations, linking to data files, and so on. Both version control systems and
<italic>Sumatra</italic>
can significantly increase organization and transparency. However, due to their relative complexity and the special commitment required from a researcher to maintain a log of activity, such systems are not widely adopted in the field. Arguably, such tools would work best if they operated implicitly, which is the approach that
<italic>psychopy_ext</italic>
enacts. Moreover, reproducibility usually suffers from a lack of instructions on how to reproduce given results and which parameters to use, rather than from a mere lack of code history. An ideal tool should therefore encourage code documentation and overall organization.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>Overall, a number of excellent specialized Python packages are available to researchers today, yet there does not appear to be a package that matches the three criteria I proposed for an “ideal” tool. In particular, current tools largely do not provide a top-down approach to a typical scientific routine, in which the entire workflow could run largely automatically with only occasional user intervention where customization to particular needs (such as defining stimuli or selecting analysis conditions) is necessary.</p>
</sec>
</sec>
<sec>
<title>
<italic>Psychopy_Ext</italic>
components</title>
<sec>
<title>Overview</title>
<p>
<italic>Psychopy_ext</italic>
is composed of six largely distinct modules: user interface (extends
<italic>argparse</italic>
and
<italic>psychopy.gui</italic>
), experiment creation (extends
<italic>PsychoPy</italic>
), (generic) data analysis (extends
<italic>pandas</italic>
), fMRI data analysis (extends
<italic>pymvpa2</italic>
), modeling, and plotting (extends
<italic>matplotlib</italic>
). The modules easily combine together in order to streamline user's workflow (Figure
<xref ref-type="fig" rid="F1">1</xref>
).</p>
</sec>
<sec>
<title>Project structure</title>
<p>
<italic>Psychopy_ext</italic>
assumes the position that all project-related materials must reside together, organized in a rigid and consistent folder and file naming structure (Figure
<xref ref-type="fig" rid="F2">2</xref>
). Data and other output files are stored in separate folders for each study, all of which reside in the Project folder (unless specified otherwise). Such organization already improves a researcher's habits with no extra effort and significantly facilitates collaboration and reproducibility.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption>
<p>
<bold>A recommended project structure</bold>
.</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0002"></graphic>
</fig>
<p>A Project is assumed to consist of multiple Studies, each defined by a separate Python script (Figure
<xref ref-type="fig" rid="F3">3</xref>
). A Study consists of experiment, analysis, simulation, and any other user defined classes. Any non-private methods defined by these classes (such as running the experiment, displaying all stimuli, plotting average response times and so on) can be called via GUI or a command-line interface (see
<italic>User interfaces</italic>
). It is also possible to limit callable methods by providing
<italic>actions</italic>
keyword to the class constructor.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption>
<p>
<bold>A typical project structure</bold>
. A project is composed of one or more studies that are defined in separate scripts, and each of them can have experiment, analysis, simulation, or fMRI analysis classes defined. Experiments can have one or more tasks (like a training paradigm and then testing performance), that can be further subdivided into smaller blocks, providing short pauses in between the blocks. Each block has a list of trials that are composed of a list of events. For fMRI analyses, computations occur per participant per ROI.</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0003"></graphic>
</fig>
<p>None of these scripts is meant to be called directly. Rather, a single file,
<italic>run.py</italic>
, is used in order to provide a unified interface. Running this file will open a GUI where a user can choose which study to run and what parameters to use, or parameters can be passed directly via a command-line interface. Finally, parameters specific to particular setups, such as monitor sizes and distances, can be specified in
<italic>computer.py</italic>
file, providing seamless functioning across multiple machines.</p>
</sec>
<sec>
<title>User interfaces</title>
<p>To facilitate the goal of unifying all research steps,
<italic>psychopy_ext</italic>
module
<italic>ui</italic>
automatically generates command-line (CLI) and graphical user interfaces (GUI) from the user's code. It scrapes through all scripts (using Python's
<italic>inspect</italic>
module) looking for non-private classes and functions, and extracts initialization parameter values stored in the
<italic>name</italic>
,
<italic>info</italic>
and
<italic>rp</italic>
variables. The
<italic>name</italic>
is used as an alias in the CLI to call that class.
<italic>info</italic>
is a concept inherited from PsychoPy's
<italic>extraInfo</italic>
and is defined as a dictionary of
<italic>(key, value)</italic>
pairs of information that a user wants to later save in an output file (e.g., participant ID). Finally,
<italic>rp</italic>
defines parameters that are not saved in the output but control how a script runs. For example, they could control whether output files are saved, whether unit tests should be performed, and so on. A number of standard
<italic>rp</italic>
options are already built-in (see Figure
<xref ref-type="fig" rid="F4">4</xref>
).</p>
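<p>Schematically, these attributes are plain Python data assigned in the class constructor. The following sketch is purely illustrative (the class and parameter names are hypothetical, not the actual <italic>psychopy_ext</italic> API):</p>

```python
# Illustrative sketch of the name / info / rp convention described above;
# in psychopy_ext, the base classes consume these attributes to build
# the CLI and GUI automatically.
class MyExp(object):
    def __init__(self):
        self.name = 'myexp'  # alias used to call this class from the CLI
        # info: (key, value) pairs that will be saved in the output file
        self.info = {'subjid': 'subj_01', 'session': 1}
        # rp: runtime parameters that control execution but are not saved
        self.rp = {'no_output': False, 'unittest': False}

exp = MyExp()
```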
<fig id="F4" position="float">
<label>Figure 4</label>
<caption>
<p>
<bold>Graphical user interface (GUI)</bold>
.
<italic>Psychopy_ext</italic>
converts
<italic>info</italic>
and
<italic>rp</italic>
parameters found in the class definition of an experiment or an analysis into GUI widgets, and methods into buttons. Note that this GUI is completely automatically generated from a class definition and does not require user intervention.</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0004"></graphic>
</fig>
<p>When the
<italic>run.py</italic>
file is called, a GUI is generated using these parameters (Figure
<xref ref-type="fig" rid="F4">4</xref>
). A GUI is a
<italic>wxPython</italic>
app where different studies are presented in a
<italic>wx.Listbook</italic>
and each task (running the experiment or performing analyses) is nested within its tab as a
<italic>wx.Notebook</italic>
with input fields generated from
<italic>info</italic>
and
<italic>rp</italic>
(note different widgets for different input types) and buttons created for available actions (e.g., plot response time data or plot accuracy). As such,
<italic>psychopy_ext</italic>
significantly extends
<italic>PsychoPy's</italic>
functionality where only a simple dialog box is available via its
<italic>gui</italic>
module.</p>
<p>Most users will benefit from the automatically generated GUI for a number of reasons. First, running and rerunning experiments or analyses while manipulating various parameters becomes much easier, merely a matter of ticking the right boxes and clicking buttons rather than editing source code every time. Moreover, when a button is pressed, a new subprocess is initiated to run a particular task. Thus, a user can keep the GUI open and continue changing and rerunning the code with the same parameters, which greatly speeds up development. Finally, rerunning the project becomes much easier for other researchers.</p>
<p>Some users will also appreciate a powerful CLI for running tasks. CLI allows users to call required tasks directly without the intermediate GUI step. It uses syntax comparable to Python's
<italic>argparse</italic>
with the difference that positional arguments (script, class, and function names) come before optional arguments, for example (also see Figure
<xref ref-type="fig" rid="F3">3</xref>
):
<preformat>python run.py main exp run --subjid subj_01 --no_output</preformat>
</p>
<p>If no arguments are provided (i.e.,
<monospace>python run.py</monospace>
), a GUI is generated instead.</p>
<p>Note that using Python's default
<italic>argparse</italic>
would be considerably less convenient as one would have to manually update
<italic>argparse</italic>
definitions every time a new option or function is introduced to a class.</p>
<p>Moreover, it is important to understand that such user interfaces would not be possible if a particular code structure were not imposed by
<italic>psychopy_ext</italic>
. In order to use an interface, a user is forced to organize her code into classes and functions, and to decide up front which parameters can be manipulated by a user. Such organization brings significant clarity to the code (variables are not scattered around it) and teaches a user the benefits of OOP. Moreover, reproducibility is inherently built into the code and does not require any special preparation before publishing. In fact, the significant time investment in preparing code for public release is often cited as one of the reasons researchers do not publish their code by default (Barnes,
<xref ref-type="bibr" rid="B2">2010</xref>
), thus
<italic>psychopy_ext</italic>
might help to alter this tendency.</p>
</sec>
<sec>
<title>Running experiments</title>
<p>The experiment module
<italic>exp</italic>
provides a basic template and multiple common functions for presenting stimuli on a screen and collecting responses. An experiment is created by defining a class that inherits from the
<italic>Experiment</italic>
class, thus gently introducing the concept of inheritance. This may be somewhat unusual to users accustomed to linear experimental scripts, but the advantage is that a number of functions can readily be used from the parent class or overridden if necessary. Only stimulus definition, trial structure, and trial list creation always have to be defined in the child class (see Figure
<xref ref-type="fig" rid="F5">5</xref>
; Listing
<xref ref-type="fig" rid="L1">1</xref>
). Again, a good practice of modularity becomes natural in this setting.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption>
<p>
<bold>A typical experiment and analysis structure</bold>
. A user executes
<italic>run.py</italic>
file either without any arguments (resulting in a GUI) or with them (as shown in this example). Then, relevant scripts (brown), classes (purple) and methods (black) are found and performed. A minimal structure of the script is depicted in the lower panel. The user only has to specify stimuli, trial structure, and the list of trials for experiment, and an analysis method for analysis.</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0005"></graphic>
</fig>
<fig id="L1" position="float">
<label>Listing 1</label>
<caption>
<p>
<bold>The simplest fully functional experiment (with data and log files generated) that shows eight trials of Gabor grating and waits for response in between them</bold>
.</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0006"></graphic>
</fig>
<p>Listing
<xref ref-type="fig" rid="L1">1</xref>
shows how to create a simple experiment in
<italic>psychopy_ext</italic>
consisting of a single task only. To have more than one task (e.g., first training on a particular set of stimuli and then testing performance), multiple Task classes (inheriting from the
<italic>exp.Task</italic>
class) can be defined separately with the same basic structure as demonstrated above (see Figure
<xref ref-type="fig" rid="F3">3</xref>
). The tasks should be stored in a
<monospace>self.tasks</monospace>
variable in the main Experiment class, which would then call each task one by one during the runtime. Each Task can further be divided into Blocks with short pauses in between by defining a
<monospace>self.blockcol</monospace>
variable that refers to a particular column in
<monospace>self.exp_plan</monospace>
variable where block number is stored. Blocks consist of Trials that consist of Events (e.g., show a fixation, show a stimulus, show a fixation and wait for response). The flow of these components is handled by the
<italic>exp</italic>
module; a user is only required to define these structures (though deeper customization is a matter of overriding the default methods, of course). The Experiment, Task, and Block classes have
<monospace>before</monospace>
and
<monospace>after</monospace>
methods that allow customizing what happens just before and right after each of them is executed. These methods are typically useful for defining instruction or feedback displays between tasks.</p>
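<p>The nesting described above can be pictured as plain data. A purely illustrative sketch (not the actual <italic>psychopy_ext</italic> classes), with each event represented as a (name, duration) pair and <monospace>None</monospace> standing for “wait until a response”:</p>

```python
# Hypothetical illustration of the Task > Block > Trial > Event nesting
trial = [('fixation', 0.5),
         ('stimulus', 0.3),
         ('fixation', None)]   # None: wait for a response

block = [trial] * 4            # a block of four identical trials
task = [block, block]          # two blocks with a short pause in between

n_trials = sum(len(b) for b in task)
```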
<p>Beyond streamlining experiment creation, the Experiment and Task classes offer several methods addressing typical researcher needs. First, every experiment inherited from these classes has built-in automatic running functionality that allows users to quickly go through the entire experiment, in essence acting as a unit test. Moreover, keyboard input is simulated such that responses can be collected and analyzed. A user can even define simulated responses such that they match the expected outcome of the experiment. Such a manipulation is especially handy when a novel analysis technique is used and the user is not confident that it was implemented correctly. Together, this functionality enables users to quickly verify that both the experimental and the analysis code are working properly prior to collecting any data.</p>
<p>The Experiment class also simplifies study registration and data collection processes by integrating with version control systems
<italic>git</italic>
and Mercurial (
<italic>hg</italic>
). If an appropriate flag is selected, at the end of the experiment new data and log files are committed and pushed to a remote repository. This feature therefore allows automatic data sharing among collaborators, creates an instant backup, and prevents users from tampering with raw data.</p>
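<p>Conceptually, this step amounts to issuing a short sequence of version control commands at the end of a session. A minimal sketch using only the standard library (the exact commands <italic>psychopy_ext</italic> issues may differ):</p>

```python
import subprocess

def commit_data(paths, message='new data'):
    """Build the git commands to stage, commit, and push data files.

    A hypothetical sketch; running the commands assumes a git
    repository with a configured remote.
    """
    return [['git', 'add'] + list(paths),
            ['git', 'commit', '-m', message],
            ['git', 'push']]

cmds = commit_data(['data/subj_01.csv', 'logs/subj_01.log'])
# To execute: for cmd in cmds: subprocess.check_call(cmd)
```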
</sec>
<sec>
<title>Data analysis and plotting</title>
<p>Data analysis (
<italic>stats</italic>
) and plotting (
<italic>plot</italic>
) modules aim to simplify basic data analysis and plotting. The
<italic>stats</italic>
module tailors
<italic>pandas</italic>
functionality for typical analysis patterns in neuroscience and psychological research. In particular, it provides the
<italic>aggregate</italic>
function which splits data into groups according to a certain criterion (e.g., a participant ID) and applies a requested function to each group (an average, by default), returning a
<italic>pandas.DataFrame</italic>
. For example, to aggregate response times for each participant separately and then plot averaged data in two subplots (per session) with three levels on the x-axis and two conditions in different colors with error bars (Figure
<xref ref-type="fig" rid="F6">6</xref>
, bar plot), the following command is used:</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption>
<p>
<bold>Plots generated by the
<italic>plot</italic>
module. Left panel:</bold>
bar plot, line plot, and scatter plot;
<bold>right panel</bold>
: bean plot (Kampstra,
<xref ref-type="bibr" rid="B10">2008</xref>
) and matrix plot. The pretty color scheme is applied by default and subplot layout, tick spacing, labels and other plot properties are inferred from the original data without any manual input. *
<italic>p</italic>
< 0.05, **
<italic>p</italic>
< 0.01.</p>
</caption>
<graphic xlink:href="fninf-07-00052-g0007"></graphic>
</fig>
<preformat>agg = stats.aggregate(df, rows='levels', subplots='subplots', cols='cond', yerr='subjID', values='rt') </preformat>
<p>This results in a DataFrame with subplot, level, and condition labels in its index, and an average per participant (as specified by the
<monospace>yerr</monospace>
keyword) in columns.</p>
<p>The
<monospace>agg</monospace>
variable can be used directly for plotting, vastly simplifying and improving the plotting experience:
<preformat>plt = plot.Plot()
agg = plt.plot(agg, kind='bar')
plt.show()</preformat>
</p>
<p>On top of plotting data, the
<italic>plot</italic>
function also:
<list list-type="bullet">
<list-item>
<p>creates the required number of subplots</p>
</list-item>
<list-item>
<p>formats and labels axes</p>
</list-item>
<list-item>
<p>formats legend</p>
</list-item>
<list-item>
<p>draws error bars</p>
</list-item>
<list-item>
<p>for line and bar plots, performs a
<italic>t</italic>
-test (either one-sample or two-sample) and displays the results with one or more stars above</p>
</list-item>
<list-item>
<p>chooses pretty color and layout options reminiscent of R's
<italic>ggplot2</italic>
using
<italic>seaborn</italic>
, or
<italic>pandas</italic>
default color scheme if
<italic>seaborn</italic>
is not available.</p>
</list-item>
</list>
</p>
<p>Observe that the resulting plot is immediately correctly formatted because the
<italic>aggregate</italic>
function recorded the data layout information in the index and column names. Moreover, in many cases it has enough information (labels, error bars) and polish for publication, in part thanks to the
<italic>seaborn</italic>
package. (In future releases, a tighter integration with
<italic>seaborn</italic>
is planned.)</p>
<p>Also note that the plotting module takes a slightly different approach from
<italic>matplotlib</italic>
by requiring the Plot() class to be initialized for each plot. Due to this change, it becomes possible to easily and automatically create figures with multiple subplots. For example, a subplot does not need to be created prior to plotting; it is created automatically upon the next call of the
<italic>plot</italic>
function.</p>
</sec>
<sec>
<title>Functional magnetic resonance imaging (fMRI) analysis</title>
<p>Preprocessing of functional magnetic resonance imaging (fMRI) data has become mainstream as a result of a number of robust and free analysis tools, such as
<italic>SPM</italic>
(Ashburner and Friston,
<xref ref-type="bibr" rid="B1">2005</xref>
),
<italic>FreeSurfer</italic>
<xref ref-type="fn" rid="fn0021">
<sup>21</sup>
</xref>
, or
<italic>AFNI</italic>
(Cox,
<xref ref-type="bibr" rid="B3">1996</xref>
). More recently, multivariate pattern analysis (MVPA) has become available to many researchers thanks to packages such as
<italic>PyMVPA</italic>
(Hanke et al.,
<xref ref-type="bibr" rid="B7">2009</xref>
). However, similar to stimulus presentation packages, many free fMRI analysis tools lack standard “plug-and-play” routines that would allow users to carry out data analysis automatically. For example, setting up a generic routine in
<italic>PyMVPA</italic>
that would go over all participants, extract relevant regions of interest (ROIs), perform correlational or support vector machine (SVM) analyses, and plot the results is not possible out of the box, because researchers usually have their own preferred workflows.</p>
<p>However, in
<italic>psychopy_ext</italic>
this goal becomes viable due to a well-controlled data structure. The
<italic>fmri</italic>
module consists of the
<italic>Preproc</italic>
and the
<italic>Analysis</italic>
classes that only require the relevant participant IDs and ROIs to be specified to carry out analyses in full. The
<italic>Preproc</italic>
class generates batch scripts to compute beta- or
<italic>t</italic>
-values using Statistical Parametric Mapping toolbox (Ashburner and Friston,
<xref ref-type="bibr" rid="B1">2005</xref>
). In future releases, this functionality could be extended to automate the entire preprocessing workflow using
<italic>Nipype</italic>
(Gorgolewski et al.,
<xref ref-type="bibr" rid="B5">2011</xref>
) or
<italic>Lyman</italic>
<xref ref-type="fn" rid="fn0022">
<sup>22</sup>
</xref>
packages. The
<italic>Analysis</italic>
class uses preprocessed data to display regions of interest, plot changes in the fMRI signal intensity and perform univariate (BOLD signal averages for each condition) and multivariate (MVPA) analyses (however, group analyses are not implemented).</p>
<p>For MVPA analyses, the two most popular analysis approaches, namely correlational and SVM analyses, are provided. Both are implemented in a similar fashion. First, the data are normalized for each run by subtracting the mean across conditions per voxel (for correlational analyses) or across voxels per condition (for SVM analyses; Kubilius et al.,
<xref ref-type="bibr" rid="B12">2011</xref>
). Next, the data are divided into two halves (for correlations) or into roughly 75% training data (to train the SVM) and 25% test data (to test the SVM's performance). Pair-wise correlations or pair-wise decoding accuracies for all possible combinations are then computed. For SVM, a nu-SVM with a linear kernel is used by default and an average of the test set is taken to improve performance (Kubilius et al.,
<xref ref-type="bibr" rid="B12">2011</xref>
). In order to achieve more stable performance, this computation is repeated for 100 iterations with randomly chosen splits of samples. The outputs of these computations are provided in a standard
<italic>pandas</italic>
DataFrame format, which can further be used to plot the results within the same
<italic>psychopy_ext</italic>
framework.</p>
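<p>The run-wise normalization step described above reduces to subtracting a mean along one axis of a conditions-by-voxels array. A sketch with <italic>numpy</italic> (the array layout is assumed for illustration, not taken from the module's code):</p>

```python
import numpy as np

# Hypothetical data for one run: rows are conditions, columns are voxels
data = np.array([[1.0, 2.0, 3.0],
                 [3.0, 4.0, 5.0]])

# Correlational analyses: subtract the mean across conditions per voxel
corr_norm = data - data.mean(axis=0)

# SVM analyses: subtract the mean across voxels per condition
svm_norm = data - data.mean(axis=1, keepdims=True)
```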
<p>Although this module is experimental at the moment due to the lack of relevant unit tests, it has already been used in several published or submitted papers (Kubilius et al.,
<xref ref-type="bibr" rid="B12">2011</xref>
; Kubilius et al., unpublished results). Moreover, a user can easily adapt the details of a particular analysis to her liking while still benefiting from the implementation of the global routine.</p>
</sec>
<sec>
<title>Simulations</title>
<p>In many vision experiments it is important to verify that the observed effects are not a mere result of some low-level image properties that are not related to the investigated effect. Several simple models have been used in the literature to rule out such alternative explanations, including computing pixel-wise differences between conditions (e.g., Op de Beeck et al.,
<xref ref-type="bibr" rid="B14">2001</xref>
), applying a simple model of V1 such as the GaborJet model (Lades et al.,
<xref ref-type="bibr" rid="B13">1993</xref>
), or applying a more complex model of the visual system such as HMAX (Riesenhuber and Poggio,
<xref ref-type="bibr" rid="B19">1999</xref>
).
<italic>Psychopy_ext</italic>
provides a wrapper to these models so that they could be accessed with the same syntax, namely, by passing filenames or
<italic>numpy</italic>
arrays of the images that should be analyzed and compared:
<preformat>model = models.HMAX()
model.compare(filenames)</preformat>
</p>
<p>To get raw model output, the
<italic>run</italic>
command can be used:
<preformat>model = models.HMAX()
out = model.run(test_ims=test_fnames, train_ims=train_fnames)</preformat>
</p>
<p>The pixel-wise difference model is the simplest model for estimating differences between images. Images are converted to grayscale, and a Euclidean distance is computed between all pairs of stimuli, resulting in an <italic>n</italic>-by-<italic>n</italic> dissimilarity matrix for
<italic>n</italic>
input images (Op de Beeck et al.,
<xref ref-type="bibr" rid="B14">2001</xref>
).</p>
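<p>This computation reduces to Euclidean distances between flattened grayscale images; a minimal <italic>numpy</italic> sketch (the toy image vectors are made up for illustration):</p>

```python
import numpy as np

# Toy grayscale "images", already flattened into 1-D pixel vectors
imgs = [np.array([0., 0., 1., 1.]),
        np.array([0., 1., 1., 0.]),
        np.array([1., 1., 0., 0.])]

n = len(imgs)
dissim = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Euclidean distance between the two pixel vectors
        dissim[i, j] = np.linalg.norm(imgs[i] - imgs[j])
```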
<p>The GaborJet model (Lades et al.,
<xref ref-type="bibr" rid="B13">1993</xref>
) belongs to the family of minimal V1-like models in which image decomposition is performed by convolving an image with Gabor filters of different orientations and spatial frequencies. In the GaborJet model, convolution is performed using 8 orientations (in steps of 45°) and 5 spatial frequencies on a 10-by-10 grid in the Fourier domain. The output consists of the magnitude and phase of this convolution (arrays of 4000 elements), and the sampled grid positions. For comparing model outputs, usually only the magnitudes are used, to compute an angular distance between the two output vectors (Xu et al.,
<xref ref-type="bibr" rid="B23">2009</xref>
). In
<italic>psychopy_ext</italic>
, the code has been implemented in Python by following the MATLAB implementation available on Irving Biederman's website
<xref ref-type="fn" rid="fn0023">
<sup>23</sup>
</xref>
.</p>
<p>The HMAX model (Riesenhuber and Poggio,
<xref ref-type="bibr" rid="B19">1999</xref>
) has been proposed as a generic architecture of the visual cortex. It consists of four image-processing layers and an output layer. Initially, a convolution between the image and Gabor filters of four orientations (in steps of 45°) and 12 spatial frequencies (range: 7–29 px), grouped into four channels, is computed (layer S1). Next, a maximum of the outputs of the same orientation over each spatial frequency channel is taken (layer C1). The outputs of this operation are pooled together into 256 distinct four-orientation configurations (for each scale; layer S2), and a final maximum across the four scales is computed (layer C2), resulting in an output vector with 256 elements. If training data are provided, these responses can further be compared to the stored representations at the final view-tuned units (VTU) layer. In
<italic>psychopy_ext</italic>
, the code has been implemented in Python by following the MATLAB implementation by Minjoon Kouh and the original implementation available on Max Riesenhuber's website
<xref ref-type="fn" rid="fn0024">
<sup>24</sup>
</xref>
. (Note that the current implementation of HMAX as provided by the Poggio lab is much more advanced than the one implemented in
<italic>psychopy_ext</italic>
.)</p>
</sec>
</sec>
<sec>
<title>Limitations</title>
<p>
<italic>Psychopy_ext</italic>
debuted publicly in November 2013 and thus has not yet been adopted and extensively tested by the community. It is therefore difficult to predict the learning curve of the underlying
<italic>psychopy_ext</italic>
philosophy and to what extent it resonates with the needs of the community. For example, many researchers are used to linear experimental and analysis scripts, while
<italic>psychopy_ext</italic>
relies on object-oriented programming concepts such as classes and modular functions in order to provide inheritance and flexibility. However, the object-oriented approach also means that whenever necessary functions are not available directly from
<italic>psychopy_ext</italic>
or do not meet user's needs, they can be overridden or used directly from the packages that are extended, often (but not always) without affecting the rest of the workflow.</p>
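As a hypothetical illustration of this overriding pattern (the class and method names below are invented for the example and are not the actual psychopy_ext API):

```python
# Hypothetical sketch of overriding inherited behavior; the names
# below are illustrative, not the actual psychopy_ext API.
class BaseExperiment:
    def create_stimuli(self):
        return ["default stimulus"]

    def run(self):
        # The inherited workflow calls whatever create_stimuli
        # the concrete subclass provides.
        return self.create_stimuli()


class MyExperiment(BaseExperiment):
    # Override only the piece that needs to change; the rest of
    # the inherited workflow (here, run) is reused untouched.
    def create_stimuli(self):
        return ["custom stimulus A", "custom stimulus B"]


print(MyExperiment().run())  # ['custom stimulus A', 'custom stimulus B']
```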
<p>Furthermore,
<italic>psychopy_ext</italic>
was designed to improve the workflow of a typical
<italic>PsychoPy</italic>
user. Researchers that use other stimulus generation packages or even different programming languages (such as
<italic>R</italic>
for data analyses) will not be able to benefit from
<italic>psychopy_ext</italic>
as easily. This limitation is partly a design choice to keep workflows dependent on as few tools as possible. Python has a large number of powerful packages, and
<italic>psychopy_ext</italic>
is committed to promoting them over equivalent solutions in other languages. Nonetheless, when no Python alternative exists, users can easily call their
<italic>R</italic>
scripts (via rpy2
<xref ref-type="fn" rid="fn0025">
<sup>25</sup>
</xref>
),
<italic>C/C++</italic>
code (via Python's own
<italic>ctypes</italic>
), MATLAB scripts (via pymatlab
<xref ref-type="fn" rid="fn0026">
<sup>26</sup>
</xref>
or mlab
<xref ref-type="fn" rid="fn0027">
<sup>27</sup>
</xref>
), and scripts in a number of other languages.</p>
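For instance, calling into compiled C code requires nothing beyond the standard library. The sketch below assumes a Unix-like system where ctypes can locate the C math library; the fallback library name is an assumption for Linux.

```python
import ctypes
import ctypes.util

# Load the C math library and call its sqrt() directly from Python.
# The fallback name "libm.so.6" is an assumption for Linux systems.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double     # declare the return type...
libm.sqrt.argtypes = [ctypes.c_double]  # ...and the argument types

print(libm.sqrt(9.0))  # 3.0
```

Declaring `restype` and `argtypes` up front lets ctypes convert Python floats to C doubles and back without silent truncation.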
</sec>
<sec>
<title>Discussion and future roadmap</title>
<p>Four years into development,
<italic>psychopy_ext</italic>
is already successfully addressing a number of issues encountered in streamlining a typical research workflow and its reproducibility. By design, it enables researchers to produce well-organized projects with a number of typical steps automated, providing prebaked templates and interfaces for common tasks, and implementing default unit testing in the form of customizable simulations. These projects can be rapidly developed as
<italic>psychopy_ext</italic>
requires only minimal customization from the user to run, and they are easily reproducible via an automatically generated GUI.</p>
<p>In future releases
<italic>psychopy_ext</italic>
will introduce more tools to streamline typical routines encountered by psychologists and neuroscientists. Beyond small improvements, there are several intriguing possibilities that
<italic>psychopy_ext</italic>
could explore.</p>
<p>To begin, an interesting and, arguably, quite intuitive approach to reproducibility has been recently introduced by Stevens et al. (
<xref ref-type="bibr" rid="B20">2013</xref>
) in their Python package called
<italic>Lancet</italic>
. Often, reproducibility is understood as a
<italic>post-hoc</italic>
feature where a researcher cleans up and organizes her code just prior to publication. Since this final code has a very different structure from the naturally exploratory code of day-to-day research, extra effort is required to prepare it. In contrast,
<italic>Lancet</italic>
allows exploratory research to naturally grow from IPython Notebooks into more complex workflows where external processes can be launched and tracked from the same pipeline. Such natural code evolution is also encouraged in
<italic>psychopy_ext</italic>
albeit differently: by defining new classes and functions for new branches of exploration. Combining the functionality of the two approaches might be fruitful to explore in future releases of
<italic>psychopy_ext</italic>
.</p>
<p>Furthermore, given a neat integration of experimental and analysis workflow it would be possible to automatically produce reports of the experimental and analyses parameters and outputs. Upon integration of experiment's parameters, this feature could even lead to an initial draft of a manuscript with Methods and Results sections already partially prefilled. In fact, in the development branch of
<italic>psychopy_ext</italic>
, a very primitive approach to generating reports of analyses in a single HTML file is already available. More robust results could be achieved by integrating one of the Python packages for combining text and code as mentioned in the
<italic>Integration</italic>
section.</p>
<p>Integration of resources could be further fostered by a general project management tool. This tool could provide access to all project materials, as well as track and display changes in them, similar to
<italic>Projects</italic>
<xref ref-type="fn" rid="fn0028">
<sup>28</sup>
</xref>
software for Mac OS or a number of open and platform-independent workflow systems mentioned in the
<italic>Integration</italic>
section, especially VisTrails since it is Python-based. Alternatively, such tool could be browser-based, thus enabling researchers to access their projects from anywhere, and it could integrate well with the existing browser-based solutions, such as data plotting libraries.</p>
<p>Moving toward more GUI-based solutions also opens a possibility to improve user experience in designing an experiment and analysis. For example, experiment creation in
<italic>psychopy_ext</italic>
is already semantically structured: projects consist of experiments that consist of tasks that consist of blocks, trials and events. Such organization easily maps onto a GUI with blocks representing different components, somewhat akin to
<italic>PsychoPy Builder</italic>
. Similarly, a pivot table or pivot chart option, reminiscent of the one in
<italic>Microsoft Excel</italic>
, could be provided to allow quick exploration of the data.</p>
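Such a pivot operation is already available programmatically through pandas, on which psychopy_ext builds; a minimal sketch with toy data:

```python
import pandas as pd

# Toy response-time data in the long format typical of trial-by-trial
# behavioral logs (column names here are illustrative).
df = pd.DataFrame({
    "subjid": ["s1", "s1", "s2", "s2"],
    "cond":   ["easy", "hard", "easy", "hard"],
    "rt":     [0.42, 0.61, 0.39, 0.66],
})

# A pandas pivot table: one row per subject, one column per condition,
# averaging response times within each cell by default.
pivot = df.pivot_table(index="subjid", columns="cond", values="rt")
print(pivot)
```

A GUI front-end would essentially expose the `index`, `columns`, and `values` choices interactively, much like the drag-and-drop fields of a spreadsheet pivot chart.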
<p>Taken together,
<italic>psychopy_ext</italic>
provides a transparent and extensible framework for developing, sharing, and reusing code in neuroscience and psychology.</p>
<sec>
<title>Conflict of interest statement</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</sec>
</body>
<back>
<ack>
<p>I would like to thank Jonathan Peirce for his support in disseminating
<italic>psychopy_ext</italic>
. I am also grateful to Pieter Moors, Sander Van de Cruys, Maarten Demeyer, and Marco Maas for beta testing
<italic>psychopy_ext</italic>
, as well as reviewers of this manuscript and the editor for their valuable comments. This work was supported in part by a Methusalem Grant (METH/08/02) awarded to Johan Wagemans from the Flemish Government. Jonas Kubilius is a research assistant of the Research Foundation—Flanders (FWO).</p>
</ack>
<fn-group>
<fn id="fn0001">
<p>
<sup>1</sup>
<ext-link ext-link-type="uri" xlink:href="http://pandas.pydata.org">http://pandas.pydata.org</ext-link>
</p>
</fn>
<fn id="fn0002">
<p>
<sup>2</sup>
<ext-link ext-link-type="uri" xlink:href="http://statsmodels.sourceforge.net">http://statsmodels.sourceforge.net</ext-link>
</p>
</fn>
<fn id="fn0003">
<p>
<sup>3</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.python.org/dev/peps/pep-0020/">http://www.python.org/dev/peps/pep-0020/</ext-link>
</p>
</fn>
<fn id="fn0004">
<p>
<sup>4</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.psychopy.org/">http://www.psychopy.org/</ext-link>
</p>
</fn>
<fn id="fn0005">
<p>
<sup>5</sup>
<ext-link ext-link-type="uri" xlink:href="http://pandas.pydata.org/">http://pandas.pydata.org/</ext-link>
</p>
</fn>
<fn id="fn0006">
<p>
<sup>6</sup>
<ext-link ext-link-type="uri" xlink:href="http://docutils.sourceforge.net/">http://docutils.sourceforge.net/</ext-link>
</p>
</fn>
<fn id="fn0007">
<p>
<sup>7</sup>
<ext-link ext-link-type="uri" xlink:href="http://stanford.edu/~mwaskom/software/seaborn">http://stanford.edu/~mwaskom/software/seaborn</ext-link>
</p>
</fn>
<fn id="fn0008">
<p>
<sup>8</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.pymvpa.org/">http://www.pymvpa.org/</ext-link>
</p>
</fn>
<fn id="fn0009">
<p>
<sup>9</sup>
<ext-link ext-link-type="uri" xlink:href="http://nipy.org/nibabel/">http://nipy.org/nibabel/</ext-link>
</p>
</fn>
<fn id="fn0010">
<p>
<sup>10</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.gnu.org/licenses/">http://www.gnu.org/licenses/</ext-link>
</p>
</fn>
<fn id="fn0011">
<p>
<sup>11</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.vistrails.org">http://www.vistrails.org</ext-link>
</p>
</fn>
<fn id="fn0012">
<p>
<sup>12</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.taverna.org.uk/">http://www.taverna.org.uk/</ext-link>
</p>
</fn>
<fn id="fn0013">
<p>
<sup>13</sup>
<ext-link ext-link-type="uri" xlink:href="http://galaxyproject.org/">http://galaxyproject.org/</ext-link>
</p>
</fn>
<fn id="fn0014">
<p>
<sup>14</sup>
<ext-link ext-link-type="uri" xlink:href="https://bitbucket.org/khinsen/active_papers_py">https://bitbucket.org/khinsen/active_papers_py</ext-link>
</p>
</fn>
<fn id="fn0015">
<p>
<sup>15</sup>
<ext-link ext-link-type="uri" xlink:href="https://github.com/cpfaff/Open-Science-Paper">https://github.com/cpfaff/Open-Science-Paper</ext-link>
</p>
</fn>
<fn id="fn0016">
<p>
<sup>16</sup>
<ext-link ext-link-type="uri" xlink:href="https://github.com/tomwallis/microperimetry_faces">https://github.com/tomwallis/microperimetry_faces</ext-link>
</p>
</fn>
<fn id="fn0017">
<p>
<sup>17</sup>
<ext-link ext-link-type="uri" xlink:href="http://pylit.berlios.de/">http://pylit.berlios.de/</ext-link>
</p>
</fn>
<fn id="fn0018">
<p>
<sup>18</sup>
<ext-link ext-link-type="uri" xlink:href="http://gael-varoquaux.info/computers/pyreport/">http://gael-varoquaux.info/computers/pyreport/</ext-link>
</p>
</fn>
<fn id="fn0019">
<p>
<sup>19</sup>
<ext-link ext-link-type="uri" xlink:href="https://github.com/gpoore/pythontex">https://github.com/gpoore/pythontex</ext-link>
</p>
</fn>
<fn id="fn0020">
<p>
<sup>20</sup>
<ext-link ext-link-type="uri" xlink:href="http://www.dexy.it/">http://www.dexy.it/</ext-link>
</p>
</fn>
<fn id="fn0021">
<p>
<sup>21</sup>
<ext-link ext-link-type="uri" xlink:href="http://surfer.nmr.mgh.harvard.edu/">http://surfer.nmr.mgh.harvard.edu/</ext-link>
</p>
</fn>
<fn id="fn0022">
<p>
<sup>22</sup>
<ext-link ext-link-type="uri" xlink:href="http://stanford.edu/~mwaskom/software/lyman/">http://stanford.edu/~mwaskom/software/lyman/</ext-link>
</p>
</fn>
<fn id="fn0023">
<p>
<sup>23</sup>
<ext-link ext-link-type="uri" xlink:href="http://geon.usc.edu/GWTgrid_simple.m">http://geon.usc.edu/GWTgrid_simple.m</ext-link>
</p>
</fn>
<fn id="fn0024">
<p>
<sup>24</sup>
<ext-link ext-link-type="uri" xlink:href="http://riesenhuberlab.neuro.georgetown.edu/hmax/index.html">http://riesenhuberlab.neuro.georgetown.edu/hmax/index.html</ext-link>
</p>
</fn>
<fn id="fn0025">
<p>
<sup>25</sup>
<ext-link ext-link-type="uri" xlink:href="http://rpy.sourceforge.net/rpy2.html">http://rpy.sourceforge.net/rpy2.html</ext-link>
</p>
</fn>
<fn id="fn0026">
<p>
<sup>26</sup>
<ext-link ext-link-type="uri" xlink:href="http://molflow.com/pymatlab.html">http://molflow.com/pymatlab.html</ext-link>
</p>
</fn>
<fn id="fn0027">
<p>
<sup>27</sup>
<ext-link ext-link-type="uri" xlink:href="https://github.com/ewiger/mlab">https://github.com/ewiger/mlab</ext-link>
</p>
</fn>
<fn id="fn0028">
<p>
<sup>28</sup>
<ext-link ext-link-type="uri" xlink:href="https://projects.ac/">https://projects.ac/</ext-link>
</p>
</fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ashburner</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Friston</surname>
<given-names>K. J.</given-names>
</name>
</person-group>
(
<year>2005</year>
).
<article-title>Unified segmentation</article-title>
.
<source>Neuroimage</source>
<volume>26</volume>
,
<fpage>839</fpage>
<lpage>851</lpage>
<pub-id pub-id-type="doi">10.1016/j.neuroimage.2005.02.018</pub-id>
<pub-id pub-id-type="pmid">15955494</pub-id>
</mixed-citation>
</ref>
<ref id="B2">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barnes</surname>
<given-names>N.</given-names>
</name>
</person-group>
(
<year>2010</year>
).
<article-title>Publish your computer code: it is good enough</article-title>
.
<source>Nat. News</source>
<volume>467</volume>
,
<fpage>753</fpage>
<lpage>753</lpage>
<pub-id pub-id-type="doi">10.1038/467753a</pub-id>
<pub-id pub-id-type="pmid">20944687</pub-id>
</mixed-citation>
</ref>
<ref id="B3">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cox</surname>
<given-names>R. W.</given-names>
</name>
</person-group>
(
<year>1996</year>
).
<article-title>AFNI: software for analysis and visualization of functional magnetic resonance neuroimages</article-title>
.
<source>Comput. Biomed. Res</source>
.
<volume>29</volume>
,
<fpage>162</fpage>
<lpage>173</lpage>
<pub-id pub-id-type="doi">10.1006/cbmr.1996.0014</pub-id>
<pub-id pub-id-type="pmid">8812068</pub-id>
</mixed-citation>
</ref>
<ref id="B4">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Davison</surname>
<given-names>A.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Automated capture of experiment context for easier reproducibility in computational research</article-title>
.
<source>Comput. Sci. Eng</source>
.
<volume>14</volume>
,
<fpage>48</fpage>
<lpage>56</lpage>
<pub-id pub-id-type="doi">10.1109/MCSE.2012.41</pub-id>
</mixed-citation>
</ref>
<ref id="B5">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gorgolewski</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Madison</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Clark</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Halchenko</surname>
<given-names>Y. O.</given-names>
</name>
<name>
<surname>Waskom</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Ghosh</surname>
<given-names>S. S.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in Python</article-title>
.
<source>Front. Neuroinform</source>
.
<volume>5</volume>
:
<issue>13</issue>
<pub-id pub-id-type="doi">10.3389/fninf.2011.00013</pub-id>
<pub-id pub-id-type="pmid">21897815</pub-id>
</mixed-citation>
</ref>
<ref id="B6">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Halchenko</surname>
<given-names>Y. O.</given-names>
</name>
<name>
<surname>Hanke</surname>
<given-names>M.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>Open is not enough. Let's take the next step: an integrated, community-driven computing platform for neuroscience</article-title>
.
<source>Front. Neuroinform</source>
.
<volume>6</volume>
:
<issue>22</issue>
<pub-id pub-id-type="doi">10.3389/fninf.2012.00022</pub-id>
<pub-id pub-id-type="pmid">23055966</pub-id>
</mixed-citation>
</ref>
<ref id="B7">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hanke</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Halchenko</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Sederberg</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Hanson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Haxby</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Pollmann</surname>
<given-names>S.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>PyMVPA: a Python toolbox for multivariate pattern analysis of fMRI data</article-title>
.
<source>Neuroinformatics</source>
<volume>7</volume>
,
<fpage>37</fpage>
<lpage>53</lpage>
<pub-id pub-id-type="doi">10.1007/s12021-008-9041-y</pub-id>
<pub-id pub-id-type="pmid">19184561</pub-id>
</mixed-citation>
</ref>
<ref id="B8">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ince</surname>
<given-names>D. C.</given-names>
</name>
<name>
<surname>Hatton</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Graham-Cumming</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2012</year>
).
<article-title>The case for open computer programs</article-title>
.
<source>Nature</source>
<volume>482</volume>
,
<fpage>485</fpage>
<lpage>488</lpage>
<pub-id pub-id-type="doi">10.1038/nature10836</pub-id>
<pub-id pub-id-type="pmid">22358837</pub-id>
</mixed-citation>
</ref>
<ref id="B9">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Joppa</surname>
<given-names>L. N.</given-names>
</name>
<name>
<surname>McInerny</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Harper</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Salido</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Takeda</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>O'Hara</surname>
<given-names>K.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2013</year>
).
<article-title>Troubling trends in scientific software use</article-title>
.
<source>Science</source>
<volume>340</volume>
,
<fpage>814</fpage>
<lpage>815</lpage>
<pub-id pub-id-type="doi">10.1126/science.1231535</pub-id>
<pub-id pub-id-type="pmid">23687031</pub-id>
</mixed-citation>
</ref>
<ref id="B10">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kampstra</surname>
<given-names>P.</given-names>
</name>
</person-group>
(
<year>2008</year>
).
<article-title>Beanplot: a boxplot alternative for visual comparison of distributions</article-title>
.
<source>J. Stat. Softw</source>
.
<volume>28</volume>
,
<fpage>1</fpage>
<lpage>9</lpage>
Available online at:
<ext-link ext-link-type="uri" xlink:href="http://www.jstatsoft.org/v28/c01">http://www.jstatsoft.org/v28/c01</ext-link>
</mixed-citation>
</ref>
<ref id="B11">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Kubilius</surname>
<given-names>J.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<source>The Open Science Cycle</source>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://figshare.com/articles/The_Open_Science_Cycle_July_2013/751548">http://figshare.com/articles/The_Open_Science_Cycle_July_2013/751548</ext-link>
</mixed-citation>
</ref>
<ref id="B12">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kubilius</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wagemans</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Op de Beeck</surname>
<given-names>H. P.</given-names>
</name>
</person-group>
(
<year>2011</year>
).
<article-title>Emergence of perceptual gestalts in the human visual cortex: the case of the configural-superiority effect</article-title>
.
<source>Psychol. Sci</source>
.
<volume>22</volume>
,
<fpage>1296</fpage>
<lpage>1303</lpage>
<pub-id pub-id-type="doi">10.1177/0956797611417000</pub-id>
<pub-id pub-id-type="pmid">21934133</pub-id>
</mixed-citation>
</ref>
<ref id="B13">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lades</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Vorbruggen</surname>
<given-names>J. C.</given-names>
</name>
<name>
<surname>Buhmann</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lange</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>von der Malsburg</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wurtz</surname>
<given-names>R. P.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>1993</year>
).
<article-title>Distortion invariant object recognition in the dynamic link architecture</article-title>
.
<source>IEEE Trans. Comput</source>
.
<volume>42</volume>
,
<fpage>300</fpage>
<lpage>311</lpage>
<pub-id pub-id-type="doi">10.1109/12.210173</pub-id>
</mixed-citation>
</ref>
<ref id="B14">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Op de Beeck</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Wagemans</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Vogels</surname>
<given-names>R.</given-names>
</name>
</person-group>
(
<year>2001</year>
).
<article-title>Inferotemporal neurons represent low-dimensional configurations of parameterized shapes</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>4</volume>
,
<fpage>1244</fpage>
<lpage>1252</lpage>
<pub-id pub-id-type="doi">10.1038/nn767</pub-id>
<pub-id pub-id-type="pmid">11713468</pub-id>
</mixed-citation>
</ref>
<ref id="B15">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pedregosa</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Varoquaux</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Gramfort</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Michel</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Thirion</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Grisel</surname>
<given-names>O.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2011</year>
).
<article-title>Scikit-learn: machine learning in Python</article-title>
.
<source>J. Mach. Learn. Res</source>
.
<volume>12</volume>
,
<fpage>2825</fpage>
<lpage>2830</lpage>
Available online at:
<ext-link ext-link-type="uri" xlink:href="http://jmlr.org/papers/v12/pedregosa11a.html">http://jmlr.org/papers/v12/pedregosa11a.html</ext-link>
</mixed-citation>
</ref>
<ref id="B16">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peirce</surname>
<given-names>J. W.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>PsychoPy–Psychophysics software in Python</article-title>
.
<source>J. Neurosci. Methods</source>
<volume>162</volume>
,
<fpage>8</fpage>
<lpage>13</lpage>
<pub-id pub-id-type="doi">10.1016/j.jneumeth.2006.11.017</pub-id>
<pub-id pub-id-type="pmid">17254636</pub-id>
</mixed-citation>
</ref>
<ref id="B17">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Peirce</surname>
<given-names>J. W.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Generating stimuli for neuroscience using PsychoPy</article-title>
.
<source>Front. Neuroinform</source>
.
<volume>2</volume>
:
<fpage>10</fpage>
<pub-id pub-id-type="doi">10.3389/neuro.11.010.2008</pub-id>
<pub-id pub-id-type="pmid">19198666</pub-id>
</mixed-citation>
</ref>
<ref id="B18">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Perez</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Granger</surname>
<given-names>B. E.</given-names>
</name>
</person-group>
(
<year>2007</year>
).
<article-title>IPython: a system for interactive scientific computing</article-title>
.
<source>Comput. Sci. Eng</source>
.
<volume>9</volume>
,
<fpage>21</fpage>
<lpage>29</lpage>
<pub-id pub-id-type="doi">10.1109/MCSE.2007.53</pub-id>
</mixed-citation>
</ref>
<ref id="B19">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Riesenhuber</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T.</given-names>
</name>
</person-group>
(
<year>1999</year>
).
<article-title>Hierarchical models of object recognition in cortex</article-title>
.
<source>Nat. Neurosci</source>
.
<volume>2</volume>
,
<fpage>1019</fpage>
<lpage>1025</lpage>
<pub-id pub-id-type="doi">10.1038/14819</pub-id>
<pub-id pub-id-type="pmid">10526343</pub-id>
</mixed-citation>
</ref>
<ref id="B20">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stevens</surname>
<given-names>J. R.</given-names>
</name>
<name>
<surname>Elver</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Bednar</surname>
<given-names>J. A.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>An automated and reproducible workflow for running and analyzing neural simulations using Lancet and IPython Notebook</article-title>
.
<source>Front. Neuroinform</source>
.
<volume>7</volume>
:
<issue>44</issue>
<pub-id pub-id-type="doi">10.3389/fninf.2013.00044</pub-id>
</mixed-citation>
</ref>
<ref id="B20a">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wallis</surname>
<given-names>T. S.</given-names>
</name>
<name>
<surname>Taylor</surname>
<given-names>C. P.</given-names>
</name>
<name>
<surname>Wallis</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jackson</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Bex</surname>
<given-names>P. J.</given-names>
</name>
</person-group>
(
<year>2014</year>
).
<article-title>Characterisation of field loss based on microperimetry is predictive of face recognition difficulties</article-title>
.
<source>Invest. Ophthalmol. Vis. Sci</source>
.
<volume>55</volume>
,
<fpage>142</fpage>
<lpage>153</lpage>
<pub-id pub-id-type="doi">10.1167/iovs.13-12420</pub-id>
<pub-id pub-id-type="pmid">24302589</pub-id>
</mixed-citation>
</ref>
<ref id="B21">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>White</surname>
<given-names>E. P.</given-names>
</name>
<name>
<surname>Baldridge</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Brym</surname>
<given-names>Z. T.</given-names>
</name>
<name>
<surname>Locey</surname>
<given-names>K. J.</given-names>
</name>
<name>
<surname>McGlinn</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Supp</surname>
<given-names>S. R.</given-names>
</name>
</person-group>
(
<year>2013</year>
).
<article-title>Nine simple ways to make it easier to (re) use your data</article-title>
.
<source>PeerJ PrePrints</source>
<volume>1</volume>
:
<fpage>e7v2</fpage>
<pub-id pub-id-type="doi">10.7287/peerj.preprints.7v2</pub-id>
</mixed-citation>
</ref>
<ref id="B22">
<mixed-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Aruliah</surname>
<given-names>D. A.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>C. T.</given-names>
</name>
<name>
<surname>Hong</surname>
<given-names>N. P. C.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Guy</surname>
<given-names>R. T.</given-names>
</name>
<etal></etal>
</person-group>
(
<year>2012</year>
).
<source>Best Practices for Scientific Computing (arXiv e-print No. 1210.0530)</source>
. Available online at:
<ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1210.0530">http://arxiv.org/abs/1210.0530</ext-link>
</mixed-citation>
</ref>
<ref id="B23">
<mixed-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Yue</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Lescroart</surname>
<given-names>M. D.</given-names>
</name>
<name>
<surname>Biederman</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>J. G.</given-names>
</name>
</person-group>
(
<year>2009</year>
).
<article-title>Adaptation in the fusiform face area (FFA): image or person?</article-title>
<source>Vision Res</source>
.
<volume>49</volume>
,
<fpage>2800</fpage>
<lpage>2807</lpage>
<pub-id pub-id-type="doi">10.1016/j.visres.2009.08.021</pub-id>
<pub-id pub-id-type="pmid">19712692</pub-id>
</mixed-citation>
</ref>
</ref-list>
</back>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Belgique/explor/OpenAccessBelV2/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000465 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000465 | SxmlIndent | more

To link to this page from the Wicri network

{{Explor lien
   |wiki=    Wicri/Belgique
   |area=    OpenAccessBelV2
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     PMC:3894454
   |texte=   A framework for streamlining research workflow in neuroscience and psychology
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/RBID.i   -Sk "pubmed:24478691" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd   \
       | NlmPubMed2Wicri -a OpenAccessBelV2 

Wicri

This area was generated with Dilib version V0.6.25.
Data generation: Thu Dec 1 00:43:49 2016. Site generation: Wed Mar 6 14:51:30 2024