Cyberinfrastructure exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.
***** Access problem to record *****

Internal identifier: 000556 (Pmc/Corpus); previous: 0005559; next: 0005570
***** Probable XML problem with record *****

Links to Exploration step


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Adapting federated cyberinfrastructure for shared data collection facilities in structural biology</title>
<author>
<name sortKey="Stokes Rees, Ian" sort="Stokes Rees, Ian" uniqKey="Stokes Rees I" first="Ian" last="Stokes-Rees">Ian Stokes-Rees</name>
<affiliation>
<nlm:aff id="a">Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Levesque, Ian" sort="Levesque, Ian" uniqKey="Levesque I" first="Ian" last="Levesque">Ian Levesque</name>
<affiliation>
<nlm:aff id="a">Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="b">Howard Hughes Medical Institute,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Murphy, Frank V" sort="Murphy, Frank V" uniqKey="Murphy F" first="Frank V." last="Murphy">Frank V. Murphy</name>
<affiliation>
<nlm:aff id="c">Northeast Collaborative Access Team (NE-CAT), Advanced Photon Source,
<institution>Argonne National Laboratory</institution>
, Argonne, IL 60439,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Yang, Wei" sort="Yang, Wei" uniqKey="Yang W" first="Wei" last="Yang">Wei Yang</name>
<affiliation>
<nlm:aff id="d">Joint Center for Structural Genomics, Stanford Synchrotron Radiation Light Source,
<institution>Stanford University</institution>
, Menlo Park, CA 94025,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Deacon, Ashley" sort="Deacon, Ashley" uniqKey="Deacon A" first="Ashley" last="Deacon">Ashley Deacon</name>
<affiliation>
<nlm:aff id="d">Joint Center for Structural Genomics, Stanford Synchrotron Radiation Light Source,
<institution>Stanford University</institution>
, Menlo Park, CA 94025,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sliz, Piotr" sort="Sliz, Piotr" uniqKey="Sliz P" first="Piotr" last="Sliz">Piotr Sliz</name>
<affiliation>
<nlm:aff id="a">Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22514186</idno>
<idno type="pmc">3329960</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3329960</idno>
<idno type="RBID">PMC:3329960</idno>
<idno type="doi">10.1107/S0909049512009776</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">000556</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Adapting federated cyberinfrastructure for shared data collection facilities in structural biology</title>
<author>
<name sortKey="Stokes Rees, Ian" sort="Stokes Rees, Ian" uniqKey="Stokes Rees I" first="Ian" last="Stokes-Rees">Ian Stokes-Rees</name>
<affiliation>
<nlm:aff id="a">Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Levesque, Ian" sort="Levesque, Ian" uniqKey="Levesque I" first="Ian" last="Levesque">Ian Levesque</name>
<affiliation>
<nlm:aff id="a">Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
<affiliation>
<nlm:aff id="b">Howard Hughes Medical Institute,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Murphy, Frank V" sort="Murphy, Frank V" uniqKey="Murphy F" first="Frank V." last="Murphy">Frank V. Murphy</name>
<affiliation>
<nlm:aff id="c">Northeast Collaborative Access Team (NE-CAT), Advanced Photon Source,
<institution>Argonne National Laboratory</institution>
, Argonne, IL 60439,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Yang, Wei" sort="Yang, Wei" uniqKey="Yang W" first="Wei" last="Yang">Wei Yang</name>
<affiliation>
<nlm:aff id="d">Joint Center for Structural Genomics, Stanford Synchrotron Radiation Light Source,
<institution>Stanford University</institution>
, Menlo Park, CA 94025,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Deacon, Ashley" sort="Deacon, Ashley" uniqKey="Deacon A" first="Ashley" last="Deacon">Ashley Deacon</name>
<affiliation>
<nlm:aff id="d">Joint Center for Structural Genomics, Stanford Synchrotron Radiation Light Source,
<institution>Stanford University</institution>
, Menlo Park, CA 94025,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Sliz, Piotr" sort="Sliz, Piotr" uniqKey="Sliz P" first="Piotr" last="Sliz">Piotr Sliz</name>
<affiliation>
<nlm:aff id="a">Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Journal of Synchrotron Radiation</title>
<idno type="ISSN">0909-0495</idno>
<idno type="eISSN">1600-5775</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>It has been difficult, historically, to manage and maintain early-stage experimental data collected by structural biologists in synchrotron facilities. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to manage collected data at synchrotrons and to facilitate the efficient and secure transfer of data to the owner’s home institution.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">J Synchrotron Radiat</journal-id>
<journal-id journal-id-type="iso-abbrev">J Synchrotron Radiat</journal-id>
<journal-id journal-id-type="publisher-id">J. Synchrotron Rad.</journal-id>
<journal-title-group>
<journal-title>Journal of Synchrotron Radiation</journal-title>
</journal-title-group>
<issn pub-type="ppub">0909-0495</issn>
<issn pub-type="epub">1600-5775</issn>
<publisher>
<publisher-name>International Union of Crystallography</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22514186</article-id>
<article-id pub-id-type="pmc">3329960</article-id>
<article-id pub-id-type="publisher-id">ie5073</article-id>
<article-id pub-id-type="doi">10.1107/S0909049512009776</article-id>
<article-id pub-id-type="coden">JSYRES</article-id>
<article-id pub-id-type="pii">S0909049512009776</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Computer Programs</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Adapting federated cyberinfrastructure for shared data collection facilities in structural biology</article-title>
<alt-title>Adapting federated cyberinfrastructure</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Stokes-Rees</surname>
<given-names>Ian</given-names>
</name>
<xref ref-type="aff" rid="a">a</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Levesque</surname>
<given-names>Ian</given-names>
</name>
<xref ref-type="aff" rid="a">a</xref>
<xref ref-type="aff" rid="b">b</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Murphy</surname>
<given-names>Frank V.</given-names>
<suffix>IV</suffix>
</name>
<xref ref-type="aff" rid="c">c</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Yang</surname>
<given-names>Wei</given-names>
</name>
<xref ref-type="aff" rid="d">d</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Deacon</surname>
<given-names>Ashley</given-names>
</name>
<xref ref-type="aff" rid="d">d</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sliz</surname>
<given-names>Piotr</given-names>
</name>
<xref ref-type="aff" rid="a">a</xref>
<xref ref-type="corresp" rid="cor">*</xref>
</contrib>
<aff id="a">
<label>a</label>
Department of Biological Chemistry and Molecular Pharmacology,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</aff>
<aff id="b">
<label>b</label>
Howard Hughes Medical Institute,
<institution>Harvard Medical School</institution>
, Boston, MA 02115,
<country>USA</country>
</aff>
<aff id="c">
<label>c</label>
Northeast Collaborative Access Team (NE-CAT), Advanced Photon Source,
<institution>Argonne National Laboratory</institution>
, Argonne, IL 60439,
<country>USA</country>
</aff>
<aff id="d">
<label>d</label>
Joint Center for Structural Genomics, Stanford Synchrotron Radiation Light Source,
<institution>Stanford University</institution>
, Menlo Park, CA 94025,
<country>USA</country>
</aff>
</contrib-group>
<author-notes>
<corresp id="cor">Correspondence e-mail:
<email>piotr_sliz@hms.harvard.edu</email>
</corresp>
</author-notes>
<pub-date pub-type="ppub">
<day>01</day>
<month>5</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>06</day>
<month>4</month>
<year>2012</year>
</pub-date>
<pub-date pub-type="pmc-release">
<day>06</day>
<month>4</month>
<year>2012</year>
</pub-date>
<pmc-comment> PMC Release delay is 0 months and 0 days and was based on the . </pmc-comment>
<volume>19</volume>
<issue>Pt 3</issue>
<issue-id pub-id-type="publisher-id">s120300</issue-id>
<fpage>462</fpage>
<lpage>467</lpage>
<history>
<date date-type="received">
<day>09</day>
<month>12</month>
<year>2011</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>3</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>© Ian Stokes-Rees et al. 2012</copyright-statement>
<copyright-year>2012</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/2.0/uk/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are cited.</license-p>
</license>
</permissions>
<self-uri xlink:type="simple" xlink:href="http://dx.doi.org/10.1107/S0909049512009776">A full version of this article is available from Crystallography Journals Online.</self-uri>
<abstract abstract-type="toc">
<p>It has been difficult, historically, to manage and maintain early-stage experimental data collected by structural biologists in synchrotron facilities. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to manage collected data at synchrotrons and to facilitate the efficient and secure transfer of data to the owner’s home institution.</p>
</abstract>
<abstract>
<p>Early stage experimental data in structural biology is generally unmaintained and inaccessible to the public. It is increasingly believed that this data, which forms the basis for each macromolecular structure discovered by this field, must be archived and, in due course, published. Furthermore, the widespread use of shared scientific facilities such as synchrotron beamlines complicates the issue of data storage, access and movement, as does the increase of remote users. This work describes a prototype system that adapts existing federated cyberinfrastructure technology and techniques to significantly improve the operational environment for users and administrators of synchrotron data collection facilities used in structural biology. This is achieved through software from the
<italic>Virtual Data Toolkit</italic>
and
<italic>Globus</italic>
, bringing together federated users and facilities from the Stanford Synchrotron Radiation Lightsource, the Advanced Photon Source, the Open Science Grid, the SBGrid Consortium and Harvard Medical School. The performance and experience with the prototype provide a model for data management at shared scientific facilities.</p>
</abstract>
<kwd-group>
<kwd>grid computing</kwd>
<kwd>data handling and security</kwd>
<kwd>diffraction images</kwd>
<kwd>collaborative tools</kwd>
<kwd>structural biology</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="sec1">
<label>1.</label>
<title>Introduction   </title>
<p>The field of structural biology provides atomic-scale models of macromolecules. While these models are typically made public through the Protein Data Bank (PDB; Berman, 2000
<xref ref-type="bibr" rid="bb6"></xref>
), the source experimental data used to establish the models is generally not published. It has been difficult, historically, to manage and maintain this early-stage experimental data, and impractical to make it publicly available. Recent advances in data management systems provide the opportunity to reconsider data retention and publication policies. The shift towards data collection from shared scientific facilities, such as synchrotron beamlines where users from numerous institutions are hosted, compounds the importance of establishing improved storage and data management systems. Remote users, who ship their samples to a facility for data collection, also require more robust data management and movement infrastructure. Advances in the technology and automation at these shared facilities are producing higher data rates, with an anticipated need to process terabytes per day in the near future (Soltis
<italic>et al.</italic>
, 2008
<xref ref-type="bibr" rid="bb24"></xref>
). These challenges are similar to those faced by genomics research or high-energy physics: centralized data collection at a shared facility by a large group of users with independent affiliations and collaborations. By adapting technology and techniques developed for federated cyberinfrastructure it is possible to significantly improve the operational environment for facility users and administrators. This paper describes a prototype system developed and deployed for the structural biology community that leverages federated identity management systems and grid computing infrastructure to streamline authentication and authorization, data access and data management for multi-gigabyte data sets. These capabilities provide a foundation for secure archival systems that enable collaboration and, optionally, public access to valuable data products of the research process that otherwise are often lost with time or backed up to inaccessible archives.</p>
<p>This prototype focuses on macromolecular crystallography, a sub-discipline of structural biology that has undergone a rapid expansion in recent years. Crystallographic data are routinely collected on specialized X-ray beamlines at shared synchrotron facilities. Each beamline operates 24 h per day and continually produces large sets of images that need to be securely stored and subsequently transferred to the home institution of the research group that provided the sample under investigation.</p>
<p>Here we present a prototype system designed to support members of 170 structural biology laboratories participating in the SBGrid Consortium (
<ext-link ext-link-type="uri" xlink:href="http://www.sbgrid.org/">http://www.sbgrid.org/</ext-link>
). The system integrates with data collection facilities at synchrotron beamlines at the Advanced Photon Source (APS), the Stanford Synchrotron Radiation Lightsource (SSRL) and a shared X-ray data collection facility at Harvard Medical School (HMS). The system relies on Globus Online (Foster, 2011
<xref ref-type="bibr" rid="bb12"></xref>
) to link disparate storage systems, X.509 user certificates (Housley
<italic>et al.</italic>
, 2002
<xref ref-type="bibr" rid="bb13"></xref>
) for identity tokens, the Virtual Organization Management System (VOMS) (Alfieri
<italic>et al.</italic>
, 2005
<xref ref-type="bibr" rid="bb2"></xref>
) to define roles and group membership, a MyProxy server (Basney
<italic>et al.</italic>
, 2005
<xref ref-type="bibr" rid="bb5"></xref>
) as a centralized user credential store (similar to a network-accessible digital key chain), and the SBGrid Science Portal (
<ext-link ext-link-type="uri" xlink:href="http://portal.sbgrid.org/">http://portal.sbgrid.org/</ext-link>
) for user account management. As a prototype it demonstrates the viability of deploying a system to address federated data access and archiving of data in a secure manner for researchers, laboratories and collaborations who produce data at shared scientific facilities.</p>
</sec>
<sec id="sec2">
<label>2.</label>
<title>Materials and methods   </title>
<p>The novel work of this prototype is based on the integration of services with the SBGrid Science Portal in order to link user identities, group affiliations, data access and data movement. Conceptually, the portal acts as a central location for users to register and manage group membership. Federated and community driven collaborations are known as virtual organizations (VOs), which can include a hierarchy of sub-VOs and specific roles which group members assume. From the web portal, users are able to access data held on various storage systems: facility-based, institutional, laboratory or personal computer. They can then initiate data movement requests between any two endpoints (for example, from a facility to a laboratory), and the transfer will be scheduled and managed by third-party services that do not require the user to maintain a connection between the two endpoints. Storage systems subscribe to the portal’s group membership lists to manage access control, and configure a mapping from a ‘portal’ identity to the local storage system’s user identifier.</p>
<p>The prototype is based on software and infrastructure provided by the Virtual Data Toolkit (Roy
<italic>et al.</italic>
, 2009
<xref ref-type="bibr" rid="bb22"></xref>
), the Open Science Grid (Pordes
<italic>et al.</italic>
, 2007
<xref ref-type="bibr" rid="bb21"></xref>
) and Globus (Foster, 2005
<xref ref-type="bibr" rid="bb11"></xref>
). User identities are handled through the US Department of Energy’s Energy Science Network (ESNet) Certificate Authority (CA) (Muruganantham
<italic>et al.</italic>
, 2005
<xref ref-type="bibr" rid="bb19"></xref>
) which issues X.509 certificates to eligible users. Group membership is handled by the VO-hosted and managed VOMS server (Alfieri
<italic>et al.</italic>
, 2005
<xref ref-type="bibr" rid="bb2"></xref>
).</p>
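<!--
A minimal command-line sketch of the credential workflow described above: a user holding an ESNet-issued X.509 certificate requests a short-lived proxy carrying SBGrid VO membership and role attributes. The group and role names below are illustrative assumptions, not values taken from this record.
  voms-proxy-init -voms SBGrid:/SBGrid/sliz_lab/Role=datamanager
  voms-proxy-info -all    # inspect the proxy and its VOMS attribute certificate
-->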
<p>The use of web interfaces to facilitate interaction with federated cyberinfrastructure has been proposed in the form of abstract models (Von Laszewski & Foster, 1999
<xref ref-type="bibr" rid="bb26"></xref>
), generic frameworks (Novotny
<italic>et al.</italic>
, 2004
<xref ref-type="bibr" rid="bb20"></xref>
; McLennan & Kennell, 2010
<xref ref-type="bibr" rid="bb15"></xref>
) or domain-specific portals (Klimeck
<italic>et al.</italic>
, 2008
<xref ref-type="bibr" rid="bb14"></xref>
). The development of the Globus Online service (GO) (Foster, 2011
<xref ref-type="bibr" rid="bb12"></xref>
) has now also made the previously complicated task of using high-performance parallel file transfer tools such as GridFTP (Allcock
<italic>et al.</italic>
, 2001
<xref ref-type="bibr" rid="bb3"></xref>
), often the only means to access storage systems and major computing centers, accessible through simple web interfaces. The existing SBGrid Science Portal, designed specifically for the structural biology community, acts as a central web-based interface for users and groups to manage accounts and data. Group management services are derived from VOMS, with web-based data access control from GridSite (McNab & Li, 2010
<xref ref-type="bibr" rid="bb16"></xref>
), and system-level identity management and access control using the FreeIPA LDAP server.</p>
<p>The X.509 public key infrastructure, common to most grid environments, has been used to provide secure federated identity tokens. A federated computing environment requires a simple and secure mechanism to authenticate users and make authorization decisions. Users need a common portable identification scheme that allows them to access systems at independent computing centers: their home institution; that of their collaborators; and at any facility where they may be collecting, storing or processing data. System administrators need secure mechanisms to delegate authentication, data access control (distinct from system access control) and the organization of users into groups or collaborations. The SBGrid VOMS server provides the authoritative list of group members which can then be used by various GUMS (Baker
<italic>et al.</italic>
, 2003
<xref ref-type="bibr" rid="bb4"></xref>
) servers (for example, at facilities) or in a grid-mapfile (a text-based configuration file) to assign group members to specific system-level accounts. Individual groups use VOMS to self-administer which users are members of their group, and their roles within the group, providing greater operational autonomy to user groups. In this way users at a shared facility authenticate using their personal grid identity token, rather than a system-level username. This alleviates the need for shared accounts and passwords, while avoiding the need for the facility to create accounts for every user, instead allowing multiple X.509 identity tokens to map to the same shared system-level account. The MyProxy server at the National Center for Supercomputing Applications (NCSA) has been used as an intermediary to cache proxy certificates for users and to make them accessible using the more conventional username/passphrase approach. With MyProxy, users do not need to manage their digital certificates directly, as the process for this is cumbersome and specific to each web browser and operating system.</p>
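<!--
A sketch of the site-level mapping and credential retrieval this paragraph describes: a grid-mapfile maps several federated identities to one shared laboratory account, and a cached proxy is retrieved from MyProxy with a username/passphrase. The DNs, account name and MyProxy hostname are illustrative assumptions.
  # grid-mapfile: multiple X.509 identities mapped to the same system-level account
  "/DC=org/DC=doegrids/OU=People/CN=Jane Researcher 123456"  sliz_lab
  "/DC=org/DC=doegrids/OU=People/CN=John Postdoc 654321"     sliz_lab

  # retrieve a cached proxy credential from a MyProxy server
  myproxy-logon -s myproxy.ncsa.illinois.edu -l jane -t 72
-->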
<p>The SBGrid Science Portal hosts the single identity for all users of the prototype system. This eliminates the need for users to interact with the ESNet Certificate Authority, a VOMS server or MyProxy. Globus Online does not have account interaction application program interfaces (APIs), so a separate GO account must be created and linked to the portal identity. The portal also acts as a storage host, holding user data if necessary. The portal identities are handled by an LDAP-based system, FreeIPA. This single identity source is used for authentication to the science portal, for command-line shell access, and for the shared X-ray data collection facility.</p>
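<!--
A hypothetical sketch of how a storage host could consume the portal's single identity source by querying the FreeIPA LDAP directory for a user entry. The hostname and base DN follow FreeIPA's default layout and are assumptions, not values from this record.
  ldapsearch -x -H ldaps://portal.sbgrid.org -b "cn=users,cn=accounts,dc=sbgrid,dc=org" "(uid=jane)" cn mail memberOf
-->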
</sec>
<sec id="sec3">
<label>3.</label>
<title>Results   </title>
<p>The trial of the prototype system consisted of configuring SSRL and Northeast Collaborative Access Team (NE-CAT) as GO endpoints, setting up the necessary X.509 authentication system, and mapping grid identities to user identities at the participating sites. SSRL, as part of the Stanford Linear Accelerator Center, which already participates in the Open Science Grid, had much of the necessary grid infrastructure in place, including GridFTP-enabled storage systems. NE-CAT, with no previous exposure to federated cyberinfrastructure, utilized the Globus Connect (GC) client and made the necessary firewall changes. Users requested grid accounts through the SBGrid Science Portal, which automatically registered them into the SBGrid VO, and created a proxy certificate with the NCSA MyProxy server. The users also had to create Globus Online accounts and link these to their grid certificates.</p>
<p>The system allows users to access and utilize the synchrotron facility in the normal way, saving data to group-specific storage areas local to the beamline. Once collected, users can elect to initiate a data transfer back to their home institution, or leave the data on the facility’s storage system and access it ‘on-demand’ for later analysis and processing. These transfers are mediated by the GO service, which acts as a third-party controller. This provides numerous advantages over conventional file transfer mechanisms such as rsync, scp or ftp. The use of GridFTP as the underlying file transfer protocol enables high-performance parallel data transfers to maximize throughput. GO provides a simple web-based interface to initiate third-party file transfers between two sites, monitored by GO, but with the data traffic routed directly between the endpoints. This does not require the user to maintain an active connection to either of the endpoints, nor does it place the user or GO as an intermediary in the network path. GO also implements reliable file transfer, which copes with any transient network or connection failures. The use of federated identity tokens allows GO to act on behalf of the user automatically and autonomously, using the delegated proxy identity token to reconnect and retry transfers if they are interrupted.</p>
<p>The GC client further aids in this process as it establishes a secure tunneled connection from the client system (a user laptop or desktop, or a laboratory file server) to GO, which can then be used to rendezvous the endpoint connection from another GridFTP server or GO ‘private’ endpoint. This is invaluable in situations where firewalls or administrative complexity make it impractical to set up a permanent public GridFTP server at a desired endpoint. A multi-user GC agent is available that runs a single instance of a tunneling GridFTP server but can be configured to support multiple user connections from the web-based GO controller interface. This is designed for persistent operation on file servers that cannot easily be configured with a public GridFTP server.</p>
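<!--
Underneath the GO web interface the transfers are plain GridFTP; a roughly equivalent command-line transfer between two GridFTP endpoints, using parallel streams, might look like the following. Hostnames and paths are illustrative assumptions.
  globus-url-copy -r -p 8 -vb \
      gsiftp://gridftp.necat.aps.anl.gov/data/sliz/run42/ \
      gsiftp://fileserver.sliz.harvard.edu/archive/run42/
-->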
<p>A representative data set of 2000 X-ray image files, requiring 34 GB of storage, was transferred between three classes of endpoint: beamlines (SSRL and NE-CAT), laboratories (HMS) and data archival facilities [National Center for Supercomputing Applications and Fermi National Accelerator Laboratory (FNAL)]. Table 1
<xref ref-type="table" rid="table1"></xref>
shows the transfer rates between various pairs of endpoints. In all cases these rates outperformed the alternatives of scp or rsync by 50% to 300%. Completion times ranged from 12 to 25 min with Globus Online, in contrast to 30 to 60 min for conventional transfer techniques.</p>
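<!--
Consistency check on the figures quoted above: a 34 GB data set moved in 12 min corresponds to roughly 34 000 MB / 720 s ≈ 47 MB/s, and in 25 min to roughly 34 000 MB / 1500 s ≈ 23 MB/s, in line with the per-site rates reported in Table 1.
-->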
<p>Fig. 1
<xref ref-type="fig" rid="fig1"></xref>
illustrates the geographic location of the various endpoints. Although not illustrated in this table, additional trial transfers were made to Texas Tech, the University of Nebraska-Lincoln, the University of California San Diego and São Paulo State University (Brazil).</p>
</sec>
<sec id="sec4">
<label>4.</label>
<title>Discussion   </title>
<p>The primary objectives of this project were to prototype improved models for data management at shared X-ray beamlines and to facilitate the efficient and secure transfer of collected data to the owner’s home institution. Secondarily, this project demonstrates aspects necessary for a permanent facility-based data archive, where users and laboratories can reliably access their data ‘on-demand’, without the pressure to store and preserve that data at their home institution.</p>
<p>Tertiary objectives relate to the evaluation of grid computing tools and protocols for managing users, groups, access control and data in a collaborative federated environment. These issues were central to the original development of grid computing over a decade ago (Foster, 2001
<xref ref-type="bibr" rid="bb10"></xref>
).</p>
<p>While systems exist to improve the high-level management of experimental data and the data’s life-cycle (Moore
<italic>et al.</italic>
, 2006
<xref ref-type="bibr" rid="bb18"></xref>
; Mattmann
<italic>et al.</italic>
, 2004
<xref ref-type="bibr" rid="bb17"></xref>
; Flannery
<italic>et al.</italic>
, 2009
<xref ref-type="bibr" rid="bb9"></xref>
), there are limited facilities for simple end-user access and management of federated data sets. The typical solution for beamline users is to transport an external hard drive with them to the facility and to copy files over USB or FireWire. The rapid increase in remote users is eliminating this option. Facilities generally hold user data for a period of weeks to months, but only as a temporary store until the user can repatriate their collected data at which point their institutional or laboratory system administrators take responsibility for managing and backing up the data. In most cases the institutions or laboratories are poorly equipped to be the long-term archivists of these important experimental results, and anecdotal comments suggest the data is discarded within a few years of the departure of the original ‘owner’, or maintained through stacks of disconnected and often unlabelled hard drives. Locating beamline data collected one or two years earlier can often be difficult or impossible. It is to improve this operational reality that the prototype system described here was developed.</p>
<p>Data access control through GO is based on the system-level user that a particular grid identity token is mapped to. On-disk file and directory permissions are applied to any file system operations by this system-level user (POSIX users and file systems). Additional access control mechanisms are available on the SBGrid Science Portal, namely LDAP-based WebDAV, and X.509 DN-based Grid Access Control Language (GACL) (McNab & Li, 2010
<xref ref-type="bibr" rid="bb16"></xref>
). WebDAV has the advantage of being accessible through standard network file system connection protocols; however, managing the access control policies currently requires manual editing of
configuration files on a per-directory basis through a command-line interface. In contrast, GACL provides a browser-based ACL editing tool and user-driven DN lists for groups; however, it requires the user to have their certificate available in their browser’s keystore. No other aspect of this system requires users to have direct access to their X.509 certificate on the client side. The single-user GC client is not supposed to require any client-side manual system configuration; however, strict firewall rules at NE-CAT prevented it from working without some adjustment: the systems there had no outbound Internet connectivity except for basic services (ports 22, 80 and 443 for ssh, http and https, respectively). The multi-user GC client has promising features for supporting laboratory-level storage systems with a tunneled GridFTP server, but was not available in time for this prototype.</p>
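<!--
For concreteness, a per-directory GridSite GACL policy (conventionally stored in a file named .gacl in the directory it protects) might grant read and list access to a single DN; the DN below is an illustrative assumption.
  <gacl version="0.0.1">
    <entry>
      <person>
        <dn>/DC=org/DC=doegrids/OU=People/CN=Jane Researcher 123456</dn>
      </person>
      <allow><read/><list/></allow>
    </entry>
  </gacl>
-->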
<p>Federated cyberinfrastructure requires the organization of users into manageable groups. While the original vision of grid computing was for a VO to represent such a group (Foster, 2001
<xref ref-type="bibr" rid="bb10"></xref>
), the capabilities and requirements of a VO have resulted in relatively static long-lived entities. Smaller user-managed groups, sometimes called dynamic VOs, are possible through several tools. The latest VOMSAdmin interface simplifies the creation of a hierarchy of VOMS groups and the definition of access control lists (ACLs). These ACLs are associated with roles that permit the administration of a single sub-group, or entire sub-trees in the VOMS group hierarchy. Users can then be assigned those roles, and they can self-organize VOMS-group membership. With a GUMS/PRIMA environment, rich mapping policies can be configured based on the user’s X.509 Attribute Certificates (ACs). GUMS, which is installed and used by a single site, can be configured to fetch VOMS information for all recognized VOs from the respective VO’s VOMS servers.</p>
<p>For example, NE-CAT can query the SBGrid VOMS server to obtain a list of VOMS groups (
<italic>e.g.</italic>
representing laboratory groups or collaborations) within the VO, and a list of those users (by DN) within each VOMS group. NE-CAT then has the option to automate the construction of the grid-mapfile, or to base a mapping decision directly on the VOMS group AC within the user’s X.509 authentication token. A manual protocol is necessary when new groups are added to the VO so NE-CAT can create a new system-level group.</p>
<p>This prototype has motivated several beamlines to consider the benefits of a federated identity and data management environment. The next step is to extend this prototype to a production environment and to include more users, laboratory groups and beamlines. The experience gained in the process has provided some clear guidance around the improved use of VOMS, and a need to evaluate Storage Resource Manager (Shoshani
<italic>et al.</italic>
, 2002
<xref ref-type="bibr" rid="bb23"></xref>
) software with its fuller feature set as an alternative to the basic GridFTP servers that have currently been deployed. The mechanics of system-level account creation and mapping must also be further improved, likely requiring an evaluation of a GUMS/PRIMA system as an alternative to grid-mapfiles. The lack of ACL visibility and management of files through the GO interface is acceptable in the short term, but enhancement to existing systems or integration of the alternatives mentioned above (WebDAV with LDAP, or GACL) will eventually be required. Expanding the prototype into a long-term archival system would require a greater formalization of archival policies, funding for operational infrastructure and staffing, and consideration of issues such as data set unique identifiers and public release policies. The DataCite initiative (Brase, 2009
<xref ref-type="bibr" rid="bb7"></xref>
), for example, provides a model to issue document object identifiers (DOIs) for data sets that are to be archived and disseminated.</p>
<p>An advantage of using GridFTP and X.509 identity tokens is the ease with which users and data can interact with existing federated cyberinfrastructure for any compute-intensive processing of the data. The authors have shown elsewhere (Stokes-Rees & Sliz, 2010
<xref ref-type="bibr" rid="bb25"></xref>
) the power of this approach, a form of the map/reduce (Dean & Ghemawat, 2004
<xref ref-type="bibr" rid="bb8"></xref>
) paradigm, to parallelize processing, reduce wait time for results, or to enable new, previously intractable, analytical techniques.</p>
<p>Improved data archival systems and public access to the original imaging data would aid the research process, facilitate the development of improved structure determination algorithms by researchers focused on methods, and allow independent validation of the structure. A full featured system would reduce the data management overhead, improve the secure sharing of data, retain valuable early stage data products, and publish a more complete record of the experimental results, ultimately leading to the more efficient production of high-quality models. Publication standards have already evolved to expect the ‘structure factor’ data, which provides a summarized view of the raw experimental data, to be published in addition to the structural model. Unfortunately structure factor data is not sufficient to fully validate the calculation of experimental phases, space group assignment, or the correctness of the data integration strategy. Additionally, data sets that do not lead to publication may still be valuable in the future if they are publicly accessible, possibly leading to new or improved structure models.</p>
<p>Historically, calls to establish experimental data archives for crystallography to hold larger data sets have been met with limited support, mostly owing to the high cost of establishing large community-wide archiving facilities. Such initiatives are now technologically and economically feasible. For example, storing a single-crystal data set for every one of the 67000 deposited macromolecules in the PDB derived from X-ray crystallography would require 130 TB (based on 19 MB per image and 100 images per crystal). Using bulk storage with basic data loss protection this volume of data can be stored for hardware costs of a few tens of thousands of dollars (
<ext-link ext-link-type="uri" xlink:href="http://bioteam.net/2011/08/real-world-backblaze-costs/">http://bioteam.net/2011/08/real-world-backblaze-costs/</ext-link>
) in less than one server rack. The remaining challenges then focus on security, metadata, annotation, federated user and group management, data provenance and lifecycle, replication and cataloging.</p>
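<!--
The 130 TB estimate follows directly from the stated assumptions: 67 000 structures × 100 images per crystal × 19 MB per image ≈ 127 TB.
-->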
<p>We envision a three-tier model of time- and quota-limited storage. The first tier would contain full experimental data, be private to the data owners, be retained for less than one year, and be subject to a storage quota. The second tier would be long-term private group storage, also subject to a storage quota, but with a fixed public release date three to five years in the future. The third tier would hold all public data, and be managed by the archive’s curatorial staff rather than the group owner. Pressure on per-group storage quota may encourage data owners to release data as public to move it from tier 2 private storage to tier 3 public storage earlier than the contracted release date. Designated data managers within groups could move data at will from one tier to the next, but not back again. Aspects of this model parallel the process used for structure deposition with the PDB. Fig. 2
<xref ref-type="fig" rid="fig2"></xref>
illustrates an example deployment of this prototype, including the NE-CAT facility, its shared staging and archival storage system, the laboratory file servers at Harvard, a personal computer, and a user who controls data movement between the systems with the assistance of Globus Online.</p>
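<!--
A schematic of the three-tier layout described above, combining the tier 1 and tier 2 paths shown in Fig. 2 with an illustrative name for the public tier:
  tier 1 (staging, private, quota, held under one year):          /stage/<laboratory>/
  tier 2 (archive, private, quota, public release in 3-5 years):  /data/<laboratory>/
  tier 3 (public archive, curated by archive staff):              /public/<year>/
-->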
<p>To expand the adoption of this prototype system it would be necessary to advance the integration of the SBGrid Science Portal, MyProxy server and Globus Online for integrated account management and secure data access. It would also be necessary to formalize and automate a procedure for shared facilities to approve users or groups from the central VOMS server and to integrate this with their local account creation and X.509 mapping protocols. Achieving this would provide a base for federated data access and high-performance reliable file transfer. Subsequent to this, the issues around group management, annotation, metadata, access control, archival policy, file catalogs and replication/backup would need to be considered. Among these, data security, access control and public deposition protocols would be of key importance. Tertiary issues around information lifecycle management, data set global unique identifiers, provenance, data format details and data collection conditions could also be addressed. Importantly, this archival system is not specific to macromolecular crystallography and could be applied to the many areas where long-lived, high-value and large scientific data sets are present.</p>
<p>The trial of this prototype has established the viability of integrating existing systems, software and models for federated grid computing to improve the operational conditions for shared scientific data collection facilities. Additional capabilities not present in the prototype are variously provided by systems such as iRODS (Moore
<italic>et al.</italic>
, 2006
<xref ref-type="bibr" rid="bb18"></xref>
), OODT (Mattmann
<italic>et al.</italic>
, 2004
<xref ref-type="bibr" rid="bb17"></xref>
) and ICAT (Flannery
<italic>et al.</italic>
, 2009
<xref ref-type="bibr" rid="bb9"></xref>
). Both OODT and ICAT would need to be evaluated for their modularity and capacity to be integrated with the existing SBGrid Science Portal. Issues of long-term support for the facility-based storage systems will need to be addressed, but the proposed combination of a federated environment with centralized archival storage would provide a more sustainable and economical model for the data management needs of this community.</p>
</sec>
</body>
<back>
<ack>
<p>The work was supported by the
<funding-source>National Institutes of Health</funding-source>
grant
<award-id>1U54GM094608-01</award-id>
and the
<funding-source>National Science Foundation Research Coordination Network</funding-source>
, grant
<award-id>0639193</award-id>
. This research was carried out using resources provided by the Open Science Grid (supported by the National Science Foundation and the US Department of Energy’s Office of Science). This work is based upon research conducted at the Advanced Photon Source on the Northeastern Collaborative Access Team beamlines, which are supported by grants from the National Center for Research Resources (
<award-id>5P41RR015301-10</award-id>
) and the National Institute of General Medical Sciences (
<award-id>8 P41 GM103403-10</award-id>
) from the National Institutes of Health. Use of the Advanced Photon Source, an Office of Science User Facility operated for the US Department of Energy (DOE) Office of Science by Argonne National Laboratory, was supported by the US DOE under Contract No. DE-AC02-06CH11357. AD is supported by the NIH, National Institutes of General Medical Sciences, Protein Structure Initiative award to JCSG (U54 GM094586 and GM074898). Portions of this research were performed at the Stanford Synchrotron Radiation Lightsource (SSRL), SLAC National Accelerator Laboratory. The SSRL is a national user facility operated by Stanford University on behalf of the US Department of Energy, Office of Basic Energy Sciences. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by
<funding-source>National Science Foundation</funding-source>
grant number
<award-id>OCI-1053575</award-id>
. The authors would also like to thank Steve Tuecke, Rachana Ananthakrishnan and Raj Kettimuthu from Globus Online for their support and encouragement with this project, and James Withrow from NE-CAT for technical support in the deployment process. Terrence Martin, at the University of California San Diego (UCSD), and Brian Bockleman, at the University of Nebraska-Lincoln, provided invaluable assistance with service configuration and data management on the HadoopFS system at UCSD.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="bb2">
<mixed-citation publication-type="other">Alfieri, R., Cecchini, R., Ciaschini, V., dell’Agnello, L., Frohner, Á., Lorentey, K. & Spataro, F. (2005).
<italic>Future Gener. Comput. Syst.</italic>
<bold>21</bold>
, 549–558.</mixed-citation>
</ref>
<ref id="bb3">
<mixed-citation publication-type="other">Allcock, W., Bester, J., Bresnahan, J., Chervenak, A., Liming, L. & Tuecke, S. (2001).
<italic>GridFTP: Protocol extensions to FTP for the Grid.</italic>
Global Grid Forum.</mixed-citation>
</ref>
<ref id="bb4">
<mixed-citation publication-type="other">Baker, R., Yu, D. & Wlodek, T. (2003).
<italic>Proceedings of the Conference on Computing in High Energy and Nuclear Physics</italic>
, 24–28 March 2003, La Jolla, CA, USA.</mixed-citation>
</ref>
<ref id="bb5">
<mixed-citation publication-type="other">Basney, J., Humphrey, M. & Welch, V. (2005).
<italic>Softw. Pract. Exp.</italic>
<bold>35</bold>
, 801–816.</mixed-citation>
</ref>
<ref id="bb6">
<mixed-citation publication-type="other">Berman, H. M. (2000).
<italic>Nucleic Acids Res.</italic>
<bold>28</bold>
, 235–242.</mixed-citation>
</ref>
<ref id="bb7">
<mixed-citation publication-type="other">Brase, J. (2009).
<italic>Proceedings of the Fourth International Conference on Cooperation and Promotion of Information Resources in Science and Technology (COINFO ’09)</italic>
, pp. 257–261. Washington: IEEE Computer Society.</mixed-citation>
</ref>
<ref id="bb8">
<mixed-citation publication-type="other">Dean, J. & Ghemawat, S. (2004).
<italic>Proceedings of the 6th Symposium on Operating Systems Design and Implementation</italic>
, 6–8 December 2004, San Francisco, CA, USA.</mixed-citation>
</ref>
<ref id="bb9">
<mixed-citation publication-type="other">Flannery, D., Matthews, B., Griffin, T., Bicarregui, J., Gleaves, M., Lerusse, L., Downing, R., Ashton, A., Sufi, S.m Drinkwater, G. & Kleese, K. (2009).
<italic>Fifth IEEE International Conference on e-Science (e-Science’2009)</italic>
, 9–11 December 2009, Oxford, UK.</mixed-citation>
</ref>
<ref id="bb10">
<mixed-citation publication-type="other">Foster, I. (2001).
<italic>Int. J. High Perform. Comput. Appl.</italic>
<bold>15</bold>
, 200–222.</mixed-citation>
</ref>
<ref id="bb11">
<mixed-citation publication-type="other">Foster, I. (2005).
<italic>Lecture Notes in Computer Science</italic>
, Vol. 3779,
<italic>Network and Parallel Computing</italic>
, edited by H. Jin, D. Reed and W. Jiang, pp. 2–13. Berlin/Heidelberg: Springer.</mixed-citation>
</ref>
<ref id="bb12">
<mixed-citation publication-type="other">Foster, I. (2011).
<italic>IEEE Internet Comput.</italic>
<bold>15</bold>
, 70–73.</mixed-citation>
</ref>
<ref id="bb13">
<mixed-citation publication-type="other">Housley, R., Polk, W., Ford, W. & Solo, D. (2002).
<italic>RFC3280: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile.</italic>
RFC Editor, USA.</mixed-citation>
</ref>
<ref id="bb14">
<mixed-citation publication-type="other">Klimeck, G., McLennan, M., Brophy, S. P., Adams III, G. B. & Lundstrom, M. S. (2008).
<italic>Comput. Sci. Eng.</italic>
<bold>10</bold>
, 17–23.</mixed-citation>
</ref>
<ref id="bb15">
<mixed-citation publication-type="other">McLennan, M. & Kennell, R. (2010).
<italic>Comput. Sci. Eng.</italic>
<bold>12</bold>
, 48–52.</mixed-citation>
</ref>
<ref id="bb16">
<mixed-citation publication-type="other">McNab, A. & Li, Y. (2010).
<italic>17th International Conference on Computing in High Energy and Nuclear Physics (CHEP 09)</italic>
, 21–27 March 2009, Prague, Czech Republic.</mixed-citation>
</ref>
<ref id="bb17">
<mixed-citation publication-type="other">Mattmann, C. A., Crichton, D. J., Hughes, S. J., Kelly, S. C. & Ramirez, P. (2004).
<italic>Proceedings of the 4th Working IEEE/IFIP Conference on Software Architecture (WICSA-4)</italic>
, 12–15 June 2004, Oslo, Norway.</mixed-citation>
</ref>
<ref id="bb18">
<mixed-citation publication-type="other">Moore, R., Chen, S.-Y., Schroeder, W., Rajasekar, A., Wan, M. & Jagatheesan, A. (2006).
<italic>2nd International Conference on e-Science and Grid Computing (e-Science’06)</italic>
, Amsterdam, Netherlands.</mixed-citation>
</ref>
<ref id="bb19">
<mixed-citation publication-type="other">Muruganantham, D., Helm, M. & Genovese, T. (2005).
<italic>J. Phys. Conf. Ser.</italic>
<bold>16</bold>
, 591–595.</mixed-citation>
</ref>
<ref id="bb20">
<mixed-citation publication-type="other">Novotny, J., Russell, M. & Wehrens, O. (2004).
<italic>Concurrency Comput. Pract. Exp.</italic>
<bold>16</bold>
, 503–513.</mixed-citation>
</ref>
<ref id="bb21">
<mixed-citation publication-type="other">Pordes, R.
<italic>et al.</italic>
(2007).
<italic>J. Phys. Conf. Ser.</italic>
<bold>78</bold>
, 012057.</mixed-citation>
</ref>
<ref id="bb22">
<mixed-citation publication-type="other">Roy, A.
<italic>et al.</italic>
(2009).
<italic>J. Phys. Conf. Ser.</italic>
<bold>180</bold>
, 012052.</mixed-citation>
</ref>
<ref id="bb23">
<mixed-citation publication-type="other">Shoshani, A., Sim, A. & Gu, J. (2002).
<italic>Nineteenth IEEE Symposium on Mass Storage Systems and Technologies (MSS’02).</italic>
</mixed-citation>
</ref>
<ref id="bb24">
<mixed-citation publication-type="other">Soltis, S. M.
<italic>et al.</italic>
(2008).
<italic>Acta Cryst.</italic>
D
<bold>64</bold>
, 1210–1221.</mixed-citation>
</ref>
<ref id="bb25">
<mixed-citation publication-type="other">Stokes-Rees, I. & Sliz, P. (2010).
<italic>Proc. Natl Acad. Sci.</italic>
<bold>107</bold>
, 21476.</mixed-citation>
</ref>
<ref id="bb26">
<mixed-citation publication-type="other">Von Laszewski, G. & Foster, I. (1999).
<italic>Proceedings of the Workshop on Distributed Computing on the Web</italic>
, Rostock, Germany, June 1999.</mixed-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="fig1" position="float">
<label>Figure 1</label>
<caption>
<p>Geographic distribution of five endpoints participating in the trial of the prototype system. The flags represent, from left to right, Stanford Synchrotron Radiation Lightsource (blue, Stanford Linear Accelerator Center, Palo Alto, CA, USA), Fermi National Accelerator Laboratory (white, Batavia, IL, USA), Northeast Collaborative Access Team (green, Advanced Photon Source, Argonne, IL, USA), National Center for Supercomputing Applications (red, University of Illinois Urbana-Champaign, Champaign, IL, USA) and Harvard Medical School (yellow, Boston, MA, USA).</p>
</caption>
<graphic xlink:href="s-19-00462-fig1"></graphic>
</fig>
<fig id="fig2" position="float">
<label>Figure 2</label>
<caption>
<p>Example usage scenario. User ‘Scott’ from the Sliz Laboratory at Harvard collects data at the NE-CAT beamline. It is stored temporarily to a tier 1 staging server in sliz@necat.aps.argonne.gov:/stage/sliz before being archived to tier 2 storage in sliz@necat.aps.argonne.gov:/data/sliz. Scott can use Globus Online to access his data at NE-CAT and transfer it to his personal space on the laboratory file server scott@sliz.harvard.edu:/home/scott or to his own laptop. Other laboratory members can access his data at NE-CAT owing to the mapping of all Sliz Laboratory members to the same system identity. The general public can access public archived data from 2009 in the tier 3 archival storage through a web interface. More recent archived data from 2010 and 2011 is embargoed and not available to the public.</p>
</caption>
<graphic xlink:href="s-19-00462-fig2"></graphic>
</fig>
<table-wrap id="table1" position="float">
<label>Table 1</label>
<caption>
<title>Transfer rates between participating sites</title>
</caption>
<table frame="hsides" rules="groups">
<thead valign="top">
<tr>
<th style="" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom"> </th>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="5" align="left" charoff="50" valign="bottom">Transfer rate (MB s
<sup>−1</sup>
)</th>
</tr>
<tr>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom"> </th>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom">To HMS</th>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom">To SSRL</th>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom">To NE-CAT</th>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom">To NCSA</th>
<th style="border-bottom:1px solid black;" rowspan="1" colspan="1" align="left" charoff="50" valign="bottom">To FNAL</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td style="" rowspan="1" colspan="1" align="left" valign="top">From HMS</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top"> </td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">47</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">29</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">53</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">45</td>
</tr>
<tr>
<td style="" rowspan="1" colspan="1" align="left" valign="top">From SSRL</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">23</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top"> </td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">26</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">22</td>
</tr>
<tr>
<td style="" rowspan="1" colspan="1" align="left" valign="top">From NE-CAT</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">29</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top"> </td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
</tr>
<tr>
<td style="" rowspan="1" colspan="1" align="left" valign="top">From NCSA</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">19</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">20</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top"> </td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
</tr>
<tr>
<td style="" rowspan="1" colspan="1" align="left" valign="top">From FNAL</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">36</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">39</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top">N/A</td>
<td style="" rowspan="1" colspan="1" align="left" valign="top"> </td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/CyberinfraV1/Data/Pmc/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000556  | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Corpus/biblio.hfd -nk 000556  | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    CyberinfraV1
   |flux=    Pmc
   |étape=   Corpus
   |type=    RBID
   |clé=     
   |texte=   
}}

Wicri

This area was generated with Dilib version V0.6.25.
Data generation: Thu Oct 27 09:30:58 2016. Site generation: Sun Mar 10 23:08:40 2024