Cyberinfrastructure Exploration Server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

A Three Tier Architecture for LiDAR Interpolation and Analysis

Internal identifier: 000227 (PascalFrancis/Corpus); previous: 000226; next: 000228

A Three Tier Architecture for LiDAR Interpolation and Analysis

Authors: Efrat Jaeger-Frank; Christopher J. Crosby; Ashraf Memon; Viswanath Nandigam; J. Ramon Arrowsmith; Jeffery Conner; Ilkay Altintas; Chaitan Baru

Source: Lecture notes in computer science; ISSN 0302-9743; 2006; Vol. 3991; pp. 920-927

RBID: Pascal:08-0051275

French descriptors

English descriptors

Abstract

Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide a modularized and configurable environment for accessing Grid clusters and executing high performance computational tasks. In addition, it is beneficial to make these tools available to the community in a unified framework through a shared cyberinfrastructure, or a portal, so scientists can focus on their scientific work and not be concerned with the implementation of the underlying infrastructure. In this paper we describe a scientific workflow approach to coordinate various resources as data analysis pipelines. We present a three tier architecture for LiDAR interpolation and analysis, a high performance processing of point intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available through the GEON portal and, though focused on LiDAR processing, is applicable to other domains as well.
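
The abstract above outlines an architecture rather than an implementation. As a reading aid, the minimal Python sketch below illustrates one possible hand-off between the three tiers it names (portal, scientific workflow engine, Grid cluster); every class name, pipeline step, and parameter here is an illustrative assumption, not the authors' code.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LidarRequest:
    # Parameters a portal user might supply for an interpolation run (illustrative).
    dataset_id: str
    bounding_box: tuple        # (xmin, ymin, xmax, ymax)
    grid_resolution: float     # output cell size, in dataset units
    algorithm: str = "spline"  # interpolation method requested

def submit_to_grid(step_name: str, request: LidarRequest) -> str:
    # Tier 3 stub: stands in for dispatching one step to a Grid cluster.
    return f"{step_name} completed for {request.dataset_id}"

def run_workflow(request: LidarRequest,
                 dispatch: Callable[[str, LidarRequest], str]) -> List[str]:
    # Tier 2: a workflow engine chains the steps as a data analysis pipeline.
    steps = ["subset_points", f"interpolate_{request.algorithm}", "render_result"]
    return [dispatch(step, request) for step in steps]

def portal_submit(request: LidarRequest) -> List[str]:
    # Tier 1: the portal validates the request and forwards it to the workflow.
    if request.grid_resolution <= 0:
        raise ValueError("grid resolution must be positive")
    return run_workflow(request, submit_to_grid)

if __name__ == "__main__":
    job = LidarRequest("sample_point_cloud", (-123.0, 38.0, -122.5, 38.5), 0.5)
    for line in portal_submit(job):
        print(line)

In this sketch the portal tier only validates the request, the workflow tier chains the processing steps as a data analysis pipeline, and the Grid tier is reduced to a stub that would otherwise dispatch each step to a cluster.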

Record in standard format (ISO 2709)

See the documentation on the Inist Standard format.

pA  
A01 01  1    @0 0302-9743
A05       @2 3991
A08 01  1  ENG  @1 A Three Tier Architecture for LiDAR Interpolation and Analysis
A09 01  1  ENG  @1 Computational science. Part I-IV : ICCS 2006 : 6th international conference, Reading, UK, May 28-31, 2006 : proceedings
A11 01  1    @1 JAEGER-FRANK (Efrat)
A11 02  1    @1 CROSBY (Christopher J.)
A11 03  1    @1 MEMON (Ashraf)
A11 04  1    @1 NANDIGAM (Viswanath)
A11 05  1    @1 ARROWSMITH (J. Ramon)
A11 06  1    @1 CONNER (Jeffery)
A11 07  1    @1 ALTINTAS (Ilkay)
A11 08  1    @1 BARU (Chaitan)
A14 01      @1 San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive @2 La Jolla, CA 92093 @3 USA @Z 1 aut. @Z 3 aut. @Z 4 aut. @Z 7 aut. @Z 8 aut.
A14 02      @1 Department of Geological Sciences, Arizona State University @2 Tempe, AZ 85281 @3 USA @Z 2 aut. @Z 5 aut. @Z 6 aut.
A20       @1 920-927
A21       @1 2006
A23 01      @0 ENG
A26 01      @0 3-540-34379-2
A43 01      @1 INIST @2 16343 @5 354000172811804420
A44       @0 0000 @1 © 2008 INIST-CNRS. All rights reserved.
A45       @0 9 ref.
A47 01  1    @0 08-0051275
A60       @1 P @2 C
A61       @0 A
A64 01  1    @0 Lecture notes in computer science
A66 01      @0 DEU
C01 01    ENG  @0 Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide a modularized and configurable environment for accessing Grid clusters and executing high performance computational tasks. In addition, it is beneficial to make these tools available to the community in a unified framework through a shared cyberinfrastructure, or a portal, so scientists can focus on their scientific work and not be concerned with the implementation of the underlying infrastructure. In this paper we describe a scientific workflow approach to coordinate various resources as data analysis pipelines. We present a three tier architecture for LiDAR interpolation and analysis, a high performance processing of point intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available through the GEON portal and, though focused on LiDAR processing, is applicable to other domains as well.
C02 01  X    @0 001D02B04
C02 02  X    @0 001D02B07D
C02 03  X    @0 001D02A05
C03 01  X  FRE  @0 Système réparti @5 06
C03 01  X  ENG  @0 Distributed system @5 06
C03 01  X  SPA  @0 Sistema repartido @5 06
C03 02  3  FRE  @0 Base donnée très grande @5 07
C03 02  3  ENG  @0 Very large databases @5 07
C03 03  X  FRE  @0 Allocation ressource @5 08
C03 03  X  ENG  @0 Resource allocation @5 08
C03 03  X  SPA  @0 Asignación recurso @5 08
C03 04  X  FRE  @0 Calcul réparti @5 09
C03 04  X  ENG  @0 Distributed computing @5 09
C03 04  X  SPA  @0 Cálculo repartido @5 09
C03 05  X  FRE  @0 Grille @5 10
C03 05  X  ENG  @0 Grid @5 10
C03 05  X  SPA  @0 Rejilla @5 10
C03 06  X  FRE  @0 Haute performance @5 11
C03 06  X  ENG  @0 High performance @5 11
C03 06  X  SPA  @0 Alto rendimiento @5 11
C03 07  X  FRE  @0 Collecticiel @5 12
C03 07  X  ENG  @0 Groupware @5 12
C03 07  X  SPA  @0 Groupware @5 12
C03 08  X  FRE  @0 Workflow @5 13
C03 08  X  ENG  @0 Workflow @5 13
C03 08  X  SPA  @0 Workflow @5 13
C03 09  X  FRE  @0 Analyse donnée @5 14
C03 09  X  ENG  @0 Data analysis @5 14
C03 09  X  SPA  @0 Análisis datos @5 14
C03 10  X  FRE  @0 Processeur pipeline @5 15
C03 10  X  ENG  @0 Pipeline processor @5 15
C03 10  X  SPA  @0 Procesador oleoducto @5 15
C03 11  X  FRE  @0 Radar optique @5 18
C03 11  X  ENG  @0 Lidar @5 18
C03 11  X  SPA  @0 Radar óptico @5 18
C03 12  X  FRE  @0 Evaluation performance @5 19
C03 12  X  ENG  @0 Performance evaluation @5 19
C03 12  X  SPA  @0 Evaluación prestación @5 19
N21       @1 028
N44 01      @1 OTO
N82       @1 OTO
pR  
A30 01  1  ENG  @1 International Conference on Computational Science @2 6 @3 Reading GBR @4 2006

Inist format (server)

NO : PASCAL 08-0051275 INIST
ET : A Three Tier Architecture for LiDAR Interpolation and Analysis
AU : JAEGER-FRANK (Efrat); CROSBY (Christopher J.); MEMON (Ashraf); NANDIGAM (Viswanath); ARROWSMITH (J. Ramon); CONNER (Jeffery); ALTINTAS (Ilkay); BARU (Chaitan)
AF : San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive/La Jolla, CA 92093/Etats-Unis (1 aut., 3 aut., 4 aut., 7 aut., 8 aut.); Department of Geological Sciences, Arizona State University/Tempe, AZ 85281/Etats-Unis (2 aut., 5 aut., 6 aut.)
DT : Publication en série; Congrès; Niveau analytique
SO : Lecture notes in computer science; ISSN 0302-9743; Allemagne; Da. 2006; Vol. 3991; Pp. 920-927; Bibl. 9 ref.
LA : Anglais
EA : Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide a modularized and configurable environment for accessing Grid clusters and executing high performance computational tasks. In addition, it is beneficial to make these tools available to the community in a unified framework through a shared cyberinfrastructure, or a portal, so scientists can focus on their scientific work and not be concerned with the implementation of the underlying infrastructure. In this paper we describe a scientific workflow approach to coordinate various resources as data analysis pipelines. We present a three tier architecture for LiDAR interpolation and analysis, a high performance processing of point intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available through the GEON portal and, though focused on LiDAR processing, is applicable to other domains as well.
CC : 001D02B04; 001D02B07D; 001D02A05
FD : Système réparti; Base donnée très grande; Allocation ressource; Calcul réparti; Grille; Haute performance; Collecticiel; Workflow; Analyse donnée; Processeur pipeline; Radar optique; Evaluation performance
ED : Distributed system; Very large databases; Resource allocation; Distributed computing; Grid; High performance; Groupware; Workflow; Data analysis; Pipeline processor; Lidar; Performance evaluation
SD : Sistema repartido; Asignación recurso; Cálculo repartido; Rejilla; Alto rendimiento; Groupware; Workflow; Análisis datos; Procesador oleoducto; Radar óptico; Evaluación prestación
LO : INIST-16343.354000172811804420
ID : 08-0051275

Links to Exploration step

Pascal:08-0051275

The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">A Three Tier Architecture for LiDAR Interpolation and Analysis</title>
<author>
<name sortKey="Jaeger Frank, Efrat" sort="Jaeger Frank, Efrat" uniqKey="Jaeger Frank E" first="Efrat" last="Jaeger-Frank">Efrat Jaeger-Frank</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Crosby, Christopher J" sort="Crosby, Christopher J" uniqKey="Crosby C" first="Christopher J." last="Crosby">Christopher J. Crosby</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Memon, Ashraf" sort="Memon, Ashraf" uniqKey="Memon A" first="Ashraf" last="Memon">Ashraf Memon</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Nandigam, Viswanath" sort="Nandigam, Viswanath" uniqKey="Nandigam V" first="Viswanath" last="Nandigam">Viswanath Nandigam</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Arrowsmith, J Ramon" sort="Arrowsmith, J Ramon" uniqKey="Arrowsmith J" first="J. Ramon" last="Arrowsmith">J. Ramon Arrowsmith</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Conner, Jeffery" sort="Conner, Jeffery" uniqKey="Conner J" first="Jeffery" last="Conner">Jeffery Conner</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Altintas, Ilkay" sort="Altintas, Ilkay" uniqKey="Altintas I" first="Ilkay" last="Altintas">Ilkay Altintas</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Baru, Chaitan" sort="Baru, Chaitan" uniqKey="Baru C" first="Chaitan" last="Baru">Chaitan Baru</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">08-0051275</idno>
<date when="2006">2006</date>
<idno type="stanalyst">PASCAL 08-0051275 INIST</idno>
<idno type="RBID">Pascal:08-0051275</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000227</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">A Three Tier Architecture for LiDAR Interpolation and Analysis</title>
<author>
<name sortKey="Jaeger Frank, Efrat" sort="Jaeger Frank, Efrat" uniqKey="Jaeger Frank E" first="Efrat" last="Jaeger-Frank">Efrat Jaeger-Frank</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Crosby, Christopher J" sort="Crosby, Christopher J" uniqKey="Crosby C" first="Christopher J." last="Crosby">Christopher J. Crosby</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Memon, Ashraf" sort="Memon, Ashraf" uniqKey="Memon A" first="Ashraf" last="Memon">Ashraf Memon</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Nandigam, Viswanath" sort="Nandigam, Viswanath" uniqKey="Nandigam V" first="Viswanath" last="Nandigam">Viswanath Nandigam</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Arrowsmith, J Ramon" sort="Arrowsmith, J Ramon" uniqKey="Arrowsmith J" first="J. Ramon" last="Arrowsmith">J. Ramon Arrowsmith</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Conner, Jeffery" sort="Conner, Jeffery" uniqKey="Conner J" first="Jeffery" last="Conner">Jeffery Conner</name>
<affiliation>
<inist:fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Altintas, Ilkay" sort="Altintas, Ilkay" uniqKey="Altintas I" first="Ilkay" last="Altintas">Ilkay Altintas</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
<author>
<name sortKey="Baru, Chaitan" sort="Baru, Chaitan" uniqKey="Baru C" first="Chaitan" last="Baru">Chaitan Baru</name>
<affiliation>
<inist:fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</inist:fA14>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
<imprint>
<date when="2006">2006</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Lecture notes in computer science</title>
<idno type="ISSN">0302-9743</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Data analysis</term>
<term>Distributed computing</term>
<term>Distributed system</term>
<term>Grid</term>
<term>Groupware</term>
<term>High performance</term>
<term>Lidar</term>
<term>Performance evaluation</term>
<term>Pipeline processor</term>
<term>Resource allocation</term>
<term>Very large databases</term>
<term>Workflow</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Système réparti</term>
<term>Base donnée très grande</term>
<term>Allocation ressource</term>
<term>Calcul réparti</term>
<term>Grille</term>
<term>Haute performance</term>
<term>Collecticiel</term>
<term>Workflow</term>
<term>Analyse donnée</term>
<term>Processeur pipeline</term>
<term>Radar optique</term>
<term>Evaluation performance</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide a modularized and configurable environment for accessing Grid clusters and executing high performance computational tasks. In addition, it is beneficial to make these tools available to the community in a unified framework through a shared cyberinfrastructure, or a portal, so scientists can focus on their scientific work and not be concerned with the implementation of the underlying infrastructure. In this paper we describe a scientific workflow approach to coordinate various resources as data analysis pipelines. We present a three tier architecture for LiDAR. interpolation and analysis, a high performance processing of point intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available through the GEON portal and, though focused on LiDAR processing, is applicable to other domains as well.</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0302-9743</s0>
</fA01>
<fA05>
<s2>3991</s2>
</fA05>
<fA08 i1="01" i2="1" l="ENG">
<s1>A Three Tier Architecture for LiDAR Interpolation and Analysis</s1>
</fA08>
<fA09 i1="01" i2="1" l="ENG">
<s1>Computational science. Part I-IV : ICCS 2006 : 6th international conference, Reading, UK, May 28-31, 2006 : proceedings</s1>
</fA09>
<fA11 i1="01" i2="1">
<s1>JAEGER-FRANK (Efrat)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>CROSBY (Christopher J.)</s1>
</fA11>
<fA11 i1="03" i2="1">
<s1>MEMON (Ashraf)</s1>
</fA11>
<fA11 i1="04" i2="1">
<s1>NANDIGAM (Viswanath)</s1>
</fA11>
<fA11 i1="05" i2="1">
<s1>ARROWSMITH (J. Ramon)</s1>
</fA11>
<fA11 i1="06" i2="1">
<s1>CONNER (Jeffery)</s1>
</fA11>
<fA11 i1="07" i2="1">
<s1>ALTINTAS (Ilkay)</s1>
</fA11>
<fA11 i1="08" i2="1">
<s1>BARU (Chaitan)</s1>
</fA11>
<fA14 i1="01">
<s1>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive</s1>
<s2>La Jolla, CA 92093</s2>
<s3>USA</s3>
<sZ>1 aut.</sZ>
<sZ>3 aut.</sZ>
<sZ>4 aut.</sZ>
<sZ>7 aut.</sZ>
<sZ>8 aut.</sZ>
</fA14>
<fA14 i1="02">
<s1>Department of Geological Sciences, Arizona State University</s1>
<s2>Tempe, AZ 85281</s2>
<s3>USA</s3>
<sZ>2 aut.</sZ>
<sZ>5 aut.</sZ>
<sZ>6 aut.</sZ>
</fA14>
<fA20>
<s1>920-927</s1>
</fA20>
<fA21>
<s1>2006</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA26 i1="01">
<s0>3-540-34379-2</s0>
</fA26>
<fA43 i1="01">
<s1>INIST</s1>
<s2>16343</s2>
<s5>354000172811804420</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2008 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>9 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>08-0051275</s0>
</fA47>
<fA60>
<s1>P</s1>
<s2>C</s2>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Lecture notes in computer science</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fC01 i1="01" l="ENG">
<s0>Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide a modularized and configurable environment for accessing Grid clusters and executing high performance computational tasks. In addition, it is beneficial to make these tools available to the community in a unified framework through a shared cyberinfrastructure, or a portal, so scientists can focus on their scientific work and not be concerned with the implementation of the underlying infrastructure. In this paper we describe a scientific workflow approach to coordinate various resources as data analysis pipelines. We present a three tier architecture for LiDAR interpolation and analysis, a high performance processing of point intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available through the GEON portal and, though focused on LiDAR processing, is applicable to other domains as well.</s0>
</fC01>
<fC02 i1="01" i2="X">
<s0>001D02B04</s0>
</fC02>
<fC02 i1="02" i2="X">
<s0>001D02B07D</s0>
</fC02>
<fC02 i1="03" i2="X">
<s0>001D02A05</s0>
</fC02>
<fC03 i1="01" i2="X" l="FRE">
<s0>Système réparti</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="ENG">
<s0>Distributed system</s0>
<s5>06</s5>
</fC03>
<fC03 i1="01" i2="X" l="SPA">
<s0>Sistema repartido</s0>
<s5>06</s5>
</fC03>
<fC03 i1="02" i2="3" l="FRE">
<s0>Base donnée très grande</s0>
<s5>07</s5>
</fC03>
<fC03 i1="02" i2="3" l="ENG">
<s0>Very large databases</s0>
<s5>07</s5>
</fC03>
<fC03 i1="03" i2="X" l="FRE">
<s0>Allocation ressource</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="ENG">
<s0>Resource allocation</s0>
<s5>08</s5>
</fC03>
<fC03 i1="03" i2="X" l="SPA">
<s0>Asignación recurso</s0>
<s5>08</s5>
</fC03>
<fC03 i1="04" i2="X" l="FRE">
<s0>Calcul réparti</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="ENG">
<s0>Distributed computing</s0>
<s5>09</s5>
</fC03>
<fC03 i1="04" i2="X" l="SPA">
<s0>Cálculo repartido</s0>
<s5>09</s5>
</fC03>
<fC03 i1="05" i2="X" l="FRE">
<s0>Grille</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="X" l="ENG">
<s0>Grid</s0>
<s5>10</s5>
</fC03>
<fC03 i1="05" i2="X" l="SPA">
<s0>Rejilla</s0>
<s5>10</s5>
</fC03>
<fC03 i1="06" i2="X" l="FRE">
<s0>Haute performance</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="ENG">
<s0>High performance</s0>
<s5>11</s5>
</fC03>
<fC03 i1="06" i2="X" l="SPA">
<s0>Alto rendimiento</s0>
<s5>11</s5>
</fC03>
<fC03 i1="07" i2="X" l="FRE">
<s0>Collecticiel</s0>
<s5>12</s5>
</fC03>
<fC03 i1="07" i2="X" l="ENG">
<s0>Groupware</s0>
<s5>12</s5>
</fC03>
<fC03 i1="07" i2="X" l="SPA">
<s0>Groupware</s0>
<s5>12</s5>
</fC03>
<fC03 i1="08" i2="X" l="FRE">
<s0>Workflow</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="ENG">
<s0>Workflow</s0>
<s5>13</s5>
</fC03>
<fC03 i1="08" i2="X" l="SPA">
<s0>Workflow</s0>
<s5>13</s5>
</fC03>
<fC03 i1="09" i2="X" l="FRE">
<s0>Analyse donnée</s0>
<s5>14</s5>
</fC03>
<fC03 i1="09" i2="X" l="ENG">
<s0>Data analysis</s0>
<s5>14</s5>
</fC03>
<fC03 i1="09" i2="X" l="SPA">
<s0>Análisis datos</s0>
<s5>14</s5>
</fC03>
<fC03 i1="10" i2="X" l="FRE">
<s0>Processeur pipeline</s0>
<s5>15</s5>
</fC03>
<fC03 i1="10" i2="X" l="ENG">
<s0>Pipeline processor</s0>
<s5>15</s5>
</fC03>
<fC03 i1="10" i2="X" l="SPA">
<s0>Procesador oleoducto</s0>
<s5>15</s5>
</fC03>
<fC03 i1="11" i2="X" l="FRE">
<s0>Radar optique</s0>
<s5>18</s5>
</fC03>
<fC03 i1="11" i2="X" l="ENG">
<s0>Lidar</s0>
<s5>18</s5>
</fC03>
<fC03 i1="11" i2="X" l="SPA">
<s0>Radar óptico</s0>
<s5>18</s5>
</fC03>
<fC03 i1="12" i2="X" l="FRE">
<s0>Evaluation performance</s0>
<s5>19</s5>
</fC03>
<fC03 i1="12" i2="X" l="ENG">
<s0>Performance evaluation</s0>
<s5>19</s5>
</fC03>
<fC03 i1="12" i2="X" l="SPA">
<s0>Evaluación prestación</s0>
<s5>19</s5>
</fC03>
<fN21>
<s1>028</s1>
</fN21>
<fN44 i1="01">
<s1>OTO</s1>
</fN44>
<fN82>
<s1>OTO</s1>
</fN82>
</pA>
<pR>
<fA30 i1="01" i2="1" l="ENG">
<s1>International Conference on Computational Science</s1>
<s2>6</s2>
<s3>Reading GBR</s3>
<s4>2006</s4>
</fA30>
</pR>
</standard>
<server>
<NO>PASCAL 08-0051275 INIST</NO>
<ET>A Three Tier Architecture for LiDAR Interpolation and Analysis</ET>
<AU>JAEGER-FRANK (Efrat); CROSBY (Christopher J.); MEMON (Ashraf); NANDIGAM (Viswanath); ARROWSMITH (J. Ramon); CONNER (Jeffery); ALTINTAS (Ilkay); BARU (Chaitan)</AU>
<AF>San Diego Supercomputer Center, University of California, San Diego, 9500 Gilman Drive/La Jolla, CA 92093/Etats-Unis (1 aut., 3 aut., 4 aut., 7 aut., 8 aut.); Department of Geological Sciences, Arizona State University/Tempe, AZ 85281/Etats-Unis (2 aut., 5 aut., 6 aut.)</AF>
<DT>Publication en série; Congrès; Niveau analytique</DT>
<SO>Lecture notes in computer science; ISSN 0302-9743; Allemagne; Da. 2006; Vol. 3991; Pp. 920-927; Bibl. 9 ref.</SO>
<LA>Anglais</LA>
<EA>Emerging Grid technologies enable solving scientific problems that involve large datasets and complex analyses. Coordinating distributed Grid resources and computational processes requires adaptable interfaces and tools that provide a modularized and configurable environment for accessing Grid clusters and executing high performance computational tasks. In addition, it is beneficial to make these tools available to the community in a unified framework through a shared cyberinfrastructure, or a portal, so scientists can focus on their scientific work and not be concerned with the implementation of the underlying infrastructure. In this paper we describe a scientific workflow approach to coordinate various resources as data analysis pipelines. We present a three tier architecture for LiDAR interpolation and analysis, a high performance processing of point intensive datasets, utilizing a portal, a scientific workflow engine and Grid technologies. Our proposed solution is available through the GEON portal and, though focused on LiDAR processing, is applicable to other domains as well.</EA>
<CC>001D02B04; 001D02B07D; 001D02A05</CC>
<FD>Système réparti; Base donnée très grande; Allocation ressource; Calcul réparti; Grille; Haute performance; Collecticiel; Workflow; Analyse donnée; Processeur pipeline; Radar optique; Evaluation performance</FD>
<ED>Distributed system; Very large databases; Resource allocation; Distributed computing; Grid; High performance; Groupware; Workflow; Data analysis; Pipeline processor; Lidar; Performance evaluation</ED>
<SD>Sistema repartido; Asignación recurso; Cálculo repartido; Rejilla; Alto rendimiento; Groupware; Workflow; Análisis datos; Procesador oleoducto; Radar óptico; Evaluación prestación</SD>
<LO>INIST-16343.354000172811804420</LO>
<ID>08-0051275</ID>
</server>
</inist>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/CyberinfraV1/Data/PascalFrancis/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000227 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Corpus/biblio.hfd -nk 000227 | SxmlIndent | more
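
Once the record has been extracted, the XML shown above can be post-processed with ordinary tools. A minimal Python sketch, assuming the output of the command above has been redirected to a local file named 000227.xml (a hypothetical name) and that the export declares all namespace prefixes such as inist::

import xml.etree.ElementTree as ET

def local(tag: str) -> str:
    # Strip a namespace, e.g. '{http://www.tei-c.org/ns/1.0}title' -> 'title'.
    return tag.rsplit("}", 1)[-1]

tree = ET.parse("000227.xml")
root = tree.getroot()

# Title of the record: first <title> element encountered.
title = next(el.text for el in root.iter() if local(el.tag) == "title")
print("Title:", title)

# Author display names, taken from the sortKey attribute of <name> elements.
authors = sorted({el.get("sortKey") for el in root.iter() if local(el.tag) == "name"} - {None})
print("Authors:", "; ".join(authors))

The helper strips namespace URIs so that the element tests do not depend on the exact namespace declarations used by the export.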

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    CyberinfraV1
   |flux=    PascalFrancis
   |étape=   Corpus
   |type=    RBID
   |clé=     Pascal:08-0051275
   |texte=   A Three Tier Architecture for LiDAR Interpolation and Analysis
}}

Wicri

This area was generated with Dilib version V0.6.25.
Data generation: Thu Oct 27 09:30:58 2016. Site generation: Sun Mar 10 23:08:40 2024