Exploration server on haptic devices

Please note: this site is under development.
Please note: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web

Internal identifier: 002125 (Ncbi/Merge); previous: 002124; next: 002126

Authors: Álvaro Sigüenza; David Díaz-Pardo; Jesús Bernat; Vasile Vancea; José Luis Blanco; David Conejero; Luis Hernández Gómez

Source: Sensors (Basel, Switzerland), 2012

RBID: PMC:3386742

Abstract

Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures, etc., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.


Url: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3386742
DOI: 10.3390/s120506307
PubMed: 22778643
PubMed Central: 3386742


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web</title>
<author>
<name sortKey="Siguenza, Alvaro" sort="Siguenza, Alvaro" uniqKey="Siguenza A" first="Álvaro" last="Sigüenza">Álvaro Sigüenza</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Diaz Pardo, David" sort="Diaz Pardo, David" uniqKey="Diaz Pardo D" first="David" last="Díaz-Pardo">David Díaz-Pardo</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bernat, Jesus" sort="Bernat, Jesus" uniqKey="Bernat J" first="Jesús" last="Bernat">Jesús Bernat</name>
<affiliation>
<nlm:aff id="af2-sensors-12-06307"> Telefónica Investigación y Desarrollo, Distrito C. Edificio Oeste 1, Ronda de la Comunicación, s/n, 28050 Madrid, Spain; E-Mails:
<email>bernat@tid.es</email>
(J.B.);
<email>dco@tid.es</email>
(D.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vancea, Vasile" sort="Vancea, Vasile" uniqKey="Vancea V" first="Vasile" last="Vancea">Vasile Vancea</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Blanco, Jose Luis" sort="Blanco, Jose Luis" uniqKey="Blanco J" first="José Luis" last="Blanco">José Luis Blanco</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Conejero, David" sort="Conejero, David" uniqKey="Conejero D" first="David" last="Conejero">David Conejero</name>
<affiliation>
<nlm:aff id="af2-sensors-12-06307"> Telefónica Investigación y Desarrollo, Distrito C. Edificio Oeste 1, Ronda de la Comunicación, s/n, 28050 Madrid, Spain; E-Mails:
<email>bernat@tid.es</email>
(J.B.);
<email>dco@tid.es</email>
(D.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="G Mez, Luis Hernandez" sort="G Mez, Luis Hernandez" uniqKey="G Mez L" first="Luis Hernández" last="G Mez">Luis Hernández G Mez</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">22778643</idno>
<idno type="pmc">3386742</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3386742</idno>
<idno type="RBID">PMC:3386742</idno>
<idno type="doi">10.3390/s120506307</idno>
<date when="2012">2012</date>
<idno type="wicri:Area/Pmc/Corpus">002491</idno>
<idno type="wicri:Area/Pmc/Curation">002491</idno>
<idno type="wicri:Area/Pmc/Checkpoint">001571</idno>
<idno type="wicri:Area/Ncbi/Merge">002125</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web</title>
<author>
<name sortKey="Siguenza, Alvaro" sort="Siguenza, Alvaro" uniqKey="Siguenza A" first="Álvaro" last="Sigüenza">Álvaro Sigüenza</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Diaz Pardo, David" sort="Diaz Pardo, David" uniqKey="Diaz Pardo D" first="David" last="Díaz-Pardo">David Díaz-Pardo</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Bernat, Jesus" sort="Bernat, Jesus" uniqKey="Bernat J" first="Jesús" last="Bernat">Jesús Bernat</name>
<affiliation>
<nlm:aff id="af2-sensors-12-06307"> Telefónica Investigación y Desarrollo, Distrito C. Edificio Oeste 1, Ronda de la Comunicación, s/n, 28050 Madrid, Spain; E-Mails:
<email>bernat@tid.es</email>
(J.B.);
<email>dco@tid.es</email>
(D.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Vancea, Vasile" sort="Vancea, Vasile" uniqKey="Vancea V" first="Vasile" last="Vancea">Vasile Vancea</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Blanco, Jose Luis" sort="Blanco, Jose Luis" uniqKey="Blanco J" first="José Luis" last="Blanco">José Luis Blanco</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="Conejero, David" sort="Conejero, David" uniqKey="Conejero D" first="David" last="Conejero">David Conejero</name>
<affiliation>
<nlm:aff id="af2-sensors-12-06307"> Telefónica Investigación y Desarrollo, Distrito C. Edificio Oeste 1, Ronda de la Comunicación, s/n, 28050 Madrid, Spain; E-Mails:
<email>bernat@tid.es</email>
(J.B.);
<email>dco@tid.es</email>
(D.C.)</nlm:aff>
</affiliation>
</author>
<author>
<name sortKey="G Mez, Luis Hernandez" sort="G Mez, Luis Hernandez" uniqKey="G Mez L" first="Luis Hernández" last="G Mez">Luis Hernández G Mez</name>
<affiliation>
<nlm:aff id="af1-sensors-12-06307"> ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</nlm:aff>
</affiliation>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2012">2012</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures,
<italic>etc</italic>
., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Weiser, M" uniqKey="Weiser M">M. Weiser</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sundmaeker, H" uniqKey="Sundmaeker H">H. Sundmaeker</name>
</author>
<author>
<name sortKey="Guillermin, P" uniqKey="Guillermin P">P. Guillermin</name>
</author>
<author>
<name sortKey="Friess, P" uniqKey="Friess P">P. Friess</name>
</author>
<author>
<name sortKey="Woelffle, S" uniqKey="Woelffle S">S Woelfflé</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Broring, A" uniqKey="Broring A">A. Bröring</name>
</author>
<author>
<name sortKey="Echterhoff, J" uniqKey="Echterhoff J">J. Echterhoff</name>
</author>
<author>
<name sortKey="Jirka, S" uniqKey="Jirka S">S. Jirka</name>
</author>
<author>
<name sortKey="Simonis, I" uniqKey="Simonis I">I. Simonis</name>
</author>
<author>
<name sortKey="Everding, T" uniqKey="Everding T">T. Everding</name>
</author>
<author>
<name sortKey="Stasch, C" uniqKey="Stasch C">C. Stasch</name>
</author>
<author>
<name sortKey="Liang, S" uniqKey="Liang S">S. Liang</name>
</author>
<author>
<name sortKey="Lemmens, R" uniqKey="Lemmens R">R. Lemmens</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sheth, A" uniqKey="Sheth A">A. Sheth</name>
</author>
<author>
<name sortKey="Henson, C" uniqKey="Henson C">C. Henson</name>
</author>
<author>
<name sortKey="Sahoo, S" uniqKey="Sahoo S">S. Sahoo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Siguenza, A" uniqKey="Siguenza A">A. Sigüenza</name>
</author>
<author>
<name sortKey="Blanco, J L" uniqKey="Blanco J">J.L. Blanco</name>
</author>
<author>
<name sortKey="Bernat, J" uniqKey="Bernat J">J. Bernat</name>
</author>
<author>
<name sortKey="Hernandez, L" uniqKey="Hernandez L">L Hernández</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Foerster, T" uniqKey="Foerster T">T. Foerster</name>
</author>
<author>
<name sortKey="Jirka, S" uniqKey="Jirka S">S. Jirka</name>
</author>
<author>
<name sortKey="Stasch, C" uniqKey="Stasch C">C. Stasch</name>
</author>
<author>
<name sortKey="Pross, B" uniqKey="Pross B">B. Pross</name>
</author>
<author>
<name sortKey="Everding, T" uniqKey="Everding T">T. Everding</name>
</author>
<author>
<name sortKey="Broring, A" uniqKey="Broring A">A. Bröring</name>
</author>
<author>
<name sortKey="Juerrens, E H" uniqKey="Juerrens E">E.H. Juerrens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Siguenza, A" uniqKey="Siguenza A">A. Sigüenza</name>
</author>
<author>
<name sortKey="Pardo, D" uniqKey="Pardo D">D. Pardo</name>
</author>
<author>
<name sortKey="Blanco, J L" uniqKey="Blanco J">J.L. Blanco</name>
</author>
<author>
<name sortKey="Bernat, J" uniqKey="Bernat J">J. Bernat</name>
</author>
<author>
<name sortKey="Garijo, M" uniqKey="Garijo M">M. Garijo</name>
</author>
<author>
<name sortKey="Hernandez, L" uniqKey="Hernandez L">L Hernández</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jirka, S" uniqKey="Jirka S">S. Jirka</name>
</author>
<author>
<name sortKey="Broring, A" uniqKey="Broring A">A. Bröring</name>
</author>
<author>
<name sortKey="Foerster, T" uniqKey="Foerster T">T Foerster</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mcglaun, G" uniqKey="Mcglaun G">G. McGlaun</name>
</author>
<author>
<name sortKey="Althoff, F" uniqKey="Althoff F">F. Althoff</name>
</author>
<author>
<name sortKey="Lang, M" uniqKey="Lang M">M. Lang</name>
</author>
<author>
<name sortKey="Rigoll, G" uniqKey="Rigoll G">G. Rigoll</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pieraccini, R" uniqKey="Pieraccini R">R. Pieraccini</name>
</author>
<author>
<name sortKey="Dayanidhi, K" uniqKey="Dayanidhi K">K. Dayanidhi</name>
</author>
<author>
<name sortKey="Bloom, J" uniqKey="Bloom J">J. Bloom</name>
</author>
<author>
<name sortKey="Dahan, J" uniqKey="Dahan J">J. Dahan</name>
</author>
<author>
<name sortKey="Phillips, M" uniqKey="Phillips M">M. Phillips</name>
</author>
<author>
<name sortKey="Goodman, B R" uniqKey="Goodman B">B.R. Goodman</name>
</author>
<author>
<name sortKey="Prasad, K V" uniqKey="Prasad K">K.V. Prasad</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Amditis, A" uniqKey="Amditis A">A. Amditis</name>
</author>
<author>
<name sortKey="Kussmann, H" uniqKey="Kussmann H">H. Kussmann</name>
</author>
<author>
<name sortKey="Polynchronopoulos, A" uniqKey="Polynchronopoulos A">A. Polynchronopoulos</name>
</author>
<author>
<name sortKey="Engstrom, J" uniqKey="Engstrom J">J. Engström</name>
</author>
<author>
<name sortKey="Andreone, L" uniqKey="Andreone L">L Andreone</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hervas, R" uniqKey="Hervas R">R. Hervás</name>
</author>
<author>
<name sortKey="Bravo, J" uniqKey="Bravo J">J. Bravo</name>
</author>
<author>
<name sortKey="Fontecha, J" uniqKey="Fontecha J">J. Fontecha</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nelson, E" uniqKey="Nelson E">E. Nelson</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Goodchild, M F" uniqKey="Goodchild M">M.F. Goodchild</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Hoch, S" uniqKey="Hoch S">S. Hoch</name>
</author>
<author>
<name sortKey="Schweigert, M" uniqKey="Schweigert M">M. Schweigert</name>
</author>
<author>
<name sortKey="Althoff, F" uniqKey="Althoff F">F. Althoff</name>
</author>
<author>
<name sortKey="Rigoll, G" uniqKey="Rigoll G">G. Rigoll</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Harel, D" uniqKey="Harel D">D. Harel</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Siguenza, A" uniqKey="Siguenza A">A. Sigüenza</name>
</author>
<author>
<name sortKey="Blanco, J L" uniqKey="Blanco J">J.L. Blanco</name>
</author>
<author>
<name sortKey="Bernat, J" uniqKey="Bernat J">J. Bernat</name>
</author>
<author>
<name sortKey="Hernandez, L" uniqKey="Hernandez L">L. Hernández</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Vollrath, M" uniqKey="Vollrath M">M. Vollrath</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kuter, U" uniqKey="Kuter U">U. Kuter</name>
</author>
<author>
<name sortKey="Golbeck, J" uniqKey="Golbeck J">J. Golbeck</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="L Pez De Ipi A, D" uniqKey="L Pez De Ipi A D">D. López de Ipiña</name>
</author>
<author>
<name sortKey="Diaz De Sarralde, I" uniqKey="Diaz De Sarralde I">I. Díaz de Sarralde</name>
</author>
<author>
<name sortKey="Garcia Zubia, J" uniqKey="Garcia Zubia J">J. García Zubia</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ke Ler, C" uniqKey="Ke Ler C">C. Keßler</name>
</author>
<author>
<name sortKey="Janowicz, K" uniqKey="Janowicz K">K. Janowicz</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bizer, C" uniqKey="Bizer C">C. Bizer</name>
</author>
<author>
<name sortKey="Heath, T" uniqKey="Heath T">T. Heath</name>
</author>
<author>
<name sortKey="Berners Lee, T" uniqKey="Berners Lee T">T. Berners-Lee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Henson, C A" uniqKey="Henson C">C.A. Henson</name>
</author>
<author>
<name sortKey="Pschorr, J K" uniqKey="Pschorr J">J.K. Pschorr</name>
</author>
<author>
<name sortKey="Sheth, A P" uniqKey="Sheth A">A.P. Sheth</name>
</author>
<author>
<name sortKey="Thuirunarayan, K" uniqKey="Thuirunarayan K">K. Thuirunarayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Prud Hommeaux, E" uniqKey="Prud Hommeaux E">E. Prud'hommeaux</name>
</author>
<author>
<name sortKey="Seaborne, A" uniqKey="Seaborne A">A. Seaborne</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Cyganiak, R" uniqKey="Cyganiak R">R. Cyganiak</name>
</author>
<author>
<name sortKey="Bizer, C" uniqKey="Bizer C">C. Bizer</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Blanco, J L" uniqKey="Blanco J">J.L. Blanco</name>
</author>
<author>
<name sortKey="Siguenza, A" uniqKey="Siguenza A">A. Sigüenza</name>
</author>
<author>
<name sortKey="Diaz, D" uniqKey="Diaz D">D. Díaz</name>
</author>
<author>
<name sortKey="Sendra, M" uniqKey="Sendra M">M. Sendra</name>
</author>
<author>
<name sortKey="Hernandez, L" uniqKey="Hernandez L">L Hernández</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nishimoto, T" uniqKey="Nishimoto T">T. Nishimoto</name>
</author>
<author>
<name sortKey="Shioya, M" uniqKey="Shioya M">M. Shioya</name>
</author>
<author>
<name sortKey="Takahasi, J" uniqKey="Takahasi J">J. Takahasi</name>
</author>
<author>
<name sortKey="Daigo, H" uniqKey="Daigo H">H. Daigo</name>
</author>
</analytic>
</biblStruct>
<biblStruct></biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>Molecular Diversity Preservation International (MDPI)</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">22778643</article-id>
<article-id pub-id-type="pmc">3386742</article-id>
<article-id pub-id-type="doi">10.3390/s120506307</article-id>
<article-id pub-id-type="publisher-id">sensors-12-06307</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Sigüenza</surname>
<given-names>Álvaro</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-12-06307">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c1-sensors-12-06307">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Díaz-Pardo</surname>
<given-names>David</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-12-06307">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Bernat</surname>
<given-names>Jesús</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-12-06307">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Vancea</surname>
<given-names>Vasile</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-12-06307">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Blanco</surname>
<given-names>José Luis</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-12-06307">
<sup>1</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Conejero</surname>
<given-names>David</given-names>
</name>
<xref ref-type="aff" rid="af2-sensors-12-06307">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Gómez</surname>
<given-names>Luis Hernández</given-names>
</name>
<xref ref-type="aff" rid="af1-sensors-12-06307">
<sup>1</sup>
</xref>
</contrib>
</contrib-group>
<aff id="af1-sensors-12-06307">
<label>1</label>
ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida Complutense 30, E-28040 Madrid, Spain; E-Mails:
<email>dpardo@gaps.ssr.upm.es</email>
(D.D.-P.);
<email>mr.vasilevancea@yahoo.com</email>
(V.V.);
<email>jlblanco@gaps.ssr.upm.es</email>
(J.L.B.);
<email>luisalfonso.hernandez@upm.es</email>
(L.H.G.)</aff>
<aff id="af2-sensors-12-06307">
<label>2</label>
Telefónica Investigación y Desarrollo, Distrito C. Edificio Oeste 1, Ronda de la Comunicación, s/n, 28050 Madrid, Spain; E-Mails:
<email>bernat@tid.es</email>
(J.B.);
<email>dco@tid.es</email>
(D.C.)</aff>
<author-notes>
<corresp id="c1-sensors-12-06307">
<label>*</label>
Author to whom correspondence should be addressed; E-Mail:
<email>alvaro.siguenza@gaps.ssr.upm.es</email>
; Tel.: +34-91-549-5700; Fax: +34-91-336-7350.</corresp>
</author-notes>
<pub-date pub-type="collection">
<year>2012</year>
</pub-date>
<pub-date pub-type="epub">
<day>11</day>
<month>5</month>
<year>2012</year>
</pub-date>
<volume>12</volume>
<issue>5</issue>
<fpage>6307</fpage>
<lpage>6330</lpage>
<history>
<date date-type="received">
<day>16</day>
<month>3</month>
<year>2012</year>
</date>
<date date-type="rev-recd">
<day>03</day>
<month>5</month>
<year>2012</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>5</month>
<year>2012</year>
</date>
</history>
<permissions>
<copyright-statement>© 2012 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2012</copyright-year>
<license>
<license-p>
<pmc-comment>CREATIVE COMMONS</pmc-comment>
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>Current “Internet of Things” concepts point to a future where connected objects gather meaningful information about their environment and share it with other objects and people. In particular, objects embedding Human Machine Interaction (HMI), such as mobile devices and, increasingly, connected vehicles, home appliances, urban interactive infrastructures,
<italic>etc</italic>
., may not only be conceived as sources of sensor information, but, through interaction with their users, they can also produce highly valuable context-aware human-generated observations. We believe that the great promise offered by combining and sharing all of the different sources of information available can be realized through the integration of HMI and Semantic Sensor Web technologies. This paper presents a technological framework that harmonizes two of the most influential HMI and Sensor Web initiatives: the W3C's Multimodal Architecture and Interfaces (MMI) and the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) with its semantic extension, respectively. Although the proposed framework is general enough to be applied in a variety of connected objects integrating HMI, a particular development is presented for a connected car scenario where drivers' observations about the traffic or their environment are shared across the Semantic Sensor Web. For implementation and evaluation purposes an on-board OSGi (Open Services Gateway Initiative) architecture was built, integrating several available HMI, Sensor Web and Semantic Web technologies. A technical performance test and a conceptual validation of the scenario with potential users are reported, with results suggesting the approach is sound.</p>
</abstract>
<kwd-group>
<kwd>connected objects</kwd>
<kwd>connected cars</kwd>
<kwd>human-generated observations</kwd>
<kwd>Human-Machine Interaction</kwd>
<kwd>Sensor Web</kwd>
<kwd>Semantic Sensor Web</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec>
<label>1.</label>
<title>Introduction</title>
<p>The current evolution of ubiquitous computing and information networks is rapidly merging the physical and digital worlds, enabling the conception and development of a new generation of intelligent applications such as eHealth, Logistics, Intelligent Transportation, Environmental Monitoring, Smart Grids, Smart Metering or Home Automation. This scenario, seminal in Mark Weiser's Ubiquitous Computing work [
<xref ref-type="bibr" rid="b1-sensors-12-06307">1</xref>
] and now evolving into the “Internet of Things” [
<xref ref-type="bibr" rid="b2-sensors-12-06307">2</xref>
] concept, points toward a future in which many objects around us will be able to acquire meaningful information about their environment and communicate it to other objects and to people.</p>
<p>Among this universe of interconnected objects, those embedding Human-Machine Interaction (HMI) technologies, such as mobile phones, connected vehicles, home appliances, smart buildings, interactive urban infrastructures,
<italic>etc</italic>
., can play an important role, as they can be aware of real-world information and, at the same time, provide enriched information to other users, objects or applications. Data may come from human input (social networks, monitoring systems, interactive devices) or from machine input (e.g., different sensor networks), and the HMI is the connection between these two sources.</p>
<p>Sensor and Actuator Networks (SANs) are becoming an inexhaustible source of real-world information, and the term Sensor Web is now used to describe a middleware layer between sensors and applications: “Web accessible sensor networks and archived sensor data that can be discovered and accessed using standard protocols and application programming interfaces” [
<xref ref-type="bibr" rid="b3-sensors-12-06307">3</xref>
]. A growing number of Sensor Web portals, such as Sensorpedia [
<xref ref-type="bibr" rid="b4-sensors-12-06307">4</xref>
], SensorMap [
<xref ref-type="bibr" rid="b5-sensors-12-06307">5</xref>
], SensorBase [
<xref ref-type="bibr" rid="b6-sensors-12-06307">6</xref>
] or Pachube [
<xref ref-type="bibr" rid="b7-sensors-12-06307">7</xref>
], are currently being developed to enable users to upload and share sensor data. One of the most influential Sensor Web initiatives is the Sensor Web Enablement (SWE) of the Open Geospatial Consortium (OGC). The SWE [
<xref ref-type="bibr" rid="b8-sensors-12-06307">8</xref>
] is defining a set of standards to develop “an infrastructure which enables an interoperable usage of sensor resources by enabling their discovery, access, tasking, as well as eventing and alerting within the Sensor Web in a standardized way”.</p>
<p>Further efforts to improve interoperability of a world of heterogeneous and geographically dispersed interconnected SANs include the proposal of a Semantic Sensor Web [
<xref ref-type="bibr" rid="b9-sensors-12-06307">9</xref>
,
<xref ref-type="bibr" rid="b10-sensors-12-06307">10</xref>
]. The Semantic Sensor Web brings Semantic Web technologies to annotate sensor data making it easier for different applications to extract homogeneous interpretations of them.</p>
<p>To progress towards a full harmonization between HMI systems and the Sensor Web, advances are needed in two fundamental areas: the integration of a growing number of heterogeneous sensor data sources into HMI systems, and new mechanisms that allow real-world information provided by users of connected objects to be shared on the Sensor Web or the Semantic Sensor Web.</p>
<p>In our previous research [
<xref ref-type="bibr" rid="b11-sensors-12-06307">11</xref>
] we presented contributions addressing the first issue; in this paper we focus on the second. In particular, we present results from our activities in the Mobility for Advanced Transport Networks (MARTA) project [
<xref ref-type="bibr" rid="b12-sensors-12-06307">12</xref>
], a Spanish publicly funded project in which several context-aware interactive services were designed and implemented for In-Vehicle Information Systems (IVIS) and Advanced Driver Assistance Systems (ADAS). It is important to point out that, as stated before, we believe the proposed framework for integrating HMI and Semantic Sensor Web principles and technologies is general enough to be applied in a variety of scenarios featuring mobile devices, multimedia and home appliances, urban interactive infrastructures,
<italic>etc</italic>
.</p>
<p>Nevertheless, in order to make the presentation of the proposed framework clearer, in this paper we will focus on a scenario where a driver of a connected car provides, through interaction with an in-vehicle HMI system, contextual information that can be valuable for other applications. For example, the driver may detect potential dangers on the road (ice-patches, pedestrians,
<italic>etc</italic>
.), or certain traffic conditions (accidents or congestions) or environmental conditions (dense fog or heavy rain). Then, by interacting with an in-vehicle HMI system, (s)he can make this contextual information available to other interested applications (e.g., a Road Safety Authority or other HMI systems in surrounding connected vehicles). In the manner of recent proposals such as the Human Sensor Web [
<xref ref-type="bibr" rid="b13-sensors-12-06307">13</xref>
], these pieces of contextual information that the user of the connected object (
<italic>i.e.</italic>
, the driver) provides will be referred to here as
<italic>human-generated observations</italic>
.</p>
<p>Future in-car interaction scenarios must be considered not as simple “local” driver-system interfaces, but, as
<xref ref-type="fig" rid="f1-sensors-12-06307">Figure 1</xref>
illustrates, as complex systems. HMI systems for connected cars have to manage not only the driver's different interaction modalities (speech: microphones and loudspeakers; vision: displays; haptics: knobs, buttons, touch screen;
<italic>etc</italic>
.), but also local and remote sensor information. As shown in
<xref ref-type="fig" rid="f1-sensors-12-06307">Figure 1</xref>
, context-aware HMI systems can be regarded as systems that use sensor data and user inputs to interact with applications, but at the same time HMIs may be regarded as sensing systems capable of producing real-world information for the Sensor Web. The information that an HMI system embedded in a connected object publishes into the Sensor Web may come either from measurements of its local sensors (attached to the object) or from data directly provided by its user. In [
<xref ref-type="bibr" rid="b14-sensors-12-06307">14</xref>
] we discussed some of the main issues when using HMI systems to process and publish local sensor data. In this paper we will address those related to the publication of user-generated observations.</p>
<p>In this work we will also rely on the design principles proposed by the W3C's Multimodal Architecture and Interfaces (MMI) [
<xref ref-type="bibr" rid="b15-sensors-12-06307">15</xref>
]. Following these principles we will discuss the design of in-vehicle context-aware multimodal HMI systems capable of collecting drivers' reports of different road, traffic or environmental situations, and of generating semantic representations of them.</p>
<p>The rest of the paper is organized as follows: Section 2 presents related research. Section 3 describes the design of in-vehicle HMI systems to collect driver-generated observations following the principles of the W3C's MMI architecture instantiated on an OSGi framework. The semantic annotation of driver-generated observations and their publication in the Semantic Sensor Web are discussed in Section 4. Section 5 presents our experimental set-up, implemented on an on-board unit of a connected car. Performance analyses and a concept validation study are described in Section 6. Finally, conclusions and future work are discussed in Section 7.</p>
</sec>
<sec>
<label>2.</label>
<title>Related Work</title>
<p>In-vehicle context-aware HMI systems and the more recent conceptions of user-generated sensors or the Human Sensor Web [
<xref ref-type="bibr" rid="b13-sensors-12-06307">13</xref>
,
<xref ref-type="bibr" rid="b16-sensors-12-06307">16</xref>
] are two research areas closely related to the work in this paper. Recent research on context-aware HMI systems in general, and in-vehicle interactive systems in particular, has sought to ensure that they are able to function in highly heterogeneous environments, adapting to all kinds of situations and contexts, and always giving correct and safe feedback to their users ([
<xref ref-type="bibr" rid="b17-sensors-12-06307">17</xref>
<xref ref-type="bibr" rid="b19-sensors-12-06307">19</xref>
]). Information services embedded in HMI systems have to manage a common representation of the user (identifying his mood state, needs and preferences) and the contextual situation coming from a variety of heterogeneous sources. In order to integrate this data in a homogeneous manner, some approaches, such as the one presented in [
<xref ref-type="bibr" rid="b20-sensors-12-06307">20</xref>
], have already made use of Semantic Web technologies to define a model of contextual information composed of several independent ontologies, mainly to represent users, devices, environment and services.</p>
<p>In HMI vehicle scenarios, integrating both multimodal interaction and context for in-vehicle applications has also been addressed, and a common approach [
<xref ref-type="bibr" rid="b19-sensors-12-06307">19</xref>
] is to consider three independent domains: driver, vehicle and environment. However, most research on context-aware multimodal HMI systems in vehicles has focused more on managing high-level representations of context than on integrating with the underlying infrastructures that provide sensor data. Only a few approaches, such as the work presented in [
<xref ref-type="bibr" rid="b21-sensors-12-06307">21</xref>
] for an in-car OSGi framework, have addressed the design of HMI systems including the management of different car components. Nevertheless, these studies only take into account data from local sensors (attached to the car) and do not consider the access or sharing (publication) of sensor data through the Internet.</p>
<p>The research presented in this paper can also be related to the emerging concepts of user-generated sensors or human observations (descriptions of real-world phenomena), which differ from human sensor observations (readings from sensors carried by or attached to humans). The seminal work in [
<xref ref-type="bibr" rid="b13-sensors-12-06307">13</xref>
] presents the Human Sensor Web vision as “an effort for creating and sharing human observations as well as sensor observations on the Web”, and presents an example of establishing a noise mapping community. According to this vision, future systems will use different types of observations: conventional sensor data, human sensed observations (e.g., vocal, image or text) and human collected data (sensors carried by humans, like smart phones or other personal devices), and will integrate them into the Human Sensor Web [
<xref ref-type="bibr" rid="b16-sensors-12-06307">16</xref>
]. The work in [
<xref ref-type="bibr" rid="b13-sensors-12-06307">13</xref>
] also identifies some challenges to realize the Human Sensor Web. The most persistent challenges are guaranteeing the accuracy of the data, resolving personal privacy issues and answering the fundamental question of how collective intelligence can improve on conventional methods [
<xref ref-type="bibr" rid="b22-sensors-12-06307">22</xref>
]. In a similar direction, in our work we discuss preliminary approaches to using semantic representations, which are already being used to represent sensor data, to describe human observations, and we explore the use of Semantic Sensor Web principles for publishing and accessing them.</p>
</sec>
<sec>
<label>3.</label>
<title>HMI Systems to Collect Driver Observations</title>
<p>As we stated before, developing in-vehicle HMI systems requires not only the integration of the driver's input/output information (e.g., speech, touch, graphic displays,
<italic>etc</italic>
., but also the proper management of data provided by different sensor sources: the car (e.g., speed, wheel traction), the driver (e.g., mood, fatigue), and the environment (road, traffic, weather,
<italic>etc</italic>
.) [
<xref ref-type="bibr" rid="b23-sensors-12-06307">23</xref>
]. The HMI designer typically needs to interpret the sensed data in order to identify situations that are either of direct interest to the driver, or which will help to shape communication strategies that are appropriate for each situation.</p>
<p>The W3C is in the process of defining an architecture recommendation for the design of multimodal interfaces: the MMI Reference Architecture [
<xref ref-type="bibr" rid="b15-sensors-12-06307">15</xref>
]. Major components in the W3C MMI architecture, represented in
<xref ref-type="fig" rid="f2-sensors-12-06307">Figure 2</xref>
, are the Input and Output Modality Components, which handle the information coming in from and going out to the human user, and the Interaction Manager, which coordinates the flow of the communication in the different modalities and decides the overall communication strategy in response to successive inputs from the user. The MMI architecture also considers two important elements: (1) a data component which stores the data that the Interaction Manager needs to perform its functions; and (2) an event-based communication layer to carry events between the modality components and the Interaction Manager.</p>
<p>This standardized reference architecture provides a very attractive framework for dealing with the high complexity of designing HMI systems for a variety of connected objects such as connected vehicles. In our experimental implementation, which will be detailed in Section 5, an OSGi framework [
<xref ref-type="bibr" rid="b24-sensors-12-06307">24</xref>
] was used to instantiate an embedded W3C MMI architecture into a connected car. OSGi is a Java-based service platform that allows applications to be developed from small, reusable and collaborative components called bundles. The main components in the MMI Architecture (
<italic>i.e.</italic>
, Interaction Manager, and Input/Output Modality Components) can be implemented as OSGi bundles. The platform also provides an EventAdmin OSGi Service bundle as a standard way of dealing with events in the OSGi Environment using the publish/subscribe model. Therefore, this event management capability in OSGi can represent the event-based communication layer in the W3C MMI architecture. The mapping between these OSGi capabilities and the MMI architecture is illustrated in
<xref ref-type="fig" rid="f2-sensors-12-06307">Figure 2</xref>
.</p>
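<p>To make this mapping concrete, the following minimal sketch (ours, not code from the MARTA implementation; the topic and property names are hypothetical) illustrates how an Input Modality Component bundle could forward a speech recognition result to the Interaction Manager through the OSGi EventAdmin publish/subscribe service:</p>
<preformat>
// Sketch of a Modality Component publishing an MMI-style event through
// the OSGi EventAdmin service. Topic and property names are illustrative.
import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

public class SpeechModalityComponent {

    private final EventAdmin eventAdmin; // provided by the OSGi container

    public SpeechModalityComponent(EventAdmin eventAdmin) {
        this.eventAdmin = eventAdmin;
    }

    /** Forwards a recognition result to subscribers of the topic
     *  (in this architecture, the SCXML-based Interaction Manager). */
    public void publishRecognitionResult(String utterance, double confidence) {
        // raw Dictionary, as expected by the EventAdmin Event constructor
        Dictionary props = new Hashtable();
        props.put("mmi.source", "speech-input");
        props.put("mmi.data", utterance);
        props.put("mmi.confidence", Double.valueOf(confidence));
        // Asynchronous publish/subscribe delivery: this plays the role of
        // the event-based communication layer of the W3C MMI architecture.
        eventAdmin.postEvent(new Event("org/example/mmi/USER_INPUT", props));
    }
}
</preformat>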
<p>Extending this basic HMI architecture to include information from different sensor sources is rather straightforward. Information from both local sensors (attached to the car or connected object) and remote sensors (e.g., from the Sensor Web) can be directly accessed by developing specific bundles acting as “Sensor Components” between the sensor providers and the HMI Interaction Manager (see the Local Sensor Component and the Sensor Web Component in
<xref ref-type="fig" rid="f2-sensors-12-06307">Figure 2</xref>
).</p>
<p>Obviously, to have access to remote sensors (
<italic>i.e.</italic>
, the Sensor Web Component) the OSGi framework must also include a communication infrastructure—for example, in the case of a connected car, supporting V2V (vehicle to vehicle) and V2I (vehicle to infrastructure) communications, or just communication capabilities through in-car nomadic devices, such as the driver's mobile phone (OSGi is also a technology suitable for integration into mobile phones and other connected objects).</p>
<p>Inside the W3C MMI architecture, as in any HMI system, a key component is the Interaction Manager. The Interaction Manager receives ordered sequences of events and data from the different Components (both from the user and sensor sources) and decides what to do with them. Events may be for the Interaction Manager's own consumption, they may be forwarded to other components or they may result in the generation of new events or data by the Interaction Manager. For the purpose of designing flexible and easily configurable Interaction Managers the W3C is developing SCXML (State Chart eXtensible Markup Language) [
<xref ref-type="bibr" rid="b25-sensors-12-06307">25</xref>
], a generic event-based state-machine execution environment based on Harel statecharts [
<xref ref-type="bibr" rid="b26-sensors-12-06307">26</xref>
]. Statecharts are extensions of conventional finite state machines, with additional properties that lend themselves to describing complex control mechanisms in reactive systems in which it is necessary to coordinate components of diverse nature. SCXML is being proposed by the W3C as a major candidate language to control interaction flow in Human-Machine Interactive systems (HMIs). It is being considered for future interactive speech systems, in W3C VoiceXML 3.0 [
<xref ref-type="bibr" rid="b27-sensors-12-06307">27</xref>
], as well as for multimodal systems [
<xref ref-type="bibr" rid="b15-sensors-12-06307">15</xref>
]. As we have presented in previous work [
<xref ref-type="bibr" rid="b28-sensors-12-06307">28</xref>
], SCXML can also be very useful for combining user input and sensor information.</p>
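<p>As an illustration of this event-based control flow, the following reduced SCXML fragment (a sketch of ours, with hypothetical state and event names, not the actual dialogue model) shows an Interaction Manager handling a sensor-initiated confirmation dialogue of the kind described in the following paragraphs:</p>
<preformat>
<!-- SCXML sketch (hypothetical event names): Interaction Manager
     fragment for a sensor-initiated confirmation dialogue. -->
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
  <state id="idle">
    <!-- a local sensor reports a possibly relevant situation -->
    <transition event="sensor.lowVisibility" target="confirming"/>
  </state>
  <state id="confirming">
    <onentry>
      <!-- prompt the driver, e.g., through text-to-speech -->
      <send event="tts.speak"/>
    </onentry>
    <transition event="asr.result.yes" target="publishing"/>
    <transition event="asr.result.no" target="idle"/>
  </state>
  <state id="publishing">
    <onentry>
      <!-- hand the confirmed observation over for semantic annotation -->
      <send event="observation.publish"/>
    </onentry>
    <transition target="idle"/>
  </state>
</scxml>
</preformat>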
<p>In this work we have implemented an SCXML-based Interaction Manager controlling the data exchanges with the driver (see the details in Section 5). Driver input information is obtained using speech recognition controlled with a push-to-talk button on the steering wheel, and output information is provided through text-to-speech synthesis and a visual display. Two different models of spoken dialogue interaction have been implemented for collecting driver observations:
<italic>sensor-initiated</italic>
and
<italic>driver-initiated</italic>
. Sensor-initiated dialogue starts automatically when a sensor detects a possibly relevant situation. The sensor-initiated dialogue is a rather simple one since the HMI system has only to ask the driver to confirm (using yes/no expressions) the particular sensor-detected situation or observation. A system-generated interaction might follow a structure such as:
<list list-type="simple">
<list-item>
<p>SYSTEM: The car's sensors are detecting limited visibility. Please confirm whether there is fog or a dust cloud.</p>
</list-item>
<list-item>
<p>USER: Fog.</p>
</list-item>
<list-item>
<p>SYSTEM: Thank you for confirming the presence of fog. Safety systems have been adjusted accordingly.</p>
</list-item>
</list>
</p>
<p>Driver-initiated dialogue is started by the driver, using a specific button on the steering wheel, when she observes what she believes is a relevant situation; for example, on entering a densely foggy area, or upon seeing a tree fallen across one of the lanes of the road. An example dialogue might be:
<list list-type="simple">
<list-item>
<p>USER: There is a fallen tree on the right lane.</p>
</list-item>
<list-item>
<p>SYSTEM: Tree on right lane. Thank you for the report. The observation has been relayed to Traffic Control.</p>
</list-item>
</list>
</p>
<p>Driver-initiated dialogue presents a more challenging situation, because the number of different kinds of observations a driver can report is potentially very high. Furthermore, the spontaneous language she may use can be very rich and varied, requiring Natural Language Processing capabilities not implemented in our OSGi framework. To mitigate these problems, driver-initiated interaction has been restricted in our implementation to a menu-based dialogue. Once the driver decides to report an observation, she has to follow a system-directed dialogue offering a limited set of possible observations. In order to avoid speech recognition errors, which can lead to unsafe driving situations [
<xref ref-type="bibr" rid="b29-sensors-12-06307">29</xref>
], the number of different observations has been limited to 16, arranged into two sub-menu levels. In the first level the driver has to choose the category of her observation (road, traffic or environment), and in the second level she has to select the particular observation within the selected category. Some results from a preliminary usability evaluation of the test scenario are discussed in Section 6. The upcoming tests described in Section 6 will address problem situations such as those derived from speech recognition errors, with the aim of understanding the interaction effects between the (simulated) driving task and the dialogue task.</p>
<p>Finally, it is important to point out that, apart from the difficulties in designing robust and safe spoken dialogue strategies to collect human-generated observations, an important challenge, not addressed in our work, is how to provide a confidence level on the quality of the information the driver is reporting. Some strategies already in use in social networks could be explored, such as ratings of particularly reliable users or matching for coincident observations [
<xref ref-type="bibr" rid="b30-sensors-12-06307">30</xref>
].</p>
</sec>
<sec>
<label>4.</label>
<title>Publishing Driver Observations into the Semantic Sensor Web</title>
<p>Once a driver-generated observation has been collected through a driver-initiated or a sensor-initiated dialogue, the Interaction Manager has to start a procedure to make it available on the Sensor Web. Two main steps are required: (1) providing a homogeneous representation for the human-generated observation; and (2) providing a mechanism to publish it on the Sensor Web.</p>
<p>As mentioned in the Introduction (Section 1), a number of Sensor Web portals are emerging (
<italic>i.e.</italic>
, Pachube, Sensorpedia,
<italic>etc</italic>
., and they could be considered for publishing human-generated observations. Another family of resources that could be explored for this purpose is Social Network infrastructures, such as text-based posts (e.g., Twitter [
<xref ref-type="bibr" rid="b31-sensors-12-06307">31</xref>
]).</p>
<p>In this work we will explore OGC SWE principles [
<xref ref-type="bibr" rid="b8-sensors-12-06307">8</xref>
], as they constitute one of the most mature and active proposals in the field. Nevertheless, specifying the requirements for publishing driver-generated observations using current SWE standards is far from trivial. Here are some major points that must be taken into account:
<list list-type="bullet">
<list-item>
<p>First, it would be necessary to describe the in-vehicle HMI system as a sensing system using SensorML (Sensor Model Language) [
<xref ref-type="bibr" rid="b8-sensors-12-06307">8</xref>
]. SensorML is the OGC SWE language used to describe different types of sensors and sensor systems, from simple to complex, such as earth observing satellites or, in our case, a driver-observer.</p>
</list-item>
<list-item>
<p>Then, this observation-generating entity must be registered into an OGC Catalog Service (CS-W) [
<xref ref-type="bibr" rid="b4-sensors-12-06307">4</xref>
], so that its observations can be discovered by other applications.</p>
</list-item>
<list-item>
<p>Driver observations should be represented using the O&M (Observations & Measurements) language [
<xref ref-type="bibr" rid="b8-sensors-12-06307">8</xref>
]. O&M defines a domain-independent conceptual model for the representation of (spatiotemporal) sensed data; a minimal sketch is given after this list.</p>
</list-item>
<list-item>
<p>Finally, the human-generated sensor resources have to be registered, and made discoverable and accessible using a set of basic Web Services, such as the Sensor Observation Service (SOS) [
<xref ref-type="bibr" rid="b8-sensors-12-06307">8</xref>
] (SWE only standardizes their interfaces).</p>
</list-item>
</list>
</p>
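<p>As an illustration of the third point above, a driver-generated observation of dense fog could be encoded in O&M XML roughly as follows (a sketch of ours based on the O&M 1.0 schema, with hypothetical URNs and values; not the exact encoding used in the project):</p>
<preformat>
<!-- O&M sketch (hypothetical identifiers): a driver-generated observation
     of dense fog encoded with the OGC Observations & Measurements schema. -->
<om:Observation xmlns:om="http://www.opengis.net/om/1.0"
                xmlns:gml="http://www.opengis.net/gml"
                xmlns:xlink="http://www.w3.org/1999/xlink">
  <om:samplingTime>
    <gml:TimeInstant>
      <gml:timePosition>2012-05-03T10:30:00+02:00</gml:timePosition>
    </gml:TimeInstant>
  </om:samplingTime>
  <!-- the "sensor" here is the driver interacting with the in-vehicle HMI -->
  <om:procedure xlink:href="urn:example:driver:anonymous"/>
  <om:observedProperty xlink:href="urn:example:property:visibility"/>
  <om:featureOfInterest xlink:href="urn:example:feature:road-segment"/>
  <om:result>dense fog</om:result>
</om:Observation>
</preformat>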
<p>Given the difficulty of addressing the above points in our in-vehicle environment, in this work we have turned to the recent initiative of blending the Sensor Web with Semantic Web technologies, into what is referred to as the Semantic Sensor Web [
<xref ref-type="bibr" rid="b9-sensors-12-06307">9</xref>
,
<xref ref-type="bibr" rid="b10-sensors-12-06307">10</xref>
]. Notwithstanding the fact that, as stated in the position paper presented in [
<xref ref-type="bibr" rid="b32-sensors-12-06307">32</xref>
], it can be hard to measure how successful these recent initiatives are, we will explore the use of URI-based (Uniform Resource Identifier) descriptions of human-generated observations encoded using the Resource Description Framework (RDF), as this is an accepted Semantic Web standard [
<xref ref-type="bibr" rid="b33-sensors-12-06307">33</xref>
]. This will facilitate building many applications, such as Web mashups, and, as we will discuss and illustrate in Section 5, it allows us, by adopting Linked Data principles (Linked Sensor Data [
<xref ref-type="bibr" rid="b32-sensors-12-06307">32</xref>
]), “to use URIs as reference for look-up as well as RDF and SPARQL (SPARQL Protocol and RDF Query Language) for storage, access, and querying”.</p>
<p>Thus, adopting what we may call Semantic Sensor Web principles, the following subsections discuss how to describe, store and access the HMI-collected driver-generated observations.</p>
<sec>
<label>4.1.</label>
<title>Semantic Description of Driver-Generated Observations</title>
<p>Annotating human-generated observations using semantic models (
<italic>i.e.</italic>
, RDF and OWL) can provide important benefits over other schemes:
<list list-type="bullet">
<list-item>
<p>It offers the ability to reason and make inferences from observations using semantic technologies, giving access to the wider set of applications that make use of the Semantic Web.</p>
</list-item>
<list-item>
<p>It enables the straightforward use of querying mechanisms, such as SPARQL, to discover new information.</p>
</list-item>
<list-item>
<p>It provides the possibility of integrating new observations with the great amount of information enabled through RDF and OWL in the Semantic Web. This point is closely related to the Linked Data concept introduced by Berners-Lee, which refers to “data published on the Web in such a way that it is machine-readable, its meaning is explicitly defined, it is linked to other external data sets, and can in turn be linked to from external data sets [
<xref ref-type="bibr" rid="b34-sensors-12-06307">34</xref>
].”</p>
</list-item>
</list>
</p>
<p>In order to provide a semantic representation for the driver's observations, we have followed the approach proposed by Henson
<italic>et al.</italic>
[
<xref ref-type="bibr" rid="b35-sensors-12-06307">35</xref>
] based on the encoding of the OGC Observations and Measurements language (O&M) in OWL (the Web Ontology Language [
<xref ref-type="bibr" rid="b36-sensors-12-06307">36</xref>
]). In O&M-OWL an ontology covers a subset of concepts in O&M, and, similarly to what is proposed in [
<xref ref-type="bibr" rid="b35-sensors-12-06307">35</xref>
] for general sensor observations, we think it can also offer interesting possibilities for managing human-generated observations.
<xref ref-type="fig" rid="f3-sensors-12-06307">Figure 3</xref>
shows the translation of O&M into OWL, adapted to our driver-generated observations scenario; it is important to note that, as can be seen in the figure, the O&M property
<italic>procedure</italic>
(denoting the instrument, algorithm or process used to collect the observation) is the “driver”.</p>
<p>In O&M-OWL, relations between concepts are described using RDF triples, which correspond to a subject-predicate-object structure. As an example, the O&M-OWL representation of a driver-generated observation of the presence of dense fog on a road would be as listed in
<xref ref-type="table" rid="t1-sensors-12-06307">Table 1</xref>
:</p>
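<p>To give a more concrete flavour of these triples, the following Turtle sketch shows the general shape of such an observation (namespace prefixes as declared in the SPARQL example of Section 4.2); the class name <italic>om:Observation</italic> and the identifiers <italic>om:driver_1</italic> and <italic>om:time_1</italic> are illustrative assumptions, the authoritative listing being the one in <xref ref-type="table" rid="t1-sensors-12-06307">Table 1</xref>:
<list list-type="simple">
<list-item><p>om:obs_1 rdf:type om:Observation ;</p></list-item>
<list-item><p>om:procedure om:driver_1 ;</p></list-item>
<list-item><p>om:featureOfInterest om:road_1 ;</p></list-item>
<list-item><p>om:observedProperty environment:denseFog ;</p></list-item>
<list-item><p>om:observationLocation om:location_1 ;</p></list-item>
<list-item><p>om:samplingTime om:time_1 .</p></list-item>
</list>
</p>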
<p>In this example it is important to point out that, as discussed in Section 3, the in-vehicle HMI system collects (by engaging in either driver-initiated or sensor-initiated dialogue) only the driver's description of the observed phenomenon (
<italic>i.e.</italic>
, the
<italic>om:featureOfInterest</italic>
). Consequently, the HMI architecture has to automatically provide all the remaining data to be included in the O&M-OWL representation describing the human-generated observation. This includes the car's position on the road (
<italic>om:observationLocation</italic>
) and the observation time (
<italic>om:samplingTime</italic>
). It is also important to note that, as shown in the example, the observation entity (
<italic>om:procedure</italic>
) can be linked to a particular driver or to an anonymous driver (or a nickname). This can be very useful when addressing the relevant issue of privacy management of human-generated observations (see the discussion and study in Section 6).</p>
<p>Moreover, as we stated at the beginning of this subsection, by using Linked Data principles, data published on the Semantic Web can be reused in the sensor annotation procedure. This makes it possible to annotate sensor data by creating RDF links to other data from sources like DBpedia [
<xref ref-type="bibr" rid="b37-sensors-12-06307">37</xref>
], which is more efficient and “shareable” than defining new ontologies with their corresponding concepts and relationships. To illustrate this,
<xref ref-type="fig" rid="f4-sensors-12-06307">Figure 4</xref>
shows how, in our previous example, the O&M
<italic>location</italic>
value (
<italic>om:location_1</italic>
) can be linked to a specific location, “Sigüenza,” defined by DBpedia (
<italic>dbpedia:Sigüenza</italic>
) using the property
<italic>location</italic>
defined by the DBpedia ontology (
<italic>dbpedia-owl</italic>
). Furthermore, in DBpedia, the object “Sigüenza” is related to other objects. For example, “Sigüenza” is defined as part of another location named “Guadalajara” (see
<xref ref-type="fig" rid="f4-sensors-12-06307">Figure 4</xref>
). The flexibility and richness of structured information offered by Linked Data opens the possibility of performing advanced queries and inferences on these driver-generated observations, as we show in the following subsection.</p>
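<p>In Turtle, the links depicted in <xref ref-type="fig" rid="f4-sensors-12-06307">Figure 4</xref> can be sketched as two triples (the first one added by our annotation procedure, the second one already published by DBpedia; prefixes as in the query of Section 4.2):
<list list-type="simple">
<list-item><p>om:location_1 dbpedia-owl:location dbpedia:Sigüenza .</p></list-item>
<list-item><p>dbpedia:Sigüenza dbpedia-owl:isPartOf dbpedia:Guadalajara .</p></list-item>
</list>
</p>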
</sec>
<sec>
<label>4.2.</label>
<title>Publishing on the Semantic Sensor Web</title>
<p>Together with the use of O&M-OWL to encode driver-generated observations as sets of RDF triples, it is important to consider how these semantically annotated observations can be accessed for inference or query.</p>
<p>In our work, in contrast to the use of semantically enabled OGC services proposed in [
<xref ref-type="bibr" rid="b35-sensors-12-06307">35</xref>
] (in particular the extension of SOS to SemSOS), we have explored a preliminary step towards making human-generated observations accessible using the existing information space of the Web. We stored RDF driver-observations in public repositories (
<italic>i.e.</italic>
, SPARQL Endpoints [
<xref ref-type="bibr" rid="b38-sensors-12-06307">38</xref>
]). By doing so, O&M-OWL observations encoded in RDF and linked to specific ontologies can be shared with other systems and applications through the Semantic Sensor Web (SSW). The information published in the SSW can then be used for a wide variety of purposes: it can be further mashed up with other information to acquire yet higher levels of knowledge, it can be pooled to analyze patterns of use of applications (for example, by a Road Safety Authority), or it can be fed back to applications (e.g., other connected car HMI systems), thus closing an information loop, with a connected entity producing and consuming context-aware information.</p>
<p>To illustrate with an example, applications could have access to the human-generated observations stored as RDF Graphs, which can be retrieved via SPARQL queries. Through these queries it will be possible to filter the RDF triples in the repository that fulfill a set of desired conditions. The following example shows a SPARQL query searching for driver-generated observations from roads in a specific area, “Guadalajara,” and with “denseFog” as the observed property.
<list list-type="simple">
<list-item>
<p>PREFIX environment:<
<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/environment/">http://www.sensor.gaps.upm.es/environment/</ext-link>
></p>
</list-item>
<list-item>
<p>PREFIX dbpedia:<
<ext-link ext-link-type="uri" xlink:href="http://dbpedia.org/resource/">http://dbpedia.org/resource/</ext-link>
></p>
</list-item>
<list-item>
<p>PREFIX om:<
<ext-link ext-link-type="uri" xlink:href="http://www.opengis.net/om/1.0">http://www.opengis.net/om/1.0</ext-link>
></p>
</list-item>
<list-item>
<p>PREFIX rdf:<
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/1999/02/22-rdf-syntax-ns#">http://www.w3.org/1999/02/22-rdf-syntax-ns#</ext-link>
></p>
</list-item>
<list-item>
<p>PREFIX dbpedia-owl:<
<ext-link ext-link-type="uri" xlink:href="http://dbpedia.org/ontology/">http://dbpedia.org/ontology/</ext-link>
></p>
</list-item>
<list-item>
<p>SELECT DISTINCT ?obs WHERE {
<list list-type="simple">
<list-item>
<p>?c rdf:type environment:Road .</p>
</list-item>
<list-item>
<p>?obs om:featureOfInterest ?c ;
<list list-type="simple">
<list-item>
<p>om:observedProperty environment:denseFog ;</p>
</list-item>
<list-item>
<p>om:observationLocation ?loc .</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>?loc dbpedia-owl:isPartOf dbpedia:Guadalajara.</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>}</p>
</list-item>
</list>
</p>
<p>In this example the query works by matching the RDF triples in the “WHERE” clause against the triples in the RDF graph stored in the repository. Our RDF example in Section 4.1 matches this clause because the observation was linked to a specific location datum in the DBpedia domain; information available in the Semantic Web is thus reused. The observation was linked to the resource “Sigüenza,” which is related to the resource “Guadalajara” through the DBpedia ontology (by virtue of the
<italic>isPartOf</italic>
property). Thus the query result may include the values corresponding to our particular observation (along with all the other observations published in the repository that may fulfill the query requirements). In this case the value (
<italic>om:obs_1</italic>
), which represents an observation from a specific road segment (
<italic>om:road_1</italic>
), would be assigned to the variable
<italic>?obs</italic>
.</p>
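<p>As an illustration of how a client application might issue this query programmatically, the following minimal Java sketch uses the Sesame 2.x openrdf API against a repository assumed to be reachable over HTTP (the repository URL anticipates the Sesame endpoint described in Section 5, and error handling is reduced to the bare minimum):
<list list-type="simple">
<list-item><p>import org.openrdf.query.BindingSet;</p></list-item>
<list-item><p>import org.openrdf.query.QueryLanguage;</p></list-item>
<list-item><p>import org.openrdf.query.TupleQueryResult;</p></list-item>
<list-item><p>import org.openrdf.repository.Repository;</p></list-item>
<list-item><p>import org.openrdf.repository.RepositoryConnection;</p></list-item>
<list-item><p>import org.openrdf.repository.http.HTTPRepository;</p></list-item>
<list-item><p>public class FogObservationClient {</p></list-item>
<list-item><p>public static void main(String[] args) throws Exception {</p></list-item>
<list-item><p>// Connect to the (assumed) remote Sesame repository.</p></list-item>
<list-item><p>Repository repo = new HTTPRepository("http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment");</p></list-item>
<list-item><p>repo.initialize();</p></list-item>
<list-item><p>RepositoryConnection con = repo.getConnection();</p></list-item>
<list-item><p>try {</p></list-item>
<list-item><p>// The SPARQL query listed above, condensed into a single string.</p></list-item>
<list-item><p>String query =</p></list-item>
<list-item><p>"PREFIX environment:<http://www.sensor.gaps.upm.es/environment/> " +</p></list-item>
<list-item><p>"PREFIX dbpedia:<http://dbpedia.org/resource/> " +</p></list-item>
<list-item><p>"PREFIX om:<http://www.opengis.net/om/1.0> " +</p></list-item>
<list-item><p>"PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#> " +</p></list-item>
<list-item><p>"PREFIX dbpedia-owl:<http://dbpedia.org/ontology/> " +</p></list-item>
<list-item><p>"SELECT DISTINCT ?obs WHERE { ?c rdf:type environment:Road . " +</p></list-item>
<list-item><p>"?obs om:featureOfInterest ?c ; om:observedProperty environment:denseFog ; " +</p></list-item>
<list-item><p>"om:observationLocation ?loc . ?loc dbpedia-owl:isPartOf dbpedia:Guadalajara . }";</p></list-item>
<list-item><p>TupleQueryResult result = con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();</p></list-item>
<list-item><p>while (result.hasNext()) {</p></list-item>
<list-item><p>BindingSet bs = result.next();</p></list-item>
<list-item><p>System.out.println(bs.getValue("obs")); // e.g., om:obs_1</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>result.close();</p></list-item>
<list-item><p>} finally {</p></list-item>
<list-item><p>con.close();</p></list-item>
<list-item><p>repo.shutDown();</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>}</p></list-item>
</list>
</p>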
<p>Additional knowledge from semantically annotated driver-observations could be obtained by using rule-based reasoning to infer new ontological assertions from known instances and class descriptions. For example, the driver-generated observation of a road under dense foggy conditions in the previous subsection could be used by a Road Safety Authority monitoring application to warn other drivers entering the area. A driver planning a trip through this area using a navigator connected to the Semantic Sensor Web (in her car or mobile phone) could be alerted of the dense fog and be advised to take an alternative route.</p>
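<p>As a sketch of how such an assertion could be derived, a monitoring application might run a SPARQL CONSTRUCT query of the following kind, where the class <italic>environment:hazardousRoad</italic> is a hypothetical name introduced here purely for illustration:
<list list-type="simple">
<list-item><p>PREFIX environment:<<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/environment/">http://www.sensor.gaps.upm.es/environment/</ext-link>></p></list-item>
<list-item><p>PREFIX om:<<ext-link ext-link-type="uri" xlink:href="http://www.opengis.net/om/1.0">http://www.opengis.net/om/1.0</ext-link>></p></list-item>
<list-item><p>PREFIX rdf:<<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/1999/02/22-rdf-syntax-ns#">http://www.w3.org/1999/02/22-rdf-syntax-ns#</ext-link>></p></list-item>
<list-item><p>CONSTRUCT { ?road rdf:type environment:hazardousRoad }</p></list-item>
<list-item><p>WHERE {</p></list-item>
<list-item><p>?obs om:featureOfInterest ?road ;</p></list-item>
<list-item><p>om:observedProperty environment:denseFog .</p></list-item>
<list-item><p>}</p></list-item>
</list>
</p>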
</sec>
</sec>
<sec>
<label>5.</label>
<title>Experimental Setup</title>
<p>We set up an experiment to perform an exploratory analysis of the different approaches and technologies we have considered for sharing driver-generated observations, collected using in-vehicle HMI systems, through the Semantic Sensor Web. Our testing scenario approximates the realistic connected car environment developed in the MARTA [
<xref ref-type="bibr" rid="b12-sensors-12-06307">12</xref>
] (Mobility for Advanced Transport Networks) research project, in which several HMI systems were developed for different In-Vehicle Information Systems (IVIS) and Advanced Driver Assistance Systems (ADAS) applications.</p>
<p>The instantiation of the W3C MMI architecture described in Section 3, including mechanisms to annotate and publish driver-observations (Section 4), was carried out on an On-Board Unit (OBU) in charge of managing the Human-Machine Interaction. This OBU was integrated with the new technologies (GPRS—General Packet Radio Service, UMTS—Universal Mobile Telecommunications System, HSDPA—High Speed Downlink Packet Access, CALM—Continuous Air interface for Long and Medium distance,
<italic>etc</italic>
.) developed in MARTA to give support to V2V (vehicle to vehicle) and V2I (vehicle to infrastructure) communications.</p>
<p>The final implementation was integrated in a
<italic>CarPC</italic>
, a computer specifically designed to be installed and run in vehicles. The
<italic>CarPC</italic>
was set up with a Linux OS, a Java Virtual machine and release 3.4 of the OSGi platform [
<xref ref-type="bibr" rid="b24-sensors-12-06307">24</xref>
].</p>
<p>
<xref ref-type="fig" rid="f5-sensors-12-06307">Figure 5</xref>
presents the main components we developed on the vehicle side of our implementation. As described in Section 3, the Interaction Manager of our MMI architecture was implemented using SCXML, so a specific bundle was developed including the SCXML engine provided by Apache Commons SCXML [
<xref ref-type="bibr" rid="b39-sensors-12-06307">39</xref>
]. Both the driver-initiated and the sensor-initiated dialogues were implemented using SCXML documents invoking proprietary Telefónica R&D speech technologies (ASR—Automatic Speech Recognition and TTS—Text to Speech) accessed through a Speech Server bundle. Dialogue management also included interaction with events from buttons on the steering wheel (which served to carry out functions such as allowing the driver to generate an order to start a driver-initiated dialogue).</p>
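<p>To indicate how this bundle drives the dialogues, the following minimal Java sketch loads an SCXML document and injects an event, assuming the Apache Commons SCXML 0.9 API; the document name (driver_dialogue.scxml) and the event name (driver.report.start) are illustrative assumptions, not the actual resources of our implementation:
<list list-type="simple">
<list-item><p>import java.net.URL;</p></list-item>
<list-item><p>import org.apache.commons.scxml.SCXMLExecutor;</p></list-item>
<list-item><p>import org.apache.commons.scxml.TriggerEvent;</p></list-item>
<list-item><p>import org.apache.commons.scxml.env.SimpleDispatcher;</p></list-item>
<list-item><p>import org.apache.commons.scxml.env.SimpleErrorHandler;</p></list-item>
<list-item><p>import org.apache.commons.scxml.env.SimpleErrorReporter;</p></list-item>
<list-item><p>import org.apache.commons.scxml.env.jexl.JexlEvaluator;</p></list-item>
<list-item><p>import org.apache.commons.scxml.io.SCXMLParser;</p></list-item>
<list-item><p>import org.apache.commons.scxml.model.SCXML;</p></list-item>
<list-item><p>public class DialogueEngine {</p></list-item>
<list-item><p>public static void main(String[] args) throws Exception {</p></list-item>
<list-item><p>// Parse the (hypothetical) dialogue description.</p></list-item>
<list-item><p>URL url = DialogueEngine.class.getResource("/driver_dialogue.scxml");</p></list-item>
<list-item><p>SCXML scxml = SCXMLParser.parse(url, new SimpleErrorHandler());</p></list-item>
<list-item><p>// Set up and start the state machine.</p></list-item>
<list-item><p>SCXMLExecutor exec = new SCXMLExecutor(new JexlEvaluator(), new SimpleDispatcher(), new SimpleErrorReporter());</p></list-item>
<list-item><p>exec.setStateMachine(scxml);</p></list-item>
<list-item><p>exec.go();</p></list-item>
<list-item><p>// Inject an event, e.g., a steering-wheel button press opening a driver-initiated dialogue.</p></list-item>
<list-item><p>exec.triggerEvent(new TriggerEvent("driver.report.start", TriggerEvent.SIGNAL_EVENT));</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>}</p></list-item>
</list>
</p>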
<p>The event-based communication layer (again, see Section 3)—an important element in the W3C MMI architecture—was supported by the EventAdmin OSGi Service bundle, which is a standard way of dealing with events using the publish/subscribe model. It is through this service that the SCXML-based Interaction Manager interacts with the Speech Server bundle (ASR/TTS) as well as with several bundles receiving sensor data (
<italic>i.e.</italic>
, Sensor Components). Data from Local Sensor Components (car-sensors) were received through specific wrapping components that accessed the CAN (Controller Area Network) bus, while a specific bundle, including Internet access through GPRS, was developed to access the Semantic Sensor Web (
<italic>i.e.</italic>
, to query RDF repositories).</p>
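<p>The following minimal Java sketch illustrates this publish/subscribe pattern; the topic name and the event properties are illustrative assumptions, and the EventAdmin instance would be obtained from the OSGi service registry:
<list list-type="simple">
<list-item><p>import java.util.Hashtable;</p></list-item>
<list-item><p>import org.osgi.service.event.Event;</p></list-item>
<list-item><p>import org.osgi.service.event.EventAdmin;</p></list-item>
<list-item><p>public class ObservationEventSource {</p></list-item>
<list-item><p>private final EventAdmin eventAdmin; // injected from the OSGi service registry</p></list-item>
<list-item><p>public ObservationEventSource(EventAdmin eventAdmin) {</p></list-item>
<list-item><p>this.eventAdmin = eventAdmin;</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>public void reportDenseFog(String roadId) {</p></list-item>
<list-item><p>Hashtable props = new Hashtable();</p></list-item>
<list-item><p>props.put("observedProperty", "denseFog");</p></list-item>
<list-item><p>props.put("featureOfInterest", roadId);</p></list-item>
<list-item><p>// postEvent delivers the event asynchronously to all subscribers of the topic.</p></list-item>
<list-item><p>eventAdmin.postEvent(new Event("es/upm/gaps/sensor/OBSERVATION", props));</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>}</p></list-item>
</list>
</p>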
<p>Another important experimental development was the integration of several technologies to reach the final goal of making the driver-generated observations, collected through the SCXML dialogues, available on the Semantic Sensor Web. To this end we followed two major development steps:
<list list-type="bullet">
<list-item>
<p>First, a specific bundle (the Semantic Annotation bundle in
<xref ref-type="fig" rid="f5-sensors-12-06307">Figure 5</xref>
) was implemented. This bundle receives events from the Interaction Manager and generates RDF annotations using the O&M-OWL model. As discussed in Section 4, to complete the data in all the generated RDF triples, this bundle was connected to other in-car information systems; in our case to the navigation system, to obtain the road name (
<italic>om:observation</italic>
), current time (
<italic>om:samplingTime</italic>
) and position (as the precise km on a particular road,
<italic>om:observationLocation</italic>
).</p>
</list-item>
<list-item>
<p>Second, each time the Semantic Annotation bundle generates an RDF-annotated driver-observation, a Semantic Sensor Web publication bundle (SSWP, see
<xref ref-type="fig" rid="f5-sensors-12-06307">Figure 5</xref>
) is used to publish it in an RDF repository. For this purpose we have made use of features provided by Sesame [
<xref ref-type="bibr" rid="b40-sensors-12-06307">40</xref>
]: an open-source Java framework for the storage and querying of RDF data (a minimal sketch of this publication step is given just after this list). More specifically, we have used the Sesame workbench to create an offline repository. Consequently, as shown in
<xref ref-type="fig" rid="f6-sensors-12-06307">Figure 6</xref>
, each time an RDF annotation is generated, the SSWP bundle uses Sesame to add the corresponding new RDF triples to an RDF repository (more specifically, to a SPARQL EndPoint [
<xref ref-type="bibr" rid="b38-sensors-12-06307">38</xref>
]).</p>
</list-item>
</list>
</p>
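<p>A minimal sketch of this publication step, under the Sesame 2.x openrdf API, could look as follows; the namespace constants (including the trailing slash added to the om namespace for readability) are assumptions of this sketch:
<list list-type="simple">
<list-item><p>import org.openrdf.model.URI;</p></list-item>
<list-item><p>import org.openrdf.model.ValueFactory;</p></list-item>
<list-item><p>import org.openrdf.repository.Repository;</p></list-item>
<list-item><p>import org.openrdf.repository.RepositoryConnection;</p></list-item>
<list-item><p>import org.openrdf.repository.http.HTTPRepository;</p></list-item>
<list-item><p>public class SSWPublisher {</p></list-item>
<list-item><p>static final String OM = "http://www.opengis.net/om/1.0/"; // trailing slash assumed</p></list-item>
<list-item><p>static final String ENV = "http://www.sensor.gaps.upm.es/environment/";</p></list-item>
<list-item><p>public static void main(String[] args) throws Exception {</p></list-item>
<list-item><p>Repository repo = new HTTPRepository("http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment");</p></list-item>
<list-item><p>repo.initialize();</p></list-item>
<list-item><p>ValueFactory vf = repo.getValueFactory();</p></list-item>
<list-item><p>RepositoryConnection con = repo.getConnection();</p></list-item>
<list-item><p>try {</p></list-item>
<list-item><p>// Core triples of the dense-fog observation of Section 4.1.</p></list-item>
<list-item><p>URI obs = vf.createURI(OM, "obs_1");</p></list-item>
<list-item><p>con.add(obs, vf.createURI(OM, "observedProperty"), vf.createURI(ENV, "denseFog"));</p></list-item>
<list-item><p>con.add(obs, vf.createURI(OM, "featureOfInterest"), vf.createURI(OM, "road_1"));</p></list-item>
<list-item><p>} finally {</p></list-item>
<list-item><p>con.close();</p></list-item>
<list-item><p>repo.shutDown();</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>}</p></list-item>
<list-item><p>}</p></list-item>
</list>
</p>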
<p>However, it is important to notice that driver-observations stored as RDF triples in repositories can only be accessed by sending SPARQL queries to a SPARQL endpoint. In RDF, resources are identified by means of URIs. The URIs used in these SPARQL repositories are not dereferenceable, meaning that they cannot be accessed from a Semantic Web browser, nor, therefore, by a growing variety of Linked Data applications and clients. For example, in our particular car-related scenario the resources in the namespace
<italic>environment</italic>
(used in the example in Section 4) can be found following the URL
<italic>
<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/environment/">http://www.sensor.gaps.upm.es/environment/</ext-link>
</italic>
. However, the SPARQL endpoint is accessible through the local address
<italic>
<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment">http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment</ext-link>
</italic>
. Therefore, the RDF in this repository will only be accessible locally by SPARQL clients, making it necessary to perform a mapping that allows access through semantic browsers and Linked Data clients.</p>
<p>To tackle this difficulty, Pubby [
<xref ref-type="bibr" rid="b41-sensors-12-06307">41</xref>
], a Linked Data front end for SPARQL endpoints, was integrated with our initial Sesame repository, as depicted in
<xref ref-type="fig" rid="f6-sensors-12-06307">Figure 6</xref>
. Pubby also provides a server (only requiring a servlet container such as Apache Tomcat) that is in charge of mapping the URIs retrieved by SPARQL endpoints to dereferenceable URIs. Pubby handles requests from semantic browsers by connecting to the SPARQL endpoint, requesting from it information regarding the original URI, and returning the results to the client through an access point. So, with the Pubby server configured to run at
<italic>
<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/environment/">http://www.sensor.gaps.upm.es/environment/</ext-link>
</italic>
, when the semantic browser or linked data client decides to access a particular URI, such as
<italic>
<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/environment/Road">http://www.sensor.gaps.upm.es/environment/Road</ext-link>
</italic>
, it accesses the Pubby server, which then collects the information regarding the resource in question from the SPARQL endpoint (
<italic>
<ext-link ext-link-type="uri" xlink:href="http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment">http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment</ext-link>
</italic>
). The resource information is then returned to the client in machine-readable format.</p>
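<p>For concreteness, the kind of Pubby configuration involved can be sketched in Turtle as follows; property names follow the Pubby documentation, and the sketch omits options we did not need, so it should not be read as our exact configuration file:
<list list-type="simple">
<list-item><p>@prefix conf: <http://richard.cyganiak.de/2007/pubby/config#> .</p></list-item>
<list-item><p><> a conf:Configuration ;</p></list-item>
<list-item><p>conf:webBase <http://www.sensor.gaps.upm.es/environment/> ;</p></list-item>
<list-item><p>conf:dataset [</p></list-item>
<list-item><p>conf:sparqlEndpoint <http://www.sensor.gaps.upm.es/openrdf-sesame/repositories/environment> ;</p></list-item>
<list-item><p>conf:datasetBase <http://www.sensor.gaps.upm.es/environment/></p></list-item>
<list-item><p>] .</p></list-item>
</list>
</p>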
<p>This, in sum, is how we are able to make the new driver-generated observations collected through in-vehicle HMI systems shareable over the Semantic Sensor Web.</p>
</sec>
<sec>
<label>6.</label>
<title>Performance Analysis and Concept Validation</title>
<p>In addition to the experimental setup implemented on an On-Board Unit, the same software components were integrated in a driving simulator environment (see
<xref ref-type="fig" rid="f7-sensors-12-06307">Figure 7</xref>
), so we could have a flexible and safe testing environment for performance analyses and usability studies.</p>
<p>The driving simulator was built on the open-source simulator
<italic>VDrift</italic>
[
<xref ref-type="bibr" rid="b42-sensors-12-06307">42</xref>
] (details of our implementation are presented in [
<xref ref-type="bibr" rid="b43-sensors-12-06307">43</xref>
]). The driving simulator and the interaction framework were integrated through a standard connection in order to make contextual information available to the interaction framework. The HMI system consists of an application developed using OSGi and SCXML technologies, with which the driver can report any of a fixed set of 16 observation types (driver-initiated dialogue) or confirm a specific situation in a dialogue that is automatically initiated when a vehicle sensor (or set of sensors) detects a reportable situation, such as a broken-down car stopped on the side of the road (see Section 3). Performance tests and a conceptual validation of the scenario with potential users were carried out using this driving simulation framework.</p>
<sec>
<label>6.1.</label>
<title>Performance Analysis</title>
<p>A set of performance tests was carried out to verify that the HMI system can be informed of context changes within adequate response times. Response times were measured for varying numbers of concurrent contextual information sources (corresponding to both sensors and Sensor Web sources), and for a varying degree of complexity of these sources.</p>
<p>Since our implementation is SCXML-based, our performance analysis focused on measuring response times for the SCXML machines involved in processing the events coming from different numbers of concurrent context sources, demanding SCXML processing with different complexities (
<italic>i.e.</italic>
, involving a different number of states). The test consisted of measuring the time elapsed from the arrival of a set of events at the SCXML structure until their complete processing by this structure (
<italic>i.e.</italic>
, with the generation of a stable output). A set (
<italic>x</italic>
) of events, simulating the arrival of
<italic>x</italic>
concurrent sensor observations, triggered the activation of
<italic>x</italic>
 state machines, which in turn triggered a chain of transitions through a given number of states (
<italic>y</italic>
) representing the processing needs of these state machines.</p>
<p>Simulation results are presented in
<xref ref-type="fig" rid="f8-sensors-12-06307">Figure 8</xref>
. The colormap in the figure gives the SCXML processing response times (in ms) on a grid with 20 columns, for a variable number of concurrent events or context sources (
<italic>x</italic>
= 1 to 20); and 49 rows corresponding to a variable number of states (
<italic>y</italic>
= 2 to 50) representing different possible processing demands for each of the concurrent context sources. An SCXML state machine with 2 states can process a single sensor detecting a simple event, such as a low fuel level (a state machine that only changes its state if the fuel level drops below a given threshold), while a machine with 50 states might be needed to model the driver's steering behavior.</p>
<p>To analyze the performance results in
<xref ref-type="fig" rid="f8-sensors-12-06307">Figure 8</xref>
, we assumed that response times below 250 ms were appropriate for the reactive behavior of an Interaction Manager in safety-critical situations, for example (as proposed in [
<xref ref-type="bibr" rid="b44-sensors-12-06307">44</xref>
]) to suspend the interaction when the driver is carrying out a difficult maneuver in traffic. Within these low response times we found that our system was able to manage a variety of configurations, corresponding to the darker blue areas of
<xref ref-type="fig" rid="f8-sensors-12-06307">Figure 8</xref>
, ranging from around 20 context sources requiring low-complexity processing (less than 4 states), to a small number of sources (less than 4) demanding high processing (from 25 to 50 states).</p>
<p>Other areas of interest in
<xref ref-type="fig" rid="f8-sensors-12-06307">Figure 8</xref>
(light blue and yellow) are covered by response times lower than 4 s. In these areas the information processed by the context sources can be used by the HMI Interaction Manager at the lower pace of turn exchanges with the user. Thus, for example, the HMI system can inform the driver that the observation she is reporting has already been confirmed by other drivers and has already been communicated to local authorities. For these response times our performance analysis revealed that, on average, 18 context sources demanding an average processing load of 35 states each could be handled.</p>
<p>Above these areas our implementation handles more than 20 concurrent sources with complexities of over 35 states, in processing times greater than 4 s (red zone in the upper right corner of
<xref ref-type="fig" rid="f8-sensors-12-06307">Figure 8</xref>
). This performance area may be acceptable for deriving useful high-level contextual information that is relevant during the course of a journey, but neither safety-critical nor intensive in two-way interaction (otherwise interaction with the driver would be badly interrupted, causing frustration and distraction).</p>
</sec>
<sec>
<label>6.2.</label>
<title>Test Scenario Design and Validation</title>
<p>In the driving simulator we implemented the Lane Change Test (LCT) [
<xref ref-type="bibr" rid="b45-sensors-12-06307">45</xref>
] in order to obtain quantitative and qualitative measures of driving performance degradation while the driver interacts with the interaction framework, and vice versa. In the upcoming tests, each driver will be involved in a driving task in which she has to keep a speed of 60 km/h along a 3-lane road. In addition, the test drivers will be instructed to keep to a specific lane indicated by signs that appear at regular intervals on both sides of the road (
<xref ref-type="fig" rid="f7-sensors-12-06307">Figure 7</xref>
). At the same time the drivers will be asked to execute a secondary task that is not restricted by the standard. In our case, we developed a task related to the publication of driver-generated observations through an HMI system following the approach described in this paper. Two kinds of such observations were considered: observations provided freely by the driver and observations recorded by the system (
<italic>i.e.</italic>
, generated passively by the driver). As mentioned previously (in Section 3), interactions will either be system-initiated or driver-initiated.</p>
<p>To provide some form of validation of the test scenario we produced a questionnaire designed to elucidate what kinds of information users might be willing to receive and to share with smart applications, in which contexts, and whether they would have concerns about the idea. We had 33 respondents, 22 male and 11 female, most with at least some driving experience, of whom 24 declared they were either good or expert drivers. The questionnaire included items with a 5-point Likert response format, with anchors in the extremes (“strongly disagree,” assigned a value of −2, and “strongly agree,” assigned a value of 2); multiple choice questions; and the option to write comments for some of the questions. We now present some results from this preliminary validation based on the expectations of potential users.</p>
<p>To begin, we asked potential users whether they would regard as useful a system that would give them information related to the three basic ontological dimensions of driving: the driver, the vehicle and the environment. Respondents were most positive about the usefulness of being
<italic>consumers</italic>
of information from the driving environment (road conditions, weather and traffic), and the most skeptical about information concerning the driver. However, responses were widely varied across different kinds of information. The left half of
<xref ref-type="fig" rid="f9-sensors-12-06307">Figure 9</xref>
shows the mean value of the responses for each of these items. Detection of sleepiness and distraction were, on average, better received than the monitoring of driving quality, the latter not being generally regarded as providing useful information. Furthermore, respondents expressed a variety of concerns (right half of
<xref ref-type="fig" rid="f9-sensors-12-06307">Figure 9</xref>
) regarding the collection of all such information and the prospect of it being shared with other parties (
<italic>i.e.</italic>
, with the focus on drivers as
<italic>producers</italic>
of information) except that which concerns the mechanical condition of the car (its positive value indicates lack of concern, on average). The most troubling sources of information were driving style and route planning (the latter shown in blue, as an environmental item, though it has a clear component of personal information about the driver). The respondents' comments revealed details of the concerns. The most common concern by far was privacy (stated by 11 respondents), followed by fear that the information might be used inappropriately (7 respondents), reluctance on account of the expected increase in workload, stress and distraction (5 respondents), and feeling controlled (3 respondents). The greatest source of concern, in any case, was that the system might record information about the driver without his or her knowledge (red bar at the far right in
<xref ref-type="fig" rid="f9-sensors-12-06307">Figure 9</xref>
).</p>
<p>For none of the sources of information considered was there a correlation found between the expected usefulness of receiving it and the corresponding concerns associated with its collection by the system (
<italic>i.e.</italic>
, between the items on the left hand side of
<xref ref-type="fig" rid="f9-sensors-12-06307">Figure 9</xref>
—usefulness and the corresponding ones on the right hand side of
<xref ref-type="fig" rid="f9-sensors-12-06307">Figure 9</xref>
—concerns). This may suggest a degree of independence between how useful people find services and the concerns they have about how the information is collected and used (the nature of the information also matters greatly); or it may reveal that potential users' willingness to be informed is relatively independent of their reluctance to have information about their driving behavior collected, even when the services require that information in order to function. Though more discriminative testing is needed to sort out these intricacies, respondents' comments reveal varying attitudes (which may account for the lack of correlation): from those who express interest in the scenario services and few concerns; to those who showed less interest while expressing reluctance due to privacy and other concerns; to those with little interest simply because they do not see how the information could be useful (regardless of privacy and other concerns).</p>
<p>When asked whether they would prefer the interaction initiative to fall entirely on the system, or on the driver, or whether they would prefer a mixed-initiative scheme, the latter was clearly preferred (67% of respondents expressed agreement with this preference). Interestingly, however, opinions of the three initiative schemes differed depending on the driving situation. Specifically, we distinguished between driving in an urban area and driving on other roads (e.g., a motorway or country road). For urban areas we found that mixed initiative was thought to be the more comfortable interaction set-up by only slightly more respondents (39%) than driver (30%) or system initiative (24%); for other roads mixed initiative was clearly favored in terms of comfort (64%). System initiative was thought the safest mode of interaction in urban areas (49%), however, while on other roads the figures for system initiative were closer to those for mixed initiative (33% and 39% respectively). When asked which interaction set-up was expected to be the most distracting, mixed initiative was chosen the least, both for urban areas (12%) and other roads (18%), so overall we can infer that it was considered the least distracting mode.</p>
<p>The preference for mixed initiative can be taken as a first indication that users might be willing to contribute information voluntarily, feeling in control of the information provided. On the other hand, as mentioned above, potential users reject the idea of information being recorded by the system without their knowledge. Personal information is very sensitive (and almost anything pertaining to the driver seems to be regarded as such), and confidence about what the information will be used for seems crucial for acceptance. The context of the driving activity also has to be taken into account. With these observations we are now in a better position to formulate a scenario of use combining the lane-change task with an appropriately designed secondary task (interaction through a dialogue system).</p>
</sec>
</sec>
<sec>
<label>7.</label>
<title>Conclusions and Further Research</title>
<p>The central theme of this paper has been that interconnected objects embedding Human-Machine Interaction (HMI) technologies can play an important role in obtaining relevant human-generated real-world information that can be shared with other users, connected objects or applications.</p>
<p>As a particular connected object scenario, we have discussed how the design of in-vehicle HMI systems can make driver-generated observations shareable on the Semantic Sensor Web. Our approach is based on collecting observations from drivers using an HMI system built following the W3C MMI architecture, incorporating semantic annotation using O&M-OWL, and making the generated RDF data available through SPARQL Endpoints and Linked Data front ends. An experimental setup, integrating different HMI and Semantic Web technologies, implemented on an OSGi platform for a connected car On-Board Unit, has also been presented.</p>
<p>This experimental framework, upon which we are developing a test scenario for a “connected car”, has served to illustrate the possibilities of integrating HMI systems into emerging Sensor Web initiatives. It has also highlighted important challenges that need to be addressed, some related to HMI system development while others to the future evolution of the Sensor Web.</p>
<p>We have begun the validation of the test scenario that we are developing to look experimentally at driver-initiated and sensor-initiated dialogues. The approach seems sound, since, on the one hand, potential drivers believe the proposed driving-assistance systems in the scenario can be useful, and on the other, they show willingness to actively engage in conversation with the on-board HMI system, contributing information voluntarily. Great care has to be taken, however, to give users a sense of control over the information being shared, especially personal information, including that describing driver behavior. It will be interesting to observe the effects of information sharing during the upcoming driving-interaction test-runs (using the Lane-Change Test mentioned in Section 6.2). We will also look at whether, and to what extent, inaccuracies in sensor information and limitations on the number of allowed reported observations generate frustration in drivers, and whether this leads to unsafe interaction patterns. Yet another focus of attention will be the impact of interaction problems, such as misunderstandings and non-understandings of user utterances, on driving quality, user acceptance and, indeed, the efficiency and limitations of using interaction as a source of sensed information.</p>
<p>The continual emergence of new terms such as Sensor Web, Real-World Internet, Semantic Sensor Web, Semantic Sensor Internet or Human Sensor Web suggests that more fundamental research effort is required on how to effectively articulate and provide access to human-generated observations, human sensors, sensor observations and the Internet. We believe that the use of Semantic Web principles and technologies can assist in this task, but as the amount of human-generated and sensor-generated data grows, scalability and efficiency for intensive distributed computing will become critical factors. Our future research will also address mechanisms for the proper management of privacy and quality of information, two key aspects for the successful sharing of human-generated observations.</p>
</sec>
</body>
<back>
<ack>
<p>The activities in this paper were funded by the Spanish Ministry of Science and Technology (project TEC2009-14719-C02-02) and by the MARTA project (CDTI, 3rd CENIT Program, INGENIO 2010) of the Spanish Government.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="b1-sensors-12-06307">
<label>1.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weiser</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>The computer for the 21st century</article-title>
<source>Sci. Am.</source>
<year>1991</year>
<volume>265</volume>
<fpage>94</fpage>
<lpage>105</lpage>
</element-citation>
</ref>
<ref id="b2-sensors-12-06307">
<label>2.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Sundmaeker</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Guillermin</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Friess</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Woelfflé</surname>
<given-names>S</given-names>
</name>
</person-group>
<article-title>Vision and Challenges for Realising the Internet of Things</article-title>
<source>Cluster of European Research Projects on the Internet of Things (CERP-IoT)</source>
<publisher-loc>Brussels, Belgium</publisher-loc>
<year>2010</year>
</element-citation>
</ref>
<ref id="b3-sensors-12-06307">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bröring</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Echterhoff</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Jirka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Simonis</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Everding</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Stasch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Liang</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lemmens</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>New generation sensor web enablement</article-title>
<source>Sensors</source>
<year>2011</year>
<volume>11</volume>
<fpage>2652</fpage>
<lpage>2699</lpage>
<pub-id pub-id-type="pmid">22163760</pub-id>
</element-citation>
</ref>
<ref id="b4-sensors-12-06307">
<label>4.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Sensorpedia</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.sensorpedia.com/">http://www.sensorpedia.com/</ext-link>
(accessed on 2 May 2012)</comment>
</element-citation>
</ref>
<ref id="b5-sensors-12-06307">
<label>5.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Sensormap</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://atom.research.microsoft.com/sensewebv3/sensormap/">atom.research.microsoft.com/sensewebv3/sensormap/</ext-link>
(accessed on 2 May 2012)</comment>
</element-citation>
</ref>
<ref id="b6-sensors-12-06307">
<label>6.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Sensorbase</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://sensorbase.org">sensorbase.org</ext-link>
(accessed on 6 June 2011)</comment>
</element-citation>
</ref>
<ref id="b7-sensors-12-06307">
<label>7.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Pachube</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.pachube.com">www.pachube.com</ext-link>
(accessed on 2 May 2012)</comment>
</element-citation>
</ref>
<ref id="b8-sensors-12-06307">
<label>8.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>OGC Open Geospatial Consortium, Inc</collab>
</person-group>
<article-title>Sensor Web Enablement (SWE)</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.opengeospatial.org/projects/groups/sensorweb">www.opengeospatial.org/projects/groups/sensorweb</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b9-sensors-12-06307">
<label>9.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Knoesis Center, Semantic Sensor Web Project</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://wiki.knoesis.org/index.php/SSW">http://wiki.knoesis.org/index.php/SSW</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b10-sensors-12-06307">
<label>10.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sheth</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Henson</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sahoo</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Semantic sensor web</article-title>
<source>IEEE Internet Comput.</source>
<year>2008</year>
<volume>12</volume>
<fpage>78</fpage>
<lpage>83</lpage>
</element-citation>
</ref>
<ref id="b11-sensors-12-06307">
<label>11.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sigüenza</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Blanco</surname>
<given-names>J.L.</given-names>
</name>
<name>
<surname>Bernat</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hernández</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Using SCXML for Semantic Sensor Networks</article-title>
<conf-name>Proceedings of the International Workshop on Semantic Sensor Networks at the 9th International Semantic Web Conference (ISWC)</conf-name>
<conf-loc>Shanghai, China</conf-loc>
<conf-date>7–11 November 2010</conf-date>
</element-citation>
</ref>
<ref id="b12-sensors-12-06307">
<label>12.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>MARTA Project</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.cenitmarta.org">www.cenitmarta.org</ext-link>
(accessed on 6 June 2011)</comment>
</element-citation>
</ref>
<ref id="b13-sensors-12-06307">
<label>13.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Foerster</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Jirka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Stasch</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Pross</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Everding</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Bröring</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Juerrens</surname>
<given-names>E.H.</given-names>
</name>
</person-group>
<article-title>Integrating Human Observations and Sensor Observations—the Example of a Noise Mapping Community</article-title>
<conf-name>Proceedings of towards Digital Earth Workshop at Future Internet Symposium</conf-name>
<conf-loc>Berlin, Germany</conf-loc>
<conf-date>20 September 2010</conf-date>
</element-citation>
</ref>
<ref id="b14-sensors-12-06307">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sigüenza</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Pardo</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Blanco</surname>
<given-names>J.L.</given-names>
</name>
<name>
<surname>Bernat</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Garijo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hernández</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>Bridging the semantic sensor web and multimodal human-machine interaction using SCXML</article-title>
<source>Int. J. Sens. Wirel. Commun. Control Spec. Issue Semant. Sens. Netw.</source>
<year>2012</year>
<comment>in press</comment>
</element-citation>
</ref>
<ref id="b15-sensors-12-06307">
<label>15.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>W3C</collab>
</person-group>
<article-title>Multimodal Architecture and Interfaces</article-title>
<source>W3C Candidate Recommendation 12 January 2012</source>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/TR/mmi-arch/">http://www.w3.org/TR/mmi-arch/</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b16-sensors-12-06307">
<label>16.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jirka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Bröring</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Foerster</surname>
<given-names>T</given-names>
</name>
</person-group>
<article-title>Handling the Semantics of Sensor Observables within SWE Discovery Solutions</article-title>
<conf-name>Proceedings of International Symposium on Collaborative Technologies and Systems, Workshop on Sensor Web Enablement (SWE 2010)</conf-name>
<conf-loc>Chicago, IL, USA</conf-loc>
<conf-date>17–21 May 2010</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2010</year>
<fpage>322</fpage>
<lpage>329</lpage>
</element-citation>
</ref>
<ref id="b17-sensors-12-06307">
<label>17.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>McGlaun</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Althoff</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Lang</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rigoll</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Towards Multimodal Error Management: Experimental Evaluation of User Strategies in Event of Faulty Application Behavior in Automotive Environments</article-title>
<conf-name>Proceedings of the 7th World Multiconference on Systems, Cybernetics, and Informatics (SCI)</conf-name>
<conf-loc>Orlando, FL, USA</conf-loc>
<conf-date>27–30 July 2003</conf-date>
<fpage>462</fpage>
<lpage>466</lpage>
</element-citation>
</ref>
<ref id="b18-sensors-12-06307">
<label>18.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Pieraccini</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Dayanidhi</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Bloom</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Dahan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Phillips</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Goodman</surname>
<given-names>B.R.</given-names>
</name>
<name>
<surname>Prasad</surname>
<given-names>K.V.</given-names>
</name>
</person-group>
<article-title>A Multimodal Conversational Interface for a Concept Vehicle</article-title>
<conf-name>Proceedings of the Eurospeech</conf-name>
<conf-loc>Geneva, Switzerland</conf-loc>
<conf-date>1–4 September 2003</conf-date>
</element-citation>
</ref>
<ref id="b19-sensors-12-06307">
<label>19.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Amditis</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Kussmann</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Polynchronopoulos</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Engström</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Andreone</surname>
<given-names>L</given-names>
</name>
</person-group>
<article-title>System Architecture for Integrated Adaptive HMI Solutions</article-title>
<conf-name>Proceedings of Intelligent Vehicles Symposium</conf-name>
<conf-loc>Tokyo, Japan</conf-loc>
<conf-date>13–15 June 2006</conf-date>
<fpage>388</fpage>
<lpage>391</lpage>
</element-citation>
</ref>
<ref id="b20-sensors-12-06307">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hervás</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bravo</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Fontecha</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>A context model based on ontological languages: A proposal for information visualization</article-title>
<source>J. Univ. Comput. Sci.</source>
<year>2010</year>
<volume>16</volume>
<fpage>1539</fpage>
<lpage>1555</lpage>
</element-citation>
</ref>
<ref id="b21-sensors-12-06307">
<label>21.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Nelson</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>Defining interfaces as Services in Embedded Vehicle Software, Research and Advanced Engineering</article-title>
<conf-name>Proceedings of the Automotive Software Workshop</conf-name>
<conf-loc>San Diego, USA</conf-loc>
<conf-date>10–12 January 2004</conf-date>
</element-citation>
</ref>
<ref id="b22-sensors-12-06307">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Goodchild</surname>
<given-names>M.F.</given-names>
</name>
</person-group>
<article-title>Citizens as sensors: The world of volunteered geography</article-title>
<source>GeoJournal</source>
<year>2007</year>
<volume>69</volume>
<fpage>211</fpage>
<lpage>221</lpage>
</element-citation>
</ref>
<ref id="b23-sensors-12-06307">
<label>23.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Hoch</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Schweigert</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Althoff</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Rigoll</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>The BMW SURF Project: A Contribution to the Research on Cognitive Vehicles</article-title>
<conf-name>Proceedings of the IEEE Intelligent Vehicles Symposium</conf-name>
<conf-loc>Istanbul, Turkey</conf-loc>
<conf-date>13–15 June 2007</conf-date>
<fpage>692</fpage>
<lpage>697</lpage>
</element-citation>
</ref>
<ref id="b24-sensors-12-06307">
<label>24.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>OSGi Alliance</collab>
</person-group>
<article-title>OSGi Service Platform Release 3</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.osgi.org/Download/File?url=/download/r3/r3.book.pdf">http://www.osgi.org/Download/File?url=/download/r3/r3.book.pdf</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b25-sensors-12-06307">
<label>25.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>W3C</collab>
</person-group>
<article-title>State Chart XML (SCXML): State Machine Notation for Control Abstraction</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/TR/scxml/">http://www.w3.org/TR/scxml/</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b26-sensors-12-06307">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Harel</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Statecharts: A visual formalism for complex systems</article-title>
<source>Sci. Comput. Program.</source>
<year>1987</year>
<volume>8</volume>
<fpage>231</fpage>
<lpage>274</lpage>
</element-citation>
</ref>
<ref id="b27-sensors-12-06307">
<label>27.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>W3C</collab>
</person-group>
<article-title>Voice Extensible Markup Language (VoiceXML) 3.0</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/TR/voicexml30/">http://www.w3.org/TR/voicexml30/</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b28-sensors-12-06307">
<label>28.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Sigüenza</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Blanco</surname>
<given-names>J.L.</given-names>
</name>
<name>
<surname>Bernat</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hernández</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Using SCXML to Integrate Semantic Sensor Information into Context-Aware User Interfaces</article-title>
<conf-name>Proceedings of the International Workshop on Semantic Sensor Web, in Conjunction with the 2nd International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management</conf-name>
<conf-loc>Valencia, Spain</conf-loc>
<conf-date>27–28 October 2010</conf-date>
<person-group person-group-type="editor">
<name>
<surname>Salvatore</surname>
<given-names>F.P.</given-names>
</name>
<name>
<surname>Carlos</surname>
<given-names>E.P.</given-names>
</name>
</person-group>
<publisher-name>ScitePress</publisher-name>
<publisher-loc>Portugal</publisher-loc>
<year>2010</year>
<fpage>47</fpage>
<lpage>59</lpage>
</element-citation>
</ref>
<ref id="b29-sensors-12-06307">
<label>29.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vollrath</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Speech and driving-solution or problem?</article-title>
<source>Intell. Trans. Syst.</source>
<year>2007</year>
<volume>1</volume>
<fpage>89</fpage>
<lpage>94</lpage>
</element-citation>
</ref>
<ref id="b30-sensors-12-06307">
<label>30.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kuter</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Golbeck</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Using probabilistic confidence models for trust inference in Web-based social networks</article-title>
<source>J. ACM Trans. Int. Technol. (TOIT)</source>
<year>2010</year>
<volume>10</volume>
<fpage>1</fpage>
<lpage>23</lpage>
</element-citation>
</ref>
<ref id="b31-sensors-12-06307">
<label>31.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>López de Ipiña</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Díaz de Sarralde</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>García Zubia</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>An ambient assisted living platform integrating RFID data-on-tag care annotations and Twitter</article-title>
<source>J. Univ. Comput. Sci.</source>
<year>2010</year>
<volume>16</volume>
<fpage>1521</fpage>
<lpage>1538</lpage>
</element-citation>
</ref>
<ref id="b32-sensors-12-06307">
<label>32.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Keßler</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Janowicz</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Linking Sensor Data–Why, to What, and How?</article-title>
<conf-name>Proceedings of the International Workshop on Semantic Sensor Networks at the 9th International Semantic Web Conference (ISWC)</conf-name>
<conf-loc>Shanghai, China</conf-loc>
<conf-date>7–11 November 2010</conf-date>
</element-citation>
</ref>
<ref id="b33-sensors-12-06307">
<label>33.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>W3C</collab>
</person-group>
<article-title>Resource Description Framework (RDF)</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/RDF/">http://www.w3.org/RDF/</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b34-sensors-12-06307">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bizer</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Heath</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Berners-Lee</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Linked data-the story so far</article-title>
<source>Int. J. Semantic Web Inf. Syst.</source>
<year>2009</year>
<volume>5</volume>
<fpage>1</fpage>
<lpage>22</lpage>
</element-citation>
</ref>
<ref id="b35-sensors-12-06307">
<label>35.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Henson</surname>
<given-names>C.A.</given-names>
</name>
<name>
<surname>Pschorr</surname>
<given-names>J.K.</given-names>
</name>
<name>
<surname>Sheth</surname>
<given-names>A.P.</given-names>
</name>
<name>
<surname>Thuirunarayan</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>SemSOS: Semantic Sensor Observation Service</article-title>
<conf-name>Proceedings of the International Symposium on Collaborative Technologies and Systems (CTS 2009)</conf-name>
<conf-loc>Baltimore, MD, USA</conf-loc>
<conf-date>18–22 May 2009</conf-date>
</element-citation>
</ref>
<ref id="b36-sensors-12-06307">
<label>36.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>W3C</collab>
</person-group>
<article-title>OWL Web Ontology Language</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/2004/OWL/">http://www.w3.org/2004/OWL/</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b37-sensors-12-06307">
<label>37.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>DBpedia</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://dbpedia.org">dbpedia.org</ext-link>
(accessed on 2 May 2012)</comment>
</element-citation>
</ref>
<ref id="b38-sensors-12-06307">
<label>38.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Prud'hommeaux</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Seaborne</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>SPARQL Query Language for RDF</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.w3.org/TR/rdf-sparql-query/">http://www.w3.org/TR/rdf-sparql-query/</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b39-sensors-12-06307">
<label>39.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Apache Commons SCXML</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://commons.apache.org/scxml/">http://commons.apache.org/scxml/</ext-link>
(accessed on 2 May 2012)</comment>
</element-citation>
</ref>
<ref id="b40-sensors-12-06307">
<label>40.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Sesame</collab>
</person-group>
<article-title>RDF Schema Querying and Storage</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www.openrdf.org">http://www.openrdf.org</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b41-sensors-12-06307">
<label>41.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<name>
<surname>Cyganiak</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bizer</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Pubby—A Linked Data Frontend for SPARQL Endpoints</article-title>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://www4.wiwiss.fu-berlin.de/pubby">http://www4.wiwiss.fu-berlin.de/pubby</ext-link>
(accessed on 28 February 2012)</comment>
</element-citation>
</ref>
<ref id="b42-sensors-12-06307">
<label>42.</label>
<element-citation publication-type="webpage">
<person-group person-group-type="author">
<collab>Vdrift</collab>
</person-group>
<comment>Available online:
<ext-link ext-link-type="uri" xlink:href="http://vdrift.net">http://vdrift.net</ext-link>
(accessed on 2 May 2012)</comment>
</element-citation>
</ref>
<ref id="b43-sensors-12-06307">
<label>43.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Blanco</surname>
<given-names>J.L.</given-names>
</name>
<name>
<surname>Sigüenza</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Díaz</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sendra</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Hernández</surname>
<given-names>L.</given-names>
</name>
</person-group>
<article-title>Reworking Spoken Dialogue Systems with Context Awareness and Information Prioritisation to Reduce Driver Workload</article-title>
<conf-name>Proceedings of the NAG-DAGA International Conference on Acoustics</conf-name>
<conf-loc>Rotterdam, Netherlands</conf-loc>
<conf-date>23–26 March 2009</conf-date>
</element-citation>
</ref>
<ref id="b44-sensors-12-06307">
<label>44.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Nishimoto</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Shioya</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Daigo</surname>
<given-names>H.</given-names>
</name>
</person-group>
<article-title>A Study of Dialogue Management Principles Corresponding to the Driver's Workload</article-title>
<conf-name>Proceedings of the Biennial on Digital Signal Processing for In-Vehicle and Mobile Systems</conf-name>
<conf-loc>Sesimbra, Portugal</conf-loc>
<conf-date>2–3 September 2005</conf-date>
</element-citation>
</ref>
<ref id="b45-sensors-12-06307">
<label>45.</label>
<element-citation publication-type="book">
<article-title>ISO 26022:2010(E)</article-title>
<source>Road vehicles—Ergonomic aspects of transport information and control systems—Simulated lane change test to assess in-vehicle secondary task demand</source>
<publisher-name>International Organization for Standardization</publisher-name>
<publisher-loc>Geneva, Switzerland</publisher-loc>
<year>2010</year>
</element-citation>
</ref>
</ref-list>
<glossary>
<title>Abbreviations</title>
<def-list>
<def-item>
<term>ADAS</term>
<def>
<p>Advanced Driver Assistance Systems</p>
</def>
</def-item>
<def-item>
<term>ASR</term>
<def>
<p>Automatic Speech Recognition</p>
</def>
</def-item>
<def-item>
<term>CALM</term>
<def>
<p>Continuous Air interface for Long and Medium distance</p>
</def>
</def-item>
<def-item>
<term>CAN</term>
<def>
<p>Controller Area Network</p>
</def>
</def-item>
<def-item>
<term>CS</term>
<def>
<p>Catalogue Service</p>
</def>
</def-item>
<def-item>
<term>GPRS</term>
<def>
<p>General Packet Radio Service</p>
</def>
</def-item>
<def-item>
<term>HSDPA</term>
<def>
<p>High Speed Downlink Packet Access</p>
</def>
</def-item>
<def-item>
<term>HMI</term>
<def>
<p>Human Machine Interaction</p>
</def>
</def-item>
<def-item>
<term>IVIS</term>
<def>
<p>In-Vehicle Information Systems</p>
</def>
</def-item>
<def-item>
<term>LCT</term>
<def>
<p>Lane Change Test</p>
</def>
</def-item>
<def-item>
<term>MARTA</term>
<def>
<p>Mobility for Advanced Transport Networks</p>
</def>
</def-item>
<def-item>
<term>MMI</term>
<def>
<p>W3C's Multimodal Architecture and Interfaces</p>
</def>
</def-item>
<def-item>
<term>OBU</term>
<def>
<p>On-Board Unit</p>
</def>
</def-item>
<def-item>
<term>OGC</term>
<def>
<p>Open Geospatial Consortium</p>
</def>
</def-item>
<def-item>
<term>OSGi</term>
<def>
<p>Open Services Gateway Initiative</p>
</def>
</def-item>
<def-item>
<term>OWL</term>
<def>
<p>Web Ontology Language</p>
</def>
</def-item>
<def-item>
<term>O&M</term>
<def>
<p>Observations & Measurements</p>
</def>
</def-item>
<def-item>
<term>RDF</term>
<def>
<p>Resource Description Framework</p>
</def>
</def-item>
<def-item>
<term>SAN</term>
<def>
<p>Sensor and Actuator Network</p>
</def>
</def-item>
<def-item>
<term>SCXML</term>
<def>
<p>State Chart eXtensible Markup Language</p>
</def>
</def-item>
<def-item>
<term>SensorML</term>
<def>
<p>Sensor Model Language</p>
</def>
</def-item>
<def-item>
<term>SOS</term>
<def>
<p>Sensor Observation Service</p>
</def>
</def-item>
<def-item>
<term>SPARQL</term>
<def>
<p>SPARQL Protocol and RDF Query Language</p>
</def>
</def-item>
<def-item>
<term>SSW</term>
<def>
<p>Semantic Sensor Web</p>
</def>
</def-item>
<def-item>
<term>SWE</term>
<def>
<p>Sensor Web Enablement</p>
</def>
</def-item>
<def-item>
<term>TTS</term>
<def>
<p>Text to Speech</p>
</def>
</def-item>
<def-item>
<term>UMTS</term>
<def>
<p>Universal Mobile Telecommunications System</p>
</def>
</def-item>
<def-item>
<term>URI</term>
<def>
<p>Uniform Resource Identifier</p>
</def>
</def-item>
<def-item>
<term>URL</term>
<def>
<p>Uniform Resource Locator</p>
</def>
</def-item>
<def-item>
<term>V2I</term>
<def>
<p>Vehicle to Infrastructure</p>
</def>
</def-item>
<def-item>
<term>V2V</term>
<def>
<p>Vehicle to Vehicle</p>
</def>
</def-item>
<def-item>
<term>W3C</term>
<def>
<p>World Wide Web Consortium</p>
</def>
</def-item>
</def-list>
</glossary>
</back>
<floats-group>
<fig id="f1-sensors-12-06307" position="float">
<label>Figure 1.</label>
<caption>
<p>In-vehicle HMI system for connected cars.</p>
</caption>
<graphic xlink:href="sensors-12-06307f1"></graphic>
</fig>
<fig id="f2-sensors-12-06307" position="float">
<label>Figure 2.</label>
<caption>
<p>In-vehicle HMI system to collect driver observations, following the W3C MMI Architecture.</p>
</caption>
<graphic xlink:href="sensors-12-06307f2"></graphic>
</fig>
<fig id="f3-sensors-12-06307" position="float">
<label>Figure 3.</label>
<caption>
<p>O&M-OWL model (adapted from [
<xref ref-type="bibr" rid="b35-sensors-12-06307">35</xref>
]) applied to driver-generated observations.</p>
</caption>
<graphic xlink:href="sensors-12-06307f3"></graphic>
</fig>
<fig id="f4-sensors-12-06307" position="float">
<label>Figure 4.</label>
<caption>
<p>Driver-generated observations linked to DBpedia resources.</p>
</caption>
<graphic xlink:href="sensors-12-06307f4"></graphic>
</fig>
<fig id="f5-sensors-12-06307" position="float">
<label>Figure 5.</label>
<caption>
<p>Experimental setup for publishing driver-generated observations in the Semantic Sensor Web.</p>
</caption>
<graphic xlink:href="sensors-12-06307f5"></graphic>
</fig>
<fig id="f6-sensors-12-06307" position="float">
<label>Figure 6.</label>
<caption>
<p>Connection between our Experimental Setup and the Semantic Sensor Web for publishing driver-generated observations.</p>
</caption>
<graphic xlink:href="sensors-12-06307f6"></graphic>
</fig>
<fig id="f7-sensors-12-06307" position="float">
<label>Figure 7.</label>
<caption>
<p>Driving simulator for performance analysis and usability evaluation following the Lane Change Test protocol.</p>
</caption>
<graphic xlink:href="sensors-12-06307f7"></graphic>
</fig>
<fig id="f8-sensors-12-06307" position="float">
<label>Figure 8.</label>
<caption>
<p>SCXML response times for different numbers of context sources of varying complexity (
<italic>i.e.</italic>
, number of states).</p>
</caption>
<graphic xlink:href="sensors-12-06307f8"></graphic>
</fig>
<fig id="f9-sensors-12-06307" position="float">
<label>Figure 9.</label>
<caption>
<p>Means* of responses to items
<sup>†</sup>
related to expected usefulness of different sources of information to the driver (left half of the figure) and to concerns about having information registered (and possibly shared) by the system (right half)
<sup>‡</sup>
.</p>
<p>Notes: * 95% confidence intervals are shown;
<sup>†</sup>
Items are grouped by color/texture, denoting, from left to right, driver- (tan/plain), vehicle- (grey/crossed) and environment-related (blue/dashed) items;
<sup>‡</sup>
The higher the concern, the more negative the corresponding value. An extra item, concerning information being registered without the driver's knowledge, is shown in red.</p>
</caption>
<graphic xlink:href="sensors-12-06307f9"></graphic>
</fig>
<table-wrap id="t1-sensors-12-06307" position="float">
<label>Table 1.</label>
<caption>
<p>O&M-OWL representation of a driver-generated observation of dense fog.</p>
</caption>
<table frame="hsides" rules="none">
<tbody>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:obs_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">rdf:type</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:Observation</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:obs_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:featureOfInterest</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:road_1</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:road_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">rdf:type</td>
<td align="left" valign="top" rowspan="1" colspan="1">environment:Road</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">environment: Road</td>
<td align="left" valign="top" rowspan="1" colspan="1">rdfs:subClassOf</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:Feature</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:obs_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:observedProperty</td>
<td align="left" valign="top" rowspan="1" colspan="1">environment:denseFog</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">environment:denseFog</td>
<td align="left" valign="top" rowspan="1" colspan="1">rdf:type</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:Property</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:obs_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:samplingTime</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:time_1</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:time_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">rdf:type</td>
<td align="left" valign="top" rowspan="1" colspan="1">owl-time:Instant</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:time_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">owl-time:date-time</td>
<td align="left" valign="top" rowspan="1" colspan="1">“20110610T08:55:00”</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:obs_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:procedure</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:human_1</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:human_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">rdf:type</td>
<td align="left" valign="top" rowspan="1" colspan="1">environment:Driver</td>
</tr>
<tr>
<td align="left" valign="top" rowspan="1" colspan="1">om:obs_1</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:observationLocation</td>
<td align="left" valign="top" rowspan="1" colspan="1">om:location_1 .</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn id="tfn1-sensors-12-06307">
<p>(Explanatory note:
<italic>om</italic>
is the namespace prefix for O&M, placed, with a colon, before the concepts defined in the O&M schema; concepts from the environment ontology carry the prefix
<italic>environment</italic>
, and
<italic>dbpedia</italic>
marks a link from a location observation to a DBpedia URI. A runnable sketch reproducing these triples follows the XML record below.)</p>
</fn>
</table-wrap-foot>
</table-wrap>
</floats-group>
</pmc>
<affiliations>
<list></list>
<tree>
<noCountry>
<name sortKey="Bernat, Jesus" sort="Bernat, Jesus" uniqKey="Bernat J" first="Jesús" last="Bernat">Jesús Bernat</name>
<name sortKey="Blanco, Jose Luis" sort="Blanco, Jose Luis" uniqKey="Blanco J" first="José Luis" last="Blanco">José Luis Blanco</name>
<name sortKey="Conejero, David" sort="Conejero, David" uniqKey="Conejero D" first="David" last="Conejero">David Conejero</name>
<name sortKey="Diaz Pardo, David" sort="Diaz Pardo, David" uniqKey="Diaz Pardo D" first="David" last="Díaz-Pardo">David Díaz-Pardo</name>
<name sortKey="G Mez, Luis Hernandez" sort="G Mez, Luis Hernandez" uniqKey="G Mez L" first="Luis Hernández" last="G Mez">Luis Hernández G Mez</name>
<name sortKey="Siguenza, Alvaro" sort="Siguenza, Alvaro" uniqKey="Siguenza A" first="Álvaro" last="Sigüenza">Álvaro Sigüenza</name>
<name sortKey="Vancea, Vasile" sort="Vancea, Vasile" uniqKey="Vancea V" first="Vasile" last="Vancea">Vasile Vancea</name>
</noCountry>
</tree>
</affiliations>
</record>
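
As a complement to Table 1 in the record above, here is a minimal sketch that rebuilds the same driver-generated observation graph with Python's rdflib and prints it as Turtle. The namespace URIs behind the om: and environment: prefixes are not given in the record, so placeholder URIs are assumed, and the property names simply mirror the table.

# Sketch only: placeholder namespace URIs; property names taken verbatim from Table 1.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

OM = Namespace("http://example.org/om#")            # assumed URI for the O&M schema
ENV = Namespace("http://example.org/environment#")  # assumed URI for the environment ontology
TIME = Namespace("http://www.w3.org/2006/time#")    # assumed binding for the owl-time: prefix

g = Graph()
for prefix, ns in (("om", OM), ("environment", ENV), ("owl-time", TIME)):
    g.bind(prefix, ns)

obs = OM.obs_1
g.add((obs, RDF.type, OM.Observation))
g.add((obs, OM.featureOfInterest, OM.road_1))
g.add((OM.road_1, RDF.type, ENV.Road))
g.add((ENV.Road, RDFS.subClassOf, OM.Feature))
g.add((obs, OM.observedProperty, ENV.denseFog))
g.add((ENV.denseFog, RDF.type, OM.Property))
g.add((obs, OM.samplingTime, OM.time_1))
g.add((OM.time_1, RDF.type, TIME.Instant))
g.add((OM.time_1, TIME["date-time"], Literal("2011-06-10T08:55:00")))
g.add((obs, OM.procedure, OM.human_1))
g.add((OM.human_1, RDF.type, ENV.Driver))
g.add((obs, OM.observationLocation, OM.location_1))

print(g.serialize(format="turtle"))

Serialized this way, the observation could be fed to a SemSOS endpoint [35] or exposed through a Linked Data frontend such as Pubby [41].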

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Ncbi/Merge
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002125 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd -nk 002125 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Ncbi
   |étape=   Merge
   |type=    RBID
   |clé=     PMC:3386742
   |texte=   Sharing Human-Generated Observations by Integrating HMI and the Semantic Sensor Web
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Ncbi/Merge/RBID.i   -Sk "pubmed:22778643" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Ncbi/Merge/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024