Exploration server on haptic devices

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Human-computer interaction in ubiquitous computing environments

Internal identifier: 001613 (Istex/Corpus); previous: 001612; next: 001614

Human-computer interaction in ubiquitous computing environments

Authors: J. H. Abawajy; J. H. Abawajy

Source:

RBID : ISTEX:B2CCAD37D1C07EBC9030354F59150F834862E3F1

Abstract

Purpose The purpose of this paper is to explore characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach The paper quantifies the performance of human movement based on Fitt's Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilize human body movements as input are surveyed here and the paper addresses issues such as culture, privacy, security and ethics raised by movement of a user's body-based interaction styles. Originality/value The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.

Url:
DOI: 10.1108/17427370910950311

Links to Exploration step

ISTEX:B2CCAD37D1C07EBC9030354F59150F834862E3F1

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Human-computer interaction in ubiquitous computing environments</title>
<author wicri:is="90%">
<name sortKey="Abawajy, J H" sort="Abawajy, J H" uniqKey="Abawajy J" first="J. H." last="Abawajy">J. H. Abawajy</name>
</author>
<author wicri:is="90%">
<name sortKey="Abawajy, J H" sort="Abawajy, J H" uniqKey="Abawajy J" first="J. H." last="Abawajy">J. H. Abawajy</name>
<affiliation>
<mods:affiliation>School of Engineering and Information Technology, Deakin University, Geelong, Australia</mods:affiliation>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:B2CCAD37D1C07EBC9030354F59150F834862E3F1</idno>
<date when="2009" year="2009">2009</date>
<idno type="doi">10.1108/17427370910950311</idno>
<idno type="url">https://api.istex.fr/document/B2CCAD37D1C07EBC9030354F59150F834862E3F1/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001613</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Humancomputer interaction in ubiquitous computing environments</title>
<author wicri:is="90%">
<name sortKey="Abawajy, J H" sort="Abawajy, J H" uniqKey="Abawajy J" first="J. H." last="Abawajy">J. H. Abawajy</name>
</author>
<author wicri:is="90%">
<name sortKey="Abawajy, J H" sort="Abawajy, J H" uniqKey="Abawajy J" first="J. H." last="Abawajy">J. H. Abawajy</name>
<affiliation>
<mods:affiliation>School of Engineering and Information Technology, Deakin University, Geelong, Australia</mods:affiliation>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">International Journal of Pervasive Computing and Communications</title>
<idno type="ISSN">1742-7371</idno>
<imprint>
<publisher>Emerald Group Publishing Limited</publisher>
<date type="published" when="2009-04-03">2009-04-03</date>
<biblScope unit="volume">5</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="page" from="61">61</biblScope>
<biblScope unit="page" to="77">77</biblScope>
</imprint>
<idno type="ISSN">1742-7371</idno>
</series>
<idno type="istex">B2CCAD37D1C07EBC9030354F59150F834862E3F1</idno>
<idno type="DOI">10.1108/17427370910950311</idno>
<idno type="filenameID">3610050105</idno>
<idno type="original-pdf">3610050105.pdf</idno>
<idno type="href">17427370910950311.pdf</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">1742-7371</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract">Purpose The purpose of this paper is to explore characteristics of humancomputer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Designmethodologyapproach The paper quantifies the performance of human movement based on Fitt's Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications In pervasive computing environments the challenge is to create intuitive and userfriendly interfaces. Application domains that may utilize human body movements as input are surveyed here and the paper addresses issues such as culture, privacy, security and ethics raised by movement of a user's bodybased interaction styles. Originalityvalue The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.</div>
</front>
</TEI>
<istex>
<corpusName>emerald</corpusName>
<author>
<json:item>
<name>J.H. Abawajy</name>
</json:item>
<json:item>
<name>J.H. Abawajy</name>
<affiliations>
<json:string>School of Engineering and Information Technology, Deakin University, Geelong, Australia</json:string>
</affiliations>
</json:item>
</author>
<subject>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Computer applications</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Man-machine interface</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Human anatomy</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>Human physiology</value>
</json:item>
</subject>
<language>
<json:string>eng</json:string>
</language>
<abstract>Purpose The purpose of this paper is to explore characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach The paper quantifies the performance of human movement based on Fitt's Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilize human body movements as input are surveyed here and the paper addresses issues such as culture, privacy, security and ethics raised by movement of a user's body-based interaction styles. Originality/value The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.</abstract>
<qualityIndicators>
<score>6.944</score>
<pdfVersion>1.3</pdfVersion>
<pdfPageSize>521.575 x 680.315 pts</pdfPageSize>
<refBibsNative>true</refBibsNative>
<keywordCount>4</keywordCount>
<abstractCharCount>1154</abstractCharCount>
<pdfWordCount>7964</pdfWordCount>
<pdfCharCount>49146</pdfCharCount>
<pdfPageCount>17</pdfPageCount>
<abstractWordCount>162</abstractWordCount>
</qualityIndicators>
<title>Human-computer interaction in ubiquitous computing environments</title>
<genre.original>
<json:string>research-article</json:string>
</genre.original>
<genre>
<json:string>research-article</json:string>
</genre>
<host>
<volume>5</volume>
<publisherId>
<json:string>ijpcc</json:string>
</publisherId>
<pages>
<last>77</last>
<first>61</first>
</pages>
<issn>
<json:string>1742-7371</json:string>
</issn>
<issue>1</issue>
<subject>
<json:item>
<value>Engineering</value>
</json:item>
<json:item>
<value>Electrical & electronic engineering</value>
</json:item>
<json:item>
<value>Computer & software engineering</value>
</json:item>
<json:item>
<value>Information & knowledge management</value>
</json:item>
<json:item>
<value>Information & communications technology</value>
</json:item>
</subject>
<genre>
<json:string>Journal</json:string>
</genre>
<language>
<json:string>unknown</json:string>
</language>
<title>International Journal of Pervasive Computing and Communications</title>
<doi>
<json:string>10.1108/ijpcc</json:string>
</doi>
</host>
<publicationDate>2009</publicationDate>
<copyrightDate>2009</copyrightDate>
<doi>
<json:string>10.1108/17427370910950311</json:string>
</doi>
<id>B2CCAD37D1C07EBC9030354F59150F834862E3F1</id>
<score>1</score>
<fulltext>
<json:item>
<original>true</original>
<mimetype>application/pdf</mimetype>
<extension>pdf</extension>
<uri>https://api.istex.fr/document/B2CCAD37D1C07EBC9030354F59150F834862E3F1/fulltext/pdf</uri>
</json:item>
<json:item>
<original>false</original>
<mimetype>application/zip</mimetype>
<extension>zip</extension>
<uri>https://api.istex.fr/document/B2CCAD37D1C07EBC9030354F59150F834862E3F1/fulltext/zip</uri>
</json:item>
<istex:fulltextTEI uri="https://api.istex.fr/document/B2CCAD37D1C07EBC9030354F59150F834862E3F1/fulltext/tei">
<teiHeader>
<fileDesc>
<titleStmt>
<title level="a" type="main" xml:lang="en">Humancomputer interaction in ubiquitous computing environments</title>
</titleStmt>
<publicationStmt>
<authority>ISTEX</authority>
<publisher>Emerald Group Publishing Limited</publisher>
<availability>
<p>EMERALD</p>
</availability>
<date>2009</date>
</publicationStmt>
<sourceDesc>
<biblStruct type="inbook">
<analytic>
<title level="a" type="main" xml:lang="en">Humancomputer interaction in ubiquitous computing environments</title>
<author>
<persName>
<forename type="first">J.H.</forename>
<surname>Abawajy</surname>
</persName>
</author>
<author>
<persName>
<forename type="first">J.H.</forename>
<surname>Abawajy</surname>
</persName>
<affiliation>School of Engineering and Information Technology, Deakin University, Geelong, Australia</affiliation>
</author>
</analytic>
<monogr>
<title level="j">International Journal of Pervasive Computing and Communications</title>
<idno type="pISSN">1742-7371</idno>
<idno type="DOI">10.1108/ijpcc</idno>
<imprint>
<publisher>Emerald Group Publishing Limited</publisher>
<date type="published" when="2009-04-03"></date>
<biblScope unit="volume">5</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="page" from="61">61</biblScope>
<biblScope unit="page" to="77">77</biblScope>
</imprint>
</monogr>
<idno type="istex">B2CCAD37D1C07EBC9030354F59150F834862E3F1</idno>
<idno type="DOI">10.1108/17427370910950311</idno>
<idno type="filenameID">3610050105</idno>
<idno type="original-pdf">3610050105.pdf</idno>
<idno type="href">17427370910950311.pdf</idno>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<creation>
<date>2009</date>
</creation>
<langUsage>
<language ident="en">en</language>
</langUsage>
<abstract>
<p>Purpose The purpose of this paper is to explore characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach The paper quantifies the performance of human movement based on Fitt's Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilize human body movements as input are surveyed here and the paper addresses issues such as culture, privacy, security and ethics raised by movement of a user's body-based interaction styles. Originality/value The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.</p>
</abstract>
<textClass>
<keywords scheme="keyword">
<list>
<head>Keywords</head>
<item>
<term>Computer applications</term>
</item>
<item>
<term>Man-machine interface</term>
</item>
<item>
<term>Human anatomy</term>
</item>
<item>
<term>Human physiology</term>
</item>
</list>
</keywords>
</textClass>
<textClass>
<keywords scheme="Emerald Subject Group">
<list>
<label>cat-ENGG</label>
<item>
<term>Engineering</term>
</item>
<label>cat-EEE</label>
<item>
<term>Electrical & electronic engineering</term>
</item>
<label>cat-CSE</label>
<item>
<term>Computer & software engineering</term>
</item>
</list>
</keywords>
</textClass>
<textClass>
<keywords scheme="Emerald Subject Group">
<list>
<label>cat-IKM</label>
<item>
<term>Information & knowledge management</term>
</item>
<label>cat-ICT</label>
<item>
<term>Information & communications technology</term>
</item>
</list>
</keywords>
</textClass>
</profileDesc>
<revisionDesc>
<change when="2009-04-03">Published</change>
</revisionDesc>
</teiHeader>
</istex:fulltextTEI>
<json:item>
<original>false</original>
<mimetype>text/plain</mimetype>
<extension>txt</extension>
<uri>https://api.istex.fr/document/B2CCAD37D1C07EBC9030354F59150F834862E3F1/fulltext/txt</uri>
</json:item>
</fulltext>
<metadata>
<istex:metadataXml wicri:clean="corpus emerald not found" wicri:toSee="no header">
<istex:xmlDeclaration>version="1.0" encoding="UTF-8"</istex:xmlDeclaration>
<istex:document><!-- Auto generated NISO JATS XML created by Atypon out of MCB DTD source files. Do Not Edit! -->
<article dtd-version="1.0" xml:lang="en" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">ijpcc</journal-id>
<journal-id journal-id-type="doi">10.1108/ijpcc</journal-id>
<journal-title-group>
<journal-title>International Journal of Pervasive Computing and Communications</journal-title>
</journal-title-group>
<issn pub-type="ppub">1742-7371</issn>
<publisher>
<publisher-name>Emerald Group Publishing Limited</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.1108/17427370910950311</article-id>
<article-id pub-id-type="original-pdf">3610050105.pdf</article-id>
<article-id pub-id-type="filename">3610050105</article-id>
<article-categories>
<subj-group subj-group-type="type-of-publication">
<compound-subject>
<compound-subject-part content-type="code">research-article</compound-subject-part>
<compound-subject-part content-type="label">Research paper</compound-subject-part>
</compound-subject>
</subj-group>
<subj-group subj-group-type="subject">
<compound-subject>
<compound-subject-part content-type="code">cat-ENGG</compound-subject-part>
<compound-subject-part content-type="label">Engineering</compound-subject-part>
</compound-subject>
<subj-group>
<compound-subject>
<compound-subject-part content-type="code">cat-EEE</compound-subject-part>
<compound-subject-part content-type="label">Electrical & electronic engineering</compound-subject-part>
</compound-subject>
<subj-group>
<compound-subject>
<compound-subject-part content-type="code">cat-CSE</compound-subject-part>
<compound-subject-part content-type="label">Computer & software engineering</compound-subject-part>
</compound-subject>
</subj-group>
</subj-group>
</subj-group>
<subj-group subj-group-type="subject">
<compound-subject>
<compound-subject-part content-type="code">cat-IKM</compound-subject-part>
<compound-subject-part content-type="label">Information & knowledge management</compound-subject-part>
</compound-subject>
<subj-group>
<compound-subject>
<compound-subject-part content-type="code">cat-ICT</compound-subject-part>
<compound-subject-part content-type="label">Information & communications technology</compound-subject-part>
</compound-subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Human‐computer interaction in ubiquitous computing environments</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="editor">
<string-name>
<given-names>J.H.</given-names>
<surname>Abawajy</surname>
</string-name>
</contrib>
</contrib-group>
<contrib-group>
<contrib contrib-type="author">
<string-name>
<given-names>J.H.</given-names>
<surname>Abawajy</surname>
</string-name>
<aff>School of Engineering and Information Technology, Deakin University, Geelong, Australia</aff>
</contrib>
</contrib-group>
<pub-date pub-type="ppub">
<day>03</day>
<month>04</month>
<year>2009</year>
</pub-date>
<volume>5</volume>
<issue>1</issue>
<issue-title>Advances in pervasive computing</issue-title>
<issue-title content-type="short">Advances in pervasive computing</issue-title>
<fpage>61</fpage>
<lpage>77</lpage>
<permissions>
<copyright-statement>© Emerald Group Publishing Limited</copyright-statement>
<copyright-year>2009</copyright-year>
<license license-type="publisher">
<license-p></license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="17427370910950311.pdf"></self-uri>
<abstract>
<sec>
<title content-type="abstract-heading">Purpose</title>
<x></x>
<p>The purpose of this paper is to explore characteristics of human‐computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings.</p>
</sec>
<sec>
<title content-type="abstract-heading">Design/methodology/approach</title>
<x></x>
<p>The paper quantifies the performance of human movement based on Fitt's Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium.</p>
</sec>
<sec>
<title content-type="abstract-heading">Findings</title>
<x></x>
<p>The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm.</p>
</sec>
<sec>
<title content-type="abstract-heading">Practical implications</title>
<x></x>
<p>In pervasive computing environments the challenge is to create intuitive and user‐friendly interfaces. Application domains that may utilize human body movements as input are surveyed here and the paper addresses issues such as culture, privacy, security and ethics raised by movement of a user's body‐based interaction styles.</p>
</sec>
<sec>
<title content-type="abstract-heading">Originality/value</title>
<x></x>
<p>The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.</p>
</sec>
</abstract>
<kwd-group>
<kwd>Computer applications</kwd>
<x>, </x>
<kwd>Man‐machine interface</kwd>
<x>, </x>
<kwd>Human anatomy</kwd>
<x>, </x>
<kwd>Human physiology</kwd>
</kwd-group>
<custom-meta-group>
<custom-meta>
<meta-name>peer-reviewed</meta-name>
<meta-value>yes</meta-value>
</custom-meta>
<custom-meta>
<meta-name>academic-content</meta-name>
<meta-value>yes</meta-value>
</custom-meta>
<custom-meta>
<meta-name>rightslink</meta-name>
<meta-value>included</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
<ack>
<p>The author would like to thank Maliha Omar for her help in completing this paper.</p>
</ack>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>The vision of computing in the 21st century envisages that vast numbers of networked computers will permeate our environment, heralding new ways of interaction and new applications. The "logically malleable" nature of the computer lends itself to almost limitless functionality and, as the computer is invisibly "woven into the fabric of space" (
<xref ref-type="bibr" rid="b28">Pingali
<italic>et al.</italic>
, 2003</xref>
), it is being woven into the fabric of human society. This new computing paradigm, variously known as pervasive or ubiquitous computing, aspires to create technology that assists us in our everyday lives, functioning invisibly and unobtrusively in the background and freeing people to a large extent from tedious routine tasks (
<xref ref-type="bibr" rid="b42">Weiser, 1999</xref>
).</p>
<p>One of the most significant challenges in pervasive computing environments is to create intuitive and user‐friendly interfaces. In the pervasive world, computers will be encountered in unfamiliar settings and often may not even be visible or immediately recognizable as computers. This invisibility can frustrate users if they cannot easily control or manage their environment. Also, the sheer complexity of the networked systems that may eventuate is likely to border on the mind‐boggling. In addition, as the computer itself becomes invisible, the nature and quality of interactions and interfaces that were once seen against the backdrop of the "machine" will be highlighted. Last but not least, as the number of people interacting with computers increases many times over, the goal of making the benefits of this technology universally accessible becomes more pressing. Thus, pervasive computing will require a revolution in human‐computer interaction for interacting with small, distributed, and often embedded devices which must present a unified interface to users.</p>
<p>The main goal of pervasive computing is to make computing and technology simple to use, everywhere around us, accessible to people with minimal technical expertise, reliable and more intuitive. To achieve some of these objectives, pervasive computing will require new methods for the design and development of user interfaces that do not make assumptions about the available input and output devices. In pervasive computing, input will be moved beyond the explicit nature of keyboard‐based input and selection (from pointing devices) to a greater variety of input technologies. In particular, a shift from explicit means of human input to more implicit forms of input that support more natural human forms of communication (such as handwriting, speech, and gestures) will become prevalent. This means users will be able to interact naturally with computers in the same way face‐to‐face human‐human interaction takes place.</p>
<p>The trend toward pervasive computing is driving research into ever‐more‐natural forms of human‐computer interaction (HCI). Conventional HCI works through a user conforming to static devices (e.g. keyboard, mouse, and visual display unit), using them in a pre‐defined way. In addition, these primary interfaces force users to master new techniques as well as restricting the range of the interaction. Moreover, present human‐computer interaction does not take into account the non‐verbal behaviour of users, leading some authors to characterize computers as "autistic" in nature (
<xref ref-type="bibr" rid="b3">Alexander and Sarrafzadeh, 2004</xref>
). Although an uncomfortable characterization given the tragic nature of this condition, the metaphor of great potential denied by the inability to communicate effectively is apt. The vision of computers that can respond to emotion has spawned a new area of research into perceptual user interfaces (PUIs) (
<xref ref-type="bibr" rid="b38">Turk and Robertson, 2000</xref>
).</p>
<p>Existing technologies such as gloves and suits which pick up user movements are intrusive and therefore not fully pervasive. In this paper, we examine the viability of human body movement as an input medium likely to provide flexible, naturalistic interfaces and support the pervasive computing paradigm. As computers become embedded into everyday objects, the movement of the human body is becoming an increasingly important topic in the field of human‐computer interaction. Although traditional input devices such as the keyboard, mouse and touch screen already involve bodily movements, new interaction technologies utilising human movements aim to provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm.</p>
<p>In addition to complementing the growing body of HCI research, the paper makes several contributions. We will describe the promise of human body movement, its limitations, and the obstacles that must still be overcome. We will quantify human movement performance through Fitt's Law (
<xref ref-type="bibr" rid="b11">Fitts, 1954</xref>
). Then, we present a taxonomy of body movement with respect to HCI. The implications of the anatomical structure of the body and the range of movement possible for HCI are presented. The range of applications that may utilize human body movements as input is surveyed. As a person's body movements can reveal a vast amount and range of information about the individual, the implications of privacy, security and ethics for HCI based on human body movements are discussed.</p>
<p>The rest of the paper is organized as follows. In section II, the fundamentals of human body movement are discussed. In section III, the performance of human movement based on Fitt's Law and the implications of factors such as aging and health for the performance model are discussed. In section IV, the taxonomy of human body movements is discussed. The implications for privacy, security and ethics are discussed in section VII. A survey of representative movement capture technologies is presented in section V. Some of the applications that may utilise human body movements as input are discussed in section VI. The conclusion is presented in section VIII.</p>
</sec>
<sec>
<title>2. Principles of human movements</title>
<p>Movement is fundamental to human life. Expansion of the lungs carries air in and out of the body, contractions of cardiac muscle circulate blood throughout the body and the digestive system moves ingested food on its journey through the body. These and other visceral movements occur largely under involuntary control and are necessary for the maintenance of life. Other movements involved in the course of life may be voluntary, reflexive or unconscious.</p>
<p>The human skeletal system is of critical importance in many human movements, providing leverage and support. The long bones (those that are longer than they are wide) act as levers whilst the joints act as axes of rotation. The shape of the bones determines the freedom of movement and the number of planes of movement of the joint that are possible. For example, the ankle joint allows only one plane of movement and foot extension is limited by the contact of the talus and calcaneus bones, such that some dancers feel the need to have bone surgically removed to allow greater extension (
<xref ref-type="bibr" rid="b13">Hamill and Knutzen, 2003</xref>
). It can thus be seen that, in most cases, there is a limited envelope of body movements that is possible.</p>
<p>Providing the forces for motion are the muscles, of which there are three types, classified by muscle fibre type: skeletal, smooth (visceral) and cardiac muscle. The first is often referred to as voluntary muscle and the latter two as involuntary muscle. This distinction is not clear‐cut, as evidenced by reflexive actions such as withdrawing from a painful stimulus and the ability of some trained individuals to alter their heart rate voluntarily. Motor neurones supply groups of muscle fibres, with their innervation forming motor units. The finer the degree of movement control, the fewer fibres there are in each motor unit. These motor neurones are connected to the primary motor cortex of the brain via the spinal cord or, in the case of facial muscles, the pons. This area conveys the ability to move muscles independently and perform fine movements.</p>
<p>Up to this point the story of movement is relatively simple and follows from principles of physiology, anatomy and biomechanics. However, simple physical movement requires coordination and timing of muscle activity to be effective. It is postulated that the brain forms a plan or program of commands, which is then carried out resulting in the desired action (
<xref ref-type="bibr" rid="b5">Banich, 1997</xref>
;
<xref ref-type="bibr" rid="b16">Jacko and Sears, 2003</xref>
). Observation of and experiments using speech tend to lend credence to the idea of pre‐planning of motor activity. For the proper articulation of some vowels, lip rounding is required whilst consonants may be articulated with or without lip rounding. When a consonant precedes a vowel that requires lip rounding (for example, "u"), that consonant is formed using lip rounding. Conversely, if the vowel does not require lip rounding (for example, "i"), the preceding consonant is formed without lip rounding. This is referred to as coarticulation (
<xref ref-type="bibr" rid="b5">Banich, 1997</xref>
). Further support for the theory that sequences of motor behaviour are pre‐planned comes from experiments involving subjects speaking sentences of varying lengths. Subjects were given sentences to speak when given a signal. The response time between the signal and beginning speech related directly to the length of the sentence. If no preplanning occurred the response time would be expected to be the same irrespective of the length of the sentence.</p>
</sec>
<sec>
<title>3. Quantify human movement performance</title>
<p>To gain insight into human body movement performance, in this section we quantify the performance of human movement and discuss the implications of factors such as aging and health for the performance model. In this paper, we view human psychomotor behaviour as an information‐processing problem and use Fitt's Law (
<xref ref-type="bibr" rid="b11">Fitts, 1954</xref>
) to model human movement performance.</p>
<p>Described by Paul Fitts in 1954 and derived from information theory, Fitt's Law is an attempt to mathematically quantify human movement performance. Movement time is predicted for movements between two targets based on the distance between the targets and their size. Fitt's Law may be defined as:
<xref ref-type="fig" rid="F_3610050105002">(Equation 1)</xref>
where
<italic>T</italic>
is time to acquire a target,
<italic>W</italic>
is target width,
<italic>A</italic>
is distance to the target and
<italic>a</italic>
,
<italic>b</italic>
and
<italic>c</italic>
are constants.</p>
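Equation 1 is referenced above only as a figure and is not reproduced in this record. A hedged reconstruction, consistent with the three constants named in the text but an assumption rather than a transcription of the missing figure, is the generalized logarithmic form:

\[ T = a + b \log_2\!\left(\frac{A}{W} + c\right) \]

where T is the movement time, A the distance to the target, W the target width, and a, b and c empirically fitted constants.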
<p>In HCI, this relationship applies to pointing/target selection and supports the intuitive notion that accuracy of movement decreases with velocity (
<xref ref-type="bibr" rid="b1">Accot and Zhai, 1997</xref>
). It has, however, been found to apply well for a wide range of limb movements and muscle groups for a variety of subjects under diverse circumstances, including under a microscope and underwater (
<xref ref-type="bibr" rid="b22">MacKenzie and Buxton, 1992</xref>
). Movement time for actions such as finger manipulations, head nodding and foot tapping has been accurately predicted across a variety of health states and age groups.</p>
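As a concrete illustration of how such a model is used for pointing and target selection, the short Python sketch below computes an index of difficulty and a predicted movement time for a hypothetical on‐screen target. The Shannon formulation and the constants a and b are illustrative assumptions; in practice the constants are fitted by regression on measured movement times for a given device and user population.

import math

def index_of_difficulty(distance: float, width: float) -> float:
    # Shannon formulation of the index of difficulty, in bits (an assumed variant).
    return math.log2(distance / width + 1.0)

def predicted_movement_time(distance: float, width: float,
                            a: float = 0.10, b: float = 0.15) -> float:
    # Predicted time (seconds) to acquire a target of the given width at the
    # given distance; a and b are placeholder constants, not fitted values.
    return a + b * index_of_difficulty(distance, width)

# Example: a 20-pixel-wide button 400 pixels away.
print(predicted_movement_time(400, 20))  # about 0.76 s with the assumed constants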
<p>Although providing a reliable model, Fitt's Law applies to only one type of movement and fails to describe other movements accurately. Efforts have been made to extend the model to describe angular motion (
<xref ref-type="bibr" rid="b19">Kondraske, 1994</xref>
), trajectory‐based movements (
<xref ref-type="bibr" rid="b1">Accot and Zhai, 1997</xref>
) and two‐dimensional upper limb performance (
<xref ref-type="bibr" rid="b45">Yang
<italic>et al.</italic>
, 2001</xref>
). These authors report success in finding invariances in these movement types and the latter two groups point to potential applications in rehabilitation and biomechanics.</p>
<p>Much research has been carried out to identify invariances in human movement that may provide useful insights for designing human‐computer interaction. Complicating this, however, is the wide variation in physical abilities between individuals and for individuals across time. Whilst childhood development, in the absence of disability or illness, tends to follow a predictable path with established milestones, aging is a more complex issue. Though the aged population is the most stereotyped group in society, variation in ability levels within this group is the widest among the age groups (
<xref ref-type="bibr" rid="b14">Hawthorn, 1998</xref>
).</p>
<p>Performance declines due to aging begin in an individual's thirties and can be hastened by illness and lifestyle factors.
<xref ref-type="bibr" rid="b14">Hawthorn (1998)</xref>
reports that 70 per cent of Americans over 65 suffer mobility problems due to conditions ranging from mild arthritis to stroke. General aging changes result in reduced muscle strength and endurance, slower motor response time, which is accentuated with increased task complexity, and reduced ability to control and modify forces applied (
<xref ref-type="bibr" rid="b14">Hawthorn, 1998</xref>
).
<xref ref-type="bibr" rid="b14">Hawthorn (1998)</xref>
further reports the elderly as having increased difficulty with cursor positioning when acquiring small targets, such that it has been suggested that an age correction be added to Fitt's Law.</p>
<p>Tracking of body movements (head, arms, torso, and legs) is necessary to interpret pose and motion in many applications.
<xref ref-type="bibr" rid="b44">Wu
<italic>et al.</italic>
(2003)</xref>
identified three important issues in articulated motion analysis: representation (joint angles or motion of all the sub‐parts), computational paradigms (deterministic or probabilistic), and computation reduction. They propose a dynamic Markov network that uses mean field Monte Carlo algorithms so that a set of low dimensional particle filters interact with each other to solve a high dimensional problem collaboratively.</p>
</sec>
<sec>
<title>4. Taxonomy of body movements</title>
<p>Humans use a very wide variety of gestures ranging from simple actions of using the hand to point at objects to the more complex actions that express feelings and allow communication with others. As shown in
<xref ref-type="fig" rid="F_3610050105001">Figure 1</xref>
, we classify movement into two main categories: micro‐movements and macro‐movements. Both micro‐movements and macro‐movements can be utilised as both implicit and explicit input for human‐computer interaction.</p>
<p>Non‐verbal communication includes facial expressions, tones of voice, gestures, eye contact, spatial arrangements, patterns of touch, expressive movement, cultural differences, and other “nonverbal” acts. Research suggests that nonverbal communication is more important in understanding human behavior than words alone – the nonverbal “channels” seem to be more powerful than what people say.</p>
<sec>
<title>4.1 Micro‐movements</title>
<sec>
<title>4.1.1 Eye movements</title>
<p>As the eyes are a rich source of information for gathering context in our everyday lives, the potential of
<italic>eye movement</italic>
as a form of input has been investigated by HCI researchers. Six extrinsic muscles allow for vertical, horizontal and rotational movements of the eyeball. Eye movement data is regarded as particularly useful due to the high sampling rate possible (1‐2 ms) and the non‐intrusive nature of data collection (
<xref ref-type="bibr" rid="b32">Salvucci and Anderson, 2001</xref>
).</p>
<p>The nature of the human visual system is such that high resolution is restricted to a small area, requiring the gaze to shift to each area of interest, indicating changes in visual attention and reflecting the cognitive state of the individual (
<xref ref-type="bibr" rid="b32">Salvucci and Anderson, 2001</xref>
). These eye movements, referred to as saccadic movements, have been studied in an effort to derive finer indicators of cognitive activity such as reading, searching and exploring. The human eye has the potential to be used as a hands‐free method of input for many tasks. For example, a system in which a user gazes at a given link and then blinks in order to click through it is a common type of gaze‐based interface that is controlled completely by the eyes. Another practical example of a gaze‐based input system is automatic scrolling. Contemporary scrolling techniques rely on the explicit initiation of scrolling by the user. Since the act of scrolling is tightly coupled with the user's ability to absorb information via the visual channel, the use of eye gaze information is a natural choice for enhancing scrolling techniques (
<xref ref-type="bibr" rid="b20">Kumar
<italic>et al.</italic>
, 2007</xref>
). A system whereby eye position coordinates were obtained using corneal reflections, which were then translated into mouse‐pointer coordinates, is described in (
<xref ref-type="bibr" rid="b2">Adjouadi,
<italic>et al.</italic>
, 2004</xref>
). Using similar methods,
<xref ref-type="bibr" rid="b33">Sibert and Jacob (2000)</xref>
demonstrated a significant speed advantage of eye gaze selection over mouse selection in healthy subjects and regard it as a natural, hands free method of input.</p>
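The mapping from corneal‐reflection measurements to on‐screen pointer coordinates described above is normally established by a brief calibration step. The sketch below shows one assumed way to do this, with hypothetical data and helper names: the user fixates a few known screen points and an affine mapping from pupil measurements to screen coordinates is fitted by least squares. Production eye trackers use richer models.

import numpy as np

def fit_gaze_mapping(pupil_xy: np.ndarray, screen_xy: np.ndarray) -> np.ndarray:
    # Fit an affine map screen ~ [px, py, 1] @ M from calibration samples.
    # pupil_xy: (N, 2) pupil or pupil-glint measurements; screen_xy: (N, 2) known targets.
    design = np.hstack([pupil_xy, np.ones((pupil_xy.shape[0], 1))])
    M, *_ = np.linalg.lstsq(design, screen_xy, rcond=None)  # (3, 2) coefficients
    return M

def gaze_to_screen(pupil_point, M):
    # Map a new pupil measurement to estimated screen coordinates.
    px, py = pupil_point
    return np.array([px, py, 1.0]) @ M

# Illustrative calibration: nine fixation targets with placeholder pupil data.
rng = np.random.default_rng(0)
pupil = rng.random((9, 2))
targets = np.array([[x, y] for x in (100, 960, 1820) for y in (100, 540, 980)], dtype=float)
M = fit_gaze_mapping(pupil, targets)
print(gaze_to_screen((0.4, 0.6), M))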
<p>Eye movements accurately reflect visual attention and cognitive thought processes (Tobii Technology, 2006). Eye movement tracking tools such as the Tobii 1750 eye tracker (Tobii Technology, 2006) offer the ability to measure and visualize spontaneous and emotional responses to communication instead of relying solely on verbal and conscious responses. If the cognitive state of the user can be accurately gauged, this may be applied to tailoring educational programs to the individual, just as a human tutor would vary the delivery of instruction according to a student's progress or lack thereof (
<xref ref-type="bibr" rid="b3">Alexander and Sarrafzadeh, 2004</xref>
). Help programs and other features designed to assist the user could similarly be improved by utilising knowledge of the user's cognitive state, more accurately delivering advice and gaining feedback as to its effectiveness.</p>
</sec>
<sec>
<title>4.1.2 Human facial movements</title>
<p>The muscles of
<italic>facial expression</italic>
, numbering over twenty, allow a wide variety of movements and convey a wide range of emotions (
<xref ref-type="bibr" rid="b12">Gunes and Piccardi, 2005</xref>
;
<xref ref-type="bibr" rid="b18">Kapur
<italic>et al.</italic>
, 2005</xref>
). Classic psychological theory has it that humans demonstrate six basic emotions: happiness, anger, sadness, surprise, disgust, and fear, these being products of evolution and being universally displayed and recognized (
<xref ref-type="bibr" rid="b26">Pantic and Rothkrantz, 2003</xref>
). More recent work argues that emotions cannot be so easily categorized and that the expression of emotions is culturally dependent (
<xref ref-type="bibr" rid="b26">Pantic and Rothkrantz, 2003</xref>
). If so, designing affective interfaces for deployment in different cultures will be more complex than just using a different language.</p>
<p>Whilst there is disagreement about the categorization of
<italic>emotions</italic>
, varied research shows that automated systems can recognize a range of emotions with 64 per cent to 98 per cent accuracy as compared to human experiments where recognition rates are 70 per cent to 98 per cent (
<xref ref-type="bibr" rid="b26">Pantic and Rothkrantz, 2003</xref>
). Such variations in recognition rates would militate against their use, at least as unimodal input, in critical applications, but they may be practical in less critical applications. Video games or movies may be produced that alter their narrative in response to the viewer's emotions as inferred from their facial expression. For example, an interactive video may moderate its content if a young viewer was expressing sheer terror. Further to this, a logging facility may be provided so that the parents may be alerted to the reaction, allowing them to examine the choice of viewing material more closely or become aware of atypical reactions.</p>
<p>In addition to perceiving affect, visual tracking of human facial movements has been used to control mouse cursor movements; for example, moving the head with an open mouth causes an object to be dragged (
<xref ref-type="bibr" rid="b26">Pantic and Rothkrantz, 2003</xref>
). The authors suggest that this would aid those with hand and speech disabilities. Mouth movements have similarly been tracked in a system described by
<xref ref-type="bibr" rid="b8">de Silva
<italic>et al.</italic>
(2004)</xref>
. The mouth‐controlled interface described has been used to control audio programs, graphics programs and as an adjunctive text‐entry method for small keyboards.</p>
</sec>
</sec>
<sec>
<title>4.2 Macro‐movements</title>
<p>Body movements, particularly arm and hand gestures, during a conversation convey a wealth of contextual information to the listener but to date most research has centred upon recognising and categorising facial expression (
<xref ref-type="bibr" rid="b12">Gunes and Piccardi, 2005</xref>
;
<xref ref-type="bibr" rid="b18">Kapur
<italic>et al.</italic>
, 2005</xref>
). Both authors express the opinion that detection of affect will rely upon assessment of multimodal input, part of which will be facial and body movement.</p>
<p>Though rarely consciously thought about, the act of bipedal locomotion is a complex learned skill that is sometimes described as "controlled falling over". As such, in the absence of pathology, a person's gait shows a high degree of invariance and features that enable humans to recognize people from even displays (
<xref ref-type="bibr" rid="b29">Qvarfordt and Zhai, 2005</xref>
). This makes gait a useful biometric for identification that may be used for security purposes or as part of implicit input into a context‐aware system.</p>
<p>Body movement may be used to manipulate what are termed tangible interfaces; these being interfaces that “couple digital information with real world physical objects” (
<xref ref-type="bibr" rid="b10">Fishkin, 2004</xref>
). Fishkin mentions a number of products that are illustrative of tangible interfaces: for example, the "Sketchpad", a small keychain computer that clears its display if shaken, and a children's game in which figures are moved whilst telling a story, which is recorded and may be played back later. Interaction designs such as these simplify the mechanics of the use of the product. Rather than requiring the user to find the correct button to push or to understand the concept of recording, these interfaces harness intuitive processes for their manipulation. Beyond simplicity, such designs allow the operation of the product to be moved to the periphery of attention in line with the concept of "calm technology" as envisioned by
<xref ref-type="bibr" rid="b43">Weiser and Brown (2005)</xref>
.</p>
<p>Designs using gestures as commands would similarly promise simplicity and intuitiveness. A possible scenario that illustrates this would be referring to a number of open documents on a computer whilst typing a document. Presently one must acquire the mouse, locate the target icon, move the cursor to the icon and click. If the correct document is successfully opened, one then has to use scroll bars or a mouse wheel to move through the pages. An alternative could use gestures similar to the movements used when leafing through physical documents. For example, by moving two or three fingers towards or away from the palm the user could move to the next document, whilst moving one finger could move from page to page. The user would face less interruption and save considerable cognitive effort when navigating between and within documents.</p>
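As a sketch of how recognized gestures such as these could be wired to navigation commands (purely illustrative; the gesture labels and handler functions below are assumptions, not taken from the paper):

# Hypothetical dispatch from recognized hand gestures to document-navigation actions.
from typing import Callable, Dict

def next_document() -> None:
    print("switching to the next open document")

def previous_document() -> None:
    print("switching to the previous open document")

def next_page() -> None:
    print("turning to the next page")

# The gesture labels are assumed outputs of an upstream gesture recognizer.
GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
    "fingers_toward_palm": next_document,
    "fingers_away_from_palm": previous_document,
    "single_finger_flick": next_page,
}

def handle_gesture(label: str) -> None:
    action = GESTURE_ACTIONS.get(label)
    if action is not None:
        action()

handle_gesture("fingers_toward_palm")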
</sec>
</sec>
<sec>
<title>5. Movement capture technologies</title>
<p>As mobile and embedded computing devices become more pervasive, it is becoming obvious that the nature of interactions between users and computers must evolve. Design of usable interfaces aims at providing simplicity of interaction, reducing the cognitive and physical burden required to manipulate the computer. An awareness of the developmental considerations of younger users and the decline in the abilities of the aged may yield valuable insights into designing efficient interfaces for all age groups (
<xref ref-type="bibr" rid="b14">Hawthorn, 1998</xref>
;
<xref ref-type="bibr" rid="b34">Strommen, 1993</xref>
). These authors report that both groups perform better when complex movements are devolved into discrete movements, the former primarily due to lack of cognitive development and the latter primarily due to decline in physical abilities. It is, however, unlikely that a "one size fits all" approach to the design of interfaces will be realistic for all but the simplest interactions, and customisable or adaptable interfaces will be required.</p>
<p>Movement may be used as explicit input by users, either as unimodal input or as part of a multimodal input design. There are many existing technologies that allow human movement data to be captured by a wide variety of means, either directly or indirectly. Indirect systems have been designed utilising video (
<xref ref-type="bibr" rid="b12">Gunes and Piccardi, 2005</xref>
), infrared (
<xref ref-type="bibr" rid="b32">Salvucci and Anderson, 2001</xref>
), distance sensors (
<xref ref-type="bibr" rid="b15">Ishikawa
<italic>et al</italic>
., 2005</xref>
), laser (
<xref ref-type="bibr" rid="b31">Reilly, 1998</xref>
) and radar (
<xref ref-type="bibr" rid="b39">van Dorp and Groen, 2003</xref>
).</p>
<p>Direct movement capture technologies involve wearable computing platforms and sensor augmentation of artifacts or devices. Accelerometers sense movements in two or three dimensions and have been used in wearable implementations (
<xref ref-type="bibr" rid="b9">DeVaul
<italic>et al.</italic>
, 2003</xref>
;
<xref ref-type="bibr" rid="b21">Ling, 2003</xref>
;
<xref ref-type="bibr" rid="b35">Sung
<italic>et al.</italic>
, 2005</xref>
), incorporated into a Samsung mobile phone and a keychain computer (
<xref ref-type="bibr" rid="b10">Fishkin, 2004</xref>
). Novel approaches include a wrist‐mounted video camera to capture finger movements and arm‐mounted sensing of electrical activity associated with hand movement (EMG) (
<xref ref-type="bibr" rid="b40">Vardy
<italic>et al.</italic>
, 1999</xref>
).</p>
<p>In the world of ubiquitous computing envisaged by Mark Weiser (
<xref ref-type="bibr" rid="b43">Weiser and Brown, 2005</xref>
) the individual is likely to interact with most, if not all, of the above sensors and computing technologies. This "Brave New World" vision may seem a distant one but, as was noted, most of the technologies already exist, albeit some are in their infancy. The main hurdles in implementing this vision are the meaningful interpretation of the data collected and effective communication between the many applications and devices that will be involved.</p>
</sec>
<sec>
<title>6. Applications</title>
<p>The range of applications that may utilise human body movements as input is potentially huge in areas such as the arts, sports and leisure. In this section, we describe human movement as a means of computer interaction for several applications.</p>
<sec>
<title>6.1 Digital photo album</title>
<p>
<xref ref-type="bibr" rid="b17">Jin
<italic>et al.</italic>
(2004)</xref>
describe a digital photo album using gestural input via a touchscreen, which is reported by users to be a more natural and emotionally satisfying experience. The photo album uses a limited number of commands with design emphasis being on usability and lending a book “feel” to digital media representation, rather than upon efficiency. This is appropriate given that the motivation for recording and viewing the images is primarily an emotional one. When considering replacement technologies for the traditional keyboard, efficiency becomes more important and the range of commands increases substantially.</p>
<p>Whilst sign language provides an effective mode of communication for those with hearing disabilities, replacement of the keyboard with gestural input does not seem feasible given that it would require learning well over 60 signs. More likely, given the advances in speech recognition technology, would be that gestures might augment speech input replacing some commands normally input using keyboard shortcuts or the mouse. This is particularly so given the finding that speaking commands can have adverse short‐term memory effects (
<xref ref-type="bibr" rid="b16">Jacko and Sears, 2003</xref>
).</p>
</sec>
<sec>
<title>6.2 Performance artists</title>
<p>Performance artists, such as dancers, will be able to explore and extend their expressiveness by interfacing with applications that translate representations of their movements into other media, such as audio or visual output. Those without formal training in the arts will be afforded opportunities for artistic expression through technology like Rokeby's Very Nervous System (
<xref ref-type="bibr" rid="b25">Ng, 2004</xref>
). This system translates movement into sound and has been installed in a number of galleries and public places.</p>
<p>Learning a musical instrument may be made easier through visual feedback with regard to finger positioning in the case of a guitar or violin. Haptic feedback has been investigated using gloves made of electro‐active polymers (
<xref ref-type="bibr" rid="b7">De Rossi
<italic>et al.</italic>
, 2001</xref>
) that could be used to gently guide a player's fingers into the ideal position. Further to this, there may not even be the need to have an instrument, freeing a musician from the constraints imposed by the physical structure an instrument is required to have to produce a particular sound.</p>
</sec>
<sec>
<title>6.3 Sports training</title>
<p>Sports training and performance may similarly be enhanced, allowing athletes and their trainers to monitor movement in real time or review it later. The ability of a coach during a match to assess the fatigue of a player or detect changes that indicate incipient injury would be of particular value in professional sport, where an injury can cost an athlete a high‐paid career or a team a victory.</p>
</sec>
<sec>
<title>6.4 Entertainment and personal communication</title>
<p>Movement interaction is being utilised in games and will likely be incorporated into other forms of entertainment and personal communication. The Sony EyeToy is a very popular toy which places video images of the player into the game, allowing them to interact more directly with the characters, and there are online games that allow a user to control a representation of themselves in a virtual world. Interactive television is at present in its infancy; programs have been aired where viewers are asked to vote upon alternative endings, and reality shows where the public is asked to vote to remove participants have proved phenomenally successful.</p>
<p>Whilst the motivation for reality shows is in large part due to the economics of television production, their success would seem to lie in a desire for less passive entertainment. One could imagine people using virtual reality immersion technology to participate in all genres of entertainment. This would create a paradigm shift in the creation of these works, with the emphasis being on the creation of characters and environment rather than a predetermined story line. The same technology will allow people to enter a chat room or conference room, talk and, through haptic feedback, even "touch" people who are physically on the other side of the world. Where environments have the requisite computing infrastructure, perhaps a museum or art gallery, people will be able to take realistic virtual tours. One can imagine an art teacher at a rural school in Australia taking her class for a virtual excursion to the Louvre or a prospective homebuyer touring a number of properties in less time than it takes to drive to one.</p>
</sec>
<sec>
<title>6.5 Gaze‐based user authentication</title>
<p>Eye gaze tracking as a form of input is already used in research, such as figuring out what parts of a photograph people look at first, and to assist the disabled who are unable to make normal use of a keyboard and pointing device. The recent increase in accuracy and decrease in cost of eye gaze tracking systems (
<xref ref-type="bibr" rid="b4">Amir
<italic>et al.</italic>
, 2005</xref>
) has fostered research in the area. For example, the common approaches to entering passwords by way of keyboard, mouse, touch screen or any other traditional input device are frequently vulnerable to attacks such as shoulder surfing and password snooping. Shoulder‐surfing is an attack on password authentication that has traditionally been hard to defeat. It can be carried out remotely using binoculars and cameras, or by exploiting keyboard acoustics (
<xref ref-type="bibr" rid="b46">Zhuang
<italic>et al.</italic>
, 2005</xref>
;
<xref ref-type="bibr" rid="b6">Berger
<italic>et al.</italic>
, 2006</xref>
). To address these problems, recent work has explored how gaze information can be effectively used as input to an authentication system (
<xref ref-type="bibr" rid="b36">Thorpe
<italic>et al.</italic>
, 2005</xref>
;
<xref ref-type="bibr" rid="b23">Maeder
<italic>et al.</italic>
, 2004</xref>
;
<xref ref-type="bibr" rid="b20">Kumar
<italic>et al.</italic>
, 2007</xref>
). All these systems show that gaze‐based password input provides an improvement over current techniques.</p>
<p>An example of a gaze‐based user authentication scheme is discussed in
<xref ref-type="bibr" rid="b23">Maeder
<italic>et al.</italic>
(2004)</xref>
. In order to log in, a user is presented with an image and must dwell upon previously specified points of interest on the image in a predetermined order.
<xref ref-type="bibr" rid="b36">Thorpe
<italic>et al.</italic>
(2005)</xref>
describe how an eye‐gaze based method could permit unobservable passwords of the same strength as that provided by textual or graphical password schemes by allowing the user to select parts of the password with their eyes (e.g. by eye fixation for a specified period denoting selection), and not echoing the input on the screen. EyePassword (
<xref ref-type="bibr" rid="b20">Kumar
<italic>et al.</italic>
, 2007</xref>
) uses gaze‐based typing as an alternative to normal keyboard and mouse input. Computer vision techniques are used to track the orientation of the user's pupil to calculate the position of the user's gaze on the screen. A user enters authentication information by selecting from an on‐screen keyboard using only the orientation of their pupils (i.e. the position of their gaze on screen), making eavesdropping by a malicious observer largely impractical.</p>
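A minimal sketch of the dwell‐based selection idea behind such gaze‐typing interfaces is given below. It assumes an upstream tracker has already mapped each gaze sample to the on‐screen key it falls on; the dwell threshold is an illustrative value, not the one used by EyePassword.

def dwell_select(gaze_samples, dwell_threshold=0.5):
    # Emit a key each time the gaze stays on the same on-screen key for at
    # least dwell_threshold seconds; selections are not echoed on screen.
    # gaze_samples: iterable of (timestamp_seconds, key_label) pairs.
    typed = []
    current_key, dwell_start, emitted = None, None, False
    for t, key in gaze_samples:
        if key != current_key:
            current_key, dwell_start, emitted = key, t, False
        elif not emitted and key is not None and t - dwell_start >= dwell_threshold:
            typed.append(key)
            emitted = True
    return typed

samples = [(0.0, "p"), (0.2, "p"), (0.6, "p"), (0.7, "a"), (1.3, "a"), (1.4, "s")]
print(dwell_select(samples))  # ['p', 'a'] with the assumed 0.5 s dwell threshold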
<p>The current prototype systems showed that gaze‐based systems could be used as an alternative to the conventional password entry scheme while at the same time retaining the virtues of conventional passwords (e.g. ease of use) and mitigating shoulder‐surfing and acoustic attacks. Users do not need to learn a new way of entering their password. At the same time, gaze‐based password entry makes detecting the user's password by shoulder surfing a considerably harder task, thereby increasing the security of the password at the point of entry. Still, gaze‐based password entry might work poorly for certain people, such as those with thick glasses, special contact lenses, or lazy eyes. Also, there may still be a barrier for the average person because she needs to go through a calibration process in which the software measures how quickly her eyes move.</p>
<p>Eye movement tracking technology involves a high‐resolution camera and a series of infrared light‐emitting diodes. This hardware is embedded into the bezel of expensive monitors. The camera picks up the movement of the pupil and the reflection of the infrared light off the cornea, which is used as a reference point because it doesn't move. There are some signs that eye‐tracking technology could find its way to the consumer market soon. Apple's desktops and laptops are now equipped with a built‐in camera for videoconferencing. If a higher‐resolution camera, infrared light emitting diodes (LEDs), and software were added, Apple's machines would be able to support gaze information as input to an authentication system.</p>
</sec>
<sec>
<title>6.6 Context‐aware system</title>
<p>Context‐awareness is an emerging concept of pervasive computing. Applications need to become increasingly autonomous and invisible, by placing greater reliance on knowledge of context and reducing interactions with users. Invisibility of applications will be accomplished in part by reducing input from users and replacing it with knowledge of context.</p>
<p>Applications will have greater awareness of context, and thus will be able to provide more intelligent services that reduce the burden on users to direct and interact with applications. The context of an application may include any information that can be used to characterize the situation of an entity, where an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves (
<xref ref-type="bibr" rid="b11">Fitts, 1954</xref>
). Many applications will resemble agents that carry out tasks on behalf of users by exploiting the rich sets of services available within computing environments.</p>
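<p>A minimal sketch of this definition of context, with hypothetical attribute names and a toy rule, might look as follows: each entity (a person, place or object) carries a bag of situational information, and an agent‐like application acts on it instead of waiting for explicit input.</p>
<preformat>
# Toy context model: any information characterising the situation of an entity.
# The attributes and the rule are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Context:
    entity: str                        # the person, place or object described
    attributes: dict = field(default_factory=dict)

    def update(self, **info):
        """Merge newly sensed information into the entity's situation."""
        self.attributes.update(info)

def suggest_service(ctx: Context) -> str:
    """An agent-like application acting on context rather than explicit input."""
    a = ctx.attributes
    if a.get("location") == "meeting_room" and a.get("activity") == "presenting":
        return "silence notifications"
    return "no action"

user = Context("alice")
user.update(location="meeting_room", activity="presenting", time="14:05")
print(suggest_service(user))   # silence notifications
</preformat>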
<p>Human body movements can provide a wealth of data indicating the individual's activity and their emotional, cognitive and physiological state. In addition to allowing new forms of explicit input, body movement can provide implicit input to context‐aware systems. Non‐verbal behaviour is an important part of direct human‐to‐human communication, with facial expressions and gestures conveying the context of statements.
<xref ref-type="bibr" rid="b26">Pantic and Rothkrantz (2003)</xref>
state findings indicating that when engaged in conversation the listener determines whether they are liked or disliked by relying primarily upon facial expression followed by vocal intonation, with the actual words being of minor significance. The perceived need to add “emoticons” to written communications, as often seen in emails, reflects a desire to add context and more accurately convey meaning.</p>
<p>Context is the representation of the information that is relevant to the individuals and devices within the space. The context must be a composition of relevant information: a mere collection of information is of lower value. One reason is ease of access. A context that represents just the recorded interaction in the space is of lower value than one that represents indexed recorded interaction, which is in turn of lower value than one that represents summarized interaction organized into a structure. In a design session, the interaction of the participants contains a great deal of useful information, but watching and listening to it after the fact would be tedious indeed – designs can go on for months or years. Better would be the ability to go directly to specific segments of recorded material. The more flexible the access mechanism (e.g. by topic rather than by specific words) the better. Even better is the ability to access a summary of the interaction: a coherent representation of the design as a structure of decisions and their rationale.</p>
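<p>The three levels of value described above can be made concrete with a small, purely hypothetical example: a raw recording, a topic index over its segments, and a structured summary of decisions and their rationale.</p>
<preformat>
# Illustration (hypothetical data) of raw, indexed and summarised interaction.
from collections import defaultdict

# Raw recording: ordered (seconds, utterance) pairs -- tedious to replay in full.
recording = [
    (10, "we should use a capacitive sensor for the handle"),
    (95, "budget limits us to one prototype this quarter"),
    (240, "decision: go with the capacitive sensor, revisit cost next review"),
]

# Indexed recording: jump straight to the segments about a topic,
# rather than searching for specific words.
topic_keywords = {"sensing": {"sensor", "capacitive"}, "cost": {"budget", "cost"}}
topic_index = defaultdict(list)
for t, utterance in recording:
    words = set(utterance.replace(",", "").split())
    for topic, keywords in topic_keywords.items():
        if words & keywords:
            topic_index[topic].append(t)

# Summarised interaction: decisions and their rationale as a structure.
summary = [{"decision": "capacitive sensor",
            "rationale": "best fit for the handle",
            "revisit": "cost at the next review"}]

print(topic_index["sensing"])    # [10, 240]
print(summary[0]["decision"])    # capacitive sensor
</preformat>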
<p>Another reason that the context must be a composition of relevant information is that it must guide the computational understanding of further interaction in the space. Movement recognition technology can be used in a context‐aware system to identify a person's activity and provide appropriate services. Research by
<xref ref-type="bibr" rid="b21">Ling (2003)</xref>
using data obtained from five accelerometers placed on subjects performing twenty everyday activities yielded a recognition rate of 84.26 per cent without individual training.
<xref ref-type="bibr" rid="b30">Randall and Muller (2000)</xref>
report a recognition rate of 85‐90 per cent for a range of ambulatory activities using two accelerometers. In the “Tourist Guide” application described by these authors, the detection of a change from walking to running causes cessation of an audio commentary, to avoid situations such as the user being distracted whilst trying to cross a busy road.</p>
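<p>A crude sketch of this “Tourist Guide” behaviour, under assumed thresholds, is given below: a single statistical feature of the acceleration magnitude distinguishes walking from running, and the commentary is paused on a running window. Real systems such as those cited above learn their classifiers from data rather than relying on a hand‐picked threshold.</p>
<preformat>
# Sketch of activity-triggered behaviour from accelerometer data.
# Window length and threshold are illustrative assumptions.
from statistics import pstdev

RUN_THRESHOLD = 4.0   # std-dev of acceleration magnitude (m/s^2), assumed

def classify(window):
    """Very crude activity label from one window of magnitude samples."""
    return "running" if pstdev(window) > RUN_THRESHOLD else "walking"

def tourist_guide(windows):
    """Yield the commentary state ('playing' or 'paused') for each window."""
    for window in windows:
        yield "paused" if classify(window) == "running" else "playing"

walking = [9.8, 10.4, 9.5, 10.1, 9.9, 10.2]    # small variation about 1 g
running = [3.0, 17.0, 4.5, 16.0, 2.5, 18.0]    # large variation
print(list(tourist_guide([walking, walking, running, walking])))
# ['playing', 'playing', 'paused', 'playing']
</preformat>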
</sec>
</sec>
<sec>
<title>7. Privacy, security and ethical considerations</title>
<p>Many things that are technically feasible and harmless within the lab may have serious implications in the real world. If technologies for tracking, tracing and mining (e.g. social network analysis, location‐based services, context‐aware systems) are deployed beyond the lab, the question of choice becomes a real issue: are users aware of it and can they opt out?</p>
<p>Pervasive computing systems may have implications for privacy, security and safety, as a result of their ability to: gather sensitive data, for example on users' everyday interactions, movements, preferences and attitudes, without user intervention or consent; retrieve and use information from large databases/archives of stored data; and alter the environment via actuating devices.</p>
<p>As previously stated, a person's body movements can reveal a vast amount and range of information about the individual. When one considers a world permeated with vast numbers of networked computing devices of ever increasing power, issues concerning the control of the information are raised. Privacy is probably the first issue that comes to most people's minds, yet the concept of what constitutes privacy is problematic even to ethicists.
<xref ref-type="bibr" rid="b41">Volkman (2003)</xref>
asserts that what is referred to as “the right to privacy” is derived from the Neo‐Lockean theory of natural rights, these being the right to life, liberty and property. Though this may seem more the realm of ethicists and philosophers, it will be computer professionals that will design and implement these systems. This is particularly important given that in the rapidly changing area of information technology, legislation lags behind technology leaving what Moor calls a “policy vacuum”.</p>
<p>Many of the emerging applications discussed in this paper can present ethical dilemmas.
<xref ref-type="bibr" rid="b18">Kapur
<italic>et al.</italic>
(2005)</xref>
present one scenario where a lawyer in a courtroom uses a laptop to monitor a witness's emotional state. In a similar vein, prospective employees may have their emotional state monitored or their movement assessed during a job interview, or workers in a factory may be monitored for reasons other than safety. Aside from whether these are ethical uses of technology, there is the question of how much reliance should be placed upon the results of the analyses obtained. Recognition rates of 85‐95 per cent are impressive, but conversely this translates to an error rate of 5‐15 per cent. Basing decisions that affect people's lives upon information with such errors may result in unjust treatment of people. Virtual reality poses its own special conundrums. Do the representations of individuals in a virtual world have the same rights and responsibilities as in the physical world, or is it enter at your own risk? Further to this, what are the distinctions between public and private spaces in a virtual world?</p>
<p>Security is also an important issue. Even if one voluntarily passes information to a third party and is satisfied that the information will not be misused, there needs to be reassurance that the information will remain secure. It is a commonly held view that as networks become larger, so do security risks. The predominance of wireless technology in future networks will create further vulnerabilities. When networks can potentially extend from inside the human body to anywhere in the world, the consequences of security breaches could be severe. The loss of data on a hard drive due to a virus may pale into insignificance when compared to an attack affecting control of a robotic surgeon.</p>
<p>Development of future systems will require that security be an integral part of the system, in contrast to many previous systems where security mechanisms were added on later, often in response to discovered vulnerabilities. Aside from potential liability issues, ethical behaviour of computer professionals is necessary if they are to be regarded as a professional group by the wider community. For the full potential of ubiquitous computing to be realised and widely accepted, the general public needs to have trust in these systems and in those who develop and maintain them.</p>
<sec>
<title>7.1 Social and cultural dimensions</title>
<p>Many things that are technically feasible and harmless within the lab may have serious implications in the real world. There is no such thing as a universal form of human body movement communication, and each culture has its own rules of communication. For example, a smile can be considered a friendly gesture in one culture while it may be regarded as insulting, or may signal embarrassment, in another culture. Failure to recognize and account for differences in culturally‐based communication styles may be regarded as improper, discourteous, or disrespectful.</p>
<p>Given the inter‐regional dimension of the network, it will be possible to engage in comparative inter‐cultural and cross‐cultural studies of place, mediated discourse and embodied interaction. This will give us a heightened awareness of the diversity of generic, stylistic and discursive resources, yet also enable us to document specific similarities and formats across settings and activities. Body movements, particularly arm and hand gestures, during a conversation convey a wealth of contextual information to the listener but to date most research has centred upon recognizing and categorizing facial expression (
<xref ref-type="bibr" rid="b12">Gunes and Piccardi, 2005</xref>
;
<xref ref-type="bibr" rid="b18">Kapur
<italic>et al.</italic>
, 2005</xref>
). Both sets of authors express the opinion that detection of affect will rely upon assessment of multimodal input, part of which will be facial and body movement. Intra‐cultural and inter‐cultural variations in gestures also support the need not to rely solely upon one modality. This may be illustrated by the two differing interpretations that may be made of a simple hand gesture. In Western society, presenting an open hand palm forward signals “stop”, whereas in Mediterranean countries this may be interpreted as a curse. Similarly, the “thumbs up” sign is an obscene gesture in Afghanistan and is considered “impolite” in Australia, but in almost every other part of the world it simply means “okay”. Misjudging cross‐cultural communication can be counter‐productive at best and damaging at worst.</p>
<p>Gestures form part of the basis of how humans interact with one another and are therefore natural and largely effortless. Gesture recognition is one of the most important means of easing human‐computer interaction. However, in a multicultural country such as the UK it is useful to have an overview of the different meanings that the same gesture can carry, since different cultures use different symbols to mean the same thing. For this type of computing to work, a universal gesturing language would have to be created and taught. Thus, understanding and accepting cultural differences is critical if such interfaces are to succeed beyond the culture in which they were designed.</p>
</sec>
</sec>
<sec>
<title>8. Conclusion</title>
<p>The notion of a computer is changing. The traditional image of a box, a screen and a keyboard is rapidly being replaced by concepts wherein computing power is distributed among a multitude of devices. Ubiquitous computing is about the omni‐presence of invisible technology in our environments. As computers become embedded into everyday objects, effective natural human‐computer interaction becomes critical. Although traditional input devices such as the keyboard, mouse and touch screen already involve bodily movements, new interaction technologies utilising human movements aim to provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm.</p>
<p>Research into human body movement as input for HCI is a fertile area of information technology research, with a wide range of technologies having been investigated and many interesting systems designed. Human body movement is likely to play a significant role in the era of ubiquitous or pervasive computing. Whilst one shies away from predicting what this future will look like, it is certain to be a very different world. Designs using gestures as commands would similarly promise simplicity and intuitiveness.</p>
<fig position="float" id="F_3610050105001">
<label>
<bold>Figure 1
<x> </x>
</bold>
</label>
<caption>
<p> Taxonomy of body movements</p>
</caption>
<graphic xlink:href="3610050105001.tif"></graphic>
</fig>
<fig position="float" id="F_3610050105002">
<caption>
<p>Equation 1</p>
</caption>
<graphic xlink:href="3610050105002.tif"></graphic>
</fig>
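<p>Equation 1 is supplied only as a graphic in this record. Assuming it is the Fitts' law expression with which the paper quantifies movement performance (Fitts, 1954; MacKenzie and Buxton, 1992), the widely used Shannon formulation is $MT = a + b\,\log_2\!\left(\frac{D}{W} + 1\right)$, where $MT$ is the movement time, $D$ the distance to the target, $W$ the target width, and $a$ and $b$ are empirically fitted constants; Fitts' original paper wrote the logarithmic term (the index of difficulty, in bits) as $\log_2(2D/W)$.</p>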
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="b1">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Accot</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Zhai</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
(
<year>1997</year>
), “
<article-title>
<italic>Beyond Fitts' law: models for trajectory‐based HCI tasks</italic>
</article-title>
”,
<italic>Proceedings of the ACM CHI Conference on Human Factors in Computing Systems</italic>
,
<publisher-loc>
<italic>Los Angeles, CA</italic>
</publisher-loc>
, pp.
<fpage>295</fpage>
<x></x>
<lpage>302</lpage>
.</mixed-citation>
</ref>
<ref id="b2">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Adjouadi</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Sesin</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Ayala</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Cabrerizo</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Remote eye gaze tracking system as a computer interface for persons with severe motor disability</italic>
</article-title>
”,
<italic>Proceedings of the 9th International Conference on Computers Helping People with Special Needs</italic>
,
<publisher-loc>
<italic>Paris</italic>
</publisher-loc>
, pp.
<fpage>761</fpage>
<x></x>
<lpage>6</lpage>
.</mixed-citation>
</ref>
<ref id="b3">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Alexander</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Sarrafzadeh</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Interfaces that adapt like humans</italic>
</article-title>
”,
<source>
<italic>Proceedings of 6th Computer Human Interaction 6th Asia Pacific Conference (APCHI 2004), Rotorua</italic>
</source>
, pp.
<fpage>641</fpage>
<x></x>
<lpage>5</lpage>
.</mixed-citation>
</ref>
<ref id="b4">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Amir</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Zimet</surname>
,
<given-names>L.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Sangiovanni‐Vincentelli</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Kao</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>An embedded system for an eye‐detection sensor</italic>
</article-title>
”,
<source>
<italic>Computer Vision and Image Understanding</italic>
</source>
, CVIU special issue on “Eye detection and tracking”, Vol.
<volume>98</volume>
No.
<issue>1</issue>
, pp.
<fpage>104</fpage>
<x></x>
<lpage>23</lpage>
.</mixed-citation>
</ref>
<ref id="b5">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Banich</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>1997</year>
),
<source>
<italic>Neuropsychology – the Neural Bases of Mental Function</italic>
</source>
,
<publisher-name>Houghton Mifflin</publisher-name>
,
<publisher-loc>New York</publisher-loc>
,
<publisher-loc>NY</publisher-loc>
.</mixed-citation>
</ref>
<ref id="b6">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Berger</surname>
,
<given-names>Y.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Wool</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Yeredor</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
(
<year>2006</year>
), “
<article-title>
<italic>Dictionary attacks using keyboard acoustic emanations</italic>
</article-title>
”,
<source>
<italic>Proceedings of the Computer and Communications Security (CCS)</italic>
</source>
,
<italic>Alexandria</italic>
,
<italic>VA</italic>
, pp.
<fpage>245</fpage>
<x></x>
<lpage>54</lpage>
.</mixed-citation>
</ref>
<ref id="b7">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>De Rossi</surname>
,
<given-names>D.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Lorussi</surname>
,
<given-names>F.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Mazzoldi</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Scilingo</surname>
,
<given-names>E.P.</given-names>
</string-name>
</person-group>
(
<year>2001</year>
), “
<article-title>
<italic>Active dressware: wearable proprioceptive systems based on electroactive polymers</italic>
</article-title>
”,
<italic>Proceedings of the 5th International Symposium on Wearable Computers</italic>
,
<publisher-loc>
<italic>Zurich</italic>
</publisher-loc>
, pp.
<fpage>161</fpage>
<x></x>
<lpage>2</lpage>
.</mixed-citation>
</ref>
<ref id="b8">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>de Silva</surname>
,
<given-names>G.C.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Lyons</surname>
,
<given-names>M.J.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Tetsutani</surname>
,
<given-names>N.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Vision based acquisition of mouth actions for human‐computer interaction</italic>
</article-title>
”,
<italic>Proceedings of the 8th Pacific Rim International Conference on Artificial Intelligence</italic>
,
<publisher-loc>
<italic>Auckland</italic>
</publisher-loc>
, pp.
<fpage>959</fpage>
<x></x>
<lpage>60</lpage>
.</mixed-citation>
</ref>
<ref id="b9">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>DeVaul</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Sung</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Gips</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Pentland</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>MIThril 2003: applications and architecture</italic>
</article-title>
”,
<italic>Proceedings of the 7th IEEE International Symposium on Wearable Computers</italic>
,
<publisher-loc>
<italic>White Plains, NY</italic>
</publisher-loc>
, pp.
<fpage>4</fpage>
<x></x>
<lpage>11</lpage>
.</mixed-citation>
</ref>
<ref id="b10">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Fishkin</surname>
,
<given-names>K.P.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>A taxonomy for and analysis of tangible interfaces</italic>
</article-title>
”,
<source>
<italic>Personal and Ubiquitous Computing</italic>
</source>
, Vol.
<volume>8</volume>
No.
<issue>5</issue>
, pp.
<fpage>347</fpage>
<x></x>
<lpage>58</lpage>
.</mixed-citation>
</ref>
<ref id="b11">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Fitts</surname>
,
<given-names>P.</given-names>
</string-name>
</person-group>
(
<year>1954</year>
), “
<article-title>
<italic>The information capacity of the human motor system in controlling the amplitude of movement</italic>
</article-title>
”,
<source>
<italic>Journal of Experimental Psychology</italic>
</source>
, Vol.
<volume>47</volume>
, pp.
<fpage>381</fpage>
<x></x>
<lpage>91</lpage>
.</mixed-citation>
</ref>
<ref id="b12">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Gunes</surname>
,
<given-names>H.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Piccardi</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Automatic visual recognition of face and body action units</italic>
</article-title>
”,
<italic>Proceedings of the 3rd International Conference on Information Technology and Applications</italic>
,
<italic>Sydney</italic>
, pp.
<fpage>668</fpage>
<x></x>
<lpage>73</lpage>
.</mixed-citation>
</ref>
<ref id="b13">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Hamill</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Knutzen</surname>
,
<given-names>K.M.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
),
<source>
<italic>Biomechanical Basis of Human Movement</italic>
</source>
,
<edition>2nd ed.</edition>
,
<publisher-name>Lippincott Williams and Wilkins</publisher-name>
,
<publisher-loc>Philadelphia</publisher-loc>
,
<publisher-loc>PA</publisher-loc>
.</mixed-citation>
</ref>
<ref id="b14">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Hawthorn</surname>
,
<given-names>D.</given-names>
</string-name>
</person-group>
(
<year>1998</year>
), “
<article-title>
<italic>Psychophysical aging and human computer interface design</italic>
</article-title>
”,
<italic>Proceedings of the Australasian Conference on Computer Human Interaction</italic>
,
<publisher-loc>
<italic>Adelaide</italic>
</publisher-loc>
, pp.
<fpage>281</fpage>
<x></x>
<lpage>91</lpage>
.</mixed-citation>
</ref>
<ref id="b15">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Ishikawa</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Horry</surname>
,
<given-names>Y.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Hoshino</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Touchless input device and gesture commands</italic>
</article-title>
”,
<italic>Proceedings of the International Conference on Consumer Electronics</italic>
,
<italic>Las Vegas</italic>
, NV, pp.
<fpage>205</fpage>
<x></x>
<lpage>6</lpage>
.</mixed-citation>
</ref>
<ref id="b16">
<mixed-citation>
<person-group person-group-type="editor">
<string-name>
<surname>Jacko</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="editor">
<string-name>
<surname>Sears</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
(Eds) (
<year>2003</year>
),
<source>
<italic>The Human‐Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications</italic>
</source>
,
<publisher-name>Lawrence Erlbaum Associates</publisher-name>
,
<publisher-loc>New Jersey</publisher-loc>
,
<publisher-loc>NJ</publisher-loc>
.</mixed-citation>
</ref>
<ref id="b17">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Jin</surname>
,
<given-names>Y.K.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Choi</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Chung</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Myung</surname>
,
<given-names>I.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Lee</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Kim</surname>
,
<given-names>M.C.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Woo</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>GIA: design of a gesture‐based interaction photo album</italic>
</article-title>
”,
<source>
<italic>Personal and Ubiquitous Computing [Online]</italic>
</source>
, Vol.
<volume>8</volume>
No.
<issue>3/4</issue>
, pp.
<fpage>227</fpage>
<x></x>
<lpage>33</lpage>
.</mixed-citation>
</ref>
<ref id="b18">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Kapur</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Kapur</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Virji‐Babul</surname>
,
<given-names>N.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Tzanetakis</surname>
,
<given-names>G.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Driessen</surname>
,
<given-names>P.F.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Gesture‐based affective computing on motion capture data</italic>
</article-title>
”,
<italic>Proceedings of the 1st International Conference on Affective Computing and Intelligent Interaction</italic>
,
<publisher-loc>
<italic>Beijing</italic>
</publisher-loc>
, pp.
<fpage>1</fpage>
<x></x>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="b19">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Kondraske</surname>
,
<given-names>G.V.</given-names>
</string-name>
</person-group>
(
<year>1994</year>
), “
<article-title>
<italic>An angular motion Fitts' Law for human performance modeling and prediction</italic>
</article-title>
”,
<italic>Proceedings of the 16th Annual Engineering in Medicine and Biology Society Conference</italic>
,
<publisher-loc>
<italic>Baltimore, MD</italic>
</publisher-loc>
, pp.
<fpage>307</fpage>
<x></x>
<lpage>8</lpage>
.</mixed-citation>
</ref>
<ref id="b20">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Kumar</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Paepcke</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Winograd</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
(
<year>2007</year>
), “
<article-title>
<italic>EyePoint: practical pointing and selection using gaze and keyboard</italic>
</article-title>
”,
<italic>Proceedings of the CHI: Conference on Human Factors in Computing Systems</italic>
,
<publisher-loc>San Jose, CA.</publisher-loc>
, pp.
<fpage>421</fpage>
<x></x>
<lpage>30</lpage>
.</mixed-citation>
</ref>
<ref id="b21">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Ling</surname>
,
<given-names>B.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Physical activity recognition from acceleration data under semi‐naturalistic conditions</italic>
</article-title>
”, Master's thesis,
<publisher-name>Massachusetts Institute of Technology (MIT)</publisher-name>
,
<publisher-loc>Massachusetts, MA</publisher-loc>
.</mixed-citation>
</ref>
<ref id="b22">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>MacKenzie</surname>
,
<given-names>I.S.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Buxton</surname>
,
<given-names>A.S.</given-names>
</string-name>
</person-group>
(
<year>1992</year>
), “
<article-title>
<italic>Extending Fitts' law to two‐dimensional tasks</italic>
</article-title>
”,
<italic>Proceedings of the ACM CHI 1992 Conference on Human Factors in Computing Systems</italic>
,
<publisher-loc>
<italic>Monterey, CA</italic>
</publisher-loc>
, pp.
<fpage>219</fpage>
<x></x>
<lpage>26</lpage>
.</mixed-citation>
</ref>
<ref id="b23">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Maeder</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Fookes</surname>
,
<given-names>C.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Sridharan</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Gaze based user authentication for personal computer applications</italic>
</article-title>
”,
<italic>Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing</italic>
,
<publisher-name>IEEE</publisher-name>
,
<publisher-loc>Hong Kong</publisher-loc>
, pp.
<fpage>727</fpage>
<x></x>
<lpage>30</lpage>
.</mixed-citation>
</ref>
<ref id="b24">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Murtagh</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
(
<year>1994</year>
),
<source>
<italic>General Practice</italic>
</source>
,
<publisher-name>McGraw‐Hill</publisher-name>
,
<publisher-loc>Sydney</publisher-loc>
.</mixed-citation>
</ref>
<ref id="b25">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Ng</surname>
,
<given-names>K.C.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Music via motion: transdomain mapping of motion and sound for interactive performances</italic>
</article-title>
”,
<source>
<italic>Proceedings of the IEEE</italic>
</source>
, Vol.
<volume>92</volume>
No.
<issue>4</issue>
, pp.
<fpage>645</fpage>
<x></x>
<lpage>55</lpage>
.</mixed-citation>
</ref>
<ref id="b26">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Pantic</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Rothkrantz</surname>
,
<given-names>L.J.M.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Toward an affect sensitive multimodal human‐computer interaction</italic>
</article-title>
”,
<source>
<italic>Proceedings of the IEEE</italic>
</source>
, Vol.
<volume>91</volume>
No.
<issue>9</issue>
, pp.
<fpage>1370</fpage>
<x></x>
<lpage>90</lpage>
.</mixed-citation>
</ref>
<ref id="b27">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Peplow</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Robot surgeons scrub up</italic>
</article-title>
”, available at:
<ext-link ext-link-type="uri" xlink:href="http://news.nature.com.ezproxy.lib.deakin.edu.au//news/2005/051024/051024-11.html">http://news.nature.com.ezproxy. lib.deakin.edu.au//news/2005/051024/051024‐11.html</ext-link>
(accessed 28 October 2005).</mixed-citation>
</ref>
<ref id="b28">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Pingali</surname>
,
<given-names>G.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Pinhanez</surname>
,
<given-names>C.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Levas</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Kjeldsen</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Podlaseck</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Chen</surname>
,
<given-names>H.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Sukaviriya</surname>
,
<given-names>N.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Steerable interfaces for pervasive computing spaces</italic>
</article-title>
”,
<italic>Proceedings of the 1st IEEE International Conference on Pervasive Computing and Communications</italic>
,
<publisher-loc>
<italic>Fort Worth, TX</italic>
</publisher-loc>
.</mixed-citation>
</ref>
<ref id="b29">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Qvarfordt</surname>
,
<given-names>P.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Zhai</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Conversing with the user based on eye‐gaze patterns</italic>
</article-title>
”,
<italic>Proceedings of the SIGCHI Conference on Human‐Factors in Computing Systems</italic>
,
<publisher-loc>
<italic>Portland, Oregon</italic>
</publisher-loc>
, pp.
<fpage>221</fpage>
<x></x>
<lpage>30</lpage>
.</mixed-citation>
</ref>
<ref id="b30">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Randall</surname>
,
<given-names>C.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Muller</surname>
,
<given-names>H.</given-names>
</string-name>
</person-group>
(
<year>2000</year>
), “
<article-title>
<italic>Context awareness by analysing accelerometer data</italic>
</article-title>
”,
<italic>Proceedings of the 4th International Symposium on Wearable Computers</italic>
,
<publisher-loc>
<italic>Atlanta, GA</italic>
</publisher-loc>
, pp.
<fpage>175</fpage>
<x></x>
<lpage>6</lpage>
.</mixed-citation>
</ref>
<ref id="b31">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Reilly</surname>
,
<given-names>R.B.</given-names>
</string-name>
</person-group>
(
<year>1998</year>
), “
<article-title>
<italic>Applications of face and gesture recognition for human‐computer interaction</italic>
</article-title>
”,
<italic>Proceedings of the 6th ACM International Conference on Multimedia</italic>
,
<publisher-loc>
<italic>Bristol</italic>
</publisher-loc>
, pp.
<fpage>20</fpage>
<x></x>
<lpage>7</lpage>
.</mixed-citation>
</ref>
<ref id="b32">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Salvucci</surname>
,
<given-names>D.D.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Anderson</surname>
,
<given-names>J.R.</given-names>
</string-name>
</person-group>
(
<year>2001</year>
) “
<article-title>
<italic>Automated eye movement protocol analysis</italic>
</article-title>
”,
<source>
<italic>Human‐Computer Interaction</italic>
</source>
, Vol.
<volume>16</volume>
No.
<issue>1</issue>
, pp.
<fpage>38</fpage>
<x></x>
<lpage>49</lpage>
.</mixed-citation>
</ref>
<ref id="b33">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Sibert</surname>
,
<given-names>L.E.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Jacob</surname>
,
<given-names>R.J.K.</given-names>
</string-name>
</person-group>
(
<year>2000</year>
), “
<article-title>
<italic>Evaluation of eye gaze interaction</italic>
</article-title>
”,
<italic>Proceedings of the ACM Conference on Human Factors in Computing Systems</italic>
,
<publisher-loc>
<italic>The Hague</italic>
</publisher-loc>
, pp.
<fpage>281</fpage>
<x></x>
<lpage>8</lpage>
.</mixed-citation>
</ref>
<ref id="b34">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Strommen</surname>
,
<given-names>E.S.</given-names>
</string-name>
</person-group>
(
<year>1993</year>
), “
<article-title>
<italic>Is it easier to hop or walk? Development issues in interface design</italic>
</article-title>
”,
<source>
<italic>Human‐Computer Interaction [Online]</italic>
</source>
, Vol.
<volume>8</volume>
, pp.
<fpage>337</fpage>
<x></x>
<lpage>52</lpage>
.</mixed-citation>
</ref>
<ref id="b35">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Sung</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Marci</surname>
,
<given-names>C.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Pentland</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Wearable feedback systems for rehabilitation</italic>
</article-title>
”,
<source>
<italic>Journal of NeuroEngineering and Rehabilitation</italic>
</source>
, Vol.
<volume>2</volume>
No.
<issue>17</issue>
, pp.
<fpage>1</fpage>
<x></x>
<lpage>12</lpage>
.</mixed-citation>
</ref>
<ref id="b36">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Thorpe</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>van Oorschot</surname>
,
<given-names>P.C.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Somayaji</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Passthoughts: authenticating with our minds</italic>
</article-title>
”,
<italic>Proceedings of the New Security Paradigns Workshop. Lake Arrowhead, CA</italic>
,
<publisher-name>ACM Press</publisher-name>
,
<publisher-loc>New York</publisher-loc>
,
<publisher-loc>NY</publisher-loc>
, pp.
<fpage>45</fpage>
<x></x>
<lpage>56</lpage>
.</mixed-citation>
</ref>
<ref id="b37">
<mixed-citation>
<person-group person-group-type="author">
<string-name>Tobii Technology</string-name>
</person-group>
(
<year>2006</year>
), “
<article-title>
<italic>AB, Tobii 1750 eye tracker</italic>
</article-title>
”,
<publisher-loc>Sweden</publisher-loc>
, available at:
<ext-link ext-link-type="uri" xlink:href="http://www.tobii.com">www.tobii.com</ext-link>
</mixed-citation>
</ref>
<ref id="b38">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Turk</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Robertson</surname>
,
<given-names>G.</given-names>
</string-name>
</person-group>
(
<year>2000</year>
), “
<article-title>
<italic>Perceptual user interfaces</italic>
</article-title>
”,
<source>
<italic>C‐ACM</italic>
</source>
, Vol.
<volume>43</volume>
No.
<issue>3</issue>
, pp.
<fpage>32</fpage>
<x></x>
<lpage>4</lpage>
.</mixed-citation>
</ref>
<ref id="b39">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>van Dorp</surname>
,
<given-names>P.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Groen</surname>
,
<given-names>F.C.A.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Human walking estimation with radar</italic>
</article-title>
”,
<italic>IEE Proceedings of Radar, Sonar and Navigation</italic>
, Vol.
<volume>150</volume>
No.
<issue>5</issue>
, pp.
<fpage>356</fpage>
<x></x>
<lpage>65</lpage>
.</mixed-citation>
</ref>
<ref id="b40">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Vardy</surname>
,
<given-names>A.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Robinson</surname>
,
<given-names>J.A.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Cheng</surname>
,
<given-names>L.‐T.</given-names>
</string-name>
</person-group>
(
<year>1999</year>
), “
<article-title>
<italic>The WristCam as input device</italic>
</article-title>
”,
<italic>Proceedings of the 3rd International Symposium on Wearable Computers</italic>
,
<publisher-loc>
<italic>San Francisco, CA</italic>
</publisher-loc>
, pp.
<fpage>199</fpage>
<x></x>
<lpage>202</lpage>
.</mixed-citation>
</ref>
<ref id="b41">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Volkman</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Privacy as life, liberty, property</italic>
</article-title>
”,
<source>
<italic>Ethics and Information Technology</italic>
</source>
, Vol.
<volume>5</volume>
, pp.
<fpage>199</fpage>
<x></x>
<lpage>210</lpage>
.</mixed-citation>
</ref>
<ref id="b42">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Weiser</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>1999</year>
), “
<article-title>
<italic>Turning pervasive computing into mediated spaces</italic>
</article-title>
”,
<source>
<italic>IBM System Journal</italic>
</source>
, Vol.
<volume>38</volume>
No.
<issue>4</issue>
, pp.
<fpage>677</fpage>
<x></x>
<lpage>92</lpage>
.</mixed-citation>
</ref>
<ref id="b43">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Weiser</surname>
,
<given-names>M</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Brown</surname>
,
<given-names>J.S.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>The coming age of calm technology</italic>
</article-title>
”, available at:
<ext-link ext-link-type="uri" xlink:href="http://www.johnseelybrown.com/calmtech.pdf">www.johnseelybrown.com/calmtech.pdf</ext-link>
(accessed 3 December 2005).</mixed-citation>
</ref>
<ref id="b44">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Wu</surname>
,
<given-names>Y.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Hua</surname>
,
<given-names>G.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Yu</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Tracking articulated body by dynamic Markov network</italic>
</article-title>
”,
<italic>Proceedings of the ICCV</italic>
,
<publisher-loc>
<italic>Nice</italic>
</publisher-loc>
, pp.
<fpage>1094</fpage>
<x></x>
<lpage>101</lpage>
.</mixed-citation>
</ref>
<ref id="b45">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Yang</surname>
,
<given-names>N.F.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Jin</surname>
,
<given-names>D.W.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Zhang</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Huang</surname>
,
<given-names>C.H.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Wang</surname>
,
<given-names>R.C.</given-names>
</string-name>
</person-group>
(
<year>2001</year>
), “
<article-title>
<italic>An extending Fitts' law for human upper limb performance evaluation</italic>
</article-title>
”,
<italic>Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society</italic>
,
<publisher-loc>
<italic>Istanbul, Turkey</italic>
</publisher-loc>
, pp.
<fpage>1240</fpage>
<x></x>
<lpage>43</lpage>
.</mixed-citation>
</ref>
<ref id="b46">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Zhuang</surname>
,
<given-names>L.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Zhou</surname>
,
<given-names>F.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Tygar</surname>
,
<given-names>J.D.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Keyboard acoustic emanations revisited</italic>
</article-title>
”,
<source>
<italic>Proceedings of the 12th ACM Conference on Computer and Communications Security (CCS)</italic>
</source>
,
<italic>Alexandria, VA</italic>
,
<publisher-name>ACM Press</publisher-name>
,
<publisher-loc>New York</publisher-loc>
,
<publisher-loc>NY</publisher-loc>
, pp.
<fpage>373</fpage>
<x></x>
<lpage>82</lpage>
.</mixed-citation>
</ref>
</ref-list>
<ref-list>
<title>Further reading</title>
<ref id="b47">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Amit</surname>
,
<given-names>K.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Chowdhury</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Chellappa</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Towards a view invariant gait recognition algorithm</italic>
</article-title>
”,
<italic>Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance</italic>
,
<publisher-loc>
<italic>Miami, FL</italic>
</publisher-loc>
, pp.
<fpage>143</fpage>
<x></x>
<lpage>50</lpage>
.</mixed-citation>
</ref>
<ref id="b48">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Bobick</surname>
,
<given-names>A.F.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Davis</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
(
<year>2001</year>
), “
<article-title>
<italic>The recognition of human movement using temporal templates</italic>
</article-title>
”,
<source>
<italic>IEEE Transactions on PAMI</italic>
</source>
, Vol.
<volume>23</volume>
No.
<issue>3</issue>
, pp.
<fpage>257</fpage>
<x></x>
<lpage>67</lpage>
.</mixed-citation>
</ref>
<ref id="b49">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Bobick</surname>
,
<given-names>A.F.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Johnson</surname>
,
<given-names>A.Y.</given-names>
</string-name>
</person-group>
(
<year>2001</year>
), “
<article-title>
<italic>Gait recognition using static, activity specific parameters</italic>
</article-title>
”,
<italic>Proceedings of the 2001 Computer Society Conference on Computer Vision and Pattern Recognition</italic>
,
<publisher-loc>
<italic>Kauai, HI</italic>
</publisher-loc>
, pp.
<fpage>1423</fpage>
<x></x>
<lpage>30</lpage>
.</mixed-citation>
</ref>
<ref id="b50">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Bohn</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Coroama</surname>
,
<given-names>V.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Langheinrich</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Mattern</surname>
,
<given-names>F.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Rohs</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Social, economic, and ethical implications of ambient intelligence and ubiquitous computing</italic>
</article-title>
”,
<publisher-name>Institute for Pervasive Computing, ETH</publisher-name>
,
<publisher-loc>Zurich</publisher-loc>
, available at:
<ext-link ext-link-type="uri" xlink:href="http://www.vs.inf.ethz.ch/publ/papers/socialambient.pdf">www.vs.inf.ethz.ch/publ/papers/socialambient.pdf</ext-link>
</mixed-citation>
</ref>
<ref id="b51">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Choras</surname>
,
<given-names>R.S.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Choras</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
(
<year>2002</year>
), “
<article-title>
<italic>Computer visual system analyzing the influence of stimulants on human motion</italic>
</article-title>
”,
<source>
<italic>Lecture Notes in Computer Science</italic>
</source>
, Vol.
<volume>2492/2002</volume>
,
<publisher-name>Springer</publisher-name>
,
<publisher-loc>Heidelberg</publisher-loc>
, pp.
<fpage>241</fpage>
<x></x>
<lpage>50</lpage>
.</mixed-citation>
</ref>
<ref id="b52">
<mixed-citation>
<person-group person-group-type="author">
<string-name>Exploring nonverbal communication</string-name>
</person-group>
, available at:
<ext-link ext-link-type="uri" xlink:href="http://zzyx.ucsc.edu/archer/intro.html">http://zzyx.ucsc.edu/ archer/intro.html</ext-link>
</mixed-citation>
</ref>
<ref id="b53">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Jacob</surname>
,
<given-names>R.J.K.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Karn</surname>
,
<given-names>K.S.</given-names>
</string-name>
</person-group>
(
<year>2003</year>
), “
<article-title>
<italic>Eye tracking in human‐computer interaction and usability research: ready to deliver the promises (section commentary)</italic>
</article-title>
”, in
<person-group person-group-type="editor">
<string-name>
<surname>Hyona</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="editor">
<string-name>
<surname>Radach</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="editor">
<string-name>
<surname>Deubel</surname>
,
<given-names>H.</given-names>
</string-name>
</person-group>
(Eds),
<source>
<italic>The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research</italic>
</source>
,
<publisher-name>Elsevier Science</publisher-name>
,
<publisher-loc>Amsterdam</publisher-loc>
, pp.
<fpage>573</fpage>
<x></x>
<lpage>605</lpage>
.</mixed-citation>
</ref>
<ref id="b54">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Jilin</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Huang</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Hai</surname>
,
<given-names>T.</given-names>
</string-name>
</person-group>
(
<year>2005</year>
), “
<article-title>
<italic>Face as mouse through visual face tracking</italic>
</article-title>
”,
<italic>Proceedings of the 2nd Canadian Conference on Camera and Robot Vision</italic>
,
<publisher-loc>
<italic>Victoria, BC,</italic>
</publisher-loc>
pp.
<fpage>339</fpage>
<x></x>
<lpage>46</lpage>
.</mixed-citation>
</ref>
<ref id="b55">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Jong‐Sung</surname>
,
<given-names>K.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Hyuk</surname>
,
<given-names>J.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Wookho</surname>
,
<given-names>S.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>A new means of HCI: EMG‐MOUSE</italic>
</article-title>
”,
<italic>Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics</italic>
,
<italic>The Hague</italic>
, pp.
<fpage>100</fpage>
<x></x>
<lpage>4</lpage>
.</mixed-citation>
</ref>
<ref id="b56">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Lisetti</surname>
,
<given-names>C.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>LeRouge</surname>
,
<given-names>C.</given-names>
</string-name>
</person-group>
(
<year>2004</year>
), “
<article-title>
<italic>Affective computing in tele‐home health</italic>
</article-title>
”,
<source>
<italic>Proceedings of the 37th Hawaii International Conference on System Sciences</italic>
</source>
,
<italic>Big Island, HI</italic>
, pp.
<fpage>1</fpage>
<x></x>
<lpage>8</lpage>
.</mixed-citation>
</ref>
<ref id="b57">
<mixed-citation>
<person-group person-group-type="author">
<string-name>
<surname>Walker</surname>
,
<given-names>M.</given-names>
</string-name>
</person-group>
,
<person-group person-group-type="author">
<string-name>
<surname>Burnham</surname>
,
<given-names>D.</given-names>
</string-name>
</person-group>
and
<person-group person-group-type="author">
<string-name>
<surname>Borland</surname>
,
<given-names>R.</given-names>
</string-name>
</person-group>
(
<year>1994</year>
),
<source>
<italic>Psychology</italic>
</source>
,
<edition>2nd ed.</edition>
,
<publisher-name>John Wiley & Sons</publisher-name>
,
<publisher-loc>Sydney</publisher-loc>
.</mixed-citation>
</ref>
</ref-list>
<app-group>
<app>
<title>Corresponding author</title>
<p>J.H. Abawajy can be contacted at: jemal@deakin.edu.au</p>
</app>
</app-group>
</back>
</article>
</istex:document>
</istex:metadataXml>
<mods version="3.6">
<titleInfo lang="en">
<title>Human-computer interaction in ubiquitous computing environments</title>
</titleInfo>
<titleInfo type="alternative" lang="en" contentType="CDATA">
<title>Human-computer interaction in ubiquitous computing environments</title>
</titleInfo>
<name type="personal">
<namePart type="given">J.H.</namePart>
<namePart type="family">Abawajy</namePart>
</name>
<name type="personal">
<namePart type="given">J.H.</namePart>
<namePart type="family">Abawajy</namePart>
<affiliation>School of Engineering and Information Technology, Deakin University, Geelong, Australia</affiliation>
</name>
<typeOfResource>text</typeOfResource>
<genre type="research-article" displayLabel="research-article"></genre>
<originInfo>
<publisher>Emerald Group Publishing Limited</publisher>
<dateIssued encoding="w3cdtf">2009-04-03</dateIssued>
<copyrightDate encoding="w3cdtf">2009</copyrightDate>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
<languageTerm type="code" authority="rfc3066">en</languageTerm>
</language>
<physicalDescription>
<internetMediaType>text/html</internetMediaType>
</physicalDescription>
<abstract>Purpose – The purpose of this paper is to explore characteristics of human-computer interaction when the human body and its movements become input for interaction and interface control in pervasive computing settings. Design/methodology/approach – The paper quantifies the performance of human movement based on Fitts' Law and discusses some of the human factors and technical considerations that arise in trying to use human body movements as an input medium. Findings – The paper finds that new interaction technologies utilising human movements may provide more flexible, naturalistic interfaces and support the ubiquitous or pervasive computing paradigm. Practical implications – In pervasive computing environments the challenge is to create intuitive and user-friendly interfaces. Application domains that may utilize human body movements as input are surveyed here and the paper addresses issues such as culture, privacy, security and ethics raised by movement of a user's body-based interaction styles. Originality/value – The paper describes the utilization of human body movements as input for interaction and interface control in pervasive computing settings.</abstract>
<subject>
<genre>Keywords</genre>
<topic>Computer applications</topic>
<topic>Man-machine interface</topic>
<topic>Human anatomy</topic>
<topic>Human physiology</topic>
</subject>
<relatedItem type="host">
<titleInfo>
<title>International Journal of Pervasive Computing and Communications</title>
</titleInfo>
<genre type="Journal">journal</genre>
<subject>
<genre>Emerald Subject Group</genre>
<topic authority="SubjectCodesPrimary" authorityURI="cat-ENGG">Engineering</topic>
<topic authority="SubjectCodesSecondary" authorityURI="cat-EEE">Electrical & electronic engineering</topic>
<topic authority="SubjectCodesSecondary" authorityURI="cat-CSE">Computer & software engineering</topic>
</subject>
<subject>
<genre>Emerald Subject Group</genre>
<topic authority="SubjectCodesPrimary" authorityURI="cat-IKM">Information & knowledge management</topic>
<topic authority="SubjectCodesSecondary" authorityURI="cat-ICT">Information & communications technology</topic>
</subject>
<identifier type="ISSN">1742-7371</identifier>
<identifier type="PublisherID">ijpcc</identifier>
<identifier type="DOI">10.1108/ijpcc</identifier>
<part>
<date>2009</date>
<detail type="title">
<title>Advances in pervasive computing</title>
</detail>
<detail type="volume">
<caption>vol.</caption>
<number>5</number>
</detail>
<detail type="issue">
<caption>no.</caption>
<number>1</number>
</detail>
<extent unit="pages">
<start>61</start>
<end>77</end>
</extent>
</part>
</relatedItem>
<identifier type="istex">B2CCAD37D1C07EBC9030354F59150F834862E3F1</identifier>
<identifier type="DOI">10.1108/17427370910950311</identifier>
<identifier type="filenameID">3610050105</identifier>
<identifier type="original-pdf">3610050105.pdf</identifier>
<identifier type="href">17427370910950311.pdf</identifier>
<accessCondition type="use and reproduction" contentType="copyright">© Emerald Group Publishing Limited</accessCondition>
<recordInfo>
<recordContentSource>EMERALD</recordContentSource>
</recordInfo>
</mods>
</metadata>
<serie></serie>
</istex>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Istex/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001613 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Istex/Corpus/biblio.hfd -nk 001613 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Istex
   |étape=   Corpus
   |type=    RBID
   |clé=     ISTEX:B2CCAD37D1C07EBC9030354F59150F834862E3F1
   |texte=   Human-computer interaction in ubiquitous computing environments
}}

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024