OCR Exploration Server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

A Framework for Interacting with Paper

Internal identifier: 002690 (Main/Merge); previous: 002689; next: 002691


Authors: Peter Robinson; Dan Sheppard; Richard Watts; Robert Harding; Steve Lay

Source: Computer Graphics Forum, vol. 16, supplement s3, September 1997, pp. C329-C334 (Blackwell Publishers Ltd)

RBID : ISTEX:9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE

English descriptors: Augmented Reality; Human-Computer Interaction; Information retrieval; Registration; Video see-through

Abstract

This paper reports on ways of using digitised video from television cameras in user interfaces for computer systems. The DigitalDesk is built around an ordinary physical desk and can be used as such, but it has extra capabilities. A video camera mounted above the desk, pointing down at the work surface, is used to detect where the user is pointing and to read documents that are placed on the desk. A computer‐driven projector is also mounted above the desk, allowing the system to project electronic objects onto the work surface and onto real paper documents. The animated paper documents project is considering particular applications of the technology in electronic publishing. The goal is to combine electronic and printed documents to give a richer presentation than that afforded by either separate medium. This paper describes the framework that has been developed to assist with the preparation and presentation of these mixed‐media documents. The central component is a registry that associates physical locations on pieces of paper with actions. This is surrounded by a number of adaptors that assist with the creation of new documents either from scratch or by translating from conventional hypermedia, and also allow the documents to be edited. Finally the DigitalDesk itself identifies pieces of paper and animates them with the actions described in the registry.
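As an illustration of the registry concept described in the abstract (a component that associates physical locations on pieces of paper with actions), here is a minimal Python sketch. It is not the authors' implementation; the class names, coordinate convention, and dispatch logic are all hypothetical.

# Hypothetical sketch of a DigitalDesk-style registry: it maps
# rectangular regions on an identified sheet of paper to actions.
# Illustration of the concept only, not the authors' code.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Region:
    page_id: str                              # identifier of the sheet of paper
    bbox: Tuple[float, float, float, float]   # x0, y0, x1, y1 in page coordinates
    action: Callable[[], None]                # what to do when the user points here

    def contains(self, x: float, y: float) -> bool:
        x0, y0, x1, y1 = self.bbox
        return x0 <= x <= x1 and y0 <= y <= y1

class Registry:
    def __init__(self) -> None:
        self._regions: Dict[str, List[Region]] = {}

    def register(self, region: Region) -> None:
        self._regions.setdefault(region.page_id, []).append(region)

    def dispatch(self, page_id: str, x: float, y: float) -> None:
        # Called when the overhead camera sees the user pointing
        # at (x, y) on the sheet identified as page_id.
        for region in self._regions.get(page_id, []):
            if region.contains(x, y):
                region.action()
                return

registry = Registry()
registry.register(Region("paper-42", (10, 10, 80, 30),
                         lambda: print("play accompanying video")))
registry.dispatch("paper-42", 25, 20)   # -> play accompanying video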

URL: https://api.istex.fr/document/9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE/fulltext/pdf
DOI: 10.1111/1467-8659.16.3conferenceissue.34
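The full-text URL above points at the ISTEX API. A minimal Python sketch to fetch the PDF, using only the standard library; note that ISTEX access may require institutional authentication, so this assumes the endpoint is reachable from your network.

# Download the full-text PDF from the URL given in the record.
# Assumption: the ISTEX endpoint is reachable; access may require
# institutional authentication not shown here.

import urllib.request

URL = ("https://api.istex.fr/document/"
       "9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE/fulltext/pdf")

with urllib.request.urlopen(URL) as response:
    pdf_bytes = response.read()

with open("robinson1997-framework.pdf", "wb") as f:
    f.write(pdf_bytes)
print(f"Saved {len(pdf_bytes)} bytes")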

Links to previous steps (curation, corpus, ...)


Links to the Exploration step

ISTEX:9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A Framework for Interacting with Paper</title>
<author>
<name sortKey="Robinson, Peter" sort="Robinson, Peter" uniqKey="Robinson P" first="Peter" last="Robinson">Peter Robinson</name>
</author>
<author>
<name sortKey="Sheppard, Dan" sort="Sheppard, Dan" uniqKey="Sheppard D" first="Dan" last="Sheppard">Dan Sheppard</name>
</author>
<author>
<name sortKey="Watts, Richard" sort="Watts, Richard" uniqKey="Watts R" first="Richard" last="Watts">Richard Watts</name>
</author>
<author>
<name sortKey="Harding, Robert" sort="Harding, Robert" uniqKey="Harding R" first="Robert" last="Harding">Robert Harding</name>
</author>
<author>
<name sortKey="Lay, Steve" sort="Lay, Steve" uniqKey="Lay S" first="Steve" last="Lay">Steve Lay</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE</idno>
<date when="1997" year="1997">1997</date>
<idno type="doi">10.1111/1467-8659.16.3conferenceissue.34</idno>
<idno type="url">https://api.istex.fr/document/9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">000234</idno>
<idno type="wicri:Area/Istex/Curation">000231</idno>
<idno type="wicri:Area/Istex/Checkpoint">001A11</idno>
<idno type="wicri:doubleKey">0167-7055:1997:Robinson P:a:framework:for</idno>
<idno type="wicri:Area/Main/Merge">002690</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">A Framework for Interacting with Paper</title>
<author>
<name sortKey="Robinson, Peter" sort="Robinson, Peter" uniqKey="Robinson P" first="Peter" last="Robinson">Peter Robinson</name>
<affiliation>
<wicri:noCountry code="subField">3QG</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Sheppard, Dan" sort="Sheppard, Dan" uniqKey="Sheppard D" first="Dan" last="Sheppard">Dan Sheppard</name>
<affiliation>
<wicri:noCountry code="subField">3QG</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Watts, Richard" sort="Watts, Richard" uniqKey="Watts R" first="Richard" last="Watts">Richard Watts</name>
<affiliation>
<wicri:noCountry code="subField">3QG</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Harding, Robert" sort="Harding, Robert" uniqKey="Harding R" first="Robert" last="Harding">Robert Harding</name>
<affiliation>
<wicri:noCountry code="subField">3QG</wicri:noCountry>
</affiliation>
</author>
<author>
<name sortKey="Lay, Steve" sort="Lay, Steve" uniqKey="Lay S" first="Steve" last="Lay">Steve Lay</name>
<affiliation>
<wicri:noCountry code="subField">3QG</wicri:noCountry>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Computer Graphics Forum</title>
<idno type="ISSN">0167-7055</idno>
<idno type="eISSN">1467-8659</idno>
<imprint>
<publisher>Blackwell Publishers Ltd</publisher>
<pubPlace>Oxford, UK and Boston, USA</pubPlace>
<date type="published" when="1997-09">1997-09</date>
<biblScope unit="volume">16</biblScope>
<biblScope unit="supplement">s3</biblScope>
<biblScope unit="page" from="C329">C329</biblScope>
<biblScope unit="page" to="C334">C334</biblScope>
</imprint>
<idno type="ISSN">0167-7055</idno>
</series>
<idno type="istex">9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE</idno>
<idno type="DOI">10.1111/1467-8659.16.3conferenceissue.34</idno>
<idno type="ArticleID">CGF170</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">0167-7055</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Augmented Reality</term>
<term>Human‐Computer Interaction</term>
<term>Information retrieval</term>
<term>Registration</term>
<term>Video see‐through</term>
</keywords>
</textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">This paper reports on ways of using digitised video from television cameras in user interfaces for computer systems. The DigitalDesk is built around an ordinary physical desk and can be used as such, but it has extra capabilities. A video camera mounted above the desk, pointing down at the work surface, is used to detect where the user is pointing and to read documents that are placed on the desk. A computer‐driven projector is also mounted above the desk, allowing the system to project electronic objects onto the work surface and onto real paper documents. The animated paper documents project is considering particular applications of the technology in electronic publishing. The goal is to combine electronic and printed documents to give a richer presentation than that afforded by either separate medium. This paper describes the framework that has been developed to assist with the preparation and presentation of these mixed‐media documents. The central component is a registry that associates physical locations on pieces of paper with actions. This is surrounded by a number of adaptors that assist with the creation of new documents either from scratch or by translating from conventional hypermedia, and also allow the documents to be edited. Finally the DigitalDesk itself identifies pieces of paper and animates them with the actions described in the registry.</div>
</front>
</TEI>
</record>
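To extract the main bibliographic fields from a record like the one above, a minimal Python sketch using the standard library's ElementTree. It assumes the record has been saved to record.xml with the wicri namespace declared on the root element; the excerpt above omits XML namespace declarations, so a verbatim copy would need xmlns:wicri="..." added first. The filename is hypothetical.

# Extract title, authors, and date from a saved TEI record.
# Assumptions: record.xml declares the wicri namespace, and the
# TEI elements carry no default namespace (as in the excerpt).

import xml.etree.ElementTree as ET

tree = ET.parse("record.xml")
root = tree.getroot()

title = root.findtext(".//titleStmt/title")
authors = [n.text for n in root.findall(".//titleStmt/author/name")]
date = root.find(".//publicationStmt/date").get("when")

print(title)    # A Framework for Interacting with Paper
print(authors)  # ['Peter Robinson', 'Dan Sheppard', 'Richard Watts', ...]
print(date)     # 1997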

To manipulate this document under Unix (Dilib):

# Point at the Main/Merge step of the OcrV1 exploration area
EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/OcrV1/Data/Main/Merge
# Select record 002690 from the biblio.hfd base and pretty-print the XML
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002690 | SxmlIndent | more

Or, equivalently, if $EXPLOR_AREA already points at the root of the exploration area:

HfdSelect -h $EXPLOR_AREA/Data/Main/Merge/biblio.hfd -nk 002690 | SxmlIndent | more

To link to this page within the Wicri network:

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    OcrV1
   |flux=    Main
   |étape=   Merge
   |type=    RBID
   |clé=     ISTEX:9C870BA8F4F70A4AA77C8E1B88A311FEC3A55AFE
   |texte=   A Framework for Interacting with Paper
}}

Wicri

This area was generated with Dilib version V0.6.32.
Data generation: Sat Nov 11 16:53:45 2017. Site generation: Mon Mar 11 23:15:16 2024