Exploration server on the Bourgeois gentilhomme

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates

Internal identifier: 001952 (Istex/Corpus); previous: 001951; next: 001953

Author: Louise T. Su

Source :

RBID : ISTEX:F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB

English descriptors

Abstract

This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non‐performance (user‐related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post‐search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines and also significant engine by discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and significant engine by discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.
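The abstract's two headline relevance measures can be illustrated with a minimal sketch. This is not code from the paper; it assumes the usual definitions in comparative search-engine studies: precision as the share of an engine's retrieved items judged relevant, and relative recall as an engine's share of all relevant items retrieved by any of the compared engines. The example counts are hypothetical.

```python
def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Fraction of a single engine's retrieved items that were judged relevant."""
    return relevant_retrieved / total_retrieved if total_retrieved else 0.0

def relative_recall(relevant_by_engine: int, relevant_by_all_engines: int) -> float:
    """Engine's share of the pooled relevant items found by all compared engines."""
    return relevant_by_engine / relevant_by_all_engines if relevant_by_all_engines else 0.0

# Hypothetical example: an engine returns 20 hits, 12 judged relevant,
# while the four engines together surface 30 distinct relevant items.
print(precision(12, 20))        # 0.6
print(relative_recall(12, 30))  # 0.4
```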

Url:
DOI: 10.1002/asi.10334

Links to Exploration step

ISTEX:F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates</title>
<author>
<name sortKey="Su, Louise T" sort="Su, Louise T" uniqKey="Su L" first="Louise T." last="Su">Louise T. Su</name>
<affiliation>
<mods:affiliation>Formerly Assistant Professor, University of Pittsburgh, Pittsburgh, PA 15260; 593 Wenhwa Road, Rende Shiang, Tainan, Taiwan 717, ROC</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: louisetcsu@aol.com</mods:affiliation>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB</idno>
<date when="2003" year="2003">2003</date>
<idno type="doi">10.1002/asi.10334</idno>
<idno type="url">https://api.istex.fr/ark:/67375/WNG-QNZ931P3-W/fulltext.pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001952</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">001952</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates
<ref type="note" target="#fn1"></ref>
</title>
<author>
<name sortKey="Su, Louise T" sort="Su, Louise T" uniqKey="Su L" first="Louise T." last="Su">Louise T. Su</name>
<affiliation>
<mods:affiliation>Formerly Assistant Professor, University of Pittsburgh, Pittsburgh, PA 15260; 593 Wenhwa Road, Rende Shiang, Tainan, Taiwan 717, ROC</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: louisetcsu@aol.com</mods:affiliation>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j" type="main">Journal of the American Society for Information Science and Technology</title>
<title level="j" type="alt">JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY</title>
<idno type="ISSN">1532-2882</idno>
<idno type="eISSN">1532-2890</idno>
<imprint>
<biblScope unit="vol">54</biblScope>
<biblScope unit="issue">13</biblScope>
<biblScope unit="page" from="1193">1193</biblScope>
<biblScope unit="page" to="1223">1223</biblScope>
<biblScope unit="page-count">31</biblScope>
<publisher>Wiley Subscription Services, Inc., A Wiley Company</publisher>
<pubPlace>Hoboken</pubPlace>
<date type="published" when="2003-11">2003-11</date>
</imprint>
<idno type="ISSN">1532-2882</idno>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">1532-2882</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="Teeft" xml:lang="en">
<term>Academic orientation</term>
<term>Alta</term>
<term>Alta vista</term>
<term>American society</term>
<term>Anova</term>
<term>Anova results</term>
<term>Anova tests</term>
<term>Boolean</term>
<term>Borgman</term>
<term>Browser</term>
<term>Common usage</term>
<term>Complete relevance</term>
<term>Comprehensiveness</term>
<term>Computer experience</term>
<term>Computer ownership</term>
<term>Connectivity</term>
<term>Content analysis</term>
<term>Current study</term>
<term>Database</term>
<term>Discipline interaction</term>
<term>Engine discipline interaction</term>
<term>Entire sample</term>
<term>Good links</term>
<term>Good results</term>
<term>Graduate schools</term>
<term>Helpful results</term>
<term>Hotbot</term>
<term>Humanities</term>
<term>Humanities undergraduates</term>
<term>Information science</term>
<term>Infoseek</term>
<term>Infoseek lycos</term>
<term>Interface</term>
<term>Internet</term>
<term>Internet experience</term>
<term>Invalid links</term>
<term>Irrelevant hits</term>
<term>Keywords</term>
<term>Kruskal</term>
<term>Kruskal wallis tests</term>
<term>Lycos</term>
<term>Main effect</term>
<term>Measure engine</term>
<term>Finding</term>
<term>Findings</term>
<term>Negative comments</term>
<term>Netscape</term>
<term>Online</term>
<term>Online documentation</term>
<term>Other engines</term>
<term>Other software</term>
<term>Output display</term>
<term>Overall performance</term>
<term>Participant</term>
<term>Participant experiences</term>
<term>Personal interests</term>
<term>Positive comments</term>
<term>Previous study</term>
<term>Qualitative data</term>
<term>Quantitative analysis</term>
<term>Quantitative data</term>
<term>Query</term>
<term>Relative performance</term>
<term>Relevance measures</term>
<term>Relevant documents</term>
<term>Relevant hits</term>
<term>Relevant information</term>
<term>Relevant items</term>
<term>Response time</term>
<term>Retrieval</term>
<term>Retrieving</term>
<term>Satisfaction ratings</term>
<term>Sciences undergraduates</term>
<term>Search comprehensiveness</term>
<term>Search engines</term>
<term>Search interface</term>
<term>Search options</term>
<term>Search queries</term>
<term>Search requirements</term>
<term>Search results</term>
<term>Search strategy</term>
<term>Search time</term>
<term>Searcher</term>
<term>Second method</term>
<term>Social sciences</term>
<term>Software</term>
<term>Spearman</term>
<term>Srivastava</term>
<term>Standard deviations</term>
<term>Subject expertise</term>
<term>System features</term>
<term>Time period</term>
<term>Total sample</term>
<term>Tukey</term>
<term>Tukey post</term>
<term>User</term>
<term>User criteria</term>
<term>User evaluation</term>
<term>User satisfaction</term>
<term>User satisfaction measures</term>
<term>Valid links</term>
<term>Verbal data</term>
<term>Vista</term>
<term>Wallis</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non‐performance (user‐related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post‐search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines and also significant engine by discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and significant engine by discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.</div>
</front>
</TEI>
<istex>
<corpusName>wiley</corpusName>
<keywords>
<teeft>
<json:string>lycos</json:string>
<json:string>alta</json:string>
<json:string>alta vista</json:string>
<json:string>infoseek</json:string>
<json:string>social sciences</json:string>
<json:string>search results</json:string>
<json:string>search engines</json:string>
<json:string>user satisfaction</json:string>
<json:string>information science</json:string>
<json:string>american society</json:string>
<json:string>overall performance</json:string>
<json:string>entire sample</json:string>
<json:string>online</json:string>
<json:string>vista</json:string>
<json:string>humanities</json:string>
<json:string>internet</json:string>
<json:string>response time</json:string>
<json:string>spearman</json:string>
<json:string>positive comments</json:string>
<json:string>output display</json:string>
<json:string>retrieval</json:string>
<json:string>anova</json:string>
<json:string>user</json:string>
<json:string>system features</json:string>
<json:string>comprehensiveness</json:string>
<json:string>user criteria</json:string>
<json:string>software</json:string>
<json:string>search interface</json:string>
<json:string>online documentation</json:string>
<json:string>infoseek lycos</json:string>
<json:string>findings</json:string>
<json:string>current study</json:string>
<json:string>borgman</json:string>
<json:string>query</json:string>
<json:string>participant</json:string>
<json:string>relevant items</json:string>
<json:string>negative comments</json:string>
<json:string>valid links</json:string>
<json:string>relevant hits</json:string>
<json:string>total sample</json:string>
<json:string>wallis</json:string>
<json:string>common usage</json:string>
<json:string>search requirements</json:string>
<json:string>kruskal</json:string>
<json:string>user satisfaction measures</json:string>
<json:string>finding</json:string>
<json:string>search time</json:string>
<json:string>quantitative data</json:string>
<json:string>complete relevance</json:string>
<json:string>connectivity</json:string>
<json:string>tukey</json:string>
<json:string>content analysis</json:string>
<json:string>relative performance</json:string>
<json:string>subject expertise</json:string>
<json:string>anova tests</json:string>
<json:string>graduate schools</json:string>
<json:string>user evaluation</json:string>
<json:string>other engines</json:string>
<json:string>hotbot</json:string>
<json:string>kruskal wallis tests</json:string>
<json:string>tukey post</json:string>
<json:string>database</json:string>
<json:string>main effect</json:string>
<json:string>anova results</json:string>
<json:string>srivastava</json:string>
<json:string>participant experiences</json:string>
<json:string>retrieving</json:string>
<json:string>search comprehensiveness</json:string>
<json:string>boolean</json:string>
<json:string>discipline interaction</json:string>
<json:string>searcher</json:string>
<json:string>browser</json:string>
<json:string>verbal data</json:string>
<json:string>keywords</json:string>
<json:string>netscape</json:string>
<json:string>relevant information</json:string>
<json:string>engine discipline interaction</json:string>
<json:string>search queries</json:string>
<json:string>quantitative analysis</json:string>
<json:string>search strategy</json:string>
<json:string>relevance measures</json:string>
<json:string>standard deviations</json:string>
<json:string>time period</json:string>
<json:string>humanities undergraduates</json:string>
<json:string>internet experience</json:string>
<json:string>search options</json:string>
<json:string>satisfaction ratings</json:string>
<json:string>sciences undergraduates</json:string>
<json:string>second method</json:string>
<json:string>helpful results</json:string>
<json:string>academic orientation</json:string>
<json:string>invalid links</json:string>
<json:string>irrelevant hits</json:string>
<json:string>relevant documents</json:string>
<json:string>personal interests</json:string>
<json:string>qualitative data</json:string>
<json:string>computer ownership</json:string>
<json:string>previous study</json:string>
<json:string>good results</json:string>
<json:string>good links</json:string>
<json:string>computer experience</json:string>
<json:string>other software</json:string>
<json:string>measure engine</json:string>
<json:string>interface</json:string>
<json:string>methodology</json:string>
<json:string>relevance</json:string>
<json:string>information problem</json:string>
<json:string>statistical analysis</json:string>
<json:string>rank order</json:string>
<json:string>great majority</json:string>
<json:string>class projects</json:string>
<json:string>total number</json:string>
<json:string>good choice</json:string>
<json:string>major search engines</json:string>
<json:string>good information</json:string>
<json:string>precision ratio</json:string>
<json:string>best precision performer</json:string>
<json:string>various measures</json:string>
<json:string>search outputs</json:string>
<json:string>best relevance</json:string>
<json:string>pilot study</json:string>
<json:string>search topic</json:string>
<json:string>good choices</json:string>
<json:string>performance ratings</json:string>
<json:string>better choice</json:string>
<json:string>engines need</json:string>
<json:string>netscape navigator</json:string>
<json:string>evaluative comments</json:string>
<json:string>open text</json:string>
<json:string>overall characteristics</json:string>
<json:string>more experience</json:string>
<json:string>rating</json:string>
<json:string>leighton</json:string>
<json:string>option</json:string>
<json:string>search output</json:string>
<json:string>performance measures</json:string>
<json:string>best score</json:string>
<json:string>other search engines</json:string>
<json:string>search process</json:string>
<json:string>utility measures</json:string>
<json:string>information problems</json:string>
<json:string>search engines need</json:string>
<json:string>academic disciplines</json:string>
<json:string>rank humanities</json:string>
<json:string>gordon pathak</json:string>
<json:string>computer experiences</json:string>
<json:string>information retrieval performance</json:string>
<json:string>leighton srivastava</json:string>
<json:string>overall system success</json:string>
<json:string>boolean logic</json:string>
<json:string>value rating</json:string>
<json:string>exact values</json:string>
<json:string>machine rankings</json:string>
<json:string>multidimensional evaluation</json:string>
<json:string>search tools</json:string>
<json:string>relevance score</json:string>
<json:string>next section</json:string>
<json:string>online catalogs</json:string>
<json:string>data collection</json:string>
<json:string>higher user satisfaction</json:string>
<json:string>information need</json:string>
<json:string>useful feedback</json:string>
<json:string>better performer</json:string>
<json:string>information systems</json:string>
<json:string>more information</json:string>
<json:string>more females</json:string>
<json:string>slow speed</json:string>
<json:string>good format</json:string>
<json:string>retrieval results</json:string>
<json:string>additional keywords</json:string>
<json:string>good organization</json:string>
<json:string>positive remarks</json:string>
<json:string>relevant literature</json:string>
<json:string>negative remarks</json:string>
<json:string>highest number</json:string>
<json:string>unique criteria</json:string>
<json:string>search terms</json:string>
<json:string>social science</json:string>
<json:string>participant backgrounds</json:string>
<json:string>satisfaction measures</json:string>
<json:string>mexico city</json:string>
<json:string>criterion</json:string>
<json:string>search</json:string>
<json:string>engine</json:string>
</teeft>
</keywords>
<author>
<json:item>
<name>Louise T. Su</name>
<affiliations>
<json:string>E-mail: louisetcsu@aol.com</json:string>
<json:string>Formerly Assistant Professor, University of Pittsburgh, Pittsburgh, PA 15260; 593 Wenhwa Road, Rende Shiang, Tainan, Taiwan 717, ROC</json:string>
</affiliations>
</json:item>
</author>
<articleId>
<json:string>ASI10334</json:string>
</articleId>
<arkIstex>ark:/67375/WNG-QNZ931P3-W</arkIstex>
<language>
<json:string>eng</json:string>
</language>
<originalGenre>
<json:string>article</json:string>
</originalGenre>
<abstract>This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non‐performance (user‐related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post‐search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines and also significant engine by discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and significant engine by discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.</abstract>
<qualityIndicators>
<score>10</score>
<pdfWordCount>22573</pdfWordCount>
<pdfCharCount>137371</pdfCharCount>
<pdfVersion>1.3</pdfVersion>
<pdfPageCount>31</pdfPageCount>
<pdfPageSize>630 x 810 pts</pdfPageSize>
<refBibsNative>true</refBibsNative>
<abstractWordCount>260</abstractWordCount>
<abstractCharCount>1838</abstractCharCount>
<keywordCount>0</keywordCount>
</qualityIndicators>
<title>A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates</title>
<genre>
<json:string>article</json:string>
</genre>
<host>
<title>Journal of the American Society for Information Science and Technology</title>
<language>
<json:string>unknown</json:string>
</language>
<doi>
<json:string>10.1002/(ISSN)1532-2890</json:string>
</doi>
<issn>
<json:string>1532-2882</json:string>
</issn>
<eissn>
<json:string>1532-2890</json:string>
</eissn>
<publisherId>
<json:string>ASI</json:string>
</publisherId>
<volume>54</volume>
<issue>13</issue>
<pages>
<first>1193</first>
<last>1223</last>
<total>31</total>
</pages>
<genre>
<json:string>journal</json:string>
</genre>
<subject>
<json:item>
<value>evaluation</value>
</json:item>
<json:item>
<value>information retrieval</value>
</json:item>
<json:item>
<value>end user searching</value>
</json:item>
<json:item>
<value>online searching</value>
</json:item>
<json:item>
<value>search engines</value>
</json:item>
<json:item>
<value>World Wide Web</value>
</json:item>
<json:item>
<value>performance</value>
</json:item>
<json:item>
<value>user satisfaction</value>
</json:item>
<json:item>
<value>usability</value>
</json:item>
<json:item>
<value>Research Article</value>
</json:item>
</subject>
</host>
<namedEntities>
<unitex>
<date></date>
<geogName></geogName>
<orgName></orgName>
<orgName_funder></orgName_funder>
<orgName_provider></orgName_provider>
<persName></persName>
<placeName></placeName>
<ref_url></ref_url>
<ref_bibl></ref_bibl>
<bibl></bibl>
</unitex>
</namedEntities>
<ark>
<json:string>ark:/67375/WNG-QNZ931P3-W</json:string>
</ark>
<categories>
<inist>
<json:string>1 - sciences humaines et sociales</json:string>
</inist>
</categories>
<publicationDate>2003</publicationDate>
<copyrightDate>2003</copyrightDate>
<doi>
<json:string>10.1002/asi.10334</json:string>
</doi>
<id>F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB</id>
<score>1</score>
<fulltext>
<json:item>
<extension>pdf</extension>
<original>true</original>
<mimetype>application/pdf</mimetype>
<uri>https://api.istex.fr/ark:/67375/WNG-QNZ931P3-W/fulltext.pdf</uri>
</json:item>
<json:item>
<extension>zip</extension>
<original>false</original>
<mimetype>application/zip</mimetype>
<uri>https://api.istex.fr/ark:/67375/WNG-QNZ931P3-W/bundle.zip</uri>
</json:item>
<istex:fulltextTEI uri="https://api.istex.fr/ark:/67375/WNG-QNZ931P3-W/fulltext.tei">
<teiHeader>
<fileDesc>
<titleStmt>
<title level="a" type="main" xml:lang="en">A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates
<ref type="note" target="#fn1"></ref>
</title>
</titleStmt>
<publicationStmt>
<authority>ISTEX</authority>
<publisher>Wiley Subscription Services, Inc., A Wiley Company</publisher>
<pubPlace>Hoboken</pubPlace>
<availability>
<licence>Copyright © 2003 Wiley Periodicals, Inc.</licence>
</availability>
<date type="published" when="2003-11"></date>
</publicationStmt>
<notesStmt>
<note type="content-type" subtype="article" source="article" scheme="https://content-type.data.istex.fr/ark:/67375/XTP-6N5SZHKN-D">article</note>
<note type="publication-type" subtype="journal" scheme="https://publication-type.data.istex.fr/ark:/67375/JMC-0GLKJH51-B">journal</note>
</notesStmt>
<sourceDesc>
<biblStruct type="article">
<analytic>
<title level="a" type="main" xml:lang="en">A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates
<ref type="note" target="#fn1"></ref>
</title>
<author xml:id="author-0000">
<persName>
<forename type="first">Louise T.</forename>
<surname>Su</surname>
</persName>
<email>louisetcsu@aol.com</email>
<affiliation>
<address>
<addrLine>Formerly Assistant Professor</addrLine>
<orgName type="institution">University of Pittsburgh</orgName>
<address>
<addrLine>Pittsburgh</addrLine>
<addrLine>PA 15260; 593 Wenhwa Road</addrLine>
<addrLine>Rende Shiang</addrLine>
<addrLine>Tainan</addrLine>
<addrLine>Taiwan 717</addrLine>
<addrLine>ROC</addrLine>
</address>
</address>
</affiliation>
</author>
<idno type="istex">F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB</idno>
<idno type="ark">ark:/67375/WNG-QNZ931P3-W</idno>
<idno type="DOI">10.1002/asi.10334</idno>
<idno type="unit">ASI10334</idno>
<idno type="toTypesetVersion">file:ASI.ASI10334.pdf</idno>
</analytic>
<monogr>
<title level="j" type="main">Journal of the American Society for Information Science and Technology</title>
<title level="j" type="alt">JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY</title>
<idno type="pISSN">1532-2882</idno>
<idno type="eISSN">1532-2890</idno>
<idno type="book-DOI">10.1002/(ISSN)1532-2890</idno>
<idno type="book-part-DOI">10.1002/asi.v54:13</idno>
<idno type="product">ASI</idno>
<imprint>
<biblScope unit="vol">54</biblScope>
<biblScope unit="issue">13</biblScope>
<biblScope unit="page" from="1193">1193</biblScope>
<biblScope unit="page" to="1223">1223</biblScope>
<biblScope unit="page-count">31</biblScope>
<publisher>Wiley Subscription Services, Inc., A Wiley Company</publisher>
<pubPlace>Hoboken</pubPlace>
<date type="published" when="2003-11"></date>
</imprint>
</monogr>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<abstract xml:lang="en" style="main">
<head>Abstract</head>
<p>This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non‐performance (user‐related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post‐search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision P
<hi rend="subscript">R1</hi>
, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines and also significant engine by discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and significant engine by discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.</p>
</abstract>
<textClass>
<keywords rend="articleCategory">
<term>Research Article</term>
</keywords>
<keywords rend="tocHeading1">
<term>Research Articles</term>
</keywords>
</textClass>
<textClass>
<keywords ana="subject">
<term ref="psi.asis.org/digital/evaluation">evaluation</term>
<term ref="psi.asis.org/digital/information+retrieval">information retrieval</term>
<term ref="psi.asis.org/digital/end+user+searching">end user searching</term>
<term ref="psi.asis.org/digital/online+searching">online searching</term>
<term ref="psi.asis.org/digital/search+engines">search engines</term>
<term ref="psi.asis.org/digital/World+Wide+Web">World Wide Web</term>
<term ref="psi.asis.org/digital/performance">performance</term>
<term ref="psi.asis.org/digital/user+satisfaction">user satisfaction</term>
<term ref="psi.asis.org/digital/usability">usability</term>
</keywords>
</textClass>
<langUsage>
<language ident="en"></language>
</langUsage>
</profileDesc>
</teiHeader>
</istex:fulltextTEI>
<json:item>
<extension>txt</extension>
<original>false</original>
<mimetype>text/plain</mimetype>
<uri>https://api.istex.fr/ark:/67375/WNG-QNZ931P3-W/fulltext.txt</uri>
</json:item>
</fulltext>
<metadata>
<istex:metadataXml wicri:clean="Wiley, elements deleted: body">
<istex:xmlDeclaration>version="1.0" encoding="UTF-8" standalone="yes"</istex:xmlDeclaration>
<istex:document>
<component version="2.0" type="serialArticle" xml:lang="en">
<header>
<publicationMeta level="product">
<publisherInfo>
<publisherName>Wiley Subscription Services, Inc., A Wiley Company</publisherName>
<publisherLoc>Hoboken</publisherLoc>
</publisherInfo>
<doi registered="yes">10.1002/(ISSN)1532-2890</doi>
<issn type="print">1532-2882</issn>
<issn type="electronic">1532-2890</issn>
<idGroup>
<id type="product" value="ASI"></id>
</idGroup>
<titleGroup>
<title type="main" xml:lang="en" sort="JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY">Journal of the American Society for Information Science and Technology</title>
<title type="short">J. Am. Soc. Inf. Sci.</title>
</titleGroup>
<selfCitationGroup>
<citation type="ancestor" xml:id="cit1">
<journalTitle>Journal of the American Society for Information Science</journalTitle>
<accessionId ref="info:x-wiley/issn/00028231">0002-8231</accessionId>
<accessionId ref="info:x-wiley/issn/10974571">1097-4571</accessionId>
<pubYear year="2000">2000</pubYear>
<vol>51</vol>
<issue>14</issue>
</citation>
</selfCitationGroup>
</publicationMeta>
<publicationMeta level="part" position="130">
<doi origin="wiley" registered="yes">10.1002/asi.v54:13</doi>
<numberingGroup>
<numbering type="journalVolume" number="54">54</numbering>
<numbering type="journalIssue">13</numbering>
</numberingGroup>
<coverDate startDate="2003-11">November 2003</coverDate>
</publicationMeta>
<publicationMeta level="unit" type="article" position="30" status="forIssue">
<doi origin="wiley" registered="yes">10.1002/asi.10334</doi>
<idGroup>
<id type="unit" value="ASI10334"></id>
</idGroup>
<countGroup>
<count type="pageTotal" number="31"></count>
</countGroup>
<titleGroup>
<title type="articleCategory">Research Article</title>
<title type="tocHeading1">Research Articles</title>
</titleGroup>
<copyright ownership="publisher">Copyright © 2003 Wiley Periodicals, Inc.</copyright>
<eventGroup>
<event type="manuscriptReceived" date="2001-12-05"></event>
<event type="manuscriptRevised" date="2002-09-03"></event>
<event type="manuscriptAccepted" date="2003-03-17"></event>
<event type="firstOnline" date="2003-07-10"></event>
<event type="publishedOnlineFinalForm" date="2003-10-02"></event>
<event type="xmlConverted" agent="Converter:JWSART34_TO_WML3G version:2.3.4 mode:FullText source:FullText result:FullText" date="2010-03-30"></event>
<event type="xmlConverted" agent="Converter:WILEY_ML3G_TO_WILEY_ML3GV2 version:3.8.8" date="2014-01-06"></event>
<event type="xmlConverted" agent="Converter:WML3G_To_WML3G version:4.1.7 mode:FullText,remove_FC" date="2014-10-30"></event>
</eventGroup>
<numberingGroup>
<numbering type="pageFirst">1193</numbering>
<numbering type="pageLast">1223</numbering>
</numberingGroup>
<subjectInfo>
<subject href="psi.asis.org/digital/evaluation">evaluation</subject>
<subject href="psi.asis.org/digital/information+retrieval">information retrieval</subject>
<subject href="psi.asis.org/digital/end+user+searching">end user searching</subject>
<subject href="psi.asis.org/digital/online+searching">online searching</subject>
<subject href="psi.asis.org/digital/search+engines">search engines</subject>
<subject href="psi.asis.org/digital/World+Wide+Web">World Wide Web</subject>
<subject href="psi.asis.org/digital/performance">performance</subject>
<subject href="psi.asis.org/digital/user+satisfaction">user satisfaction</subject>
<subject href="psi.asis.org/digital/usability">usability</subject>
</subjectInfo>
<objectNameGroup>
<objectName elementName="appendix">Appendix</objectName>
</objectNameGroup>
<linkGroup>
<link type="toTypesetVersion" href="file:ASI.ASI10334.pdf"></link>
</linkGroup>
</publicationMeta>
<contentMeta>
<countGroup>
<count type="figureTotal" number="2"></count>
<count type="tableTotal" number="23"></count>
<count type="referenceTotal" number="25"></count>
<count type="wordTotal" number="26651"></count>
</countGroup>
<titleGroup>
<title type="main" xml:lang="en">A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates
<link href="#fn1"></link>
</title>
</titleGroup>
<creators>
<creator xml:id="au1" creatorRole="author" affiliationRef="#af1">
<personName>
<givenNames>Louise T.</givenNames>
<familyName>Su</familyName>
</personName>
<contactDetails>
<email>louisetcsu@aol.com</email>
</contactDetails>
</creator>
</creators>
<affiliationGroup>
<affiliation xml:id="af1" countryCode="TW" type="organization">
<unparsedAffiliation>Formerly Assistant Professor, University of Pittsburgh, Pittsburgh, PA 15260; 593 Wenhwa Road, Rende Shiang, Tainan, Taiwan 717, ROC</unparsedAffiliation>
</affiliation>
</affiliationGroup>
<abstractGroup>
<abstract type="main" xml:lang="en">
<title type="main">Abstract</title>
<p>This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non‐performance (user‐related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post‐search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision P
<sub>R1</sub>
, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines and also significant engine by discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and significant engine by discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.</p>
</abstract>
</abstractGroup>
</contentMeta>
<noteGroup>
<note xml:id="fn1">
<p>Partial results of this work were presented during the 1999 ASIS Annual Meeting, October 31–November 4, 1999, Washington, D.C., and appeared as “Evaluation of Web search engines by undergraduates” on pages 98–114 in the Proceedings of the 62nd American Society for Information Science, 36, edited by M.M.K. Hlava and L. Woods.</p>
</note>
</noteGroup>
</header>
</component>
</istex:document>
</istex:metadataXml>
<mods version="3.6">
<titleInfo lang="en">
<title>A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates</title>
</titleInfo>
<titleInfo type="alternative" contentType="CDATA" lang="en">
<title>A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates</title>
</titleInfo>
<name type="personal">
<namePart type="given">Louise T.</namePart>
<namePart type="family">Su</namePart>
<affiliation>Formerly Assistant Professor, University of Pittsburgh, Pittsburgh, PA 15260; 593 Wenhwa Road, Rende Shiang, Tainan, Taiwan 717, ROC</affiliation>
<affiliation>E-mail: louisetcsu@aol.com</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<typeOfResource>text</typeOfResource>
<genre type="article" displayLabel="article" authority="ISTEX" authorityURI="https://content-type.data.istex.fr" valueURI="https://content-type.data.istex.fr/ark:/67375/XTP-6N5SZHKN-D">article</genre>
<originInfo>
<publisher>Wiley Subscription Services, Inc., A Wiley Company</publisher>
<place>
<placeTerm type="text">Hoboken</placeTerm>
</place>
<dateIssued encoding="w3cdtf">2003-11</dateIssued>
<dateCaptured encoding="w3cdtf">2001-12-05</dateCaptured>
<dateValid encoding="w3cdtf">2003-03-17</dateValid>
<copyrightDate encoding="w3cdtf">2003</copyrightDate>
</originInfo>
<language>
<languageTerm type="code" authority="rfc3066">en</languageTerm>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
</language>
<physicalDescription>
<extent unit="figures">2</extent>
<extent unit="tables">23</extent>
<extent unit="references">25</extent>
<extent unit="words">26651</extent>
</physicalDescription>
<abstract lang="en">This paper presents an application of the model described in Part I to the evaluation of Web search engines by undergraduates. The study observed how 36 undergraduates used four major search engines to find information for their own individual problems and how they evaluated these engines based on actual interaction with the search engines. User evaluation was based on 16 performance measures representing five evaluation criteria: relevance, efficiency, utility, user satisfaction, and connectivity. Non‐performance (user‐related) measures were also applied. Each participant searched his/her own topic on all four engines and provided satisfaction ratings for system features and interaction and reasons for satisfaction. Each also made relevance judgements of retrieved items in relation to his/her own information need and participated in post‐search interviews to provide reactions to the search results and overall performance. The study found significant differences in precision PR1, relative recall, user satisfaction with output display, time saving, value of search results, and overall performance among the four engines and also significant engine by discipline interactions on all these measures. In addition, the study found significant differences in user satisfaction with response time among the four engines, and significant engine by discipline interaction in user satisfaction with search interface. None of the four search engines dominated in every aspect of the multidimensional evaluation. Content analysis of verbal data identified a number of user criteria and users' evaluative comments based on these criteria. Results from both quantitative analysis and content analysis provide insight for system design and development, and useful feedback on strengths and weaknesses of search engines for system improvement.</abstract>
<note type="content">*Partial results of this work were presented during the 1999 ASIS Annual Meeting, October 31–November 4, 1999, Washington, D.C., and appeared as “Evaluation of Web search engines by undergraduates” on pages 98–114 in the Proceedings of the 62nd American Society for Information Science, 36, edited by M.M.K. Hlava and L. Woods.</note>
<relatedItem type="host">
<titleInfo>
<title>Journal of the American Society for Information Science and Technology</title>
</titleInfo>
<titleInfo type="abbreviated">
<title>J. Am. Soc. Inf. Sci.</title>
</titleInfo>
<genre type="journal" authority="ISTEX" authorityURI="https://publication-type.data.istex.fr" valueURI="https://publication-type.data.istex.fr/ark:/67375/JMC-0GLKJH51-B">journal</genre>
<subject>
<genre>index-terms</genre>
<topic authorityURI="psi.asis.org/digital/evaluation">evaluation</topic>
<topic authorityURI="psi.asis.org/digital/information+retrieval">information retrieval</topic>
<topic authorityURI="psi.asis.org/digital/end+user+searching">end user searching</topic>
<topic authorityURI="psi.asis.org/digital/online+searching">online searching</topic>
<topic authorityURI="psi.asis.org/digital/search+engines">search engines</topic>
<topic authorityURI="psi.asis.org/digital/World+Wide+Web">World Wide Web</topic>
<topic authorityURI="psi.asis.org/digital/performance">performance</topic>
<topic authorityURI="psi.asis.org/digital/user+satisfaction">user satisfaction</topic>
<topic authorityURI="psi.asis.org/digital/usability">usability</topic>
</subject>
<subject>
<genre>article-category</genre>
<topic>Research Article</topic>
</subject>
<identifier type="ISSN">1532-2882</identifier>
<identifier type="eISSN">1532-2890</identifier>
<identifier type="DOI">10.1002/(ISSN)1532-2890</identifier>
<identifier type="PublisherID">ASI</identifier>
<part>
<date>2003</date>
<detail type="volume">
<caption>vol.</caption>
<number>54</number>
</detail>
<detail type="issue">
<caption>no.</caption>
<number>13</number>
</detail>
<extent unit="pages">
<start>1193</start>
<end>1223</end>
<total>31</total>
</extent>
</part>
</relatedItem>
<relatedItem type="preceding">
<titleInfo>
<title>Journal of the American Society for Information Science</title>
</titleInfo>
<identifier type="ISSN">0002-8231</identifier>
<identifier type="ISSN">1097-4571</identifier>
<part>
<date point="end">2000</date>
<detail type="volume">
<caption>last vol.</caption>
<number>51</number>
</detail>
<detail type="issue">
<caption>last no.</caption>
<number>14</number>
</detail>
</part>
</relatedItem>
<identifier type="istex">F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB</identifier>
<identifier type="ark">ark:/67375/WNG-QNZ931P3-W</identifier>
<identifier type="DOI">10.1002/asi.10334</identifier>
<identifier type="ArticleID">ASI10334</identifier>
<accessCondition type="use and reproduction" contentType="copyright">Copyright © 2003 Wiley Periodicals, Inc.</accessCondition>
<recordInfo>
<recordContentSource authority="ISTEX" authorityURI="https://loaded-corpus.data.istex.fr" valueURI="https://loaded-corpus.data.istex.fr/ark:/67375/XBH-L0C46X92-X">wiley</recordContentSource>
<recordOrigin>Wiley Subscription Services, Inc., A Wiley Company</recordOrigin>
</recordInfo>
</mods>
<json:item>
<extension>json</extension>
<original>false</original>
<mimetype>application/json</mimetype>
<uri>https://api.istex.fr/ark:/67375/WNG-QNZ931P3-W/record.json</uri>
</json:item>
</metadata>
<serie></serie>
</istex>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/BourgeoisGentilV1/Data/Istex/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001952 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Istex/Corpus/biblio.hfd -nk 001952 | SxmlIndent | more

To link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    BourgeoisGentilV1
   |flux=    Istex
   |étape=   Corpus
   |type=    RBID
   |clé=     ISTEX:F3C5ABC3A6DB016C093BFDEF05C6804F6E718BEB
   |texte=   A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates
}}

Wicri

This area was generated with Dilib version V0.6.33.
Data generation: Sun Sep 29 22:08:28 2019. Site generation: Mon Mar 11 10:07:23 2024