Exploration server on haptic devices


Computational Intelligence Techniques for Tactile Sensing Systems

Internal identifier: 002482 (Pmc/Curation); previous: 002481; next: 002483

Computational Intelligence Techniques for Tactile Sensing Systems

Authors: Paolo Gastaldo; Luigi Pinna; Lucia Seminara; Maurizio Valle; Rodolfo Zunino

Source:

RBID : PMC:4118344

Abstract

Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach.


Url:
DOI: 10.3390/s140610952
PubMed: 24949646
PubMed Central: 4118344


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Computational Intelligence Techniques for Tactile Sensing Systems</title>
<author>
<name sortKey="Gastaldo, Paolo" sort="Gastaldo, Paolo" uniqKey="Gastaldo P" first="Paolo" last="Gastaldo">Paolo Gastaldo</name>
</author>
<author>
<name sortKey="Pinna, Luigi" sort="Pinna, Luigi" uniqKey="Pinna L" first="Luigi" last="Pinna">Luigi Pinna</name>
</author>
<author>
<name sortKey="Seminara, Lucia" sort="Seminara, Lucia" uniqKey="Seminara L" first="Lucia" last="Seminara">Lucia Seminara</name>
</author>
<author>
<name sortKey="Valle, Maurizio" sort="Valle, Maurizio" uniqKey="Valle M" first="Maurizio" last="Valle">Maurizio Valle</name>
</author>
<author>
<name sortKey="Zunino, Rodolfo" sort="Zunino, Rodolfo" uniqKey="Zunino R" first="Rodolfo" last="Zunino">Rodolfo Zunino</name>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">PMC</idno>
<idno type="pmid">24949646</idno>
<idno type="pmc">4118344</idno>
<idno type="url">http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4118344</idno>
<idno type="RBID">PMC:4118344</idno>
<idno type="doi">10.3390/s140610952</idno>
<date when="2014">2014</date>
<idno type="wicri:Area/Pmc/Corpus">002482</idno>
<idno type="wicri:Area/Pmc/Curation">002482</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a" type="main">Computational Intelligence Techniques for Tactile Sensing Systems</title>
<author>
<name sortKey="Gastaldo, Paolo" sort="Gastaldo, Paolo" uniqKey="Gastaldo P" first="Paolo" last="Gastaldo">Paolo Gastaldo</name>
</author>
<author>
<name sortKey="Pinna, Luigi" sort="Pinna, Luigi" uniqKey="Pinna L" first="Luigi" last="Pinna">Luigi Pinna</name>
</author>
<author>
<name sortKey="Seminara, Lucia" sort="Seminara, Lucia" uniqKey="Seminara L" first="Lucia" last="Seminara">Lucia Seminara</name>
</author>
<author>
<name sortKey="Valle, Maurizio" sort="Valle, Maurizio" uniqKey="Valle M" first="Maurizio" last="Valle">Maurizio Valle</name>
</author>
<author>
<name sortKey="Zunino, Rodolfo" sort="Zunino, Rodolfo" uniqKey="Zunino R" first="Rodolfo" last="Zunino">Rodolfo Zunino</name>
</author>
</analytic>
<series>
<title level="j">Sensors (Basel, Switzerland)</title>
<idno type="eISSN">1424-8220</idno>
<imprint>
<date when="2014">2014</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass></textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">
<p>Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach.</p>
</div>
</front>
<back>
<div1 type="bibliography">
<listBibl>
<biblStruct>
<analytic>
<author>
<name sortKey="Schmitz, A" uniqKey="Schmitz A">A. Schmitz</name>
</author>
<author>
<name sortKey="Pattacini, U" uniqKey="Pattacini U">U. Pattacini</name>
</author>
<author>
<name sortKey="Nori, F" uniqKey="Nori F">F. Nori</name>
</author>
<author>
<name sortKey="Natale, L" uniqKey="Natale L">L. Natale</name>
</author>
<author>
<name sortKey="Metta, G" uniqKey="Metta G">G. Metta</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Ascia, A" uniqKey="Ascia A">A. Ascia</name>
</author>
<author>
<name sortKey="Biso, M" uniqKey="Biso M">M. Biso</name>
</author>
<author>
<name sortKey="Ansaldo, A" uniqKey="Ansaldo A">A. Ansaldo</name>
</author>
<author>
<name sortKey="Schmitz, A" uniqKey="Schmitz A">A. Schmitz</name>
</author>
<author>
<name sortKey="Ricci, D" uniqKey="Ricci D">D. Ricci</name>
</author>
<author>
<name sortKey="Natale, L" uniqKey="Natale L">L. Natale</name>
</author>
<author>
<name sortKey="Metta, G" uniqKey="Metta G">G. Metta</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dahiya, R" uniqKey="Dahiya R">R. Dahiya</name>
</author>
<author>
<name sortKey="Cattin, D" uniqKey="Cattin D">D. Cattin</name>
</author>
<author>
<name sortKey="Adami, A" uniqKey="Adami A">A. Adami</name>
</author>
<author>
<name sortKey="Collini, C" uniqKey="Collini C">C. Collini</name>
</author>
<author>
<name sortKey="Barboni, L" uniqKey="Barboni L">L. Barboni</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
<author>
<name sortKey="Lorenzelli, L" uniqKey="Lorenzelli L">L. Lorenzelli</name>
</author>
<author>
<name sortKey="Oboe, R" uniqKey="Oboe R">R. Oboe</name>
</author>
<author>
<name sortKey="Metta, G" uniqKey="Metta G">G. Metta</name>
</author>
<author>
<name sortKey="Brunetti, F" uniqKey="Brunetti F">F. Brunetti</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dahl, T S" uniqKey="Dahl T">T.S. Dahl</name>
</author>
<author>
<name sortKey="Swere, E A R" uniqKey="Swere E">E.A.R. Swere</name>
</author>
<author>
<name sortKey="Palmer, A" uniqKey="Palmer A">A. Palmer</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Argall, B" uniqKey="Argall B">B. Argall</name>
</author>
<author>
<name sortKey="Billard, A" uniqKey="Billard A">A. Billard</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kandel, E R" uniqKey="Kandel E">E.R. Kandel</name>
</author>
<author>
<name sortKey="Schwartz, J H" uniqKey="Schwartz J">J.H. Schwartz</name>
</author>
<author>
<name sortKey="Jessell, T M" uniqKey="Jessell T">T.M. Jessell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dahiya, R S" uniqKey="Dahiya R">R.S. Dahiya</name>
</author>
<author>
<name sortKey="Mittendorfer, P" uniqKey="Mittendorfer P">P. Mittendorfer</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
<author>
<name sortKey="Cheng, G" uniqKey="Cheng G">G. Cheng</name>
</author>
<author>
<name sortKey="Lumelsky, V J" uniqKey="Lumelsky V">V.J. Lumelsky</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Decherchi, S" uniqKey="Decherchi S">S. Decherchi</name>
</author>
<author>
<name sortKey="Gastaldo, P" uniqKey="Gastaldo P">P. Gastaldo</name>
</author>
<author>
<name sortKey="Dahiya, R S" uniqKey="Dahiya R">R.S. Dahiya</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
<author>
<name sortKey="Zunino, R" uniqKey="Zunino R">R. Zunino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Fishel, J A" uniqKey="Fishel J">J.A. Fishel</name>
</author>
<author>
<name sortKey="Loeb, G E" uniqKey="Loeb G">G.E. Loeb</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Mazid, A M" uniqKey="Mazid A">A.M. Mazid</name>
</author>
<author>
<name sortKey="Russell, R A" uniqKey="Russell R">R.A. Russell</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Enomoto, T" uniqKey="Enomoto T">T. Enomoto</name>
</author>
<author>
<name sortKey="Ohnishi, K" uniqKey="Ohnishi K">K. Ohnishi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dahiya, R S" uniqKey="Dahiya R">R.S. Dahiya</name>
</author>
<author>
<name sortKey="Metta, G" uniqKey="Metta G">G. Metta</name>
</author>
<author>
<name sortKey="Cannata, G" uniqKey="Cannata G">G. Cannata</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Iwata, H" uniqKey="Iwata H">H. Iwata</name>
</author>
<author>
<name sortKey="Sugano, S" uniqKey="Sugano S">S. Sugano</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Tawil, D S" uniqKey="Tawil D">D.S. Tawil</name>
</author>
<author>
<name sortKey="Rye, D" uniqKey="Rye D">D. Rye</name>
</author>
<author>
<name sortKey="Velonaki, M" uniqKey="Velonaki M">M. Velonaki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Flagg, A" uniqKey="Flagg A">A. Flagg</name>
</author>
<author>
<name sortKey="Tam, D" uniqKey="Tam D">D. Tam</name>
</author>
<author>
<name sortKey="Maclean, K" uniqKey="Maclean K">K. MacLean</name>
</author>
<author>
<name sortKey="Flagg, R" uniqKey="Flagg R">R. Flagg</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Johnsson, M" uniqKey="Johnsson M">M. Johnsson</name>
</author>
<author>
<name sortKey="Balkenius, C" uniqKey="Balkenius C">C. Balkenius</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jimenez, A R" uniqKey="Jimenez A">A.R. Jiménez</name>
</author>
<author>
<name sortKey="Soembagijo, A S" uniqKey="Soembagijo A">A.S. Soembagijo</name>
</author>
<author>
<name sortKey="Reynaerts, D" uniqKey="Reynaerts D">D. Reynaerts</name>
</author>
<author>
<name sortKey="Van Brussel, H" uniqKey="Van Brussel H">H. van Brussel</name>
</author>
<author>
<name sortKey="Ceres, R" uniqKey="Ceres R">R. Ceres</name>
</author>
<author>
<name sortKey="Pons, J L" uniqKey="Pons J">J.L. Pons</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jamali, N" uniqKey="Jamali N">N. Jamali</name>
</author>
<author>
<name sortKey="Sammut, S" uniqKey="Sammut S">S. Sammut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Gastaldo, P" uniqKey="Gastaldo P">P. Gastaldo</name>
</author>
<author>
<name sortKey="Pinna, L" uniqKey="Pinna L">L. Pinna</name>
</author>
<author>
<name sortKey="Seminara, L" uniqKey="Seminara L">L. Seminara</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
<author>
<name sortKey="Zunino, R" uniqKey="Zunino R">R. Zunino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Signoretto, M" uniqKey="Signoretto M">M. Signoretto</name>
</author>
<author>
<name sortKey="De Lathauwerb, L" uniqKey="De Lathauwerb L">L. de Lathauwerb</name>
</author>
<author>
<name sortKey="Suykens, J A K" uniqKey="Suykens J">J.A.K. Suykens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Dahiya, R S" uniqKey="Dahiya R">R.S. Dahiya</name>
</author>
<author>
<name sortKey="Metta, G" uniqKey="Metta G">G. Metta</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
<author>
<name sortKey="Sandini, G" uniqKey="Sandini G">G. Sandini</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Lee, H K" uniqKey="Lee H">H.-K. Lee</name>
</author>
<author>
<name sortKey="Chang, S I" uniqKey="Chang S">S.-I. Chang</name>
</author>
<author>
<name sortKey="Yoon, E" uniqKey="Yoon E">E. Yoon</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Nalwa, H S" uniqKey="Nalwa H">H.S. Nalwa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Li, C" uniqKey="Li C">C. Li</name>
</author>
<author>
<name sortKey="Wu, P M" uniqKey="Wu P">P.-M. Wu</name>
</author>
<author>
<name sortKey="Shutter, L A" uniqKey="Shutter L">L.A. Shutter</name>
</author>
<author>
<name sortKey="Narayan, R K" uniqKey="Narayan R">R.K. Narayan</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Seminara, L" uniqKey="Seminara L">L. Seminara</name>
</author>
<author>
<name sortKey="Pinna, L" uniqKey="Pinna L">L. Pinna</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
<author>
<name sortKey="Basiric, L" uniqKey="Basiric L">L. Basiricò</name>
</author>
<author>
<name sortKey="Loi, A" uniqKey="Loi A">A. Loi</name>
</author>
<author>
<name sortKey="Cosseddu, P" uniqKey="Cosseddu P">P. Cosseddu</name>
</author>
<author>
<name sortKey="Bonfiglio, A" uniqKey="Bonfiglio A">A. Bonfiglio</name>
</author>
<author>
<name sortKey="Ascia, A" uniqKey="Ascia A">A. Ascia</name>
</author>
<author>
<name sortKey="Bisio, M" uniqKey="Bisio M">M. Bisio</name>
</author>
<author>
<name sortKey="Ansaldo, A" uniqKey="Ansaldo A">A. Ansaldo</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Pinna, L" uniqKey="Pinna L">L. Pinna</name>
</author>
<author>
<name sortKey="Valle, M" uniqKey="Valle M">M. Valle</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Sinapov, J" uniqKey="Sinapov J">J. Sinapov</name>
</author>
<author>
<name sortKey="Sukhoy, V" uniqKey="Sukhoy V">V. Sukhoy</name>
</author>
<author>
<name sortKey="Sahai, R" uniqKey="Sahai R">R. Sahai</name>
</author>
<author>
<name sortKey="Stoytchev, A" uniqKey="Stoytchev A">A. Stoytchev</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Kroemer, O" uniqKey="Kroemer O">O. Kroemer</name>
</author>
<author>
<name sortKey="Lampert, C H" uniqKey="Lampert C">C.H. Lampert</name>
</author>
<author>
<name sortKey="Peters, J" uniqKey="Peters J">J. Peters</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Naya, F" uniqKey="Naya F">F. Naya</name>
</author>
<author>
<name sortKey="Yamato, J" uniqKey="Yamato J">J. Yamato</name>
</author>
<author>
<name sortKey="Shinozawa, K" uniqKey="Shinozawa K">K. Shinozawa</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Jamali, N" uniqKey="Jamali N">N. Jamali</name>
</author>
<author>
<name sortKey="Sammut, C" uniqKey="Sammut C">C. Sammut</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Scholkopf, B" uniqKey="Scholkopf B">B. Schölkopf</name>
</author>
<author>
<name sortKey="Smola, A J" uniqKey="Smola A">A.J. Smola</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Zhao, Q" uniqKey="Zhao Q">Q. Zhao</name>
</author>
<author>
<name sortKey="Zhou, G" uniqKey="Zhou G">G. Zhou</name>
</author>
<author>
<name sortKey="Adali, T" uniqKey="Adali T">T. Adali</name>
</author>
<author>
<name sortKey="Zhang, L" uniqKey="Zhang L">L. Zhang</name>
</author>
<author>
<name sortKey="Cichocki, A" uniqKey="Cichocki A">A. Cichocki</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Signoretto, M" uniqKey="Signoretto M">M. Signoretto</name>
</author>
<author>
<name sortKey="Dinh, Q T" uniqKey="Dinh Q">Q.T. Dinh</name>
</author>
<author>
<name sortKey="De Lathauwer, L" uniqKey="De Lathauwer L">L. de Lathauwer</name>
</author>
<author>
<name sortKey="Suykens, J A K" uniqKey="Suykens J">J.A.K. Suykens</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Evgeniou, T" uniqKey="Evgeniou T">T. Evgeniou</name>
</author>
<author>
<name sortKey="Pontil, M" uniqKey="Pontil M">M. Pontil</name>
</author>
<author>
<name sortKey="Poggio, T" uniqKey="Poggio T">T. Poggio</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Rifkin, R" uniqKey="Rifkin R">R. Rifkin</name>
</author>
<author>
<name sortKey="Klautau, A" uniqKey="Klautau A">A. Klautau</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bishop, C M" uniqKey="Bishop C">C.M. Bishop</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Bartlett, P" uniqKey="Bartlett P">P. Bartlett</name>
</author>
<author>
<name sortKey="Boucheron, S" uniqKey="Boucheron S">S. Boucheron</name>
</author>
<author>
<name sortKey="Lugosi, G" uniqKey="Lugosi G">G. Lugosi</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Chapelle, O" uniqKey="Chapelle O">O. Chapelle</name>
</author>
<author>
<name sortKey="Vapnik, V" uniqKey="Vapnik V">V. Vapnik</name>
</author>
<author>
<name sortKey="Bousquet, O" uniqKey="Bousquet O">O. Bousquet</name>
</author>
<author>
<name sortKey="Mukherjee, S" uniqKey="Mukherjee S">S. Mukherjee</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Anguita, D" uniqKey="Anguita D">D. Anguita</name>
</author>
<author>
<name sortKey="Ridella, S" uniqKey="Ridella S">S. Ridella</name>
</author>
<author>
<name sortKey="Rivieccio, F" uniqKey="Rivieccio F">F. Rivieccio</name>
</author>
<author>
<name sortKey="Zunino, R" uniqKey="Zunino R">R. Zunino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Decherchi, S" uniqKey="Decherchi S">S. Decherchi</name>
</author>
<author>
<name sortKey="Gastaldo, P" uniqKey="Gastaldo P">P. Gastaldo</name>
</author>
<author>
<name sortKey="Redi, J" uniqKey="Redi J">J. Redi</name>
</author>
<author>
<name sortKey="Zunino, R" uniqKey="Zunino R">R. Zunino</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="Decherchi, S" uniqKey="Decherchi S">S. Decherchi</name>
</author>
<author>
<name sortKey="Gastaldo, P" uniqKey="Gastaldo P">P. Gastaldo</name>
</author>
<author>
<name sortKey="Ridella, S" uniqKey="Ridella S">S. Ridella</name>
</author>
<author>
<name sortKey="Zunino, R" uniqKey="Zunino R">R. Zunino</name>
</author>
<author>
<name sortKey="Anguita, D" uniqKey="Anguita D">D. Anguita</name>
</author>
</analytic>
</biblStruct>
<biblStruct>
<analytic>
<author>
<name sortKey="De Lathauwer, L" uniqKey="De Lathauwer L">L. De Lathauwer</name>
</author>
<author>
<name sortKey="De Moor, B" uniqKey="De Moor B">B. De Moor</name>
</author>
<author>
<name sortKey="Vandewalle, J" uniqKey="Vandewalle J">J. Vandewalle</name>
</author>
</analytic>
</biblStruct>
</listBibl>
</div1>
</back>
</TEI>
<pmc article-type="research-article">
<pmc-dir>properties open_access</pmc-dir>
<front>
<journal-meta>
<journal-id journal-id-type="nlm-ta">Sensors (Basel)</journal-id>
<journal-id journal-id-type="iso-abbrev">Sensors (Basel)</journal-id>
<journal-title-group>
<journal-title>Sensors (Basel, Switzerland)</journal-title>
</journal-title-group>
<issn pub-type="epub">1424-8220</issn>
<publisher>
<publisher-name>MDPI</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="pmid">24949646</article-id>
<article-id pub-id-type="pmc">4118344</article-id>
<article-id pub-id-type="doi">10.3390/s140610952</article-id>
<article-id pub-id-type="publisher-id">sensors-14-10952</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Computational Intelligence Techniques for Tactile Sensing Systems</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Gastaldo</surname>
<given-names>Paolo</given-names>
</name>
<xref rid="c1-sensors-14-10952" ref-type="corresp">
<sup>*</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pinna</surname>
<given-names>Luigi</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Seminara</surname>
<given-names>Lucia</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Valle</surname>
<given-names>Maurizio</given-names>
</name>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Zunino</surname>
<given-names>Rodolfo</given-names>
</name>
</contrib>
</contrib-group>
<aff id="af1-sensors-14-10952">Department of Electric, Electronic, Telecommunication Engineering and Naval Architecture, DITEN, University of Genoa, Via Opera Pia 11a, 16145 Genova, Italy; E-Mails:
<email>luigi.pinna@unige.it</email>
(L.P.);
<email>lucia.seminara@unige.it</email>
(L.S.);
<email>maurizio.valle@unige.it</email>
(M.V.);
<email>rodolfo.zunino@unige.it</email>
(R.Z.)</aff>
<author-notes>
<corresp id="c1-sensors-14-10952">
<label>*</label>
Author to whom correspondence should be addressed; E-Mail:
<email>paolo.gastaldo@unige.it</email>
; Tel.: +39-010-353-2268; Fax: +39-010-353-2175.</corresp>
</author-notes>
<pub-date pub-type="collection">
<month>6</month>
<year>2014</year>
</pub-date>
<pub-date pub-type="epub">
<day>19</day>
<month>6</month>
<year>2014</year>
</pub-date>
<volume>14</volume>
<issue>6</issue>
<fpage>10952</fpage>
<lpage>10976</lpage>
<history>
<date date-type="received">
<day>15</day>
<month>1</month>
<year>2014</year>
</date>
<date date-type="rev-recd">
<day>05</day>
<month>6</month>
<year>2014</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>6</month>
<year>2014</year>
</date>
</history>
<permissions>
<copyright-statement>© 2014 by the authors; licensee MDPI, Basel, Switzerland.</copyright-statement>
<copyright-year>2014</copyright-year>
<license>
<license-p>This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (
<ext-link ext-link-type="uri" xlink:href="http://creativecommons.org/licenses/by/3.0/">http://creativecommons.org/licenses/by/3.0/</ext-link>
).</license-p>
</license>
</permissions>
<abstract>
<p>Tactile sensing helps robots interact with humans and objects effectively in real environments. Piezoelectric polymer sensors provide the functional building blocks of the robotic electronic skin, mainly thanks to their flexibility and suitability for detecting dynamic contact events and for recognizing the touch modality. The paper focuses on the ability of tactile sensing systems to support the challenging recognition of certain qualities/modalities of touch. The research applies novel computational intelligence techniques and a tensor-based approach for the classification of touch modalities; its main results consist in providing a procedure to enhance system generalization ability and an architecture for multi-class recognition applications. An experimental campaign involving 70 participants using three different modalities in touching the upper surface of the sensor array was conducted, and confirmed the validity of the approach.</p>
</abstract>
<kwd-group>
<kwd>electronic skin</kwd>
<kwd>touch modalities</kwd>
<kwd>pattern recognition</kwd>
<kwd>computational intelligence</kwd>
<kwd>human-robot interaction</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec sec-type="intro">
<label>1.</label>
<title>Introduction</title>
<p>Electronic skin enables robots to sense their surroundings through touch. In this sense, robots represent an ideal stimulus to establish a controlled interaction with humans in a real world environment, making it possible to study both the cognitive and physical aspects of the robot-environment interaction. To enable the robot to grasp and manipulate objects, touch sensors can be integrated into the hands (e.g., [
<xref rid="b1-sensors-14-10952" ref-type="bibr">1</xref>
<xref rid="b3-sensors-14-10952" ref-type="bibr">3</xref>
]). Hands and upper arms are covered with artificial skin to enable touch-triggered withdrawal reflexes [
<xref rid="b4-sensors-14-10952" ref-type="bibr">4</xref>
], while tactile sensors specifically integrated on the arms can be used, for example, to indicate position adjustments [
<xref rid="b5-sensors-14-10952" ref-type="bibr">5</xref>
]. To improve robot-environment interaction, other parts of the robot body can be covered with tactile sensors, e.g., the hands, the arms, the cheeks, the feet and the torso.</p>
<p>Nevertheless, reliable tactile systems are still an open issue as many technological and system issues remain unresolved and require a strong interdisciplinary effort to be addressed effectively. Technologies for effective signal transduction involve both materials and electronics aspects. However, as the overall performance depends on how the different building blocks are integrated, research on system issues has to be coupled to transducer development. In particular, as in the human perceptual mechanism a number of components of the sensory system manage information coming from the large number of skin receptors [
<xref rid="b6-sensors-14-10952" ref-type="bibr">6</xref>
], the effective utilization of tactile sensors requires research attention towards issues like deciphering the information contained in tactile data [
<xref rid="b7-sensors-14-10952" ref-type="bibr">7</xref>
]. Therefore, the design of a tactile sensing system should also include effective methods for the interpretation of sensor data. This aspect is crucial in that sensor data typically support the recognition of either certain properties of the contact surfaces or certain qualities/modalities of touch.</p>
<p>Pattern-recognition methods proved to be effective in specific tasks such as materials classification and/or recognition of materials textures, patterns, shapes, hardness and size (e.g., [
<xref rid="b8-sensors-14-10952" ref-type="bibr">8</xref>
<xref rid="b12-sensors-14-10952" ref-type="bibr">12</xref>
]). However, few works report on the classification of touch modalities and gestures [
<xref rid="b13-sensors-14-10952" ref-type="bibr">13</xref>
<xref rid="b15-sensors-14-10952" ref-type="bibr">15</xref>
]. Suitable computational models are required to accomplish these goals. Contact materials have been recognized by Support Vector Machine (SVM), Regularized Least Square (RLS) and Regularized Extreme Learning Machine (RELM) [
<xref rid="b8-sensors-14-10952" ref-type="bibr">8</xref>
], feature extraction from sensory data using self-organizing maps (SOMs) is reported in [
<xref rid="b16-sensors-14-10952" ref-type="bibr">16</xref>
], neural network algorithms applied on tactile data have been used to obtain specific surface features of the contact object [
<xref rid="b17-sensors-14-10952" ref-type="bibr">17</xref>
] and Bayes trees allow one to distinguish different materials from their surface texture [
<xref rid="b18-sensors-14-10952" ref-type="bibr">18</xref>
].</p>
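The Regularized Least Squares (RLS) approach cited above admits a compact closed-form solution. As a rough illustration only (not the paper's implementation, and trained on synthetic clusters rather than tactile recordings), a minimal linear RLS binary classifier in NumPy:

```python
import numpy as np

def train_rls(X, y, lam=1.0):
    """Fit a linear RLS classifier: minimize ||Xw - y||^2 + lam*||w||^2."""
    n_feat = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def predict_rls(w, X):
    """Binary decision: sign of the linear score."""
    return np.sign(X @ w)

# Two well-separated synthetic clusters standing in for two touch modalities
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 0.5, size=(20, 4)),
               rng.normal(-2.0, 0.5, size=(20, 4))])
y = np.array([+1.0] * 20 + [-1.0] * 20)

w = train_rls(X, y, lam=0.1)
acc = np.mean(predict_rls(w, X) == y)  # training accuracy on this toy set
```

The regularization parameter `lam` plays the same generalization-controlling role discussed later in the paper's model-selection procedure.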
<p>Touch-modality recognition is the specific sensorial problem that is tackled in this paper. The skin surface is subject to a variety of possible stimuli, and the system is expected to discriminate the various modalities of physical interaction. The problem complexity stems from the bi-dimensional sensing structure, which is augmented by the time-varying distribution nature of the stimulus and pressure pattern.
<xref rid="f1-sensors-14-10952" ref-type="fig">Figure 1</xref>
illustrates the problem setting by showing three touch modalities, namely, sliding the finger, brushing a paintbrush and rolling a washer.</p>
<p>The application of machine learning (ML) techniques to touch-modality recognition showed that the processing of sensor data under a tensor-based representation could yield promising recognition performance without a significant increase in complexity [
<xref rid="b19-sensors-14-10952" ref-type="bibr">19</xref>
]; that work mostly proved the specific advantages of tensor-based sensor processing over a conventional, vector-based representation of raw data.</p>
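The tensor-based representation can be pictured concretely: a touch sample recorded from a 4 × 4 taxel array over T time frames is a 3-way tensor, and its mode-n unfoldings keep each mode's structure accessible, whereas plain vectorization discards it. A minimal sketch (the array size matches the 16-electrode layout described later, but the frame count is illustrative, not a figure from the paper):

```python
import numpy as np

# One touch sample: T time frames from a 4x4 taxel array -> 3-way tensor
T, rows, cols = 30, 4, 4
sample = np.random.default_rng(1).random((T, rows, cols))

def unfold(tensor, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibers as matrix columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

vec = sample.reshape(-1)         # conventional vector view: everything flat
time_unfold = unfold(sample, 0)  # (30, 16): one row per time frame
row_unfold = unfold(sample, 1)   # (4, 120): one row per taxel row
```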
<p>The research presented in this paper tackles the critical issue of performance evaluation and reliability assessment of the tensor-based approach for the accurate classification of touch modalities, with specific attention paid to the generalization ability of the deployed methods, and the possibility to address realistic recognition problems. The theoretical framework introduced in [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
] is used to derive an ML-based system for pattern recognition that deals with the interpretation of touch modalities and is specifically designed to treat tensor signals. The novelty of the proposed approach, therefore, consists both in characterizing the tensor-based paradigm in terms of expected performance from a quantitative viewpoint, and in applying ML techniques in multi-class recognition domains.</p>
<p>In this work, tactile data have been acquired by an electronic skin based on a piezoelectric sensor array. An experimental campaign involving 70 participants has been conducted to employ the tensor-based pattern-recognition system for the classification of touch modalities in three different bi-class classification problems and in the 3-class classification problem involving all three touch modalities.</p>
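One generic way to combine bi-class decisions into a 3-class decision is pairwise (one-vs-one) voting. The sketch below is only an illustration of that combination scheme, with hypothetical class labels; it is not necessarily the decomposition the paper adopts:

```python
from itertools import combinations
import numpy as np

def one_vs_one_predict(binary_preds, n_classes, pairs):
    """Turn pairwise binary decisions (+1 -> first class wins) into a label."""
    votes = np.zeros(n_classes, dtype=int)
    for (a, b), pred in zip(pairs, binary_preds):
        votes[a if pred > 0 else b] += 1
    return int(np.argmax(votes))

classes = ["slide", "brush", "roll"]      # hypothetical modality labels
pairs = list(combinations(range(3), 2))   # the three bi-class problems
# Suppose: slide beats brush, roll beats slide, roll beats brush
label = one_vs_one_predict([+1, -1, -1], 3, pairs)
```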
<p>This paper is organized as follows: Section 2 describes the tactile sensing system, which includes the sensor array based on polyvinylidene fluoride (PVDF) sensing elements, the interface electronics and the data acquisition and processing. Section 3 illustrates the theory of the kernel-based algorithm to deal with tensor data, while Section 4 suggests a practical model-selection procedure. Section 5 describes the experimental campaign and discusses experimental results. Concluding remarks are contained in Section 6.</p>
</sec>
<sec>
<label>2.</label>
<title>Tactile Sensing System Based on Piezoelectric Transducers</title>
<p>Though the only requirement for the sensor array is to provide
<italic>tensor</italic>
signals to be managed by the proposed ML-based system, in this paper the method is specifically demonstrated with an electronic skin based on
<italic>piezoelectric</italic>
transducers. In the following, the properties of the sensing material introduce the description of the tactile acquisition system.</p>
<sec>
<label>2.1.</label>
<title>The Sensing Material</title>
<p>A large number of daily tasks involve dynamic contacts, and hence it is desirable to have an electronic skin which is responsive to a wide range of mechanical stimuli (in humans this range is approximately 0–1 kHz). While static and quasi-static contact events are usually managed by capacitive tactile elements, piezoelectric technology is suitable for detecting dynamic contact events. Piezoelectric polymers are particularly interesting in that they are mechanically flexible and conformable, and exhibit a wider frequency bandwidth [
<xref rid="b21-sensors-14-10952" ref-type="bibr">21</xref>
,
<xref rid="b22-sensors-14-10952" ref-type="bibr">22</xref>
]. Further, they are low-cost, can be prepared in thin films and can be cut into any desired shape [
<xref rid="b23-sensors-14-10952" ref-type="bibr">23</xref>
].</p>
<p>The electromechanical response of piezoelectric polymers can be recorded either in the form of charge generation or in the form of a change in capacitance [
<xref rid="b24-sensors-14-10952" ref-type="bibr">24</xref>
]. Accordingly, they can be readily manufactured and integrated with flexible PCBs, and the electronic interface circuitry can be developed using off-the-shelf components. In particular, polyvinylidene fluoride (PVDF) has been chosen as the piezoelectric polymer to build the sensor array. The PVDF film was a circular portion (diameter = 7 cm) of a commercial foil from MEAS—Measurement Specialties Inc. (Hampton, VA, USA).</p>
</sec>
<sec>
<label>2.2.</label>
<title>The Tactile Acquisition System</title>
<p>To fabricate an electronic skin system based on piezoelectric transducer arrays, issues concerning the manufacturing technology, the interface electronics and the system integration have to be addressed. PVDF must first be integrated into structures that also include a substrate and a protective layer. The piezoelectric film is glued to a conformable, flexible printed circuit board (PCB) structure and covered by an elastic layer that protects the sensor from physical damage and chemical contamination.</p>
<p>In order to build the sensor array (
<xref rid="f2-sensors-14-10952" ref-type="fig">Figure 2</xref>
), the piezoelectric film features
<italic>ad hoc</italic>
metal contacts (16 square electrodes on the PVDF lower surface and a ground layer on the PVDF top), which are deposited by inkjet printing [
<xref rid="b25-sensors-14-10952" ref-type="bibr">25</xref>
]. The underlying PCB substrate is provided with metal electrodes and tracks to extract the lower PVDF signals. Once the PVDF film has been glued on the PCB substrate, a polydimethylsiloxane (PDMS) 2 mm thick elastomer layer is directly integrated on top [
<xref rid="b24-sensors-14-10952" ref-type="bibr">24</xref>
].
<xref rid="f3-sensors-14-10952" ref-type="fig">Figure 3</xref>
shows a scheme of the overall tactile acquisition system.</p>
<p>The mechanical-to-electrical transduction of each PVDF taxel is measured as a generated charge and converted to a voltage by a 16-channel charge amplifier. Each channel includes a charge amplifier (CA) [
<xref rid="b26-sensors-14-10952" ref-type="bibr">26</xref>
] cascaded with a band-pass filter (BPF) with a passband from 0.3 Hz to 1.5 kHz; the overall bandwidth of the CA + BPF cascade is 2.5 Hz–1.5 kHz.</p>
<p>The subsequent step is to acquire the tactile system output data, both for visualization by the operator and for processing by the pattern-recognition system for touch modality classification. To this aim, output signals are acquired at 3 kSamples/s by a DAQ board (NI PCI-6071E) and visualized in the time domain by a LabVIEW™ graphical user interface (GUI).</p>
<p>As the
<italic>time</italic>
-varying tactile interaction is conveyed through the protective skin layer to the 2D geometry of the sensor array, a touch-modality-aware sensing framework should arguably rely on tactile hardware that yields a
<italic>tensor</italic>
signal. This morphology of the tactile signal captures both the time-varying features that actually determine the touch modality (e.g., contact pressure on the electronic skin, stimulus duration,
<italic>etc.</italic>
) and the spatial arrangement of the sensors. The following sections illustrate an approach to the management of tensor-like data for extracting meaningful information directly about the original mechanical stimulus.</p>
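As an illustration of this tensor morphology (not part of the original acquisition chain), the stream of 16 taxel signals can be rearranged into a 3rd-order array combining the spatial layout with the time axis; a minimal sketch, assuming a 4 × 4 electrode layout and a hypothetical fixed-length acquisition window:

```python
import numpy as np

def to_tactile_tensor(samples, rows=4, cols=4):
    """Arrange a (n_taxels, n_time) acquisition into a 3rd-order
    tensor of shape (rows, cols, n_time), preserving the spatial
    layout of the electrode array alongside the time axis."""
    n_taxels, n_time = samples.shape
    assert n_taxels == rows * cols, "taxel count must match the array layout"
    return samples.reshape(rows, cols, n_time)

# Example: 16 channels sampled at 3 kSamples/s for one second
raw = np.random.randn(16, 3000)
T = to_tactile_tensor(raw)
print(T.shape)  # (4, 4, 3000)
```

The row-major reshape maps channel r*cols + c to spatial position (r, c); any other wiring order would simply require a permutation of the channels first.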
</sec>
</sec>
<sec>
<label>3.</label>
<title>Machine Learning for Touch Modality Recognition</title>
<p>Several works in the literature adopt Machine Learning (ML) algorithms for pattern-recognition tasks in tactile sensing systems (e.g., [
<xref rid="b8-sensors-14-10952" ref-type="bibr">8</xref>
,
<xref rid="b27-sensors-14-10952" ref-type="bibr">27</xref>
<xref rid="b30-sensors-14-10952" ref-type="bibr">30</xref>
]). The rationale is that ML techniques can support predictive systems that make reliable decisions on unseen input samples [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
]. This ability is especially appealing for the interpretation of sensor data, as complex, non-linear mechanisms characterize the underlying phenomenon to be modeled and an explicit formalization of the input-output relationship is difficult to attain. ML technologies model the input-output function by a “learning from examples” approach; specific implementations vary across application scenarios, but all share a common probabilistic setting.</p>
<p>Tactile data are first processed by feature-extraction and transformed into a multi-dimensional vector to feed the learning algorithm. The ability of the feature space to characterize the underlying perceptual phenomenon is crucial to the effectiveness of the whole pattern-recognition task. The feature-extraction process, though, may bring about the loss of some structural information embedded in the original structure of the tactile data. The literature already proved [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
,
<xref rid="b32-sensors-14-10952" ref-type="bibr">32</xref>
,
<xref rid="b33-sensors-14-10952" ref-type="bibr">33</xref>
] that tensors provide an efficient tool to describe multidimensional structured data, and that the corresponding learning methods can favorably exploit the
<italic>a priori</italic>
information of data structure to achieve satisfactory generalization abilities. The theoretical framework introduced in [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
] allows one to extend every learning machine based on kernel methods to a tensor-based learning model. Such a feature is attractive in that kernel methods support both supervised paradigms (
<italic>i.e.</italic>
, learning schemes that address classification problems) and unsupervised paradigms (
<italic>i.e.</italic>
, learning schemes that address clustering problems). This in turn may provide an effective tool in the specific case of touch recognition, as the inherent difficulty of discriminating often-overlapping classes of gestures/modalities may hinder a straightforward implementation of supervised learning tools. This aspect will be discussed in more detail in Section 4. Section 3.1 briefly reviews kernel methods for ML-based pattern recognition; Section 3.2 discusses the theoretical framework that allows one to extend kernel methods to tensor data.</p>
<sec>
<label>3.1.</label>
<title>Kernel Methods for Pattern Recognition and Tensor-Based Representation</title>
<p>The empirical learning of a generic mapping function
<italic>γ</italic>
stems from a training procedure that uses a dataset,
<bold>X</bold>
, holding
<italic>N
<sub>p</sub>
</italic>
patterns (samples). In a binary classification problem, each pattern includes a data vector,
<bold>x</bold>
∈ ℜ
<italic>
<sup>n</sup>
</italic>
, and its category label
<italic>y</italic>
∈ {−1, 1}. When developing data-driven classifiers, the learning phase requires both
<bold>x</bold>
and
<italic>y</italic>
to build up a decision rule. After training, the system processes data that do not belong to the training set and ascribes each test sample to a predicted category,
<italic>ŷ</italic>
. The function that predicts the class of a sample is a sharp decision function,
<italic>ŷ</italic>
=
<italic>sign</italic>
(
<italic>f</italic>
(
<bold>x</bold>
)), where
<italic>f</italic>
(
<bold>x</bold>
) is expected to effectively approximate the ‘true’ mapping function
<italic>γ</italic>
.</p>
<p>In pattern-recognition technologies, the class of kernel methods embeds the techniques that express
<italic>f</italic>
(
<bold>x</bold>
) as a weighted sum of some nonlinear “kernel” basis functions. Generalization ability relies on two main concepts: the function
<italic>f</italic>
(
<bold>x</bold>
) belongs to a reproducing kernel Hilbert space (RKHS), and regularization theory is used as the conceptual basis [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
]. The former concept—in practice—means that kernel classifiers benefit from the so-called kernel trick [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
]: patterns
<bold>x</bold>
<italic>
<sub>i</sub>
</italic>
and
<bold>x</bold>
<italic>
<sub>j</sub>
</italic>
are projected in a high-dimensional Hilbert space, where the mapping function is easier to retrieve. A kernel function
<italic>K</italic>
(
<bold>x</bold>
<italic>
<sub>i</sub>
</italic>
,
<bold>x</bold>
<italic>
<sub>j</sub>
</italic>
) makes it possible to handle only inner products between pattern pairs, disregarding the explicit mappings of individual patterns. The kernel trick thus allows setting up the non-linear variant of virtually any algorithm that can be formalized in terms of dot products.</p>
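As a concrete illustration of the kernel trick, the widely used Gaussian (RBF) kernel needs only pairwise squared distances, never the explicit high-dimensional mapping; a minimal sketch:

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)): the classifier
    only ever needs these pairwise values, not the explicit
    high-dimensional image of each pattern."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-d2 / (2.0 * sigma ** 2))

X = np.random.randn(5, 3)
K = gaussian_kernel_matrix(X)
print(K.shape)              # (5, 5)
print(np.allclose(K, K.T))  # True: a valid kernel matrix is symmetric
```

Symmetry and a unit diagonal are quick sanity checks that the matrix is a plausible Gram matrix.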
<p>Regularized Least Squares (RLS) [
<xref rid="b34-sensors-14-10952" ref-type="bibr">34</xref>
] and Support Vector Machines (SVMs) are very popular implementations of such kernel machines [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
]. Both techniques belong to the class of regularized kernel methods. Thus, they identify the
<italic>f</italic>
(
<bold>x</bold>
) that best approximates
<italic>γ</italic>
by exploiting a cost function in which a positive parameter, λ, rules the tradeoff between the empirical error and a regularizing term. The decision function
<italic>f</italic>
<sub>RLS</sub>
(
<bold>x</bold>
) can be formalized as follows:
<disp-formula id="FD1">
<label>(1)</label>
<mml:math id="mm1">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mtext mathvariant="italic">RLS</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext mathvariant="bold">x</mml:mtext>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>β</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mtext mathvariant="bold">x</mml:mtext>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mtext mathvariant="bold">x</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>K</italic>
( , ) is a kernel function and
<bold>β</bold>
= [
<italic>β
<sub>1</sub>
,…,β
<sub>Np</sub>
</italic>
] is a vector of scalar coefficients;
<bold>β</bold>
can be obtained as follows:
<disp-formula id="FD2">
<label>(2)</label>
<mml:math id="mm2">
<mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext mathvariant="bold">K</mml:mtext>
<mml:mo>+</mml:mo>
<mml:mi>λ</mml:mi>
<mml:mtext mathvariant="bold">I</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mo>−</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mtext mathvariant="bold">y</mml:mtext>
</mml:mrow>
</mml:math>
</disp-formula>
where λ is the regularization parameter, and
<bold>K</bold>
is the matrix of kernel functions
<italic>K</italic>
(
<bold>x</bold>
<italic>
<sub>i</sub>
</italic>
,
<bold>x</bold>
). In the case of SVMs, the decision function
<italic>f</italic>
<sub>SVM</sub>
(
<bold>x</bold>
) is given by:
<disp-formula id="FD3">
<label>(3)</label>
<mml:math id="mm3">
<mml:mrow>
<mml:msub>
<mml:mi>f</mml:mi>
<mml:mrow>
<mml:mtext mathvariant="italic">SVM</mml:mtext>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mtext mathvariant="bold">x</mml:mtext>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo>∑</mml:mo>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mtext mathvariant="italic">Nsv</mml:mtext>
</mml:mrow>
</mml:munderover>
<mml:mrow>
<mml:msub>
<mml:mi>α</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mtext>K</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mtext mathvariant="bold">x</mml:mtext>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mtext mathvariant="bold">x</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mi>b</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where the number of support vectors
<italic>N
<sub>sv</sub>
</italic>
, the “bias” term
<italic>b</italic>
, and coefficients
<italic>α</italic>
<sub>i</sub>
are computed by the training algorithm, which minimizes a quadratic cost function [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
]. The resulting generalization performance of an SVM also depends on the setting of a scalar parameter,
<italic>C</italic>
, which rules the trade-off between accuracy and complexity in the training process, and actually plays the role of 1/λ [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
].</p>
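A minimal numerical sketch of Equations (1) and (2), assuming a Gaussian kernel and toy data (both assumptions for illustration only):

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rls_train(X, y, lam=0.1, sigma=1.0):
    # Equation (2): beta = (K + lambda*I)^(-1) y
    K = rbf(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def rls_predict(X_train, beta, X_test, sigma=1.0):
    # Equation (1): f(x) = sum_i beta_i K(x_i, x); predicted class = sign(f)
    return np.sign(rbf(X_test, X_train, sigma) @ beta)

# Toy separable problem
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
beta = rls_train(X, y)
print(rls_predict(X, beta, X))  # [-1. -1.  1.  1.]
```

The regularization parameter λ shrinks the coefficients and prevents the kernel expansion from fitting noise exactly, which is the trade-off discussed in the text.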
<p>The theoretical framework presented in [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
] showed that the above formalism can be fruitfully applied to the sensor-based domain, and introduced a kernel function for developing tensor-based models. This result is noteworthy in that it allows every kernel machine to deal with tensors, provided that the kernel function proposed in [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
] is used. This in turn means that both the kernel methods discussed above can be extended to tensor-based learning. For the sake of repeatability, the
<xref rid="APP1" ref-type="app">Appendix</xref>
provides the procedure to handle sensor data and represent them in a tensor-based framework, for further processing within a kernel-based paradigm.</p>
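A possible sketch of such a tensor kernel, along the lines of the Appendix: each mode-z unfolding is summarized by its first α left singular vectors, and a Gaussian factor kernel is computed per mode on the corresponding projectors. The projector-based distance used here is an illustrative assumption; the exact construction follows [20].

```python
import numpy as np

def unfold(T, mode):
    """Mode-z unfolding: arrange the mode-z fibers as columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def factor_subspace(T, mode, alpha):
    """First alpha left singular vectors of the mode-z unfolding."""
    U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
    return U[:, :alpha]

def tensor_kernel(Ti, Tj, alpha=2, sigma=1.0):
    """Product of Gaussian factor kernels, one per mode, computed on
    the projectors spanned by the truncated SVD factors (a sketch of
    the construction of [20]; the distance choice is an assumption)."""
    k = 1.0
    for z in range(Ti.ndim):
        Vi = factor_subspace(Ti, z, alpha)
        Vj = factor_subspace(Tj, z, alpha)
        d2 = np.linalg.norm(Vi @ Vi.T - Vj @ Vj.T, 'fro') ** 2
        k *= np.exp(-d2 / (2.0 * sigma ** 2))
    return k

A = np.random.randn(4, 4, 50)
print(tensor_kernel(A, A))  # 1.0 (identical tensors)
```

Working with projectors V Vᵀ rather than the factors V themselves makes the kernel insensitive to the sign/rotation ambiguity of the SVD.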
</sec>
<sec>
<label>3.2.</label>
<title>Applying the ML-Based Framework to the Recognition of Touch Modalities</title>
<p>The proposed framework tackles the interpretation of touch modalities by adopting a classification scheme that exploits tensor-based kernel methods. The ML approach splits the pattern-recognition problem into two tasks:
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>The definition of a suitable descriptive basis for the input signal provided by the sensor (or sensor array),
<italic>i.e.</italic>
, a tensor-based description
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
, where
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
is a tensor space:
<disp-formula id="FD4">
<label>(4)</label>
<mml:math id="mm4">
<mml:mrow>
<mml:mi mathvariant="bold-script">L</mml:mi>
<mml:mo>=</mml:mo>
<mml:mi>ϕ</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi mathvariant="bold-script">S</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>In
<xref rid="FD4" ref-type="disp-formula">Equation (4)</xref>
,
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
is the 3rd order tensor that characterizes sensor outputs, and the process
<italic>ϕ</italic>
works out a tensor-based description from
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
, thus preserving the structure of the signal originally provided by the tactile sensor.</p>
</list-item>
<list-item>
<label>(2)</label>
<p>The empirical learning of a model for the non-linear function,
<italic>γ</italic>
that maps the feature space,
<italic>F</italic>
, into the set of tactile stimuli of interest:
<disp-formula id="FD5">
<label>(5)</label>
<mml:math id="mm5">
<mml:mrow>
<mml:mi mathvariant="script">L</mml:mi>
<mml:mo>→</mml:mo>
<mml:mtext>T</mml:mtext>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
</list-item>
</list>
</p>
<p>In principle, the learning system in
<xref rid="FD5" ref-type="disp-formula">Equation (5)</xref>
could be designed to receive as input the tensor
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
directly. However, pre-processing may be needed to better characterize the underlying tactile phenomenon. In this regard, one should take into account that the pre-processing
<italic>ϕ</italic>
should satisfy
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
∈ ℜ
<italic>
<sup>l</sup>
</italic>
<sup>(1)</sup>
⨂ ℜ
<italic>
<sup>l</sup>
</italic>
<sup>(2)</sup>
⨂ ℜ
<italic>
<sup>l(3)</sup>
</italic>
, where
<italic>l</italic>
(1),
<italic>l</italic>
(2), and
<italic>l</italic>
(3) are pattern-independent quantities.</p>
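One simple way to enforce pattern-independent dimensions l(1), l(2), l(3) is to resample the time axis of each acquisition to a fixed length; the sketch below uses linear interpolation, which is an illustrative choice rather than the paper's specific pre-processing:

```python
import numpy as np

def fix_dims(T, n_time=256):
    """Resample the time axis of a (rows, cols, n_samples) tactile
    tensor to a fixed length, so that every pattern shares the same
    l(1) x l(2) x l(3) dimensions; the spatial dimensions are already
    fixed by the array geometry."""
    rows, cols, n = T.shape
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, n_time)
    out = np.empty((rows, cols, n_time))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.interp(t_new, t_old, T[r, c])
    return out

T = np.random.randn(4, 4, 1000)
print(fix_dims(T).shape)  # (4, 4, 256)
```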
<p>Accordingly, the proposed ML scheme models the mapping function
<italic>γ</italic>
by using a dataset
<bold>X</bold>
holding
<italic>N
<sub>p</sub>
</italic>
patterns (samples), where each pattern includes a data tensor
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
and its category label
<italic>y</italic>
∈ {−1, 1}. Even though T usually includes several tactile stimuli and a multiclass problem is addressed, this paper aims to evaluate the advantages of introducing a tensor-based approach; hence a binary classification problem is considered without loss of generality. The literature indeed provides several effective strategies to tackle a multiclass classification scheme by integrating binary classifiers [
<xref rid="b35-sensors-14-10952" ref-type="bibr">35</xref>
].</p>
<p>The setting of the machine adjustable parameters affects the generalization ability of a machine-learning model,
<italic>i.e.</italic>
, its ability to attain reliable accuracy on previously unseen patterns. Indeed, the training phase is usually supported by a
<italic>model-selection</italic>
procedure, which is designed to estimate the parameter setting that may yield the most effective generalization ability. This crucial aspect will be addressed in Section 4. In the case of tensor-SVM, three parameters are involved. The first parameter is the quantity
<italic>C</italic>
, which characterizes the learning model itself (see Section 3.1). The remaining parameters characterize the kernel function
<italic>K</italic>
described in the
<xref rid="APP1" ref-type="app">Appendix</xref>
: the width
<italic>σ</italic>
of the Gaussian kernel, and the number of columns
<italic>α</italic>
in the matrices
<inline-formula>
<mml:math id="mm6">
<mml:mrow>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
and
<inline-formula>
<mml:math id="mm7">
<mml:mrow>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
. The last two parameters are also involved in the configuration of tensor-RLS; in this case, the configuration set is completed by the regularization parameter
<italic>λ</italic>
, as per
<xref rid="FD2" ref-type="disp-formula">Equation (2)</xref>
.</p>
<p>Although a specific value of
<italic>σ</italic>
and
<italic>α</italic>
parameters can be set for each factor kernel
<italic>k</italic>
<sup>z</sup>
, the present research adopts one
<italic>σ</italic>
value and one
<italic>α</italic>
value for every
<italic>k</italic>
<sup>z</sup>
. The role of the latter parameter is crucial because
<italic>α</italic>
is the only quantity that is not included in the configuration of a conventional kernel machine. As anticipated above, a conventional choice for this parameter is
<italic>α</italic>
=
<italic>Q
<sub>z</sub>
</italic>
∀ <italic>z</italic>
∈ {1, …,
<italic>Z</italic>
}. On the other hand, one should also consider that SVD can effectively take out redundancy or noise from data, and this property may prove appealing in the pattern-recognition application at hand. Thus, by fine tuning
<italic>α</italic>
one can expect to decrease the quantity of noise that affects the tensor patterns (or, more precisely, the unfolding of the tensors themselves), and in turn boost the generalization ability. As a result, in the present research the
<italic>α</italic>
quantity is treated as a configurable parameter whose value can vary between 1 and
<italic>Q
<sub>z</sub>
</italic>
.</p>
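The denoising effect of truncating the SVD of an unfolding to α columns can be illustrated as follows: the rank-α reconstruction discards the trailing singular components, where noise tends to concentrate (the toy rank-1 example is an assumption for illustration):

```python
import numpy as np

def truncate_unfolding(M, alpha):
    """Rank-alpha approximation of an unfolded tensor: keep only the
    first alpha singular triplets, discarding trailing components."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :alpha] @ np.diag(s[:alpha]) @ Vt[:alpha]

# Toy check: a rank-1 "signal" corrupted by noise is recovered more
# accurately by the alpha = 1 truncation than by the raw unfolding
rng = np.random.default_rng(0)
u = rng.standard_normal((16, 1))
v = rng.standard_normal((1, 200))
M = u @ v + 0.1 * rng.standard_normal((16, 200))
err_raw = np.linalg.norm(M - u @ v)
err_trunc = np.linalg.norm(truncate_unfolding(M, 1) - u @ v)
print(err_trunc < err_raw)  # True
```

Choosing α too small would instead discard signal components, which is why the text treats α as a configurable parameter to be selected empirically.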
</sec>
</sec>
<sec>
<label>4.</label>
<title>Effective Model Selection to Boost Generalization Performance</title>
<sec>
<label>4.1.</label>
<title>The Problem of Effective Model Selection</title>
<p>The specific problem of interpretation of touch modalities poses major challenges to inductive learning methodologies, which induce a general rule from a set of observed instances. In fact, a relevant constraint is that the training set (
<italic>i.e.</italic>
, the observed instances) actually conveys reliable information about the unknown general rule. In the case of touch-modality interpretation, though, setting up such a training set may not be straightforward, because it is impossible to collect training data unaffected by the subjective interpretation of a predetermined ‘abstract’ touch modality. For example, the same touch modality may generate stimuli that differ in the amount of pressure applied and in the length of the time window spanned by the gesture. As a result, one cannot avoid some degree of overlap between stimuli that in principle originate from different touch modalities.</p>
<p>The presence of noise in the training data is obviously a problem that learning machines should cope with. Indeed, the interpretation of touch modalities represents an applicative scenario in which such problem may prove critical. In this sense, the main concern is the generalization ability of the pattern-recognition system (determined by the settings of the corresponding model parameters),
<italic>i.e.</italic>
, its ability to correctly classify patterns that were not included in the training set. The accuracy at predicting unseen data is the practical criterion to evaluate the effectiveness of a trained system. Hence, the final goal of a training procedure is to define the machine parameterization that can lead to the most effective generalization performances; this process is usually named model selection. In fact, estimating the generalization performance of a learning machine is not a straightforward task. In principle, the literature [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
] provides a variety of theoretical criteria to bound the generalization error of a ML system, but these approaches often lack in practicality. On the other hand, one may exploit empirical criteria [
<xref rid="b36-sensors-14-10952" ref-type="bibr">36</xref>
], which use a subset of training data to support the estimation of the generalization performance. However, the empirical estimation of the generalization error may prove difficult in the presence of a limited training set or of noisy data. Indeed, applications that deal with the interpretation of tactile data may suffer from both problems, as: (1) collecting training data can be onerous; and (2) it is difficult to remove noise from this kind of experiment.</p>
<p>In the conventional formalization, the “true” generalization error, π, of a classifier is unknown because one cannot predict the classifier's behavior over the entire distribution of data; therefore, one uses the performance on the empirical training set as an estimate of π, and bounds the associate generalization performance by means of statistical penalty terms:
<disp-formula id="FD6">
<label>(6)</label>
<mml:math id="mm8">
<mml:mrow>
<mml:mrow>
<mml:mi>π</mml:mi>
</mml:mrow>
<mml:mo>≤</mml:mo>
<mml:mi>ν</mml:mi>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>χ</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>τ</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where ν is the error scored on the empirical training set, χ measures the complexity of the space of classifying functions, and τ penalizes the finiteness of the training set. In general, the task of computing χ may prove quite difficult, as the notion of “complexity” is not standard.</p>
</sec>
<sec>
<label>4.2.</label>
<title>Conventional Approaches to Model Selection</title>
<p>The literature provides a certain variety of methods for the analytical estimation of a classifier's generalization ability; most approaches derive a bound for implementing
<xref rid="FD6" ref-type="disp-formula">Equation (6)</xref>
by taking into account the degrees of freedom in the classifier adjustable parameters, and the configuration of the space of admissible functions that the classifier may take upon [
<xref rid="b37-sensors-14-10952" ref-type="bibr">37</xref>
,
<xref rid="b38-sensors-14-10952" ref-type="bibr">38</xref>
]. These methods rest on a profound theoretical formalism and exhibit general applicability; nonetheless, due to the general assumptions in the classifier characterization, they mostly fail to derive bounds of practical value.</p>
<p>On the other hand, empirical approaches to the estimation of generalization performance prove effective in practical domains, and are therefore widely adopted in real-world applications. Cross-validation [
<xref rid="b37-sensors-14-10952" ref-type="bibr">37</xref>
,
<xref rid="b39-sensors-14-10952" ref-type="bibr">39</xref>
] represents a popular option toward that end; the rationale of this approach is to estimate the error, π, by using available data to mimic the overall training problem. In practice, one splits the available data into a training set, used to minimize ν by adjusting the machine parameters, and a test set, which does not enter the training process and is only used to estimate π. In order to minimize any biasing from the random-splitting process, one iterates the entire procedure over several independent runs, and computes the eventual estimate by some statistical descriptor (e.g., average, minimum, maximum,
<italic>etc.</italic>
). One typically retains the “best” classifier, that is, the parameter settings that yielded the smallest predicted error. Although that procedure is quite popular in the literature, it might yet suffer from some statistical biasing, since the test-set performance often drives the choice of the implemented classifier, and thus enters the training process, albeit in an indirect manner.</p>
<p>The research presented here, therefore, adopts a more rigorous approach, involving the splitting into three independent data sets: a “training” set, a “cross validation” set (having the same meaning and purpose described above), and a “test” set, which is considered only after the target classifier has been selected, and is used to predict the machine's generalization ability. Iterating this procedure over several independent runs removes the influence of random sampling. This procedure will be adopted in the experimental verification of touch-modality recognition.</p>
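The three-way protocol can be sketched as follows; the 1-D threshold classifier and the parameter grid are placeholder assumptions, standing in for the actual tensor-kernel machines and their (C, σ, α) grids:

```python
import numpy as np

def three_way_split(n, rng, frac=(0.6, 0.2, 0.2)):
    """Random split into training / cross-validation / test indices."""
    idx = rng.permutation(n)
    a = int(frac[0] * n)
    b = int((frac[0] + frac[1]) * n)
    return idx[:a], idx[a:b], idx[b:]

def select_model(X, y, params, train_fn, error_fn, rng):
    """Choose the parameter value with the smallest cross-validation
    error; the test set is used only once, after the choice is made."""
    tr, cv, te = three_way_split(len(y), rng)
    best = min(params,
               key=lambda p: error_fn(train_fn(X[tr], y[tr], p), X[cv], y[cv]))
    model = train_fn(X[tr], y[tr], best)
    return best, error_fn(model, X[te], y[te])

# Toy usage: a 1-D threshold classifier whose threshold is the "parameter"
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])[:, None]
y = np.concatenate([-np.ones(100), np.ones(100)])
train = lambda Xt, yt, thr: thr
err = lambda thr, Xv, yv: np.mean(np.sign(Xv[:, 0] - thr) != yv)
best_thr, test_err = select_model(X, y, [-1.0, 0.0, 1.0], train, err, rng)
print(best_thr, test_err)
```

Because the test indices never influence the choice of `best`, the reported test error is an unbiased estimate of generalization performance; averaging over several independent splits, as in the text, would further reduce its variance.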
<p>A limitation of these empirical methods is that they do not take into account the actual capabilities of the classifier model being trained, and rely only on the iterated training process to scan the space of admissible functions. Integrating the empirical sample with a theoretical model of the classifier yields a more accurate estimate of the generalization ability.</p>
</sec>
<sec>
<label>4.3.</label>
<title>Enhancing Model Selection by Maximal-Discrepancy Method</title>
<p>The analysis described in [
<xref rid="b40-sensors-14-10952" ref-type="bibr">40</xref>
] showed that the estimate of π in
<xref rid="FD6" ref-type="disp-formula">Equation (6)</xref>
can be improved by exploiting the notion of complexity formalized in the Maximal Discrepancy (MD) framework [
<xref rid="b40-sensors-14-10952" ref-type="bibr">40</xref>
] to assess χ. Given a training set, a classifier, and a classifier's parameterization, the MD framework estimates χ by exploiting the quantity
<italic>ν̄</italic>
;
<italic>ν̄</italic>
represents the average error scored by the classifier on
<italic>N</italic>
artificial datasets, each obtained by randomly swapping half of the labels in the original training set. One then sets χ = 1−2
<italic>ν̄</italic>
; therefore, the complexity χ is high if the classifier can learn noise. In fact, a complex classifier is usually prone to overfitting [
<xref rid="b36-sensors-14-10952" ref-type="bibr">36</xref>
],
<italic>i.e.</italic>
, an effective performance on the data included in the training set but a poor performance when processing unseen data. Thus, an excessive representation capability may also lead the classification machine to model noise.</p>
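A minimal sketch of the MD estimate: train on datasets whose labels are half-swapped at random and average the resulting error ν̄; a classifier that fits such noise well receives a high complexity χ = 1 − 2ν̄. The 1-NN memorizer below is an illustrative extreme case, not one of the paper's classifiers:

```python
import numpy as np

def md_complexity(X, y, train_fn, error_fn, n_rounds=20, rng=None):
    """Maximal-Discrepancy complexity: chi = 1 - 2 * average error on
    training sets in which half of the labels are randomly flipped."""
    rng = np.random.default_rng(0) if rng is None else rng
    errs = []
    for _ in range(n_rounds):
        y_sw = y.copy()
        flip = rng.choice(len(y), size=len(y) // 2, replace=False)
        y_sw[flip] *= -1.0
        errs.append(error_fn(train_fn(X, y_sw), X, y_sw))
    return 1.0 - 2.0 * float(np.mean(errs))

# A 1-NN memorizer fits any labelling perfectly, so nu_bar = 0 and
# chi = 1: maximal complexity, i.e., maximal ability to learn noise
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sign(rng.standard_normal(40))
train_1nn = lambda Xt, yt: (Xt, yt)
err_1nn = lambda m, Xv, yv: np.mean(
    m[1][np.argmin(((Xv[:, None] - m[0][None]) ** 2).sum(-1), axis=1)] != yv)
print(md_complexity(X, y, train_1nn, err_1nn, rng=rng))  # 1.0
```

A classifier of limited capacity would instead score ν̄ near 0.5 on the randomized labels, yielding χ near 0.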
<p>In [
<xref rid="b41-sensors-14-10952" ref-type="bibr">41</xref>
], the authors indeed showed that the ability of the MD framework to estimate complexity could be further improved. In particular, the methodology discussed in [
<xref rid="b41-sensors-14-10952" ref-type="bibr">41</xref>
] proved that a more accurate estimate of χ can be achieved by taking as reference the level of complexity reached by the classifier when tackling the problem represented in the original training set (
<italic>i.e.</italic>
, the complexity reached to score the training error ν). As a result, the quantity
<italic>ν̄</italic>
should be assessed by using classifiers that do not show a complexity greater than the “reference” complexity. A convenient procedure to estimate the reference complexity is given in [
<xref rid="b41-sensors-14-10952" ref-type="bibr">41</xref>
]; it requires computing two quantities:
<list list-type="bullet">
<list-item>
<p>the hyperplane
<bold>β</bold>
<sup>(RKM)</sup>
that separates the two classes (“+1” and “−1”) of the dataset
<bold>X</bold>
according to a classifier based on a regularized kernel machine;</p>
</list-item>
<list-item>
<p>the hyperplane
<bold>β</bold>
<sup>(REF)</sup>
that one obtains by an unsupervised evaluation of the dataset
<bold>X</bold>
.</p>
</list-item>
</list>
</p>
<p>The latter quantity can be actually computed by adopting a two-step process:
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>divide the dataset
<bold>X</bold>
into two clusters by using an unsupervised clustering method; let X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
denote the subset of data assigned to the first cluster, and X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
denote the remaining subset of data assigned to the second cluster.</p>
</list-item>
<list-item>
<label>(2)</label>
<p>obtain the hyperplanes
<bold>β</bold>
<sup>(+)</sup>
and
<bold>β</bold>
<sup>(−)</sup>
as follows:
<list list-type="simple">
<list-item>
<label>a.</label>
<p>assign the artificial label “+1” to the data belonging to X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
, and the artificial label “−1” to the data belonging to X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
; apply a conventional training to this problem to obtain the hyperplane
<bold>β</bold>
<sup>(+)</sup>
that separates the two classes.</p>
</list-item>
<list-item>
<label>b.</label>
<p>Assign the artificial label “−1” to the data belonging to X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
, and the artificial label “+1” to the data belonging to X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
; apply a conventional training to this problem to obtain the hyperplane
<bold>β</bold>
<sup>(−)</sup>
that separates the two classes.</p>
</list-item>
<list-item>
<label>c.</label>
<p>Set
<bold>β</bold>
<sup>(REF)</sup>
as follows
<disp-formula id="FD7">
<label>(7)</label>
<mml:math id="mm9">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext mathvariant="italic">REF</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>arg</mml:mo>
<mml:mo>min</mml:mo>
</mml:mrow>
<mml:mtext>w</mml:mtext>
</mml:munder>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>+</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext mathvariant="italic">RKM</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo></mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext mathvariant="italic">RKM</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
</list-item>
</list>
</p>
</list-item>
</list>
</p>
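The two-step construction above can be sketched in plain numpy. The following is a minimal illustration, not the authors' implementation: it assumes an RBF kernel and kernel ridge regression as the regularized kernel machine, and uses a naive 2-means clustering (with a crude deterministic initialization) in place of the unsupervised method; all function names are illustrative.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rkm_train(K, y, lam):
    # Dual coefficients of a regularized kernel machine (kernel ridge form):
    # beta = (K + lam * I)^{-1} y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def two_means(X, n_iter=50):
    # Plain 2-means clustering; deterministic end-point init (illustration only)
    c = X[[0, -1]].astype(float)
    for _ in range(n_iter):
        lab = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        c = np.array([X[lab == k].mean(0) for k in (0, 1)])
    return np.where(lab == 0, 1.0, -1.0)

def reference_hyperplane(X, y, lam=1.0, sigma=1.0):
    K = rbf_kernel(X, sigma)
    beta_rkm = rkm_train(K, y, lam)         # supervised solution beta^(RKM)
    y_art = two_means(X)                    # unsupervised split of the dataset
    beta_plus = rkm_train(K, y_art, lam)    # clusters labeled (+1, -1)
    beta_minus = rkm_train(K, -y_art, lam)  # clusters labeled (-1, +1)
    # Equation (7): keep the artificial solution closest to beta^(RKM)
    cands = [beta_plus, beta_minus]
    dists = [np.linalg.norm(b - beta_rkm) for b in cands]
    return beta_rkm, cands[int(np.argmin(dists))]
```

When the clusters happen to coincide with the actual classes, one of the two artificially labeled trainings reproduces the supervised solution exactly, so the reference hyperplane collapses onto it, as in the Figure 4c configuration.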
<p>The rationale behind this approach can be explained by analyzing the configuration schematized in
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4</xref>
.
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4a</xref>
proposes a problem in which the data belonging to
<bold>X</bold>
are intrinsically organized in two clusters. Thus, the unsupervised evaluation of the dataset would lead to the situation illustrated in
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4b</xref>
, which reports the approximate position of the hyperplane
<bold>β</bold>
<sup>(REF)</sup>
. One may conclude that
<bold>β</bold>
<sup>(REF)</sup>
characterizes the “natural” distribution of data. On the other hand, the position of the hyperplane
<bold>β</bold>
<sup>(RKM)</sup>
would result from the analysis of the empirical data;
<italic>i.e.</italic>
,
<bold>β</bold>
<sup>(RKM)</sup>
is obtained by taking into account the actual labels associated to each pattern. As a result, two different situations may arise from the proposed example.
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4c</xref>
refers to the first situation, in which it is supposed that clusters match the actual classes. Conversely,
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4d</xref>
refers to the opposite situation, in which it is supposed that actual classes do not match the natural distribution of data. In the case of
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4c</xref>
,
<bold>β</bold>
<sup>(REF)</sup>
<bold>β</bold>
<sup>(RKM)</sup>
; in the case of
<xref rid="f4-sensors-14-10952" ref-type="fig">Figure 4d</xref>
,
<bold>β</bold>
<sup>(REF)</sup>
<bold>β</bold>
<sup>(RKM)</sup>
. Therefore, whenever the result of clustering matches the true distribution of pattern classes, the unsupervised separation surface
<bold>β</bold>
<sup>(REF)</sup>
and the real classification surface
<bold>β</bold>
<sup>(RKM)</sup>
must coincide. Of course, the opposite case may occur, in which the target distribution is totally uncorrelated with the obtained clusters. In general, however,
<bold>β</bold>
<sup>(REF)</sup>
and
<bold>β</bold>
<sup>(RKM)</sup>
set the constraints for the admissible solutions to the classification problem at hand, as proved in [
<xref rid="b41-sensors-14-10952" ref-type="bibr">41</xref>
]. As a major consequence, such constraints should represent the reference when assessing χ by adopting the MD framework.</p>
<p>For the sake of clarity, Algorithm 1 reports the full procedure to assess χ. In the case of the present framework, the tensor-based versions of SVM and RLS are the classification tools that support the interpretation of touch modalities. Therefore, model selection is designed to set the best parameterization for those machines. The tensor-based version of the kernel k-means clustering method provided the unsupervised tool to be used in the model selection procedure.</p>
<array>
<tbody>
<tr>
<td valign="bottom" colspan="2" rowspan="1">
<hr></hr>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">
<bold>Algorithm 1</bold>
Complexity Assessment</td>
</tr>
<tr>
<td valign="bottom" colspan="2" rowspan="1">
<hr></hr>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Input:</td>
<td valign="top" align="left" rowspan="1" colspan="1">training set
<bold>X</bold>
= {(
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
,
<italic>y</italic>
)
<italic>
<sub>i</sub>
</italic>
;
<italic>i</italic>
= 1,..,
<italic>N
<sub>p</sub>
</italic>
},</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">kernel parameters
<italic>σ</italic>
and
<italic>α</italic>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">regularization coefficient
<italic>λ</italic>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">scaling coefficient ε</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Output:</td>
<td valign="top" align="left" rowspan="1" colspan="1">estimated complexity χ</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">I.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Compute the kernel</italic>
<list list-type="simple">
<list-item>
<p>Build the kernel matrix
<bold>K</bold>
on data {
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
;
<italic>i</italic>
= 1,..,
<italic>N
<sub>p</sub>
</italic>
} with parameters
<italic>σ</italic>
and
<italic>α</italic>
</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">II.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Unsupervised Clustering</italic>
<list list-type="order">
<list-item>
<p>Divide the data {
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
;
<italic>i</italic>
= 1,..,
<italic>N
<sub>p</sub>
</italic>
} into two clusters by exploiting kernel-kmeans with kernel
<bold>K</bold>
</p>
</list-item>
<list-item>
<p>Denote with X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
and X
<sup>(b)</sup>
the two clusters</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">III.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Compute</italic>
<bold>β</bold>
<sup>(RKM)</sup>
<list list-type="order">
<list-item>
<p>Train the RC on the original training set
<bold>X: β</bold>
<sup>(RKM)</sup>
= RCtraining(
<italic>λ</italic>
,
<bold>K</bold>
,
<italic>y</italic>
)</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">IV.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Compute</italic>
<bold>β</bold>
<sup>(+)</sup>
<list list-type="order">
<list-item>
<p>Apply an artificial labeling schema: X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
<italic>y</italic>
= +1, X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
<italic>y</italic>
= −1</p>
</list-item>
<list-item>
<p>Train the RC on the dataset X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
∪ X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
with kernel
<bold>K</bold>
:
<bold>β</bold>
<sup>(+)</sup>
= RCtraining(
<italic>λ</italic>
,
<bold>K</bold>
,
<italic>y</italic>
)</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">V.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Compute</italic>
<bold>β</bold>
<sup>(-)</sup>
<list list-type="order">
<list-item>
<p>Apply an artificial labeling schema: X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
<italic>y</italic>
= −1, X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
<italic>y</italic>
= +1</p>
</list-item>
<list-item>
<p>Train the RC on the dataset X
<sup>(</sup>
<italic>
<sup>a</sup>
</italic>
<sup>)</sup>
∪ X
<sup>(</sup>
<italic>
<sup>b</sup>
</italic>
<sup>)</sup>
with kernel
<bold>K</bold>
:
<bold>β</bold>
<sup>(-)</sup>
= RCtraining(
<italic>λ</italic>
,
<bold>K</bold>
,
<italic>y</italic>
)</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">VI.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Set reference</italic>
<list list-type="simple">
<list-item>
<p>Set Γ
<sub>0</sub>
=
<bold>β</bold>
<sup>t</sup>
<bold>Kβ</bold>
</p>
</list-item>
<list-item>
<p>where:
<inline-formula>
<mml:math id="mm10">
<mml:mrow>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munder>
<mml:mrow>
<mml:mo>arg</mml:mo>
<mml:mo>min</mml:mo>
</mml:mrow>
<mml:mtext>w</mml:mtext>
</mml:munder>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo>+</mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext mathvariant="italic">RKM</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mo></mml:mo>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo></mml:mo>
<mml:msup>
<mml:mrow>
<mml:mtext mathvariant="bold">β</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mtext mathvariant="italic">RKM</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>
</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">VII.</td>
<td valign="top" align="left" rowspan="1" colspan="1">
<italic>Compute ν̄</italic>
<list list-type="order">
<list-item>
<p>Set ν̄ = 0</p>
</list-item>
<list-item>
<p>for
<italic>i</italic>
= 1 to
<italic>N</italic>
<list list-type="alpha-lower">
<list-item>
<p>Set
<italic>λ̂</italic>
=
<italic>λ</italic>
</p>
</list-item>
<list-item>
<p>Generate an artificial training set
<bold>X</bold>
* = {(
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
,
<italic>y*</italic>
)
<italic>
<sub>i</sub>
</italic>
;
<italic>i</italic>
=1,..,
<italic>N
<sub>p</sub>
</italic>
}, where
<italic>y*</italic>
is obtained by randomly swapping half of the label in
<italic>y</italic>
</p>
</list-item>
<list-item>
<p>Train the RC on
<bold>X</bold>
*
<list list-type="simple">
<list-item>
<p>
<bold>β</bold>
= RCtraining(
<italic>λ̂</italic>
,
<bold>K</bold>
,
<italic>y*</italic>
)</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>Set Γ
<sup>*</sup>
=
<bold>β</bold>
<sup>t</sup>
<bold>Kβ</bold>
</p>
</list-item>
<list-item>
<p>Compute the classification error ν* on
<bold>X</bold>
*</p>
</list-item>
<list-item>
<p>if (Γ
<sup>*</sup>
> Γ
<sub>0</sub>
) then: set
<italic>λ̂</italic>
= ε·
<italic>λ;</italic>
goto 2.c;</p>
</list-item>
<list-item>
<p>
<italic>ν̄</italic>
=
<italic>ν̄</italic>
+ ν*</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>Set χ = 1 − 2
<italic>ν̄</italic>
/
<italic>N</italic>
</p>
</list-item>
</list>
</td>
</tr>
<tr>
<td valign="bottom" colspan="2" rowspan="1">
<hr></hr>
</td>
</tr>
</tbody>
</array>
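Step VII of Algorithm 1 can be condensed into a short numpy sketch. This is an illustrative reading, not the reference implementation: it assumes the complexity term Γ is the regularizer β<sup>t</sup>Kβ of the kernel machine, interprets the "set λ̂ = ε·λ" update as multiplicative strengthening of the regularization, and uses the kernel-ridge dual solution as the RC training routine.

```python
import numpy as np

def complexity_estimate(K, y, lam, gamma0, n_rounds=10, eps=10.0, seed=0):
    """Maximal-discrepancy complexity chi, following Algorithm 1, step VII.

    K: precomputed Gram matrix; y: labels in {-1, +1};
    gamma0: reference complexity Gamma_0 (assumed here to be beta^t K beta
    of the reference hyperplane). The multiplicative update lam_hat *= eps
    is one reading of the step "set lam_hat = eps * lam" in the paper.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    nu_bar = 0.0
    for _ in range(n_rounds):
        # b. generate X* by randomly swapping half of the labels in y
        y_star = y.copy()
        flip = rng.choice(n, n // 2, replace=False)
        y_star[flip] = -y_star[flip]
        lam_hat = lam
        while True:
            # c. train the RC on X* (kernel-ridge dual solution)
            beta = np.linalg.solve(K + lam_hat * np.eye(n), y_star)
            # d. complexity of the trained machine
            gamma_star = beta @ K @ beta
            if gamma_star <= gamma0:
                break
            lam_hat *= eps          # f. strengthen regularization, retrain
        # e. classification error nu* on X*
        nu_bar += np.mean(np.sign(K @ beta) != y_star)
    # 3. chi = 1 - 2 * (average error over the N rounds)
    return 1.0 - 2.0 * nu_bar / n_rounds
```

A classifier complex enough to fit the randomly relabeled data drives ν̄ toward zero and χ toward 1, while a well-constrained classifier errs on roughly half of the random labels, giving χ near 0.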
</sec>
</sec>
<sec>
<label>5.</label>
<title>Results and Discussion</title>
<sec>
<label>5.1.</label>
<title>Dataset and Preprocessing</title>
<p>The 70 participants involved in the experiments were required to touch the outer surface of the sensor array using three reference actions (i.e., modalities of touch): sliding the finger, brushing a paintbrush and rolling a washer. The corresponding outputs of the sensor array were collected to build the dataset used as a benchmark to test the proposed framework. To avoid influencing the participants and to allow a subjective interpretation of each gesture, every person was given a written protocol as a guide for the experiments. No particular indications were given to the participants about the duration of the stimuli or the pressure level to apply (the only constraint was to complete every single touch within a time window of 7 s).</p>
<p>For each reference action, every participant was asked to first touch the sensor array moving horizontally over a random line, and then to repeat the action over a randomly chosen vertical line (two different acquisitions). The participant was then asked to repeat the six experiments in the same order to obtain a second sampling. The first sampling was intended as practice of the imagined gesture on the real skin, thus enabling a
<italic>more spontaneous and natural behavior</italic>
during the more
<italic>aware</italic>
second sampling.</p>
<p>The number of acquired patterns (each pattern consists of 16 time signals corresponding to
<italic>charge response vs. time</italic>
provided by each sensor building the sensor array) was 840 (70 participants, three modalities, four patterns for each modality—
<italic>i.e.</italic>
, horizontal and vertical gestures, two runs each). Half of these patterns were actually used in the pattern-recognition analysis, corresponding to the second sampling by each participant and ensuring more spontaneous behavior.</p>
<p>The collected patterns were expressed by a 3-dimensional tensor,
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
∈ ℜ
<sup>4</sup>
⨂ ℜ
<sup>4</sup>
⨂ ℜ
<sup>21000</sup>
. The extension of the 3rd component was determined by the time window allowed in each experiment (7 s) and the adopted sample rate (3 ksps). In fact, when applying the tensor-based kernel approach to those original signals, one would have to work out the SVD of a matrix having 21,000+ elements in one of its dimensions; such a task is computationally impractical and would prove ineffective in terms of numerical accuracy. As a consequence, the pre-processing
<italic>ϕ</italic>
discussed in Section 3.2 remapped the original tensor mostly to reduce the dimensionality of the 3rd component of
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
.</p>
<p>The implementation of
<italic>ϕ</italic>
adopted in this work was designed to take into account two main issues. First, only a limited portion of the 21,000 elements in the 3rd component of
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
actually carry information about the tactile stimulus: for any pattern, the signal of interest lies within a limited time window, whose width depends on the pattern itself. Secondly, the preprocessing,
<italic>ϕ</italic>
, should satisfy:
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
=
<italic>ϕ</italic>
(
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
), with
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
∈ ℜ
<sup>4</sup>
⨂ ℜ
<sup>4</sup>
⨂ ℜ
<italic>
<sup>l</sup>
</italic>
<sup>(3)</sup>
, where
<italic>l</italic>
<sup>(3)</sup>
is a pattern-independent quantity.</p>
<p>The localization of the relevant time window in the 3rd component of
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
was obtained by analyzing the amount of energy provided by the single elements of the sensor. In the following, for the
<italic>i</italic>
-th pattern,
<inline-formula>
<mml:math id="mm11">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-script">S</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
is the 3rd-order tensor obtained after extracting the relevant time window from
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
<sub>i</sub>
; thus
<inline-graphic xlink:href="sensors-14-10952i3.jpg"></inline-graphic>
<sub>i</sub>
∈ ℜ
<sup>4</sup>
⨂ ℜ
<sup>4</sup>
⨂ ℜ
<italic>
<sup>Si</sup>
</italic>
. Then, Algorithm 2 was adopted to shrink the 3rd component of
<inline-formula>
<mml:math id="mm12">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="bold-script">S</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
. In this algorithm (
<xref rid="f5-sensors-14-10952" ref-type="fig">Figure 5</xref>
), a subsampling strategy is applied to work out
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<sub>i</sub>
from
<inline-formula>
<mml:math id="mm13">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="script">S</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
. The tensor
<inline-formula>
<mml:math id="mm14">
<mml:mrow>
<mml:msub>
<mml:mover accent="true">
<mml:mi mathvariant="script">S</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
and the expected size
<italic>D</italic>
of the 3rd component of
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<sub>i</sub>
are the algorithm inputs.</p>
<array>
<tbody>
<tr>
<td valign="bottom" colspan="2" rowspan="1">
<hr></hr>
</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">
<bold>Algorithm 2</bold>
Data Pre-processing</td>
</tr>
<tr>
<td valign="bottom" colspan="2" rowspan="1">
<hr></hr>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Input:</td>
<td valign="top" align="left" rowspan="1" colspan="1">tensor
<inline-formula>
<mml:math id="mm15">
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="script">S</mml:mi>
<mml:mo>¯</mml:mo>
</mml:mover>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1"></td>
<td valign="top" align="left" rowspan="1" colspan="1">parameter
<italic>D</italic>
</td>
</tr>
<tr>
<td valign="top" align="left" rowspan="1" colspan="1">Output:</td>
<td valign="top" align="left" rowspan="1" colspan="1">tensor
<bold>L</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">
<italic>Compute the sampling interval</italic>
</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">  Δ = ⌊
<italic>S
<sub>i</sub>
</italic>
/
<italic>D</italic>
⌋</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">
<italic>Obtain</italic>
<bold>
<italic>L</italic>
</bold>
</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">
<italic>p</italic>
= 1</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">for
<italic>d = 1,..,D</italic>
</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">
<inline-formula>
<mml:math id="mm16">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mo>:</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>:</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>d</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mover accent="true">
<mml:mtext mathvariant="script">S</mml:mtext>
<mml:mo>¯</mml:mo>
</mml:mover>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mo>:</mml:mo>
<mml:mo>,</mml:mo>
<mml:mo>:</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>p</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td valign="top" align="left" colspan="2" rowspan="1">
<bold>
<italic>p</italic>
=
<italic>p</italic>
+ Δ</bold>
</td>
</tr>
<tr>
<td valign="bottom" colspan="2" rowspan="1">
<hr></hr>
</td>
</tr>
</tbody>
</array>
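The subsampling in Algorithm 2 amounts to keeping one time slice of the 4 × 4 × S<sub>i</sub> tensor every ⌊S<sub>i</sub>/D⌋ samples along the 3rd mode. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def subsample_tensor(S, D):
    """Algorithm 2: shrink the 3rd mode of a 4 x 4 x S_i tensor to length D
    by keeping one time slice every floor(S_i / D) samples."""
    step = S.shape[2] // D          # sampling interval
    idx = np.arange(D) * step       # slice positions p = 0, step, 2*step, ...
    return S[:, :, idx]             # resulting tensor L, shape (4, 4, D)
```

Every pattern is thus remapped to a tensor with a pattern-independent 3rd-mode length D, as required by the preprocessing constraints of Section 5.1.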
</sec>
<sec>
<label>5.2.</label>
<title>Experiments: Binary Classification</title>
<p>The effectiveness of the pattern-recognition system in the classification of touch modalities was evaluated by using the dataset obtained as per Section 5.1. The final dataset covered only 65 of the original 70 participants, so as to remove apparent outliers and extremely noisy results, and only the patterns collected in the 2nd run from each participant entered the pattern-recognition simulations. The experimental session involved three binary classification problems:
<list list-type="alpha-upper">
<list-item>
<p>“brushing a paintbrush”
<italic>versus</italic>
“rolling a washer”;</p>
</list-item>
<list-item>
<p>“brushing a paintbrush”
<italic>versus</italic>
“sliding the finger”;</p>
</list-item>
<list-item>
<p>“rolling a washer”
<italic>versus</italic>
“sliding the finger”.</p>
</list-item>
</list>
</p>
<p>The dataset included—for each modality and for each participant—both the horizontal and the vertical gestures, thus each binary testbed held 260 patterns (2 modalities × 65 participants × 2 gestures). The generalization performance of the ML-based classification was measured by randomly splitting the dataset into a training set and a validation set, holding 180 patterns and 80 patterns, respectively. The former drove the adjustment of the classifiers' parameters, thus supporting model selection. The latter was used to measure classification accuracy on unseen data (
<italic>i.e.</italic>
, an empirical estimation of the term π in
<xref rid="FD6" ref-type="disp-formula">Equation (6)</xref>
). The two sets never shared any participant; this made it possible to estimate the generalization ability of the ML algorithm with respect to unseen users, as well. To provide statistical robustness in the generalization estimates, the splitting process was iterated over five different training/validation pairs (
<italic>i.e.</italic>
, five different, independent runs were completed).</p>
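The key constraint in the splitting scheme is that train and validation sets never share a participant, so it is the participants, not the patterns, that must be partitioned. A minimal sketch of one such split (the function name is illustrative; with 4 patterns per participant per binary problem, 45/20 participants yield the 180/80 patterns described above):

```python
import numpy as np

def participant_split(n_participants=65, n_train=45, seed=0):
    """Split participants (not patterns) so that the training and the
    validation set never share a user."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_participants)
    return perm[:n_train], perm[n_train:]
```

Repeating the split with five different seeds reproduces the five independent training/validation runs used in the experiments.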
<p>The following settings were adopted for the three kernel parameters that determined the generalization performances of the tensor-based classifiers:
<list list-type="bullet">
<list-item>
<p>
<italic>λ</italic>
∈ {10
<sup>−3</sup>
, 10
<sup>−2</sup>
, 10
<sup>−1</sup>
, 10
<sup>0</sup>
, 10
<sup>1</sup>
};</p>
</list-item>
<list-item>
<p>σ ∈ {2
<sup>−4</sup>
, 2
<sup>−3</sup>
, 2
<sup>−2</sup>
, 2
<sup>−1</sup>
, 2°, 2
<sup>1</sup>
, 2
<sup>2</sup>
, 2
<sup>3</sup>
, 2
<sup>4</sup>
};</p>
</list-item>
<list-item>
<p>α ∈ {1,
<italic>Q
<sub>z</sub>
</italic>
/2,
<italic>Q
<sub>z</sub>
</italic>
}.</p>
</list-item>
</list>
</p>
<p>Parameter
<italic>C</italic>
in SVM plays the role of 1/
<italic>λ</italic>
, where
<italic>λ</italic>
is the quantity that rules the trade-off between the empirical error and a regularizing term (as per Section 3.1). Parameter σ characterizes the specific kernel function adopted in this work (see
<xref rid="APP1" ref-type="app">Appendix</xref>
). The values of the parameter α correspond to three policies for the removal of columns from the matrices
<bold>V</bold>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
in the kernel computation (see
<xref rid="APP1" ref-type="app">Appendix</xref>
): setting α = 1 meant that only the column associated with the largest singular value was retained; when α =
<italic>Q
<sub>z</sub>
</italic>
/2, half of the significant columns were kept, whereas setting α =
<italic>Q
<sub>z</sub>
</italic>
implied that no column was removed from
<bold>V</bold>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
. Overall, the model selection procedure aimed to pick out the most effective setup from among the various available configurations. In each run, the procedure described in Algorithm 1 processed the training set and identified the parameter values yielding the best predicted performance. Eventually, the performance on the validation set gave an estimate of the actual generalization error, π, thus measuring the relative effectiveness of the model selection procedures.</p>
<p>
<xref rid="t1-sensors-14-10952" ref-type="table">Tables 1</xref>
,
<xref rid="t2-sensors-14-10952" ref-type="table">2</xref>
and
<xref rid="t3-sensors-14-10952" ref-type="table">3</xref>
give the simulation results for the classification problems A, B, and C, respectively. Each table reports on the performance attained on the associate problem when applying the tensor-SVM model. The experiments involved three settings for parameter
<italic>D</italic>
, which drove the sub-sampling rate in the pre-processing approach as per Algorithm 2:
<italic>D</italic>
∈ {20, 50, 100}. As a result, for each pair (run,
<italic>D</italic>
), the table gives:
<list list-type="bullet">
<list-item>
<p>the classification error percentage attained on the validation set by the ML-based predictor;</p>
</list-item>
<list-item>
<p>the parameters setting {
<italic>λ</italic>
, σ, α} used in the predictor as a result of the model selection procedure.</p>
</list-item>
</list>
</p>
<p>Likewise,
<xref rid="t4-sensors-14-10952" ref-type="table">Tables 4</xref>
,
<xref rid="t5-sensors-14-10952" ref-type="table">5</xref>
and
<xref rid="t6-sensors-14-10952" ref-type="table">6</xref>
report on the simulation results obtained by using the tensor-RLS predictor model. The graphs in
<xref rid="f6-sensors-14-10952" ref-type="fig">Figure 6</xref>
recap visually the table results, and provide a chart for each classification problem. In each chart, the
<italic>x</italic>
axis marks the five runs, and the
<italic>y</italic>
axis gives the classification error on the validation set. For each run, six values are plotted: the classification errors attained by tensor-SVM @
<italic>D</italic>
= {20, 50, 100}, and the classification errors attained by tensor-RLS @
<italic>D</italic>
= {20, 50, 100}.</p>
<p>Empirical evidence proves that the tensor-based pattern-recognition technologies could effectively support the classification problems. An analysis of the numerical results leads to some remarks. First, the accuracy values confirmed that touch-modality recognition is a challenging task. In some ways, this might be ascribed to the protocol for data collection, which was designed to avoid specific constraints on the participants' behavior but ultimately widened the variance in the empirical data. On one hand, the protocol ensured that gestures were spontaneous and natural; on the other hand, this inevitably induced a level of overlap between stimuli that in principle belonged to different touch modalities.</p>
<p>The core of the research presented in this paper consists in a practical approach to the model-selection problem for effective parameter setting in real applications. Toward that purpose, the graphs in
<xref rid="f7-sensors-14-10952" ref-type="fig">Figure 7</xref>
highlight the advantages of the Maximal-Discrepancy criterion for model selection, comparing the model selection performed according to Algorithm 1 with the model selection resulting from conventional cross-validation. In the latter tests, the training set of 180 patterns was repeatedly split into a training and a test set for model selection, and the remaining 80 patterns formed the validation set for an unbiased error estimate; the same validation set was used to evaluate the generalization error scored under MD-based selection. For the sake of brevity,
<xref rid="f7-sensors-14-10952" ref-type="fig">Figure 7</xref>
only refers to problem A, but similar results were observed for all problems. The graphs give the classification errors (on the common validation set) by the tensor-SVMs.
<xref rid="f7-sensors-14-10952" ref-type="fig">Figure 7a–c</xref>
refer to the experiments with
<italic>D</italic>
= 20,
<italic>D</italic>
= 50, and
<italic>D</italic>
= 100, respectively. In each graph, the
<italic>x</italic>
axis marks the five different runs, whereas the
<italic>y</italic>
axis gives the classification error (error percentage on the validation set).</p>
<p>
<xref rid="f8-sensors-14-10952" ref-type="fig">Figure 8</xref>
illustrates the results obtained on problem A with tensor-RLS. The graphs again prove that model selection supported by Algorithm 1 mostly yielded lower validation errors than those achieved by conventional cross-validation.</p>
<p>Secondly, the numerical results suggest that, overall, tensor-SVM slightly outperformed tensor-RLS on the various classification problems. However, one should consider that, in both cases, the classification error scored on a specific classification problem varies significantly across the different runs. This behavior confirms that the presence of noise and variance may be a major concern when dealing with tactile data.</p>
<p>Finally, classification problem C proved to be the most difficult task for the ML systems, as the predictors sometimes could not attain a classification error lower than 20%. Such a result indicates that the involved touch modalities (“sliding the finger” and “rolling a washer”) proved quite difficult to discriminate. Conversely, the best performances in terms of classification error were obtained for problem A, in which tensor-SVM scored a classification error of 2.5%.</p>
</sec>
<sec>
<label>5.3.</label>
<title>Experiments: Multiclass Problem</title>
<p>A second empirical session addressed a 3-class classification problem, involving the touch modalities covered by the dataset. Three “1-versus-all” predictors were independently trained to solve as many binary classification problems (“one touch modality
<italic>versus</italic>
the others”); as a result, each predictor could yield two alternative results: either the test pattern was ascribed to the classifier-specific touch modality, or the classifier prompted a “don't know” outcome. The system finally assigned a touch modality to a test pattern according to the following rules:
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>if one classifier ascribed the test pattern to a specific touch modality, whereas the other modules both prompted a “don't know” outcome, the pattern was classified accordingly;</p>
</list-item>
<list-item>
<label>(2)</label>
<p>otherwise, the pattern was categorized according to the predictor whose decision function,
<italic>f</italic>
(
<bold>x</bold>
), turned out to be highest.</p>
</list-item>
</list>
</p>
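The two decision rules above reduce to a simple arbitration over the three decision-function values. A minimal sketch, assuming each 1-versus-all classifier "claims" a pattern when its decision function is positive and answers "don't know" otherwise (the function name and the zero threshold are illustrative):

```python
import numpy as np

def one_vs_all_decision(f_values, threshold=0.0):
    """Combine three 1-versus-all decision functions f_k(x) into one class.

    Rule (1): if exactly one classifier claims the pattern, use its class.
    Rule (2): otherwise (zero or several claims), fall back to the classifier
    with the highest decision-function value.
    """
    claims = [k for k, f in enumerate(f_values) if f > threshold]
    if len(claims) == 1:
        return claims[0]
    return int(np.argmax(f_values))
```

For instance, with decision values (0.3, 0.7, −1.0) both the first and the second classifier claim the pattern, so rule (2) assigns it to the second touch modality.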
<p>The multiclass experiment was set up according to the following algorithm:
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>Randomly split the set of 65 participants into two subsets, TG and TT, including 45 participants and 20 participants, respectively.</p>
</list-item>
<list-item>
<label>(2)</label>
<p>For each 1-vs-all classification problem, generate a training set containing a total of 180 patterns. Half of the patterns are gathered by including (for each participant in TG) the horizontal and the vertical gestures associated to the touch modality addressed by the specific classification module (45 participants × 2 gestures = 90 patterns). The remaining 90 patterns are obtained by randomly selecting (for the participants in TG) gestures associated to the other two touch modalities.</p>
</list-item>
<list-item>
<label>(3)</label>
<p>Generate a test set by including, for each modality and for each participant in TT, both the horizontal and the vertical gestures. As a result, the test set holds (3 modalities × 20 participants × 2 gestures =) 120 patterns.</p>
</list-item>
</list>
</p>
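<p>The splitting protocol can be sketched as follows. This is an illustrative sketch of steps 1–3, not the authors' implementation; the modality names and gesture labels are placeholders:</p>

```python
import random

MODALITIES = ("modality_A", "modality_B", "modality_C")  # placeholder names

def build_splits(participants, seed=0):
    """Steps 1-3: split the participants and build the 1-vs-all sets."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    tg, tt = shuffled[:45], shuffled[45:65]             # step 1: TG and TT

    training_sets = {}
    for target in MODALITIES:                            # step 2
        # 45 participants x 2 gestures = 90 positive patterns.
        positives = [(p, target, g) for p in tg for g in ("h", "v")]
        # 90 negative patterns drawn from the other two modalities.
        others = [m for m in MODALITIES if m != target]
        negatives = [(p, rng.choice(others), rng.choice(("h", "v")))
                     for p in tg for _ in range(2)]
        training_sets[target] = positives + negatives    # 180 patterns

    # Step 3: 3 modalities x 20 participants x 2 gestures = 120 patterns.
    test_set = [(p, m, g) for p in tt for m in MODALITIES
                for g in ("h", "v")]
    return training_sets, test_set

training_sets, test_set = build_splits(range(65))
print(len(training_sets["modality_A"]), len(test_set))  # 180 120
```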
<p>To ensure statistical robustness in the generalization estimates, the splitting/training/testing process was iterated over five different runs.
<xref rid="t7-sensors-14-10952" ref-type="table">Table 7</xref>
gives the simulation results for the classification problem. The table reports the performance attained with the tensor-SVM and the tensor-RLS. Two quantities measure overall performance: (1) the best classification error attained on the test set over the five runs; (2) the classification error on the test set, averaged over the five runs. The last row of the table reports the parameter settings that yielded these performances.</p>
<p>The results reported in
<xref rid="t7-sensors-14-10952" ref-type="table">Table 7</xref>
show that the tensor-based method was able to address the 3-touch classification problem. The measured performances, though, confirmed that the recognition of touch modalities is a challenging task, as the classification errors remain larger than 22%. As anticipated above, such results should be analyzed by taking into account the protocol adopted for data collection.</p>
</sec>
</sec>
<sec sec-type="conclusions">
<label>6.</label>
<title>Conclusions</title>
<p>This paper addressed the development of computational intelligence techniques to recognize touch modalities using an artificial skin as the sensing system. The proposed pattern-recognition system is specifically designed to deal with the tensor morphology of the tactile signals.</p>
<p>A dedicated experimental campaign, involving a high number of participants, yielded a representative tactile data set, covering a wide range of interpretations of the experimental protocol. The reported results prove that the proposed pattern-recognition system achieves consistent performance on the bi-class classification problems adopted as test bed. In this regard, the paper introduced a framework that embedded a criterion to drive model selection effectively, and therefore supported practical applications involving tactile interaction problems.</p>
<p>The paper focused on the discrimination of touch modalities, but it is worth noting that a tensor-based approach might be useful for discriminating tactile data in general. When extending the framework to a wider range of tactile data, the major benefits would consist in the specific tensor-based representation, which best fits the nature of the empirical data, and in the capability of machine learning tools to learn the classification procedure automatically from empirical data. On the other hand, some drawbacks might derive from the need for adequate and effective features to express the mission-critical data contents, and from the requirement of a considerable amount of empirical observations to avoid or limit over-fitting phenomena.</p>
<p>A fair comparison with other ML-based approaches to the classification of touch modalities [
<xref rid="b13-sensors-14-10952" ref-type="bibr">13</xref>
<xref rid="b15-sensors-14-10952" ref-type="bibr">15</xref>
] may prove difficult to carry out, because of both the lack of a common test bed and the dissimilarity in the prescribed targets. It is worth noting that the approaches proposed in the literature exploited ML technologies without explicitly addressing the issue of model parameterization; hence, the specific contribution of those works consists in showing that ML tools such as SVM, SOM, and AdaBoost can effectively tackle the interpretation of touch modalities. On the other hand, those methods do not cover the problem of model selection and its related outcomes, which is instead the core of the framework presented in this paper.</p>
<p>Predicting generalization performance is in fact extremely important, in that every learning machine is characterized by a set of adjustable parameters. The design of an effective classifier requires that, first, a criterion to drive model selection is defined and, second, that generalization performance is evaluated only after the model parameters have been set for run-time operation. This is especially true in the presence of complex domains with few empirical data available, as is the case of touch-modality recognition.</p>
<p>From this viewpoint, the paper has confirmed the complexity of the underlying sensorial problem; on the other hand, the research yielded a reliable and practical procedure to predict system performance before deployment. A comparison of the Maximal-Discrepancy method with conventional cross-validation supported the advantages of the former approach in the tensor-based paradigm.</p>
</sec>
</body>
<back>
<ack>
<p>This work was partially supported by the European project “ROBOSKIN” about “Skin-Based Technologies and Capabilities for Safe, Autonomous and Interactive Robots”, under grant agreement No. 231500.</p>
</ack>
<notes>
<title>Author Contributions</title>
<p>Paolo Gastaldo contributed in the application of Tensor-based representation to Kernel methods and Support Vector Machines, and took care of the experiments involving machine learning methods for tactile data processing. He mostly contributed to the writing of Sections 1, 3 and 5.</p>
<p>Luigi Pinna has been involved in the design of the interface electronics, development of the LabVIEW data acquisition program, management of the experimental tests, and contributed in writing/editing the Section 2.2 of the present manuscript.</p>
<p>Lucia Seminara has been involved in the tactile system manufacturing and the design/management of the experimental tests, and contributed in writing/editing the Abstract, Introduction, Sections 2 and 5.1, Conclusion sections of the present manuscript.</p>
<p>Maurizio Valle contributed to the design of the interface electronics and the tactile system manufacturing and test, and contributed in writing/editing Section 2 of the present manuscript.</p>
<p>Rodolfo Zunino contributed in the development of model-selection techniques, including conventional methods and maximal discrepancy, and their application to the tuning of trained classifiers. He mostly contributed to the writing of Sections 1, 4 and 5.</p>
</notes>
<notes>
<title>Conflicts of Interest:</title>
<p>The authors declare no conflict of interest.</p>
</notes>
<app-group>
<app id="APP1">
<label>Appendix:</label>
<title>Tensor-Based Sensorial Signal Processing</title>
<p>This Appendix outlines the steps to compute the entries of a tensor-based kernel function,
<italic>K</italic>
. The scheme refers to the computation of a generic entry
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
) of the complete matrix, where
<italic>i</italic>
and
<italic>j</italic>
are two patterns of the dataset. First, some notations are introduced:
<list list-type="bullet">
<list-item>
<p>
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
is a
<italic>Z</italic>
-th order tensor; thus,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
∈ ℜ
<italic>
<sup>l(1)</sup>
</italic>
⊗ ℜ
<italic>
<sup>l(2)</sup>
</italic>
⊗⋯⊗ ℜ
<italic>
<sup>l(Z)</sup>
</italic>
.</p>
</list-item>
<list-item>
<p>
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
and
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
are the tensors that characterize the
<italic>i</italic>
-th pattern and the
<italic>j</italic>
-th pattern, respectively.</p>
</list-item>
<list-item>
<p>
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
) is the kernel entry for patterns
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
and
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
; hence,
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
) ≡
<italic>K</italic>
(
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
).</p>
</list-item>
</list>
</p>
<p>The four steps to be completed to work out a kernel entry
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
) can be formalized as follows. For the sake of clarity, each step ends by reporting the inputs to be processed and the generated outputs.</p>
<p>
<list list-type="roman-upper">
<list-item>
<p>Unfolding</p>
<p>The
<italic>unfolding</italic>
of a tensor rearranges the elements of
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
into a matrix [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
]. As
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
is a
<italic>Z</italic>
-th order tensor,
<italic>Z</italic>
matrices are obtained by applying as many unfolding ways (or modes) [
<xref rid="b42-sensors-14-10952" ref-type="bibr">42</xref>
]. Accordingly,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
∈ ℜ
<italic>
<sup>Mz</sup>
</italic>
⊗ ℜ
<italic>
<sup>Nz</sup>
</italic>
is the
<italic>z</italic>
-th mode matrix unfolding, where
<list list-type="simple">
<list-item>
<p>
<italic>Mz</italic>
=
<italic>l(z)</italic>
</p>
</list-item>
<list-item>
<p>
<italic>Nz</italic>
=
<italic>l</italic>
(
<italic>z</italic>
+ 1) ·
<italic>l</italic>
(
<italic>z</italic>
+ 2) ⋯
<italic>l</italic>
(
<italic>Z</italic>
) ·
<italic>l</italic>
(1) ·
<italic>l</italic>
(2) ⋯
<italic>l</italic>
(
<italic>z</italic>
− 1).</p>
</list-item>
<list-item>
<p>Inputs:
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
and
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
</p>
</list-item>
<list-item>
<p>Outputs: {
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(1)</sub>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(2)</sub>
,…,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
} and {
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(1)</sub>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(2)</sub>
,…,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
}, where
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
∈ ℜ
<italic>
<sup>Mz</sup>
</italic>
⊗ ℜ
<italic>
<sup>Nz</sup>
</italic>
.</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>Singular Value Decomposition</p>
<p>All the matrix unfoldings
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
are factorized by using Singular Value Decomposition (SVD) [
<xref rid="b20-sensors-14-10952" ref-type="bibr">20</xref>
]. In general, a matrix unfolding
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
∈ ℜ
<italic>
<sup>Mz</sup>
</italic>
⊗ ℜ
<italic>
<sup>Nz</sup>
</italic>
yields the following factorization:
<disp-formula id="FD8">
<mml:math id="mm17">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="bold-script">L</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:msub>
<mml:mtext mathvariant="bold">U</mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:msub>
<mml:mtext mathvariant="bold">S</mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msub>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>t</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</disp-formula>
where
<bold>U</bold>
is an
<italic>Mz</italic>
×
<italic>Mz</italic>
orthogonal matrix and
<bold>V</bold>
is an
<italic>Nz</italic>
×
<italic>Nz</italic>
orthogonal matrix.
<bold>S</bold>
is an
<italic>Mz</italic>
×
<italic>Nz</italic>
matrix whose off-diagonal entries are all zeros and whose diagonal elements satisfy
<disp-formula id="FD9">
<mml:math id="mm18">
<mml:mrow>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:mo></mml:mo>
<mml:msub>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mi>Q</mml:mi>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo></mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:math>
</disp-formula>
where
<italic>Qz</italic>
=
<italic>min</italic>
(
<italic>Mz</italic>
,
<italic>Nz</italic>
). Actually, SVD allows one to obtain a matrix approximation of
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
by removing from both
<bold>S</bold>
and
<bold>V</bold>
all the columns from the (
<italic>α</italic>
+ 1)th to the
<italic>N
<sub>z</sub>
</italic>
th (thus, 1 ≤
<italic>α</italic>
<
<italic>N
<sub>z</sub>
</italic>
).</p>
<p>
<list list-type="simple">
<list-item>
<p>Inputs: {
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(1)</sub>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(2)</sub>
,…,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
} and {
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(1)</sub>
,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(2)</sub>
,…,
<inline-graphic xlink:href="sensors-14-10952i2.jpg"></inline-graphic>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
}</p>
</list-item>
<list-item>
<p>Outputs: {
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(1)</sub>
,
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(2)</sub>
,…,
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
} and {
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(1)</sub>
,
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(2)</sub>
,…,
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
}, where
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
,
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
∈ ℜ
<italic>
<sup>Nz</sup>
</italic>
⊗ ℜ
<italic>
<sup>Nz</sup>
</italic>
</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>Factor Kernel</p>
<p>A factor kernel
<italic>k</italic>
<sup>z</sup>
is worked out for each
<italic>z</italic>
∈{1, …,
<italic>Z</italic>
}. The factor kernel is defined as:
<disp-formula id="FD10">
<label>A1</label>
<mml:math id="mm19">
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mi>z</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:mo>exp</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mi>σ</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
<mml:msubsup>
<mml:mrow>
<mml:mrow>
<mml:mo></mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext>t</mml:mtext>
</mml:mrow>
</mml:msubsup>
<mml:mo></mml:mo>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mtext>t</mml:mtext>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo></mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>F</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
where ∥·∥
<italic>
<sub>F</sub>
</italic>
is the Frobenius norm. In (
<xref rid="FD10" ref-type="disp-formula">A1</xref>
),
<inline-formula>
<mml:math id="mm20">
<mml:mrow>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
is the matrix obtained by removing from
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(z)</sub>
all the columns from the (
<italic>α</italic>
+ 1)th to the
<italic>N
<sub>z</sub>
</italic>
th. The same notation holds for
<inline-formula>
<mml:math id="mm21">
<mml:mrow>
<mml:msubsup>
<mml:mtext mathvariant="bold">V</mml:mtext>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>z</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>α</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>
. In principle, one can set
<italic>α</italic>
=
<italic>Q
<sub>z</sub>
</italic>
<italic>z</italic>
∈ {1, …,
<italic>Z</italic>
}, as the remaining columns of
<bold>V</bold>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
do not carry any useful information on
<inline-graphic xlink:href="sensors-14-10952i1.jpg"></inline-graphic>
<sub>(</sub>
<italic>
<sub>z</sub>
</italic>
<sub>)</sub>
(see Step II). However, the present work will show that
<italic>α</italic>
can be considered a tunable parameter (with
<italic>α</italic>
<
<italic>Q
<sub>z</sub>
</italic>
). This aspect is discussed in detail in Section 3.3.</p>
<p>
<list list-type="simple">
<list-item>
<p>Inputs: {
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(1)</sub>
,
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(2)</sub>
,…,
<bold>V</bold>
<italic>
<sub>i</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
} and {
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(1)</sub>
,
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(2)</sub>
,…,
<bold>V</bold>
<italic>
<sub>j</sub>
</italic>
<sub>(</sub>
<italic>
<sub>Z</sub>
</italic>
<sub>)</sub>
}</p>
</list-item>
<list-item>
<p>Outputs:
<italic>k</italic>
<sup>z</sup>
(
<italic>i</italic>
,
<italic>j</italic>
), with
<italic>z</italic>
= 1,..,
<italic>Z</italic>
</p>
</list-item>
</list>
</p>
</list-item>
<list-item>
<p>Kernel Entry</p>
<p>The kernel entry
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
) is finally obtained as follows:
<disp-formula id="FD11">
<label>A2</label>
<mml:math id="mm22">
<mml:mrow>
<mml:mi>K</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>=</mml:mo>
<mml:munderover>
<mml:mo></mml:mo>
<mml:mrow>
<mml:mi>z</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>Z</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mi>z</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
</p>
<p>Expressions (
<xref rid="FD10" ref-type="disp-formula">A1</xref>
) and (
<xref rid="FD11" ref-type="disp-formula">A2</xref>
) show that
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
) is a product of Gaussian kernels. Thus the kernel function (
<xref rid="FD11" ref-type="disp-formula">A2</xref>
) actually extends to tensor patterns the conventional Gaussian (RBF) kernel [
<xref rid="b31-sensors-14-10952" ref-type="bibr">31</xref>
], which represents a popular choice for standard learning systems based on kernel machines.</p>
<p>
<list list-type="simple">
<list-item>
<p>Inputs: {
<italic>k</italic>
<sup>1</sup>
(
<italic>i</italic>
,
<italic>j</italic>
),
<italic>k</italic>
<sup>2</sup>
(
<italic>i</italic>
,
<italic>j</italic>
), …,
<italic>k
<sup>Z</sup>
</italic>
(
<italic>i</italic>
,
<italic>j</italic>
)}</p>
</list-item>
<list-item>
<p>Output:
<italic>K</italic>
(
<italic>i</italic>
,
<italic>j</italic>
)</p>
</list-item>
</list>
</p>
</list-item>
</list>
</p>
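<p>The four steps can be sketched end to end with NumPy. This is an illustrative sketch of Equations (A1) and (A2) under the definitions above, not the authors' implementation; the function names are placeholders, while sigma and alpha are the tunable parameters discussed in the text:</p>

```python
import numpy as np

def unfold(tensor, mode):
    """z-th mode unfolding: an Mz x Nz matrix with Mz = l(z)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tensor_kernel(Xi, Xj, alpha, sigma):
    """Kernel entry K(i, j) for two Z-th order tensors of equal shape."""
    K = 1.0
    for z in range(Xi.ndim):
        # Steps I-II: unfold, then keep the first alpha right singular
        # vectors V(alpha) of each mode-z unfolding.
        _, _, Vti = np.linalg.svd(unfold(Xi, z), full_matrices=False)
        _, _, Vtj = np.linalg.svd(unfold(Xj, z), full_matrices=False)
        Vi, Vj = Vti[:alpha].T, Vtj[:alpha].T
        # Step III: factor kernel (A1) on the projector difference.
        diff = Vi @ Vi.T - Vj @ Vj.T
        K *= np.exp(-np.linalg.norm(diff, "fro") ** 2 / (2 * sigma ** 2))
    # Step IV: K accumulates the product over z, as in Equation (A2).
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
print(tensor_kernel(X, X, alpha=2, sigma=1.0))  # identical tensors -> 1.0
```

<p>Since each factor lies in (0, 1], the product in (A2) stays in (0, 1] as well, and equals 1 only when all truncated subspaces coincide; alpha must not exceed Q<italic><sub>z</sub></italic> for any mode.</p>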
</app>
</app-group>
<ref-list>
<title>References</title>
<ref id="b1-sensors-14-10952">
<label>1.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Schmitz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Pattacini</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Nori</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Natale</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Metta</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Design, realization and sensorization of a dexterous hand: The iCub design choices</article-title>
<conf-name>Proceedings of the 2010 IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010)</conf-name>
<conf-loc>Nashville, TN, USA</conf-loc>
<conf-date>6–8 December 2010</conf-date>
</element-citation>
</ref>
<ref id="b2-sensors-14-10952">
<label>2.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Ascia</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Biso</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ansaldo</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Schmitz</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ricci</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Natale</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Metta</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Improvement of tactile capacitive sensors of the humanoid robot iCub's fingertips</article-title>
<conf-name>Proceedings of the 2011 IEEE Sensors</conf-name>
<conf-loc>Limerick, Ireland</conf-loc>
<conf-date>28–31 October 2011</conf-date>
<fpage>504</fpage>
<lpage>507</lpage>
</element-citation>
</ref>
<ref id="b3-sensors-14-10952">
<label>3.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dahiya</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Cattin</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Adami</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Collini</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Barboni</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Lorenzelli</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Oboe</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Metta</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Brunetti</surname>
<given-names>F.</given-names>
</name>
</person-group>
<article-title>Towards Tactile Sensing System on Chip for Robotic Applications</article-title>
<source>IEEE Sens. J.</source>
<year>2011</year>
<pub-id pub-id-type="doi">10.1109/JSEN.2011.2159835</pub-id>
</element-citation>
</ref>
<ref id="b4-sensors-14-10952">
<label>4.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Dahl</surname>
<given-names>T.S.</given-names>
</name>
<name>
<surname>Swere</surname>
<given-names>E.A.R.</given-names>
</name>
<name>
<surname>Palmer</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Touch-triggered protective reflexes for safer robots</article-title>
<source>New Frontiers in Human-Robot Interaction</source>
<edition>1st</edition>
<person-group person-group-type="editor">
<name>
<surname>Dautenhahn</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Saunders</surname>
<given-names>J.</given-names>
</name>
</person-group>
<publisher-name>John Benjamins Publishing Co.</publisher-name>
<publisher-loc>Amsterdam, The Netherlands</publisher-loc>
<year>2011</year>
<fpage>281</fpage>
<lpage>304</lpage>
</element-citation>
</ref>
<ref id="b5-sensors-14-10952">
<label>5.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Argall</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Billard</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>A survey of tactile human-robot interactions</article-title>
<source>Robot. Auton. Syst.</source>
<year>2010</year>
<volume>58</volume>
<fpage>1159</fpage>
<lpage>1176</lpage>
</element-citation>
</ref>
<ref id="b6-sensors-14-10952">
<label>6.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Kandel</surname>
<given-names>E.R.</given-names>
</name>
<name>
<surname>Schwartz</surname>
<given-names>J.H.</given-names>
</name>
<name>
<surname>Jessell</surname>
<given-names>T.M.</given-names>
</name>
</person-group>
<source>Principles of Neural Science</source>
<edition>4th</edition>
<publisher-name>McGraw-Hill</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2000</year>
</element-citation>
</ref>
<ref id="b7-sensors-14-10952">
<label>7.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dahiya</surname>
<given-names>R.S.</given-names>
</name>
<name>
<surname>Mittendorfer</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Lumelsky</surname>
<given-names>V.J.</given-names>
</name>
</person-group>
<article-title>Directions Toward Effective Utilization of Tactile Skin: A Review</article-title>
<source>IEEE Sens. J.</source>
<year>2013</year>
<volume>13</volume>
<fpage>4121</fpage>
<lpage>4138</lpage>
</element-citation>
</ref>
<ref id="b8-sensors-14-10952">
<label>8.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Decherchi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gastaldo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Dahiya</surname>
<given-names>R.S.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zunino</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Tactile-Data Classification of Contact Materials Using Computational Intelligence</article-title>
<source>IEEE Trans. Robot.</source>
<year>2011</year>
<volume>37</volume>
<fpage>635</fpage>
<lpage>639</lpage>
</element-citation>
</ref>
<ref id="b9-sensors-14-10952">
<label>9.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fishel</surname>
<given-names>J.A.</given-names>
</name>
<name>
<surname>Loeb</surname>
<given-names>G.E.</given-names>
</name>
</person-group>
<article-title>Bayesian exploration for intelligent identification of textures</article-title>
<source>Front. Neurorobot.</source>
<year>2012</year>
<volume>6</volume>
<fpage>1</fpage>
<lpage>20</lpage>
<pub-id pub-id-type="pmid">22393319</pub-id>
</element-citation>
</ref>
<ref id="b10-sensors-14-10952">
<label>10.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Mazid</surname>
<given-names>A.M.</given-names>
</name>
<name>
<surname>Russell</surname>
<given-names>R.A.</given-names>
</name>
</person-group>
<article-title>A Robotic Opto-tactile Sensor for Assessing Object Surface Texture</article-title>
<conf-name>Proceedings of the IEEE Conference on Robotics, Automation and Mechatronics</conf-name>
<conf-loc>Bangkok, Thailand</conf-loc>
<conf-date>7–9 June 2006</conf-date>
<fpage>1</fpage>
<lpage>5</lpage>
</element-citation>
</ref>
<ref id="b11-sensors-14-10952">
<label>11.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Enomoto</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Ohnishi</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Tactile recognition system for surgical robots</article-title>
<conf-name>Proceedings of the 3rd IEEE International Conference on Industrial Informatics (INDIN)</conf-name>
<conf-loc>Perth, Australia</conf-loc>
<conf-date>10–12 August 2005</conf-date>
</element-citation>
</ref>
<ref id="b12-sensors-14-10952">
<label>12.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dahiya</surname>
<given-names>R.S.</given-names>
</name>
<name>
<surname>Metta</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Cannata</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Guest Editorial Special Issue on Robotic Sense of Touch</article-title>
<source>IEEE Trans. Robot.</source>
<year>2011</year>
<volume>27</volume>
<fpage>385</fpage>
<lpage>388</lpage>
</element-citation>
</ref>
<ref id="b13-sensors-14-10952">
<label>13.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Iwata</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Sugano</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Human-robot-contact-state identification based on tactile recognition</article-title>
<source>IEEE Trans. Ind. Electron.</source>
<year>2005</year>
<fpage>1468</fpage>
<lpage>1477</lpage>
</element-citation>
</ref>
<ref id="b14-sensors-14-10952">
<label>14.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tawil</surname>
<given-names>D.S.</given-names>
</name>
<name>
<surname>Rye</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Velonaki</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Interpretation of the modality of touch on an artificial arm covered with an EIT-based sensitive skin</article-title>
<source>Int. J. Robot. Res.</source>
<year>2012</year>
<volume>31</volume>
<fpage>1627</fpage>
<lpage>1641</lpage>
</element-citation>
</ref>
<ref id="b15-sensors-14-10952">
<label>15.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Flagg</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Tam</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>MacLean</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Flagg</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Conductive Fur Sensing for a Gesture-Aware Furry Robot</article-title>
<conf-name>Proceedings of the IEEE Haptics Symposium 2012</conf-name>
<conf-loc>Vancouver, BC, Canada</conf-loc>
<conf-date>4–7 March 2012</conf-date>
</element-citation>
</ref>
<ref id="b16-sensors-14-10952">
<label>16.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Johnsson</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Balkenius</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Sense of touch in robots with self-organizing maps</article-title>
<source>IEEE Trans. Robot.</source>
<year>2011</year>
<volume>27</volume>
<fpage>498</fpage>
<lpage>507</lpage>
</element-citation>
</ref>
<ref id="b17-sensors-14-10952">
<label>17.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiménez</surname>
<given-names>A.R.</given-names>
</name>
<name>
<surname>Soembagijo</surname>
<given-names>A.S.</given-names>
</name>
<name>
<surname>Reynaerts</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>van Brussel</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Ceres</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Pons</surname>
<given-names>J.L.</given-names>
</name>
</person-group>
<article-title>Featureless classification of tactile contacts in a gripper using neural networks</article-title>
<source>Sens. Actuators A Phys.</source>
<year>1997</year>
<volume>62</volume>
<fpage>488</fpage>
<lpage>491</lpage>
</element-citation>
</ref>
<ref id="b18-sensors-14-10952">
<label>18.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jamali</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Sammut</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Majority voting: Material classification by tactile sensing using surface texture</article-title>
<source>IEEE Trans. Robot.</source>
<year>2011</year>
<volume>27</volume>
<fpage>508</fpage>
<lpage>521</lpage>
</element-citation>
</ref>
<ref id="b19-sensors-14-10952">
<label>19.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gastaldo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Pinna</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Seminara</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Zunino</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>A Tensor-Based Pattern-Recognition Framework for the Interpretation of Touch Modality in Artificial Skin Systems</article-title>
<source>IEEE Sens. J.</source>
<year>2014</year>
<volume>14</volume>
<fpage>2216</fpage>
<lpage>2225</lpage>
</element-citation>
</ref>
<ref id="b20-sensors-14-10952">
<label>20.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Signoretto</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>de Lathauwer</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Suykens</surname>
<given-names>J.A.K.</given-names>
</name>
</person-group>
<article-title>A kernel-based framework to tensorial data analysis</article-title>
<source>Neural Netw.</source>
<year>2011</year>
<volume>24</volume>
<fpage>861</fpage>
<lpage>874</lpage>
<pub-id pub-id-type="pmid">21703821</pub-id>
</element-citation>
</ref>
<ref id="b21-sensors-14-10952">
<label>21.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dahiya</surname>
<given-names>R.S.</given-names>
</name>
<name>
<surname>Metta</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Sandini</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Tactile Sensing: From Humans to Humanoids</article-title>
<source>IEEE Trans. Robot.</source>
<year>2010</year>
<volume>26</volume>
<fpage>1</fpage>
<lpage>20</lpage>
</element-citation>
</ref>
<ref id="b22-sensors-14-10952">
<label>22.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lee</surname>
<given-names>H.-K.</given-names>
</name>
<name>
<surname>Chang</surname>
<given-names>S.-I.</given-names>
</name>
<name>
<surname>Yoon</surname>
<given-names>E.</given-names>
</name>
</person-group>
<article-title>A Flexible Polymer Tactile Sensor: Fabrication and Modular Expandability for Large Area Deployment</article-title>
<source>J. Microelectromech. Syst.</source>
<year>2006</year>
<volume>15</volume>
<fpage>1681</fpage>
<lpage>1686</lpage>
</element-citation>
</ref>
<ref id="b23-sensors-14-10952">
<label>23.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Nalwa</surname>
<given-names>H.S.</given-names>
</name>
</person-group>
<source>Ferroelectric Polymers—Chemistry, Physics and Applications</source>
<publisher-name>Marcel Dekker, Inc.</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>1995</year>
</element-citation>
</ref>
<ref id="b24-sensors-14-10952">
<label>24.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>P.-M.</given-names>
</name>
<name>
<surname>Shutter</surname>
<given-names>L.A.</given-names>
</name>
<name>
<surname>Narayan</surname>
<given-names>R.K.</given-names>
</name>
</person-group>
<article-title>Dual-mode operation of flexible piezoelectric polymer diaphragm for intracranial pressure measurement</article-title>
<source>Appl. Phys. Lett.</source>
<year>2010</year>
<volume>96</volume>
<pub-id pub-id-type="doi">10.1063/1.3299003</pub-id>
</element-citation>
</ref>
<ref id="b25-sensors-14-10952">
<label>25.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seminara</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Pinna</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Basiricò</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Loi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Cosseddu</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bonfiglio</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ascia</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Bisio</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Ansaldo</surname>
<given-names>A.</given-names>
</name>
<etal></etal>
</person-group>
<article-title>Piezoelectric polymer transducer arrays for flexible tactile sensors</article-title>
<source>IEEE Sens. J.</source>
<year>2013</year>
<volume>13</volume>
<fpage>4022</fpage>
<lpage>4029</lpage>
</element-citation>
</ref>
<ref id="b26-sensors-14-10952">
<label>26.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Pinna</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Valle</surname>
<given-names>M.</given-names>
</name>
</person-group>
<article-title>Charge Amplifier Design Methodology for PVDF-Based Tactile Sensors</article-title>
<source>J. Circuits Syst. Comput.</source>
<year>2013</year>
<volume>22</volume>
<pub-id pub-id-type="doi">10.1142/S0218126613500667</pub-id>
</element-citation>
</ref>
<ref id="b27-sensors-14-10952">
<label>27.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sinapov</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sukhoy</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Sahai</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Stoytchev</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Vibrotactile recognition and categorization of surfaces by a humanoid robot</article-title>
<source>IEEE Trans. Robot.</source>
<year>2011</year>
<volume>27</volume>
<fpage>488</fpage>
<lpage>497</lpage>
</element-citation>
</ref>
<ref id="b28-sensors-14-10952">
<label>28.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kroemer</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Lampert</surname>
<given-names>C.H.</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>Learning dynamic tactile sensing with robust vision-based training</article-title>
<source>IEEE Trans. Robot.</source>
<year>2011</year>
<volume>27</volume>
<fpage>545</fpage>
<lpage>557</lpage>
</element-citation>
</ref>
<ref id="b29-sensors-14-10952">
<label>29.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Naya</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Yamato</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Shinozawa</surname>
<given-names>K.</given-names>
</name>
</person-group>
<article-title>Recognizing human touching behaviors using a haptic interface for a pet robot</article-title>
<conf-name>Proceedings of the IEEE International Conference on Systems, Man and Cybernetics</conf-name>
<conf-loc>Tokyo, Japan</conf-loc>
<conf-date>12–15 October 1999</conf-date>
<volume>2</volume>
<fpage>1030</fpage>
<lpage>1034</lpage>
</element-citation>
</ref>
<ref id="b30-sensors-14-10952">
<label>30.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Jamali</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Sammut</surname>
<given-names>C.</given-names>
</name>
</person-group>
<article-title>Material classification by tactile sensing using surface textures</article-title>
<conf-name>Proceedings of the IEEE International Conference on Robotics and Automation</conf-name>
<conf-loc>Anchorage, AK, USA</conf-loc>
<conf-date>3–8 May 2010</conf-date>
<fpage>2336</fpage>
<lpage>2341</lpage>
</element-citation>
</ref>
<ref id="b31-sensors-14-10952">
<label>31.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Schölkopf</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Smola</surname>
<given-names>A.J.</given-names>
</name>
</person-group>
<source>Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond</source>
<edition>1st</edition>
<publisher-name>The MIT Press</publisher-name>
<publisher-loc>Cambridge, MA, USA; London, UK</publisher-loc>
<year>2001</year>
</element-citation>
</ref>
<ref id="b32-sensors-14-10952">
<label>32.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhao</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Zhou</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Adali</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Cichocki</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>Kernelization of Tensor-Based Models for Multiway Data Analysis</article-title>
<source>IEEE Signal. Process. Mag.</source>
<year>2013</year>
<volume>30</volume>
<fpage>137</fpage>
<lpage>148</lpage>
</element-citation>
</ref>
<ref id="b33-sensors-14-10952">
<label>33.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Signoretto</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Dinh</surname>
<given-names>Q.T.</given-names>
</name>
<name>
<surname>de Lathauwer</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Suykens</surname>
<given-names>J.A.K.</given-names>
</name>
</person-group>
<article-title>Learning with tensors: A framework based on convex optimization and spectral regularization</article-title>
<source>Mach. Learn.</source>
<year>2013</year>
<pub-id pub-id-type="doi">10.1007/s10994-013-5366-3</pub-id>
</element-citation>
</ref>
<ref id="b34-sensors-14-10952">
<label>34.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Evgeniou</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Pontil</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Poggio</surname>
<given-names>T.</given-names>
</name>
</person-group>
<article-title>Regularization networks and support vector machines</article-title>
<source>J. Adv. Comput. Math.</source>
<year>2000</year>
<volume>13</volume>
<fpage>1</fpage>
<lpage>50</lpage>
</element-citation>
</ref>
<ref id="b35-sensors-14-10952">
<label>35.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Rifkin</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Klautau</surname>
<given-names>A.</given-names>
</name>
</person-group>
<article-title>In defense of one-vs-all classification</article-title>
<source>J. Mach. Learn. Res.</source>
<year>2004</year>
<volume>5</volume>
<fpage>101</fpage>
<lpage>141</lpage>
</element-citation>
</ref>
<ref id="b36-sensors-14-10952">
<label>36.</label>
<element-citation publication-type="book">
<person-group person-group-type="author">
<name>
<surname>Bishop</surname>
<given-names>C.M.</given-names>
</name>
</person-group>
<source>Pattern Recognition and Machine Learning</source>
<publisher-name>Springer Science + Business Media</publisher-name>
<publisher-loc>New York, NY, USA</publisher-loc>
<year>2006</year>
</element-citation>
</ref>
<ref id="b37-sensors-14-10952">
<label>37.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bartlett</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Boucheron</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Lugosi</surname>
<given-names>G.</given-names>
</name>
</person-group>
<article-title>Model selection and error estimation</article-title>
<source>Mach. Learn.</source>
<year>2002</year>
<volume>48</volume>
<fpage>85</fpage>
<lpage>113</lpage>
</element-citation>
</ref>
<ref id="b38-sensors-14-10952">
<label>38.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Chapelle</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Vapnik</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Bousquet</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Mukherjee</surname>
<given-names>S.</given-names>
</name>
</person-group>
<article-title>Choosing multiple parameters for Support Vector Machines</article-title>
<source>Mach. Learn.</source>
<year>2002</year>
<volume>46</volume>
<fpage>131</fpage>
<lpage>159</lpage>
</element-citation>
</ref>
<ref id="b39-sensors-14-10952">
<label>39.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Anguita</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Ridella</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Rivieccio</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Zunino</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Hyperparameter tuning criteria for Support Vector Classifiers</article-title>
<source>Neurocomputing</source>
<year>2003</year>
<volume>55</volume>
<fpage>109</fpage>
<lpage>134</lpage>
</element-citation>
</ref>
<ref id="b40-sensors-14-10952">
<label>40.</label>
<element-citation publication-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Decherchi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gastaldo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Redi</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zunino</surname>
<given-names>R.</given-names>
</name>
</person-group>
<article-title>Maximal-Discrepancy Bounds for Regularized Classifiers</article-title>
<conf-name>Proceedings of the International Joint Conference on Neural Networks</conf-name>
<conf-loc>Atlanta, GA, USA</conf-loc>
<conf-date>14–19 June 2009</conf-date>
<publisher-name>IEEE</publisher-name>
<publisher-loc>Piscataway, NJ, USA</publisher-loc>
<year>2009</year>
<fpage>652</fpage>
<lpage>656</lpage>
</element-citation>
</ref>
<ref id="b41-sensors-14-10952">
<label>41.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>Decherchi</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Gastaldo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Ridella</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Zunino</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Anguita</surname>
<given-names>D.</given-names>
</name>
</person-group>
<article-title>Using unsupervised analysis to constrain generalization bounds of Support Vector Classifiers</article-title>
<source>IEEE Trans. Neural Netw.</source>
<year>2010</year>
<volume>21</volume>
<fpage>424</fpage>
<lpage>438</lpage>
<pub-id pub-id-type="pmid">20123572</pub-id>
</element-citation>
</ref>
<ref id="b42-sensors-14-10952">
<label>42.</label>
<element-citation publication-type="journal">
<person-group person-group-type="author">
<name>
<surname>De Lathauwer</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>De Moor</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Vandewalle</surname>
<given-names>J.</given-names>
</name>
</person-group>
<article-title>A multilinear singular value decomposition</article-title>
<source>SIAM J. Matrix Anal. Appl.</source>
<year>2000</year>
<volume>21</volume>
<fpage>1253</fpage>
<lpage>1278</lpage>
</element-citation>
</ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-sensors-14-10952" position="float">
<label>Figure 1.</label>
<caption>
<p>Touch modalities. (
<bold>a</bold>
) Paintbrush brushing; (
<bold>b</bold>
) finger sliding; (
<bold>c</bold>
) washer rolling.</p>
</caption>
<graphic xlink:href="sensors-14-10952f1"></graphic>
</fig>
<fig id="f2-sensors-14-10952" position="float">
<label>Figure 2.</label>
<caption>
<p>Tactile sensor made of an array of piezoelectric polymer transducers.</p>
</caption>
<graphic xlink:href="sensors-14-10952f2"></graphic>
</fig>
<fig id="f3-sensors-14-10952" position="float">
<label>Figure 3.</label>
<caption>
<p>Scheme of the tactile acquisition system. Focus is on the tensor representation of output data.</p>
</caption>
<graphic xlink:href="sensors-14-10952f3"></graphic>
</fig>
<fig id="f4-sensors-14-10952" position="float">
<label>Figure 4.</label>
<caption>
<p>Difference between natural distribution of data and true distribution of data.</p>
</caption>
<graphic xlink:href="sensors-14-10952f4"></graphic>
</fig>
<fig id="f5-sensors-14-10952" position="float">
<label>Figure 5.</label>
<caption>
<p>A schematization of the pre-processing strategy based on sub-sampling.</p>
</caption>
<graphic xlink:href="sensors-14-10952f5"></graphic>
</fig>
<fig id="f6-sensors-14-10952" position="float">
<label>Figure 6.</label>
<caption>
<p>Results obtained with tensor-SVM and tensor-RLS for the classification problems A, B, C: (
<bold>a</bold>
) problem A; (
<bold>b</bold>
) problem B; (
<bold>c</bold>
) problem C.</p>
</caption>
<graphic xlink:href="sensors-14-10952f6"></graphic>
</fig>
<fig id="f7-sensors-14-10952" position="float">
<label>Figure 7.</label>
<caption>
<p>A comparison between the generalization performance obtained by applying the model selection of Algorithm 1 and conventional cross-validation. The graphs refer to problem A, tensor-SVM: (
<bold>a</bold>
)
<italic>D</italic>
= 20; (
<bold>b</bold>
)
<italic>D</italic>
= 50; (
<bold>c</bold>
)
<italic>D</italic>
= 100.</p>
</caption>
<graphic xlink:href="sensors-14-10952f7"></graphic>
</fig>
<fig id="f8-sensors-14-10952" position="float">
<label>Figure 8.</label>
<caption>
<p>A comparison between the generalization performance obtained by applying the model selection of Algorithm 1 and conventional cross-validation. The graphs refer to problem A, tensor-RLS: (
<bold>a</bold>
)
<italic>D</italic>
= 20; (
<bold>b</bold>
)
<italic>D</italic>
= 50; (
<bold>c</bold>
)
<italic>D</italic>
= 100.</p>
</caption>
<graphic xlink:href="sensors-14-10952f8"></graphic>
</fig>
<table-wrap id="t1-sensors-14-10952" position="float">
<label>Table 1.</label>
<caption>
<p>Simulation results: problem A, tensor-SVM.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" rowspan="3" align="center" colspan="1"></th>
<th colspan="3" valign="bottom" align="center" rowspan="1">
<italic>D</italic>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1">20</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">50</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">100</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #1</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (10, 2
<sup>3</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>1</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #2</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (0.1, 2
<sup>1</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">2.5 (1, 2
<sup>1</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">2.5 (1, 2
<sup>1</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #3</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>1</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.5 (0.1, 2
<sup>−1</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #5</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>3</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>1</sup>
, 0)</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t2-sensors-14-10952" position="float">
<label>Table 2.</label>
<caption>
<p>Simulation results: problem B, tensor-SVM.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" rowspan="3" align="center" colspan="1"></th>
<th colspan="3" valign="bottom" align="center" rowspan="1">
<italic>D</italic>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1">20</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">50</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">100</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #1</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #2</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>−2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.0 (0.1, 2
<sup>−2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #3</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.5 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">17.5 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #5</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.5 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t3-sensors-14-10952" position="float">
<label>Table 3.</label>
<caption>
<p>Simulation results: problem C, tensor-SVM.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" rowspan="3" align="center" colspan="1"></th>
<th colspan="3" valign="bottom" align="center" rowspan="1">
<italic>D</italic>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1">20</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">50</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">100</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #1</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">17.5 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">17.5 (0.1, 1,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #2</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (10, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>−1</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #3</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (0.1, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>−1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #5</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (1, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (10, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (10, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t4-sensors-14-10952" position="float">
<label>Table 4.</label>
<caption>
<p>Simulation results: problem A, tensor-RLS.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" rowspan="3" align="center" colspan="1"></th>
<th colspan="3" valign="bottom" align="center" rowspan="1">
<italic>D</italic>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1">20</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">50</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">100</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #1</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.5 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (10, 2
<sup>4</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #2</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #3</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (0.1, 2
<sup>2</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">2.5 (10, 2
<sup>4</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (1, 2
<sup>2</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (10, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #5</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.5 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t5-sensors-14-10952" position="float">
<label>Table 5.</label>
<caption>
<p>Simulation results: problem B, tensor-RLS.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" rowspan="3" align="center" colspan="1"></th>
<th colspan="3" valign="bottom" align="center" rowspan="1">
<italic>D</italic>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1">20</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">50</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">100</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #1</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">17.5 (0.1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #2</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #3</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">5.0 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #5</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t6-sensors-14-10952" position="float">
<label>Table 6.</label>
<caption>
<p>Simulation results: problem C, tensor-RLS.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" rowspan="3" align="center" colspan="1"></th>
<th colspan="3" valign="bottom" align="center" rowspan="1">
<italic>D</italic>
</th>
</tr>
<tr>
<th valign="bottom" colspan="3" rowspan="1">
<hr></hr>
</th>
</tr>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1">20</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">50</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">100</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #1</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (0.1, 2
<sup>4</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #2</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>3</sup>
, 0)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">7.5 (1, 2
<sup>4</sup>
, 0)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #3</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">12.5 (0.1, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (0.1, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">10.0 (10, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">run #5</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">20.0 (100, 2
<sup>1</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">15.0 (1000, 2
<sup>3</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">17.5 (10, 2
<sup>2</sup>
,
<italic>Q
<sub>z</sub>
</italic>
/2)</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="t7-sensors-14-10952" position="float">
<label>Table 7.</label>
<caption>
<p>Results for the 3-class classification problem.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="bottom" align="center" rowspan="1" colspan="1"></th>
<th valign="bottom" align="center" rowspan="1" colspan="1">Tensor-SVM</th>
<th valign="bottom" align="center" rowspan="1" colspan="1">Tensor-RLS</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">best</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">23.4</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">22.7</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">average</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">29.0</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">26.3</td>
</tr>
<tr>
<td valign="bottom" align="center" rowspan="1" colspan="1">settings</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">
<italic>σ</italic>
= 2
<sup>1</sup>
;
<italic>C</italic>
= 10;
<italic>α</italic>
=
<italic>Q
<sub>z</sub>
</italic>
/2;
<italic>D</italic>
= 100</td>
<td valign="bottom" align="center" rowspan="1" colspan="1">
<italic>σ</italic>
= 2
<sup>−1</sup>
;
<italic>C</italic>
= 100;
<italic>α</italic>
=
<italic>Q
<sub>z</sub>
</italic>
/2;
<italic>D</italic>
= 100</td>
</tr>
</tbody>
</table>
</table-wrap>
</floats-group>
</pmc>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Ticri/CIDE/explor/HapticV1/Data/Pmc/Curation
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 002482 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd -nk 002482 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Ticri/CIDE
   |area=    HapticV1
   |flux=    Pmc
   |étape=   Curation
   |type=    RBID
   |clé=     PMC:4118344
   |texte=   Computational Intelligence Techniques for Tactile Sensing Systems
}}

To generate wiki pages

HfdIndexSelect -h $EXPLOR_AREA/Data/Pmc/Curation/RBID.i   -Sk "pubmed:24949646" \
       | HfdSelect -Kh $EXPLOR_AREA/Data/Pmc/Curation/biblio.hfd   \
       | NlmPubMed2Wicri -a HapticV1 

Wicri

This area was generated with Dilib version V0.6.23.
Data generation: Mon Jun 13 01:09:46 2016. Site generation: Wed Mar 6 09:54:07 2024