Dr. Andy Lücking

Staff member



My research interests center on linguistic and philosophical theories of meaning and interaction. One focus is the interplay of speech and gesture in communication. In my work, I combine theoretical modeling and experimental methods. The domains covered range from foundational theoretical aspects of grounding non-verbal meaning to building applications. I am concerned with topics such as semantic and pragmatic notions of reference, deferred reference, alignment in dialogue, iconicity, demonstration, and exemplification.

Total: 48

2017 (1)

  • A. Mehler and A. Lücking, “Modelle sozialer Netzwerke und Natural Language Processing: eine methodologische Randnotiz,” Soziologie, vol. 46, iss. 1, pp. 43-47, 2017.
    [BibTeX]

    @Article{Mehler:Luecking:2017,
        author =     {Alexander Mehler and Andy Lücking},
        title =     {Modelle sozialer Netzwerke und Natural Language Processing: eine methodologische Randnotiz},
        journal =     {Soziologie},
        year =     2017,
        volume =     46,
        number =     1,
        pages =     {43-47}
    }

2016 (3)

  • [PDF] [http://annals-csis.org/Volume_8/drp/83.html] [DOI] A. Lücking, “Modeling Co-Verbal Gesture Perception in Type Theory with Records,” in Proceedings of the 2016 Federated Conference on Computer Science and Information Systems, 2016, pp. 383-392. Best Paper Award
    [BibTeX]

    @inproceedings{Luecking:2016:b,
    author={L\"{u}cking, Andy},
    title={Modeling Co-Verbal Gesture Perception in Type Theory with Records},
    booktitle={Proceedings of the 2016 Federated Conference on Computer Science and Information Systems},
    year={2016},
    pages={383-392},
    series={Annals of Computer Science and Information Systems},
    volume={8},
    editor={M. Ganzha and L. Maciaszek and M. Paprzycki},
    location={Gdansk, Poland},
    publisher={IEEE},
    note={Best Paper Award},
    url={http://annals-csis.org/Volume_8/drp/83.html},
    doi={10.15439/2016F83},
    pdf={http://annals-csis.org/Volume_8/pliks/83.pdf}
    }
  • [PDF] A. Lücking, A. Mehler, D. Walther, M. Mauri, and D. Kurfürst, “Finding Recurrent Features of Image Schema Gestures: the FIGURE corpus,” in Proceedings of the 10th International Conference on Language Resources and Evaluation, 2016.
    [BibTeX]

    @InProceedings{Luecking:Mehler:Walther:Mauri:Kurfuerst:2016,
      author =     {L\"{u}cking, Andy and Mehler, Alexander and Walther,
                      D\'{e}sir\'{e}e and Mauri, Marcel and Kurf\"{u}rst,
                      Dennis},
      title =     {Finding Recurrent Features of Image Schema Gestures:
                      the {FIGURE} corpus},
      booktitle =     {Proceedings of the 10th International Conference on
                      Language Resources and Evaluation},
      year =     2016,
      series =     {LREC 2016},
      pdf =     {http://hucompute.org/wp-content/uploads/2016/04/lrec2016-gesture-study-final-version-short.pdf},
      location =     {Portoro\v{z} (Slovenia)}
    }
  • [PDF] A. Lücking, A. Hoenen, and A. Mehler, “TGermaCorp — A (Digital) Humanities Resource for (Computational) Linguistics,” in Proceedings of the 10th International Conference on Language Resources and Evaluation, 2016.
    [BibTeX]

    @InProceedings{Luecking:Hoenen:Mehler:2016,
      author =     {L\"{u}cking, Andy and Hoenen, Armin and Mehler,
                      Alexander},
      title =     {{TGermaCorp} -- A (Digital) Humanities Resource for
                      (Computational) Linguistics},
      booktitle =     {Proceedings of the 10th International Conference on
                      Language Resources and Evaluation},
      year =     2016,
      series =     {LREC 2016},
      pdf =     {http://hucompute.org/wp-content/uploads/2016/04/lrec2016-ttgermacorp-final.pdf},
      location =     {Portoro\v{z} (Slovenia)}
    }

2015 (3)

  • Towards a Theoretical Framework for Analyzing Complex Linguistic Networks, A. Mehler, A. Lücking, S. Banisch, P. Blanchard, and B. Frank-Job, Eds., Springer, 2015.
    [BibTeX]

    @BOOK{Mehler:Luecking:Banisch:Blanchard:Frank-Job:2015,
        editor={Mehler, Alexander and Lücking, Andy and Banisch, Sven and Blanchard, Philippe and Frank-Job, Barbara},
        year={2015},
        ISBN={978-3-662-47237-8},
        publisher={Springer},
        title={Towards a Theoretical Framework for Analyzing Complex Linguistic Networks},
        image={https://hucompute.org/wp-content/uploads/2015/09/UCS_17-2-tmp.png},
        address={Berlin and New York},
        series={Understanding Complex Systems}}
  • A. Mehler and R. Gleim, “Linguistic Networks — An Online Platform for Deriving Collocation Networks from Natural Language Texts,” in Towards a Theoretical Framework for Analyzing Complex Linguistic Networks, A. Mehler, A. Lücking, S. Banisch, P. Blanchard, and B. Frank-Job, Eds., Springer, 2015.
    [BibTeX]

    @INCOLLECTION{Mehler:Gleim:2015:a,
        publisher={Springer},
        editor={Mehler, Alexander and Lücking, Andy and Banisch, Sven and Blanchard, Philippe and Frank-Job, Barbara},
        year={2015},
        booktitle={Towards a Theoretical Framework for Analyzing Complex Linguistic Networks},
        title={Linguistic Networks -- An Online Platform for Deriving Collocation Networks from Natural Language Texts},
        series={Understanding Complex Systems},
        author={Mehler, Alexander and Gleim, Rüdiger}}
  • [PDF] [DOI] A. Lücking, T. Pfeiffer, and H. Rieser, “Pointing and Reference Reconsidered,” Journal of Pragmatics, vol. 77, pp. 56-79, 2015.
    [Abstract] [BibTeX]

    Current semantic theory on indexical expressions claims that demonstratively used indexicals such as this lack a referent-determining meaning but instead rely on an accompanying demonstration act like a pointing gesture. While this view allows to set up a sound logic of demonstratives, the direct-referential role assigned to pointing gestures has never been scrutinized thoroughly in semantics or pragmatics. We investigate the semantics and pragmatics of co-verbal pointing from a foundational perspective combining experiments, statistical investigation, computer simulation and theoretical modeling techniques in a novel manner. We evaluate various referential hypotheses with a corpus of object identification games set up in experiments in which body movement tracking techniques have been extensively used to generate precise pointing  measurements. Statistical investigation and computer simulations show that especially distal areas in the pointing domain falsify the semantic direct-referential hypotheses concerning pointing gestures. As an alternative, we propose that reference involving pointing rests on a default inference which we specify using the empirical data. These results raise numerous problems for classical semantics–pragmatics interfaces: we argue for pre-semantic pragmatics in order to account for inferential reference in addition to classical post-semantic Gricean pragmatics.
    @ARTICLE{Luecking:Pfeiffer:Rieser:2015,
        year={2015},
        title={Pointing and Reference Reconsidered},
        author={Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes},
        journal={Journal of Pragmatics},
        volume={77},
        pages={56-79},
        doi={10.1016/j.pragma.2014.12.013},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/Luecking_Pfeiffer_Rieser_Pointing_and_Reference_Reconsiderd.pdf},
        website={http://www.sciencedirect.com/science/article/pii/S037821661500003X},
        abstract={Current semantic theory on indexical expressions claims that demonstratively used indexicals such as this lack a referent-determining meaning but instead rely on an accompanying demonstration act like a pointing gesture. While this view allows to set up a sound logic of demonstratives, the direct-referential role assigned to pointing gestures has never been scrutinized thoroughly in semantics or pragmatics. We investigate the semantics and pragmatics of co-verbal pointing from a foundational perspective combining experiments, statistical investigation, computer simulation and theoretical modeling techniques in a novel manner. We evaluate various referential hypotheses with a corpus of object identification games set up in experiments in which body movement tracking techniques have been extensively used to generate precise pointing  measurements. Statistical investigation and computer simulations show that especially distal areas in the pointing domain falsify the semantic direct-referential hypotheses concerning pointing gestures. As an alternative, we propose that reference involving pointing rests on a default inference which we specify using the empirical data. These results raise numerous problems for classical semantics–pragmatics interfaces: we argue for pre-semantic pragmatics in order to account for inferential reference in addition to classical post-semantic Gricean pragmatics.}}

2014 (2)

  • [PDF] A. Mehler, T. vor der Brück, and A. Lücking, “Comparing Hand Gesture Vocabularies for HCI,” in Proceedings of HCI International 2014, 22 – 27 June 2014, Heraklion, Greece, Berlin/New York: Springer, 2014.
    [Abstract] [BibTeX]

    HCI systems are often equipped with gestural interfaces drawing on a predefined set of admitted gestures. We provide an assessment of the fitness of such gesture vocabularies in terms of their learnability and naturalness. This is done by example of rivaling gesture vocabularies of the museum information system WikiNect. In this way, we do not only provide a procedure for evaluating gesture vocabularies, but additionally contribute to design criteria to be followed by the gestures.
    @INCOLLECTION{Mehler:vor:der:Brueck:Luecking:2014,
        publisher={Springer},
        booktitle={Proceedings of HCI International 2014, 22 - 27 June 2014, Heraklion, Greece},
        author={Mehler, Alexander and vor der Brück, Tim and Lücking, Andy},
        year={2014},
        title={Comparing Hand Gesture Vocabularies for HCI},
        address={Berlin/New York},
        website={http://link.springer.com/chapter/10.1007/978-3-319-07230-2_8#page-1},
        abstract={HCI systems are often equipped with gestural interfaces drawing on a predefined set of admitted gestures. We provide an assessment of the fitness of such gesture vocabularies in terms of their learnability and naturalness. This is done by example of rivaling gesture vocabularies of the museum information system WikiNect. In this way, we do not only provide a procedure for evaluating gesture vocabularies, but additionally contribute to design criteria to be followed by the gestures.},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/Comparing-Gesture-Vocabularies-1_1.pdf},
        keywords={wikinect}}
  • [PDF] [DOI] A. Mehler, A. Lücking, and G. Abrami, “WikiNect: Image Schemata as a Basis of Gestural Writing for Kinetic Museum Wikis,” Universal Access in the Information Society, pp. 1-17, 2014.
    [Abstract] [BibTeX]

    This paper provides a theoretical assessment of gestures in the context of authoring image-related hypertexts by example of the museum information system WikiNect. To this end, a first implementation of gestural writing based on image schemata is provided (Lakoff in Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago, 1987). Gestural writing is defined as a sort of coding in which propositions are only expressed by means of gestures. In this respect, it is shown that image schemata allow for bridging between natural language predicates and gestural manifestations. Further, it is demonstrated that gestural writing primarily focuses on the perceptual level of image descriptions (Hollink et al. in Int J Hum Comput Stud 61(5):601–626, 2004). By exploring the metaphorical potential of image schemata, it is finally illustrated how to extend the expressiveness of gestural writing in order to reach the conceptual level of image descriptions. In this context, the paper paves the way for implementing museum information systems like WikiNect as systems of kinetic hypertext authoring based on full-fledged gestural writing.
    @ARTICLE{Mehler:Luecking:Abrami:2014,
        journal={Universal Access in the Information Society},
        issn={1615-5289},
        doi={10.1007/s10209-014-0386-8},
        author={Mehler, Alexander and Lücking, Andy and Abrami, Giuseppe},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/art_10.1007_s10209-014-0386-8.pdf},
        pages={1-17},
        year={2014},
        title={{WikiNect}: Image Schemata as a Basis of Gestural Writing for Kinetic Museum Wikis},
        website={http://dx.doi.org/10.1007/s10209-014-0386-8},
        abstract={This paper provides a theoretical assessment of gestures in the context of authoring image-related hypertexts by example of the museum information system WikiNect. To this end, a first implementation of gestural writing based on image schemata is provided (Lakoff in Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago, 1987). Gestural writing is defined as a sort of coding in which propositions are only expressed by means of gestures. In this respect, it is shown that image schemata allow for bridging between natural language predicates and gestural manifestations. Further, it is demonstrated that gestural writing primarily focuses on the perceptual level of image descriptions (Hollink et al. in Int J Hum Comput Stud 61(5):601–626, 2004). By exploring the metaphorical potential of image schemata, it is finally illustrated how to extend the expressiveness of gestural writing in order to reach the conceptual level of image descriptions. In this context, the paper paves the way for implementing museum information systems like WikiNect as systems of kinetic hypertext authoring based on full-fledged gestural writing.},
        keywords={wikinect}}

2013 (8)

  • [http://scch2013.wordpress.com/] A. Mehler, A. Lücking, T. vor der Brück, and G. Abrami, WikiNect – A Kinetic Artwork Wiki for Exhibition Visitors, 2013.
    [Poster][BibTeX]

    @MISC{Mehler:Luecking:vor:der:Brueck:2013:a,
        url={http://scch2013.wordpress.com/},
        author={Mehler, Alexander and Lücking, Andy and vor der Brück, Tim and Abrami, Giuseppe},
        month={11},
        year={2013},
        howpublished={Poster Presentation at the Scientific Computing and Cultural Heritage 2013 Conference, Heidelberg},
        title={WikiNect - A Kinetic Artwork Wiki for Exhibition Visitors},
        poster={https://hucompute.org/wp-content/uploads/2015/08/SCCHPoster2013.pdf},
        keywords={wikinect}}
  • [http://www.bkl-ev.de/bkl_workshop/archiv/workshop13_programm.php] A. Lücking, Theoretische Bausteine für einen semiotischen Ansatz zum Einsatz von Gestik in der Aphasietherapie, 2013.
    [BibTeX]

    @MISC{Luecking:2013:c,
        url={http://www.bkl-ev.de/bkl_workshop/archiv/workshop13_programm.php},
        author={Lücking, Andy},
        month={05},
        year={2013},
        howpublished={Talk at the BKL workshop 2013, Bochum},
        title={Theoretische Bausteine für einen semiotischen Ansatz zum Einsatz von Gestik in der Aphasietherapie}}
  • [http://www.ruhr-uni-bochum.de/phil-lang/investigating/index.html] A. Lücking, Eclectic Semantics for Non-Verbal Signs, 2013.
    [BibTeX]

    @MISC{Luecking:2013:d,
        url={http://www.ruhr-uni-bochum.de/phil-lang/investigating/index.html},
        author={Lücking, Andy},
        month={10},
        year={2013},
        howpublished={Talk at the Conference on Investigating semantics: Empirical and philosophical approaches, Bochum},
        title={Eclectic Semantics for Non-Verbal Signs}}
  • A. Lücking, “Multimodal Propositions? From Semiotic to Semantic Considerations in the Case of Gestural Deictics,” in Poster Abstracts of the Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue, Amsterdam, 2013, pp. 221-223.
    [Poster][BibTeX]

    @INPROCEEDINGS{Luecking:2013:e,
        booktitle={Poster Abstracts of the Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue},
        pages={221-223},
        author={Lücking, Andy},
        series={SemDial 2013},
        editor={Fernández, Raquel and Isard, Amy},
        month={12},
        year={2013},
        title={Multimodal Propositions? From Semiotic to Semantic Considerations in the Case of Gestural Deictics},
        address={Amsterdam},
        poster={https://hucompute.org/wp-content/uploads/2015/08/dialdam2013.pdf}}
  • [PDF] A. Lücking and A. Mehler, “On Three Notions of Grounding of Artificial Dialog Companions,” Science, Technology & Innovation Studies, vol. 10, iss. 1, pp. 31-36, 2013.
    [Abstract] [BibTeX]

    We provide a new, theoretically motivated evaluation grid for assessing the conversational achievements of Artificial Dialog Companions (ADCs). The grid is spanned along three grounding problems. Firstly, it is argued that symbol grounding in general has to be intrinsic. Current approaches in this context, however, are limited to a certain kind of expression that can be grounded in this way. Secondly, we identify three requirements for conversational grounding, the process leading to mutual understanding. Finally, we sketch a test case for symbol grounding in the form of the philosophical grounding problem that involves the use of modal language. Together, the three grounding problems provide a grid that allows us to assess ADCs’ dialogical performances and to pinpoint future developments on these grounds.
    @ARTICLE{Luecking:Mehler:2013:a,
        pdf={https://hucompute.org/wp-content/uploads/2015/08/STI-final-badge.pdf},
        journal={Science, Technology \& Innovation Studies},
        author={Lücking, Andy and Mehler, Alexander},
        year={2013},
        title={On Three Notions of Grounding of Artificial Dialog Companions},
        website={http://www.sti-studies.de/ojs/index.php/sti/article/view/143},
        abstract={We provide a new, theoretically motivated evaluation grid for assessing the conversational achievements of Artificial Dialog Companions (ADCs). The grid is spanned along three grounding problems. Firstly, it is argued that symbol grounding in general has to be intrinsic. Current approaches in this context, however, are limited to a certain kind of expression that can be grounded in this way. Secondly, we identify three requirements for conversational grounding, the process leading to mutual understanding. Finally, we sketch a test case for symbol grounding in the form of the philosophical grounding problem that involves the use of modal language. Together, the three grounding problems provide a grid that allows us to assess ADCs’ dialogical performances and to pinpoint future developments on these grounds.},
        pages={31-36},
        volume={10},
        number={1}}
  • A. Lücking, “Interfacing Speech and Co-Verbal Gesture: Exemplification,” in Proceedings of the 35th Annual Conference of the German Linguistic Society, Potsdam, Germany, 2013, pp. 284-286.
    [BibTeX]

    @INPROCEEDINGS{Luecking:2013:b,
        booktitle={Proceedings of the 35th Annual Conference of the German Linguistic Society},
        pages={284-286},
        author={Lücking, Andy},
        series={DGfS 2013},
        year={2013},
        title={Interfacing Speech and Co-Verbal Gesture: Exemplification},
        address={Potsdam, Germany}}
  • A. Lücking, Ikonische Gesten. Grundzüge einer linguistischen Theorie, Berlin and Boston: De Gruyter, 2013. Zugl. Diss. Univ. Bielefeld (2011)
    [Abstract] [BibTeX]

    Nicht-verbale Zeichen, insbesondere sprachbegleitende Gesten, spielen eine herausragende Rolle in der menschlichen Kommunikation. Um eine Analyse von Gestik innerhalb derjenigen Disziplinen, die sich mit der Erforschung und Modellierung von Dialogen beschäftigen, zu ermöglichen, bedarf es einer entsprechenden linguistischen Rahmentheorie. „Ikonische Gesten“ bietet einen ersten zeichen- und wahrnehmungstheoretisch motivierten Rahmen an, in dem eine grammatische Analyse der Integration von Sprache und Gestik möglich ist. Ausgehend von einem Abriss semiotischer Zugänge zu ikonischen Zeichen wird der vorherrschende Ähnlichkeitsansatz unter Rückgriff auf Wahrnehmungstheorien zugunsten eines Exemplifikationsansatzes verworfen. Exemplifikation wird im Rahmen einer unifikationsbasierten Grammatik umgesetzt. Dort werden u.a. multimodale Wohlgeformtheit, Synchronie und multimodale Subkategorisierung als neue Gegenstände linguistischer Forschung eingeführt und im Rahmen einer integrativen Analyse von Sprache und Gestik modelliert.
    @BOOK{Luecking:2013,
        publisher={De Gruyter},
        author={Lücking, Andy},
        note={Zugl. Diss. Univ. Bielefeld (2011)},
        year={2013},
        image={https://hucompute.org/wp-content/uploads/2015/09/ikonischeGesten.jpg},
        title={Ikonische Gesten. Grundzüge einer linguistischen Theorie},
        address={Berlin and Boston},
        abstract={Nicht-verbale Zeichen, insbesondere sprachbegleitende Gesten, spielen eine herausragende Rolle in der menschlichen Kommunikation. Um eine Analyse von Gestik innerhalb derjenigen Disziplinen, die sich mit der Erforschung und Modellierung von Dialogen besch{\"a}ftigen, zu ermöglichen, bedarf es einer entsprechenden linguistischen Rahmentheorie. „Ikonische Gesten“ bietet einen ersten zeichen- und wahrnehmungstheoretisch motivierten Rahmen an, in dem eine grammatische Analyse der Integration von Sprache und Gestik möglich ist. Ausgehend von einem Abriss semiotischer Zug{\"a}nge zu ikonischen Zeichen wird der vorherrschende {\"A}hnlichkeitsansatz unter Rückgriff auf Wahrnehmungstheorien zugunsten eines Exemplifikationsansatzes verworfen. Exemplifikation wird im Rahmen einer unifikationsbasierten Grammatik umgesetzt. Dort werden u.a. multimodale Wohlgeformtheit, Synchronie und multimodale Subkategorisierung als neue Gegenst{\"a}nde linguistischer Forschung eingeführt und im Rahmen einer integrativen Analyse von Sprache und Gestik modelliert.}}
  • [PDF] [DOI] A. Lücking, K. Bergmann, F. Hahn, S. Kopp, and H. Rieser, “Data-based Analysis of Speech and Gesture: The Bielefeld Speech and Gesture Alignment Corpus (SaGA) and its Applications,” Journal of Multimodal User Interfaces, vol. 7, iss. 1-2, pp. 5-18, 2013.
    [Abstract] [BibTeX]

    Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. Then we go into some of the projects carried out using SaGA demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, individual and contextual parameters influencing gesture production and gestures’ functions for dialogue structure. Speech-gesture interfaces have been established extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.
    @ARTICLE{Luecking:Bergmann:Hahn:Kopp:Rieser:2012,
        journal={Journal of Multimodal User Interfaces},
        author={Lücking, Andy and Bergmann, Kirsten and Hahn, Florian and Kopp, Stefan and Rieser, Hannes},
        doi={10.1007/s12193-012-0106-8},
        year={2013},
        volume={7},
        number={1-2},
        pages={5-18},
        title={Data-based Analysis of Speech and Gesture: The Bielefeld Speech and Gesture Alignment Corpus (SaGA) and its Applications},
        abstract={Communicating face-to-face, interlocutors frequently produce multimodal meaning packages consisting of speech and accompanying gestures. We discuss a systematically annotated speech and gesture corpus consisting of 25 route-and-landmark-description dialogues, the Bielefeld Speech and Gesture Alignment corpus (SaGA), collected in experimental face-to-face settings. We first describe the primary and secondary data of the corpus and its reliability assessment. Then we go into some of the projects carried out using SaGA demonstrating the wide range of its usability: on the empirical side, there is work on gesture typology, individual and contextual parameters influencing gesture production and gestures’ functions for dialogue structure. Speech-gesture interfaces have been established extending unification-based grammars. In addition, the development of a computational model of speech-gesture alignment and its implementation constitutes a research line we focus on.},
        website={http://www.springerlink.com/content/a547448u86h3116x/?MUD=MP},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/MMUI-SaGA-revision2.pdf}}

2012 (8)

  • [PDF] A. Mehler and A. Lücking, “Pathways of Alignment between Gesture and Speech: Assessing Information Transmission in Multimodal Ensembles,” in Proceedings of the International Workshop on Formal and Computational Approaches to Multimodal Communication under the auspices of ESSLLI 2012, Opole, Poland, 6-10 August, 2012.
    [Abstract] [BibTeX]

    We present an empirical account of multimodal ensembles based on Hjelmslev’s notion of selection. This is done to get measurable evidence for the existence of speech-and-gesture ensembles. Utilizing information theory, we show that there is an information transmission that makes a gestures’ representation technique predictable when merely knowing its lexical affiliate – in line with the notion of the primacy of language. Thus, there is evidence for a one-way coupling – going from words to gestures – that leads to speech-and-gesture alignment and underlies the constitution of multimodal ensembles.
    @INPROCEEDINGS{Mehler:Luecking:2012:d,
        booktitle={Proceedings of the International Workshop on Formal and Computational Approaches to Multimodal Communication under the auspices of ESSLLI 2012, Opole, Poland, 6-10 August},
        author={Mehler, Alexander and Lücking, Andy},
        editor={Gianluca Giorgolo and Katya Alahverdzhieva},
        year={2012},
        title={Pathways of Alignment between Gesture and Speech: Assessing Information Transmission in Multimodal Ensembles},
        abstract={We present an empirical account of multimodal ensembles based on Hjelmslev’s notion of selection. This is done to get measurable evidence for the existence of speech-and-gesture ensembles. Utilizing information theory, we show that there is an information transmission that makes a gestures’ representation technique predictable when merely knowing its lexical affiliate – in line with the notion of the primacy of language. Thus, there is evidence for a one-way coupling – going from words to gestures – that leads to speech-and-gesture alignment and underlies the constitution of multimodal ensembles.},
        website={http://www.researchgate.net/publication/268368670_Pathways_of_Alignment_between_Gesture_and_Speech_Assessing_Information_Transmission_in_Multimodal_Ensembles},
     pdf={https://hucompute.org/wp-content/uploads/2016/06/Mehler_Luecking_FoCoMC2012-2.pdf},
        keywords={wikinect}}
  • [PDF] A. Lücking, “Towards a Conceptual, Unification-based Speech-Gesture Interface,” in Proceedings of the International Workshop on Formal and Computational Approaches to Multimodal Communication under the auspices of ESSLLI 2012, Opole, Poland, 6-10 August, 2012.
    [Abstract] [BibTeX]

    A framework for grounding the semantics of co-verbal iconic gestures is presented. A resemblance account to iconicity is discarded in favor of an exemplification approach. It is sketched how exemplification can be captured within a unification-based grammar that provides a conceptual interface. Gestures modeled as vector sequences are the exemplificational base. Some hypotheses that follow from the general account are pointed at and remaining challenges are discussed.
    @INPROCEEDINGS{Luecking:2012,
        booktitle={Proceedings of the International Workshop on Formal and Computational Approaches to Multimodal Communication under the auspices of ESSLLI 2012, Opole, Poland, 6-10 August},
        author={Lücking, Andy},
        editor={Gianluca Giorgolo and Katya Alahverdzhieva},
        year={2012},
        title={Towards a Conceptual, Unification-based Speech-Gesture Interface},
        abstract={A framework for grounding the semantics of co-verbal iconic gestures is presented. A resemblance account to iconicity is discarded in favor of an exemplification approach. It is sketched how exemplification can be captured within a unification-based grammar that provides a conceptual interface. Gestures modeled as vector sequences are the exemplificational base. Some hypotheses that follow from the general account are pointed at and remaining challenges are discussed.},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/FoCoMoC2012-1.pdf}}
  • [PDF] A. Mehler and A. Lücking, “WikiNect: Towards a Gestural Writing System for Kinetic Museum Wikis,” in Proceedings of the International Workshop On User Experience in e-Learning and Augmented Technologies in Education (UXeLATE 2012) in Conjunction with ACM Multimedia 2012, 29 October- 2 November, Nara, Japan, 2012, pp. 7-12.
    [Abstract] [BibTeX]

    We introduce WikiNect as a kinetic museum information system that allows museum visitors to give on-site feedback about exhibitions. To this end, WikiNect integrates three approaches to Human-Computer Interaction (HCI): games with a purpose, wiki-based collaborative writing and kinetic text-technologies. Our aim is to develop kinetic technologies as a new paradigm of HCI. They dispense with classical interfaces (e.g., keyboards) in that they build on non-contact modes of communication like gestures or facial expressions as input displays. In this paper, we introduce the notion of gestural writing as a kinetic text-technology that underlies WikiNect to enable museum visitors to communicate their feedback. The basic idea is to explore sequences of gestures that share the semantic expressivity of verbally manifested speech acts. Our task is to identify such gestures that are learnable on-site in the usage scenario of WikiNect. This is done by referring to so-called transient gestures as part of multimodal ensembles, which are candidate gestures of the desired functionality. 
    @INPROCEEDINGS{Mehler:Luecking:2012:c,
        booktitle={Proceedings of the International Workshop On User Experience in e-Learning and Augmented Technologies in Education (UXeLATE 2012) in Conjunction with ACM Multimedia 2012, 29 October- 2 November, Nara, Japan},
        author={Mehler, Alexander and Lücking, Andy},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/UXeLATE2012-copyright.pdf},
        pages={7-12},
        year={2012},
        title={WikiNect: Towards a Gestural Writing System for Kinetic Museum Wikis},
        abstract={We introduce WikiNect as a kinetic museum information system that allows museum visitors to give on-site feedback about exhibitions. To this end, WikiNect integrates three approaches to Human-Computer Interaction (HCI): games with a purpose, wiki-based collaborative writing and kinetic text-technologies. Our aim is to develop kinetic technologies as a new paradigm of HCI. They dispense with classical interfaces (e.g., keyboards) in that they build on non-contact modes of communication like gestures or facial expressions as input displays. In this paper, we introduce the notion of gestural writing as a kinetic text-technology that underlies WikiNect to enable museum visitors to communicate their feedback. The basic idea is to explore sequences of gestures that share the semantic expressivity of verbally manifested speech acts. Our task is to identify such gestures that are learnable on-site in the usage scenario of WikiNect. This is done by referring to so-called transient gestures as part of multimodal ensembles, which are candidate gestures of the desired functionality. },
        website={http://www.researchgate.net/publication/262319200_WikiNect_towards_a_gestural_writing_system_for_kinetic_museum_wikis},
        keywords={wikinect}}
  • A. Lücking, S. Ptock, and K. Bergmann, “Assessing Agreement on Segmentations by Means of Staccato, the Segmentation Agreement Calculator according to Thomann,” in Gesture and Sign Language in Human-Computer Interaction and Embodied Communication, E. Efthimiou, G. Kouroupetroglou, and S. Fotina, Eds., Berlin and Heidelberg: Springer, 2012, vol. 7206, pp. 129-138.
    [Abstract] [BibTeX]

    Staccato, the Segmentation Agreement Calculator According to Thomann, is a software tool for assessing the degree of agreement of multiple segmentations of some time-related data (e.g., gesture phases or sign language constituents). The software implements an assessment procedure developed by Bruno Thomann and will be made publicly available. The article discusses the rationale of the agreement assessment procedure and points at future extensions of Staccato.
    @INCOLLECTION{Luecking:Ptock:Bergmann:2012,
        publisher={Springer},
        booktitle={Gesture and Sign Language in Human-Computer Interaction and Embodied Communication},
        website={http://link.springer.com/chapter/10.1007/978-3-642-34182-3_12},
        booksubtitle={9th International Gesture Workshop, GW 2011, Athens, Greece, May 2011, Revised Selected Papers},
        pages={129-138},
        author={Lücking, Andy and Ptock, Sebastian and Bergmann, Kirsten},
        series={Lecture Notes in Artificial Intelligence},
        volume={7206},
        editor={Eleni Efthimiou and Georgios Kouroupetroglou and Stavroula-Evita Fotina},
        year={2012},
        title={Assessing Agreement on Segmentations by Means of Staccato, the Segmentation Agreement Calculator according to Thomann},
        address={Berlin and Heidelberg},
        abstract={Staccato, the Segmentation Agreement Calculator According to Thomann, is a software tool for assessing the degree of agreement of multiple segmentations of some time-related data (e.g., gesture phases or sign language constituents). The software implements an assessment procedure developed by Bruno Thomann and will be made publicly available. The article discusses the rationale of the agreement assessment procedure and points at future extensions of Staccato.},
    }
  • [DOI] A. Mehler, A. Lücking, and P. Menke, “Assessing Cognitive Alignment in Different Types of Dialog by means of a Network Model,” Neural Networks, vol. 32, pp. 159-164, 2012.
    [Abstract] [BibTeX]

    We present a network model of dialog lexica, called TiTAN (Two-layer Time-Aligned Network) series. TiTAN series capture the formation and structure of dialog lexica in terms of serialized graph representations. The dynamic update of TiTAN series is driven by the dialog-inherent timing of turn-taking. The model provides a link between neural, connectionist underpinnings of dialog lexica on the one hand and observable symbolic behavior on the other. On the neural side, priming and spreading activation are modeled in terms of TiTAN networking. On the symbolic side, TiTAN series account for cognitive alignment in terms of the structural coupling of the linguistic representations of dialog partners. This structural stance allows us to apply TiTAN in machine learning of data of dialogical alignment. In previous studies, it has been shown that aligned dialogs can be distinguished from non-aligned ones by means of TiTAN-based modeling. Now, we simultaneously apply this model to two types of dialog: task-oriented, experimentally controlled dialogs on the one hand and more spontaneous, direction-giving dialogs on the other. We ask whether it is possible to separate aligned dialogs from non-aligned ones in a type-crossing way. Starting from a recent experiment (Mehler, Lücking, & Menke, 2011a), we show that such a type-crossing classification is indeed possible. This hints at a structural fingerprint left by alignment in networks of linguistic items that are routinely co-activated during conversation.
    @ARTICLE{Mehler:Luecking:Menke:2012,
        journal={Neural Networks},
        author={Mehler, Alexander and Lücking, Andy and Menke, Peter},
        doi={10.1016/j.neunet.2012.02.013},
        volume={32},
        pages={159-164},
        year={2012},
        title={Assessing Cognitive Alignment in Different Types of Dialog by means of a Network Model},
        website={http://www.sciencedirect.com/science/article/pii/S0893608012000421},
        abstract={We present a network model of dialog lexica, called TiTAN (Two-layer Time-Aligned Network) series. TiTAN series capture the formation and structure of dialog lexica in terms of serialized graph representations. The dynamic update of TiTAN series is driven by the dialog-inherent timing of turn-taking. The model provides a link between neural, connectionist underpinnings of dialog lexica on the one hand and observable symbolic behavior on the other. On the neural side, priming and spreading activation are modeled in terms of TiTAN networking. On the symbolic side, TiTAN series account for cognitive alignment in terms of the structural coupling of the linguistic representations of dialog partners. This structural stance allows us to apply TiTAN in machine learning of data of dialogical alignment. In previous studies, it has been shown that aligned dialogs can be distinguished from non-aligned ones by means of TiTAN-based modeling. Now, we simultaneously apply this model to two types of dialog: task-oriented, experimentally controlled dialogs on the one hand and more spontaneous, direction-giving dialogs on the other. We ask whether it is possible to separate aligned dialogs from non-aligned ones in a type-crossing way. Starting from a recent experiment (Mehler, Lücking, \& Menke, 2011a), we show that such a type-crossing classification is indeed possible. This hints at a structural fingerprint left by alignment in networks of linguistic items that are routinely co-activated during conversation.}}
  • A. Lücking and T. Pfeiffer, “Framing Multimodal Technical Communication. With Focal Points in Speech-Gesture-Integration and Gaze Recognition,” in Handbook of Technical Communication, A. Mehler, L. Romary, and D. Gibbon, Eds., De Gruyter Mouton, 2012, vol. 8, pp. 591-644.
    [BibTeX]

    @INCOLLECTION{Luecking:Pfeiffer:2012,
        publisher={De Gruyter Mouton},
        chapter={18},
        booktitle={Handbook of Technical Communication},
        pages={591-644},
        author={Lücking, Andy and Pfeiffer, Thies},
        series={Handbooks of Applied Linguistics},
        volume={8},
        editor={Alexander Mehler and Laurent Romary and Dafydd Gibbon},
        year={2012},
        title={Framing Multimodal Technical Communication. With Focal Points in Speech-Gesture-Integration and Gaze Recognition},
        website={http://www.degruyter.com/view/books/9783110224948/9783110224948.591/9783110224948.591.xml}}
  • P. Kubina, O. Abramov, and A. Lücking, “Barrier-free Communication,” in Handbook of Technical Communication, A. Mehler and L. Romary, Eds., Berlin and Boston: De Gruyter Mouton, 2012, vol. 8, pp. 645-706.
    [BibTeX]

    @INCOLLECTION{Kubina:Abramov:Luecking:2012,
        author={Kubina, Petra and Abramov, Olga and Lücking, Andy},
        address={Berlin and Boston},
        editor={Alexander Mehler and Laurent Romary},
        title={Barrier-free Communication},
        series={Handbooks of Applied Linguistics},
        pages={645-706},
        year={2012},
        chapter={19},
        booktitle={Handbook of Technical Communication},
        publisher={De Gruyter Mouton},
        editoratype={collaborator},
        volume={8},
        editora={Dafydd Gibbon},
        website={http://www.degruyter.com/view/books/9783110224948/9783110224948.645/9783110224948.645.xml}}
  • [PDF] [http://kyoto.evolang.org/] A. Lücking and A. Mehler, “What’s the Scope of the Naming Game? Constraints on Semantic Categorization,” in Proceedings of the 9th International Conference on the Evolution of Language, Kyoto, Japan, 2012, pp. 196-203.
    [Abstract] [BibTeX]

    The Naming Game (NG) has become a vivid research paradigm for simulation studies on language evolution and the establishment of naming conventions. Recently, NGs were used for reconstructing the creation of linguistic categories, most notably for color terms. We recap the functional principle of NGs and the latter Categorization Games (CGs) and evaluate them in the light of semantic data of linguistic categorization outside the domain of colors. This comparison reveals two specifics of the CG paradigm: Firstly, the emerging categories draw basically on the predefined topology of the learning domain. Secondly, the kind of categories that can be learnt in CGs is bound to context-independent intersective categories. This suggests that the NG and the CG focus on a special aspect of natural language categorization, which disregards context-sensitive categories used in a non-compositional manner.
    @INPROCEEDINGS{Luecking:Mehler:2012,
        url={http://kyoto.evolang.org/},
        website={https://www.researchgate.net/publication/267858061_WHAT'S_THE_SCOPE_OF_THE_NAMING_GAME_CONSTRAINTS_ON_SEMANTIC_CATEGORIZATION},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/Evolang2012-AL_AM.pdf},
        booktitle={Proceedings of the 9th International Conference on the Evolution of Language},
        pages={196-203},
        author={Lücking, Andy and Mehler, Alexander},
        year={2012},
        title={What's the Scope of the Naming Game? Constraints on Semantic Categorization},
        abstract={The Naming Game (NG) has become a vivid research paradigm for simulation studies on language evolution and the establishment of naming conventions. Recently, NGs were used for reconstructing the creation of linguistic categories, most notably for color terms. We recap the functional principle of NGs and the latter Categorization Games (CGs) and evaluate them in the light of semantic data of linguistic categorization outside the domain of colors. This comparison reveals two specifics of the CG paradigm: Firstly, the emerging categories draw basically on the predefined topology of the learning domain. Secondly, the kind of categories that can be learnt in CGs is bound to context-independent intersective categories. This suggests that the NG and the CG focus on a special aspect of natural language categorization, which disregards context-sensitive categories used in a non-compositional manner.},
        address={Kyoto, Japan}}

2011 (7)

  • [PDF] V. Ries and A. Lücking, “The SoSaBiEC Corpus: Social Structure and Bilinguality in Everyday Conversation,” in Multilingual Resources and Multilingual Applications: Proceedings of the German Society for Computational Linguistics 2011, 2011, pp. 207-210.
    [Abstract] [BibTeX]

    The SoSaBiEC corpus comprises audio recordings of everyday interactions between familiar subjects. Thus, the material the corpus is based on is not gained in task-oriented dialogue under strict experimental control; rather, it is made up of spontaneous conversations. We describe the raw data and the annotations that constitute the corpus. Speech is transcribed at the level of words. Dialogue act oriented codings constitute a functional, qualitative annotation level. The corpus so far provides an empirical basis for studying social aspects of unrestricted language use in a familiar context.
    @INPROCEEDINGS{Ries:Luecking:2011,
        booktitle={Multilingual Resources and Multilingual Applications: Proceedings of the German Society for Computational Linguistics 2011},
        pages={207--210},
        author={Ries, Veronika and Lücking, Andy},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/Ries_Luecking.pdf},
        poster={https://hucompute.org/wp-content/uploads/2015/08/SoSaBiEC-poster.pdf},
        series={GSCL 2011},
        editor={Hanna Hedeland and Thomas Schmidt and Kai Wörner},
        year={2011},
        title={The SoSaBiEC Corpus: Social Structure and Bilinguality in Everyday Conversation},
        abstract={The SoSaBiEC corpus comprises audio recordings of everyday interactions between familiar subjects. Thus, the material the corpus is based on is not gained in task-oriented dialogue under strict experimental control; rather, it is made up of spontaneous conversations. We describe the raw data and the annotations that constitute the corpus. Speech is transcribed at the level of words. Dialogue act oriented codings constitute a functional, qualitative annotation level. The corpus so far provides an empirical basis for studying social aspects of unrestricted language use in a familiar context.},
        location={Hamburg}}
  • [PDF] A. Lücking, S. Ptock, and K. Bergmann, “Staccato: Segmentation Agreement Calculator,” in Gesture in Embodied Communication and Human-Computer Interaction. Proceedings of the 9th International Gesture Workshop, Athens, Greece, 2011, pp. 50-53.
    [BibTeX]

    @INPROCEEDINGS{Luecking:Ptock:Bergmann:2011,
        publisher={National and Kapodistrian University of Athens},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/LueckingEA_final.pdf},
        booktitle={Gesture in Embodied Communication and Human-Computer Interaction. Proceedings of the 9th International Gesture Workshop},
        pages={50--53},
        author={Lücking, Andy and Ptock, Sebastian and Bergmann, Kirsten},
        series={GW 2011},
        editor={Eleni Efthimiou and Georgios Kouroupetroglou},
        month={5},
        year={2011},
        title={Staccato: Segmentation Agreement Calculator},
        address={Athens, Greece}}
  • [PDF] A. Mehler and A. Lücking, “A Graph Model of Alignment in Multilog,” in Proceedings of IEEE Africon 2011, Zambia, 2011.
    [BibTeX]

    @INPROCEEDINGS{Mehler:Luecking:2011,
        organization={IEEE},
        booktitle={Proceedings of IEEE Africon 2011},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/africon2011-paper-Alexander_Mehler_Andy_Luecking.pdf},
        author={Mehler, Alexander and Lücking, Andy},
        series={IEEE Africon},
        month={9},
        year={2011},
        title={A Graph Model of Alignment in Multilog},
        address={Zambia},
        website={https://www.researchgate.net/publication/267941012_A_Graph_Model_of_Alignment_in_Multilog}}
  • [PDF] A. Mehler, A. Lücking, and P. Menke, “From Neural Activation to Symbolic Alignment: A Network-Based Approach to the Formation of Dialogue Lexica,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN 2011), San Jose, California, July 31 — August 5, 2011.
    [BibTeX]

    @INPROCEEDINGS{Mehler:Luecking:Menke:2011,
        booktitle={Proceedings of the International Joint Conference on Neural Networks (IJCNN 2011), San Jose, California, July 31 -- August 5},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/neural-align-final.pdf},
        author={Mehler, Alexander and Lücking, Andy and Menke, Peter},
        year={2011},
        title={From Neural Activation to Symbolic Alignment: A Network-Based Approach to the Formation of Dialogue Lexica},
        website={http://dx.doi.org/10.1109/IJCNN.2011.6033266}
        }
  • [PDF] A. Lücking, O. Abramov, A. Mehler, and P. Menke, “The Bielefeld Jigsaw Map Game (JMG) Corpus,” in Abstracts of the Corpus Linguistics Conference 2011, Birmingham, 2011.
    [BibTeX]

    @INPROCEEDINGS{Luecking:Abramov:Mehler:Menke:2011,
        booktitle={Abstracts of the Corpus Linguistics Conference 2011},
        author={Lücking, Andy and Abramov, Olga and Mehler, Alexander and Menke, Peter},
        series={CL2011},
        year={2011},
        title={The Bielefeld Jigsaw Map Game (JMG) Corpus},
        website={http://www.birmingham.ac.uk/research/activity/corpus/publications/conference-archives/2011-birmingham.aspx},
        pdf={http://www.birmingham.ac.uk/documents/college-artslaw/corpus/conference-archives/2011/Paper-137.pdf},
        address={Birmingham}}
  • [PDF] A. Mehler, A. Lücking, and P. Menke, “Assessing Lexical Alignment in Spontaneous Direction Dialogue Data by Means of a Lexicon Network Model,” in Proceedings of 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), February 20–26, Tokyo, Berlin/New York, 2011, pp. 368-379.
    [BibTeX]

    @INPROCEEDINGS{Mehler:Luecking:Menke:2011:a,
        publisher={Springer},
        booktitle={Proceedings of 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), February 20--26, Tokyo},
        website={http://www.springerlink.com/content/g7p2250025u20010/},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/titan-cicling-camera-ready.pdf},
        author={Mehler, Alexander and Lücking, Andy and Menke, Peter},
        series={CICLing'11},
        pages={368-379},
        year={2011},
        title={Assessing Lexical Alignment in Spontaneous Direction Dialogue Data by Means of a Lexicon Network Model},
        address={Berlin/New York}}
  • [PDF] A. Lücking and A. Mehler, “A Model of Complexity Levels of Meaning Constitution in Simulation Models of Language Evolution,” International Journal of Signs and Semiotic Systems, vol. 1, iss. 1, pp. 18-38, 2011.
    [Abstract] [BibTeX]

    Currently, some simulative accounts exist within dynamic or evolutionary frameworks that are concerned with the development of linguistic categories within a population of language users. Although these studies mostly emphasize that their models are abstract, the paradigm categorization domain is preferably that of colors. In this paper, the authors argue that color adjectives are special predicates in both linguistic and metaphysical terms: semantically, they are intersective predicates, metaphysically, color properties can be empirically reduced onto purely physical properties. The restriction of categorization simulations to the color paradigm systematically leads to ignoring two ubiquitous features of natural language predicates, namely relativity and context-dependency. Therefore, the models for simulation models of linguistic categories are not able to capture the formation of categories like perspective-dependent predicates ‘left’ and ‘right’, subsective predicates like ‘small’ and ‘big’, or predicates that make reference to abstract objects like ‘I prefer this kind of situation’. The authors develop a three-dimensional grid of ascending complexity that is partitioned according to the semiotic triangle. They also develop a conceptual model in the form of a decision grid by means of which the complexity level of simulation models of linguistic categorization can be assessed in linguistic terms.
    @ARTICLE{Luecking:Mehler:2011,
        journal={International Journal of Signs and Semiotic Systems},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/luecking_mehler_article_IJSSS.pdf},
        author={Lücking, Andy and Mehler, Alexander},
        year={2011},
        volume={1},
        pages={18-38},
        number={1},
        title={A Model of Complexity Levels of Meaning Constitution in Simulation Models of Language Evolution},
        abstract={Currently, some simulative accounts exist within dynamic or evolutionary frameworks that are concerned with the development of linguistic categories within a population of language users. Although these studies mostly emphasize that their models are abstract, the paradigm categorization domain is preferably that of colors. In this paper, the authors argue that color adjectives are special predicates in both linguistic and metaphysical terms: semantically, they are intersective predicates, metaphysically, color properties can be empirically reduced onto purely physical properties. The restriction of categorization simulations to the color paradigm systematically leads to ignoring two ubiquitous features of natural language predicates, namely relativity and context-dependency. Therefore, the models for simulation models of linguistic categories are not able to capture the formation of categories like perspective-dependent predicates ‘left’ and ‘right’, subsective predicates like ‘small’ and ‘big’, or predicates that make reference to abstract objects like ‘I prefer this kind of situation’. The authors develop a three-dimensional grid of ascending complexity that is partitioned according to the semiotic triangle. They also develop a conceptual model in the form of a decision grid by means of which the complexity level of simulation models of linguistic categorization can be assessed in linguistic terms.}}

2010 (5)

  • A. Lücking and K. Bergmann, Introducing the Bielefeld SaGA Corpus, talk given at the 4th Conference of the International Society for Gesture Studies (ISGS), Europa Universität Viadrina Frankfurt/Oder, 2010.
    [Abstract] [BibTeX]

    People communicate multimodally. Most prominently, they co-produce speech and gesture. How do they do that? Studying the interplay of both modalities has to be informed by empirically observed communication behavior. We present a corpus built of speech and gesture data gained in a controlled study. We describe 1) the setting underlying the data; 2) annotation of the data; 3) reliability evaluation methods and results; and 4) applications of the corpus in the research domain of speech and gesture alignment.
    @Misc{Luecking:Bergmann:2010,
        author = {Andy L\"{u}cking and Kirsten Bergmann},
        title = {Introducing the {B}ielefeld {SaGA} Corpus},
        howpublished = {Talk given at \textit{Gesture: Evolution, Brain, and Linguistic Structures.} 4th Conference of the International Society for Gesture Studies (ISGS). Europa Universit\"{a}t Viadrina Frankfurt/Oder},
        address = {Europa Universit{\"a}t Viadrina Frankfurt/Oder},
        abstract = {People communicate multimodally. Most prominently, they co-produce speech and gesture. How do they do that? Studying the interplay of both modalities has to be informed by empirically observed communication behavior. We present a corpus built of speech and gesture data gained in a controlled study. We describe 1) the setting underlying the data; 2) annotation of the data; 3) reliability evaluation methods and results; and 4) applications of the corpus in the research domain of speech and gesture alignment.},
        year = {2010},
        month = {07},
        day = {28},
        date = {2010-07-28}
    }
  • [PDF] A. Lücking, “A Semantic Account for Iconic Gestures,” in Gesture: Evolution, Brain, and Linguistic Structures, Europa Universität Viadrina Frankfurt/Oder, 2010, p. 210.
    [BibTeX]

    @INPROCEEDINGS{Luecking:2010,
        organization={4th Conference of the International Society for Gesture Studies (ISGS)},
        booktitle={Gesture: Evolution, Brain, and Linguistic Structures},
        pages={210},
        author={Lücking, Andy},
        keywords={own},
        month={7},
        year={2010},
        title={A Semantic Account for Iconic Gestures},
        address={Europa Universit{\"a}t Viadrina Frankfurt/Oder},
        pdf={https://pub.uni-bielefeld.de/download/2318565/2319962},
        website={http://pub.uni-bielefeld.de/publication/2318565}}
  • [PDF] A. Lücking, K. Bergmann, F. Hahn, S. Kopp, and H. Rieser, “The Bielefeld Speech and Gesture Alignment Corpus (SaGA),” in Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality, Malta, 2010, pp. 92-98.
    [Abstract] [BibTeX]

    People communicate multimodally. Most prominently, they co-produce speech and gesture. How do they do that? Studying the interplay of both modalities has to be informed by empirically observed communication behavior. We present a corpus built of speech and gesture data gained in a controlled study. We describe 1) the setting underlying the data; 2) annotation of the data; 3) reliability evaluation methods and results; and 4) applications of the corpus in the research domain of speech and gesture alignment.
    @INPROCEEDINGS{Luecking:et:al:2010,
        organization={7th International Conference for Language Resources and Evaluation (LREC 2010)},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/saga-corpus.pdf},
        booktitle={Multimodal Corpora: Advances in Capturing, Coding and Analyzing Multimodality},
        pages={92--98},
        author={Lücking, Andy and Bergmann, Kirsten and Hahn, Florian and Kopp, Stefan and Rieser, Hannes},
        keywords={own},
        month={5},
        year={2010},
        title={The Bielefeld Speech and Gesture Alignment Corpus (SaGA)},
        address={Malta},
        abstract={People communicate multimodally. Most prominently, they co-produce speech and gesture. How do they do that? Studying the interplay of both modalities has to be informed by empirically observed communication behavior. We present a corpus built of speech and gesture data gained in a controlled study. We describe 1) the setting underlying the data; 2) annotation of the data; 3) reliability evaluation methods and results; and 4) applications of the corpus in the research domain of speech and gesture alignment.},
        website={http://pub.uni-bielefeld.de/publication/2001935}}
  • [PDF] A. Mehler, P. Weiß, P. Menke, and A. Lücking, “Towards a Simulation Model of Dialogical Alignment,” in Proceedings of the 8th International Conference on the Evolution of Language (Evolang8), 14-17 April 2010, Utrecht, 2010, pp. 238-245.
    [BibTeX]

    @INPROCEEDINGS{Mehler:Weiss:Menke:Luecking:2010,
        booktitle={Proceedings of the 8th International Conference on the Evolution of Language (Evolang8), 14-17 April 2010, Utrecht},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/Alexander_Mehler_Petra_Weiss_Peter_Menke_Andy_Luecking.pdf},
        website={http://www.let.uu.nl/evolang2010.nl/},
        author={Mehler, Alexander and Wei{\ss}, Petra and Menke, Peter and Lücking, Andy},
        year={2010},
        title={Towards a Simulation Model of Dialogical Alignment},
        pages={238-245}}
  • [PDF] [DOI] A. Mehler, A. Lücking, and P. Weiß, “A Network Model of Interpersonal Alignment,” Entropy, vol. 12, iss. 6, pp. 1440-1483, 2010.
    [Abstract] [BibTeX]

    In dyadic communication, both interlocutors adapt to each other linguistically, that is, they align interpersonally. In this article, we develop a framework for modeling interpersonal alignment in terms of the structural similarity of the interlocutors’ dialog lexica. This is done by means of so-called two-layer time-aligned network series, that is, a time-adjusted graph model. The graph model is partitioned into two layers, so that the interlocutors’ lexica are captured as subgraphs of an encompassing dialog graph. Each constituent network of the series is updated utterance-wise. Thus, both the inherent bipartition of dyadic conversations and their gradual development are modeled. The notion of alignment is then operationalized within a quantitative model of structure formation based on the mutual information of the subgraphs that represent the interlocutor’s dialog lexica. By adapting and further developing several models of complex network theory, we show that dialog lexica evolve as a novel class of graphs that have not been considered before in the area of complex (linguistic) networks. Additionally, we show that our framework allows for classifying dialogs according to their alignment status. To the best of our knowledge, this is the first approach to measuring alignment in communication that explores the similarities of graph-like cognitive representations.
    @ARTICLE{Mehler:Weiss:Luecking:2010:a,
        journal={Entropy},
        author={Mehler, Alexander and Lücking, Andy and Wei{\ss}, Petra},
        year={2010},
        title={A Network Model of Interpersonal Alignment},
        volume={12},
        pages={1440-1483},
        number={6},
        doi={10.3390/e12061440},
        website={http://www.mdpi.com/1099-4300/12/6/1440/},
        pdf={http://www.mdpi.com/1099-4300/12/6/1440/pdf},
        abstract={In dyadic communication, both interlocutors adapt to each other linguistically, that is, they align interpersonally. In this article, we develop a framework for modeling interpersonal alignment in terms of the structural similarity of the interlocutors’ dialog lexica. This is done by means of so-called two-layer time-aligned network series, that is, a time-adjusted graph model. The graph model is partitioned into two layers, so that the interlocutors’ lexica are captured as subgraphs of an encompassing dialog graph. Each constituent network of the series is updated utterance-wise. Thus, both the inherent bipartition of dyadic conversations and their gradual development are modeled. The notion of alignment is then operationalized within a quantitative model of structure formation based on the mutual information of the subgraphs that represent the interlocutor’s dialog lexica. By adapting and further developing several models of complex network theory, we show that dialog lexica evolve as a novel class of graphs that have not been considered before in the area of complex (linguistic) networks. Additionally, we show that our framework allows for classifying dialogs according to their alignment status. To the best of our knowledge, this is the first approach to measuring alignment in communication that explores the similarities of graph-like cognitive representations.}}

2009 (1)

  • [PDF] A. Mehler and A. Lücking, “A Structural Model of Semiotic Alignment: The Classification of Multimodal Ensembles as a Novel Machine Learning Task,” in Proceedings of IEEE Africon 2009, September 23-25, Nairobi, Kenya, 2009.
    [Abstract] [BibTeX]

    In addition to the well-known linguistic alignment processes in dyadic communication – e.g., phonetic, syntactic, semantic alignment – we provide evidence for a genuine multimodal alignment process, namely semiotic alignment. Communicative elements from different modalities 'routinize into' cross-modal 'super-signs', which we call multimodal ensembles. Computational models of human communication are in need of expressive models of multimodal ensembles. In this paper, we exemplify semiotic alignment by means of empirical examples of the building of multimodal ensembles. We then propose a graph model of multimodal dialogue that is expressive enough to capture multimodal ensembles. In line with this model, we define a novel task in machine learning with the aim of training classifiers that can detect semiotic alignment in dialogue. This model is in support of approaches which need to gain insights into realistic human-machine communication.
    @INPROCEEDINGS{Mehler:Luecking:2009,
        publisher={IEEE},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/mehler_luecking_2009.pdf},
        website={http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?reload=true&arnumber=5308098},
        booktitle={Proceedings of IEEE Africon 2009, September 23-25, Nairobi, Kenya},
        author={Mehler, Alexander and Lücking, Andy},
        year={2009},
        title={A Structural Model of Semiotic Alignment: The Classification of Multimodal Ensembles as a Novel Machine Learning Task},
        abstract={In addition to the well-known linguistic alignment processes in dyadic communication – e.g., phonetic, syntactic, semantic alignment – we provide evidence for a genuine multimodal alignment process, namely semiotic alignment. Communicative elements from different modalities 'routinize into' cross-modal 'super-signs', which we call multimodal ensembles. Computational models of human communication are in need of expressive models of multimodal ensembles. In this paper, we exemplify semiotic alignment by means of empirical examples of the building of multimodal ensembles. We then propose a graph model of multimodal dialogue that is expressive enough to capture multimodal ensembles. In line with this model, we define a novel task in machine learning with the aim of training classifiers that can detect semiotic alignment in dialogue. This model is in support of approaches which need to gain insights into realistic human-machine communication.}}

2008 (1)

  • [PDF] A. Lücking, A. Mehler, and P. Menke, “Taking Fingerprints of Speech-and-Gesture Ensembles: Approaching Empirical Evidence of Intrapersonal Alignment in Multimodal Communication,” in LONDIAL 2008: Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL), King’s College London, 2008, pp. 157-164.
    [BibTeX]

    @INPROCEEDINGS{Luecking:Mehler:Menke:2008,
        pdf={https://hucompute.org/wp-content/uploads/2015/08/luecking_mehler_menke_2008.pdf},
        booktitle={LONDIAL 2008: Proceedings of the 12th Workshop on the Semantics and Pragmatics of Dialogue (SEMDIAL)},
        pages={157--164},
        author={Lücking, Andy and Mehler, Alexander and Menke, Peter},
        month={June 2–4},
        year={2008},
        title={Taking Fingerprints of Speech-and-Gesture Ensembles: Approaching Empirical Evidence of Intrapersonal Alignment in Multimodal Communication},
        website={https://www.researchgate.net/publication/237305375_Taking_Fingerprints_of_Speech-and-Gesture_Ensembles_Approaching_Empirical_Evidence_of_Intrapersonal_Alignment_in_Multimodal_Communication},
        address={King's College London}}

2007 (2)

  • [PDF] [http://dx.doi.org/10.1007/s00455-007-9078-3] [DOI] C. Borr, M. Hielscher-Fastabend, and A. Lücking, “Reliability and Validity of Cervical Auscultation,” Dysphagia, vol. 22, iss. 3, pp. 225-234, 2007.
    [Abstract] [BibTeX]

    We conducted a two-part study that contributes to the discussion about cervical auscultation (CA) as a scientifically justifiable and medically useful tool to identify patients with a high risk of aspiration/penetration. We sought to determine (1) acoustic features that mark a deglutition act as dysphagic; (2) acoustic changes in healthy older deglutition profiles compared with those of younger adults; (3) the correctness and concordance of rater judgments based on CA; and (4) if education in CA improves individual reliability. The first part of the study focused on a comparison of the swallow morphology of dysphagic as opposed to healthy subjects' deglutition in terms of structure properties of the pharyngeal phase of deglutition. We obtained the following results. The duration of deglutition apnea is significantly higher in the older group than in the younger one. Comparing the younger group and the dysphagic group we found significant differences in duration of deglutition apnea, onset time, and number of gulps. Just one parameter, number of gulps, distinguishes significantly between the older and the dysphagic groups. The second part of the study aimed at evaluating the reliability of CA in detecting dysphagia measured as the concordance and the correctness of CA experts in classifying swallowing sounds. The interrater reliability coefficient AC1 resulted in a value of 0.46, which is to be interpreted as fair agreement. Furthermore, we found that comparison with radiologically defined aspiration/penetration for the group of experts (speech and language therapists) yielded 70% specificity and 94% sensitivity. We conclude that the swallowing sounds contain audible cues that should, in principle, permit reliable classification and view CA as an early warning system for identifying patients with a high risk of aspiration/penetration; however, it is not appropriate as a stand-alone tool.
    @ARTICLE{Borr:Luecking:Hielscher:2007,
        publisher={Springer New York},
        url={http://dx.doi.org/10.1007/s00455-007-9078-3},
        website={http://www.springerlink.com/content/c45578u74r38m4v7/},
        journal={Dysphagia},
        pages={225--234},
        author={Borr, Christiane and Hielscher-Fastabend, Martina and Lücking, Andy},
        volume={22},
        doi={10.1007/s00455-007-9078-3},
        year={2007},
        abstract={We conducted a two-part study that contributes to the discussion about cervical auscultation (CA) as a scientifically justifiable and medically useful tool to identify patients with a high risk of aspiration/penetration. We sought to determine (1) acoustic features that mark a deglutition act as dysphagic; (2) acoustic changes in healthy older deglutition profiles compared with those of younger adults; (3) the correctness and concordance of rater judgments based on CA; and (4) if education in CA improves individual reliability. The first part of the study focused on a comparison of the swallow morphology of dysphagic as opposed to healthy subjects' deglutition in terms of structure properties of the pharyngeal phase of deglutition. We obtained the following results. The duration of deglutition apnea is significantly higher in the older group than in the younger one. Comparing the younger group and the dysphagic group we found significant differences in duration of deglutition apnea, onset time, and number of gulps. Just one parameter, number of gulps, distinguishes significantly between the older and the dysphagic groups. The second part of the study aimed at evaluating the reliability of CA in detecting dysphagia measured as the concordance and the correctness of CA experts in classifying swallowing sounds. The interrater reliability coefficient AC1 resulted in a value of 0.46, which is to be interpreted as fair agreement. Furthermore, we found that comparison with radiologically defined aspiration/penetration for the group of experts (speech and language therapists) yielded 70% specificity and 94% sensitivity. We conclude that the swallowing sounds contain audible cues that should, in principle, permit reliable classification and view CA as an early warning system for identifying patients with a high risk of aspiration/penetration; however, it is not appropriate as a stand-alone tool.},
        title={Reliability and Validity of Cervical Auscultation},
        number={3},
        pdf={http://www.shkim.eu/cborr/ca5manuscript.pdf}}
  • A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and M. Staudacher, Locating Objects by Pointing, 2007.
    [BibTeX]

    @MISC{Kranstedt:et:al:2007,
        author={Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Staudacher, Marc},
        keywords={own},
        month={6},
        year={2007},
        howpublished={3rd International Conference of the International Society for Gesture Studies. Evanston, IL, USA},
        title={Locating Objects by Pointing}}

2006 (6)

  • [PDF] A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and M. Staudacher, “Measuring and Reconstructing Pointing in Visual Contexts,” in brandial ’06 — Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, 2006, pp. 82-89.
    [Abstract] [BibTeX]

    We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we are focusing upon the concept of the pointing cone, a geometrical model of a pointing’s extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with rater’s classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise to separate pointings from reference.
    @INPROCEEDINGS{Kranstedt:et:al:2006:c,
        publisher={Universit{\"a}tsverlag Potsdam},
        booktitle={brandial '06 -- Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/measure.pdf},
        pages={82--89},
        author={Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Staudacher, Marc},
        keywords={own},
        editor={David Schlangen and Raquel Fernández},
        month={9},
        year={2006},
        title={Measuring and Reconstructing Pointing in Visual Contexts},
        abstract={We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we are focusing upon the concept of the pointing cone, a geometrical model of a pointing’s extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with rater’s classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise to separate pointings from reference.},
        website={http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.144.8472},
        address={Potsdam}}
  • [PDF] A. Lücking, H. Rieser, and M. Staudacher, “Multi-modal Integration for Gesture and Speech,” in brandial ’06 — Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, 2006, pp. 106-113.
    [Abstract] [BibTeX]

    Demonstratives, in particular gestures that 'only' accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem, the other one is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR-applications increases the need for theories of multi-modal structures and events. In our workshop-contribution we focus on the integration of multi-modal contents and investigate different approaches dealing with this problem such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
    @INPROCEEDINGS{Luecking:Rieser:Staudacher:2006:a,
        publisher={Universit{\"a}tsverlag Potsdam},
        booktitle={brandial '06 -- Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/mm-int-brandial-final.pdf},
        pages={106--113},
        author={Lücking, Andy and Rieser, Hannes and Staudacher, Marc},
        keywords={own},
        editor={David Schlangen and Raquel Fernández},
        month={9},
        year={2006},
        title={Multi-modal Integration for Gesture and Speech},
        abstract={Demonstratives, in particular gestures that 'only' accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem, the other one is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR-applications increases the need for theories of multi-modal structures and events. In our workshop-contribution we focus on the integration of multi-modal contents and investigate different approaches dealing with this problem such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).},
        address={Potsdam}}
  • A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and I. Wachsmuth, “Deictic Object Reference in Task-oriented Dialogue,” in Situated Communication, G. Rickheit and I. Wachsmuth, Eds., Berlin: De Gruyter Mouton, 2006, pp. 155-207.
    [Abstract] [BibTeX]

    This chapter presents an original approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we take steps to classify the role of pointing in deictic expressions and to model the focussed area of pointing gestures, the so-called pointing cone. This pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
    @INCOLLECTION{Kranstedt:et:al:2006:b,
        publisher={De Gruyter Mouton},
        booktitle={Situated Communication},
        pages={155--207},
        author={Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Wachsmuth, Ipke},
        keywords={own},
        editor={Gert Rickheit and Ipke Wachsmuth},
        year={2006},
        title={Deictic Object Reference in Task-oriented Dialogue},
        website={http://pub.uni-bielefeld.de/publication/1894485},
        abstract={This chapter presents an original approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we take steps to classify the role of pointing in deictic expressions and to model the focussed area of pointing gestures, the so-called pointing cone. This pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.},
        address={Berlin}}
  • A. Kranstedt, A. Lücking, T. Pfeiffer, H. Rieser, and I. Wachsmuth, “Deixis: How to Determine Demonstrated Objects Using a Pointing Cone,” in Gesture in Human-Computer Interaction and Simulation, S. Gibet, N. Courty, and J. Kamp, Eds., Berlin: Springer, 2006, pp. 300-311.
    [Abstract] [BibTeX]

    We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focussed area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.
    @INCOLLECTION{Kranstedt:et:al:2006:a,
        publisher={Springer},
        booktitle={Gesture in Human-Computer Interaction and Simulation},
        pages={300--311},
        anote={6th International Gesture Workshop, Berder Island, France, 2005, Revised Selected Papers},
        author={Kranstedt, Alfred and Lücking, Andy and Pfeiffer, Thies and Rieser, Hannes and Wachsmuth, Ipke},
        keywords={own},
        website={http://www.springerlink.com/content/712036hp5v2q8408/},
        editor={Sylvie Gibet and Nicolas Courty and Jean-Fran{\c{c}}ois Kamp},
        year={2006},
        title={Deixis: How to Determine Demonstrated Objects Using a Pointing Cone},
        address={Berlin},
        abstract={We present a collaborative approach towards a detailed understanding of the usage of pointing gestures accompanying referring expressions. This effort is undertaken in the context of human-machine interaction integrating empirical studies, theory of grammar and logics, and simulation techniques. In particular, we attempt to measure the precision of the focussed area of a pointing gesture, the so-called pointing cone. The pointing cone serves as a central concept in a formal account of multi-modal integration at the linguistic speech-gesture interface as well as in a computational model of processing multi-modal deictic expressions.}}
  • [PDF] T. Pfeiffer, A. Kranstedt, and A. Lücking, “Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer,” in Proceedings: Dritter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, Koblenz, 2006.
    [Abstract] [BibTeX]

    [Translated from German:] Empirical research on natural human communication depends on the acquisition and analysis of extensive data. The modalities through which humans can express themselves are very diverse, and so are the representations by which these modalities can be made available for empirical study. For an investigation of pointing behavior in object reference, we developed IADE, a framework for recording, analyzing, and re-simulating speech and gesture data. It allows us to make decisive advances in linguistic experimental methodology for our research.
    @INPROCEEDINGS{Pfeiffer:Kranstedt:Luecking:2006,
        booktitle={Proceedings: Dritter Workshop Virtuelle und Erweiterte Realit{\"a}t der GI-Fachgruppe VR/AR},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/Pfeiffer-Kranstedt-Luecking-IADE.pdf},
        author={Pfeiffer, Thies and Kranstedt, Alfred and Lücking, Andy},
        keywords={own},
        year={2006},
        title={Sprach-Gestik Experimente mit IADE, dem Interactive Augmented Data Explorer},
        abstract={Für die empirische Erforschung natürlicher menschlicher Kommunikation sind wir auf die Akquise und Auswertung umfangreicher Daten angewiesen. Die Modalit{\"a}ten, über die sich Menschen ausdrücken können, sind sehr unterschiedlich - und genauso verschieden sind die Repr{\"a}sentationen, mit denen sie für die Empirie verfügbar gemacht werden können. Für eine Untersuchung des Zeigeverhaltens bei der Referenzierung von Objekten haben wir mit IADE ein Framework für die Aufzeichnung, Analyse und Resimulation von Sprach-Gestik Daten entwickelt. Mit dessen Hilfe können wir für unsere Forschung entscheidende Fortschritte in der linguistischen Experimentalmethodik machen.},
        website={http://pub.uni-bielefeld.de/publication/2426853},
        address={Koblenz}}
  • [PDF] A. Lücking, H. Rieser, and M. Staudacher, “SDRT and Multi-modal Situated Communication,” in brandial ’06 — Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue, Potsdam, 2006, pp. 72-79.
    [BibTeX]

    @INPROCEEDINGS{Luecking:Rieser:Stauchdacher:2006:b,
        publisher={Universit{\"a}tsverlag Potsdam},
        booktitle={brandial '06 -- Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/sdrt-sitcomm-brandial-final.pdf},
        pages={72--79},
        author={Lücking, Andy and Rieser, Hannes and Staudacher, Marc},
        keywords={own},
        editor={David Schlangen and Raquel Fernández},
        month={9},
        year={2006},
        title={SDRT and Multi-modal Situated Communication},
        website={http://publishup.uni-potsdam.de/opus4-ubp/frontdoor/index/index/docId/949},
        address={Potsdam}}

2004 (1)

  • [PDF] A. Lücking, H. Rieser, and J. Stegmann, “Statistical Support for the Study of Structures in Multi-Modal Dialogue: Inter-Rater Agreement and Synchronization,” in Catalog ’04—Proceedings of the Eighth Workshop on the Semantics and Pragmatics of Dialogue, Barcelona, 2004, pp. 56-63.
    [Abstract] [BibTeX]

    We present a statistical approach to assess relations that hold among speech and pointing gestures in and between turns in task-oriented dialogue. The units quantified over are the time-stamps of the XML-based annotation of the digital video data. It was found that, on average, gesture strokes do not exceed, but are freely distributed over the time span of their linguistic affiliates. Further, the onset of the affiliate was observed to occur earlier than gesture initiation. Moreover, we found that gestures do obey certain appropriateness conditions and contribute semantic content ('gestures save words') as well. Gestures also seem to play a functional role wrt dialogue structure: There is evidence that gestures can contribute to the bundle of features making up a turn-taking signal. Some statistical results support a partitioning of the domain, which is also reflected in certain rating difficulties. However, our evaluation of the applied annotation scheme generally resulted in very good agreement.
    @INPROCEEDINGS{Luecking:Rieser:Stegmann:2004,
        organization={Department of Translation and Philology, Universitat Pompeu Fabra},
        booktitle={Catalog '04---Proceedings of the Eighth Workshop on the Semantics and Pragmatics of Dialogue},
        pdf={https://hucompute.org/wp-content/uploads/2015/08/08-lucking-etal.pdf},
        pages={56--63},
        author={Lücking, Andy and Rieser, Hannes and Stegmann, Jens},
        keywords={own},
        editor={Jonathan Ginzburg and Enric Vallduví},
        year={2004},
        title={Statistical Support for the Study of Structures in Multi-Modal Dialogue: Inter-Rater Agreement and Synchronization},
        abstract={We present a statistical approach to assess relations that hold among speech and pointing gestures in and between turns in task-oriented dialogue. The units quantified over are the time-stamps of the XML-based annotation of the digital video data. It was found that, on average, gesture strokes do not exceed, but are freely distributed over the time span of their linguistic affiliates. Further, the onset of the affiliate was observed to occur earlier than gesture initiation. Moreover, we found that gestures do obey certain appropriateness conditions and contribute semantic content ('gestures save words') as well. Gestures also seem to play a functional role wrt dialogue structure: There is evidence that gestures can contribute to the bundle of features making up a turn-taking signal. Some statistical results support a partitioning of the domain, which is also reflected in certain rating difficulties. However, our evaluation of the applied annotation scheme generally resulted in very good agreement.},
        address={Barcelona}}


Since January 2011, I have been a research assistant at the Text Technology Lab at Goethe University Frankfurt.

I studied linguistics, philosophy, and German philology at Bielefeld University. During my studies, I worked as a scientific assistant in several projects:

  1. B1 “Speech-Gesture Alignment” in the Collaborative Research Center 673 “Alignment in Communication” (June 2006 to January 2011). In this project, I contributed to building the Speech-and-Gesture Alignment Corpus (SaGA). I also developed an account of the meaning of co-verbal iconic gestures and of how they interact with speech (see my dissertation).
  2. Linguistic Networks (September 2009 to December 2010).
  3. Research Unit 437 “Text Technological Modelling of Information”, project A2 Secondary structuring of information and comparative analysis of discourse (Sekimo) (April 2006 to September 2006). In this short-term engagement, I supported the annotation of discourse structure and centering relations, and the assessment of reliability.
  4. Project B3 “Deixis in Construction Dialogues” of the Collaborative Research Center 360 “Situated Artificial Communicators” (2005). In this project, I participated in investigating the role of pointing in demonstrative reference in task-oriented dialogue.

In 2011, I received my PhD in linguistics from Bielefeld University for my prolegomena to a linguistic theory of co-verbal iconic gesture. The work was published in 2013 as “Ikonische Gesten. Grundzüge einer linguistischen Theorie”.

I am a member of the Deutsche Gesellschaft für Sprachwissenschaft (DGfS).

Besides my research activities, I am interested in typesetting with LaTeX. I am a member of the German TeX User Society (Deutsche Anwendervereinigung TeX, DANTE).