Wahed Hemati
M.Sc. Computer Science

Staff member

Publications

Total: 9

2018 (1)

  • A. Mehler, W. Hemati, R. Gleim, and D. Baumartz, “Auf dem Weg zu einer Infrastruktur für die verteilte interaktive evolutionäre Verarbeitung natürlicher Sprache,” in Forschungsinfrastrukturen und digitale Informationssysteme in der germanistischen Sprachwissenschaft, H. Lobin, R. Schneider, and A. Witt, Eds., Berlin: De Gruyter, 2018, vol. 6.
    [BibTeX]

    @InCollection{Mehler:Hemati:Gleim:Baumartz:2018,
        Title                    = {{Auf dem Weg zu einer Infrastruktur für die verteilte interaktive evolutionäre Verarbeitung natürlicher Sprache}},
        Author                   = {Alexander Mehler and Wahed Hemati and Rüdiger Gleim and Daniel Baumartz},
        Booktitle                = {Forschungsinfrastrukturen und digitale Informationssysteme in der germanistischen Sprachwissenschaft},
        Volume                   = {6},
        Year                     = {2018},
        Address                  = {Berlin},
        Editor                   = {Henning Lobin and Roman Schneider and Andreas Witt},
        Publisher                = {De Gruyter}
    }

2017 (5)

  • A. Mehler, R. Gleim, W. Hemati, and T. Uslu, “Skalenfreie online soziale Lexika am Beispiel von Wiktionary,” in Proceedings of the 53rd Annual Conference of the Institut für Deutsche Sprache (IDS), March 14-16, Mannheim, Germany, Berlin, 2017.
    [BibTeX]

    @InProceedings{Mehler:Gleim:Hemati:Uslu:2017,
        Title                    = {{Skalenfreie online soziale Lexika am Beispiel von Wiktionary}},
        Author                   = {Alexander Mehler and Rüdiger Gleim and Wahed Hemati and Tolga Uslu},
        Booktitle                = {Proceedings of the 53rd Annual Conference of the Institut für Deutsche Sprache (IDS), March 14-16, Mannheim, Germany},
        Year                     = {2017},
        Address                  = {Berlin},
        Editor                   = {Stefan Engelberg and Henning Lobin and Kathrin Steyer and Sascha Wolfer},
        Publisher                = {De Gruyter}
    }
  • A. Mehler, O. Zlatkin-Troitschanskaia, W. Hemati, D. Molerov, A. Lücking, and S. Schmidt, “Integrating Computational Linguistic Analysis of Multilingual Learning Data and Educational Measurement Approaches to Explore Student Learning in Higher Education,” in Positive Learning in the Age of Information (PLATO) — A blessing or a curse?, O. Zlatkin-Troitschanskaia, G. Wittum, and A. Dengel, Eds., Wiesbaden: Springer, 2017. in press
    [Abstract] [BibTeX]

    This chapter develops a computational linguistic model for analyzing and comparing multilingual data as well as its application to a large body of standardized assessment data from higher education. The approach employs both an automatic and a manual annotation of the data on several linguistic layers (including parts of speech, text structure and content). Quantitative features of the textual data are explored that are related to both the students’ (domain-specific knowledge) test results and their level of academic experience. The respective analysis involves statistics of distance correlation, text categorization with respect to text types (questions and distractors) as well as languages (English and German), and network analysis as a means to assess dependencies between features. The results indicate a correlation between correct test results of students and linguistic features of the verbal presentations of tests indicating a language influence on higher education test performance. It is also found that this influence relates to special language. Thus, this integrative modeling approach contributes a test basis for a large-scale analysis of learning data and points to a number of subsequent more detailed research.
    @InCollection{Mehler:Zlatkin-Troitschanskaia:Hemati:Molerov:Luecking:Schmidt:2017,
        Title                    = {Integrating Computational Linguistic Analysis of Multilingual Learning Data and Educational Measurement Approaches to Explore Student Learning in Higher Education},
        Author                   = {Alexander Mehler and Olga Zlatkin-Troitschanskaia and Wahed Hemati and Dimitri Molerov and Andy Lücking and Susanne Schmidt},
        Booktitle                = {Positive Learning in the Age of Information ({PLATO}) -- A blessing or a curse?},
        Publisher                = {Springer},
        Note = {in press},
        Abstract = {This chapter develops a computational linguistic model for analyzing and comparing multilingual data as well as its application to a large body of standardized assessment data from higher education. The approach employs both an automatic and a manual annotation of the data on several linguistic layers (including parts of speech, text structure and content). Quantitative features of the textual data are explored that are related to both the students’ (domain-specific knowledge) test results and their level of academic experience. The respective analysis involves statistics of distance correlation, text categorization with respect to text types (questions and distractors) as well as languages (English and German), and network analysis as a means to assess dependencies between features. The results indicate a correlation between correct test results of students and linguistic features of the verbal presentations of tests indicating a language influence on higher education test performance. It is also found that this influence relates to special language. Thus, this integrative modeling approach contributes a test basis for a large-scale analysis of learning data and points to a number of subsequent more detailed research.},
        Year                     = {2017},
        Address                  = {Wiesbaden},
        Editor                   = {Zlatkin-Troitschanskaia, Olga and Wittum, Gabriel and Dengel, Andreas},
    }
  • T. Uslu, W. Hemati, A. Mehler, and D. Baumartz, “TextImager as a Generic Interface to R,” in Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), 2017. Accepted
    [BibTeX]

    @inproceedings{Uslu:Hemati:Mehler:Baumartz:2017,
     author={Tolga Uslu and Wahed Hemati and Alexander Mehler and Daniel Baumartz},
     title={{TextImager} as a Generic Interface to {R}},
     booktitle={Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017)},
     year={2017},
     location={Valencia, Spain},
     note={Accepted}
    }
  • W. Hemati, A. Mehler, and T. Uslu, “CRFVoter: Chemical Entity Mention, Gene and Protein Related Object recognition using a conglomerate of CRF based tools,” in BioCreative V.5. Proceedings, 2017. accepted
    [BibTeX]

    @InProceedings{Hemati:Mehler:Uslu:2017,
      author    = {Wahed Hemati and Alexander Mehler and Tolga Uslu},
      title     = {{CRFVoter}: Chemical Entity Mention, Gene and Protein Related Object recognition using a conglomerate of CRF based tools},
      booktitle = {BioCreative V.5. Proceedings},
      year      = {2017},
      note      = {accepted}
    }
  • W. Hemati, T. Uslu, and A. Mehler, “TextImager as an interface to BeCalm,” in BioCreative V.5. Proceedings, 2017. accepted
    [BibTeX]

    @InProceedings{Hemati:Uslu:Mehler:2017,
      author =   {Wahed Hemati and Tolga Uslu and Alexander Mehler},
      title =   {{TextImager} as an interface to {BeCalm}},
      booktitle =   {BioCreative V.5. Proceedings}, 
      year =    {2017},
      note =   {accepted}
    }

2016 (3)

  • W. Hemati, T. Uslu, and A. Mehler, “TextImager: a Distributed UIMA-based System for NLP,” in Proceedings of the COLING 2016 System Demonstrations, 2016.
    [BibTeX]

    @inproceedings{Hemati:Uslu:Mehler:2016,
    author={Wahed Hemati and Tolga Uslu and Alexander Mehler},
    title={TextImager: a Distributed UIMA-based System for NLP},
    booktitle={Proceedings of the COLING 2016 System Demonstrations},
    year={2016},
    location={Osaka, Japan}
    }
  • [PDF] A. Mehler, T. Uslu, and W. Hemati, “Text2voronoi: An Image-driven Approach to Differential Diagnosis,” in Proceedings of the 5th Workshop on Vision and Language (VL’16) hosted by the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, 2016.
    [BibTeX]

    @inproceedings{Mehler:Uslu:Hemati:2016,
        title={Text2voronoi: An Image-driven Approach to Differential Diagnosis},
        author={Alexander Mehler and Tolga Uslu and Wahed Hemati},
        booktitle={Proceedings of the 5th Workshop on Vision and Language (VL'16) hosted by the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin},
        pdf = {https://aclweb.org/anthology/W/W16/W16-3212.pdf},
        year={2016}
    }
  • [DOI] A. Mehler, R. Gleim, T. vor der Brück, W. Hemati, T. Uslu, and S. Eger, “Wikidition: Automatic Lexiconization and Linkification of Text Corpora,” Information Technology, pp. 70-79, 2016.
    [Abstract] [BibTeX]

    We introduce a new text technology, called Wikidition, which automatically generates large scale editions of corpora of natural language texts. Wikidition combines a wide range of text mining tools for automatically linking lexical, sentential and textual units. This includes the extraction of corpus-specific lexica down to the level of syntactic words and their grammatical categories. To this end, we introduce a novel measure of text reuse and exemplify Wikidition by means of the capitularies, that is, a corpus of Medieval Latin texts.
    @Article{Mehler:et:al:2016,
      Title                    = {Wikidition: Automatic Lexiconization and Linkification of Text Corpora},
      Author                   = {Alexander Mehler and Rüdiger Gleim and Tim vor der Brück and Wahed Hemati and Tolga Uslu and Steffen Eger},
      Journal                  = {Information Technology},
      Year                     = {2016},
      Pages                    = {70-79},
      Doi                      = {10.1515/itit-2015-0035},
      Abstract                 = {We introduce a new text technology, called Wikidition, which automatically generates large scale editions of corpora of natural language texts. Wikidition combines a wide range of text mining tools for automatically linking lexical, sentential and textual units. This includes the extraction of corpus-specific lexica down to the level of syntactic words and their grammatical categories. To this end, we introduce a novel measure of text reuse and exemplify Wikidition by means of the capitularies, that is, a corpus of Medieval Latin texts.}
    }