English spelling can be rather tricky. Often, the way a word is spelled has little to do with how it is pronounced, so many people misspell words, including native English speakers. Japanese typos raise challenges of their own, which the following paper tackles.

Building a Japanese Typo Dataset from Wikipedia's Revision History

Abstract: User-generated texts contain many typos for which correction is necessary for NLP systems to work. Although a large number of typo–correction pairs are needed to develop a data-driven typo correction system, no such dataset is available for Japanese. In this paper, we extract over half a million Japanese typo–correction pairs from Wikipedia's revision history. Unlike other languages, Japanese poses unique challenges: (1) Japanese texts are unsegmented, so we cannot simply apply a spelling checker, and (2) the way people input kanji logographs results in typos with drastically different surface forms from the correct ones. We address these challenges by combining character-based extraction rules, morphological analyzers to guess readings, and various filtering methods. We evaluate the dataset using crowdsourcing and run a baseline seq2seq model for typo correction.
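The paper's pipeline combines character-based extraction rules with reading-guessing morphological analyzers and filtering, which does not reduce to a few lines. Still, as a minimal illustrative sketch (using only Python's standard library, and not the authors' actual rules), a character-level diff between an old and a new revision of the same sentence already surfaces candidate typo–correction pairs without needing a word segmenter:

```python
import difflib

def extract_typo_pairs(old: str, new: str, max_span: int = 4):
    """Collect (typo, correction) substring pairs from two revisions of
    the same sentence via character-level diff opcodes. Japanese text is
    unsegmented, so diffing characters sidesteps tokenization."""
    pairs = []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # Keep only short in-place replacements; long rewrites are more
        # likely genuine content edits than typo fixes.
        if tag == "replace" and i2 - i1 <= max_span and j2 - j1 <= max_span:
            pairs.append((old[i1:i2], new[j1:j2]))
    return pairs

# 学性 is a plausible kana–kanji conversion slip for 学生 ("student"):
# 性 and 生 share the reading "sei".
print(extract_typo_pairs("私は大阪大学の学性です", "私は大阪大学の学生です"))  # [('性', '生')]
```

A real extractor in the spirit of the paper would then filter these candidates, e.g. by checking with a morphological analyzer that the typo and correction plausibly share a reading, as in the conversion-error example above.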
Cite (Informal): Building a Japanese Typo Dataset from Wikipedia's Revision History (Tanaka et al., ACL 2020)

Cite (ACL): Yu Tanaka, Yugo Murawaki, Daisuke Kawahara, and Sadao Kurohashi. 2020. Building a Japanese Typo Dataset from Wikipedia's Revision History. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 230–236, Online. Association for Computational Linguistics.

Anthology ID: 2020.acl-srw.31 | Month: July | Year: 2020 | Venue: ACL | Publisher: Association for Computational Linguistics | Pages: 230–236 | DOI: 10.18653/v1/2020.acl-srw.31 | Bibkey: tanaka-etal-2020-building
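For reference managers, the citation metadata above can be assembled into a BibTeX entry (the anthology's official export may differ slightly in field formatting; the URL field is omitted since it is not given here):

```bibtex
@inproceedings{tanaka-etal-2020-building,
    title     = "Building a {J}apanese Typo Dataset from {W}ikipedia's Revision History",
    author    = "Tanaka, Yu and Murawaki, Yugo and Kawahara, Daisuke and Kurohashi, Sadao",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
    month     = jul,
    year      = "2020",
    address   = "Online",
    publisher = "Association for Computational Linguistics",
    pages     = "230--236",
    doi       = "10.18653/v1/2020.acl-srw.31",
}
```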