Inside IELTS: Preparing for the Test with the Experts starts September 12
Learn about the skills you need for IELTS Academic and beyond, on this free online course from the experts who produce the test.
Here’s the link.
:::::::::::::::::::::::::::::::::::
Through the corpora list
::::::::::::::::::::::::::::::::::::
TAL Journal: 2016 Volume 57-3
Call for papers
Topic: NLP for learning and teaching (Link)
Foreign Language Learning and Teaching is one of the fields where the introduction of information and communication technologies (ICT) has proved particularly fruitful. It is thus no wonder that Computer-Assisted Language Learning (CALL) was, from the 1960s onward, among the first fields to integrate insights and techniques from Natural Language Processing (NLP) in order to create intelligent computer-assisted learning environments. Since then, various other fields and disciplines have also incorporated NLP into electronic learning environments to support self-directed learning, blended learning or classroom teaching. Overall, NLP has contributed to the improvement of learning environments and to the development of research in the related fields: it has made integrated systems better and has broadened the range of questions those fields address.
Today, online learning tools, Massive Open Online Courses (MOOCs), Small Private Online Courses, Computer-Assisted Pronunciation Teaching (CAPT) systems, Computer-Assisted Instruction systems for mathematics, sign language learning applications and Intelligent Tutoring Systems (ITS), among many others, are heavy “consumers” of NLP, or are about to become so.
Integrating NLP into these systems makes it possible to consider, process and reproduce, for learning purposes, aspects of the content of linguistic data; to create more advanced educational resources; and to make communication with the learner more relevant in a teaching context.
The aspects of NLP most frequently involved are the analysis of learners’ responses, the provision of feedback, the automated generation of exercises and the monitoring of learning progress. Other aspects of learning and teaching also draw on NLP, such as plagiarism detection, writing support, the use of learner corpora or parallel corpora to detect and resolve errors, and adaptive learning systems that integrate ontologies for the relevant domains.
The contribution of NLP to these systems is generally regarded as positive. It must be recognized, however, that only a handful of such applications have reached the general public as commercial software. In most cases, the systems never left the laboratory and had a limited range of use, sometimes serving only as a proof of concept. Is this due, as many believe, to the high production cost of NLP resources? Is it because of the current quality of NLP results? Is it a consequence of how NLP is integrated into these applications?
The goal of this issue of Traitement Automatique des Langues, dedicated to “NLP for learning and teaching”, is to take stock of the contribution of NLP to instructional systems, both at a theoretical level (opportunities, limitations, integration methods) and at the level of producing learning systems or components of such systems.
Authors are invited to submit papers on all aspects of implementing NLP in Computer-Assisted Instruction (CAI) systems for a given discipline, as well as on tools useful for this task, in particular regarding, but not limited to, the following issues and tasks:
Position papers and state of the art papers are also welcome.
Language
Papers can be written in French or in English. Submissions in English will only be accepted if at least one of the authors is not a native speaker of French.
Submission guidelines
Submitted papers should be 20 to 25 pages long. Any deviation from this length should be discussed in advance with the guest editors.
Authors are invited to submit their paper as a PDF file at http://tal-57-3.sciencesconf.org/ by clicking on “Soumission d’un article” (“Submit a paper”), after registering and logging in on SciencesConf.org.
The TAL Journal follows a double-blind peer-reviewing process. All submissions must be carefully anonymized.
Stylesheets are available online on the journal website: http://www.atala.org/IMG/zip/tal-style.zip .
Important dates
Journal
Traitement Automatique des Langues is an international journal published since 1960 by ATALA (Association pour le traitement automatique des langues) with the support of CNRS. It is now published online, with immediate open access to published papers and annual print on demand. This does not change its editorial and reviewing process.
Guest editors
Editorial Board
*************************************************
Georges ANTONIADIS
Professor of Computational Linguistics
Director of the Department of Language Sciences & French as a Foreign Language (FLE)
Head of the Language Industries master’s programme
UFR LLASIC / LIDILEM Laboratory
Université Grenoble-Alpes, bâtiment Stendhal
CS 40700
38058 Grenoble cedex 9
Tel.: +33 (0)4 76 82 77 61 Fax: +33 (0)4 76 82 41 26
The fifth biennial conference on electronic lexicography, eLex 2017, will take place at the Holiday Inn in Leiden, the Netherlands, from 19 to 21 September 2017.
The conference aims to investigate state-of-the-art technologies and methods for automating the creation of dictionaries. Over the past two decades, advances in NLP techniques have enabled the automatic extraction of different kinds of lexicographic information from corpora and other (digital) resources. As a result, key lexicographic tasks, such as finding collocations, definitions, example sentences and translations, are increasingly being transferred from humans to machines. Automating the creation of dictionaries is highly relevant, especially for under-resourced languages, where dictionaries need to be compiled from scratch and where users cannot wait years, often decades, for the dictionary to be “completed”. Key questions to be discussed are: What are the best practices for automatic data extraction, crowdsourcing and data visualisation? How far can we get with lexicography from scratch, and what is the role of the lexicographer in this process?
Important dates
February 1st, 2017: abstract submissions
March 15th, 2017: reviews of abstracts
May 15th, 2017: submission of full papers
June 15th, 2017: reviews of full papers
June 25th, 2017: submission of camera-ready copies
Call for papers here: https://elex.link/elex2017/call-for-papers/
The Alphabetizer is a web tool for putting terms in alphabetical order. Use it to sort any list online, on your computer or mobile device. The tool, which also serves as an educational resource, provides sorting functions including the ability to put items in alphabetical order, remove HTML, capitalize and lowercase words and phrases, ignore case, order names, sort by last name, add numbers, letters and Roman numerals to lists, and more.
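The sorting operations described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the Alphabetizer’s actual code; the function names and options are made up here:

```python
# Illustrative sketch of the sorting features described above
# (not the Alphabetizer's implementation; names are hypothetical).

def alphabetize(items, ignore_case=True):
    """Put items in alphabetical order, optionally ignoring case."""
    key = str.casefold if ignore_case else None
    return sorted(items, key=key)

def sort_by_last_name(names):
    """Order full names by their final word (the surname)."""
    return sorted(names, key=lambda n: n.split()[-1].casefold())

def number_items(items, style="numbers"):
    """Prefix each item with numbers or letters."""
    if style == "letters":
        labels = (chr(ord("a") + i) for i in range(len(items)))
    else:
        labels = (str(i + 1) for i in range(len(items)))
    return [f"{label}. {item}" for label, item in zip(labels, items)]

print(alphabetize(["banana", "Apple", "cherry"]))
# → ['Apple', 'banana', 'cherry']
print(sort_by_last_name(["Susan Hunston", "Andrew Hardie"]))
# → ['Andrew Hardie', 'Susan Hunston']
print(number_items(["first", "second"], style="letters"))
# → ['a. first', 'b. second']
```

Note that case-insensitive ordering is done by passing a key function rather than by modifying the items, so the original capitalization is preserved in the output.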
Source: http://alphabetizer.flap.tv
*************************
Through the BAAL mail list
*************************
The International Corpus Linguistics Conference 2017 will take place from Monday 24 to Friday 28 July at the University of Birmingham.
Opening plenary
Susan Hunston, University of Birmingham
Keynote speakers
Susan Conrad (Portland State University, US)
Andrew Hardie (Lancaster University, UK)
Christian Mair (University of Freiburg, Germany)
Dan McIntyre (University of Huddersfield, UK)
Mike Scott (Aston University, UK)
::::::::::::::::
Through the corpora list
We are happy to announce the release of a new version of BabelNet, a project recently featured in a TIME magazine article. BabelNet (http://babelnet.org) is the largest multilingual encyclopedic dictionary and semantic network, created through the seamless integration of the largest multilingual Web encyclopedia, Wikipedia, with the most popular computational lexicon of English, WordNet, and with other lexical resources such as Wiktionary, OmegaWiki, Wikidata, Open Multilingual WordNet, Wikiquote, VerbNet, Microsoft Terminology, GeoNames, WoNeF, ImageNet, ItalWordNet, Open Dutch WordNet and FrameNet. The integration is performed via a high-performance linking algorithm and by filling lexical gaps with the aid of machine translation. The result is an encyclopedic dictionary that provides Babel synsets, i.e., concepts and named entities lexicalized in many languages and connected by large numbers of semantic relations.
Version 3.7 comes with the following new features:
More statistics are available at: http://babelnet.org/stats
BabelNet was part of the MultiJEDI project, originally funded by the European Research Council and headed by Prof. Roberto Navigli at the Linguistic Computing Laboratory of the Sapienza University of Rome. BabelNet is now a self-sustained project. It is, and always will be, free for research purposes, including download. Babelscape, a Sapienza startup company, is BabelNet’s commercial support arm, through which the project will be maintained and improved over time.
Contact:
The BabelNet group
=====================================
Roberto Navigli
Dipartimento di Informatica
Sapienza University of Rome
Viale Regina Elena 295b (building G, second floor)
00161 Roma Italy
Phone: +39 0649255161 – Fax: +39 06 49918301
Home Page: http://wwwusers.di.uniroma1.it/~navigli