Reading notes #5: NLDB 2022
Between the 15th and 17th of June, I attended the 27th edition of the NLDB conference in Valencia. This was a wonderful experience where I got to eat paella, meet a bunch of interesting people, discover insightful research and present my own paper. In this article, I will present a few papers that caught my attention at the conference.
On-Device Language Detection and Classification of Extreme Short Text from Calendar Titles Across Languages
As the title indicates, this paper is about developing a method for detecting languages in very short texts. Events set by users in their personal calendars tend to have short titles of only a few words. This makes it challenging to detect the language used in these titles, which is important for classifying these events and providing personalised services.
Uncertainty Detection in Historical Databases
Uncertainty in historical documents is unavoidable yet problematic for historians. This paper proposes to tackle this issue by developing a method for detecting uncertainties and classifying their type and impact.
Revisiting the Past to Reinvent the Future: Topic Modeling with Single Mode Factorization
In this paper, the author goes back in time to unearth a forgotten methodology for topic modelling. The paper provides a fascinating history of topic modelling and argues that methods such as LSI, which are no longer widely used, can still beat more recent topic models. In particular, the author argues that methods such as LDA have one great weakness: they analyse word co-occurrence at the document level instead of using a window-based approach like modern language models and word embedding methods. Hence, this paper is one of those that challenge accepted paradigms and make us rethink our approach.
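To make the document-level versus window-based contrast concrete, here is a small sketch of the two ways of counting co-occurrences. The toy corpus and window size are my own placeholders, not anything from the paper.

```python
# A quick sketch of the contrast the author raises: LDA-style models count which
# words share a *document*, whereas word embedding methods count which words
# share a small *window*. The corpus below is a toy example.
from collections import Counter
from itertools import combinations

docs = [["topic", "model", "learns", "themes", "from", "documents"],
        ["word", "embeddings", "use", "a", "sliding", "window"]]

# Document-level co-occurrence: every pair of words appearing in the same document.
doc_pairs = Counter()
for doc in docs:
    doc_pairs.update(combinations(sorted(set(doc)), 2))

# Window-based co-occurrence: only pairs within a window of +/- 2 tokens.
win_pairs = Counter()
for doc in docs:
    for i, word in enumerate(doc):
        for neighbour in doc[i + 1:i + 3]:
            win_pairs.update([tuple(sorted((word, neighbour)))])

print(len(doc_pairs), "document-level pairs vs", len(win_pairs), "window pairs")
```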
Metric Learning and Adaptive Boundary for Out-of-Domain Detection
When developing chatbots, one challenge is deciding how to act when the user does not react as expected. Human conversation tends to jump from one topic to another, and it is difficult to identify these shifts. This paper approaches the issue as an out-of-domain detection task. By embedding utterances in a vector space, the authors show that unexpected utterances can be detected because their vector representations are significantly more distant from the rest of the conversation.
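As a rough illustration of that intuition (not the paper's metric-learning or adaptive-boundary method), the sketch below flags an utterance as out-of-domain when its embedding lies far from the centroid of the in-domain utterances. The model name, example utterances and fixed threshold are assumptions.

```python
# A minimal sketch, assuming the sentence-transformers package: an utterance is
# treated as out-of-domain when its embedding is far from the centroid of the
# in-domain utterances. The paper learns an adaptive boundary instead of using
# a fixed threshold like the one below.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

in_domain = [
    "I'd like to book a table for two tonight.",
    "Can I change my reservation to 8 pm?",
    "Do you have vegetarian options?",
]
candidate = "What's the weather like on Mars?"

# Embed the known in-domain utterances and the new one.
domain_vecs = model.encode(in_domain)
cand_vec = model.encode([candidate])[0]

# Distance from the in-domain centroid; a large distance suggests out-of-domain.
centroid = domain_vecs.mean(axis=0)
distance = np.linalg.norm(cand_vec - centroid)

THRESHOLD = 1.0  # hypothetical value for illustration only
print("out-of-domain" if distance > THRESHOLD else "in-domain", distance)
```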
Better Exploiting BERT for Few-Shot Event Detection
This paper proposes an interesting few-shot learning approach to event detection. The language model BERT is fine-tuned to produce embeddings from a few examples of each class of event, and the model can achieve high performance with little training data. Interestingly, the authors demonstrate that using all the layers of BERT instead of only the last one improves performance on this task.
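The "use all layers" idea can be illustrated with the Hugging Face transformers library: instead of taking only the last hidden layer, the [CLS] representation is averaged across every layer. This is a simplified sketch of that contrast, not the paper's full few-shot setup.

```python
# A minimal sketch, assuming the Hugging Face transformers library: compare the
# usual last-layer [CLS] vector with an average of the [CLS] vector over all layers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The earthquake struck the coastal city at dawn.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple: embedding layer + 12 transformer layers.
all_layers = torch.stack(outputs.hidden_states)       # (13, 1, seq_len, 768)
last_layer_cls = outputs.hidden_states[-1][:, 0, :]   # the usual choice
all_layer_cls = all_layers[:, :, 0, :].mean(dim=0)    # average over all layers

print(last_layer_cls.shape, all_layer_cls.shape)      # both (1, 768)
```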
Preprocessing Requirements Documents for Automatic UML Modelling
This paper proposes a method for building UML diagrams from requirements documents. The authors note that previous methods mistakenly use structured text as input, which is not reflective of real-world situations. Their approach aims to produce UML diagrams from unstructured text through named entity recognition and relation extraction.
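As a toy illustration of that kind of preprocessing (not the authors' pipeline), one can pull candidate classes and relations out of a requirement sentence with spaCy. The example sentence and the simple subject-verb-object heuristic are my own assumptions.

```python
# A toy sketch, assuming spaCy and the en_core_web_sm model: noun chunks become
# candidate UML classes and subject-verb-object dependency triples become
# candidate relations.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The customer places an order, and the system sends a confirmation email.")

# Candidate UML classes: noun chunks in the requirement.
classes = [chunk.root.lemma_ for chunk in doc.noun_chunks]

# Candidate relations: simple subject-verb-object dependency triples.
relations = []
for token in doc:
    if token.pos_ == "VERB":
        subj = [w.lemma_ for w in token.children if w.dep_ == "nsubj"]
        obj = [w.lemma_ for w in token.children if w.dep_ in ("dobj", "obj")]
        if subj and obj:
            relations.append((subj[0], token.lemma_, obj[0]))

print(classes)    # e.g. ['customer', 'order', 'system', 'email']
print(relations)  # e.g. [('customer', 'place', 'order'), ('system', 'send', 'email')]
```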
A BERT-Based Model for Question Answering on Construction Incident Reports
This paper demonstrates how NLP models can be used to analyse incident reports from construction work. Specifically, the authors framed the problem as a question-answering task to extract information such as injury type, severity and related activity, and they achieved state-of-the-art performance.
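The question-answering framing is easy to picture with an extractive QA pipeline: each attribute of interest becomes a question asked over the report text. The sketch below uses a generic public model and a made-up report, not the authors' fine-tuned model or data.

```python
# A small sketch, assuming the Hugging Face transformers QA pipeline and a
# generic extractive model: every attribute to extract is phrased as a question
# over the incident report.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

report = (
    "While installing formwork on the second floor, a worker slipped on a wet "
    "plank and suffered a fractured wrist. He was taken to the hospital."
)

for question in ["What injury did the worker suffer?",
                 "What activity was being performed?"]:
    answer = qa(question=question, context=report)
    print(question, "->", answer["answer"])
```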
Detecting Early Signs of Depression in the Conversational Domain: The Role of Transfer Learning in Low-Resource Scenarios
Detecting signs of depression early can lead to faster recovery. With the development of conversational agents like Siri and Alexa, there is a new source of data for such analysis. However, data from this new source is still scarce. Thus, the authors adopt a transfer learning paradigm through domain adaptation: the model is trained on social media data, which is more widely available, before being tested on conversational data.
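A heavily simplified sketch of that transfer setup is shown below: train a classifier on (more abundant) social-media posts, then evaluate it on a small conversational set. The tiny example texts, labels and encoder are placeholders, not the authors' models or datasets.

```python
# A simplified sketch of the source-to-target transfer idea, assuming the
# sentence-transformers and scikit-learn packages. The example data is invented.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Source domain: social media posts (1 = signs of depression, 0 = none).
social_texts = ["I can't get out of bed anymore...", "Loved the hike today!",
                "Nothing feels worth doing lately.", "Great dinner with friends."]
social_labels = [1, 0, 1, 0]

# Target domain: conversational utterances, much scarcer.
conv_texts = ["I don't really enjoy anything these days.", "I'm doing fine, thanks."]
conv_labels = [1, 0]

# Train on the source domain, evaluate on the target domain.
clf = LogisticRegression().fit(encoder.encode(social_texts), social_labels)
print("accuracy on conversational data:",
      clf.score(encoder.encode(conv_texts), conv_labels))
```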
Automatically Computing Connotative Shifts of Lexical Items
Words have connotations. These are the feelings or ideas that come to mind when one hears a word. This study provides a way of analysing how these connotations evolve over time or across domains. The author used an SVM classifier to separate seed words related to a kind of connotation (e.g. positive/negative words). After training, the resulting hyperplane is used to define a centre point of connotation. Meanwhile, embeddings for target words are trained on two corpora, A and B. The distance between each word and the hyperplane is measured, and the difference between the distances in A and B defines the connotative shift. The resulting method shows interesting results. For example, we can observe that words such as “masks” and “positive” have gained negative connotations after 2019.
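Here is a compact sketch of the hyperplane-distance idea. It assumes the word vectors from corpora A and B already live in a shared space; the seed vectors and target vectors below are random placeholders, not the author's actual embeddings or seed lexicon.

```python
# A compact sketch of the hyperplane-distance idea, assuming aligned word
# vectors for corpora A and B. All vectors below are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
dim = 50

# Seed word embeddings with connotation labels (1 = positive, 0 = negative).
seed_vecs = rng.normal(size=(20, dim))
seed_labels = [1] * 10 + [0] * 10

# Hyperplane separating positive from negative seed words.
svm = LinearSVC().fit(seed_vecs, seed_labels)

# Embeddings of the same target word in corpus A (pre-2019) and corpus B (post-2019).
vec_a = rng.normal(size=(1, dim))
vec_b = rng.normal(size=(1, dim))

# Signed distance to the hyperplane in each corpus; their difference is the shift.
dist_a = svm.decision_function(vec_a)[0]
dist_b = svm.decision_function(vec_b)[0]
print("connotative shift:", dist_b - dist_a)
```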
Improving Relation Classification Using Relation Hierarchy
Relation classification involves predicting the correct relation between two entities. Existing studies that perform this task do not take into account the hierarchical relationship between relations. For example, “location of birth” is a more general relation than “city of birth” or “country of birth”. In this study, the authors use this hierarchical information to improve training. The fundamental idea is that predicting “location of birth” instead of “city of birth” may be less precise but is still partly correct. Thus, the model is not fully penalised; instead, the loss is computed relative to the distance in the relation tree between the correct relation and the predicted one.
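The sketch below is my reading of that idea rather than the paper's exact loss: each class is weighted by its distance to the gold relation in the relation tree, so that probability mass placed on nearby relations costs less than mass placed on unrelated ones. The toy hierarchy and logits are assumptions.

```python
# A small sketch of a tree-distance-weighted loss, assuming PyTorch and networkx.
# The relation hierarchy below is a toy example, not the dataset's taxonomy.
import torch
import networkx as nx

relations = ["location of birth", "city of birth", "country of birth", "employer"]

# Toy relation tree: "location of birth" is the parent of the two finer relations.
tree = nx.Graph()
tree.add_edges_from([("root", "location of birth"), ("root", "employer"),
                     ("location of birth", "city of birth"),
                     ("location of birth", "country of birth")])

def distances_to(gold):
    # Tree distance from the gold relation to every candidate relation.
    return torch.tensor([nx.shortest_path_length(tree, gold, r) for r in relations],
                        dtype=torch.float)

def tree_distance_loss(logits, gold):
    # Expected tree distance under the predicted distribution: probability mass
    # on relations far from the gold one is penalised more.
    probs = torch.softmax(logits, dim=-1)
    return (probs * distances_to(gold)).sum()

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])  # hypothetical model scores
print(tree_distance_loss(logits, "city of birth"))
```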
Using meaning instead of words to track topics (My own paper 😀)
Tracking topics is useful for analysing current trends in texts and discovering emerging ones. Methods proposed in the past have focused on using only lexical information in the context of flat topic models. This article shows that using semantic information from word embeddings provides performance similar to lexical solutions, and that hybrid methods may provide improved performance. Moreover, the article shows that tracking hierarchically related topics is more challenging: there is no objective taxonomy, which makes structural information unreliable for tracking, and topics and sub-topics are easy to confuse, which makes tracking even more difficult.
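To give a feel for the lexical-versus-semantic contrast (this is only an illustration of the idea, not the experimental setup from the paper), the sketch below matches a topic from one time slice against topics from the next slice both by word overlap and by cosine similarity of averaged word embeddings. The topic word lists and the spaCy vectors are placeholders.

```python
# A simplified sketch, assuming spaCy's en_core_web_md model for word vectors:
# compare lexical overlap with semantic similarity when matching topics across
# time slices. The topic word lists are invented examples.
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # the medium model ships with word vectors

def topic_vector(words):
    return np.mean([nlp(w).vector for w in words], axis=0)

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

old_topic = ["vaccine", "trial", "dose", "immunity"]
new_topics = [["jab", "booster", "immunisation", "dose"],
              ["election", "vote", "poll", "candidate"]]

for candidate in new_topics:
    print(candidate[0],
          "lexical:", round(jaccard(old_topic, candidate), 2),
          "semantic:", round(cosine(topic_vector(old_topic), topic_vector(candidate)), 2))
```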