Is Common Sense Common In NLP Models?

NLP models have shown tremendous advances in capturing syntactic, semantic, and linguistic knowledge for downstream tasks. This raises an interesting research question: can they go beyond pattern recognition and apply common sense, for instance in word-sense disambiguation?

To determine whether BERT, a large pre-trained language model developed by Google, can solve commonsense tasks, researchers from Westlake University and Fudan University, in collaboration with Microsoft Research Asia, took a closer look at how the model captures structured commonsense knowledge for downstream NLP tasks.

According to the researchers, there has been a long-standing debate over whether pre-trained language models solve tasks by leveraging only a few shallow surface clues or by applying genuine commonsense knowledge. To investigate, the researchers had BERT answer multiple-choice questions from the CommonsenseQA dataset.
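The snippet below is a minimal sketch, not the researchers' own code, of how a CommonsenseQA-style multiple-choice item can be scored with BERT. It assumes the Hugging Face `transformers` library and the standard `bert-base-uncased` checkpoint; the question and choices shown are illustrative, and the fine-tuning step is omitted for brevity.

```python
# Minimal sketch: scoring one CommonsenseQA-style multiple-choice item
# with BERT via Hugging Face transformers (assumed dependency).
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
model.eval()

# One illustrative item: a question paired with five answer choices,
# mirroring the CommonsenseQA format.
question = "Where would you find a jellyfish that has not been captured?"
choices = ["ocean", "aquarium", "sushi restaurant", "tank", "store"]

# Encode the question against every choice as (question, choice) pairs.
# BertForMultipleChoice expects tensors of shape
# (batch_size, num_choices, sequence_length).
encoding = tokenizer(
    [question] * len(choices),
    choices,
    return_tensors="pt",
    padding=True,
)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

# The highest-scoring choice is taken as the model's answer. With an
# untuned classification head the scores are near random; fine-tuning
# on CommonsenseQA's training split is what yields the accuracies
# reported in studies like this one.
prediction = choices[logits.argmax(dim=-1).item()]
print(prediction)
```

One score per (question, choice) pair, with a softmax over the choices, is the standard way BERT is applied to multiple-choice benchmarks such as CommonsenseQA.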
