Like the brain, language processing software can assign different types of information to one word.
Computers are getting better at understanding what we mean, from search engines to voice assistants. This is thanks to language-processing software that can make sense of an astonishing number of words without ever being explicitly told what they mean. Instead, these programs infer meaning through statistics. A new study shows that this computational approach can assign multiple types of information to a single word, much as the brain does.
The study was published in Nature Human Behaviour. It was co-led by Gabriel Grand, a graduate student in electrical engineering and computer science affiliated with MIT's Computer Science and Artificial Intelligence Laboratory, and Idan Blank PhD '16, an assistant professor at the University of California at Los Angeles. Ev Fedorenko of the McGovern Institute for Brain Research, a cognitive neuroscientist who studies how the brain understands and uses language, and Francisco Pereira of the National Institute of Mental Health supervised the work. Fedorenko said that the wealth of knowledge her team found in computational language models shows how much we can learn about the world through language alone.
In 2015, the team began analyzing statistics-based models for language processing, which was then a new approach. These models determine meaning by analyzing how frequently words co-occur in texts, and they use those patterns to compare words. Such a program could correctly conclude, for example, that "bread" and "apple" are more similar to each other than either is to "notebook," because "bread" and "apple" are frequently found near words such as "eat" and "snack," while "notebook" is not.
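The co-occurrence idea described above can be illustrated with a minimal sketch (this is an assumption-laden toy, not the models or data used in the study): count which words appear near each other in a tiny made-up corpus, treat those counts as vectors, and compare the vectors with cosine similarity.

```python
# Toy illustration of distributional similarity: co-occurrence counts
# within a sliding window, then cosine similarity between word vectors.
# The corpus and window size are arbitrary choices for demonstration.
from collections import Counter, defaultdict
import math

corpus = [
    "eat the bread as a snack",
    "eat an apple for a snack",
    "apple and bread to eat",
    "write in the notebook",
    "open the notebook and write notes",
]

WINDOW = 2  # words within this distance count as co-occurring

# cooc[w] maps each neighbor of w to how often it appeared nearby
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)):
            if i != j:
                cooc[w][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

sim_bread_apple = cosine(cooc["bread"], cooc["apple"])
sim_bread_notebook = cosine(cooc["bread"], cooc["notebook"])
# "bread" and "apple" share food-related neighbors ("eat", "snack"),
# so they end up closer to each other than either is to "notebook".
print(sim_bread_apple > sim_bread_notebook)
```

Real systems scale this same idea to billions of words and compress the counts into dense embedding vectors, but the underlying signal is identical: words that keep the same company get similar vectors.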