Will Computers Ever Truly Understand What we are saying?

Computers and Communication

On January 11, 2016, Science Daily published an article titled “Will Computers Ever Truly Understand What We Are Saying?” The article addressed the observed difference between computers and humans in the ability to discern the contextual meaning of a word, and it was published as a follow-up to a 2014 report by Arjen Stolk on the same subject. Computers, the article notes, cannot distinguish the meaning of the same word when it is applied in different contexts. Since communication is more than a mere exchange of gestures and linguistic signals, computers should be able to respond to the social context in which the communication takes place. Stolk gives the example of making a “V” gesture with two fingers: the gesture can be interpreted differently depending on the context.

To illustrate the extent to which computers are adaptable to contextual understanding, Stolk developed a video game involving signals only. Neuroimaging of the participants demonstrated that the right temporal lobe is involved in coordinating non-verbal communication over the computer. Specifically, the superior temporal gyrus showed high activity when the players exchanged a shared understanding of the game moves. The article notes that the typical “reasoning” of computer-programmed robots is based on statistical pattern recognition rather than on the contextual usage of a word. It is the historical and present context of the subject, not the statistical regularities that computers employ, that real people use for effective communication. According to the article, this failure to tune stored information to context is also the reason behind the awkwardness that people with autism experience in social communication.


Statistics play an important role in the programming of robots and other computerized devices: statistical patterns help computers make decisions. Each word is assigned a set of categories and features that are specific to it, and the features of each word have a probability density conditioned on the category to which the word belongs. The decision-making sequence of the computer is confined to decision rules such as the Bayesian model, the Neyman-Pearson criterion, and the maximum likelihood rule. Homonyms share the same surface pattern as one another, yet computers associate that pattern with only one homonym and disregard the others. As a result, a word with more than one meaning will be interpreted only as the word entered in the computer program.
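The decision rules described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the method discussed in the article): a naive Bayes-style classifier that picks a sense for the homonym “bank” by maximizing the prior probability of each sense times the likelihood of the observed features. All senses, features, and probabilities below are invented for illustration.

```python
import math

# Hypothetical training statistics for the homonym "bank":
# a prior probability for each sense, and per-feature likelihoods
# conditioned on that sense (the "probability density" per category).
PRIORS = {"riverbank": 0.3, "financial": 0.7}
LIKELIHOODS = {
    "riverbank": {"water": 0.6, "money": 0.05, "fishing": 0.5},
    "financial": {"water": 0.05, "money": 0.8, "fishing": 0.02},
}

def classify(features):
    """Pick the sense with the highest posterior score (maximum a posteriori)."""
    best_sense, best_score = None, -math.inf
    for sense, prior in PRIORS.items():
        score = math.log(prior)
        for f in features:
            # Unseen features get a small smoothing probability.
            score += math.log(LIKELIHOODS[sense].get(f, 0.01))
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# With no contextual features at all, the decision collapses to the prior:
# the program simply picks the statistically dominant sense.
print(classify([]))         # -> financial
print(classify(["water"]))  # -> riverbank
```

Note that the classifier only “sees” whatever features were programmed into it; social or historical context that was never encoded as a feature cannot influence the decision, which is exactly the limitation the article describes.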

In my opinion, this article is very informative about the nature of computer language processing and how different this attribute is from that of humans. It gives insight into the need for context in the comprehension of communicated facts. Human beings carry out successful communication because of their ability to discern the non-verbal cues that are crucial to the conversation process. The article presents a scientific opinion given by a neurologist, which lends the information credibility. The conclusion drawn in the article is reliable because it comes after a lengthy period of scientific research: according to the article, Stolk has carried out several studies in neuropsychiatry to determine the areas of the brain involved in processing speech. The article goes beyond computerized communication to discuss autism and neurodegenerative disorders. Like computers, people with autism or with damage to the prefrontal cortex are unable to put information into context, which makes them exhibit irregular processing of material and, consequently, incoherent responses.