A compulsive early adopter of high-tech gadgetry once invited me over to check out his latest acquisition – an Amazon Echo. “Go ahead, ask her something!” Reluctant to give Jeff Bezos my native voice imprint, I asked in my fractured Wienerisch (Viennese dialect), “Alexa, translate ‘Mistkübel’ into English!” Her Austro-hostile response, uttered snootily in high-Piefkenesisch (the Austrian nickname for German Germans’ speech), was an embarrassed: “I’m sorry, but I cannot locate any relevant data that answers your query.” Hmm … okay, then: “Alexa, translate ‘Mülleimer’ into English.” Suddenly, the Berlin matron became Miss Iowa and issued her answer in a cheery Midwestern twang: “treeyash keeyan.”

Today’s voice recognition, translation, speech generation, dictation software, auto-suggested texts and “chatbot” technologies owe their imperfect existence to decades of research in the field of natural language processing (NLP), whose core goal is creating a machine that can parse and process language as well as (or even better than) a human. Treading the tightrope between linguistics and computer science (and, to an increasing degree, neuroscience), NLP has evolved exponentially alongside advances in computing power and the accumulation of digital data. What was thought inconceivable at the end of the last century is now a standard feature on every smartphone and PC. Still far from perfect, its applications have already had a significant impact on globalization, politics, civil discourse, and business. What does NLP bode for the future, and will its benefits outweigh its pitfalls? One thing is clear – NLP is big business: the global market for NLP-related technologies is estimated to be worth €31.4 billion by 2025 and is growing by 21.5 percent annually.

In 1950, the British mathematician, cryptologist and computing pioneer Alan Turing predicted that a human observer would eventually be unable to differentiate replies made by a human from those of an artificially intelligent machine. Passing this “Turing Test” thereafter became a goal for computer scientists trying to wrest intelligence out of silicon chips.

Only a few years later, pioneering linguist Noam Chomsky revolutionized his field by arguing that the basic syntax of grammar is a universal, innate human characteristic. This so-called transformational generative grammar theory tipped the scales of the nature vs. nurture debate, raising doubts that language is acquired solely through environmental learning.

*American linguist, philosopher, cognitive scientist, historian, and social critic Noam Chomsky*

To demonstrate his theory, he compared two semantically nonsensical sentences sharing the same five words: “Colorless green ideas sleep furiously” and “Furiously sleep ideas green colorless.” Neither sentence had likely ever been expressed before, and thus could not have been learned from experience. Yet only the first has proper, comprehensible syntax. How would someone get that unless there is an innate instinct for grammar – one that also flags a scramble like “I will send you also for this something”?

While Chomsky did not intend his theories to be used to engineer products, computer scientists eagerly took on the task in a race to “pass” the Turing Test. If only the rules of syntax could be organized into logically structured “trees,” computers armed with a thorough lexicon could be instructed how to “learn” a language.
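The “trees plus a lexicon” idea is easy to make concrete. Here is a minimal, self-contained Python sketch – the three rewrite rules and five-word lexicon are my own illustrative assumptions, not anything from the article – showing how a recursive parser armed with such rules accepts Chomsky’s grammatical sentence while rejecting its scrambled twin:

```python
# Toy phrase-structure grammar: rewrite rules build the "tree",
# the lexicon assigns each word a part of speech.
LEXICON = {
    "colorless": "Adj", "green": "Adj",
    "ideas": "N", "sleep": "V", "furiously": "Adv",
}

RULES = {                            # X -> one of several right-hand sides
    "S":  [["NP", "VP"]],            # sentence = noun phrase + verb phrase
    "NP": [["Adj", "NP"], ["N"]],    # adjectives stack before a noun
    "VP": [["V", "Adv"]],            # verb followed by an adverb
}

def parse(symbol, words, i):
    """Yield (tree, next_position) for every way `symbol` spans words[i:]."""
    if symbol in RULES:                                  # non-terminal
        for rhs in RULES[symbol]:
            for children, j in parse_seq(rhs, words, i):
                yield (symbol, children), j
    elif i < len(words) and LEXICON.get(words[i]) == symbol:
        yield (symbol, words[i]), i + 1                  # terminal: one word

def parse_seq(symbols, words, i):
    """Parse a sequence of symbols left to right, with backtracking."""
    if not symbols:
        yield [], i
        return
    for tree, j in parse(symbols[0], words, i):
        for rest, k in parse_seq(symbols[1:], words, j):
            yield [tree] + rest, k

def grammatical(sentence):
    words = sentence.lower().split()
    return any(j == len(words) for _, j in parse("S", words, 0))

print(grammatical("Colorless green ideas sleep furiously"))  # True
print(grammatical("Furiously sleep ideas green colorless"))  # False
```

Scaling this from five words to a real language – thousands of rules, rampant ambiguity, competing parses to rank – is where decades of symbolic-NLP engineering effort went.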
Even as Chomsky was writing his thesis, the “Georgetown-IBM experiment” had successfully programmed a computer to translate about 60 Russian phrases into English, leading its authors to predict, falsely, that machine translation would be solved within a few years.

By the mid-1960s, the “chatterbox” era made headlines with ELIZA, a computer program that simulated a psychotherapist by issuing contextual, echoic responses to a patient’s input. This launched the era of “supervised learning” computational models, or symbolic NLP.
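ELIZA’s “echoic” trick is small enough to sketch in a few lines. The toy version below – its reflection table and handful of patterns are my own crude stand-ins, not Weizenbaum’s actual DOCTOR script – shows the core mechanism: match a ranked pattern, swap the pronouns, and hand the patient’s own words back as a question:

```python
import re

# Pronoun swaps so the echo makes sense ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

# Ranked (pattern, response-template) pairs; first match wins,
# and the final catch-all guarantees some reply.
RULES = [
    (r"i am (.*)",   "Why do you say you are {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r"my (.*)",     "Your {0}? Please go on."),
    (r"(.*)",        "Please tell me more."),
]

def reflect(phrase):
    """Rewrite first-person words as second-person and vice versa."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(utterance):
    text = utterance.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am unhappy with my program."))
# -> Why do you say you are unhappy with your program?
```

There is no understanding anywhere in that loop, only pattern matching – yet users famously confided in ELIZA, which is exactly why it made headlines.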