In this topic I propose to discuss the capabilities and the future of deep learning. There is an article that takes a critical look at the hype around AI and examines the real capabilities of deep learning. So: is the game (Deep Learning) worth the candle or not? Among other things, the article discusses failures in the area of Natural Language Understanding (NLU). I started a thread on LinkedIn to discuss the capabilities of neural networks in NLU. Here is that discussion:
Problems that need to be solved to achieve natural language understanding
As you know, GNMT translates text in a natural language into its own internal representation. As far as I know, the problem of translating natural language into a language of knowledge bases (for example, ontologies) has already been solved (or has it not yet?). There remains the problem of implementing ontology-processing operations: for example, matching the resulting ontology against a global knowledge base to determine its truth or falsity, its information value, its relevance, and so on.
If I'm not mistaken, Dr. Sowa, in one of the discussions on LinkedIn, recommended that students of artificial intelligence turn to logic. Do I understand correctly that the question then becomes: how can logic be used to implement the processing and mapping of ontologies?
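To make the question concrete, here is a minimal, hypothetical sketch of what "translating natural language into a knowledge-base language" means at toy scale: a pattern-based extractor that maps simple copular sentences to subject-predicate-object triples. Real systems use full parsers; the function name `extract_triple` and the two patterns are illustrative assumptions, not an existing tool.

```python
import re

# Toy illustration (not a real NLU system): map simple English
# sentences onto subject-predicate-object triples, the basic unit
# of ontology languages such as RDF.
def extract_triple(sentence):
    # "X is the Y of Z."  ->  (X, Y_of, Z)
    m = re.fullmatch(r"(\w+) is the (\w+) of (\w+)\.", sentence)
    if m:
        return (m.group(1), m.group(2) + "_of", m.group(3))
    # "X is a Y."  ->  (X, instance_of, Y)
    m = re.fullmatch(r"(\w+) is an? (\w+)\.", sentence)
    if m:
        return (m.group(1), "instance_of", m.group(2))
    return None  # sentence outside the toy grammar

print(extract_triple("Moscow is the capital of Russia."))
# ('Moscow', 'capital_of', 'Russia')
print(extract_triple("Moscow is a city."))
# ('Moscow', 'instance_of', 'city')
```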
Beth Carey
Hi Rasool Barlybayev - I think Koos Vanderwilt could make some good comments on your questions, given the current methodologies, including GNMT, which I believe is Google's neural-net methodology for machine translation. I would say GNMT and Deep Learning bypass meaning because of their inherent approach, i.e. looking for statistical co-location of word patterns, and I'm not sure meaning is retrievable from that. The open scientific problems of NLU are not solved by this approach and may never be. GNMT has made good progress with some machine-translation pairs, and will probably continue to, but is it asymptotic as a scientific model for machine translation and NLU? What is the goal: human-like accuracy, or incremental percentage points better than before, potentially plateauing?
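For readers unfamiliar with what "statistical co-location of word patterns" looks like in practice, here is a minimal sketch (my illustration, not Beth's or Google's code): a bigram model that records which word follows which, purely from counts, with no representation of meaning.

```python
from collections import Counter, defaultdict

# Toy bigram statistics: a crude stand-in for the pattern-based
# approach Beth describes. The model only records co-occurrences.
corpus = "the capital of russia is moscow . the capital of france is paris ."
words = corpus.split()

bigrams = defaultdict(Counter)
for w1, w2 in zip(words, words[1:]):
    bigrams[w1][w2] += 1

# Most likely continuations of "of", by counts alone:
print(bigrams["of"].most_common())  # [('russia', 1), ('france', 1)]
# The counts capture surface regularities ("capital of <X>") but encode
# nothing about what a capital *is*: meaning is not represented.
```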
Koos Vanderwilt
Quite an honor for you to say this, Beth. I am not sure I can, but I will try to produce some generalities. I am hampered by not knowing what GNMT stands for. I am not sure Semantic Web technology can be considered to have solved the various questions, including the translation of text to knowledge bases. There are two issues here: the knowledge bases, or ontologies, and the ways to reason over them. Ontologies consist of classes/concepts and properties, VERY roughly SUBJECT, PREDICATE and OBJECT. You can get these out of a text with a parser. Googling "triplet table(s)" and "ontologie(s)" will bring up sites that tell you more. Ontologies are simple, but the details are NOT. "Implementing ontology processing operations" could mean "editing the ontology" or "creating an ontology" or performing consistency checking. Logics are used for reasoning over the ontologies. This is just like in real life: you have knowledge, and taking it as a point of departure, you reason with it.
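As a concrete companion to this description, here is a small sketch using the Python rdflib library (a common open-source choice; picking it is my assumption, not Koos's recommendation): classes, a property, and a "triplet table" of facts you can query.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

# Classes/concepts and a property, roughly SUBJECT, PREDICATE, OBJECT:
g.add((EX.City, RDF.type, RDFS.Class))
g.add((EX.Country, RDF.type, RDFS.Class))
g.add((EX.capitalOf, RDF.type, RDF.Property))

# Facts as triples, i.e. rows of a "triplet table":
g.add((EX.Moscow, RDF.type, EX.City))
g.add((EX.Russia, RDF.type, EX.Country))
g.add((EX.Moscow, EX.capitalOf, EX.Russia))

# Query the triplet table: which city is the capital of which country?
for s, p, o in g.triples((None, EX.capitalOf, None)):
    print(s, p, o)
```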
Koos Vanderwilt
For instance, Moscow is the capital of Russia, as I am sure every Russian knows. Every Russian will therefore be able to conclude that Leningrad is not the capital of Russia. Two cities, a predicate "is capital of", and a piece of reasoning leading to a negative statement about Leningrad. You can imagine how nice it would be if the computer could reason about facts in genomics, medicine, and so on. There are quite a few logics, mostly called Description Logics. These are subsets of First-Order Predicate Logic, which has a high time complexity, meaning that using it takes forever to run. Many Description Logics are constructed to be more tractable, but you can do less with them. The Semantic Web is a huge, world-wide project that, if it delivers what is being promised, will improve medicine, the practice of law, biology research, and other fields that involve text and therefore Text Analytics. I hope this is somewhat informative.
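The Moscow/Leningrad inference can be mimicked in a few lines of plain Python (a hedged toy, not a Description Logic engine): treating "is capital of" as a functional property, i.e. one capital per country, is what licenses the negative conclusion.

```python
# Toy reasoner: "capital_of" is functional (a country has exactly
# one capital), so one positive fact licenses negative conclusions.
facts = {("capital_of", "Moscow", "Russia")}

def capital_status(city, country, known_cities):
    if ("capital_of", city, country) in facts:
        return True
    # Another city is already the capital -> this one cannot be.
    if any(("capital_of", other, country) in facts
           for other in known_cities if other != city):
        return False
    return None  # unknown: no fact either way

for city in ["Moscow", "Leningrad"]:
    print(city, capital_status(city, "Russia", ["Moscow", "Leningrad"]))
# Moscow True / Leningrad False
```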
Menno Mafait
Rasool, this field has a fundamental flaw:
It is well known that grammar provides structure to sentences. However, scientists have not yet discovered the laws of intelligence that are embedded in grammar. This Universal Logic embedded in Grammar provides a logical structure (= meaning) in natural language, by which sentences make sense.
As a consequence of ignoring this natural structure of sentences, NLP degrades rich and meaningful sentences to "bags of keywords", by which the natural structure of the sentences is discarded beyond repair. This loss of information is like a two-dimensional movie that has lost its three-dimensional spatial information. Hence the deep problems this field has in grasping the deeper meaning expressed by humans.
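The "bags of keywords" point is easy to demonstrate (my illustration, not Menno's software): two sentences with opposite meanings collapse to the same bag once word order is thrown away.

```python
from collections import Counter

def bag_of_words(sentence):
    # Keep only word counts; grammatical structure is discarded.
    return Counter(sentence.lower().split())

a = "man bites dog"
b = "dog bites man"
print(bag_of_words(a) == bag_of_words(b))  # True
# Identical bags, opposite meanings: who did what to whom is lost.
```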
I have a solution for the long term:
I have knowledge, experience, technologies and results that no one else has. I am probably the only one in the world who has defined intelligence as a set of natural laws.
Menno Mafait
I am using fundamental science (logic and laws of nature) instead of cognitive science (simulation of behavior), because:
• Autonomous reasoning requires both intelligence and language;
• Intelligence and language are natural phenomena;
• Natural phenomena obey laws of nature;
• Laws of nature (and logic) are investigated using fundamental science.
Using fundamental science, I gained knowledge and experience that no one else has:
• I have defined intelligence in a natural way, as a set of natural laws;
• I have discovered a logical relationship between natural intelligence and natural language, which I am implementing in software;
• And I defy anyone to beat the simplest results of my Controlled Natural Language (CNL) reasoner in a generic way: from natural language, through algorithms, back to natural language. See:
http://mafait.org/challenge/ It is open source software. Feel free to join.
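For readers curious what "from natural language, through algorithms, back to natural language" can look like, here is a deliberately tiny sketch of a controlled-natural-language reasoner (my toy, not Mafait's open-source system): it reads two restricted English sentences, forward-chains one step, and writes the conclusion back as English.

```python
import re

# Toy CNL reasoner (an illustration, not Mafait's system): read
# restricted English, apply a syllogism, generate English back.
rules = []   # ("man", "mortal") means: every man is mortal
facts = []   # ("Socrates", "man") means: Socrates is a man

def read(sentence):
    m = re.fullmatch(r"Every (\w+) is (?:a |an )?(\w+)\.", sentence)
    if m:
        rules.append((m.group(1), m.group(2)))
        return
    m = re.fullmatch(r"(\w+) is (?:a |an )?(\w+)\.", sentence)
    if m:
        facts.append((m.group(1), m.group(2)))

def conclusions():
    # Instance of a subclass is an instance of the superclass.
    for inst, cls in facts:
        for sub, sup in rules:
            if cls == sub:
                yield f"{inst} is {sup}."

read("Every man is mortal.")
read("Socrates is a man.")
print(list(conclusions()))  # ['Socrates is mortal.']
```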