Yann LeCun, head of AI research at Meta and a pioneering figure in the rise of neural networks, said in an interview this week that these systems are not powerful enough to achieve true intelligence.
Google’s technology is what scientists call a neural network: a mathematical system that learns skills by analyzing large amounts of data. By identifying patterns in thousands of cat photos, for example, it can learn to recognize a cat.
Over the past few years, Google and other large companies have built neural networks that have learned from huge amounts of prose, including unpublished books and thousands of Wikipedia articles. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, compose tweets, and even write blog posts.
But they are deeply flawed. Sometimes they generate flawless prose. Sometimes they generate nonsense. These systems are very good at reproducing patterns they have seen in the past, but they cannot reason like a human.