Machine learning forms of artificial intelligence are going to cause a revolution in computer systems, a new kind of hardware-software union that can put AI in your toaster, says AI pioneer Geoffrey Hinton.
Hinton, delivering the closing keynote Thursday at this year's Neural Information Processing Systems conference, NeurIPS, in New Orleans, said the machine learning research community "has been slow to realize the implications of deep learning for how computers are built."
He continued, "What I think is that we're going to see a completely different type of computer, not for a few years yet, but there's every reason to investigate this completely different type of computer."
All digital computers to date have been built to be "immortal," with hardware engineered to be reliable so that the same software can run anywhere. "We can run the same programs on different physical hardware…the knowledge is immortal."
Also: AI could have a 20% chance of being sentient in 10 years, according to philosopher David Chalmers
That requirement means that digital computers have missed out on, Hinton said, "all sorts of variable, stochastic, flaky, analog, unreliable properties of hardware that could be very useful to us." Those properties would be too unreliable to let "two different pieces of hardware behave exactly the same at the instruction level."
Future computer systems, Hinton said, will take a different approach: they will be "neuromorphic," and they will be "mortal," meaning that every computer will be a close bond between the software that represents neural networks and hardware that is messy, in the sense of having analog rather than digital elements, which can incorporate elements of uncertainty and can change over time.
"Now the alternative to that, which computer scientists really don't like because it attacks one of their foundational principles, is to say we're going to give up the separation of hardware and software," Hinton explained.
Also: LeCun, Hinton, Bengio: AI conspirators receive prestigious Turing award
"We're going to do what I call mortal computation, where the knowledge the system has learned and the hardware are inseparable."
These mortal computers could be "grown," he said, doing away with expensive chip fabrication plants.
"If we do that, we can use very low-power analog computation, you can have trillion-way parallelism using things like memristors for the weights," he said, referring to a decades-old type of experimental chip based on non-linear circuit elements.
"And you could also grow the hardware without knowing the precise quality of the exact behavior of the different pieces of hardware."
Also: Deep learning godfathers Bengio, Hinton and LeCun say the field can fix its flaws
The new mortal computers will not replace traditional digital computers, Hinton told the NeurIPS crowd. "It's not going to be the computer that looks after your bank account and knows exactly how much money you have," Hinton said.
"It will be used for putting something else: it will be used for putting something like GPT-3 in your toaster for one dollar, so that, running on a few watts, you can have a conversation with your toaster."
Hinton was invited to give the talk at the conference in recognition of his decade-old paper, "ImageNet Classification with Deep Convolutional Neural Networks," written with his graduate students Alex Krizhevsky and Ilya Sutskever. The paper received the conference's "Test of Time" award for its "enormous impact" on the field. The work, published in 2012, was the first time a convolutional neural network competed at a human level on the ImageNet image recognition competition, and it was the event that set in motion the current era of AI.
Hinton, who is a recipient of the ACM Turing Award, computer science's equivalent of the Nobel Prize, formed the so-called Deep Learning Conspiracy, a group that resurrected the moribund field of machine learning, together with his fellow Turing recipients, Meta's Yann LeCun and Yoshua Bengio of the MILA institute for AI in Montreal.
Also: AI on steroids: Much larger neural networks to come with new hardware, say Bengio, Hinton and LeCun
In that sense, Hinton is something like AI royalty for his standing in the field.
In his invited talk, Hinton spent most of his time talking about a new approach to neural networks, called a forward-forward network, which does away with the backpropagation technique used in almost all neural networks. He proposed that by removing back-prop, forward-forward nets might more plausibly approximate what happens in the brain in real life.
A draft paper on the forward-forward work is posted on Hinton's homepage (PDF) at the University of Toronto, where he is a professor emeritus.
The forward-forward approach could be well suited to mortal computing hardware, Hinton said.
"Now, if something like that is going to happen, we have to have a learning procedure that will run in a particular piece of hardware, and learn to make use of the specific properties of that particular hardware without knowing what all those properties are," Hinton explained. "But I think the forward-forward algorithm is a promising candidate for what that learning procedure might be."
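The core idea behind forward-forward, layer-local learning without backpropagation, can be sketched roughly as follows. This is an illustrative toy, not Hinton's implementation: the layer sizes, learning rate, and data here are invented, and the layer's "goodness" is taken as the sum of squared activations, which positive (real) data should drive up and negative (fake) data should drive down.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_forward(W, x):
    # Normalize the input so goodness from a previous layer cannot leak
    # through, then apply a ReLU layer.
    x = x / (np.linalg.norm(x) + 1e-8)
    return np.maximum(0.0, W @ x)

def goodness(h):
    # A layer's goodness: sum of squared activations.
    return np.sum(h ** 2)

def train_layer(W, positives, negatives, lr=0.02, steps=150):
    # Local update only: raise goodness on positive examples, lower it on
    # negatives, using the gradient of goodness w.r.t. this layer's weights.
    # No error signal is passed between layers.
    for _ in range(steps):
        for x, sign in [(positives[rng.integers(len(positives))], +1.0),
                        (negatives[rng.integers(len(negatives))], -1.0)]:
            xn = x / (np.linalg.norm(xn_norm := np.linalg.norm(x)) + 1e-8) if False else x / (np.linalg.norm(x) + 1e-8)
            h = np.maximum(0.0, W @ xn)
            # d(goodness)/dW = 2 * h * xn^T on the active (ReLU > 0) units
            W += sign * lr * 2.0 * np.outer(h, xn)
    return W

# Toy data: "real" inputs cluster around +1, "fake" inputs around -1.
positives = [rng.normal(1.0, 0.1, 8) for _ in range(20)]
negatives = [rng.normal(-1.0, 0.1, 8) for _ in range(20)]

W = rng.normal(0.0, 0.1, (16, 8))
W = train_layer(W, positives, negatives)

pos_good = np.mean([goodness(layer_forward(W, x)) for x in positives])
neg_good = np.mean([goodness(layer_forward(W, x)) for x in negatives])
print(pos_good > neg_good)  # positive data should end up with higher goodness
```

Because each layer judges its own inputs locally, a learning rule like this does not need to know how other layers, or the underlying hardware, behave, which is what makes it a plausible fit for the imprecise analog hardware Hinton describes.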
Also: Turing’s New Test: Are You Human?
One obstacle to building the new analog mortal computers, he said, is that people are attached to the reliability of running one piece of software on millions of devices.
"You would replace that with: every one of those cell phones would have to start out as a baby cell phone, and it would have to learn how to be a cell phone," he suggested. "And that's very painful."
Even the most expert engineers in the technology involved will be slow to abandon the paradigm of perfect and identical immortal computers for fear of uncertainty.
"Among people who are interested in analog computation, there are still very few who are willing to give up on immortality," he said. That is because of the attachment to consistency and predictability, he said. "But if you want your analog hardware to do the same thing every time… you have a real problem with all that stray electrical stuff."