Back in 2007, a group of eminent thinkers in the science of neural networks organized an unofficial side meeting at an AI conference. The conference organizers had denied their request for an official workshop, primarily because the topic was still considered too far-fetched in scientific circles at the time. More than a decade later, these networks are spearheading developments in artificial intelligence and are also providing insight into the workings of the human brain.
Key Takeaways:
- For a variety of reasons, backpropagation isn’t compatible with the brain’s anatomy and physiology, particularly in the cortex.
- For decades, neuroscientists’ theories about how brains learn were guided primarily by a rule introduced in 1949 by the Canadian psychologist Donald Hebb.
- In essence, the algorithm’s backward phase calculates how much each neuron’s synaptic weights contribute to the error and then updates those weights to improve the network’s performance (see the code sketch below).
“The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks.”
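To make the forward and backward phases concrete, here is a minimal sketch of backpropagation in Python, assuming a tiny two-layer network with sigmoid activations learning the XOR function. All names, shapes, the learning rate, and the step count are illustrative assumptions, not details from the article.

```python
# Minimal backpropagation sketch (illustrative): a two-layer network
# with sigmoid activations trained on XOR. Hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized synaptic weights for the two layers.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    # Forward phase: propagate activity through the network.
    h = sigmoid(X @ W1)     # hidden-layer activations
    out = sigmoid(h @ W2)   # network output

    # Backward phase: compute how much each weight contributed to the
    # error, layer by layer, via the chain rule.
    err = out - y                          # output error
    d_out = err * out * (1 - out)          # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # error propagated to hidden layer

    # Update the weights to reduce the error.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(out.round(3))  # typically converges to values near [0, 1, 1, 0]
```

The key step is the backward pass: the output error is multiplied back through the same weights used in the forward pass, assigning each synaptic weight a share of the blame, which is exactly the step that is hard to reconcile with the brain's known anatomy.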