HOW A NEURON COMPUTES

Artificial neural networks owe their origins to studies of biological neurons in living brains. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts proposed a simple model of how a neuron works. In the McCulloch-Pitts model, a neuron is connected to its neighboring neurons and can receive signals from them. It can then combine those signals and send a signal of its own to other neurons. But there is a twist: It can weigh signals coming from different neighbors differently.

Imagine that you are trying to decide whether to buy a new bestselling phone. You talk to your friends and ask them for their recommendations. A simple strategy is to collect all of their recommendations and go along with whatever the majority says. For example, you ask three friends, Alice, Bob and Charlie, and they say yea, yea and nay, respectively. You decide to buy the phone because you have two yeas and one nay. However, you might trust some friends more than others because they have in-depth knowledge of technical gadgets, so you might give their recommendations more weight. For example, if Charlie is very knowledgeable, you might count his nay three times; now your decision is not to buy the phone: two yeas and three nays. If you're unfortunate enough to have a friend whom you completely distrust in gadget matters, you might even assign them a negative weight, so that their yea counts as a nay and their nay counts as a yea.
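To make the arithmetic concrete, here is a minimal sketch of this weighted vote in Python. The friend names, the `weighted_vote` function and the particular weights are illustrative assumptions, not anything from the original paper; what is faithful to the McCulloch-Pitts model is the shape of the computation: multiply each incoming signal by a weight, sum the results and compare the total against a threshold.

```python
def weighted_vote(recommendations, weights):
    """Aggregate yea (+1) / nay (-1) signals, each scaled by a trust weight.

    This mirrors a McCulloch-Pitts neuron: sum the weighted inputs and
    "fire" (here, decide to buy) only if the total crosses a threshold
    (here, zero).
    """
    total = sum(w * r for r, w in zip(recommendations, weights))
    return "buy" if total > 0 else "don't buy"

# Alice, Bob and Charlie say yea (+1), yea (+1) and nay (-1).
votes = [+1, +1, -1]

# Equal trust: two yeas beat one nay.
print(weighted_vote(votes, [1, 1, 1]))       # buy (total = +1)

# Charlie is the gadget expert, so his nay counts three times.
print(weighted_vote(votes, [1, 1, 3]))       # don't buy (total = -1)

# A friend you actively distrust gets a negative weight:
# their nay (-1) times weight (-1) counts as a yea (+1).
print(weighted_vote(votes, [1, 1, -1]))      # buy (total = +3)
```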
Once you've made your own decision about whether the new phone is a good choice, other friends can ask you for your recommendation. Similarly, in artificial and biological neural networks, neurons can aggregate signals from their neighbors and send a signal to other neurons. This capability leads to a key distinction: Is there a cycle in the network? For example, if I ask Alice, Bob and Charlie today, and tomorrow Alice asks me for my recommendation, then there is a cycle: from Alice to me, and from me back to Alice.
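As a rough sketch of what such a cycle means computationally, consider a hypothetical two-node loop in Python. The names, trust weights and update rule are illustrative assumptions; the point is only that in a network with a cycle a signal keeps circulating, whereas in a cycle-free network it passes through each neuron just once.

```python
# Hypothetical two-node network with a cycle: Alice's latest signal
# feeds my opinion, and my opinion feeds back to Alice the next day.
my_opinion, alices_opinion = 0.0, 1.0   # 1.0 = strong "buy" signal
trust_in_alice, alices_trust_in_me = 0.8, 0.5

for day in range(4):
    # Each of us re-weighs the other's most recent signal.
    my_opinion, alices_opinion = (
        trust_in_alice * alices_opinion,
        alices_trust_in_me * my_opinion,
    )
    print(f"day {day + 1}: me={my_opinion:.2f}, Alice={alices_opinion:.2f}")

# The same signal keeps cycling between the two of us: unlike a
# cycle-free network, where information flows through each neuron once.
```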