Giving AI a Second Thought

Researchers created a neural network that can weigh new data and change its degree of certainty in an answer over time, just like humans.

Nick Bild
2 months ago · Machine Learning & AI

In stark contrast to artificial neural networks, we humans can be very fickle in our decision making. Like the shifting sands at the seashore, our opinions on a matter can vary from day to day. This may seem like an indication that we are unreliable and prone to making bad decisions. No doubt we are in some cases, as our own experience teaches us. But what it really highlights is that we can rapidly change our views as we consider new information.

Not only does that new information have the potential to change our views, but also our degree of certainty in what we believe. For example, we may hold a particular view, yet say that we are not completely certain about it. Or, we might say that we are almost completely convinced of something else. Machine learning models do not behave in this way. Given the same input, a trained model produces the same output time and time again. And many model architectures, like large language models, do not report any degree of certainty in their answers. So a model may be very uncertain of its answer, yet reply with confidence all the same.

But are you sure you're sure?

A trio of researchers at the Georgia Institute of Technology recognized that these discrepancies between human decision making and artificial neural networks will prevent artificial systems from reaching the levels of intelligence we hope they will achieve. In response, they developed a novel neural network architecture that operates in a more human-like way than traditional algorithms. Their algorithm continues to weigh evidence as it arrives over time, and it will produce different answers as its level of certainty about the answer to a question changes.

To achieve this goal, the team utilized a Bayesian neural network in their system architecture. Rather than storing a single number for each model weight, these networks represent each weight as a probability distribution. This allows for a degree of probabilistic reasoning that traditional algorithms cannot achieve. An evidence accumulation process was also integrated into the system to help it weigh new data as it arrives over time. This enables it to change its answers to questions as the probabilities fluctuate.
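The paper's exact architecture is not detailed here, but a minimal PyTorch sketch can show the two ingredients working together: a layer whose weights are sampled from learned distributions, and a loop that keeps averaging sampled predictions until confidence crosses a threshold. Everything in this sketch (class names, layer sizes, and the 0.9 confidence threshold) is an illustrative assumption rather than the researchers' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer whose weights are distributions, not point values."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # Learnable mean and (pre-softplus) spread for every weight and bias.
        self.w_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x):
        # Sample fresh weights on every forward pass (reparameterization trick),
        # so repeated passes over the same input give slightly different outputs.
        w = self.w_mu + F.softplus(self.w_rho) * torch.randn_like(self.w_mu)
        b = self.b_mu + F.softplus(self.b_rho) * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)

class BayesianClassifier(nn.Module):
    """Toy two-layer Bayesian classifier for 28x28 digit images."""
    def __init__(self):
        super().__init__()
        self.fc1 = BayesianLinear(28 * 28, 128)
        self.fc2 = BayesianLinear(128, 10)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x.flatten(1))))

def accumulate_evidence(model, image, threshold=0.9, max_steps=100):
    """Average sampled predictions over time; answer once confident enough."""
    evidence = torch.zeros(10)
    for step in range(1, max_steps + 1):
        with torch.no_grad():
            evidence += F.softmax(model(image), dim=-1).squeeze(0)
        probs = evidence / step              # running average of sampled beliefs
        confidence, digit = probs.max(dim=-1)
        if confidence >= threshold:          # certain enough: commit to an answer
            return digit.item(), confidence.item(), step
    return digit.item(), confidence.item(), max_steps
```

Because the weights are resampled on every pass, repeated passes over the same input disagree slightly, and averaging them lets the running confidence climb quickly for clear inputs but slowly, or not at all, for ambiguous ones.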

To evaluate the performance of this technique, the model was trained on the MNIST handwritten digit dataset. Three existing models (CNet, BLNet, and MSDNet) were similarly trained for comparison. To test their decision-making performance, each of the models was then asked to determine which digit it was being shown, but only after significant noise had been added to the images. Sixty human participants were also asked to classify each digit and to report their degree of certainty in each answer.
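For a rough sense of that setup, here is how one might corrupt MNIST test digits with additive Gaussian noise and feed them to the classifier sketched above. The NOISE_STD value is an arbitrary placeholder; the study's actual noise procedure is not described in this article.

```python
import torch
from torchvision import datasets, transforms

NOISE_STD = 0.8  # arbitrary noise level, not the study's actual corruption

test_set = datasets.MNIST(root="data", train=False, download=True,
                          transform=transforms.ToTensor())
image, label = test_set[0]  # image: (1, 28, 28) tensor with values in [0, 1]
noisy = (image + NOISE_STD * torch.randn_like(image)).clamp(0.0, 1.0)

model = BayesianClassifier()  # reusing the earlier sketch (untrained here)
digit, confidence, steps = accumulate_evidence(model, noisy.unsqueeze(0))
print(f"guess={digit}, confidence={confidence:.2f}, "
      f"after {steps} samples (true digit: {label})")
```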

Knowing when you're wrong

In the course of these experiments, the team found that their algorithm outperformed all of the existing models in terms of accuracy. Their system was also far more human-like in its operation than the others: accuracy rate, response time, and confidence patterns were all very similar between humans and the Bayesian neural network. The team further noted that the speed of decision making and the level of confidence in an answer were correlated in the new model in much the same way as in human decisions.

The MNIST dataset is very constrained, so moving forward, the researchers intend to test their approach on more varied datasets. Those results should give us more information about how closely this model really does mimic the human decision-making process.
