Nobel Prizes 2024: AI Breakthroughs Win Big


Since the first Nobel Prizes were awarded in 1901, the final months of the year have become an exciting time to learn about remarkable individuals and their contributions across various fields.

This Nobel season has been particularly intriguing – and somewhat controversial – due to the special recognition given to advancements in AI within the Physics and Chemistry categories.

This year's awards spotlight the vast potential of AI and raise pressing questions about the nature of scientific disciplines in an era when computational methods are redefining traditional fields.

In this article, we aim to explore the role of AI in the 2024 Nobel Prizes, discuss the controversy now that things have settled down, and invite you to share your opinion on the matter!

Could AI become a lasting presence in future Nobel categories?

Nobel in Physics 2024

Image by Alina Grubnyak on Unsplash

Although computers cannot "think" as humans do, computer algorithms can now mimic human-like functions such as memory and learning.

This year's laureates, John Hopfield and Geoffrey Hinton, have helped make this possible by laying the groundwork for the machine learning revolution that began around 2010.

Specifically, the Nobel recognition was awarded for their work on Hopfield networks and Boltzmann machines, respectively. This research began in the 1980s and continued to develop over the following decades.

The Nobel Foundation has provided extensive information for both general and specialized audiences. Therefore, in this article, I will focus on the core aspects that make these contributions Nobel-worthy!

Hopfield Networks

Although Hopfield networks were not the first neural networks, they are considered an early influential model.

In particular, they were among the first to use a recurrent, fully connected architecture to store and retrieve patterns, distinguishing them from earlier models like the Perceptron, which was single-layer and feedforward.

Let's explore the key ideas behind Hopfield Networks:

  • Memory Function: Hopfield networks store patterns as memories. Given incomplete or noisy data, the network can recreate the closest stored pattern, making it useful for restoring corrupted or incomplete information.
  • Binary States: Each neuron in a Hopfield network holds a binary state (either 0 or 1).
  • Energy Landscape: Patterns are stored as low-energy states. When presented with a similar input, the network "moves" to the nearest stored pattern by minimizing its energy, similar to the principle behind modern gradient descent!
  • Fully Connected: Every neuron connects to every other neuron, forming a fully connected network. As a result, each neuron influences all the others.
  • Associative Memory: The way of retrieving information in those networks functions like associative memory in the brain, allowing us to recall an entire event or object from a small cue. Hopfield networks were crucial in advancing models of associative memory!
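
To make these ideas concrete, here is a minimal, illustrative sketch of a Hopfield network in NumPy. It is not Hopfield's original formulation: it uses the common -1/+1 state convention (equivalent to the binary states described above), Hebbian learning to store patterns, and asynchronous updates that lower the network's energy until it settles into the nearest stored memory.

```python
import numpy as np

class HopfieldNetwork:
    """Minimal sketch: Hebbian storage + asynchronous energy-minimizing recall."""

    def __init__(self, n_neurons):
        self.weights = np.zeros((n_neurons, n_neurons))

    def store(self, patterns):
        # Hebbian learning: strengthen connections between co-active neurons.
        for p in patterns:  # each pattern is a vector of -1/+1 states
            self.weights += np.outer(p, p)
        np.fill_diagonal(self.weights, 0)  # no self-connections

    def energy(self, state):
        # The energy function the network minimizes during recall.
        return -0.5 * state @ self.weights @ state

    def recall(self, state, steps=100):
        state = state.copy()
        for _ in range(steps):
            i = np.random.randint(len(state))  # asynchronous single-neuron update
            state[i] = 1 if self.weights[i] @ state >= 0 else -1
        return state

# Usage: store a pattern, then recover it from a corrupted version.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
net = HopfieldNetwork(len(pattern))
net.store([pattern])
noisy = pattern.copy()
noisy[:2] *= -1  # flip two bits to simulate a noisy cue
print(net.recall(noisy))  # converges back to the stored pattern
```

Flipping a couple of bits and letting the network run shows the associative memory in action: the corrupted input slides down the energy landscape back to the stored pattern.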

Given these main points about Hopfield networks, do you think this proposal is Nobel-worthy in Physics?


Boltzmann Machines

While Hopfield networks can recall stored patterns from partial or noisy inputs, they do not generate new data in the way that generative models do.

This is where Boltzmann Machines come in. Considered early generative models, they were among the first neural networks capable of learning and representing complex probability distributions over input data.

Let's review the main points for Boltzmann Machines:

  • Probabilistic Learning: Boltzmann Machines learn to represent data distributions. Therefore, they can capture the patterns within the data.
  • Stochastic Nodes: As in Hopfield networks, each node can be in a binary state. However, states are determined probabilistically, based on the states of connected nodes.
  • Visible and Hidden Units: The network consists of visible units (which represent input data) and hidden units (which represent latent variables). The usage of different units allows it to learn deeper representations of the data.
  • Architecture: Like Hopfield networks, Boltzmann Machines are fully connected, with symmetric, undirected connections between their units.
  • Energy Function: The learning process also includes minimizing an energy function. This energy-based approach guides the network's learning process, similar to the approach used in Hopfield networks.
  • Data Generation: Once trained, Boltzmann Machines can generate new data samples similar to the training data. And that is a generative task!
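
Training a general Boltzmann Machine is computationally expensive, so below is a sketch of its best-known tractable variant, the Restricted Boltzmann Machine (RBM), trained with one-step contrastive divergence (CD-1). This illustrates the probabilistic, energy-based learning described above; it is not a reproduction of the laureates' original algorithm, and all parameter choices here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Sketch of a Restricted Boltzmann Machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_hidden(self, v):
        p = self._sigmoid(v @ self.W + self.b_h)      # P(h=1 | v)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_visible(self, h):
        p = self._sigmoid(h @ self.W.T + self.b_v)    # P(v=1 | h)
        return p, (rng.random(p.shape) < p).astype(float)

    def train_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        p_h0, h0 = self.sample_hidden(v0)
        # Negative phase: one Gibbs step yields a "dreamed" reconstruction.
        p_v1, v1 = self.sample_visible(h0)
        p_h1, _ = self.sample_hidden(v1)
        # CD-1 update: data statistics minus model statistics.
        self.W += self.lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        self.b_v += self.lr * (v0 - v1)
        self.b_h += self.lr * (p_h0 - p_h1)

# Usage: learn a binary pattern, then generate a sample resembling it.
data = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
rbm = RBM(n_visible=6, n_hidden=3)
for _ in range(500):
    rbm.train_step(data)
_, h = rbm.sample_hidden(data)
p_v, _ = rbm.sample_visible(h)
print(p_v.round(2))  # reconstruction probabilities, close to the pattern
```

Once trained, sampling from the hidden units and projecting back to the visible units is exactly the generative step described above: the network produces new data resembling what it has seen.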

The concept of probabilistic learning and the energy-based approach proposed for Boltzmann Machines laid the groundwork for more advanced generative models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs).

To me, that sounds like a big player, doesn't it?

For an in-depth explanation of the Physics foundations of those awards, I strongly recommend the following article by Tim Lou:

The Science Behind AI's First Nobel Prize

Nobel in Chemistry 2024

Image by Google DeepMind on Unsplash

Continuing with the AI-influenced prizes, half of the 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis and John Jumper from DeepMind for their contributions to protein science through AI-driven methods. Specifically, they have received recognition for their work on AlphaFold.

AlphaFold is a tool capable of accurately predicting protein structures. This tool has been crucial for advances in fields like drug development and disease research. Currently, the methods used by the tool are AI-based and its predictions are freely accessible through an online database, benefiting scientists worldwide!

Transformers

Demis Hassabis brought AlphaFold to life by using statistical and physics-based approaches to analyze how amino acids in a protein might interact and predict the resulting protein structures. John Jumper then advanced it further with the second version, AlphaFold 2, refining the method by integrating a transformer-based architecture to predict protein structures more efficiently.

Indeed, transformers were key to AlphaFold 2's improvement. The model uses two main transformer modules: one to analyze relationships between amino acid residues within a protein and another to assess relationships between amino acids and the broader sequence context. The iterative use of these transformers allows the model to progressively refine its understanding of how amino acids interact in three-dimensional space and propose consistent protein structures.
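
AlphaFold 2's actual modules are far more elaborate, but the core mechanism they rely on is transformer-style self-attention. The toy NumPy sketch below shows the general idea, not AlphaFold 2 itself: every residue representation attends to every other one, and iterating the block with residual connections progressively refines the representations. All dimensions and weights are made up for illustration.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Toy scaled dot-product self-attention over residue representations."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise residue affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over residues
    return weights @ V                               # context-mixed features

rng = np.random.default_rng(0)
n_residues, d = 8, 16   # 8 amino acid residues, 16-dim features (arbitrary)
X = rng.normal(size=(n_residues, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

# Iterative refinement: feed the output back in, loosely mirroring how
# AlphaFold 2 repeatedly updates its residue representations.
for _ in range(3):
    X = X + self_attention(X, W_q, W_k, W_v)  # residual connection
print(X.shape)  # (8, 16): refined per-residue representations
```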

Transformers are also the key architecture behind the famous Large Language Models!


Nobel Controversy

This year's prizes honor computational methods that have transformed disciplines such as physics, chemistry (and biology!), and have helped enable the era of Artificial Intelligence.

However, the biggest controversy lies in whether these discoveries – especially the one awarded the Physics Nobel – truly belong to pure science or should be classified under Computer Science.

Does it make sense to create a new Nobel category for Computer Science contributions?

Such a category was not foreseen by Alfred Nobel, as the concept of Computer Science didn't even exist when the awards were established over 100 years ago…


For the Chemistry prize, while AlphaFold's AI-driven solution to protein folding was pretty impressive, some believe the award should recognize the human-driven science behind it, rather than the tool itself.

Should AI, which relies on existing data rather than original hypotheses, be credited in traditional scientific fields?

While the prize does not solely focus on AlphaFold, it celebrates this project as a milestone in AI's potential to accelerate scientific discovery. It showcases how AI-driven models can transform and speed up research.

Finally, the authorship of AI models trained on large datasets has been debated. John Jumper himself pointed out that AlphaFold's success is also due to years of effort from scientists who contributed to databases like the Protein Data Bank.

Is this fair authorship then?

This remains a key question following the awards!

Final Thoughts

For the first time – and probably not the last – a scientific breakthrough enabled by Artificial Intelligence has been recognized with a Nobel Prize, along with the ideas that led to its development.

Clearly, both Hopfield networks and Boltzmann machines were physics-inspired models that enabled AI. While Hopfield networks are considered early neural networks that made promising contributions to associative memory, Boltzmann machines' generative capability distinguished them from those earlier models, which were limited to deterministic associative memory tasks.

C'mon, Boltzmann Machines were foundational for later generative models!

On the other hand, while acknowledging that AlphaFold is a tool, Demis Hassabis and John Jumper have successfully used Artificial Intelligence to predict the structures of almost all known proteins, speeding up years of work and reshaping traditional processes. The impact of their contribution cannot be overlooked.

I see the valuable impact of those discoveries but also the controversy… What about you? What are your thoughts?

Has the AI hype influenced even the prestigious Nobel prizes? Or is it a fair attribution?


That is all! Many thanks for reading!

I hope this article helps in understanding the AI influence on this year's Nobel Prizes!

You can also subscribe to my Newsletter to stay tuned for new content.

Especially if you are interested in articles on Artificial Intelligence:

What are Digital Twins?

5 Key Points to Unlock LLM Quantization

DALL-E 3: a step towards content policy moderation
