Earlier this month I was fortunate to join IAIFI’s Summer Workshop in Boston as a volunteer, snapping photos and shooting video in exchange for the opportunity to sit in on lectures and participate.
IAIFI (pronounced eye-fi) is a National Science Foundation (NSF)-funded research institute, a collaboration among MIT, Harvard, Northeastern, and Tufts, that brings together some of our most talented researchers at the intersection of AI and physics.
While I expected to hear about applications of AI to physics, I was intrigued to learn that the exchange runs both ways: techniques from physics are increasingly useful for improving the effectiveness and explainability of machine learning models. As we come to depend on AI models ever more, developing a better understanding of their mechanics and designing them to be more resource-efficient will be important to adopting them safely and sustainably.
As I reflect on the talks I heard at IAIFI, here are a few that I especially enjoyed:
- Physicists such as Newton and Einstein derived testable laws of physics using math; how might AI help us do the same? In his talk “Symbolic Distillation of Neural Networks for New Physics Discovery,” Miles Cranmer walked us through his work using genetic algorithms to derive such mathematical theories. One obstacle: the space of possible expressions is so large that neural networks alone can only take us so far. How might a mixed symbolic-and-neural approach (à la DeepMind’s AlphaGo, which pairs neural networks with tree search) help narrow the problem? Additional research may help us find out. (I’ve included a minimal sketch of the symbolic-regression step after this list.)
- David Shih of Rutgers made Miles’ work tangible in his talk “Machine Learning for Fundamental Physics: From the Smallest to the Largest Scales.” While our current Standard Model of physics has been surprisingly resilient (up to and including the discovery of the Higgs boson at CERN’s Large Hadron Collider), physical phenomena such as Dark Matter, Dark Energy, and matter/anti-matter asymmetries show us that there must be “new physics” beyond the Standard Model. Could Machine Learning help? Large particle colliders such as the LHC produce an enormous amount of data (100 petabytes, or 100 million gigabytes, and growing), and sorting through large datasets to find patterns is one of Machine Learning’s superpowers. Exciting progress is being made, though shortages of compute and researchers in this space mean it could be faster still. (A sketch of one common pattern-finding approach, anomaly detection with an autoencoder, also follows this list.)
- The fields of Astrophysics and Cosmology are also starting to see the benefits of Machine Learning, solving problems over the last five years that were once considered impossible. In his talk “From inference to discovery with AI in the physical sciences,” Ben Wandelt (PhD in astrophysics and professor at Sorbonne University) spoke to how techniques such as Information Maximizing Neural Networks are being used to quantify the features of data that matter for inference, ultimately helping us make better sense of what we see when we look out at the universe. (The third sketch below is a toy version of this idea.)
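
For the curious, here is roughly what the symbolic-regression step in Miles’ pipeline looks like in practice. This is a minimal sketch using PySR, the open-source symbolic regression library Miles maintains; the toy inverse-square data and settings are my own, not from the talk.

```python
import numpy as np
from pysr import PySRRegressor

# Toy data: noisy samples of an inverse-square law, y = x1 / x2^2.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 5.0, size=(200, 2))
y = X[:, 0] / X[:, 1] ** 2 + rng.normal(0.0, 0.01, size=200)

# Genetic search over symbolic expressions built from these operators.
model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["square"],
)
model.fit(X, y)

# PySR returns a Pareto front trading off accuracy against complexity;
# the hope is that a compact, testable law like x1 / x2^2 pops out.
print(model)
```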
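On the collider side, one widely used pattern-finding technique is unsupervised anomaly detection: train an autoencoder on ordinary “background” events, then flag events it reconstructs poorly as candidates for new physics. Here’s a generic sketch of that idea in PyTorch; the 16 features and random data are placeholders, not Shih’s actual analysis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder for collider event features (assumption: 16 features per event).
background = torch.randn(10_000, 16)

# A small autoencoder: events it can't reconstruct well don't look like background.
model = nn.Sequential(
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 3),  nn.ReLU(),  # bottleneck
    nn.Linear(3, 8),  nn.ReLU(),
    nn.Linear(8, 16),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(background), background)
    loss.backward()
    opt.step()

# Score unseen events: high reconstruction error = background-unlike = interesting.
new_events = torch.randn(100, 16)
with torch.no_grad():
    errors = ((model(new_events) - new_events) ** 2).mean(dim=1)
print(errors.topk(5).values)  # the five most anomalous events
```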
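And here is a heavily simplified, single-parameter toy in the spirit of the Information Maximizing Neural Networks Ben described: a network is trained to compress simulated data into a summary whose Fisher information about the parameter is as large as possible. The Gaussian simulator, architecture, and training settings below are mine, for illustration only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy simulator: 10 noisy measurements whose mean is the parameter theta
# (a stand-in for real physics simulations).
def simulate(theta, n_sims=500, n_data=10):
    return theta + torch.randn(n_sims, n_data)

theta_fid, delta = 1.0, 0.1  # fiducial parameter and finite-difference step
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    # Summaries of simulations at the fiducial theta and at small offsets.
    s_fid = net(simulate(theta_fid))
    dmu = (net(simulate(theta_fid + delta)).mean()
           - net(simulate(theta_fid - delta)).mean()) / (2 * delta)

    # One-parameter Fisher information of the learned summary:
    # (sensitivity to theta)^2 / (noise in the summary).
    fisher = dmu ** 2 / s_fid.var()

    opt.zero_grad()
    (-fisher).backward()  # maximize information
    opt.step()
```

The published method adds refinements (such as sharing random seeds across the offset simulations to tame noise), but the core objective, maximizing the information a summary carries about the parameters, is the same.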
One of my favorite aspects of the workshop was seeing researchers find inspiration in one another’s work: applied ML engineers learning from astrophysicists, who themselves were inspired by string theorists, and so on. It reminded me of Design Thinking’s “Alternative Worlds” method, where we open up novel solutions to a given problem by intentionally approaching it from a new point of view. Here’s to the collision of ideas, and the insights they reveal.
* * *
To watch the above talks, and others presented, check out IAIFI’s video playlist here. To learn more about IAIFI, surf over to iaifi.org or @iaifi_news on Twitter. Here’s a recap video I shot and edited as a volunteer:
// As published on LinkedIn