Insights

  • Discovery Engines Podcast

    Accelerating scientific discovery is exciting in theory, but what does it look like in practice?

    To help us separate hype from reality, and give us a sense of the very real challenges and opportunities that scientists and innovators face as they tap emerging technologies to accelerate scientific discovery, we’re excited to launch our new podcast, Discovery Engines.

    Our first six episodes are now online, featuring scientist founders such as Cristian Ponce and Théo Schäfer of Tetsuwan, who are turning scientific intent into executable code, and Hani Goodarzi, Core Investigator at the Arc Institute, which, in partnership with NVIDIA, recently released Evo 2, the largest AI foundation model for biology to date.

    Stay ahead of the curve and be notified when new episodes are posted by subscribing to our podcast’s newsletter at www.discoveryengines.co.

    Full episodes are also available on:

    YouTube: https://youtube.com/@discoveryengines

    Spotify: https://spoti.fi/3BzMqiy

    Apple Podcasts: https://apple.co/40sQCu9


  • Autonomous Science: Opportunity Map

    Earlier this summer, startup EvolutionaryScale raised $142 million to build AI models that generate novel proteins. Last year, the University of Toronto received Canada’s biggest ever university research grant to support self-driving labs for materials discovery.

    Meanwhile, the life sciences remain notoriously expensive and heavily regulated. The systemic challenges of producing novel materials mean that decades can pass between a discovery and its widespread adoption.

    So, how do we reconcile these two seemingly opposing forces? If you’re an industry leader, how should you weigh an investment of your company’s resources in the emerging field of autonomous science? If you’re a student or potential founder, what opportunities might there be to build a useful product?

    Over the last few months I’ve posed such questions to 18 experts at leading labs and startups. Their insights include:

    • Novel discovery means little without the ability to scale up production. In CapEx-intensive and highly regulated spaces, that is no small feat.
    • There is a steep organizational learning curve to adopting AI-driven methods, from getting existing datasets “ML-ready” to documenting existing protocols that can be challenging to express in code (e.g. mix solution until “cloudy”).
    • Regulatory compliance pressure is both limiting and advancing the adoption of self-driving labs in life science.
    • In the US, there is increasing interest in adopting autonomous science technology as a way of securing our supply chain and maintaining our strategic edge.

    I’ve mapped my findings onto this mural, laying out key drivers and barriers to adoption of autonomous science technology, the peer landscape, and potential startup ideas categorized by investment threshold.

    My hope is that if you’re new to this space, this map will shorten your ramp-up time. Want to make your own version? Go ahead! The map is licensed under a Creative Commons license for you to remix freely.


    I am optimistic about our ability to harness autonomous science platforms in a way that is safe, unlocks breakthrough discoveries that help us live healthier lives on a healthier planet, and gives us insight into the mysteries of nature.

    This is why I founded YesAnd Labs, to partner with founders and industry leaders and accelerate the pathway from discovery to real-world impact. If you’d like to learn more, or just want to share what cool work you’re doing, drop us a line.

    Here’s to accelerating scientific discovery, together. Onward.


  • Self-Driving Labs for Science: Video Intro

    Noncommunicable diseases like diabetes and cancer prematurely claim over 15 million lives each year. Bringing a new drug to market takes an average of 8.5 years.

    In the quest for clean energy, better batteries with higher performance and lower costs are essential. Yet, new materials often take decades to move from the lab to widespread application.

    Could self-driving science, where deep learning and robotics unite to test and validate scientific hypotheses, accelerate progress in health, climate change, and other fields?


    Recently, I’ve had the privilege of speaking with a number of experts in life and materials sciences to explore this emerging approach to research.

    To foster more interest and investment in this potentially transformative field, I will be open-sourcing my research through an opportunity map that’s freely shareable under a Creative Commons license.

    To kick things off, I’m excited to share a video intro: Self-Driving Labs for Science: What They Are, Their Promise, and Their Risks:

    An accompanying mural – with links to supporting articles and a listing of subject matter experts consulted – is accessible here: https://bit.ly/self-driving-science-map.

    I’d love to hear your feedback and thoughts. Here’s to the future of science!


  • AI in Industrial Physics: APS Webinar Notes

    Last week, I had the opportunity to be a panelist on a webinar hosted by the American Physical Society (APS), where we explored the role of Artificial Intelligence in Industrial Physics. For those not familiar, APS is a nonprofit on a mission to advance the knowledge of physics. Founded in 1899, it represents over 50,000 members across academia, national labs, and industry.

    This webinar – artfully facilitated by Peter Fiske – was a wide-ranging exploration of how AI is influencing physics, along with advice on how to work at this intersection for those interested in doing so. For an in-depth download – and to hear the much more astute points made by my co-panelists from organizations such as Toyota Research Institute and Sandia National Labs – I recommend watching the full recording embedded below. As a sneak peek and a complement to the recording, I’m also including an excerpt of my prep notes here.

    A big thank you to Alex Semendinger (Mathematics PhD student at Brandeis), David Shih (High Energy Theory Professor at Rutgers), and Gage DeZoort (Physics postdoctoral researcher at Princeton) for letting me pick their brains as I prepped, and Stephanie Hervey at APS for the opportunity ⚛️ 🙌


    What are some ways in which Artificial Intelligence/Machine Learning is having an impact in physics and physics-related industries?

    • In advanced materials research, deep learning has been used to increase the speed and efficiency of discovery by predicting the stability of new materials. Recently, a team was able to describe 2.2 million new crystals, 736 of which have been independently created by external researchers. What’s more, this process is being paired with autonomous labs to further accelerate the discovery process. Where synthesis fails, this data is fed back into the learning model, which then improves follow-on techniques for materials screening and synthesis design. (Reference: Scaling Deep Learning for materials discovery; Nature, November 2023)
    • In fusion energy, an overarching goal is to pull atoms together and capture the resulting energy. Our sun – a fusion energy plant – accomplishes this through its sheer gravitational mass. On earth, scientists use powerful magnetic coils to confine the nuclear fusion reaction inside of donut-shaped “tokamak” vessels. The Swiss Plasma Center used Reinforcement Learning to autonomously learn to command the full set of 19 magnetic control coils, a promising step towards a new approach to sustaining these reactions. This Neural Network was initially trained in a simulation. (Reference: Magnetic control of tokamak plasmas through deep reinforcement learning, Nature, February 2022)
    • In high energy physics, many Machine Learning methods (e.g. graph networks, transformers, diffusion models, auto-encoders, normalizing flows) are having an impact. One theoretical physicist I spoke to shared how, in his experience, applying ML has enabled physicists to, for example, train models for anomaly detection, i.e. “look at anomalous events, reconstruct the invariant masses of the particles in the collision, and then decide if they can be described by a Standard-Model process.” If they can’t, perhaps it’s a pointer to new physics. AI methods have also helped speed up existing simulations, and simulation-based inference, he continued.
    • For accelerating the research process in general, platforms such as IBM’s Deep Search and Google DeepMind’s Gemini have been built to help researchers analyze a large corpus of existing knowledge and quickly make sense of it, an important part of the research process that can otherwise be painstakingly slow. IBM’s approach involves creating “knowledge graphs,” which use Natural Language Processing to construct a comprehensive view of nodes, edges, and labels, enabling question-answering and search systems to retrieve and reuse comprehensive answers to given queries. A preview of Gemini’s approach, here:
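    Deep Search’s internals are proprietary, but the knowledge-graph idea above is easy to illustrate. Here is a minimal, hypothetical sketch (the facts and the `query` helper are invented for illustration, not IBM’s API): facts stored as subject–relation–object triples, plus the kind of pattern match a question-answering system might run over them.

```python
# Toy knowledge graph: facts stored as (subject, relation, object) triples --
# the same node/edge/label structure described in the text.
TRIPLES = [
    ("graphene", "is_a", "2D material"),
    ("graphene", "composed_of", "carbon"),
    ("MoS2", "is_a", "2D material"),
    ("2D material", "studied_for", "battery electrodes"),
]

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [
        t for t in TRIPLES
        if (subject is None or t[0] == subject)
        and (relation is None or t[1] == relation)
        and (obj is None or t[2] == obj)
    ]

# "What do we know about graphene?" -> follow its outgoing edges
print(query(subject="graphene"))
# "Which materials are 2D materials?" -> match on relation and object
print(query(relation="is_a", obj="2D material"))
```

    Real systems extract these triples automatically from papers with NLP and store millions of them; the retrieval step, though, is conceptually just this pattern match.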

    What are some limits to applying AI/ML in physics?

    • At facilities such as the Large Hadron Collider, one postdoc I spoke to shared how researchers there often face pre-defined constraints, e.g. a very stringent data-taking pipeline or little time to use compute resources. Access to talent and compute is a common theme I heard, especially in academia and smaller startups. This postdoc continued: “In academia, if a group has 5-10 researchers working on the same thing, that’s considered large.” A May 2023 MIT Sloan article (Study: Industry now dominates AI research) noted: “Today, roughly 70% of individuals with a PhD in artificial intelligence get jobs in private industry, compared with 20% two decades ago.” The same article reported that, based on number of parameters, the largest AI models developed in any given year now come from industry 96% of the time, and that the number of published papers with industry co-authors has nearly doubled since 2000. That giant sucking sound? AI talent being scooped up by industry!
    • Silver Lining 1: Resource scarcity has led to innovations that enable smaller organizations and teams to do more with less. One postdoc I spoke to noted how equivariant architectures, which force neural networks to obey a certain symmetry, have enabled researchers in more resource-constrained environments to work with less data and fewer parameters, and to train and run models more efficiently. Such innovations benefit the research community as a whole, including in industry.
    • Silver Lining 2: In the United States the government is stepping up to try and close the gap. In 2020, the National Science Foundation announced a $100 million investment in five NSF AI institutes, including the AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) whose summer workshop I profiled earlier this year. In 2023, the NSF also announced a $140 million investment to support AI research in seven areas of opportunity and risk associated with advances in AI. The US Department of Energy, via national labs such as Argonne, is also driving strategic AI for Science initiatives as detailed in this 2023 report.

    How can physics students/professionals best position themselves to transition to a data science or AI/ML career?

    • The first step is to realize that, as a physicist, you have gained tools that are incredibly valuable in AI. Specifically, you are well versed in breaking down a complicated system into small parts, noticing something that works a certain way, and building an intuition around it. Diffusion models – a breakthrough behind some of the photorealistic image generators that have become popular as of late – find their antecedents in the physical process of diffusion from thermodynamics.
    • Second: get practical experience. For example, learn how to code and build your first neural networks via micro-courses on Kaggle (free). Coding opens up so many doors – just be sure to learn the latest frameworks, such as TensorFlow or PyTorch. Once you’ve got some coding under your belt, find research papers that inspire you and, as one PhD student recommended, try to replicate their work on your own. And/or check out more specialized courses, such as this MITx course on Computational Data Science in Physics.
    • Third: apply for an internship. Research Experiences for Undergraduates (REU) programs, funded by the National Science Foundation, support active participation by undergraduate students in research areas funded by the NSF. Google’s Summer of Code enables you to pair up with mentor organizations such as CERN, who propose ideas for areas of research they need help with. Submit a proposal and, once accepted, spend 12+ weeks working on a project for your sponsor, receiving mentorship and building relationships throughout the process.
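    As a taste of the “build your first neural network” step above, here is a minimal from-scratch example in the spirit of those Kaggle micro-courses (the architecture and hyperparameters are arbitrary choices for illustration): a tiny two-layer network trained by backpropagation to fit XOR, a function no single-layer model can learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic first exercise -- not linearly separable,
# so it genuinely needs the hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, one sigmoid output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

loss_before = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Full-batch gradient-descent update
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

loss_after = np.mean((out - y) ** 2)
print(loss_before, "->", loss_after)  # loss should drop sharply
print(np.round(out.ravel(), 2))       # predictions approach [0, 1, 1, 0]
```

    Frameworks like PyTorch automate the backward pass, but writing it out once by hand is the fastest way to see that it is just the chain rule.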

    Full video recap of our webinar, below. Interested in attending a future APS webinar? Check them out and register here!

    // As published on LinkedIn


  • Where AI and Physics meet: Reflections from IAIFI’s 2023 Summer Workshop

    Earlier this month I was fortunate to join IAIFI’s Summer Workshop in Boston as a volunteer, snapping photos and video for the opportunity to sit in on lectures and participate.

    IAIFI (pronounced eye-fi) is a National Science Foundation (NSF)-funded research institute that – in collaboration with MIT, Harvard, Northeastern and Tufts – brings together some of our most talented researchers at the intersection of AI and physics.

    While I was interested to hear about the applications of AI for physics, I was intrigued to learn that techniques from physics are increasingly useful for improving the effectiveness and explainability of machine learning models. As we depend on AI models ever more, developing a better understanding of their mechanics and designing them to be more resource-efficient will be important to helping us adopt them in a way that’s safe and sustainable.

    As I reflect on the talks I heard at IAIFI, here are a few that I especially enjoyed:

    • Physicists such as Newton and Einstein were able to derive testable laws of physics using math; how might AI help us do the same? In his talk “Symbolic Distillation of Neural Networks for New Physics Discovery,” Miles Cranmer walked us through his work on genetic algorithms to derive such mathematical theories. One obstacle: with the space of possible outputs being so large, neural networks can only take us so far. How might a mixed symbolic and neural network approach (a la DeepMind’s AlphaGo) help us narrow the problem? Additional research may help us find out.
    • David Shih of Rutgers made Miles’ work tangible in his talk: “Machine Learning for Fundamental Physics: From the Smallest to the Largest Scales.” While our current Standard Model of physics has been surprisingly resilient (up to and including the discovery of the Higgs Boson at CERN’s Large Hadron Collider), physical phenomena such as Dark Matter, Dark Energy, and matter/anti-matter asymmetries show us that there must be “new physics” beyond the Standard Model. Could Machine Learning help? Large particle colliders such as the LHC produce an enormous amount of data (100 petabytes – or 100 million gigabytes – and growing). Sorting through and finding patterns in large sets of data is one of Machine Learning’s superpowers. Exciting progress is being made, though shortages of compute and researchers in this space mean it could be faster still.
    • Astrophysics and cosmology are also starting to see the benefits of Machine Learning, with problems once considered impossible being solved over the last five years. Ben Wandelt – PhD in astrophysics and Professor at Sorbonne University – spoke in his talk “From inference to discovery with AI in the physical sciences” to how techniques such as Information Maximizing Neural Networks are being used to quantify important features of data that are relevant for inference. Ultimately, this helps us make better sense of the data we get when we look out onto the universe.

    One of my favorite aspects of the workshop was seeing researchers find inspiration in one another’s work: applied ML engineers learning from astrophysicists, who themselves were inspired by string theorists, and so on. It reminded me of Design Thinking’s “Alternative Worlds” method, where we open up novel solutions to a given problem by intentionally approaching it from a new point of view. Here’s to the collision of ideas, and the insights they reveal.

    * * *

    To watch the above talks, and others presented, check out IAIFI’s video playlist here. To learn more about IAIFI, surf over to iaifi.org or @iaifa-news on Twitter. Here’s a recap video I shot and edited as a volunteer:

    // As published on LinkedIn


  • What Is Reality? 5 Quick Takes From Today’s Leading Scientists

    What is reality? It’s such a fundamental question, it seems almost silly to ask it. And yet, when our brightest minds drill into it, the answers are surprisingly fuzzy. Textbook statements of physics – e.g. “for every action there is an equal and opposite reaction,” or “atoms are the fundamental building blocks of matter” – break down. When we look for matter, we find probability. When we look for gravity, we also find time. Simply put, as one theoretical physicist recently told me, we still don’t know “what the hell is going on.”

    While confusion reigns, luck is on our side. We’re lucky to live in a time when scientists are actively exploring these questions; newly available technologies are accelerating our discoveries; and the proliferation of cat videos creates business models for platforms such as YouTube that enable us to quickly share knowledge.

    In that spirit, below are five of my favorite “quick takes” on the nature of reality, each with a different answer to this deceptively simple question.

    Take 1: Reality, or the physical world as we experience it, is produced by our consciousness. And thus misses a lot of what actually exists around us. As explained by Professor Donald Hoffman, Cognitive psychologist at UC Irvine.

    Take 2: Relatedly, our three-dimensional reality (plus time) is actually one of 10 or 11 dimensions, most of which we don’t perceive. As explained by Juan Maldacena, theoretical physicist and Professor at the Institute for Advanced Study in Princeton.

    Take 3: Our reality is simply one of many copies of reality, each branch of which contains full, conscious copies of each one of us (hello, many-me’s! 👋). As explained by Sean Carroll, theoretical physicist at Caltech.

    Take 4: Our reality is an ancestor simulation created by a super-intelligent species. As explained by Nick Bostrom, Swedish philosopher and AI thought-leader at the University of Oxford.

    Take 5: Reality is a holographic projection of a thin, distant, two-dimensional surface. As explained by theoretical physicists Jim Gates, Leonard Susskind, and others.

    Which theory do you find most credible? Or do you have a different take entirely?

    // As published on Medium


  • Five Takeaways from “Quantum To Business”​

    Last week I attended the Quantum to Business (Q2B) conference, an annual event that brings together thought-leaders and stakeholders in the Quantum Computing community.

    As a newcomer to Quantum Computing I felt – in the words of a fellow participant – like I was “drinking from three firehoses.” Needless to say, when the smoke cleared from three days of research presentations, product demos, and some incredible chocolate chip cookies, here are my impressions of the current state of the industry.

    1. We’ve entered a new era in Quantum Computing. But a lot of tough engineering challenges remain.

    With Google’s recent announcement of having achieved “Quantum Supremacy,” there’s a new sense that robust, commercially deployable quantum computers are just around the corner. Not so, seems to be the consensus. A good metaphor for Google’s demonstration is the Wright Brothers’ first flight, which lasted all of 12 seconds. An amazing proof of concept for a new technology, but one with very little practical value.

    Qubits – the induced properties of subatomic particles that sit at the core of quantum computers – are notoriously hard to control. The systems we’ve built so far are “noisy” – i.e. undisciplined, and they spit out lots of inaccurate data. There is a lot of work to be done in reducing errors, or improving the “fault tolerance” of these systems.

    In fact, the industry hasn’t even converged on how to build a quantum processor. Some teams are moving forward with qubits that need to be supercooled. Others think the trick is using elements that are stable at room temperature. Whereas Google and IBM’s quantum hardware is built using superconducting qubits, Microsoft is taking a different approach, and developing topological ones.

    “NISQ,” or “Noisy Intermediate-Scale Quantum,” is the industry shorthand for the era of quantum computing that we’re currently in. While recent advancements have shifted Quantum Computers from theory to reality, until we overcome the (big) engineering challenges that remain, truly useful Quantum Computers are still a ways off. How far off? Some folks say 5-10 years. The joke is that’s what folks have been saying for the last 5-10 years.

    2. Today’s Quantum Computers can in fact perform some operations. However, they’re limited in scope.

    While we’re still a ways off from reliable quantum computers, there are indeed some things that NISQ Computers are capable of today. The sweet spot is applications that classical computers struggle with, but where getting a wrong answer is – relatively speaking – not a big deal. Such as providing product recommendations (a la Netflix).

    One company, D-Wave, has carved out a niche in this space, creating its own version of a Quantum Computer using a more stable process called “Quantum Annealing.” D-Wave has been selling these machines for nearly a decade, with real customers using them on real problems. For example, VW recently announced a partnership with D-Wave, under which it will use D-Wave’s machines to optimize traffic flow in Beijing.

    Quantum Computers do their work by executing “quantum algorithms.” Some folks at Q2B were adamant that the industry’s focus should be on finding new applications (“use-cases”) for these algorithms. Others felt the emphasis should be on discovering new algorithms, custom-suited for the “noisy” machines that we have. QC Ware, which hosted the conference, is one of several algorithm-developers in this space.

    3. Without a revenue-ready product, everyone’s investment pitch is very creative.

    As an entrepreneur, the holy grail for a new startup is product-market fit. What’s striking about the QC space is that, for the most part, the product is still under development, and the market demand is still theoretical. It seems obvious that if someone were to build a functional quantum computer, demand would quickly follow. But with engineering timelines unknown, and few profitable applications of the tech as it stands today, everyone is doing their own unique dance for investment and growth.

    Investment in Quantum Computing seems to be driven by two factors: perceived potential opportunity, and fear of missing out (FOMO). Specifically:

    Government: A robust quantum computer could enable better traffic flows at ports. Or crack, by brute force, the highest encryption standards we use today. China recently announced a $10B investment in a national laboratory for Quantum Information Science. The US recently passed the National Quantum Initiative Act, backed by $1.2 billion in federal grants.

    Startups: Given the murky product roadmap, a company building a full-stack solution (a la Rigetti) requires investors with patience, a trait investors are not exactly known for. Startups that build just one part of the stack (e.g. software to better control quantum processors) have a more ready market; however, a good portion of this market is government and academia – not the typical customer mix you would expect for a venture-backed tech startup.

    Enterprises: Large tech players such as Google, IBM and Microsoft have a natural advantage in developing quantum hardware, as with cash reserves they can afford to be patient. Each is vying to be the trusted partner of industry, and build the “sticky” ecosystem that draws in startups and enterprises. Industry is eager to partner with these companies to develop applications for quantum hardware, though they seem unlikely to build any hardware themselves. Honeywell stands out as a “traditional” hardware company that’s gone all-in, taking the wraps off a homegrown Quantum Computing program that’s been several years in the making.

    Consulting Firms: Consultancies seem to have the most ready source of revenue in Quantum. Accenture, Booz Allen, BCG and McKinsey have each built Quantum Computing practices, positioning themselves as the de-facto translator and integrator of quantum computing technologies for the companies they serve. Consultancies already sell “readiness” audits, similar to what we’ve seen in the AI and Digital Transformation spaces. 

    4. A lot of thought is being paid to building out the larger Quantum Computing ecosystem.

    While advances in Quantum Computing hardware are important, if the industry is to be successful, a lot of other pieces need to fall into place. These include: quantum algorithm engineers; hardware suppliers; integrators; salespeople who understand the technology; and new programming languages that enable end-users to manipulate a quantum processor for the outcomes they seek.

    I was impressed by how intentionally these pieces are being put in place. The larger tech companies stand out in their efforts. Google, IBM and Microsoft have all introduced their own programming languages (Cirq, Qiskit, and Q#, respectively), making it easier for developers to write code for quantum hardware. They have also led efforts to build community among key stakeholders, such as startups who are building a given slice of the quantum stack, customers who are interested in developing use-cases, and university researchers looking to conduct new experiments. Microsoft does so through its Quantum Network; IBM through its Q Network, which boasts over 80,000 users.

    I even had a chance to play a Minecraft-inspired Quantum Computing game, developed to help students and professionals better understand how Quantum Computing works. It’s part of a larger gaming effort to get folks interested in this space, and to encourage a new generation of students to take up degrees in quantum computing.

    5. Quantum Computing is diversity-challenged.

    We know that technology has a diversity problem. The field of Quantum Computing seems to be no exception. I was struck by the fact that about 90% of the speakers were male. So were most of my fellow conference-goers. The “Global South” was missing as well. The scene seemed to be made up of the usual suspects in tech: the US, Europe, and a few “tech forward” countries in Asia.

    I was heartened to see some positive moves to address this imbalance at Q2B, such as male speakers referring to scientists as “she” when presenting, and a woman being invited to moderate the conference’s closing panel. As a first-generation American, I inquired about – and was extended – a 50% discount on the ticket price. A generous gesture from Q2B’s organizing team.

    I don’t have an answer to this challenge; I was just struck by it over my time there. One step could be to publicize scholarships for underrepresented communities, a path taken by the organizers of MIT’s upcoming EmTech AI conference. Beyond this, of course, we need to address the root causes of this disparity – work advanced through organizations that support women in STEM.

    New technologies represent some of the best economic ladders our society has to offer, for both underrepresented communities, and underrepresented economies. If the future is to be Quantum, my hope is it will also be more reflective of the societies it will impact.

    * * *

    Q2B impressed me with how smoothly it ran, and how many different important players in the ecosystem showed up to trade notes in a relaxed, friendly setting. I would go again.

    If you’re interested to learn more about what transpired at the conference, or dig into some of the slides presented (especially with more technical content), I recommend checking out posts under the Twitter hashtag #Q2B19. And if you want to continue the conversation, come join us on Reddit over at r/quantumcomputing.

    // As published on LinkedIn


  • What Is The Nature Of Reality? How Quantum Computing + A.I. May Supercharge Our Search For Answers

    Quantum computing aims to harness the properties of quantum physics to solve real world problems. Its next job may be to help us understand reality itself.

    Growing up, the idea of death frightened me. My coping strategy was to distract myself from it, or tell myself that I’d think about it later on in life. This worked fairly well. Minus the occasional night terror. Or that time when I found myself on the family couch, bear-hugging my mom in an existential panic (bless her heart). Today, this fear still crops up, but I’ve gained some new tools to deal with it. As my therapist tells me: feel your fear. And beneath it, often you’ll find another feeling. For me, underneath this fear lies excitement. And underneath this excitement is, for me, a question: What is reality? What is this thing I’ve been born into and, presumably, am so afraid of leaving behind?

    As luck would have it, some very smart people have been investigating this question for a very long time. Physicists, in particular, have devised smart theories, many of them validated through ingenious scientific experiments, as to what the nature of our reality actually is. It’s what brought us “every action has an equal and opposite reaction,” or Isaac Newton’s laws of motion. Or “a particle can be a wave, and a wave a particle,” or quantum physics as developed by luminaries such as Niels Bohr, Max Planck and Albert Einstein. Discoveries that spawned new theories about the nature of our reality: that it represents 4 of 11 possible dimensions; that it is constantly splintering into copies of itself; or that it’s actually generated by our own consciousness (insert head-explosion emoji).

    Over the last 50 years, however, our understanding of “reality” – or the world around us as explained through fundamental physics – has slowed. While past discoveries enable many of the technologies we depend on today, the long-term, open-source nature of this research means few institutions have been willing to pony up the investment required to push this field along. Caltech physicist Sean Carroll thinks that today there are fewer than 100 physicists actively working on advancing our understanding of fundamental physics. Is this slower pace of discovery an accurate reflection of our curiosity for the world around us? Thankfully, another path is emerging.

    Of the many challenges to testing new theories in fundamental physics, two big ones are time and cost: the time required to design a test for a theory, and the cost required to build the experiment and run it. While many effective experiments can be done in the low-million dollar range, the ones that yield the most interesting results can cost much more. The Large Hadron Collider, for example, built to help us discover new particles, took decades to plan and cost a whopping $4.75 billion to build.

    Simulations, however, offer a potential workaround. They’re quicker to set up, cheaper to build, and could potentially be as useful to researchers as experiments conducted in the “real” world. The challenge until now has been that our simulations have been – necessarily – basic. Accurately simulating interactions between atoms in matter as small as a molecule is computationally overwhelming, even for our most powerful supercomputers. Simply put, our simulations have not been able to mimic real world experiments. Enter Quantum Computing.

    Quantum Computing is a fundamentally different approach to building a computer. At its core, the job of a computer is to process long strings of bits encoded as 0s and 1s. A classical computer (the ones we use today) processes these bits via billions of transistors embedded in a silicon chip. A quantum computer, on the other hand, relies on “quantum bits” – or the induced properties of subatomic particles. These “qubits” have the special property of being able to represent a 0, a 1, or any value in between – at the same time. Because of this feature, they eliminate some of the constraints of (binary) classical computing systems and enable enormous computational outputs in parallel.
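    The “0, 1, or anything in between” description has a precise form worth seeing. In the standard textbook state-vector picture (independent of any particular hardware), a qubit is a unit vector of two complex amplitudes, and the probability of reading 0 or 1 is the squared magnitude of each amplitude. A few lines of linear algebra are enough to show superposition and interference:

```python
import numpy as np

# A qubit is a unit vector of two complex amplitudes (alpha, beta);
# measuring it yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
zero = np.array([1, 0], dtype=complex)  # the |0> state

# The Hadamard gate puts a definite state into an equal superposition.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]: equal chance of reading 0 or 1

# Applying H again interferes the amplitudes back into |0> exactly --
# the kind of cancellation a classical random bit cannot reproduce.
state = H @ state
print(np.round(np.abs(state) ** 2, 10))  # [1. 0.]
```

    Classical computers can simulate a handful of qubits this way, but the state vector doubles in size with every qubit added, which is exactly why large quantum systems overwhelm classical simulation.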

    Quantum computers may help us run the types of hyper-realistic physics simulations that up until now have been impossible, at a fraction of the cost of conducting those experiments “in real life.” In fact, the very act of building stable, useful quantum computers might give us new insights into quantum mechanics itself.

    In addition to Quantum Computers, Artificial Intelligence (AI) may also have a role to play. Today, one common way of building AI is through layered neural networks which, through ingesting large amounts of data (say tagged photos of cats), use increasing levels of abstraction to develop an understanding of how this data “works.” Well, what if instead of making sense of cat photos, we asked this AI to make sense of unexplained natural phenomena, such as Dark Matter? Two physicists at MIT, Tailin Wu and Max Tegmark, have started doing just that. They’ve endowed a machine learning algorithm with four common analytical strategies employed by scientists, and asked it to make sense of increasingly realistic simulations of the physical world. Paired with a quantum computer, we can imagine a rich environment in which an AI might help us make sense of the world around us.
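    Wu and Tegmark’s system is far more sophisticated, but the flavor of machine-assisted law discovery can be shown in a few lines. This toy example is mine, not theirs: hypothesize a power law relating orbital period to orbital size, then let a fit recover the exponent from real (lightly rounded) solar-system data.

```python
import numpy as np

# "Observations": semi-major axis a (AU) and orbital period T (years)
# for Mercury, Venus, Earth, and Mars -- real values, lightly rounded.
a = np.array([0.387, 0.723, 1.000, 1.524])
T = np.array([0.241, 0.615, 1.000, 1.881])

# Hypothesize a power law T = a**k and recover k with a log-log linear fit.
k, intercept = np.polyfit(np.log(a), np.log(T), 1)
print(round(k, 2))  # close to 1.5: Kepler's third law, T^2 proportional to a^3
```

    A law-discovery system automates exactly this loop at scale: propose candidate functional forms, fit them against data or simulation, and keep the hypotheses that survive.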

    * * *

    The rise of Artificial Intelligence comes with a long list of potential dangers. I’m especially wary of how AI can be paired with content to influence our behaviors. AI that – as historian Yuval Noah Harari puts it – “knows us better than we know ourselves.” As with any new technology, at their core Quantum Computers and Artificial Intelligence are tools, which we know from experience can be used just as easily to build as to destroy. The ability of Quantum Computing and AI to help us make gains in areas that are important to us, such as developing better treatments for disease, making our cities less congested and modeling climate change will, I hope, set a clear example of the ways we want to apply these technologies, and a clearer contrast to the ways in which we don’t. And, in the process, maybe even shed light on a question that has sparked the curiosity of humanity for generations: what is the nature of reality?

    // As published on LinkedIn