
Is genius worth pursuing?

I admit I was sad when o1-preview surpassed PhD-student-level performance on benchmarks of biology-focused problem solving. Week after week of progress and success is great for humankind, but it honestly sucks as I imagine what it means for my future as a scientist. I now feel the pain of Garry Kasparov and Lee Sedol, who were bested by AI before me.

In chess, we are observing some of the drawbacks of democratized superhuman performance. Cheating scandals and the ‘boringness’ of the game have made it less motivating to stay at the top. Training for chess is now less about creativity and more about grueling, memorization-based preparation. Is there nothing special about being a world-class scientist now, too?

Scientists are akin to chess players (or professional athletes, artists, or leaders) in that they have a craft they hope to get better at every day. But in science, the stakes are too high to impose limits on the machine for the sake of competition. Sport is entertainment; science is a means to an important end.

Science is not a well-paying profession; you work long hours and run complicated experiments for very little gratification beyond your personal satisfaction. When thinking becomes commoditized and the primary bottlenecks are money and time, maybe you don’t want to be just a technician running experiments.

I do the reagent ordering, the mindless experiments, the admin work, and the rest of the chores of science so that I can have the pleasure of thinking. There are physical barriers to sports: very few people are capable of running fast for a long time or hitting logo threes. Anyone can sit on their ass, tell ChatGPT to think, and ‘make discoveries’. AI is the great equalizer, and the P, h, and D no longer make us special.

Where does AI conquer first?

There are roughly 36 trillion cells in the human body. Each is affected by multimodal inputs, including but not limited to mechanosensation, electrical stimuli, microenvironmental soluble proteins and metabolites, pH, temperature, and even its own history. These variables are functions of time (e.g., circadian rhythm), of insults and injuries at the organismal or tissue scale, and of course of the surrounding cellular milieu and tissue structure. To a human, this is impossible to compute, but could it be attainable for an AI model?

  • Natalia Trayanova has been building models of the heart for decades. Her lab builds personalized models of damaged hearts from imaging data, predicts cardiac events, and identifies candidate areas for curative ablation for various arrhythmias. Their approach can model individual cardiomyocytes and assemble them into an understanding of networks with high electrical or mechanical connectivity.

  • Simplified models of organ systems pioneered by Don Ingber and Emulate have also demonstrated that complex organismal biology can in fact be distilled into manageable systems for predicting toxicity or PK/PD for drug compounds. The ability of in vitro models to match, and in some settings surpass, the predictive performance of in vivo models indicates that general principles of drug distribution can be learned.

  • Indeed, pharmacokinetic and pharmacodynamic (PK/PD) models demonstrate that we can in fact learn clean laws of activity and distribution within biological systems, and these models have been quite successful over many decades for characterizing novel drug compounds (a toy sketch follows below).
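
To make “clean laws” concrete, here is a minimal sketch of the classic one-compartment pharmacokinetic model with first-order elimination, C(t) = (Dose/V) * exp(-k*t) with k = CL/V; the dose, volume of distribution, and clearance values are invented for illustration, not taken from any real drug.

```python
# Minimal one-compartment PK sketch with first-order elimination:
#   C(t) = (Dose / V) * exp(-k * t),  k = CL / V
# All parameter values below are illustrative placeholders.
import math

dose_mg = 100.0       # IV bolus dose (mg)
V_L = 40.0            # volume of distribution (L)
CL_L_per_h = 5.0      # clearance (L/h)
k = CL_L_per_h / V_L  # elimination rate constant (1/h)

for t_h in range(0, 25, 4):
    conc = (dose_mg / V_L) * math.exp(-k * t_h)
    print(f"t = {t_h:2d} h: C = {conc:.2f} mg/L")
```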

These physics-based modeling systems (e.g., the mechanics of the heart and blood) have succeeded in capturing emergent properties. It is likely that Michael Levin’s work on the principles of electrical-potential-based regulation of regeneration can be similarly encapsulated by theory.

Another seemingly obvious area for the supremacy of AI is biochemistry and biophysics. These are again relatively straightforward chemistry and physics problems: just fitting keys into holes, or vice versa. A lot of ink has already been spilled describing how wonderful these new advances are.

However, our understanding of complex disease is lacking. To date, the returns from systems biology approaches have been quite poor, telling us little that we didn’t already know from decades of low-plex, hypothesis-driven science. There is no physics-based way of understanding rheumatoid arthritis (yet, potentially).

The upshot is that biology is an arena where experiments win: animal models and cell-line models are what predict efficacy and toxicity.

In part, this is because problems in biology are much harder to define, and there are far fewer benchmarks of success. In cancer biology, what we care about are response rates, which are a function of many factors, including patient-specific ones. In the more simplified in vitro setting, we care little about interpolation (i.e., if we have a very effective therapy A that kills 70% of cells and an ineffective therapy B that kills 10% of cells, we don’t care about the gray area between 10% and 70%, but rather how to push therapy A past 70%). In this manner, we don’t care about mimicking performance; all we care about is improvement, which is by definition out of distribution, and likely harder for an AI to deliver consistently.

With these points in mind, we can define criteria for ‘AI-dominatability’:

  1. Typically imperfect human performance, but a clear definition of perfect performance. AI wins in areas where humans are not perfect: pathology, radiology, answering trivia.
  2. Large training and evaluation datasets. AI wins where there are high-accuracy, diverse, and well-annotated databases: the PDB, USMLE question banks, the internet corpus.
  3. Few degrees of freedom. AI wins when parameters can be accurately assembled. AI fails when there is not enough information to fit parameters accurately, or in fields that cannot be summarized by heuristics or theory. Science is tricky because a lot of what is published is wrong, and even more is unpublished or requires special, unwritten technique. These unwritten skills of the craft are the human’s advantage. When people report the results of experiments, they don’t include all the necessary parameters.

What does this mean for the scientists of today?

Will AI teach us how to cure Alzheimer’s and cancer, and how to live forever?

Beating biology benchmarks is not actually the difficult part of doing biology. Coming to unique insights is, and will probably forever continue to be, the rate-limiting step in biological research. This is not to say that AI won’t play a role; in fact, coming up with insights for where to apply AI also counts as addressing a rate-limiting step. Scientists have the opportunity to use the machine as a tool, finding applications for it that nobody else has. One very creative example is using AlphaFold Multimer as a screening platform to identify which proteins are capable of interacting with one another (a small sketch follows this paragraph).
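
To make the screening idea concrete, here is a minimal sketch of the triage step, assuming the AlphaFold Multimer predictions have already been run elsewhere and that each candidate pair’s interface confidence (its ipTM score) has been collected; the protein names, scores, and cutoff below are hypothetical.

```python
# Hypothetical triage step for an AlphaFold Multimer interaction screen:
# rank candidate pairs by a precomputed interface confidence (ipTM,
# 0 to 1) and keep those above a cutoff for experimental follow-up.
# Names, scores, and the threshold are invented for illustration.
candidate_pairs = {
    ("ligandX", "receptor1"): 0.82,
    ("ligandX", "receptor2"): 0.31,
    ("ligandX", "receptor3"): 0.67,
}

IPTM_CUTOFF = 0.6  # illustrative; a real screen would calibrate this

hits = sorted(
    (pair for pair, score in candidate_pairs.items() if score >= IPTM_CUTOFF),
    key=lambda pair: candidate_pairs[pair],
    reverse=True,
)
for a, b in hits:
    print(f"{a} + {b}: ipTM = {candidate_pairs[(a, b)]:.2f}")
```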

But often, some experimentally derived insight from a human scientific hunch provides the impetus to apply AI, or the two simply collaborate along orthogonal axes. For antibodies, you can optimize affinity with structural biology and an AI agent, as this is essentially a physics problem. But for half-life, there would have been no way for AI to figure out, de novo, the FcRn recycling mechanisms that provided the major leap for almost all new half-life-extended antibodies. The human scientist discovered a new mechanism, and now AI might be useful for optimizing it.

The bottom line is that you still need to be able to see something that no one else has seen.

AI still needs to be prompted, and coming up with optimal prompting strategies is still hard work. AI doesn’t know everything, and experiments or human scientific intuition may still be useful for coming up with new problems for AI to solve.

Is there a good way to industrialize human scientific intuition? Maybe the trick is to prevent humans from prematurely ruling out possibilities, and to develop a smart prompting strategy that allows an agent to review the literature and come up with critical questions. Developing an agent capable of learning ‘from scratch’, or of unlearning a lot of the junk that is published in the literature, could be a significant advance and could provide a more interpretable assessment of how conclusions are reached.

In the hypothesis generation arena, AI could be good at identifying the functions of genes by similarity search (a sketch follows below). Or it could be used to mine metagenomics data to identify new functions. I have no doubt that AI can be a very proficient annotation tool for understudied sequences. One day, it could be used to design fit-for-purpose proteases or other enzymes, or to design complex small molecules like PROTACs. AI could be used for filtering hypotheses, removing redundant experiments or probable failures. In some instances, AI can nominate candidates for drug repurposing, which may be particularly useful for rare diseases.
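
As one concrete version of similarity-based annotation, here is a minimal sketch that ranks annotated genes by cosine similarity to an understudied query, assuming fixed-length embeddings (e.g., from a protein language model) have already been computed; the gene names are hypothetical and random vectors stand in for real embeddings.

```python
# Sketch: hypothesize a gene's function by embedding similarity.
# Random vectors stand in for real embeddings, which in practice
# would come from a sequence model; gene names are made up.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 128

annotated = {  # reference genes with known functions
    "geneA (kinase)": rng.normal(size=EMB_DIM),
    "geneB (protease)": rng.normal(size=EMB_DIM),
    "geneC (transporter)": rng.normal(size=EMB_DIM),
}
query = rng.normal(size=EMB_DIM)  # the understudied sequence

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Top hits become candidate functional hypotheses to test at the bench.
for name, emb in sorted(annotated.items(),
                        key=lambda kv: cosine(query, kv[1]),
                        reverse=True):
    print(f"{name}: similarity = {cosine(query, emb):.3f}")
```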

How can the new PhDs prepare for AI?

Humans are resilient; we respond to threats and adapt. To gain an advantage in our new AI world, we can adapt by learning these skills:

  1. Better prompting strategy: The most frustrating thing is that AI is incredibly unhelpful for any of my day-to-day tasks except coding, where it assists with R and Python syntax. I will sometimes use it as a poor-quality search engine to increase my exposure to new ideas. It has certainly enabled me to speed up, letting me reach conclusions from computational analyses 2-3x faster than if I had relied on Google alone. However, when I prompt for novel or unexpected research ideas, the outputs are unoriginal and often quite bad or incorrect. Learning how to get the most out of an AI system is certainly very high yield.

  2. Fluency with AI infrastructure: Using the latest tools from FutureHouse, NotebookLM, etc. to improve your research. With these tools, we can avoid painful activities like writing reviews or reading through every citation in a review. AI is good at summarization and can highlight things that we might miss. We should still read papers and evaluate primary data ourselves rather than trying to understand them through the lens of an AI agent. Perspective pieces may still be worth reading, if only to see how a thought leader organizes their frameworks. I’m afraid reviews written by trainees are officially useless, however.

  3. Multimodal expertise: AI should accelerate our ability to learn both computational and experimental skills. It should also elevate our ability to ingest foundational knowledge by letting us come up to speed on new fields faster. The ability to curate what is true versus untrue is arguably the most important skill in an age of information abundance, and to do it you need to know a lot and know how to do a lot. Network effects are important: if you are an experimentalist who glazes over computational analyses, or vice versa, you are going to get beaten by someone who is able to pay attention 100% of the time. With AI, it is now possible to be good at everything.

  4. Technical proficiency: What differentiates a cloning magician from someone who spends months trying to do a simple ligation? The answer is time: one wastes it, and the other gets things done. Technical proficiency will matter a lot more in a world where AI can think. Unfortunately, humans are already quite cheap experimentalists, and I don’t think AI will be automating Western blots or qPCRs at a reasonable cost anytime soon. I cannot simply tell an AI to optimize the staining conditions for my antibody. Maybe someone will build a cheap enough machine to do it, but it likely isn’t worth the marginal cost. Humans are cheap labor. Resources in science consist of the following: experimental grindset, thinking and experimental design, time, and money. The first two are modifiable (money too, but sometimes that can’t be helped), and if AI lets you max out thinking and experimental design, then your technical proficiency (and effort) and the time you save become the differentiators.

The final thing I wanted to discuss is philosophical.

Is genius worth pursuing?

I really hope so.

Science requires intense motivation, and for new students of the craft, I worry that the perceived inevitability of AI could blunt the motivation to pursue basic biology research. Greatness in all disciplines is motivated by a big prize at the end of the tunnel, won at the cost of often crushing financial, health, and social sacrifices. An important part of the AI future should be recognizing the pioneers who decide to go down this path, because without the creative work that scientists do, I think life becomes incredibly boring.

Many questions remain unanswered, waiting for the right scientist to lead AI to the promised land.

There is mistrust of, and hesitation toward, AI products, and paired with already existing inequalities in the use of new technology, the gap between the haves and have-nots is going to balloon. Everyone will have a personal tutor, but not everyone will use it. It is a beautiful accomplishment to have zero theoretical gap in opportunity, but it means that decisions made earlier and earlier in life will have important consequences. As a parent, do you encourage your kid to go outside and play soccer, or allow them to keep talking to their AI tutor? Compounding that starts earlier means a much greater variance in outcomes is possible.

In the limit, humanoid robots with a real-world understanding of physics could do experiments for us 24/7, or be controlled by an autonomous AI scientist that prompts the robot with ideas to execute. It is honestly inevitable that human ingenuity evolves into a sort of human-computer-interface profession. We are all going to need to learn how to collaborate with, and get the most out of, our AI friends. A mix of strategy and skill: the teenage games were useful after all.

Published Oct 25, 2024

Harvard-MIT PhD Student