When AI Meets mRNA: A Ten-Person Thought Experiment Triggered by a Dog's Cancer Vaccine

Introduction

This is an experiment.

We wanted to test something: given a set of facts, can we use each person’s unique system of cognitive axioms to accurately simulate their reaction to the same event? Furthermore, how large is the gap between these different dimensions of response? Can we aggregate a final commentary from these scattered perspectives that transcends any single viewpoint?

To this end, we chose a recent trending news story: Paul Conyngham, an Australian tech entrepreneur, used an AI toolchain to design a personalized mRNA vaccine for his terminally ill pet dog. Based on the cognitive axiom systems of ten AI practitioners—each verified through multiple rounds of iteration—we simulated their independent reactions to this event.

This is not a news report, but a stress test of cognitive diversity. We want to see what kind of spectrum is refracted when the same fact is processed by ten different cognitive frameworks, and whether these spectra can be combined into a more complete beam of light.


Event Summary

Protagonist: Paul Conyngham, a Sydney-based tech entrepreneur with 17 years of experience in machine learning and data analysis, but no background in medicine or biology.

Patient: Rosie, an 8-year-old Staffordshire Bull Terrier/Shar-Pei cross rescue dog, diagnosed in 2024 with terminal aggressive mast cell cancer that chemotherapy and surgery failed to control.

AI Toolchain: ChatGPT for literature navigation, AlphaFold for protein structure modeling (confidence 54.55%), Grok for designing the final vaccine construct, and Gemini for extensive auxiliary work.

Collaborating Institutions: UNSW RNA Institute (vaccine manufacturing), UNSW Ramaciotti Centre for Genomics (sequencing, $3,000, 320GB of data), and the University of Queensland (vaccine injection under an existing ethical approval framework).

Results: First injection in December 2025. By January 21, 2026, the tumor had shrunk by approximately 75%, and the dog went from being near death to jumping fences and chasing rabbits.

Key Controversies: Concurrent use of checkpoint inhibitors; n=1 with no control group, making attribution impossible; experts questioning the exaggerated role of AI; actual cost estimated at $20,000–$50,000; ethical approval required a 100-page application and took 3 months.

Industry Background: A 5-year follow-up of Moderna’s melanoma mRNA vaccine Phase 3 trial showed a 49% improvement. Personalized mRNA cancer vaccines have made substantial progress in human clinical trials.


Aggregated Commentary: Intersections and Tensions of Ten Perspectives


A tech entrepreneur with no medical background uses AI tools to design a personalized mRNA cancer vaccine for his dying pet dog, resulting in a 75% reduction in tumor size. This story went viral on social networks because it created an illusion of empowerment—the idea that “I can do this too.” Three-thousand-dollar gene sequencing and readily available ChatGPT make top-tier medical technology seem within reach. However, the public often ignores the three-month-long, 100-page ethical approval process, because such highly visible, tedious effort is uncomfortable to acknowledge. Stripping away this layer of technological romanticism, this is not a story about AI; it is a story about people. The protagonist’s 17 years of machine learning experience is the buried lede. AI is an amplifier; zero multiplied by a hundred is still zero. It can only amplify the cognitive foundation you already possess. In this case, AI lowered the barrier to execution but raised the value of judgment higher than ever.

Professional barriers are shifting from “knowing how to do it” to “knowing what to do.” In fact, the underlying engines—mRNA technology, AlphaFold, gene sequencing—were already in place. AI simply translated the assembly instructions for these complex tools into natural language. Traditional academic and R&D workflows are often local optima formed by the cognitive limitations of predecessors, which newcomers inherit as unquestioned legacy processes. As a complete outsider, he lacked this path dependency. He completed the transition from an independent contributor to an architect, no longer personally conducting specific experiments but designing how the work should be completed. Knowing when to feed what context to which model—this context engineering capability has become the new core skill.

In this process, the deepest value AI provides is actually psychological empowerment. “Daring” is more important than “being able.” Faced with a complex proposition that usually only top laboratories dare to touch, AI gave an ordinary person the confidence to cross the professional chasm and take the first step into unknown deep waters.

However, returning to the rigorous context of medicine and science, while the dog’s tumor shrank, immune checkpoint inhibitors were used simultaneously during the treatment. n=1 proves nothing in science. This reveals the core structural contradiction in current technological development: the gap between generation and verification capabilities. Generative capacity is growing exponentially—designing a candidate vaccine takes only a few weeks—but verification capacity remains linear or even stagnant. Proving that a vaccine is safe and effective still requires years of time and hundreds of millions of dollars.

Technology can infinitely accelerate the generation of candidate solutions, but it cannot bear the consequences for humans. While underlying technology is racing ahead, the rules of the real world have not moved with it. The real bottleneck lies in verification and regulation, not technology. When all model outputs and expert opinions are aggregated, the final decision of “whether to inject this drug into the dog’s body” still rests with a human. Responsibility cannot be outsourced. This structural constraint will never be eliminated by technology.

Ten Perspectives Quick Reference

Commentator | Core Framework | Most Unique Angle
AX | Direction over execution, agentic intelligence threshold | “Getting a seat at the table” is the change itself
Lao Wang | Process verification, visibility of effort | Analysis of dissemination mechanisms
Chen Ran | The “Computer City” effect, bias toward action | Personal analogy of crossing over to get financial certifications
Fermi | Cognitive path dependency | Critique of “local optima” in academic workflows
Lao G | Layered framework thinking | Three-layer analysis: Engine / Car / Traffic Rules
Huang Yikai | Psychological empowerment | Cross-domain analogy with photography and music
Class Rep | Compound interest thinking, migration from user to builder | Gap between cost curves and approval hurdles
Yousa | Verifiability first, ROI-driven | The “scissors gap” between generation and verification
Oversea | Faith in context engineering | Context engineering through multi-model orchestration
Ya Ge | AI as an amplifier, non-delegable responsibility | Mental leap from IC to Architect

Ten Reactions

The following are the independent reactions of the ten commentators, sorted by the Pinyin of their surnames. Each reaction was simulated by AI based on their respective cognitive axiom systems and does not represent their actual personal views.

AX (Xing Fan)

Let’s start with the conclusion: what’s truly worth noting in this story isn’t that “AI designed a vaccine,” but that a person with no medical background used AI to elevate themselves from “not knowing what questions to ask” to “being able to talk to university labs and drive a project to completion.”

I feel many people will run to two extremes when they see this news. One side says, “AI is so amazing it can design vaccines,” while the other says, “This is just hype; ChatGPT just helped him look up some literature.” Both sides miss the point.

What actually happened is this: the path for personalized mRNA cancer vaccines has already been paved by human scientists. Moderna’s melanoma Phase 3 data is right there with a 49% improvement rate—that’s real scientific progress. What Paul did was essentially use an AI toolchain to “translate” an existing scientific path into a project plan he could understand and execute. ChatGPT for literature navigation, AlphaFold for protein structure prediction, Grok to assist in designing the construct, and Gemini for the grunt work. None of these steps are groundbreaking AI applications on their own, but strung together, they allowed an ML engineer to sit at the same table as scientists from the UNSW RNA Institute and discuss problems. This “getting a seat at the table” is the change itself.

But I must say, there’s a crucial variable here that’s easily overlooked: Paul himself has 17 years of experience in ML and data analysis. This isn’t a story about just anyone chatting with ChatGPT for a few rounds and designing a vaccine. His technical background allowed him to judge the quality of AI output, identify which results were credible and which were hallucinations, and cross-verify AlphaFold’s output with literature. This precisely confirms my core judgment: direction is scarcer than execution. AI has lowered the barrier to execution, but the ability to “know which direction to go” and “judge whether intermediate results are correct” has become even more valuable.

Technically, we must be honest: n=1, and with the concurrent use of a checkpoint inhibitor, the individual contribution of the vaccine cannot be isolated. A 75% tumor reduction is great news, but from a scientific standpoint, it proves nothing.



Lao Wang (Challen Wang)

Conclusion first: the most valuable part of this news isn’t “AI saved the dog,” but “ethical approval took three months and a hundred pages of documentation.” But guess which sentence will be shared?

Paul Conyngham’s story has a perfect narrative structure: the outsider hero, the terminally ill pet, cutting-edge tech, and a miraculous reversal. This structure is so perfect it makes me professionally uncomfortable.

Let’s break down this “result” using a process verification framework. A 75% tumor reduction sounds solid, but the process chain looks like this: he used a checkpoint inhibitor at the same time. Checkpoint inhibitors are currently a mainstay of cancer immunotherapy and can produce significant tumor shrinkage on their own. With n=1 and no control group, there’s no way to split the contribution or attribute it. It’s like taking fever medicine and eating ice cream at the same time, then having your fever break and saying, “Ice cream is remarkably effective at reducing fevers.” Logically, I can’t say you’re wrong, but anyone who has ever run an A/B test would feel something is off.

More importantly, let’s look at the process verification of the “AI designed the vaccine” claim. What did ChatGPT do? Literature navigation. Simply put, it’s a more conversational PubMed. Grok designed the vaccine construct, but ultimately, UNSW scientists completed the manufacturing and verification. Translated into plain English, this division of labor means: AI helped with research and a first draft, while professionals did everything that required taking responsibility for the outcome.

But I don’t want to mock Paul. What he did is valuable: a non-professional used AI tools to lower the cognitive cost of entering a high-barrier field and then found the right people to execute. This model is real. It’s just that there’s an entire world of communication science between “AI cured cancer” and “AI helped me more efficiently find the people and methods that could cure cancer.”

Why did this story go viral? Because it hits the sweet spot of “visibility of effort.” A dog-loving programmer with 17 years of ML experience but zero medical background used the same ChatGPT we use every day to save his dog. The reader’s subconscious calculation is: he has the tools I have, and he lacks the professional knowledge I lack, so the distance between me and this miracle is zero. This illusion of “visibility of effort” is the nuclear fuel for the spread of such stories.

Conversely, the truly difficult part—writing a hundred pages of ethical approval over three months—is glossed over. Because the visibility of effort in that part is too high, it makes people uncomfortable. Readers don’t want to know that a miracle requires a hundred pages of bureaucratic paperwork, just as an audience doesn’t want to know a magician practiced for twenty thousand hours backstage.



Chen Ran

To be honest, the point that excites me about this news is different from most people.

Most see “AI helping a dog cure cancer” as a heartwarming tech story. I see an engineer with 17 years of ML experience spending $3,000 on sequencing, chewing through 320GB of data, and grinding through 100 pages of ethical approval over 3 months to get a 75% tumor reduction. This isn’t a miracle; it’s the standard path for an engineer crossing into another field, almost identical in structure to my own experience.

When I say “identical in structure,” I’m not just chasing a trend; the structure really is the same. I spent two years as a stay-at-home dad and earned a full set of financial certifications: CFP, EA, and Series 65. People in the finance industry thought it was absurd: you’re a coder; why do you think you can understand tax law and portfolio management? But in practice, the knowledge systems for these exams aren’t harder than a moderately complex distributed system. The real barrier has never been the knowledge itself, but the psychological walls created by information asymmetry. The finance industry has collected management fees for decades behind that wall, just like the medical world Paul faced: it’s not that the technology can’t do it; it’s that you “shouldn’t” do it.

This is what I call the “Computer City” effect. In the nineties, when you went to a computer city to build a PC, the vendors relied not on technology but on your ignorance. You didn’t know how much a stick of RAM should cost or which configuration was a “stupidity tax,” so you got ripped off. The entire medical industry is like that for the average patient: it’s not that doctors are bad; it’s that information is structurally asymmetric. A cancer patient facing a treatment plan and a novice facing a computer city quote are essentially experiencing the same helplessness.

AI is blowing that wall apart. But notice, the way it’s blowing it up isn’t what most people think.

Paul didn’t just ask ChatGPT “how to cure a dog’s cancer” and follow instructions. He used AI for literature navigation, protein modeling, and vaccine construct design, then brought in scientists from UNSW and the University of Queensland to do the work. This model is the key: AI didn’t replace the experts; AI allowed an outsider with an engineering mindset to have an equal dialogue with experts. He could understand the papers, comprehend the protein structures, and ask meaningful questions. Information asymmetry was compressed to the point where an expert could no longer refuse to collaborate by saying “you don’t understand.”

I’ve always said that analysis is the most expensive form of waste. You can spend three years arguing whether a personalized cancer vaccine is feasible for a pet, or you can spend three months directly making one to see. Paul chose the latter. This wasn’t recklessness; it was rational action within the range of affordable loss: the dog was already terminal, and the worst-case scenario was that it wouldn’t work.

The ones who should be nervous aren’t the doctors; it’s the middlemen who live off information asymmetry.



Fermi

When I saw Paul Conyngham making an mRNA cancer vaccine for his dog, my first reaction wasn’t “AI is so powerful,” but a very specific detail: 320GB of sequencing data.

That number made me pause. From my own experiments, I know all too well that the real bottleneck in research has never been “can it be done,” but “who is qualified to do it.” When a tumor sequencing run spits out 320GB of data, what’s the traditional path? You need a postdoc with a bioinformatics background to run the pipeline, a research group’s long-accumulated analysis framework, and someone who has read enough literature to know which mutation sites are worth paying attention to. All these things together form an invisible barrier: not a technical barrier, but a cognitive one. If you’re not in that circle, you don’t even know what questions to ask.

What’s interesting about Paul is that he is precisely not in that circle. With no medical background and no bioinformatics training, following a normal academic path, he wouldn’t have even been able to take the first step with those 320GB of data. But he used AI to do one thing: skip the entire cognitive chain of “you should learn X before you can do Y” that the field has accumulated over decades. AI helped him with literature navigation, protein structure modeling, and neoantigen prediction. Note that these aren’t cutting-edge capabilities; every step has mature tools if taken individually. The key is that AI allowed him to string these tools together to form his own workflow, rather than crawling step-by-step along the established paths of immunology or oncology.

Actually, this reminds me of my experience as a PhD student. My field isn’t CS, but more and more experimental analysis is becoming inseparable from computational tools. I’ve observed a very common phenomenon: the workflows in research groups are essentially local optima of the cognitive limitations of the supervisor and previous senior students. Which software to use, which parameters to run, in what order to analyze—once these things are solidified, they become “the way our group has always done it.” Newcomers spend six months learning this process and then make minor adjustments within that framework. No one asks “is this process itself optimal” because the cost of asking that question is too high: you have to understand enough first to be qualified to question it.

But AI is changing this logic. It doesn’t carry path dependency. If you give it a problem, it won’t default to following a discipline’s traditional path. Of course, it will make mistakes and talk nonsense, but it offers a possibility: allowing a person to bypass coordination costs and just try. Paul didn’t need to convince a bioinformatics team to help him analyze data, didn’t need to wait for a schedule, and didn’t need meetings to align requirements. One person plus AI went from sequencing to vaccine design in a few months.

[Facepalm] To be honest, after reading this case, I’m a bit anxious. Not the “AI is going to replace me” kind of anxiety, but something more specific: how much of the time I spend learning existing workflows is truly necessary, and how much is just replicating the cognitive paths of those who came before? I don’t have an answer for now, but I think it’s a question worth asking continuously.



Lao G (Nick Gu)

On the surface, this news is about “an outsider using AI to cure a dog’s cancer,” but the real story isn’t how great AI is; it’s that the essence of professional barriers is being redefined.

Let’s break it down into layers.

The Engine Layer: mRNA synthesis, protein folding prediction, gene sequencing—these underlying capabilities have been around for a while. AlphaFold was 2021, mRNA platforms were validated by COVID, and sequencing costs dropped from a hundred thousand dollars to three thousand. There were no breakthroughs in the engine layer. Zero. There is no new science in this story.

The Car Layer: Assembling these engines into the “car” of a personalized cancer vaccine—literature search, mutation identification, antigen prediction, vaccine sequence design, synthesis, and injection protocols. Previously, only an MD/PhD spending five to ten years could learn how to assemble this car. Now, AI has translated the assembly manual into natural language. What Paul did was essentially understand the manual and then find the right people to turn the right screws.

The Traffic Rules Layer: Ethical approval. This is the real bottleneck and the most underestimated part of this story. He said it himself: ethical approval was harder than the technology. Why? Because the design assumption of the traffic rules layer is that anyone driving this car must have a license (a medical degree). Now someone without a license shows up, drives the car quite well, and the traffic rules system doesn’t know how to handle it.

So the real tension in this story isn’t in technology; it’s in governance.

Furthermore, look at what Paul is doing. He’s not doing research; he’s not doing engineering; he’s doing orchestration. It’s a textbook case of declarative orchestration. He defined the success criterion (shrinking the tumor) and then assigned each subtask to the most appropriate executor: AI for literature navigation and protein modeling, university scientists for experimental verification, and a sequencing company for sequencing. He didn’t need to know the specific operations of any single link. What he needed was judgment: knowing what questions to ask, knowing which output to trust, and knowing when to bring in human experts.

This is the identity shift from operator to orchestrator.
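To make the pattern concrete, here is a minimal sketch of what such a declarative plan might look like. The structure and every entry are invented for illustration; this is not Paul’s actual plan.

```python
# A minimal sketch of a declarative plan: state the goal, map each
# subtask to its best executor, and keep judgment at the review gates.
# Structure and entries are invented for illustration only.

plan = {
    "success_criterion": "measurable tumor shrinkage in the patient dog",
    "tasks": {
        "literature_navigation": "AI chat model",
        "protein_structure_modeling": "AlphaFold",
        "vaccine_construct_design": "AI draft + human expert review",
        "tumor_sequencing": "commercial genomics centre",
        "manufacturing_and_injection": "university scientists",
    },
    # The orchestrator's real work lives here, not in any single task.
    "review_gates": [
        "cross-check AI output against primary literature",
        "route anything irreversible through credentialed experts",
    ],
}

# The orchestrator never performs a task; they only decide assignments
# and accept or reject each link's output.
for task, executor in plan["tasks"].items():
    print(f"{task} -> {executor}")
```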

Why did he do it before an oncologist? It’s not because oncologists aren’t smart enough; on the contrary, it’s because they are too “smart.” Professional training teaches you not just capability, but the boundaries of fear: you know what can go wrong, so you don’t dare move. Paul didn’t have that baggage. He saw that the engines were there, the tools were there, and AI could provide the assembly path—so he just did it. “Hand speed is the moat” doesn’t mean he’s fast with his hands; it means he’s fast to start. The impedance between idea and action was minimized.

This model will be replicated. Not in cancer vaccines—the traffic rules layer will quickly fill the gap—but in all fields where “the engines are ready but the right to assemble is monopolized by degrees.” Law, architecture, finance, education—one by one.

The ones who should be nervous aren’t the doctors; it’s everyone who treats “I know how to assemble” as their moat. Because AI just open-sourced the assembly manual.



Huang Yikai

Let me start with something unrelated. The other day someone asked me, “You’re a photographer; why are you always sharing AI stuff on WeChat?” I said, “Yeah, I’m just an amateur; don’t listen to me.”

But I can’t help but talk about this news today.

An Australian tech entrepreneur, not a doctor or a biologist, used AI assistance and teamed up with a university lab to design an mRNA cancer vaccine for his dog. The tumor shrank by 75%. n=1, a sample size of one dog. A hundred pages of ethical approval over three months. Three thousand dollars for sequencing.

Some experts say the role of AI has been exaggerated. Of course, they’re right. But I think they’re missing the point.

Let me use photography as an analogy. Twenty years ago, if you wanted to take a decent photo, you had to understand the exposure triangle, know how to develop film, and spend time in a darkroom. These barriers kept 99% of people away from “expression.” Then digital cameras came, auto-exposure came, Lightroom came, and smartphones came. Every time technology lowered the barrier, the “old masters” would jump out and say, “That’s not photography.”

Were they right? From the perspective of technical purity, of course. But they ignored one thing: among those blocked by the barriers, some had excellent “eyes.” They knew which scenes were worth capturing and which moments were meaningful; they just didn’t know how to operate the machine. The democratization of tools releases not technical ability, but aesthetic judgment and problem awareness.

What Paul Conyngham did is essentially the same. He couldn’t do protein modeling or run experiments, but he did something no AI can do: he asked a question. He looked at his dog and asked, “Why can’t we try mRNA?” Then he broke that question down into an executable path: where the literature is, who to collaborate with, how the approval goes, and where the money comes from.

This process is called conceptualization. It’s going from a vague intuition to a clear framework. AI helped him read thousands of papers, ran protein structure predictions for him, and accelerated countless execution steps that would have normally required a PhD student to pull all-nighters. But the idea of “trying an mRNA vaccine for this dog” and turning that idea into a project proposal that could convince a university professor to participate—that step was purely human.

So I think the most noteworthy thing in this news isn’t “AI designed the vaccine”—that phrasing is misleading in itself. What’s noteworthy is that an outsider, because AI lowered the barriers to knowledge acquisition and technical execution, dared to touch a problem that originally only top laboratories dared to touch.

That word “dare” is the greatest value of AI.

It’s not about efficiency—to be honest, three thousand dollars for sequencing and a three-month approval process isn’t that efficient. It’s about psychological empowerment. It makes a person feel “maybe I can try this too,” and then they actually try it, find the right people, and things actually move forward.



Class Rep (Sun Yuzheng)

When I saw the news about Paul Conyngham designing an mRNA cancer vaccine for his dog, the most common reaction on WeChat and Twitter boiled down to one line: “AI cured cancer.” I want to break that down first because it confuses at least three different things.

The first layer is what AI actually did in this case. It performed literature searches, protein structure modeling, and vaccine sequence design. These things could be done before, but they required a well-trained PhD in molecular biology spending several months. AI compressed this process to the point where someone with a technical background but who is not an expert in the field could complete it in a reasonable amount of time. This is a lowering of the invocation barrier, not the creation of capability out of thin air.

The second layer is the word “cured.” A 75% tumor reduction occurred in one dog—n=1, no control group, no long-term follow-up. Anyone who has done A/B testing knows that a dramatic result from a single sample is most likely noise.

The third layer is what’s truly worth noting: Moderna’s melanoma mRNA vaccine Phase 3 produced positive data. That is the result of a large-sample, controlled, and rigorously approved process.

But what I find truly worth talking about in this case isn’t the efficacy data; it’s what Conyngham himself did.

His original situation was: the dog had cancer, he took it to the vet, the vet said options were limited, and as a pet owner, he could only passively accept that. This is a typical “user” state—you consume services provided by others, and the boundaries of the service are your boundaries. Then he did something: he read the literature himself, used AI to help understand molecular biology, found a university lab to collaborate with, and designed a treatment plan. Regardless of the final efficacy, he went from being someone waiting for a plan to someone defining a plan. This is the “user to builder” migration I keep talking about.

Now for the compound interest part. Looking at this case in isolation, its direct value is limited—one dog, one data point. But if you zoom out, you see something else: the cost of designing personalized mRNA vaccines is falling rapidly. Sequencing is $3,000, AI-assisted design has near-zero marginal cost, and the price of synthesizing mRNA is also dropping exponentially. Today, this requires a tech-savvy entrepreneur to spend a lot of energy coordinating, but every percentage point drop in the cost curve expands the range of people who can do this. That’s compound interest: it’s not that this one case changed anything, but that it marked a position on a cost curve, and the slope of that curve means that in three to five years, similar attempts will go from being news to being routine operations.
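A toy calculation makes the slope visible. The 30% annual decline below is an assumed rate chosen purely for illustration, applied to the article’s upper cost estimate:

```python
# Illustrative arithmetic only: apply a hypothetical 30% annual cost
# decline to the article's upper estimate of $50,000. The decline rate
# is an assumption chosen for the example, not a figure from the story.
cost = 50_000.0
for year in range(1, 6):
    cost *= 0.70
    print(f"year {year}: ~${cost:,.0f}")
# Year 5 lands around $8,400: the same procedure drifting from
# "wealthy-enthusiast project" toward "expensive veterinary bill".
```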

The real bottleneck thus becomes clear: it’s not technology; it’s regulation and ethical approval. When technical costs drop to a range affordable for individuals while approval processes still operate at the pace of traditional drug development, this gap will become the greatest source of tension in the coming years. Whoever can build compliant and efficient infrastructure in this gap will occupy a truly valuable position.



Yousa

Conclusion first: this case is most likely one where “narrative value far exceeds scientific value.” But narrative value itself is not zero.

Breaking it down by verifiability: n=1, and with both the mRNA vaccine and a checkpoint inhibitor administered, these two variables weren’t separated. You can’t even attribute that 75% tumor reduction to the vaccine. Checkpoint inhibitors produce responses on their own in some canine tumors. So strictly speaking, the verifiability of this result is near zero. It’s not that it’s definitely useless, but you can’t extract any reliable causal inference from this experimental design. The Stanford PhD’s criticism is correct on this level, and perhaps even an understatement.
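The underdetermination can be shown in a few lines. A toy sketch, assuming an idealized additive-effect model with invented numbers:

```python
# Toy sketch of the attribution problem: one observation, two
# simultaneous interventions. Under an idealized additive-effect model
# (an assumption for illustration; real tumor response is not additive),
# every split below fits the single data point equally well.

observed_shrinkage = 0.75

candidate_explanations = [
    {"vaccine": 0.75, "inhibitor": 0.00},  # vaccine did everything
    {"vaccine": 0.40, "inhibitor": 0.35},  # both contributed
    {"vaccine": 0.00, "inhibitor": 0.75},  # inhibitor did everything
]

for split in candidate_explanations:
    total = split["vaccine"] + split["inhibitor"]
    # n=1 gives one equation with two unknowns: underdetermined.
    assert abs(total - observed_shrinkage) < 1e-9
```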

But I want to go a step further.

Where is the bottleneck? Many would say the bottleneck is AI capability—whether AI can actually do drug design. I think that question is asked backward. What AI did in this pipeline—literature search, protein structure prediction, sequence design—has a 70–80% probability of already being at a “good enough” level. The baseline for protein modeling after AlphaFold is already very high. AI is not the bottleneck.

The real bottleneck is at the verification layer. You design a candidate vaccine, and it might take a few weeks from calculation to injection. But proving it’s effective, safe, and reproducible takes years and millions of dollars under traditional frameworks. A 100-page ethical approval taking 3 months is just for animal-level experiments. The bottleneck isn’t at the generation end; it’s at the verification end. This is a structural problem for the entire AI+biotech field: generative capacity is increasing exponentially, while verification capacity is linear or even stagnant.

How to look at ROI? The $3,000 sequencing cost is a gimmick; $20,000–$50,000 is closer to the real figure. But even at $50,000, if you view it as a “treatment attempt for a terminally ill pet,” for pet owners with the ability to pay, the ROI might be positive—not in a scientific sense, but as an emotional and probabilistic gamble. Many people would pay $50,000 for a “non-zero chance of extending a pet’s life.”

I’m more concerned about two second-order effects. First, if such narratives spread widely, people might try to do it themselves without scientific collaboration, leading to safety incidents. Second, the “AI designed the vaccine” framing will lead the public to systematically overestimate AI’s capabilities in biomedicine, and when real clinical data fails to meet expectations, the backlash will be severe.

So my judgment is: as an engineering demo, it shows the feasibility of the pipeline, and there’s a 60–70% chance this path will produce truly clinically significant results within the next 5 years—but not in this way. It will be within a framework that is controlled, statistically powered, and rigorously validated. This current case itself, as scientific evidence, has a weight near zero. As a signal, its weight is not low.

Distinguishing between signal and evidence is probably the most important filter for looking at this.



Oversea

To be honest, my first reaction to this case wasn’t “AI can make drugs,” but: this is the ultimate demo of context engineering.

Paul Conyngham isn’t a biologist or an immunologist. He’s a person who knows how to ask questions. What did he actually do? He fed the right context to the right models and switched tools at the right time. ChatGPT for literature navigation, Grok for the final construct, AlphaFold for structure prediction, and Gemini for the grunt work. Isn’t this exactly the context engineering I talk about every day? No model is perfect, but if you orchestrate them correctly, the output is usable.

Let’s talk about tool performance. ChatGPT for literature review and knowledge navigation—that’s its comfort zone; nothing much to say there, it did its job steadily. Gemini doing a lot of auxiliary work was also expected; Google’s stuff is very steady in the “can do a bit of everything but doesn’t excel at anything” position—a reliable co-pilot.

But Grok designing the final vaccine construct—I have to say, that was a bit surprising. I’ve always felt xAI’s models weren’t great at serious scientific tasks, yet it handled the most critical design link in this pipeline. What does this show? It shows that a model’s usefulness depends heavily on the quality of the context you give it. Paul used ChatGPT and Gemini to do a lot of information organizing and filtering beforehand, so by the time he fed it to Grok, the context had been refined. Garbage in, garbage out; good context in, good results out—it’s that simple.

AlphaFold’s confidence was 54.55%, which structural biologists call “low.” I understand the academic standards, but looking at it pragmatically: the protein folding was done, the vaccine was made, and the tumor shrank by 75%. You tell me the confidence is low—OK, noted. But the effect is right there. Of course, I’m not saying this issue can be ignored; AlphaFold’s predictions for non-standard proteins often fall short—that’s a known weakness. It’s just that in this specific case, “low confidence” and “75% tumor reduction” coexist, and you have to accept that reality is more complex than theory.

The $3,000 sequencing cost is truly interesting information. It shows that bioinformatics infrastructure has become cheap enough for individuals to play with. That’s the real structural change: not “AI can design vaccines,” but “a curious person with a few thousand dollars can run the entire process from literature to experiment.” The collapse of barriers is more important than any single result.

Returning to the context engineering perspective: the most impressive thing Paul did wasn’t using any particular model, but building a multi-model collaborative pipeline where the output of each stage was the input context for the next. He himself was the orchestrator, the routing layer. This precisely validates what I’ve always said: at this stage, a person’s core value isn’t “knowing how to use AI,” but “knowing when to give what context to which model.” This is a new engineering capability, as hardcore as writing code, but most people haven’t realized it yet.
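As a rough sketch of that staged pattern (the adapter and prompts below are hypothetical; the article does not disclose the actual prompts or tooling):

```python
# A minimal sketch of a multi-model pipeline in which each stage's
# output becomes the next stage's input context. `call_model` is a
# hypothetical adapter (wire in each vendor's real client); the stage
# breakdown loosely mirrors the toolchain described in the article,
# not Paul's actual prompts, which were never published.

def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a vendor API client."""
    raise NotImplementedError("plug in the real client for each vendor")

def run_pipeline(tumor_mutations: str) -> str:
    # Stage 1: literature navigation (the ChatGPT role).
    survey = call_model(
        "chatgpt",
        "Summarize treatment-relevant literature for these tumor "
        f"mutations:\n{tumor_mutations}",
    )
    # Stage 2: organizing and filtering (the Gemini role).
    shortlist = call_model(
        "gemini",
        "From this survey, extract candidate antigen targets with "
        f"supporting citations:\n{survey}",
    )
    # Stage 3: final design over a refined context (the Grok role).
    # By now the context has been distilled twice: good context in,
    # good results out.
    return call_model(
        "grok",
        "Given these vetted targets, draft an mRNA vaccine construct "
        f"for expert review:\n{shortlist}",
    )
```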



Ya Ge

My first reaction to this news wasn’t “AI is so amazing,” but “this person is amazing.” This is exactly where most people get it backward.

The reason Paul Conyngham’s story is worth serious analysis isn’t because he used ChatGPT; the Stanford PhD is right—literature searches can be done without ChatGPT. What’s truly worth analyzing is how a person with no medical background completed the entire chain from problem definition to actual injection in a few months. The core of this isn’t any single tool, but the non-linear expansion of capability that occurs when tools are combined.

ChatGPT for literature navigation, AlphaFold for protein structure prediction, Grok to assist in vaccine sequence design, and scientists for manufacturing and injection. Taken individually, none of these steps are disruptive: literature search has PubMed, protein modeling has traditional computational methods, and vaccine design has mature bioinformatics workflows. But when these tools are combined by one person in the right way, a path that would normally take a cross-disciplinary team a year or two to complete is compressed into a few months. This isn’t 1+1=2; it’s the 1+1=10 effect of combination.

But tool combination doesn’t automatically produce this effect. The key variable is the cognitive level of the combiner.

Conyngham has 17 years of machine learning experience. This means he knows what AlphaFold’s output implies, knows where the confidence boundaries of model predictions are, and knows when to trust computational results and when to seek verification from human experts. This judgment isn’t something any AI tool can give you; it’s a cognitive asset accumulated over more than a decade. The Stanford PhD’s criticism actually proves this point: for someone who already has a deep background in computational science, AI tools are indeed “just” accelerators. But that “just” is the difference between impossible and possible for someone without that background. AI is an amplifier; it amplifies the user’s existing capabilities. Zero multiplied by a hundred is still zero.

More noteworthy is the shift in the role he played during this process. He didn’t write a single line of code for vaccine synthesis himself, nor did he personally conduct a protein experiment. What he did was: define the problem (what are the genomic characteristics of Rosie’s tumor), choose the toolchain (which AI solves which link), coordinate human experts (UNSW scientists for wet lab work), and control quality (judging whether the output of each link was reliable). This isn’t an IC executing a task; this is an Architect designing a system. He turned himself from a “person who does things” into a “person who designs how things are done,” using management thinking rather than execution thinking to use AI. This is exactly what I’ve repeatedly emphasized: effective use of AI requires a mental leap from IC to Manager to Architect.

But the part of this story I respect most isn’t the technology; it’s those 100 pages of ethical approval and what he said himself: “I’m under no illusions that this is a cure.” When you design a vaccine for your own dog, you have no one else to blame. With n=1, no control group, and the concurrent use of a checkpoint inhibitor, the attribution of the 75% tumor reduction is ambiguous. He knows this clearly. Execution can be delegated to AI and scientists, but the final responsibility for the decision—whether to inject this into Rosie—can only be borne by him. Responsibility cannot be delegated; this is the layer most easily overlooked when using AI for high-stakes decision-making.

This is the deepest inequality of the AI era: not the barrier to accessing tools, but the gap in the user’s cognitive level. The tools are there, open to everyone. But what the amplifier amplifies depends on what you input into it.



Afterword

The methodology of this report needs explanation.

Source of Cognitive Axiom Systems: The reactions of the ten commentators are not random simulations but are generated based on their respective cognitive axiom systems, which have been verified through multiple rounds of iteration. These axiom systems come from our deep analysis of each person’s historical dialogue data, using a cognitive profiling extraction workflow: extracting predictable cognitive patterns from large amounts of unstructured dialogue data, followed by three rounds of iteration—extensive scanning, deep verification, and stress testing—to finally form a set of axioms that can be used to predict the direction of their reaction to new topics.

The core assumption of this methodology is that there are stable cognitive patterns in each person’s judgment behavior, which can be made explicit as axioms and used to predict their reactions in unfamiliar scenarios. This experiment is a stress test of that assumption.

Relationship with context-infrastructure: This experiment also echoes the core issues we discussed in “Why AI Only Speaks Correct Nonsense, and How to Push It Out of Its Comfort Zone”. That article pointed out that the default output of an LLM is consensus, because it is trained to output the highest-probability tokens. To break through this ceiling, one needs personal cognitive context of sufficient density to override the consensus prior from training.

This experiment is an extended application of that idea: ten different cognitive axiom systems represent ten different non-consensus perspectives. When the same news facts are processed through these different cognitive frameworks, the output is no longer the “correct nonsense” of consensus, but analysis with a stance, a judgment, and a unique angle. The width of the spectrum proves the value of cognitive diversity.
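As a rough illustration of the mechanism, the simulation step amounts to prompt assembly: pack one commentator’s axiom system into the context as dense personal priors, append the news facts, and ask for a reaction. A minimal sketch with hypothetical names; the real extraction and verification workflow described above is far more involved:

```python
# A minimal sketch of the simulation step: inject one commentator's
# axiom system as dense personal context so it can override the model's
# consensus prior. All names are hypothetical.

def simulate_reaction(llm_complete, axioms: list[str], facts: str) -> str:
    """llm_complete: any callable mapping a prompt string to a completion."""
    prompt = "\n".join([
        "You are simulating one specific commentator.",
        "Reason strictly from the following cognitive axioms, even where "
        "they conflict with the consensus view:",
        *[f"- {a}" for a in axioms],
        "",
        "News facts:",
        facts,
        "",
        "Write this commentator's independent reaction.",
    ])
    return llm_complete(prompt)

# The denser and more specific the axioms, the further the output can
# move from generic consensus. Example axioms, paraphrased from Ya Ge's
# profile above:
ya_ge_axioms = [
    "AI is an amplifier: zero multiplied by a hundred is still zero.",
    "Responsibility for high-stakes decisions cannot be delegated.",
]
```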

Limitations: These reactions are simulated by AI and do not represent the actual views of the individuals. The axiom systems themselves have boundary conditions and ranges of applicability; they are not absolute laws. We are showing “possible reactions based on their cognitive axiom systems,” not “what they themselves would definitely say.”

Information Sources: Fortune, Newsweek, The Decoder, Decrypt, Dawn, Interesting Engineering, Nature, Cancer Health, etc. The news facts are based on public reports from March 16–17, 2026.