Alysson Muotri and his team have been toiling away in the lab for the last year and a half or so, obsessing over bland-looking, pea-sized lumps of cells. Despite their unassuming appearance, lumps like these have taken neuroscience by storm. They’re lab-grown “brains.” Scientists call them brain organoids, and they offer rudimentary 3-D models of the brain’s cortex — the outer layer where complex functions like memory, language, and likely self-awareness play out — created in petri dishes from stem cells.
Muotri and his team aren’t the first to develop brain organoids. The models, sometimes called cerebral organoids, have been around since about 2013. But the University of California, San Diego neuroscientist’s lab and collaborators have been behind several innovative uses of them in recent years. In 2016, Muotri helped author research published in Nature that utilized brain organoids to show how Zika can cause birth defects. He’s also incorporated them into his work on autism.
And earlier this year, he reported preliminary results from using the gene-editing tool CRISPR to create Neanderthalized variations. He and his team swapped in the Neanderthal version of a gene we modern humans have called NOVA1. It’s a so-called master regulator gene that can influence the expression of hundreds of others. And CRISPR-ing the Neanderthal equivalent into the human stem cells that develop into organoids resulted in brain structures that mirrored some of the differences seen in people with autism.
As if that last bit weren’t crazy enough, recently, his group has been gunning to push brain organoids past their current state of simplicity — by hooking them up to a robot that allows the 3-D models to interact with the world around them.
Past the Developmental Plateau
The thing to remember about cerebral organoids is that they aren’t complete brains. They’re very basic versions of the cortex. They don’t even contain all the cell types you’d normally find in a brain, and they often lack any sort of vascular system to deliver nutrients. (Though other labs are tackling that issue.)
They’re so basic, they don’t get past the nine-month mark of brain development, essentially what you’d see with a newborn. And even though having even a simple model of the human cortex to test in the lab is arguably more helpful than using animal models, that simplicity will only get you so far. The closer a model and its networks of neurons and other brain cells are to a mature human brain, the more insight researchers can gain from their experiments.
At first, Muotri and his team thought the developmental plateau might have been a consequence of how they were growing the organoids. Maybe there was something in their method that needed tweaking. “But then we realized that in human neurodevelopment, something similar happens,” he says. “When you have a newborn baby, the networks are quite immature at that stage. And they only refine and become more sophisticated as you add input and output. This is what [we were] missing. These brain organoids [were] not receiving input and output.”
They mulled the problem over in their heads, thinking about ways they might be able to feed information into the little lumps.
They considered implanting the organoids into another animal, like rats. (Something other teams have already done.) “But then you already have a very complex organism in another brain that makes everything more complicated,” Muotri says. Then the team thought about stimulating the organoids with neurotransmitters, the brain’s chemical messengers. “But we really wanted to mimic some kind of learning experience,” he says. “That’s when the robotic experiment came to my mind.”
Getting a Reaction
Here’s the basic setup: A cerebral organoid is hooked up to a computer that’s also linked to a spiderlike, four-legged robot. The computer acts as a translator of sorts, picking up spontaneous electrical signals from the organoids. Next, based on programming from the researchers, the computer assigns a function — in this case, “walk” — to the signal and feeds that information to the robot. Then, the robot walks forward.
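The translation step described above can be pictured as a simple signal-to-command mapping. The sketch below is purely illustrative — the threshold, command names, and robot interface are assumptions for the example, not details of Muotri’s actual system, which works on raw multi-electrode recordings.

```python
# Hypothetical sketch of the organoid-to-robot pipeline: the "computer
# as translator" reads a window of spontaneous spiking activity and
# assigns it a command for the four-legged robot. All numbers and
# names here are made up for illustration.

def classify_signal(spike_counts, threshold=50):
    """Map a window of spike counts (one per electrode) to a robot command."""
    total = sum(spike_counts)
    return "walk" if total >= threshold else "idle"

def drive_robot(command):
    """Stand-in for the robot interface: expand a command into leg steps."""
    moves = {"walk": ["step"] * 4, "idle": []}
    return moves[command]

# Simulated spike counts from three recording windows.
bursts = [12, 20, 25]
command = classify_signal(bursts)   # -> "walk" (total of 57 crosses threshold)
steps = drive_robot(command)        # -> one step per leg
```

In the real system this loop is the bottleneck the article describes: closing it currently takes on the order of 10 seconds, far from the millisecond timescales of a human brain.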
But all of this is easier said than done. None of it happens anywhere near real time. In the beginning, the process took days. Now, Muotri and his team have whittled that down to about 10 seconds or so, though there’s still a lot of time to shave off. “I mean, milliseconds, that’s where we want to get,” he says. “Similar to the human brain.”
Despite this bottleneck, their method seems to be working. They’re using what are called neural oscillations — a dressed-up way of saying brain waves — to measure the organoids’ maturity. Neural oscillations are repetitions of certain electrical activity in the brain that happen at different frequencies, and researchers often measure them with electroencephalography (EEG).
Initially, the team was seeing oscillations at a rate of about 3,000 spikes per minute on EEG readouts — typical of regular neuron cultures and roughly on par with what you’d see in the very early stages of human brain development. But now, the organoids’ oscillations have skyrocketed to 300,000 spikes per minute, “which is what you would expect for a post-natal human brain,” Muotri says. “It comes as a shock to many people that we can get that level of activity.”
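The spikes-per-minute figures above are just a rate: spikes counted in a recording window, scaled to a minute. A toy calculation, with fabricated timestamps standing in for what spike detection on real electrode data would produce:

```python
# Toy illustration of the spikes-per-minute metric the article cites.
# The spike timestamps are fabricated; a real analysis would detect
# spikes in raw multi-electrode recordings first.

def spikes_per_minute(spike_times, duration_seconds):
    """Convert a list of spike timestamps (in seconds) to a per-minute rate."""
    return len(spike_times) * 60.0 / duration_seconds

# 500 spikes in a 10-second window scales to 3,000 spikes per minute --
# roughly the immature rate the team saw at first. The mature organoids'
# 300,000 per minute would mean 100x as many spikes in the same window.
early_window = [i * 0.02 for i in range(500)]
early_rate = spikes_per_minute(early_window, duration_seconds=10)
```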
To push that development even further, the group is wrestling with getting the organoids to respond to their environment.
Muotri likens it to how we pick up on things when we’re babies. “Looking at your mom, your mom smiles at you, you smile back,” he says. “You receive some sort of reward mechanism. Or you start walking, and you realize that you need to have some kind of balance, and you refine your network to achieve that.” That’s what they’re going for with their organoid-computer-robot setup. He says the goal is to engineer situations “where the networks have to rearrange somehow” and the organoid makes the robot walk backward rather than forward.
Though they’re still sorting out how to accomplish this (and are pushing to have it hammered out by next fall), Muotri and his lab have even more long-term goals for their approach.
Eventually, they plan to tie all of this research, which is yet to be published, into work they’ve done to create Neanderthalized brain organoids and versions meant to serve as models of neurological conditions like autism. The hope is to use their robot platform to compare these different types of organoids with each other and with a typical organoid, to see how they differ developmentally.
It’s ambitious work, as so much research surrounding organoids is. And, as with other experiments utilizing this promising new type of brain model, it’s treading into unknown ethical territory.
“If these were liver organoids or gut organoids, I don’t think anybody would be concerned,” says Insoo Hyun, a bioethicist at Case Western Reserve University in Cleveland who isn’t involved in Muotri’s work. But cerebral organoids bring some big ethical questions to the table, specifically because researchers have a pretty good hunch that the cortex, the area of the brain that organoids represent, is key to self-awareness.
Normally, growing a brain organoid in the lab means the model is “basically a disembodied brainlike thing,” Hyun says. He likens it to a sensory deprivation chamber where there are no signals coming in and none going out. “But if you have signals that are coming into it, and you’re doing experiments to see how it reacts to stimuli, then you might be starting to give it a little bit of the necessary support it might need to develop some kind of awareness.”
And that potential for consciousness, to whatever degree it may manifest, makes it difficult to decide what the most ethical approach is to using cerebral organoids. It’s similar to the concerns that are popping up around AI and the potential of machine consciousness.
Muotri, too, has been grappling with these issues and how the public will react, collaborating with ethicists to get their take on the direction his lab is pushing brain organoid technology. So far, he says the experts he’s spoken with don’t seem too concerned.
“They are less worried than I would imagine,” he says. “I would have thought that the public would have a different perception — that these are live, organic networks that are now connected to machines and might dominate the world. But they think that the public will not see that. And I’m unsure. I don’t know.”