A Wearable MRI Machine
Talking telepathy with Mary Lou Jepsen
Interview by IFTF Research Affiliate Scott Minneman for volume three of Future Now,
IFTF's print magazine powered by our Future 50 Partnership.
Spock, the Vulcan character in Star Trek, often performed a “Mind Meld” with other sentient beings, creating a telepathic link to exchange thoughts or probe minds. Science fiction? Not for much longer! Technology pioneer and luminary Mary Lou Jepsen’s new venture aims to bring this mind-reading superpower out of neuroscience labs and into everyday reality—an affordable reality, too, using optoelectronic components and production processes developed for the displays in our smartphones. However, this powerful ability comes with dilemmas: Who gets access to whose thoughts, and for what purpose? How can we safeguard ourselves against possible abuse?
The path to telepathy runs indirectly through Jepsen’s exploration of a more cost-effective medical diagnostic tool: essentially an affordable alternative to Magnetic Resonance Imaging (MRI) machines. A current MRI installation costs millions of dollars for hardware and a shielded room, but such equipment is often key to the detection and treatment of cancer, heart disease, and even mental illness. Global inequities in access to MRI technology have created a huge chasm between the haves and the have-nots. Simply put, those with access to the technology get more accurate diagnoses and can seek appropriate treatment; those without are more likely to die prematurely.
MRI-quality Imaging in a Beanie
After years of working in the display technology field for such giants as Intel, Google, and Facebook (in the Oculus division), Jepsen broke away to found Openwater. The company is pioneering ways to create MRI-quality, high-resolution images by taking advantage of how the human body transmits and scatters infrared light.
What if an MRI-quality brain scan could be achieved by wearing something that looks like a knitted ski cap? Or a knee joint’s internals could be examined by a device indistinguishable from an elastic brace? Such possibilities would not only eliminate trips to the hospital lab, but also allow individuals and their medical professionals to gather images on a more continual basis. Working out of a Quonset hut in Sausalito, Jepsen’s team is doggedly pursuing this breakthrough, from all-hands meetings every morning to occasional late-night surges, blending start-up hustle and hard-core physics in equal measure.
Jepsen’s unique background includes a Sc.B. in electrical engineering and a doctorate in optical physics from Brown University, plus early exposure to computational holography while getting her Sc.M. at MIT’s Media Lab. This was followed by a string of novel projects spanning start-ups, academia, and big industry where, along the way, she pioneered tiny optoelectronics at MicroDisplay, served as CTO for the revolutionary One Laptop Per Child (OLPC) effort at MIT with Nicholas Negroponte, and launched the Pixel Qi spin-out from OLPC. She also conceived of a wildly creative (but never realized) technology to project video on the Moon by re-directing sunlight—Moon TV. A named inventor on more than 200 patents, she grew up as the daughter of a mechanic, and strongly believes that most laboratories aren’t much different from an auto shop: “Most of the time you’re trying to find the right tool, or make the right tool,” she said.
Jepsen is painfully familiar with the need for affordable MRI. While pursuing her PhD, she was diagnosed with a brain tumor—one that likely would’ve been detected sooner were it not for the expense of brain imaging at the time. Post-op, she rapidly finished her degree, but the incident forced her to give up her most creative endeavors for jobs that provided health insurance and covered the drugs she’s taken to survive since then. Her near-fatal experience doubtless sparked a lasting fascination with neuroscience and medical imaging—many brain tumors aren’t discovered until they’re quite large, simply because it’s too costly to routinely look for them.
“Affordable medical imaging is so critical to quality care, and it’s unfortunately very rare in many parts of the world. There are something like 40 MRI machines per million people in the U.S., but that number drops to two MRI machines per million in Mexico, and in Africa it becomes even more sparse, with MRI installations primarily residing in capital cities. The number one health expenditure in the world is brain disease; it incapacitates so many people, and takes a long time to properly diagnose and treat. Better diffusion of diagnostic capabilities is great, but advanced health care technology has been a big contributor to soaring health care costs in the U.S. Openwater could really change that landscape.
“I get an odd comment from medical professionals from time to time, suggesting that easy and inexpensive medical imaging will amplify what some see as an Internet-fueled mass of people doing self-diagnoses and bothering their doctors with wacky theories about what’s wrong ... As somebody who was very ill for quite a while with an undiagnosed condition that would’ve shown up on an unaffordable scan, I don’t have a lot of patience with this take on things. In fact, my reaction to that critique is pretty unequivocal—if a doctor thinks that the public’s engagement with their own health is a bad thing, then perhaps they chose the wrong profession!”
A while back, Mary Lou had two “a-ha” moments that probably could only happen to someone with her unique academic and technology pedigree. One, the human body is essentially transparent to infrared (IR) light—those wavelengths can penetrate deeply and even go right through many kinds of human tissue. Two, the milky images we get from IR light scattering off of our tissues can be cleared up with a technique from holography. This combination of IR imaging and computational holography lets us view smaller and smaller regions of the brain, deeper and deeper in our heads—eventually discerning the functioning of tiny regions and even individual neurons in the brain.
For the past two decades, scientists have been doing a better job of just that: making temporal images of brain function by watching for minuscule telltale surges of oxygen in blood flow that accompany neuronal activity. Using a time-based variant of MRI, called functional Magnetic Resonance Imaging (fMRI), researchers can analyze neural activity and predict what someone is looking at with increasing accuracy. This work has revealed all sorts of details—we now know that brain activity when we see something and when we merely imagine seeing it is essentially identical.
This is where smartphone technology and holography come in. Contemporary liquid crystal display (LCD) components, driven by the demand for high-density displays in cell phones and virtual reality (VR)/augmented reality (AR) headgear, have resulted in technology that can capture IR wavefronts emanating from the body. Once these LCD components have captured the wavefront, moderately esoteric mathematics (called phase conjugation) can eliminate the cloudiness caused by scattering, yielding clear images of tiny structures deep within our brains.
“You can make a hologram, measure the wave pattern of the wavelength of light in three dimensions, invert that in the screen itself, and then you make your body effectively transparent to light. You can look at the blood flow, the structures, the tumors—using LCDs literally made in the same factories that produce components for cellphones or VR goggles,” Jepsen explained.
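The core of phase conjugation is simple to state: if the scattering medium acts as a nearly lossless, reciprocal transformation of the light field, then replaying the complex conjugate of the measured wavefront back through the same medium undoes the scrambling. Below is a minimal numerical sketch of that idea in Python, using a random unitary transmission matrix as a stand-in for tissue, a common toy model in the wavefront-shaping literature. The mode count, matrix, and variable names are illustrative assumptions, not details of Openwater’s actual system.

```python
# Toy model of optical phase conjugation (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
n = 256  # number of optical modes in this toy model

# Random unitary "transmission matrix" standing in for scattering tissue:
# the QR decomposition of a complex Gaussian matrix yields a unitary Q.
gauss = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T, _ = np.linalg.qr(gauss)

# Source field: light emanating from one small structure deep in tissue.
source = np.zeros(n, dtype=complex)
source[n // 2] = 1.0

# Forward pass: the camera-side detector sees fully scrambled speckle.
speckle = T @ source

# Phase conjugation: replay the conjugate of the measured field backward.
# By optical reciprocity, backward propagation applies T.T (the plain
# transpose), and T.T @ conj(T) equals the identity for unitary T, so
# the speckle "un-scrambles" back into a sharp focus at the source.
refocused = T.T @ np.conj(speckle)

print("brightest speckle grain:", round(float(np.max(np.abs(speckle)) ** 2), 3))
print("refocused peak:", round(float(np.abs(refocused[n // 2]) ** 2), 3))  # ~1.0
```

In Jepsen’s description, the measurement and replay happen optically in the LCD itself rather than numerically, but the cancellation at the heart of the trick is the same.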
This technology will most certainly revolutionize health care, giving millions of people access to medical diagnoses that were previously prohibitively expensive or simply unavailable. But in marshalling LCD technology and algorithms to “read” the brain, this image-capture beanie will reveal not only what’s amiss with brain tissue, but also, quite literally, what we’re thinking.
For over a decade, neuroscientists have been studying this side of the technology. At the University of California at Berkeley, Dr. Jack Gallant and his team at the Gallant Lab made fMRI recordings of grad students watching YouTube videos for hundreds of hours, collected a library of neuronal reactions, and created a “visual dictionary” using data analytics, artificial intelligence (AI), and brain scan data. The fMRI readings measured where oxygen was flowing (i.e., neural activity in the brain), and pattern matching against the stored data could predict what a student was watching. Understandably, the reconstructed image was grainy and low-resolution, but recognizable; upping the resolution would enable one to see what people are thinking about and visualizing inside their brains. The brain’s speech centers could be examined the same way, with fMRI reflecting the words someone is thinking. It’s reverse-engineering the brain.
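The pattern-matching step in such a “visual dictionary” can be sketched in a few lines. The hedged Python example below uses synthetic data: store one characteristic voxel pattern per known clip, then identify a new scan by its highest-correlating entry. This is a drastic simplification of the Gallant Lab’s actual encoding and decoding models; the array sizes, noise level, and all names are illustrative assumptions.

```python
# Toy "visual dictionary" decoder via correlation matching (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_voxels = 50, 200

# Dictionary: one characteristic voxel-activity pattern per known clip.
dictionary = rng.normal(size=(n_clips, n_voxels))

def decode(recording: np.ndarray, dictionary: np.ndarray) -> int:
    """Return the index of the stored clip whose pattern best matches."""
    # Pearson correlation of the new recording against every stored pattern.
    d = dictionary - dictionary.mean(axis=1, keepdims=True)
    r = recording - recording.mean()
    scores = (d @ r) / (np.linalg.norm(d, axis=1) * np.linalg.norm(r))
    return int(np.argmax(scores))

# A noisy new scan of a subject watching clip 17.
scan = dictionary[17] + 0.8 * rng.normal(size=n_voxels)
print("decoded clip:", decode(scan, dictionary))  # -> 17
```

The Gallant Lab’s published decoders actually model each voxel’s response to visual features and then invert that model; the dictionary-lookup intuition here is a simplification.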
Not only that, but such technology is apparently read/write capable—with the future potential to “write” changes by stimulating neurons—raising the possibility that thoughts and images that weren’t there before could be “written” to your brain.
The Ability to Read/Write the Brain
The thorny ethical dilemma raised by this revolutionary technology is one of the reasons Jepsen left Facebook to pursue Openwater full time. “Peter Gabriel calls this an ‘Oppenheimer Cocktail,’” she noted. “Exuberance about its potential, but fear of its misuse—what do we do?” The ethical and legal implications were so profound that somebody needed to talk about them, said Jepsen. She felt that she could begin the conversation. “National academies of almost every developed country say that one of the top things we can do as technologists is to reverse-engineer the brain. But nobody’s talking about what happens once we do it.”
Tech Futures Lab connected directly with Jepsen to get her answers to some of the most pressing questions in this space.
How does this read/write technology play out?
Even though we aren’t totally sure and aren’t yet talking in specifics about how these technologies will unfold, it’s likely that our medical diagnosis stuff will come first. That’s because the legalities and ethics are more cut-and-dried—there’s little argument to be had about the benefits of an intelligent bra that can spot breast cancer, but mind reading is a distinctly different animal.
There are lots of incredibly cool upside scenarios. For visual thinkers, the potential of being able to somehow think an idea or an image and have it externalized is incredibly powerful. There’s so much in our heads, and getting it out into the world is a major bottleneck. The learning side, too—what if we can learn new things by ... getting them put into our brains?
But there needs to be an ongoing discussion about the ethics of every one of these emerging capabilities. Can police or military make you wear a cap? What about employers who require you to be capped for a job? Who owns your thoughts? Once you share them, can you delete them? What about filtering? Have you ever thought something you didn’t want to say out loud? We have to make the technology so that it only works when we want to ‘think into it.’ It’s one thing to get somebody to show you what they saw, but what happens as we inch over to telepathy? There are profound issues that crop up once we can implant thoughts and use it to directly change minds and communicate at the level of neuronal activity. The ability to have private thoughts is fundamental.
All these cases need to be considered and answered, and my doctorate isn’t in ethics. We’ve been reaching out to the big thinkers in the area ... because these debates may take longer to develop than the technology itself.
How should we manage these issues?
We’re well beyond the days when ethical debates like this one take place with a little committee sitting in a boardroom. These debates need to happen soon, in a public forum. Just look at how rapidly our expectations and beliefs about privacy have shifted in a very short time. We can’t wait until the results of some cutting-edge research are upon us before we discuss them.
Luckily, we are not the only field with this dilemma. CRISPR-Cas9 (a genome editing tool) is making ethics and policy conversations happen in other communities ... There’s hysteria and strident voices around AI, too …
One fundamental approach we’re taking at Openwater is that we need to keep people in control of their thoughts. You have to want the cap to read your thoughts for it to work. People need to be able to mask their thoughts, and it’s our responsibility to teach them how and make sure our system does it. Unfortunately, like a lot of safeguards, it can probably be eventually circumvented (we aim for this to never be the case but must plan for it regardless), or people will reverse-engineer enough of what we’re doing and sidestep whatever we design.
What stage are you at with the basic science and development work?
We’re not the ones doing the basic neuroscience work, but there are tons of really fascinating studies ... There are some incredible rat maze studies with implanted probes where they can transfer one rat’s knowledge of how to navigate a maze to other rats who’ve never seen the maze. That recent face cell work at Caltech, where they reconstructed face stimuli with little apparent error from just 100 instrumented neurons—they’re amazing findings. You were asking about semantic mapping, earlier—there’s still a lot to learn there, but some early indications are that we’re not as different as one might initially think. It’ll be much easier to do experiments once it doesn’t take time on a multi-million dollar machine.
So we’re approaching a year in—time spent exploring a lot of the basic physics. We’ve been doing experiments on various configurations to see how deep we can get the light to penetrate, what resolution we can manage—crucial performance measures. We’re trying to build and test lots of things in parallel, learn from mistakes and combine solutions so we will be prepared to evaluate trade-offs, make design decisions, and choose particular avenues. We’re aiming to show our working demos to the world, but that will signal our move to start production, and rushing into hardware decisions can be really costly.
Unlike the cutthroat world of cell phones, we’re in an intriguing market with a lot of potential, utilizing the same processes as next-gen VR and AR optoelectronics. These same cutting-edge facilities and processes may give Openwater a jump on imitators. We’ve been filing lots of patents. Imitators will eventually come, and some may not be as careful about ethics. All the more reason for us to have these ethics discussions widely and early, so that a broad set of stakeholders can sign on and agree to a set of rights.
It seems clear that lessons learned from the Oppenheimer era made a deep impact on Jepsen and her fellow researchers. They aren’t taking their responsibility lightly. She concludes, “We have to define what it means to be responsible in developing this, and I can’t completely separate the medical part from the telepathy part, but try to use my skills to make the best change in the world that I possibly can.”
FUTURE NOW—Reconfiguring Reality
This third volume of Future Now, IFTF's print magazine powered by our Future 50 Partnership, is a maker's guide to the Internet of Actions. Use this issue with its companion map and card game to anticipate possibilities, create opportunities, ward off challenges, and begin acting to reconfigure reality today.
About IFTF's Future 50 Partnership
Every successful strategy begins with an insight about the future, and every organization needs the capacity to anticipate it. The Future 50 is a side-by-side relationship with Institute for the Future: a partnership focused on strategic foresight on a ten-year time horizon. With 50 years of futures research in society, technology, health, the economy, and the environment, we have the perspectives, signals, and tools to make sense of the emerging future.
For More Information
For more information on IFTF's Future 50 Partnership and Tech Futures Lab, contact:
Sean Ness | [email protected] | 650.233.9517