Karim Galil:
Welcome to the Patientless Podcast. We discuss the Good, the Bad, and the Ugly about Real World Data and AI in clinical research. This is your host, Karim Galil, co-founder and CEO of Mendel AI. I invite key thought leaders across the broad spectrum of believers and defenders of AI to share their experiences with actual AI and Real World Data initiatives.
Karim Galil:
Hi everyone, and welcome to another episode of the Patientless Podcast. Today's guest is from the venture capital world. We don't usually invite a lot of guests from the venture capital world, but he has a very interesting background and a very interesting portfolio of companies, and I thought this was going to be a very interesting conversation about the future of AI and where the industry is heading. Today's guest did his undergrad at Cornell in biology, and rather than being in the wet lab, today he is actually in the VC world. He started as an associate at Lux Capital, but today he's a general partner at Jazz Ventures with very interesting investments, such as Gordon Health. Today's guest is John Lee, partner at Jazz Ventures. John, thank you for making it to the episode.
John Lee:
Thank you. I appreciate it. Thanks for having me, Karim.
Karim Galil:
John, what was the journey from biology to the venture capital world? What happened?
John Lee:
I ask myself that every single day. I was studying computational biology in undergrad, but at the same time I was doing internships more around health systems, and around how technology can impact health systems. The one thing that I noticed was that oftentimes there were these really interesting scientific discoveries in the lab, but frequently they weren't making it out into the real world for a number of different reasons. And so, I was always interested in the application of technology rather than just the basic discovery of science. And so, when I was starting to think about what I could do in between undergrad and joining a PhD program, I thought venture capital would be an interesting way to explore the things I was interested in: how do you get breakthroughs in science really out there in the world? It has been over a decade now. I'm probably not going back to grad school anytime soon. But I do think that being in the VC world is an interesting opportunity to really push out these core innovations that sometimes do get stuck in the lab, and I think it's one of the most effective ways to do so.
Karim Galil:
Is that what Jazz is investing in? Are you guys investing only in healthcare? What's your thesis around investments at Jazz Ventures?
John Lee:
Yeah. We do have a lot of interest and focus in healthcare, but it's not the only stuff that we invest in. We have a really broad mandate to invest in companies that expand the boundaries of human performance. And so, we particularly like looking at breakthroughs at the intersection of digital technology and neuroscience that can impact human experience cognitively. This has led to lots of different companies in our portfolio where we think that things like consumerization can really impact healthcare delivery pretty positively. We think that human-in-the-loop AI can augment productivity in lots of really interesting ways. And so, we really look at everything from the enterprise, to healthcare, to therapeutics, to even consumer products. We have a pretty broad mandate, but largely set around this idea of how do you scale productivity, how do you enhance and expand human potential in a lot of different directions.
Karim Galil:
A lot of these companies have very solid technology, and they spent years basically in the R&D process. How do you guys as a venture capital firm evaluate the technology, or evaluate the secret sauce, say for a therapeutics company, for example?
John Lee:
Yeah. It's a great question. Prior to Jazz, I spent about a decade helping to build a firm called Osage Partners, which focused on academic innovations and how you [inaudible 00:04:05] into companies. So I'm very familiar with the topic of how to evaluate technology. Frankly, the simple answer is that most times the technology comes secondary to the team that's actually building and rolling out the technology. Oftentimes, the go-to-market and the form factor that you take with the technology are more important than the technology itself. You know, that being said, occasionally there are moments where the superiority of the technology can be the biggest competitive advantage, which is actually often the case with things like therapeutics where there's very strong IP around it.
John Lee:
I think it works a little bit differently when you start talking about digital technology. Oftentimes software doesn't have a strong [inaudible 00:04:47] around it, and so it's a little bit of a different go-to-market, and the packaging of the process matters quite a bit more. I would say that AI is a great example of this. There are lots of different methods out there for how you get a slightly better neural network, how you get slightly different algorithms that are somewhat better than one another. But in reality, the most important thing is how you package that stuff into a complete product.
Karim Galil:
You just opened a big can of worms now. There are tons of questions in my head. Let's start with this, all right: how can an AI company... I hear you, it's really hard to patent AI today. It's just hard. And once you've published something, it becomes public knowledge already, and anyone can just get the paper and work on the same model. How can a company build a moat? How can a company actually build a defensible software business? Is it only the packaging, or is there any kind of network effect you have seen from a customer perspective, or from a data perspective?
John Lee:
Yeah. I mean, I would say that the classic answer for this would be that you want to build some sort of data moat, some proprietary way of collecting data that can feed your algorithm, that nobody else really has the ability to replicate. I think this is particularly relevant when you're talking about neural networks that rely on large sources of data. But, that being said, it's always hard to compete against the Googles, Facebooks, Microsofts, Amazons of the world, because they're always going to have more data than you. In the world of neural nets, I actually think it's very difficult for an individual startup to have a significant advantage there. The advantage often comes from nimbleness and the ability to target markets that are perhaps too small an opportunity for those large companies to go after, and to build a moat around things like brand, users, and features, and then build from there.
John Lee:
When you talk about things like what you're doing, where you're talking about neural symbolic systems, a lot of the field really isn't quite there yet. There's an advantage in experience, and breadth of experience, in being able to design those symbolic systems, where only a handful of people will have that perspective, or that point of view, or the ability to design those systems. I think that when we're talking about neural symbolic systems as opposed to neural nets, there are some higher inherent barriers to entry, because you have to be able to design those expert systems, and often that expertise is limited to just a handful of people in the world.
Karim Galil:
Then at that point, the team becomes one of your main competitive advantages. And the problem that we have seen, and I'm not sure how you work with that with your portfolio companies, is that great AI talent tends to gravitate towards the Googles and the Amazons; not for the salary, but because they can get to work with tons of data from day one. For them, it's like: hey, if I go to a startup, my career is going to take a downturn, because I don't know how much data they have or how much data they can get in the next year or two. Then hiring becomes a really challenging problem for a lot of companies who want to build, say, neural symbolic systems, for example. Is that a pattern that you're seeing in healthcare? And if so, how did your companies work around that?
John Lee:
Yeah. I mean, I would say that this applies not only to AI, but to engineers in general. Right? You're always going to have a lot more safety, and a lot more comfort, and probably a lot more interest from engineers to work at those large companies, because they are a lot more attractive for a number of different reasons. That being said, I do think that technology rapidly commoditizes, and so when it comes to neural nets, I would say maybe five years ago there was probably a pretty substantial AI talent shortage, where there were really only a handful of experts that you could draw upon, and there was a lot of competition for them. I would say since then there's been a lot of commoditization. You can see that because you're no longer seeing the massive seven-, eight-figure salaries going to AI engineers the way you did five years ago.
John Lee:
And so, I think technology naturally commoditizes. There have been lots of AI platforms that have come out since then that make it a lot easier to work with neural nets and work with deep learning without the need to go and design the algorithms yourself. Really, employment is the key issue there. In reality, for healthcare and pharma companies, yes, they're never going to be able to recruit a large number of those algorithm designers, but they're going to benefit from the commoditization of a lot of these technologies. In some ways, I think it actually matches up naturally with the risk tolerance they have anyway, where you probably want to start absorbing those sorts of technologies once they're ready for prime time.
John Lee:
I think that when we get into hybrid AI systems, and these novel architectures that are starting to emerge, a lot of those have higher day-one utility to pharma companies, and I would guess that the engineers working on those would want to start in pharma or on harder problems initially, because their algorithms suit those problems better. And so, from an AI shortage perspective, or a talent perspective: one, it commoditizes very quickly; and two, for the next generation of problems to be solved by these hybrid AI systems, the engineers are going to gravitate towards those industries, and pharma and these other healthcare organizations are going to benefit from it.
Karim Galil:
So, pharma companies pretty soon are going to be technology companies, actually. And we're seeing that big pharma is aware of that. They're already recruiting hundreds, if not even thousands on some occasions, of AI engineers and AI talent, because they're quickly realizing that... I mean, [inaudible 00:10:33] now says we are a data and a pharma company, we're not just a pharma company. But a lot of our audience is not super tech savvy, and we jumped into terms like the neural symbolic approach. Can we explain to our audience what the difference is between neural nets, symbolic AI, and a neural symbolic approach, and what the benefits of each one of them are?
John Lee:
Yeah. I like to somewhat think of this from a psychological perspective. If you think about levels of understanding in, say, animals, I would say that there are probably a few different levels, and people like Judea Pearl comment on this. The first is more of a sensory and observational level, or an association level, where you're taking in observations and you're drawing conclusions based off of rough correlations. I would say that this is probably where neural nets are today. It's basically saying all the answers are within the data, and with correlation you can find every single answer, which, just by saying that statement, obviously has flaws, because a lot of those correlations tend to be spurious. But I think that's where we basically are with neural nets today.
John Lee:
A level above that, or a tier above that, would be the ability to intervene, or do, based off the associational observations that you make in an environment. I think that's where neural symbolic systems come in, where symbolic systems are these expert systems or knowledge systems where you have rules associated with what you view as knowledge in the world. And so, you map out these systems of knowledge and then you apply things like big data or correlative systems like neural nets to have a better understanding of what's going on. For example, you could ask: what would X be if I do Y? You can make those types of conclusions.
John Lee:
A step beyond that, where you start getting to strong AI and artificial general intelligence, is the ability to think counterfactually, or to imagine within a system. I don't think we're quite there. I think that things like neural symbolic systems are really a step toward that and will probably be the predecessors of truly strong AI.
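As a rough sketch of the hybrid John is describing, the toy Python below layers a few hand-written symbolic rules on top of a stand-in statistical score (all names, thresholds, and rules here are hypothetical, not any particular company's system), so the conclusion comes back together with the facts and rule that produced it, which is the "why" that pure correlation can't give you.

# Minimal neural-symbolic sketch (hypothetical example, not any company's system).
# A statistical "perception" layer produces a probability; a symbolic rule layer
# applies explicit, human-written domain knowledge on top of it.
import math

def neural_score(patient):
    """Stand-in for a trained neural net: returns P(condition) from raw features.
    Here it is just a toy logistic function over a single biomarker."""
    weight, bias = 2.0, -1.0                      # pretend these were learned from data
    z = weight * patient["biomarker"] + bias
    return 1.0 / (1.0 + math.exp(-z))

# Symbolic layer: explicit rules written by domain experts.
# Each rule is a (condition over known facts, conclusion) pair; the first match wins.
RULES = [
    (lambda f: f["p_condition"] > 0.8 and f["on_drug_x"],
     "possible interaction -- review before intervening"),
    (lambda f: f["p_condition"] > 0.8,
     "recommend confirmatory test"),
    (lambda f: True,
     "no action"),
]

def decide(patient):
    facts = {"p_condition": neural_score(patient),
             "on_drug_x": patient["on_drug_x"]}
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion, facts              # the facts are the "why" a clinician can inspect

print(decide({"biomarker": 1.5, "on_drug_x": True}))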
Karim Galil:
Would it be safe to say that a neural net learns by statistical weights, like how statistics work, whereas a neural symbolic system leans more towards learning by facts?
John Lee:
Yeah.
Karim Galil:
If that is the case, it seems like the way to go in healthcare is a neural symbolic system, because I find it hard to imagine a physician working with a neural net and getting: hey, this patient... The chance of death is high within the next three months. They will gravitate to why, and then the system is going to fail to say why. And a physician won't then feel comfortable working with that. I also find it hard for a pharma executive to basically make decisions based on a system that doesn't really match the FDA's way of thinking, which is a very factual, very scientific way. Would that be a safe assumption?
John Lee:
Precisely. I think that's correct. One way to put this is: if you look at a classic thermometer and you see the reading rise, and then you feel that it's getting warmer, a neural net approach can't tell you whether the rising of the thermometer is causing the temperature to rise, or the temperature is causing the thermometer to rise. With a symbolic system, you simply place a rule and say it's obvious that the temperature rising impacts the thermometer. You draw that causal inference, or the causal relationship, and you have a much better understanding of what's going on. And so, when you're talking about pharma, it is very important to know whether the impact that you see due to some sort of molecule is the result of the molecule or of something else. Those causal relationships are really the key to unlocking much more intelligent systems. And it not only applies to pharma and healthcare, it applies really to any industry that has sparse information and that requires true insight and understanding rather than just being able to associate.
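To make the thermometer point concrete, here is a small Python sketch with made-up numbers: measured observationally, temperature and the thermometer reading are simply correlated, and the correlation is identical whichever variable you call the cause; the direction only exists because a rule states it, which is what lets the system answer intervention questions asymmetrically.

# Toy version of the thermometer example (hypothetical numbers).
import random

random.seed(1)
pairs = []
for _ in range(1000):
    temperature = random.uniform(10, 30)             # the true cause
    reading = temperature + random.gauss(0, 0.5)     # thermometer = temperature + sensor noise
    pairs.append((temperature, reading))

def correlation(xys):
    xs, ys = zip(*xys)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in xys)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Correlation is symmetric: the same number comes out whichever column you call "cause".
print(round(correlation(pairs), 3))
print(round(correlation([(r, t) for t, r in pairs]), 3))

# The symbolic layer adds the asymmetry as an explicit rule: temperature -> reading.
def do_set_reading(forced_reading, temperature):
    """Forcing the reading (say, holding a flame to the bulb) does NOT change the room."""
    return {"temperature": temperature, "reading": forced_reading}

def do_set_temperature(forced_temperature):
    """Forcing the temperature (heating the room) DOES change the reading."""
    return {"temperature": forced_temperature, "reading": forced_temperature}

print(do_set_reading(100.0, temperature=21.0))
print(do_set_temperature(35.0))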
Karim Galil:
The question then is how you can start crafting those rules. I mean, if we think of medicine, what are the rules of thinking in medicine? It becomes really hard. How have companies solved that problem? Is the approach to those neural symbolic systems very rule-based, requiring clinicians and experts? Or is it a hybrid? What does it really mean?
John Lee:
Yeah. I think there are multiple approaches to doing this. At the core, the question becomes comfort with how you define causality. That's really the important relationship to suss out here. If you take kind of a historical scientific approach, you steer away from causality, but in reality, as humans, we probably assign a lot more causality than correlation, and probably do it correctly in most cases. There are a number of ways to do that. One way is to have experts designing the systems; they have a better sense of what is causal and what is correlative. It's going to be subjective, but you involve lots of experts in the design and then you create it. I think that's one way.
John Lee:
I think Bengio had a paper recently about how you could do this within neural nets, where you statistically identify and suss out causal relationships, or what are kind of pre-causal relationships. And so, I think there is a statistical approach here. Judea Pearl also speaks a lot about how you can define those causal relationships at scale. But ultimately, I think this is the great unsolved problem when you're talking about neural symbolic systems: how exactly do you create, at that scale, those structural ways to do things like semantic reasoning, or to create that understanding?
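One well-known statistical route, in the spirit of Pearl's work (this is a toy illustration with invented numbers, not the specific paper mentioned above), is backdoor adjustment: if the stated causal graph says a confounder Z drives both treatment X and outcome Y, then the interventional effect P(Y | do(X)) can be recovered from purely observational data by averaging over Z.

# Toy backdoor adjustment (illustrative only; all probabilities are made up).
# Causal graph: Z -> X, Z -> Y, X -> Y. Z is a confounder.
import random

random.seed(0)

def sample():
    z = 1 if random.random() < 0.5 else 0
    x = 1 if random.random() < (0.2 + 0.6 * z) else 0
    y = 1 if random.random() < (0.1 + 0.3 * x + 0.4 * z) else 0   # true effect of X on Y is +0.3
    return x, y, z

data = [sample() for _ in range(100_000)]

def p(event, given=lambda r: True):
    rows = [r for r in data if given(r)]
    return sum(1 for r in rows if event(r)) / len(rows)

# Naive observational contrast -- biased upward, because high-Z cases get X more often.
naive = (p(lambda r: r[1] == 1, lambda r: r[0] == 1)
         - p(lambda r: r[1] == 1, lambda r: r[0] == 0))

# Backdoor adjustment: P(Y=1 | do(X=x)) = sum over z of P(Y=1 | X=x, Z=z) * P(Z=z)
def p_do(x):
    return sum(p(lambda r: r[1] == 1, lambda r: r[0] == x and r[2] == z)
               * p(lambda r: r[2] == z)
               for z in (0, 1))

adjusted = p_do(1) - p_do(0)
print(f"naive difference:    {naive:.3f}")     # noticeably larger than 0.3
print(f"adjusted difference: {adjusted:.3f}")  # close to the true +0.3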
Karim Galil:
You talked about and touched on packaging and go-to-market. What are the successful models you have seen as an investor for an AI company to go to market, especially in the healthcare sector? What kind of business models and what kind of distribution channels work?
John Lee:
Yeah. I mean, there's no perfect answer here, but I would say that oftentimes it's no different than any other successful company that packages up a product. I view AI as really kind of a feature in the stack; it's a way to make things a lot better. You really have to focus on the things that actually improve and give you an advantage. And so, if there's a meaningful improvement for the end user, I think that that's an appropriate place to really apply AI. It just goes back to what is good product design and what is a good product.
Karim Galil:
There's a very interesting blog post that [inaudible 00:17:08] had, which is that investing in AI is not like investing in software. It has different gross margins, and it's more like investing in a pharmaceutical company, where you need to expect two to three, if not even more, years of pure R&D, with no commercial activities, before the company has something that is significant enough to take to the market. A great example is Atomwise. Atomwise is now cutting really big deals, I mean, talking about billions of dollars every few months. But that took them what? Around four or five years before they got to that breaking point. How do you guys at Jazz think of that? Do you think this is a patient investment? Or do you tend to do more investments once the company has passed that R&D threshold? What's your thesis around the timing of investment in an AI company?
John Lee:
Yeah. You know, I would say the [inaudible 00:17:56] article was probably a bit more focused around software tools, enterprise software, and B2B tools that are primarily selling AI as a service. In those situations the gross margin can be quite low, and the build, compared to a traditional tool, may take longer because you need the data-gathering portion of it. When you compare that to examples like Atomwise and some of these drug discovery companies, interestingly enough, more and more I think pharma companies have been leaning on purchasing those services and doing drug development deals with companies that don't yet have a ton of data; rather, they have an interesting approach.
John Lee:
The thing about Atomwise and some of these first-generation drug development companies is that they're using these neural nets, and there's a big hypothesis that this could lead to better ADME, or better drug selection and candidate selection. It could be possible, but it's probably more investing around hope than reality. That's partly because the feedback cycle in pharma is just way too long to actually tell.
John Lee:
And so, I don't think we have any issue investing in companies that do have to, or want to, spend the time to build something foundational. And then the question becomes: what are the near-term and midway milestones that are indicative of future success? That's slightly different for every single company. But for a drug development company it's really hard, just because the feedback loop is so long.
Karim Galil:
Any interesting investments that you have made lately that you want to share with us from your portfolio today?
John Lee:
Yeah. Speaking of drug development, and something that is very relevant to this conversation, we are an investor in a company called Genesis Therapeutics. In a lot of ways, they're an AI-powered drug development and drug discovery company. Their philosophy is that the first generation of drug development companies were using computer algorithms that were really ported over from things like ImageNet, and that look at molecules at a pixel-by-pixel level but don't have a true understanding of what's going on at a physical level. And so, the team over there invented something called PotentialNet, which is a graph convolutional neural net. It's basically using a physics-based model with knowledge and understanding of how proteins fold, and then designing molecules from that point.
John Lee:
In a lot of ways, it's very similar to a neural symbolic approach, given that you are starting with a base set of rules, limitations, and libraries, and then optimizing using neural nets to find the ideal molecule. I'd say approaches like that are just really exciting, because it is the next generation, and there seem to be, at least in early data, some real positive impacts when it comes to ADME optimization and selecting a much better molecule.
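For readers who haven't seen a graph convolutional net, here is a minimal generic sketch (an illustrative GCN-style update over a made-up four-atom fragment, not PotentialNet itself): atoms are nodes, bonds are edges, and each layer lets every atom mix in information from its bonded neighbours before the whole molecule is pooled into a single vector that a property predictor could consume.

# Generic graph-convolution sketch over a toy molecular graph (illustrative only;
# this is not Genesis Therapeutics' PotentialNet, just the basic GCN idea).
import numpy as np

# Toy 4-atom fragment: atoms 0-1-2-3 bonded in a chain; atom types one-hot encoded (C, O, N).
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
atom_features = np.array([[1., 0., 0.],   # C
                          [0., 1., 0.],   # O
                          [1., 0., 0.],   # C
                          [0., 0., 1.]])  # N

def graph_conv(features, adj, weights):
    """One message-passing step: each atom averages itself plus its bonded
    neighbours, then applies a linear map and a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    degree = a_hat.sum(axis=1, keepdims=True)
    messages = (a_hat @ features) / degree        # mean over the neighbourhood
    return np.maximum(0.0, messages @ weights)    # ReLU non-linearity

rng = np.random.default_rng(0)
w1 = rng.normal(size=(3, 8))                      # in a real model these weights are learned
w2 = rng.normal(size=(8, 8))

h = graph_conv(atom_features, adjacency, w1)
h = graph_conv(h, adjacency, w2)
molecule_embedding = h.mean(axis=0)               # pool atoms into one molecule-level vector
print(molecule_embedding.shape)                   # (8,) -- would feed a property predictor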
Karim Galil:
How's the investing world going virtual? I mean, take a company like the one you talked about right now, Genesis, this is very sophisticated technology. I'm assuming you need to spend a lot of time with the founders, and getting to know more about them, and about the tech. How are you guys able to do those kinds of interactions today in a world where everything is on Zoom?
John Lee:
Yeah. It's interesting; there's somewhat of a dichotomy happening. I would say in one way things can move a lot faster, because the barrier to meeting, and also the expectation of how well you get to know somebody, have gone down. And so, if the barriers have gone down and the expectations have gone down, you can essentially work through a deal a lot quicker. I think we're starting to see this in the pace of these deals. I think there were probably a few months of hesitation where a lot of firms were not sure what was going on and probably hit pause, but then they realized that they're being super productive and spending time with lots of companies. I actually think that, in terms of operations, it is ideal for the VC world to operate on this model.
John Lee:
I think the downside is it's harder to get to know somebody very well, and to have, I would say, a lot more attention and time focused on a specific relationship. If things are moving faster, you have a shorter period of time to get to know somebody before a deal is done. Additionally, if you're not meeting someone face-to-face, there are some clues from body language that you're probably missing that may have an impact later on, but it's unclear what the feedback loop is. I would guess that in any situation like this, fraud probably increases over time. And so, it'll be interesting how it plays out. I would say that productivity has certainly gone up, but maybe the depth of diligence, or the ability to do deep diligence, has gone down.
Karim Galil:
Should we expect another [inaudible 00:22:43] coming out soon from this pandemic?
John Lee:
I hope not.
Karim Galil:
Going back to that AI piece of the equation, when do you really think we're going to see a true impact of AI in healthcare? I mean, is it the next five years? You talked about the feedback loops. You said today we're investing in hope, because by the time we know whether those drugs are going to work or not, it's not going to be next week or next year. Right? It's very long feedback loops, as you said. So, when do you really think we're going to see a different healthcare system, a system that is driven by artificial intelligence and by data insights rather than by the subjective experiences of different stakeholders?
John Lee:
Yeah. I think it's happening now, and the reason I think so is that Covid has really accelerated healthcare innovation by 10 or 15 years, because rather than sticking to models that were slowly failing, you had a realization from lots of providers, payers, and stakeholders that the systems need to change now. The benefit of a lot of healthcare delivery going digital, or becoming digitally optimized, is that you can inherently collect a lot more interesting data. And so, I do think that the transition is definitely happening. I think that it bleeds into everything from infrastructure, to how you keep your records, to EHRs and the standardization of those EHRs, to what you can do with the data once everything is standardized, and then to new and novel types of information that you can then start analyzing with artificial intelligence. I think it gets quite interesting.
John Lee:
We've seen many different models in telehealth, everything from ABA therapy delivery to primary care, and for all of those things you're going to start to be able to automate certain parts. It's a question of how much you can automate. But it's unquestioned that you will have a lot more information to build a lot more interesting things. And so, I think that for companies that are oriented around building systems that can deliver a lot more with automation, so neural symbolic companies and hybrid AI companies, it's such a fascinating time, because now you actually have the data and the willingness from stakeholders to move. And so, I'm really excited about what's going on. I think that there's just a tremendous opportunity right now.
Karim Galil:
Speaking of Covid, I always like to end the podcast by asking: if you could Zoom call any living person today, who would it be and why?
John Lee:
That's a very interesting question. We spoke about him quite a bit. I would say that Judea Pearl is probably someone I would love to talk with on a Zoom call, just given that I think a lot of his work around causal reasoning and causal inference is going to become very relevant in the very short term.
Karim Galil:
That was a great choice. I recommend reading a lot of his work, because it's just a different way of thinking and a paradigm shift in how you approach AI problems in general. John, thank you so much for taking the time for this. I'm going to borrow your biology analogy for the difference between neural symbolic AI and neural nets. But again, thank you so much for taking the time for this podcast. It was a pleasure having you as a guest here.
John Lee:
Yeah. Thanks for having me. Really appreciate it.