Jon Hu's Profile
Activity Feed
This is amazing.
Problems with patient recruitment seem to be getting more and more visibility lately, and yet very little has been done to address them. It’s great that there are companies like Acurian out there trying to get into the game. The industry has seen some great successes with innovative methods of patient identification/recruitment (e.g., Vertex and the CF Foundation), but the majority of trials still use the same traditional methods that have been around since the Stone Age.
My main question here is around scale, and who’s best positioned to achieve that scale. Acurian has 17M people in its database that it’s tracking data for. While this is a great achievement, it still represents only a little over 5% of the US population. That may well be enough for large-scale pivotal trials in common diseases (e.g., heart disease). But those aren’t the trials that struggle with patient recruitment; it’s rare-disease trials that face the bulk of that hurdle, and for them 5% seems low. What’s the optimal size of the database, given that there are marginal costs associated with data collection? How can Acurian or any other company in the private sector build a sufficiently large database? What other resources/assistance do they need from the public sector?
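To make the scale concern concrete, here’s a rough back-of-envelope sketch in Python. The 17M figure comes from the write-up; the prevalence, eligibility, and enrollment rates are purely assumed for illustration:

```python
# Back-of-envelope: how far does a 17M-person database go for a rare-disease
# trial? Only the 17M database size comes from the write-up; everything else
# below is an assumed, illustrative number.

US_POPULATION = 325e6   # approximate US population
DATABASE_SIZE = 17e6    # Acurian's reported database size

coverage = DATABASE_SIZE / US_POPULATION
print(f"Database coverage: {coverage:.1%}")  # ~5.2%

# Hypothetical rare disease affecting 1 in 10,000 people:
prevalence = 1 / 10_000
patients_in_db = DATABASE_SIZE * prevalence
print(f"Expected patients in the database: {patients_in_db:,.0f}")  # ~1,700

# Assumed funnel: share meeting trial criteria, then share who actually enroll
eligible_rate = 0.20
enroll_rate = 0.10
enrollable = patients_in_db * eligible_rate * enroll_rate
print(f"Plausible enrollees: {enrollable:,.0f}")  # ~34
```

Even with generous assumptions, the funnel narrows to a few dozen enrollable patients, which is why 5% coverage feels thin once you move past common diseases.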
“But can providers and patients trust Pfizer and other members of the pharmaceutical industry with their data?”
Data privacy and cynicism/caution around sharing health data is a great topic, but while the question you pose is THE one on almost everyone’s minds, I’d like to pose a somewhat edited form of it: Can society afford NOT to hand over its data? Personalized medicine is the goal of many people nowadays; it was even highlighted by former President Barack Obama in a State of the Union address. It’s a buzzword used throughout society and mentioned in an almost reverent and inevitable tone – the idea that a drug can be developed and prescribed based on your unique characteristics and disease profile. But let’s take a minute and go one step further than the mantras – what, exactly, is personalized medicine, and how do we get there?
The first step is to understand the patient – that’s the “personal” part of personalized medicine. So how do you understand a patient? Not in the way human beings understand other human beings – it’s not about the person’s name, background, or emotions. Neither the disease nor the drug cares about the patient’s occupation, number of siblings, or aspirations. You have to understand the biology of the patient and the body’s interaction with the disease. That requires data; it requires personalized data. That data can then be used at both the basic-science level and the drug-development level, to understand the fundamentals of the disease as well as how best to intervene in its pathophysiology in the context of the specific patient burdened with it. This is impossible to do without that data.
There’s been quite a bit of backlash about biopharma R&D productivity lately. Basically, it has become more and more expensive (in both time and $$$) to develop a drug, and a lot of these drugs offer only marginal incremental benefit. The consensus blame is that the low-hanging fruit has all been picked, and the remaining diseases are just that difficult to tackle. But it doesn’t have to be this way. The difficulty of a problem is relative, and here it’s relative, in part, to our understanding of disease-human interactions. That obstacle can be overcome with data from patients, resulting in drugs that are safer and more effective, have a higher response rate once given to a patient, and take less time and money to reach the market. I believe all of these are worthwhile results to pursue, and the first step toward attaining them is to start the widespread sharing of data.
And so I will pose once more – can patients and physicians afford not to share data?
Once upon a time, I was incredibly bullish on telemedicine and even did some preliminary diligence into the field as a potential investment opportunity. However, the more I delved into it, the less convinced I was about its opportunities in the short-to-medium term. The main reason is not a functional one – the technology is highly functional and will go a long way toward reducing medical costs in the US. The main reason, as Monika points out above, is that it’s hard to get buy-in from people, in particular from doctors.
US physicians are highly conservative in their approach, and most take “do no harm” very seriously. As such, they are always trying to minimize the catastrophic problems that may occur, and this single philosophy makes the vast majority of them highly hesitant to adopt telemedicine. While many illnesses can be diagnosed remotely, there’s always a small chance that the symptoms indicate a much more serious problem. Also, in many clinical settings for easy-to-diagnose/treat conditions (e.g., urgent care), a physician can see a patient every ~15 minutes or so, so telemedicine doesn’t save the physician much time. The combination of these two factors makes telemedicine penetration increasingly difficult.

For example, say a patient presents with a cough. Chances are, it’s nothing bad – a cold or just allergies. In fact, it’s highly likely to be one of those. But there’s always a small chance that it’s something much worse. It could be pneumonia or an infection, which requires more intervention. It could even be cancer or heart failure, which requires MUCH more. The pushback here is that if the physician thinks it could be something more serious, they can always tell the patient and schedule an in-person check-up in the next few days. However, they’ll tell you that there’s still a non-zero risk of something bad happening (e.g., they can’t convince the patient to come in in person), and that it’s a risk they’re uncomfortable taking.
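A toy expected-cost model makes the asymmetry explicit. Every probability and “cost” below is a made-up illustrative value, not data from any study:

```python
# Toy model of the physician's asymmetric risk calculus for a remote visit.
# All probabilities and "costs" are made-up illustrative values.

p_benign = 0.97        # cold/allergies: telemedicine handles it fine
p_moderate = 0.025     # pneumonia/infection: needs escalation
p_severe = 0.005       # cancer/heart failure: a missed case is catastrophic

cost_benign = 0              # no downside to the remote visit
cost_moderate = 10           # delay/inconvenience of escalating later
cost_severe_missed = 10_000  # perceived cost of missing a severe case
p_no_show = 0.3              # assumed chance the patient skips the follow-up

# Expected downside of telemedicine-first, as a cautious physician might see it:
expected_cost = (p_benign * cost_benign
                 + p_moderate * cost_moderate
                 + p_severe * p_no_show * cost_severe_missed)
print(f"Expected downside: {expected_cost}")  # 15.25, dominated by the severe branch
```

Even at a half-percent probability, the catastrophic branch dominates the sum – which is exactly the “non-zero risk I’m uncomfortable taking” response physicians give.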
Basically, that was a long-winded way of saying that there’s a mismatch between the primary beneficiary (patients, whose time is saved) and the people with the most decision-making power (physicians). After spending a large amount of energy looking into this field, I eventually gave up, as I couldn’t find an easy way to overcome the opposition from physicians. I’m still highly bullish on telemedicine, but I think it’ll be a very, very long march, and the inflection point won’t occur any time in the near future. What are your thoughts on overcoming these obstacles to adoption?
It’s interesting to see the evolution of these companies that basically outsource the manufacturing of a ton of parts and assemble the resulting pieces together. You see a very successful example in the auto manufacturers, an entire industry that does this phenomenally well. But you also see cases that are much less successful (e.g., Boeing’s Dreamliner). I’m highly curious what drivers enable one to predict the success of these ventures.
You highlight building the teams and having everyone work together much earlier in the process. Who should drive this process change, and once it’s implemented, will it be enough to solve the major problems (e.g., the tolerances you mention)? If I apply the analogy of cars or planes here, it seems the responsibility for creating these interdisciplinary teams would fall to the construction firms. However, they may have the least power, as they basically implement a project that’s initiated and designed by the developer/architects. From that perspective, it seems the developer will have to be the one to create these early-stage teams and involve everyone. Would you agree with that? And regardless of who ultimately leads the charge, what’s the best way for them to actually create these teams? What are the incentives for everyone to be involved early, when that’s a cost to these firms without any revenues (e.g., I don’t think construction firms get paid for their input on the design of the building)? Will firms have to vertically integrate, or how will that work?
3D printing is a fascinating topic for the manufacturing world and a great current case study in the adoption of an innovative new technology in an otherwise very traditional and old-fashioned industry. In particular, I think adoption will go through a series of phases, each characterized by an increasing presence of 3D printing. As you mentioned in the write-up, it started with rapid prototyping, and eventually Daimler went on to use 3D printing for production parts. However, I’d argue that this is still very much the dabbling phase, as you’re simply replacing a traditional manufacturing process with a new one ONLY at the last step – the actual manufacturing.
One of the main advantages of additive manufacturing is that you can completely rethink how you create products from the very beginning – the design phase. Currently, car manufacturers assemble a huge number of parts (on the order of tens of thousands), as they’ve found that to be the most efficient approach. Additive manufacturing, however, will let them design the car with far fewer parts, since incredibly odd shapes that used to require multiple parts and many bridging parts (e.g., screws) can now be printed as a single piece. This reduction in part count should also improve quality, as fewer parts and joints mean fewer things can go wrong. Do you know if Daimler or any other auto manufacturer has started looking at redesigning its processes by fundamentally integrating 3D printing?