Joe Johnson's Profile
Activity Feed
Their strategy is to develop new ideas to a specific value inflection point. Typically this will be one or more killer experiments that should either de-risk the asset and provide a strong “Go” signal, or expose a flaw in the asset (toxicity, lack of efficacy) and provide a strong “No-Go” signal. If the signal is “Go,” they will engage industry VC or Pharma partners to back the asset through later-stage development, which allows them to exit and recoup their returns earlier. If the experiment signals “No-Go” or is unclear, the project is likely to receive no more funding.
The structure of the industry is capitalistic, so it naturally incentivizes competition (and thus discourages sharing data). However, there are consortiums that aim to address this. One example I’ve heard of involves several large pharma companies agreeing to share their relevant data amongst themselves for a year or two (enough time to give them a competitive edge over the rest of the industry), and then agreeing to release that data to the public. This at least prevents the data from being locked away forever. I believe the DDF does something similar, but since they work with early-stage companies, they do not have much of the more meaningful clinical data to share.
This is a really cool concept. The field of personalized nutrition is beginning to take off, and it’s this type of machine learning that is a key driver. One concern I have with Wellio (and other similar companies) is that they may get too excited by how many different things they could do with machine learning. The value proposition discusses being a nutritionist, grocery shopper, and chef all in one, and the blog later goes on to describe shopping parameters around packaging and waste. While these are all potentially great value-adds, I would think a startup would want to focus on being great at one, maybe two, value-adds to begin with.
Interesting read! One question I have: what is the overall value proposition of this platform? Is the purpose to become a more accurate or sensitive diagnostic tool than the current standard, or to ultimately be a cheaper alternative once economies of scale kick in? I agree that machine learning definitely has a place in animal healthcare, but I wonder if this is the best application. You mention both cost reduction and improved treatments as possible positive effects, but without a particular goal in mind for either, the research seems hard to justify.
This is a very interesting read. From my understanding of StitchFix’s business model, they continue to employ stylists for the exact reasons you mentioned: capturing emotional/contextual inputs that the algorithms would miss, such as special occasions, pregnancies, etc. As to your question of how much value there is in the machine learning algorithm, I personally think there is a lot. One reason is that I believe their claims that these algorithms lead to better recommendations. However, even if most of the clothing decisions actually came from the stylists, the algorithm adds a lot of value as a marketing tool. The company has generated a lot of buzz and free marketing from this innovative approach. I first learned about them through Katrina’s appearance on the “How I Built This” podcast, which likely would not have invited her on as just another clothing company.
It always frustrates me when great technologies are slow to reach patients because of implementation barriers. I agree with you that many of the “legitimate” concerns with 3D-printed orthopedics have been addressed. The challenge with breakthrough treatments for complex conditions (such as a broken hip) is that nobody wants to bear the risk of being the first to adopt the new technology, should something go wrong. In pharmaceutical development, the approach is generally to find the easiest, safest path to approval for a drug first, then, as doctors and payers become more comfortable with the new drug, conduct follow-up studies to pursue more difficult diseases. Like Mike, I wonder if the same approach could be used here by applying 3D printing to less complex joints than the hip first.
I’m glad to see that Cigna is thinking of ways to use their machine learning predictive capabilities to help patients, becoming patient- and outcome-focused. However, I have trouble believing that these methods are being used entirely (or even primarily) in such ethical ways. As we’ve learned in our finance class, riskier investments require a higher rate of return. If an insurer is able to predict future health problems using AI, they may deem a customer more “risky.” This risk perception could allow them to justify raising insurance prices for high-risk patients, or refusing to insure the patient altogether. Furthermore, I wonder about the ethics of making such a decision. From a financial standpoint, if Cigna can detect that a consumer has an “NPV < 0” due to their risk profile, is it ethical for them to choose not to insure that person (or to hike up their price)?
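To make the “NPV < 0” point concrete, here is a minimal sketch of the underlying arithmetic (my own illustration; the premium, claim, and discount-rate figures are invented, not Cigna’s): the insurer discounts each year’s expected premium-minus-claims cash flow back to today, and a customer flagged as high risk can push that sum below zero.

```python
# Hypothetical illustration: expected NPV of insuring one customer.
# All figures are invented for the example; a real insurer's model
# would be far more detailed.

def customer_npv(annual_premium, expected_annual_claims, years, discount_rate):
    """Discount each year's (premium - expected claims) cash flow to today."""
    return sum(
        (annual_premium - expected_annual_claims) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

# A customer an ML model flags as high risk: predicted claims exceed premiums.
print(customer_npv(annual_premium=6_000,
                   expected_annual_claims=7_500,
                   years=10,
                   discount_rate=0.08))  # negative NPV -> "risky" customer
```

Raising the premium until that sum crosses zero is exactly the price hike described above, which is what makes the ethics question so pointed.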
This is fascinating. From your questions, it seems to me that the key issue facing MySurgeryRisk is scalability, as integration and training in new hospital systems and new contexts require immense upfront costs. I give a lot of credit to the centralization of EMRs for making this technology possible, though. While they’re not perfect, centralized EMRs make the input data much cleaner and more consistent than it would otherwise be.
Without clean, consistent input data, implementation just at the local level would be quite difficult, and scalability would seem infeasible.
This is such a difficult issue because of the extreme impacts that both sides of the debate may have. Those in favor of these invasive machine learning techniques may cite lives saved through suicide prevention and anti-terrorism applications. However, the importance of privacy, and the sense of security that comes with it, is much less tangible, so weighing the benefits is extremely subjective. While this does not address the entire problem, I think Facebook could do a better job of explicitly informing users of its practices. Most of the resentment of these applications of machine learning seems to come from a lack of trust: when people signed up for Facebook, they did not realize they were signing up for so much oversight. If Facebook explicitly warned its users about its targeted advertising, suicide prevention, or counter-terrorism methods before enacting them, its users might trust the company more instead of feeling like these “spying” methods are being covered up.