Samantha Sanders's Profile
Activity Feed
This is an extremely interesting article — really enjoyed reading about this.
One thing that you suggest in your piece is to have Earlens mail a wearable diagnostic toolkit to a prospective user. I would argue that this may not be feasible for the average hearing aid consumer. In my clinical experience, patients who require hearing aids tend to be elderly and often have difficulty navigating technology. Consequently, they may struggle to figure out how to use a mailed diagnostic toolkit with features that test hearing capability, map biometrics, and so on.
The second thing I would say based on my clinical experience is that hearing aids are extremely expensive, and this represents one of the greatest pain points for patients. This article mentions that Earlens could reduce the cost of its hearing aid by using additive manufacturing to directly produce the hearing aid, which I think is crucial to its value proposition. I would expect that consumers probably don’t care that their hearing aids (or the mold used to produce their hearing aids) are produced by additive manufacturing — but they certainly would care if their hearing aids were cheaper.
Very interesting article. One of the things that struck me about this piece was the author’s mention of both crowdsourcing related to flavor ideas as well as smaller organizations / start-ups. In this sense, the author tries to demonstrate that PepsiCo is leveraging the concept of open innovation broadly — it draws upon ideas both from individuals (e.g., customers, employees) as well as consumer packaged goods products that are already on the market.
One question I would ask is whether PepsiCo’s investment in smaller companies (as in its “Nutrition Greenhouse” program) can truly be considered an example of open innovation. Buying or investing in smaller companies has been common practice for decades, and it is by no means limited to PepsiCo or the consumer packaged goods industry; think of pharmaceutical companies purchasing biotech firms. While this practice does give PepsiCo a stake in new products, I believe it differs from its consumer and employee crowdsourcing campaigns. Rather than soliciting ideas from a broad population, PepsiCo is evaluating existing ideas and assessing their brand potential. Can this truly be considered open innovation?
Really enjoyed this! I know you asked a question about how machine learning could be used to effectively spot fraudulent patterns that have never occurred before, and I wanted to expand on this. I think that your question gets at one of the fundamental principles of machine learning — the idea that as data (in this case, fraud data) continues to evolve and change, one must feed these new data into the machine learning algorithm in order to refine it.
I would go a step further and ask: how can machine learning in this case adapt to ever-changing medical practices and protocols? For example, use of a medication for a certain disease may have been considered wasteful in 2010 but not in 2018 (or vice versa), given that medical practice is constantly evolving. Keeping the algorithm current with new protocols and practices amid rapidly changing medical standards will be quite a challenge. However, as you argue in your piece, these efforts will likely be worthwhile given the potential applications of machine learning to the health insurance industry.
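The retraining idea described above can be sketched in code. This is a hypothetical toy example (the features, labels, and the choice of scikit-learn's SGDClassifier are all my assumptions, not anything from the article): a fraud model is first trained on older claims data, then refined with `partial_fit` as newer data reflecting updated medical protocols arrives.

```python
# Hypothetical sketch of incremental retraining for a fraud-detection model.
# Feature/label definitions are invented for illustration only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy claims features (e.g., billed amount, visit count) and fraud labels
X_2010 = rng.normal(size=(200, 2))
y_2010 = (X_2010[:, 0] > 0.5).astype(int)  # "wasteful" pattern circa 2010

model = SGDClassifier(random_state=0)
# First call to partial_fit must declare all classes up front
model.partial_fit(X_2010, y_2010, classes=np.array([0, 1]))

# Years later, practice standards shift: a different pattern is now flagged
X_2018 = rng.normal(size=(200, 2))
y_2018 = (X_2018[:, 1] > 0.5).astype(int)  # updated "wasteful" pattern

# Refine the existing model on the newer data instead of retraining from scratch
model.partial_fit(X_2018, y_2018)

pred = model.predict(np.array([[0.0, 2.0]]))
```

The point of `partial_fit` here is that the model weights are updated in place as new labeled claims arrive, which is one (simplified) way to keep an algorithm aligned with evolving standards of care.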
Very interesting! I really enjoyed this piece. I would note that Nanfang Hospital’s focus on pathology in particular (e.g., cervical cancer images) is just one piece of the puzzle. Machine learning is of course extremely applicable to visual medical fields like pathology, radiology, and dermatology, where diagnosis is relatively straightforward: in pathology, for instance, you review a slide of cervical tissue and then determine whether the patient has cancer. The same basic process applies in radiology and dermatology (reviewing images to identify whether a patient has a particular diagnosis). However, I would argue that applying machine learning to diagnosis in more cognitive fields (like primary care or neurology) is more challenging. In those cases, diagnosis is more of an art than a science; it requires clinical suspicion regarding possible diagnoses and then appropriate follow-up (e.g., laboratory testing and imaging) to explore them. In my opinion, this is a more difficult problem to solve than the application of machine learning to pathology, radiology, or dermatology.
The primary reason I bring this up is that the author states that Nanfang Hospital wants to use machine learning to solve the issue of long hours for doctors. I would argue that applying machine learning to pathology, radiology, or dermatology (already some of the best lifestyle specialties) will not necessarily accomplish this. To leverage machine learning to truly improve physician work-life balance, Nanfang Hospital needs to focus on its applications to diagnosis within the cognitive specialties in addition to specialties like pathology.
Extremely interesting topic! My concern is linked to the first question that you pose here (will patients ever be willing to trust diagnoses/suggestions from ML-driven platforms without human intervention). My belief is that patients will indeed be willing to trust those suggestions, but that the suggestions may often be inaccurate. For example, I regularly see patients who come into the doctor’s office with a false perception of what they might have, thanks to WebMD. While I recognize that Curai might eventually be more accurate and useful to patients than WebMD given its use of machine learning techniques, I still have concerns about its ability to give patients sound medical advice. Medical questions are often nuanced and require detailed physician evaluations to answer properly (e.g., with physical examinations and laboratory testing). Accordingly, I think it will be difficult for Curai to appropriately answer technical medical questions, and as a result patients may be left worrying about a serious diagnosis (e.g., cancer) that is not based in reality.
Really interesting piece. From my perspective, it seems as though these Grand Challenges are especially useful in soliciting ideas suited to local needs. For instance, by operating a Grand Challenge China run by the Chinese government, the Gates Foundation helps to create an environment in which its initiatives are tailored to the actual needs of the Chinese people.
A few questions / comments emerge from my reading of your piece:
1. How does the Gates Foundation determine how to prioritize its Grand Challenges? In other words, when does it decide to run a Grand Challenge focused on a certain country or global health issue? You suggest that the Gates Foundation could consider Grand Challenges related to education, world hunger, and women’s rights, but how would the solutions that come out of these Grand Challenges be prioritized against the Foundation’s existing initiatives?
2. Regarding the question you raise about Grand Challenges vs. centralized decisions — I agree that there needs to be a balance in these situations between expert influence and open sourcing of ideas. I would argue that to some extent, these Grand Challenges already incorporate some expertise in assessing the value of submitted solutions (e.g., for healthcare, presumably a medical professional judges the competition’s entries). From my perspective, I think that this is sufficient to ensure that appropriate expertise is incorporated in the Gates Foundation’s decision-making process. Accordingly, I would argue that maintaining the current structure of these Grand Challenges is worthwhile given that leaving the competition open to both experts and laypeople alike facilitates more creativity in idea generation.
This is a really interesting idea. A few questions/comments for you:
1) Do you think that this machine learning algorithm would lead to a large shift in the way that physicians actually practice medicine? As you mentioned in your write-up, the medical community currently uses the Framingham risk score, which is flawed, but a large body of study data nevertheless shows that this risk score is useful in identifying which patients are at risk for heart disease (and therefore which patients should be placed on heart medications like statins). How much of an incremental improvement do you think this algorithm will actually provide in identifying patients at risk for heart disease?
2) Given that the Framingham risk score is already so entrenched in medical practice, it would be a great deal of work for clinicians (both at Partners and elsewhere) to switch their practice to a machine learning algorithm. How would you go about convincing physicians (many of whom have been using the Framingham risk score for years) to switch to a new machine learning algorithm? Additionally, how would you drive buy-in from physicians outside of the Partners network?