Santosh Iyer

Activity Feed

On November 15, 2018, Santosh Iyer commented on The Missing Piece: How Lego Found Open Innovation at a Critical Time:

Excellent article – very interesting read.

As a personal contributor to open innovation (LEGO Ideas) for LEGO [1], I don't think designers have a diminished role. In my mind, they serve a very complementary role – that is, to shepherd the ideas that are filtered and green-lit through the open innovation process, as you mention in your article. From my point of view, the designer's role is really scaling an “idea” into a product – that is, making tweaks to the “buildability” and “marketability” of the product such that the end user will want to buy it, will find it easy to use, and can enjoy it well into the future.

One of the biggest personal challenges I found as a crowd-sourced innovator contributing to LEGO is that what I originally thought of as a good idea needed to go through several design changes, through the process described above, before the end result was something that LEGO could market! However, the designer is uniquely qualified to do this within the organization, as they always have been.

[1] https://medhatch.org/pilot

On November 15, 2018, Santosh Iyer commented on China’s Take on 3D Printing in Healthcare:

Interesting read.

I think 3D printing in the China context really offers a unique path to precision medicine and an enhanced patient experience. The biggest challenges, in my mind, to a 3D-printing approach for patient organs and the like are the accuracy of modelling, the time to print, and the economies of scale needed to make it tractable for the rural or remote hospitals that desperately need this technology to help their patients. On the latter point, your reference [11] on reduced costs is a little ambiguous to me, because I think the authors are assuming that economies of scale will kick in before those reduced costs can be realized. Coming from a medical device industry that has to sell high-fixed-cost equipment to hospitals, I know that painting a rosy long-term economic picture is critical to fostering adoption.

I would be curious to see whether MIIT can engage local vendors to lease their printers, or offer them on a “per-use” basis through a subscription model, so hospitals do not have to incur such a high upfront cost before they can deliver some of the incredible precision-medicine benefits of 3D printing.
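
To make the economics concrete, here is a rough, purely hypothetical breakeven sketch in Python (none of the figures below are real vendor quotes) comparing outright ownership of a printer with a per-use subscription:

```python
# Hypothetical back-of-the-envelope comparison of owning a 3D printer
# versus paying a vendor per print. None of these figures are real quotes.
upfront_purchase = 250_000      # one-time cost to buy the printer outright
annual_maintenance = 20_000     # yearly service contract if owned
cost_per_print_owned = 150      # materials/labor per print when owned
fee_per_print_leased = 600      # vendor's all-in per-use subscription fee

def annual_cost_owned(prints_per_year, years=5):
    # Spread the purchase price over an assumed useful life of the equipment.
    return (upfront_purchase / years + annual_maintenance
            + cost_per_print_owned * prints_per_year)

def annual_cost_leased(prints_per_year):
    return fee_per_print_leased * prints_per_year

for volume in (50, 150, 400):
    print(volume, round(annual_cost_owned(volume)), round(annual_cost_leased(volume)))
# With these made-up numbers, a low-volume rural hospital (50 prints/year)
# pays far less on the per-use model; ownership only wins once volume is
# high enough to absorb the fixed costs.
```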

On November 15, 2018, Santosh Iyer commented on Adidas’s Race to be #1 in 3D Printing:

Great read.

I have no doubt in my mind that Adidas will provide truly superior performance with 3D-printed footwear over traditional athletic footwear. However, my biggest reservation is less about the technology and more about distribution. With the push towards online retail, I think that apart from elite athletes who would come into a store to get shoes made, many others may opt for a flexible shoe that conforms to their foot type, rather than one that relies on the physical irregularities (unique to each person's foot) that need to be captured in the 3D-printing process. Companies are already doing this in the clothing space [1] – while seemingly low tech, it is also a lot more scalable (they can manufacture with a one-size-fits-all approach but provide a custom experience through materials selection). I am curious to see how 3D printing addresses these issues – and whether it is truly a fad, or can maintain a niche target market among the elite athletes of the world.

[1] https://www.mizzenandmain.com/

On November 15, 2018, Santosh Iyer commented on Citizen by Day, Scientist by Night?:

Gavin,

Great write-up. One exciting opportunity I see with this mode of crowd-sourced open innovation is its implications for annotating data quickly and at scale.

The closest paid analog to Zooniverse that comes to mind is Amazon Mechanical Turk, which pays users to annotate data ranging from dogs and cats to surgical instruments in laparoscopic video images. For both the free (Zooniverse) and paid (Turk) scenarios, the quality of the labeled data is extremely important, especially for scientists who then need to use this data to train machine learning models, publish groundbreaking research, etc. One would readily assume that the paid user in this scenario is intrinsically more motivated to produce better-quality data, since they may be dropped from the Turk network if they annotate poorly. However, I would argue the opposite – and one can look at Wikipedia as a great example of this.

While Ben Newton in the previous post talks about the self-selecting nature of contributors who are intrinsically interested, I would extend this paradigm further by arguing that they also provide a crowd-sourced mode of quality control [1]. What's really great about this is that Wikipedia does not have to rely on a few people to manually scrub posts for quality, but can rely on its contributors instead. In a similar vein, I would hope that if and when Zooniverse builds a reliable set of its own annotators, they too can self-select out bad contributors and permanently ban them from the network, similar to Wikipedia. Now, let's go back to the Amazon example – because its system is closed, Amazon has to manually screen for bad annotators (based on user feedback or random audits) and remove them from the Turk network on a case-by-case basis. This is time consuming and costly!

[1] https://en.wikipedia.org/wiki/Wikipedia:Quality_control
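
To make the quality-control point concrete, here is a minimal Python sketch – my own illustration, not how Zooniverse or Wikipedia actually implement it – in which annotators whose labels rarely agree with the majority of their peers are flagged for removal from the pool:

```python
# Consensus-based screening of crowd annotators. All names, labels, and the
# agreement threshold below are hypothetical.
from collections import Counter, defaultdict

def flag_unreliable_annotators(annotations, min_agreement=0.7):
    """annotations: list of (annotator_id, item_id, label) tuples."""
    labels_per_item = defaultdict(list)
    for annotator, item, label in annotations:
        labels_per_item[item].append(label)

    # Consensus label for each item = simple majority vote of the crowd.
    consensus = {item: Counter(labels).most_common(1)[0][0]
                 for item, labels in labels_per_item.items()}

    # Score each annotator by how often they agree with the consensus.
    agree, total = Counter(), Counter()
    for annotator, item, label in annotations:
        total[annotator] += 1
        agree[annotator] += (label == consensus[item])

    return [a for a in total if agree[a] / total[a] < min_agreement]

# Hypothetical usage: annotator "c" labels everything "dog" and gets flagged.
votes = [("a", 1, "cat"), ("b", 1, "cat"), ("c", 1, "dog"),
         ("a", 2, "dog"), ("b", 2, "dog"), ("c", 2, "dog"),
         ("a", 3, "cat"), ("b", 3, "cat"), ("c", 3, "dog")]
print(flag_unreliable_annotators(votes))  # -> ['c']
```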

On November 14, 2018, Santosh Iyer commented on Machine Learning and Radiologists: Friends or Foes?:

You bring up some interesting issues on human/machine collaboration and handoff that I think are crucial to address in healthcare going forward. My personal belief is that machines and machine learning, at least in today's context, are extremely narrow in their training – that is, they are unable to understand the broader context around the radiology feature-classification problem.

As an example, consider the following scenario:

Radiologists of the future not only use ML models to screen for routine conditions, but also change which features to look for as they manage patients in the hospital, based on what the patient is currently experiencing. What the system may initially present as lesions signalling organ failure may actually be tied to some unknown upstream effect (say, a metastatic tumor originating in the brain) that the physician only recently discovered after consultation with his/her fellow physicians. In this way, similar to the priority-sequencing mechanism you describe in your article, I also see ML tools for radiology being quick “second opinion” checkers that enable physicians to act quickly and deliver interventions more efficiently.
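
As a rough illustration of what such a “second opinion” checker could look like, here is a minimal Python sketch (my own toy example; the function and field names are hypothetical, not any vendor's product):

```python
# Flag studies where an ML model's confidence strongly disagrees with the
# radiologist's reading, so they can be re-reviewed quickly. Hypothetical.
def second_opinion_flags(cases, model_predict_proba, threshold=0.8):
    """cases: list of dicts with an image and the radiologist's finding
    ('normal' or 'abnormal'). model_predict_proba returns the model's
    probability that the study is abnormal."""
    flags = []
    for case in cases:
        p_abnormal = model_predict_proba(case["image"])
        if case["radiologist_finding"] == "normal" and p_abnormal > threshold:
            flags.append((case["study_id"], "model suspects a missed finding"))
        elif case["radiologist_finding"] == "abnormal" and p_abnormal < 1 - threshold:
            flags.append((case["study_id"], "model disagrees; worth a re-read"))
    return flags

# Hypothetical usage with a stand-in model that always returns 0.95:
demo = [{"study_id": "CXR-001", "image": None, "radiologist_finding": "normal"}]
print(second_opinion_flags(demo, lambda img: 0.95))
# -> [('CXR-001', 'model suspects a missed finding')]
```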

Insightful write-up, Aditya.

I’d like to elaborate on your point about data veracity, since I think it’s a crucial part of training your model with the correct inputs. In my article, I discuss the impact of surgeon performance on post-operative complications [1], which account for roughly 50% [2] of the 150,000 deaths quoted in your article. All too often, people in the EMR space who have tried to build comprehensive outcomes-based predictive models have suffered from bad (or inaccurate) data that loses precision and accuracy as it scales to larger patient populations and other markets.

Along these lines, I think a lot of caution needs to be taken in curating the kind of hospital-system and surgeon data being fed into the training data sets for MySurgeryRisk.

[1] https://d3.harvard.edu/platform-rctom/submission/collaborative-autonomy-in-the-operating-room-verb-surgical-and-democratized-surgery/
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2639900/
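
To illustrate the kind of curation gate I have in mind, here is a minimal, hypothetical Python sketch (the field names and plausibility ranges are purely illustrative, not MySurgeryRisk's actual schema) of screening records before they enter a training set:

```python
# Reject records with missing or implausible values rather than letting
# them quietly degrade model precision at scale. All fields are hypothetical.
REQUIRED_FIELDS = {"age", "procedure_code", "surgeon_id", "asa_class", "outcome"}
PLAUSIBLE_RANGES = {"age": (0, 120), "asa_class": (1, 6)}

def admit_to_training_set(record):
    """Return (ok, reasons) for a candidate patient record."""
    reasons = []
    missing = REQUIRED_FIELDS - {k for k, v in record.items() if v is not None}
    if missing:
        reasons.append(f"missing fields: {sorted(missing)}")
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            reasons.append(f"{field}={value} outside plausible range {lo}-{hi}")
    return (len(reasons) == 0, reasons)

# Hypothetical usage: an obviously mis-keyed age gets the record rejected.
print(admit_to_training_set({"age": 250, "procedure_code": "44950",
                             "surgeon_id": "S-12", "asa_class": 2,
                             "outcome": "complication"}))
# -> (False, ['age=250 outside plausible range 0-120'])
```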

On November 14, 2018, Santosh Iyer commented on A Tech Titan in Healthcare: All Eyes on Deep Learning:

Explainable artificial intelligence (XAI), as you’ve highlighted, is crucial to gaining trust across various clinical entities. To your point about when the algorithm should enter, I believe recent XAI research [1] into decision-tree modelling and automated rule extraction may provide some clues. Intervening in the decision-making process is really a subset of a machine learning model's supervisory capability – understanding where in the overall process of diagnosis and intervention the physician is.

Put another way, understanding the decision-making workflow of a physician, and mapping it to the ML capabilities of a parallel decision-making system, gives the human a good way to query the ML system as they see fit. Alternatively, the ML system can serve in an advisory (or early-warning) capacity, catching a physician who is deviating from an optimal decision-making path and nudging them back toward it.

Your point about annotation by a human expert also underscores the importance of having ML models sequence data interpretation into buckets that mirror their clinical-workflow counterparts (especially during the R&D design process). This enables not only regulatory and clinical bodies to communicate the functional capabilities and limitations of the system to others, but also engineers to troubleshoot and benchmark system capabilities against gold standards.

In this way, I feel that designing ML models for healthcare and enabling transparency are really two sides of the same coin – implemented correctly, they kill two birds with one stone.

[1] https://arxiv.org/pdf/1806.00069.pdf
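
As a concrete illustration of the rule-extraction idea, here is a minimal Python sketch – my own generic example, not the method from the cited survey or any clinical system – that fits a shallow surrogate decision tree to a black-box model's predictions and prints human-readable rules a clinician could audit:

```python
# Surrogate rule extraction: approximate a black-box model with a shallow
# decision tree and export its rules as text. Dataset is a stand-in only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()            # stand-in for a clinical dataset
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(data.data, data.target)

# Surrogate: a depth-3 tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(data.data, black_box.predict(data.data))

# The extracted rules are what a physician could actually query and audit.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```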

Ollie, great insight and thanks for your comment.

I would argue that standardization and the low barriers to trialability and adoption associated with collaborative autonomy, coupled with an attractive pricing model, afford an opportunity for surgery 1.0 countries to leapfrog to surgery 4.0. To elaborate on the pricing model, one can envision a scenario where zero upfront costs, coupled with discounted subscription fees for emerging markets, are offset by other markets where the willingness to pay is higher.

On your point about barriers to healthcare innovation, I would like to clarify what exactly collaborative autonomy means to me in 2030 – it is not necessarily robots performing tasks autonomously, but also “guard-railing” surgeons against deviating from the norm (e.g., preventing damage to critical anatomy by enforcing virtual force constraints). The latter scenario seems a lot more tenable within a decade in my mind, and the FDA and MIRS companies have been receptive to this mission (at least in the R&D sphere) over the last decade.
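
As a rough sketch of what such a “guard-rail” virtual constraint could look like in code (my own toy illustration with hypothetical names and units, not any MIRS vendor's controller):

```python
# Attenuate commanded tool motion as the tip approaches a keep-out zone
# around critical anatomy. Positions/velocities are numpy arrays in mm.
import numpy as np

def constrain_velocity(tool_pos, commanded_vel, keepout_center,
                       keepout_radius, margin=5.0):
    """Scale down any velocity component pointing into the keep-out sphere."""
    to_center = keepout_center - tool_pos
    dist = np.linalg.norm(to_center)
    if dist > keepout_radius + margin:
        return commanded_vel                      # far away: pass through
    direction = to_center / dist
    toward = np.dot(commanded_vel, direction)     # speed toward the anatomy
    if toward <= 0:
        return commanded_vel                      # moving away: allowed
    # Linearly attenuate the inward component to zero at the boundary.
    scale = max(0.0, (dist - keepout_radius) / margin)
    return commanded_vel - (1 - scale) * toward * direction

# Hypothetical usage: tool 2 mm outside a 10 mm keep-out sphere.
v = constrain_velocity(np.array([0.0, 0.0, 12.0]), np.array([0.0, 0.0, -5.0]),
                       np.array([0.0, 0.0, 0.0]), keepout_radius=10.0)
print(v)  # inward z-velocity is mostly attenuated near the boundary
```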