Elizabeth Liu's Profile
It is very interesting that you described a trend from open to closed innovation – something I had not observed before. In terms of individual companies, it is true that when they are younger and have more to gain, they tend to be more receptive to open innovation, but once they have a large enough network, they tend to close it off. I really like how you listed examples of the tradeoffs of open innovation in your last paragraph. Below are some generalized thoughts I had on those tradeoffs, based on your examples:
A few factors might influence companies' decisions to be more open when they are younger / smaller. One is that once they reach sufficient scale, they can get a similar quality of insight from a closed network, whereas in their early days they need to rely on the environment to provide enough inputs. Another factor is that many tech / new media companies are founded on the premise of openness and transparency, regardless of whether there is contrarian content or malicious usage on the platform (or perhaps those can be controlled more easily early on). As they scale, they become less able to control the contrarian content, and at the same time receive more public scrutiny over their social responsibility to filter such information / uses of information in their network. As a result, they have to resort to either spending a lot of capital on controlling the network, or closing the network.
The risk of closing off the network is not just that your system will learn from less data, but also that your system might have a limited variety of data to learn from. Especially if you couple closing off the network with controlling the network, you end up in a situation where the network itself (in this case Facebook) becomes the entity that decides what people in this closed network will receive. This is risky for the people involved in the system (as the system might be biased), as well as for Facebook (as it might lose competitive advantage to newer, fresher perspectives sooner).
Fascinating topic! I think there is so much that AI could do to help humanity, and this is a great example. In particular, where AI helps is early pattern recognition, alerting humans to dig deeper where needed. While I think the use of AI in detecting disease can be helpful, the risk of not passing the same information through human interpretation is also high (in the case of false negatives – i.e. there was a disease but the machine did not catch it). Where I think machines could really add value is in detecting mental health issues that are not quite diseases, e.g. a deterioration in mental health that is worth noting and correcting. Currently this part of mental health is largely ignored, because there are simply not enough resources to monitor everyone in their daily lives. AI can look through this vast amount of information and identify patterns much faster, provide recommendations for individuals to consider, and, if warranted, refer them to doctors for further evaluation. Would love to hear your thoughts on how AI can help the “quality of life” part of mental health!
Thank you for the thorough research and thoughtful article! I totally agree with your opinion that a large part of the value Fitbit (and similar technologies) can provide is in individual, and collective, pattern recognition and recommendations. An aspect of health that Fitbit and its competitors seem to have not looked into is mental health – how are people’s activity levels, sleep cycles, etc. affecting their mental health (not just their physical health, as indicated by diseases)? One reason they might not have looked into it is that while diseases (or the prevention of them) can be quantified, quality-of-life measures such as mental health are harder to value. Another related reason is that while Fitbit can draw on clinical data to measure the relationship between disease and input data, mental health is a spectrum that might require people to self-report their health level – and data collection that requires action on the part of the customer is difficult. I do hope that some day this will be addressed though!
This is fascinating research, Brian! I see a parallel in the rest of media (movie and TV development, and even music) – but in those fields the experts (producers) firmly hold the stance that machine learning can aid with the screening of human-created content, but cannot produce content, because what’s most valuable in media is creativity and human interpretation. I think the same logic can be applied here, as the audience of a sports article, for example, reads it mainly for the human interpretation of the data. If people are simply interested in the data and its summary, there are faster ways to view those (e.g. charts and other data summaries) than NLG.
Very interesting essay with a very interesting display name! I’m intrigued by your recommendation to remove certain variables, e.g. postal code, to avoid discrimination. There are inherent trade-offs between having complete data that could help fine-tune an algorithm, and adding the biases that are inherent in the inputs and feedback loops, since those come from humans. In fact, the prospect of machines eventually picking up patterns to quickly understand what you’re looking for is dangerous in itself. In that scenario, we would be stuck in our original preferences without being given opportunities to branch out and learn from divergence. Information overflow is pushing us to rely on machines to screen data, but what if that’s preventing us from receiving the diverse data points we need?
This is such a well-researched and interesting read, Akash! I think it’s fascinating how ML and robotics can be used in discovery and innovation. I had been a doubter of ML’s ability to prove truly useful in innovation, for the limitation you presented, namely the tunnel vision of the robot-program pair. However, after reading your essay I’m starting to think that ML can present data from repeated simulations for humans to observe patterns more easily, and save humans time for the more value-adding activities of speculation and creation. This touches on your question #2. On your question #1, I think it is difficult for ML to truly eliminate human bias, because human judgment is required for the input, as well as for the screening and interpretation of the output. Would love to hear your thoughts though. Also would love to hear about how you learned about this company. Thanks again!