KMA

  • Student

Activity Feed

On November 15, 2018, KMA commented on YouTube | Machines Cleaning Up Human Content:

Really interesting piece. Of course, there should be no place for hate speech online, and there are certainly risks associated with ignoring language that incites violence. For instance, John Oliver has a good segment about the failure of Facebook to adequately remove hate speech, which he believes has contributed to violence in Myanmar. That being said, I worry deeply about some of the risks associated with training machines to remove content deemed inappropriate, particularly when organizations have specific political views. There have been several instances of sites like YouTube removing conservative content, claiming it was hate speech when it obviously was not…which makes me very concerned about the freedom we should all have to express both popular and unpopular views. This raises the question of who should be the arbiter of what is considered “hate speech”…and frankly, I’d rather it not be YouTube.

Really enjoyed this article about Pfizer’s efforts, and I’m of course rooting hard for their success. Unlike some of the other commenters here, I’m inclined, like Helene, to agree that the “do good” nature of the project is likely to limit fraud and reduce the need for compensation of the individual contributors and Pfizer overall. One thing I’m not sure I understand is how to define the size of the problem to crowd-source. On the one hand, Pfizer clearly used crowd-sourcing successfully for a specific coding problem in the article. On the other hand, while you would want to gather folks’ views on a substantial problem, with a challenge like “cure cancer” it is hard to see how you could actually add meaningful value without tremendous team efforts over a period of time. Can crowd-sourcing really be effective with a problem of this magnitude and complexity?

On November 15, 2018, KMA commented on Additive Manufacturing…For the Body?:

Thanks for sharing this, Georges; very interesting and thought-provoking piece. One question that this raises for me is around the ethics of additive manufacturing in this context. What would happen if folks could perpetually update their organs so that they never fail? Could we end up in a world without death?

On November 15, 2018, KMA commented on IBM’s Deja Vu in Disruption:

Thanks for sharing! Like MM above, I wonder a lot about the potential risks of open innovation in a private business context. In my work experience, there is tremendous value in having access to IP that others don’t. To pull this off, would you have to change the IBM model? Or would you simply protect the base platform with some sort of patent? Or maybe the open contributions would be more incremental changes that don’t expose the bulk of the Watson IP? My sense from talking to a few people in the section is that open-source is often used in academic settings rather than in businesses, where you risk exposing a very valuable underlying asset, but I’m truly not sure.

Thanks for sharing this interesting article. It seems pretty compelling that 3D printing has opened up a world of possibilities for the cheap, fast development of prototypes. However, do you think there may be risks in moving to a much cheaper and faster additive manufacturing process versus the historically more deliberate, expensive approach to prototyping? For example, might the hurdle of requiring large initial investments have helped the funnel by ensuring that only the best ideas have time spent on them? This reminds me a bit of the IDEO case, where they started with a super wide funnel before focusing on the most relevant/tactical ideas; the discussion seemed split at the time, so I wonder if there are potential negatives here as well. Thanks again!

On November 15, 2018, KMA commented on Disney – “A Whole New World” of Machine Learning:

Thanks for sharing this perspective. This article has me thinking even bigger about the potential of an algorithm like FVAE, which fundamentally tries to infer a person’s thoughts from his or her facial expressions. I wonder about its potential outside of content creation (or monetization), such as using it to assess the truthfulness of interviewees, public figures, or even loved ones. Looking forward to seeing how this shakes out.