HBS2020_12345

  • Student

Activity Feed

This is definitely a complex problem, and I can see how machine learning would be helpful. The piece about how private-sector investment has become the source of cash and demand driving the tech sector today, and the reluctance of a socially conscious Silicon Valley to build weapons of war, was especially interesting.

On November 14, 2018, HBS2020_12345 commented on The Future of Venture Capital: Humans vs. Machines:

I did not know that VC firms were using machine learning to guide their investments and found that fascinating. I was impressed by the scoring system. It makes sense that Google would use the vast data it has available. The question of how much weight to give to the machine’s recommendation versus a human’s is a tricky one.

I had not heard of robo financial advisors and found the concept intriguing. It is worth considering how robo-advisors could be deployed to better serve female investors, and I like the idea of a women-centric product. I am skeptical that machine learning alone will reverse gender biases, but if thoughtful humans are curating the training data, perhaps it is possible.

It was interesting to learn more about how Facebook uses machine learning to identify fake news. I like your idea that Facebook should be more transparent about which political content is paid. I think the public sector has a responsibility to provide some guardrails for political messaging online, though it should not be unnecessarily prescriptive.

On November 14, 2018, HBS2020_12345 commented on The benefits of A.I. in Sub-Saharan Africa:

It was very interesting to learn how machine learning could be applied to assess creditworthiness in developing countries. The question of whether MyBucks is invading privacy is an interesting one. My view is that the risk is real, but I am not sure how to resolve it.

The Atul Gawande quote was very powerful and encapsulates the challenge at hand well. It is great that machine learning could accelerate how quickly patients get results. The risk of machine medical error is scary and would likely be received critically by the general public. I wonder whether machine learning could also expose human bias rather than replicate it.