Cherish Weiler

  • Student

Activity Feed

On November 15, 2018, Cherish Weiler commented on Using Machine Learning to Combat Deforestation :

This is such an interesting concept that I wasn’t aware of before. I’m curious as to the methods employed before acoustic monitoring and why they were so ineffective. For example, I thought perhaps satellite imaging could have been an option, but I’m assuming that there is a time lag concern there – by the time deforestation is noticed on a satellite image, it’s too late. I think it’s interesting to think about how Rainforest Connection, as a non-profit, literally can’t afford human capital. Many for-profit companies find it cheaper to invest in growing their human workforce BEFORE transitioning to machines/software – Rainforest Connection doesn’t appear to have this “luxury” which I think could ultimately be a win for them in the sense that they don’t have to deal with the difficult conversation around human capital redistribution and the ethics around this “future of work”.

On November 15, 2018, Cherish Weiler commented on Tesla: How Machines Are Now Driving Our Cars :

Such a fascinating read! The neural network project for continuous refinement of the onboard computers is especially critical; the quality of these computers has a direct and very, very real impact on the safety of the passengers. I think, for many people, the tech behind autonomous cars is beyond their comfort zone. But I wonder, with those same people deeply concerned about the safety of these vehicles, should Tesla make an effort to communicate the sophistication of their technology and data processing and, if so, how? Do they have a responsibility to make this knowledge public and accessible, so as to ease concerns and fuel buy-in in the movement towards autonomous vehicles? Would it be a benefit or waste of their time?
I’m also very interested in the ethics of this problem – some individuals think that Tesla, and similar companies employing ML systems, have a responsibility to hire Ethics Officers/Employees to oversee the salient moral implications of automating human decisions. At the same time, although there have been serious accidents, have we not already seen a massive improvement in the quality of driving among these autonomous vehicles, which would lead to a major reduction in motor-vehicle accidents? Is this also a matter of educating the public? Do we even need their understanding if this is a trend that will move forward regardless?

On November 15, 2018, Cherish Weiler commented on Training Robots to Curate Individualized News :

Great read! Machine learning could not be a better fit for an organization like Newstag. We as consumers expect the products/content platforms we use to be increasingly tailored to our needs and likes – I can’t even imagine the level of specificity consumers will demand from content platforms in 10, 20, or 30 years’ time. By the time our children and grandchildren are engaging with content platforms, I suspect the techniques Newstag currently employs (using historical data and third parties to generate recommendations) will look archaic.

I completely agree that the volume of data Newstag needs to process is getting beyond human capabilities; very soon machine learning algorithms will be the ONLY way to manage this data. I’m curious to know how Newstag considers its 1) redistribution of human capital and 2) investments in software vs. software developers. Are they concerned about the implications of this type of work becoming more and more software dependent, with less and less reliance on human intervention? How are they navigating that discussion? Should we as consumers be concerned if the companies who supply our content platforms are NOT discussing the ethics of moving away from human capital and towards machines?

On November 15, 2018, Cherish Weiler commented on Bridgestone: Production System Innovation Through Machine Learning :

Interesting to read about the necessity of innovation in a company that I don’t think of when I think “innovation”. The EXAMATION system sounds not just innovative but necessary in the increasingly competitive, quality-driven industry. I’m extremely curious how the implementation of this ML system affects the distribution of human capital – I recognize that it is a critical move to incorporate systems that increase quality and safety rather than rely on human assessment, but how does Bridgestone view its responsibility to the human workforce it is/will inevitably displace? What moral obligation do they have in displacing their workers? Are they responsible (as they are, I’m assuming, one of the first-movers in using ML instead of humans for product quality assessment) for leading the dialogue on the ethics of a decision like this, a decision their competitors will likely have to make as well?

On November 15, 2018, Cherish Weiler commented on Bots with a Touch of Humanity :

So good to read about “the future of work,” which should be/is top of mind for a lot of companies today. I see how, with the majority of a company’s data being “Dark Data”, it is still quite difficult for RPA to be fully utilized and its benefits fully realized. IQ Bot sounds like a critical next step, and it sounds like AA was essentially forced to make the investment in developing IQ Bot upfront – I imagine that companies were/are not willing to commit to AA software knowing it could only accurately process 20% of their data. But will IQ Bot be sophisticated enough soon enough for AA to get a proper return on this investment? Are IQ Bot’s data capabilities at 50% now? 60%? What’s the threshold required by companies today (who I’m hoping are sophisticated enough to understand that 50% data processing via software can still be worth it)?

One of my biggest concerns would be around what happened with IBM Watson – the technology was phenomenal, but the requirements behind integration were too much for companies to bear, discouraging them from pursuing implementation of the software.
Another concern I have is the one you mentioned about the ethics of displacing the human workforce. Frankly, I’d be extremely disappointed if AA is not contributing to the dialogue around the ethics of human capital redistribution; while I can’t say I know WHO bears the ultimate responsibility for the implications of this workforce “transition”, I think the least AA could do (as a contributor to this workforce displacement) is foster collaborative and constructive conversation around the topic.


Great read! It’s interesting to see the growth of companies like CloudFlare as the internet has become increasingly complex. I don’t see the need for companies like this diminishing anytime in the future, although I wonder how the future of privacy will affect them – some individuals believe that personal privacy should/will be dissolved significantly over time (potentially to a point where personal privacy will no longer exist in a meaningful way), so I wonder how CloudFlare thinks about that in terms of 1) the likelihood of this being a possibility for the future (and how far into the future) and 2) what it could mean for their business. On the second point, how does the “death of privacy” change the needs of CloudFlare’s consumers? Would CloudFlare offer increasingly sophisticated personal privacy solutions? Or would they accept the limitations of personal privacy and pivot to “making the internet a better place” initiatives?

Beyond privacy concerns, I agree that proactive countermeasures against cyber attacks are crucial and that these will only become more and more proactive. Perhaps it will even get to a point where reactive countermeasures are considered no longer effective. IoT devices are another fascinating topic for cyber security – it’s interesting to see how “breach-able” the current IoT devices on the market already are. I’m a little surprised CloudFlare hasn’t moved more quickly around IoT device protection considering the level of media attention and press those breaches get; is this a case of IoT device breaches being blown out of proportion by the media, or a case of CloudFlare, as a startup, being too small to react as quickly as we would want/expect?