Great read, Melina. The major consideration for me is understanding the underlying data that the recommendations are built on, and how that relates to new verticals or markets that GO-JEK might expand into. There is likely a lot of data that GO-JEK simply doesn't collect on its users that affects their ride experience and overall satisfaction – maybe they were in a bad mood going into the ride, or maybe they just weren't looking when the scooter drove by. None of these variables are factored into the algorithm, yet they shape the very experience the algorithm is trying to optimize. As GO-JEK expands into new markets, it will face the recurring challenge of building that data set from the ground up before it can train a new algorithm.
Great read, Michael. I feel that focusing on measuring the quality of the trailer isn't really representative of how a movie will actually turn out, either in quality or in box office performance. I worry that by overindexing on the trailer, Fox might develop really top-notch trailers that don't lead to substantially higher box office performance – or worse, might mislead people about the overall quality of the movie. I do think there are ways to identify how elements of a script or cast correlate with viewer reaction – Netflix's development of House of Cards based on user search history and preferences is a great example of this. But I worry that Fox lacks the data that makes the Netflix approach compelling, which might limit or skew the final recommendations in a way that doesn't bear fruit.
Interesting read, Romaan. One of the tricky things about applying machine learning or algorithm-driven approaches to really critical activities like catching bad actors is deciding who's to blame when it goes wrong. We all know that humans are fallible, but even if a machine performs at a statistically better level, it's easy to ask "would a human have caught that?" when things go wrong. I do believe that using machine learning in such a scenario avoids much of the bias we're seeing with other applications of tech in homeland security. Preventing manipulation is ultimately about control – how can you ensure that the engineers building these systems are reviewed rigorously enough to stop bad actors from slipping in?
Really thought-provoking stuff here, Ennis. I wonder what motivation companies would have to bring this into their organizations – if other private organizations are already doing this at scale and delivering high-quality learning, it's probably easier (and cheaper) for companies to cover their employees' cost of attending rather than doing it in house. As more and more colleges move their courses online, whether through free offerings like MIT OpenCourseWare or paid ones like HBX, I wonder if General Assembly's brand will be strong enough to compete with these academic powerhouses. I think GA will need to find a specific niche that it can occupy, and hope that established universities don't compete there.
I still believe that future competitions by Hyperloop can produce the same level of marginal benefit, if not more, given that it's a totally new technology. This first round of the competition looks at only 10 markets across 4 countries. But if Hyperloop is to achieve its mission, it will need to expand beyond these initial markets into geographies that are significantly different, and additional open challenges like this one are one way to do that. There is also much more than choosing geographies that could be tackled in a competition – questions just as critical to the uptake and success of this new transportation modality, such as user experience, pricing, and integration with local or traditional modes of transport.
Because this is all so new, there is so much to learn from innovators all over the world.