Sam Wang's Profile
Sam Wang
Submitted
Activity Feed
Thank you so much for this informative post, Eleanor! I have always thought that there must be AI implementations in the energy industry, and there it is! It is amazing to see how much carbon reduction AI could provide. However, I am concerned about cases where AI makes wrong judgments that pose significant risks to the company, and how they could specifically mitigate those risks.
Amazing post, Serena! I have honestly been unsettled by how precisely the AI recommendation algorithms work for me on TikTok, and sometimes by their over-recommendation: when I click once on a random topic, all the videos suggested next are on that one topic I did not care about, which seems to me like a great invasion of privacy. I wonder about the role of regulators in this field. Should they put more restrictions on AI personalization? Should they return this freedom to the users?
This is very informative, Screeni! I had in fact never heard of Fiverr before, but what a brilliant idea the founders had at the beginning! The idea of setting a $5 fixed price is really cool and innovative. However, I really wonder what their margins are like. Just like the question for ZBJ: what is their business model, and how much do they make on each $5 transaction? Their sustainability issue with GenAI is also pressing, essentially cheap human labor (cost of living and education) competing with GenAI (cost of training).
Great post, Wabantu! I started trading on traditional platforms such as Fidelity and Schwab, whose mobile apps and web UIs I found really hard to use without clear, simplified logic. This led me to Robinhood, about half a year into trading. Initially I held great doubts about them after hearing numerous conspiracy theories about the GameStop event, as they seemed to be a greedy, Wall Street-backed platform that just didn't care about customers. But after using it for a while, I really found its app and UI to be far superior to those of their traditional competitors. The point Eleanor raised about the ethics of lowering the barrier to entry for trading is also interesting. I think they do not hold responsibility for lowering the bar to something people are legally allowed to do. If they should be held accountable for education, shouldn't educational institutions be the ones truly accountable?
This is a great post, David! I have personally been using StockX since high school (during the hype-beast era). Though I often browse their website just to check prices of particular items, I rarely buy directly from them due to the high fees I have to pay (I think at some point I had to pay over 20% of the total price in fees). Their process also used to be extremely slow, spanning a couple of weeks of wait time until the item was delivered. I do believe in their competitiveness, given their brand image as the "one and only" player in the market and their extensive authentication process that no one can match.
Thank you for this amazing post, Ben! I am a long-time AccuWeather user, and I have always wondered how it does weather forecasting so well that, in my opinion, no one else can compete with it. Now it all makes sense: they have a large group of experts who work on an immense amount of data to provide the best models. I am also intrigued by your mention of their biggest challenge being climate change, which really makes sense, as climate change is hard to predict (or maybe there is a way they can predict climate change and incorporate that into their models?). If given the chance, I would really love to see exactly how they collect, store, clean, and utilize data.
This is a very interesting post. I also wrote about how Tesla uses big data, but we offer different points of view. I am particularly interested in your mention of how Tesla lowers insurance prices by making more precise and accurate driving-behavior predictions from data gathered by the advanced sensors in its vehicles. I had heard about Tesla Insurance before, but I have a relatively negative opinion of its implementation. Tesla judges the way you drive in almost every aspect: did you speed, did you drive recklessly (maybe determined by how often you turn for no reason?), did you park accurately, etc. However, such a judgment of a "perfect driver" only holds under ideal conditions, where every other driver on the road is assumed to be perfect too. There are always others who drive like they own the road (honorable mention: Boston drivers). Exceptions can also happen, such as when you need to run a red light to make way for emergency vehicles. While I do not have any documentation backing this up, I have heard that it is really hard to appeal a "reckless driving" judgment made by the AI and attribute it to others' faults instead of yours. I can see that if Tesla became very lenient with appeals, people might just lie for lower insurance premiums, and it is really hard to adjudicate without more data from more advanced sensors. I would not use their insurance unless I absolutely had to.
Thank you, Serena, for this amazing post! I have always thought about how CVS is in a great position to use data to enhance healthcare, and your analysis really helps me understand the specific processes. Everything seems very natural and amazing: CVS works with Microsoft and its big data collection to provide predictive options for diabetes, cancer, flu, etc. I also really like your mention of the data privacy leak case at the end, which may be more serious in the context of health information compared to other fields where data is not as sensitive. I would also like to raise some other challenges that came to mind while reading the post:
I learned at HMS how messy EHR data can be for machine learning models. I wonder how well CVS could build models that actually help the population in a proper way (such as with low additional financial burden) while remaining unbiased across demographics or psychographics. For risk prediction, where any warning that you are at risk is arguably better than nothing, it might make sense to trust a more liberal algorithm that minimizes false negatives. However, what about the effects of false positives (patients who are not at high risk but are predicted by the model to be high risk), which could add unnecessary financial burdens to patients? Is there a real moral balance between minimizing false positives and minimizing false negatives, weighing health against finances? Of course, this question is almost impossible to answer, but it is very interesting and crucial to think about.
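The trade-off above can be made concrete with a small sketch: a risk model outputs a score per patient, and the decision threshold directly trades false negatives against false positives. All numbers below are made up for illustration; this is not CVS's actual model or data.

```python
# Hypothetical illustration of the threshold trade-off in a risk model.
# Scores and labels are invented toy data, not real patient data.

def confusion(scores, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Toy predicted risk scores and true outcomes (1 = actually high-risk).
scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

# A "liberal" (low) threshold flags more patients: fewer missed cases (FN),
# but more patients wrongly flagged (FP) and possibly billed for follow-ups.
for t in (0.25, 0.50, 0.75):
    fp, fn = confusion(scores, labels, t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

On this toy data, lowering the threshold from 0.75 to 0.25 drives false negatives to zero at the cost of more false positives, which is exactly the health-versus-finances tension in the question: the "right" threshold is a value judgment, not a purely statistical one.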