Interesting model. I've always wondered, though, whether this approach risks fundamentally undermining the core principles behind insurance. By rewarding super users who are already motivated and able to improve their fitness with lower health premiums (even through non-cash rebates), you are in principle raising costs for individuals with pre-existing health conditions that prevent them from ever being fully healthy.
I wrestle with this one personally, especially since over the course of my life I've been in both camps: a healthy, fitness-motivated potential super user and an individual afflicted by a chronic pre-existing health condition.
This is a great article, thank you for sharing. I must say, it also has me terrified, and perhaps indicates our need for prudent regulatory frameworks in sensitive industries such as healthcare, education, security, and law & order. The potential for harm if models are misapplied could be catastrophic.
Second, this article brings to mind a topic we have discussed a few times in class: the potential for algorithms to reinforce pre-existing biases found in the natural world. Before relying solely on the machine, it is therefore always prudent for a human to assess the outputs and gut-check the results for negative social outcomes.
Stress testing, much like peer review in academia, sounds like a prudent method of achieving that. Perhaps this suggests that the application of AI to sensitive industries should be done through open-source / collaborative models rather than closed / proprietary development.
This is a great topic, thanks for sharing. For me, it raises the question: what role will governments and society have to play in regulating the use of AI, especially in relation to data privacy or, more broadly, its application?
For example, should we ban the use of AI in law enforcement? One could argue that AI could assist police departments across the country by directing resources toward areas with higher predicted levels of crime. Or in warfare… should we have AI-guided weapons such as drones?
Certainly a lot of food for thought about the guardrails we as a society need to discuss and eventually place on such a powerful and potentially revolutionary technology.