{"id":281,"date":"2020-04-13T19:12:55","date_gmt":"2020-04-13T23:12:55","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/"},"modified":"2020-04-13T19:12:55","modified_gmt":"2020-04-13T23:12:55","slug":"racial-bias-in-healthcare-algorithms","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/","title":{"rendered":"Racial Bias in Healthcare Algorithms"},"content":{"rendered":"<p>There&#8217;s notoriously high human error in medicine. Data driven decision making could help us improve healthcare outcomes, but algorithms based on data can be imperfect, too. What level of algorithmic error is acceptable as in the pursuit of better health outcomes?<\/p>\n<p>Healthcare is a huge area of opportunity for data-driven decision making \u2013 in the US, healthcare spending is 20% of our GDP and it\u2019s estimated that 30% ($750 billion!!!) is waste because of all the errors and inefficiencies in care. In the US, we have the highest cost of care of any country in the world, yet some of the worst health outcomes in the developed world.<\/p>\n<p>So can algorithms help us suggest the right course of treatment for a patient and reduce human error? Can they help us most accurately diagnose a patient based on their clinical history? And can they help us drive down costs by comparing clinical outcomes based on different treatments?<\/p>\n<p>With all this in mind, I was struck by <a href=\"https:\/\/www.washingtonpost.com\/health\/2019\/10\/24\/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients\/\">this article<\/a> from the Washington Post, titled &#8220;Racial Bias in a medical algorithm favors white patients over sicker black patients.&#8221; The article reports on an Optum algorithm that was found to have significant racial bias. 
The algorithm wasn\u2019t intentionally racially biased (in fact, it did not include race as an input) \u2013 instead, it used future healthcare spending as a proxy for future disease. But it turns out that white Americans spent about $1,800 more than black Americans on healthcare. As a result, the algorithm consistently recommended more medical care for the white Americans it deemed to be \u201csicker\u201d (when in fact, they were just consuming more of our healthcare resources). This is striking because it shows the dangers of conflating something like healthcare consumption with healthcare need \u2013 different populations may consume healthcare differently (for cultural reasons, accessibility of care, cost of care, insurance coverage, etc.). It also shows the risk of algorithms reinforcing bias \u2013 in this case, the algorithm recommended more healthcare intervention for white patients (whom it deemed sicker), which only reinforced the existing discrepancy in healthcare consumption.<\/p>\n<p>This is not a new issue. Studies in healthcare show racial bias in the care patients receive \u2013 black women in particular are much less likely to receive pain medication, for example, and other studies show that black patients are less likely to receive lung cancer treatment and cholesterol medications than their white counterparts. But what is scary about a racially biased algorithm is that race can be explicitly excluded from the algorithm, yet that doesn\u2019t mean <em>bias<\/em> was excluded, since the measuring stick chosen (consumption of healthcare) differs by race.<\/p>\n<p>I\u2019m currently working on a start-up that cleans and joins data to enable algorithm development. How do you make sure that your algorithms aren\u2019t biased, particularly when they can seem like a \u201cblack box\u201d in terms of what\u2019s recommended? 
And how do we manage the risk of data-driven healthcare? Presumably these algorithms can be corrected, but an early version might have issues. We are willing to accept human error, but are we willing to accept algorithmic error, particularly in healthcare, where decisions have life-or-death consequences?<\/p>\n<p>In this case, researchers were able to correct the bias with a relatively simple solution. They tweaked the algorithm to determine how sick a patient was based on their actual conditions, rather than on their healthcare spending.<\/p>\n<p>The end of the article mentions a future where we may stress-test algorithms with data scientists (just as security firms test whether a company\u2019s data security is sufficient).<\/p>\n<p>What do you think? Are the benefits of data-driven medicine worth the risk?<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There&#039;s notoriously high human error in medicine, but algorithms can be imperfect, too. How should we handle this? <\/p>\n","protected":false},"author":12999,"featured_media":282,"comment_status":"open","ping_status":"closed","template":"","categories":[465,443,427,460,476],"class_list":["post-281","hck-submission","type-hck-submission","status-publish","has-post-thumbnail","hentry","category-ai","category-algorithms","category-data-analytics","category-ethics","category-public-health","hck-taxonomy-country-united-states"],"connected_submission_link":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/assignment\/lpa-blog-assignment\/","yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Racial Bias in Healthcare Algorithms - Leading with People Analytics<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/\" 
\/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Racial Bias in Healthcare Algorithms - Leading with People Analytics\" \/>\n<meta property=\"og:description\" content=\"There&#039;s notoriously high human error in medicine, but algorithms can be imperfect, too. How should we handle this?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/\" \/>\n<meta property=\"og:site_name\" content=\"Leading with People Analytics\" \/>\n<meta property=\"og:image\" content=\"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-content\/uploads\/sites\/30\/2020\/04\/machine-learning-in-healthcare.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1000\" \/>\n\t<meta property=\"og:image:height\" content=\"1000\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/\",\"name\":\"Racial Bias in Healthcare Algorithms - Leading with People Analytics\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/wp-content\\\/uploads\\\/sites\\\/30\\\/2020\\\/04\\\/machine-learning-in-healthcare.jpg\",\"datePublished\":\"2020-04-13T23:12:55+00:00\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/#primaryimage\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/wp-content\\\/uploads\\\/sites\\\/30\\\/2020\\\/04\\\/machine-learning-in-healthcare.jpg\",\"contentUrl\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/wp-content\\\/uplo
ads\\\/sites\\\/30\\\/2020\\\/04\\\/machine-learning-in-healthcare.jpg\",\"width\":1000,\"height\":1000},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/racial-bias-in-healthcare-algorithms\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Submissions\",\"item\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/submission\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Racial Bias in Healthcare Algorithms\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/#website\",\"url\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/\",\"name\":\"Leading with People Analytics\",\"description\":\"MBA Student Perspectives\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/d3.harvard.edu\\\/platform-peopleanalytics\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Racial Bias in Healthcare Algorithms - Leading with People Analytics","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/","og_locale":"en_US","og_type":"article","og_title":"Racial Bias in Healthcare Algorithms - Leading with People Analytics","og_description":"There&#039;s notoriously high human error in medicine, but algorithms can be imperfect, too. 
How should we handle this?","og_url":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/","og_site_name":"Leading with People Analytics","og_image":[{"width":1000,"height":1000,"url":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-content\/uploads\/sites\/30\/2020\/04\/machine-learning-in-healthcare.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/","url":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/","name":"Racial Bias in Healthcare Algorithms - Leading with People Analytics","isPartOf":{"@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/#website"},"primaryImageOfPage":{"@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/#primaryimage"},"image":{"@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/#primaryimage"},"thumbnailUrl":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-content\/uploads\/sites\/30\/2020\/04\/machine-learning-in-healthcare.jpg","datePublished":"2020-04-13T23:12:55+00:00","breadcrumb":{"@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/#primaryimage","url":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-content\/uploads\/sites\/30\/2020\/04\/machine-lear
ning-in-healthcare.jpg","contentUrl":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-content\/uploads\/sites\/30\/2020\/04\/machine-learning-in-healthcare.jpg","width":1000,"height":1000},{"@type":"BreadcrumbList","@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/racial-bias-in-healthcare-algorithms\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/"},{"@type":"ListItem","position":2,"name":"Submissions","item":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/"},{"@type":"ListItem","position":3,"name":"Racial Bias in Healthcare Algorithms"}]},{"@type":"WebSite","@id":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/#website","url":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/","name":"Leading with People Analytics","description":"MBA Student Perspectives","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"}]}},"_links":{"self":[{"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/hck-submission\/281","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/hck-submission"}],"about":[{"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/types\/hck-submission"}],"author":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/users\/12999"}],"replies":[{"embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/comments?post=281"}],"version-history":[{"count":0,"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/hck-submission\/281\/revisions"}],"wp:featuredmedia":[{"
embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/media\/282"}],"wp:attachment":[{"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/media?parent=281"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/wp-json\/wp\/v2\/categories?post=281"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}