{"id":80,"date":"2020-04-11T12:47:09","date_gmt":"2020-04-11T16:47:09","guid":{"rendered":"https:\/\/digital.hbs.edu\/platform-peopleanalytics\/submission\/locked-in-by-algorithms\/"},"modified":"2020-04-11T12:47:09","modified_gmt":"2020-04-11T16:47:09","slug":"locked-in-by-algorithms","status":"publish","type":"hck-submission","link":"https:\/\/d3.harvard.edu\/platform-peopleanalytics\/submission\/locked-in-by-algorithms\/","title":{"rendered":"Locked in by Algorithms?"},"content":{"rendered":"<p><strong><b>Bail and risk-assessing algorithms<\/b><\/strong><\/p>\n<p>The US justice system uses bail to impose restrictions on an individual before trial. The purpose of bail is three-fold: (i) to ensure that the defendant will show up to court, (ii) to allow the defendant to proceed with their life until a firm sentence is made, (iii) to lower the incarceration rate, reducing prison system costs.<\/p>\n<p>If bail is not set correctly, we face two possible negative outcomes:<\/p>\n<ul>\n<li>If too low: bail fails to achieve (i), since the defendant may lack the incentive to show up to court<\/li>\n<li>If too high: the defendant cannot pay, and bail fails to achieve (ii) and (iii)<\/li>\n<\/ul>\n<p>Traditionally, it is up to the judge\u2019s discretion to set the bail. There are two main reasons why this has proven problematic. First, it can be very arbitrary (e.g. some judges in New York City are more than twice as likely as others to demand bail [1]). Second, judges are humans and as such are not exempt from unconscious biases.<\/p>\n<p>To combat these shortcomings, more and more states have introduced risk-assessing algorithms. These algorithms offer a risk score of the defendant that accounts for the probability of a no-show to court and\/or recidivism. The score is provided to the judge to inform the bail decision, but not to mandate it.<\/p>\n<p>&nbsp;<\/p>\n<p><strong><b>The results are in: automated risk assessment and inequity<\/b><\/strong><\/p>\n<p>Many hope that the introduction of algorithms in the bail process will result in a more efficient and fairer system, but evidence is mounting that points to the contrary.<\/p>\n<p>Algorithms do not eliminate bias, but in fact can perpetuate inequities found in the historical record. 
**The results are in: automated risk assessment and inequity**

Many hope that introducing algorithms into the bail process will produce a more efficient and fairer system, but mounting evidence points to the contrary.

Algorithms do not eliminate bias; in fact, they can perpetuate inequities found in the historical record. A 2016 ProPublica report on Broward County, Florida examined algorithmically generated risk scores and found that they rated black defendants significantly more likely than white defendants to be at risk of committing a violent crime [2].

**Fig. 1:** In the ProPublica data, the number of white defendants trends steadily downward as the risk decile score increases; no comparable downward trend is visible for black defendants. [6]

**Fig. 2:** In the model predicting the violent-recidivism score, race is a statistically significant factor only for African-American defendants.
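For readers who want to see what "significance of race" means here, the sketch below approximates the kind of regression behind Fig. 2. It assumes ProPublica's published Broward County extract [6] and its column names (`v_decile_score`, `race`, `sex`, `age`, `priors_count`); the exact specification is my reconstruction, not ProPublica's verbatim code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumes ProPublica's violent-recidivism extract from [6].
df = pd.read_csv("compas-scores-two-years-violent.csv")

# Binarize the violent decile score the way the analysis groups it:
# 1-4 = low risk, 5-10 = medium/high risk.
df["high_score"] = (df["v_decile_score"] >= 5).astype(int)

# Logistic GLM of the score on race (Caucasian as the baseline),
# controlling for sex, age and prior record. A significant coefficient
# on a race level means race shifts the score even after the controls.
model = smf.logit(
    "high_score ~ C(race, Treatment('Caucasian')) + C(sex) + age + priors_count",
    data=df,
).fit()
print(model.summary())  # inspect the p-value on the African-American term
```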
**Fig. 3:** The previous two figures show that black defendants are more likely to receive high risk scores and that race is a significant factor only when the defendant is black. The ProPublica code shown here demonstrates the harm that bias causes: the false positive rate (i.e. defendants who score as high risk but do not recidivate) is 45% for black defendants, against 23% for white defendants and an overall rate of 32%.
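The false positive rates quoted in Fig. 3 can be reproduced in a few lines over the same published extract [6]. Again the column names come from ProPublica's data, and the "high risk" cutoff of 5 is their grouping; this is a sketch of their comparison, not their notebook verbatim.

```python
import pandas as pd

# Assumes ProPublica's two-year recidivism extract from [6], with
# decile_score, two_year_recid and race columns.
df = pd.read_csv("compas-scores-two-years.csv")
df["predicted_high_risk"] = df["decile_score"] >= 5

def false_positive_rate(group: pd.DataFrame) -> float:
    # FPR: share of actual non-recidivists who were scored high risk.
    non_recidivists = group[group["two_year_recid"] == 0]
    return non_recidivists["predicted_high_risk"].mean()

print(f"overall: {false_positive_rate(df):.0%}")  # ~32%
for race in ["African-American", "Caucasian"]:
    subset = df[df["race"] == race]
    print(f"{race}: {false_positive_rate(subset):.0%}")  # ~45% vs ~23%
```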
In his Wired article [3], Tom Simonite uses evidence from Kentucky to reveal similar consequences of relying on risk-assessment algorithms. After the state passed a law in 2011 requiring judges to consult the risk score, the overall proportion of defendants granted release increased, but the increase was significantly larger for white defendants than for black defendants.

**In defense of the algorithm**

So are we stuck? Should this evidence of inequitable outcomes doom predictive risk-assessment models? While experts agree that bias is a major concern, some try to find a middle ground.

Although Simonite's tone in his article is very critical of algorithms in this context, his criticism suggests that there might still be hope for predictive modeling (though I'm not sure he would agree with that assessment). He highlights that what often prompts a judge to make a biased decision is *not the result* of the algorithm but the judge's interpretation of it. A Kentucky study [4] showed that, given a moderate risk score, judges were more likely to release white defendants than black defendants.

In a 2017 New York Times article [5], González-Bailón and her co-authors point to benefits of algorithms in the bail context: the incarcerated population decreases without an increase in crime. Evidence from Virginia, New Jersey and Philadelphia backs up their claim. They also dispute the notion that "algorithms simply amplify the biases of those who develop them and the biases buried deep in the data on which they are built": in their view, algorithms are not to blame for biases deeply ingrained in the system, and a well-designed algorithm need not perpetuate them.

**Concluding thoughts**

One certainty is that relying on judge discretion alone can result in bad outcomes: judges, as humans, bring their own unconscious biases to decision-making, and the resulting inefficiency means more people in jail.

There are pros and cons to relying on algorithms. On the plus side, they can:

- help mitigate biases in judges' interpretations
- reduce incarceration rates
- make for a more efficient system in which judges can focus on ruling.

On the flip side, leaning on algorithms can:

- diffuse accountability
- perpetuate the racial discrimination inherent in the justice system if the algorithms are not well designed (as frequently appears to be the case today).

To me, the best outcome has to be an integrated human-machine decision-making process: we need to capture the benefits of automated risk assessment while understanding and mitigating the bias it can bake into outcomes. Recognizing the complex results so far, I still support algorithmic risk assessment, but with clear checks and balances provided not only by the judge but also by an external auditing party; one possible check is sketched below.
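As one illustration of what such external auditing could look like in practice, the hypothetical check below flags a deployed tool whose false positive rates diverge across groups by more than a chosen tolerance. The threshold, the column names, and the choice of FPR as the audited metric are all my assumptions, not an established standard.

```python
import pandas as pd

TOLERANCE = 0.05  # assumed maximum acceptable FPR gap between groups

def audit_fpr_gap(scored_cases: pd.DataFrame) -> bool:
    """Hypothetical audit: pass only if false positive rates are close
    across racial groups. Expects predicted_high_risk, two_year_recid
    and race columns, as in the ProPublica extract [6]."""
    non_recidivists = scored_cases[scored_cases["two_year_recid"] == 0]
    fpr_by_group = non_recidivists.groupby("race")["predicted_high_risk"].mean()
    gap = fpr_by_group.max() - fpr_by_group.min()
    print(fpr_by_group.round(2), f"gap = {gap:.2f}", sep="\n")
    return gap <= TOLERANCE
```

An auditor would run checks like this on live decisions, not just on the training data, so that drift in who gets flagged is caught after deployment.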
**References**

[1] https://academic.oup.com/qje/advance-article-abstract/doi/10.1093/qje/qjx032/4095198?redirectedFrom=fulltext

[2] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[3] https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/

[4] http://www.law.harvard.edu/programs/olin_center/fellows_papers/pdf/Albright_85.pdf

[5] https://www.nytimes.com/2017/12/20/upshot/algorithms-bail-criminal-justice-system.html

[6] https://github.com/propublica/compas-analysis/blob/master/Compas%20Analysis.ipynb

Note: [3] is the article that prompted this post; the rest are references.