Indigo Watson's Profile
Activity Feed
This is a great piece – thank you! I find Hill’s comment (that ‘Neighborly could unintentionally be making communities less democratic by giving a disproportionate amount of influence to those wealthy enough with assets and liquidity to purchase bonds and drive project spending’) to be instructive, and I wonder whether another factor is also in play. Could it be that some communities, perhaps even those most in need of this type of investment, will be unable to attract it in the first instance because there aren’t enough investors from within the community (wealthy or otherwise)? In that case, might the model give outsized influence to investors external to the community, in a way that drives outcomes that are suboptimal for resident community members?
This is a really engaging conversation! One (perhaps entirely implausible) variation on this theme is whether it would ever be sensible for Adidas to market the 3D printers themselves to premium customers – in effect, selling the printers to customers and allowing them to make their own Adidas shoes. Customers would only be allowed to make shoes for themselves (i.e., each machine would be programmed for only one person’s foot), and they would have to source materials exclusively from Adidas, but the arrangement could make the technology a ‘must-have’ for premium footwear customers and provide another justification for continued development of 3D printing beyond mere prototyping.
Some cities are promoting policies that encourage people to build affordable housing units in their backyards (e.g., Los Angeles is even piloting a program that would pay people to build small backyard homes to house homeless individuals: http://www.latimes.com/local/lanow/la-me-ln-homeless-tiny-house-20180411-story.html). I wonder if this project could begin combating the NIMBY (‘not in my backyard’) resistance that permeates the homelessness and affordable housing debates in many U.S. cities by partnering with cities on similar projects.
Like Tomas, I’m struck by the morally consequential decisions that will need to be built into some of these algorithms. Specifically, what will be the relationship between the businesses creating these technologies and government regulators? Will regulators focus only on (suboptimal, after-the-fact) outcomes, or will they take part in developing these algorithms and become partners in navigating the ethical challenges that Tomas mentions above?
I wonder if we will soon see cross-training of lawyers and coders – for example, will junior associates differentiate themselves from their peers (thereby making themselves more valuable to potential employers) by becoming familiar with this technology? Separately, you note that the team ‘chose to check Kira’s findings manually.’ Given the high stakes, how will law firms become comfortable enough with this type of technological disruption to realize the cost savings of delegating this work to Kira-type platforms?
Thanks for the post! This essay reminds me of an article I recently read about China’s first AI news anchor (https://www.npr.org/2018/11/09/666239216/ai-news-anchor-makes-debut-in-china) – the AI anchor invites similar analyses to those mentioned in the comment above (i.e., what the impact of such changes will be on the labor market). Beyond workforce participation, however, I wonder if there is an additional set of questions around the implications of machine-learning-driven mistakes and vulnerabilities. For example, mistakes made in Jeopardy or in baseball game recaps may be less consequential than mistakes in an article about fiscal or monetary policy. Moreover, could these (news-gathering and news-presenting) algorithms become targets for interference by hostile actors?