N. Fleming's Profile
A really thought-provoking topic, and frankly, scary as hell.
To the question of whether CAD drawings for printable guns can and should be protected as speech, my answer is unequivocally no. In the same vein that obtaining instructions and materials to build bombs, or possessing explicit images of minors, is highly illegal, so too should this be. The fact is, these things do tremendous, tangible damage.
The next logical question, of course, is how we police this. While not perfect, I would look to AI technologies to monitor the potential illicit circulation of such files. In a similar manner to how Facebook has flagged over 2 million instances of potential terrorist activity across its platform, one could automate the flagging process for circulation here.
Very cool to see this from Lego! In principle, it makes a ton of sense – what better way to engage an eager and creative customer base than to involve them in the very development of the products they will use. On the point regarding potential dissent from the company’s design team, I’m not quite so worried. Given that only one idea is brought to market each year, it’s not as though designers are competing for work with the broader public. Further, I would think a tremendous amount of design and engineering occurs between conceptualization and market launch. That said, I do agree that running an open innovation funnel such as this commands a certain degree of transparency, so as to avoid the problem mentioned above of Lego receiving criticism for stealing ideas from the public.
Interesting to see how crowdsourcing flopped in this context. It makes me think of other examples where crowdsourcing has worked and what’s different, and I think it comes down to how much value is in each incremental data point. If you’re Ancestry.com and trying to relate genes to predispositions for various illnesses, under no circumstance does adding a data point harm you (provided the data is accurate); your data set just continues to become more robust. In a creative application such as this, however, the sheer magnitude of 27,000 ideas is far too much to meaningfully analyze. It’s as though the more data you get, the harder it becomes for the best ideas to stand out.
Really interesting to see how seemingly disparate data can come together to build a picture of a person’s creditworthiness. On the notion of data security, I agree it’s imperative to take precautions. That said, perhaps a first step might be to reduce the data collected from the get-go. With 10,000 data points per user, would it not take a tremendous historical record of financial performance for the algorithms to implicitly attribute the probability of repayment to the various factors? Is there such a thing as too much data?
Spencer – great essay. This is such a fascinating and meaningful problem given the polarization of the current political climate. I think your point about the erosion of trust in real media sources being a by-product of the fake news phenomenon is spot on. Putting myself in Facebook’s shoes, I think it’s unreasonably aspirational to expect they will catch all fake news before it propagates across news feeds. That said, I think it is realistic to expect them to own the longitudinal responsibility of informing users about media they have interacted with that was later flagged as fake. Whether it’s some sort of notification several days later or otherwise, this may help lighten the burden they are putting on their AI technology.
One other thought: to what extent is it Facebook’s role to preserve political objectivity here? Is a left-leaning AI fake news algorithm a bad thing? I think of more traditional businesses with explicit political or religious agendas (e.g. Chick-fil-A’s closed-on-Sundays policy) and wonder what’s different here. As a Facebook user, I tend to take for granted its utility as a communication tool, with little consideration that it is in fact a for-profit company with its own values, stances and perhaps social goals.