Roni's Profile
Activity Feed
In response to your second question, I don’t believe that Lego is over-reliant on the external knowledge of its users. If anything, I view their adoption of open innovation as a shift from an insular design process to one that incorporates feedback from users. Companies already gather user feedback in a myriad of ways (e.g. surveys, focus groups), so I see open innovation in this case as just a further step in that direction. Last year, I saw news that Lego had developed a Boost kit that lets kids build five different smart toy models, including a cat, a robot, and a guitar, with the help of sensors and motors – so it seems that they are now linking feedback from users (kids) with demand from customers (parents / educators).
Really enjoyed reading this! Fascinating topic. Weighing in on the bias discussion, I think one way to mitigate the bias of over-sampling certain neighborhoods (e.g. those with a large percentage of minority or low-income residents) is to require that all criminal incidents within a given police district be included in the data used to train and continuously update the machine learning algorithm. In my view, correlation between the algorithm’s outputs and demographic attributes does not by itself imply bias – provided that the inputs to the algorithm are non-discriminatory. Those inputs need to be strictly regulated.
Another question on my mind is: what happens when humans adapt to this algorithm and change their criminal behavior patterns to “game the system”? For example, if offenders notice that police cars are patrolling certain blocks in traditionally “bad” neighborhoods more frequently, it’s possible that they will relocate their activities to less-patrolled blocks in traditionally “nice” neighborhoods – which could generate significant backlash from the residents of those places.
It’s incredible that the open innovation initiative was able to reduce the cycle time of NASA’s R&D process from 3–5 years to 3–6 months! A lot of the comments above have already touched on the security risk of enabling public access to previously internal information in an institution like NASA, but your piece also highlights a very interesting managerial dilemma: crowd-sourcing solutions has, in NASA’s case, undermined the employee value proposition that attracted high-caliber talent in the past. Extrapolating from what you mentioned in the article, many people join NASA despite a pay cut in exchange for the “right” to solve very important, complex problems, as well as the prestige that arises from the exclusivity of this access. Yet there are likely other employees who embrace an open innovation model and welcome the ability to collaborate with individual problem-solvers in the public. In light of this issue, I would add one more question for NASA’s management to your current list: if NASA continues to embrace open innovation, what changes do they plan on making – or not making – to their talent strategy?
Fascinating topic, and very well-articulated article! I share your concern about Safilo pushing ahead too quickly on supply-side innovation without improving its ability to forecast market demand. On the other hand, transitioning from traditional manufacturing (which, as you mentioned, requires large batches to reduce unit costs) to 3D printing might enable them to implement a just-in-time delivery system (similar to Toyota and its suppliers), thus reducing both the need to hold high levels of inventory and the costs of inaccurate demand forecasts.
Great post, Albert! I especially appreciate your use of infographics to clearly illustrate how the platform works.
While I think Einstein has a strong value proposition, the skeptic in me questions the validity of Salesforce’s claim to “democratize” artificial intelligence. As far as I can tell, this suite of customizable artificial intelligence tools is just another set of products that Salesforce is selling to its enterprise customers, even if it is doing so by embedding them into existing offerings. At its core, artificial intelligence is really just running regressions over large volumes of data to generate insights about the relationships between certain variables – that is, it’s a fancy term for data analysis, powered by modern technology. In this sense, I think it falls squarely within the value proposition of Salesforce itself. To truly “democratize” AI and accelerate its adoption, Salesforce would need to open up its technology to the broader public (similar to Valve). I’m not saying that it’s reasonable to expect a private company to do this – I’m just saying that I’m unconvinced about how genuine Salesforce is in positioning Einstein’s mission as “democratizing artificial intelligence.”