In my view, Buzzfeed is the news platform of the 21st century. Content is a quantity game, as much as it is a quality game. Not every story lands, but the more stories you have, the more likely one will land.
With that in mind, it’s vital for Buzzfeed to continue to grow its user-generated content. The community elements of its website allow for two-way communication between authors and readers, which is prime real estate for valuable interactions and exponential growth in content. By placing more weight on dialogue within articles, each article can become a platform for further debate, which in turn drives both the amount of content produced and engagement metrics.
There’s huge potential in the news industry for AI to be utilised as a ‘truth’ gauge. With the endless stream of information out there, it is practically impossible for humans to sift through it all and uncover what should become a story. AI helps bridge that gap, augmenting the human effort to find the next big story.
There are a number of efforts focused on doing this, which can be leveraged to build a tailored solution for the news organisation in question. For this reason, I would lean on available APIs to develop an internal tool capable of identifying stories. I believe it needs to be an internal effort because your competitive advantage is your ability to source and report on stories; by bringing this capability in-house, you can differentiate yourself from your competition.
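As a rough illustration of what such an internal story-identification tool might do, here is a minimal Python sketch. Everything in it is hypothetical: the field names (`shares`, `mentions`, `source_credibility`) stand in for signals a real tool would pull from external news and social APIs, and the scoring weights are illustrative, not tuned.

```python
def surface_story_candidates(items, top_n=3):
    """Rank candidate items by a simple engagement score.

    `items` is a list of dicts with hypothetical fields: 'title',
    'shares', 'mentions', and 'source_credibility' (0-1). A real
    tool would populate these from external news/social APIs.
    """
    def score(item):
        # Weight raw reach by source credibility; the weights
        # here are purely illustrative, not tuned on real data.
        return (item["shares"] + 2 * item["mentions"]) * item["source_credibility"]

    return sorted(items, key=score, reverse=True)[:top_n]
```

The point of the sketch is the shape of the pipeline, not the scoring formula: external APIs supply raw signals, and the in-house logic that ranks them is where the differentiation lives.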
Great article on how additive manufacturing is changing the fashion game.
I see 3D printing as moving us in the direction of decentralising design. There’s a definite trend for increased personalisation and, with 3D printing, consumers have the tools to customise clothing and even design some themselves.
This changes the role of fashion companies. In this new world, the onus is on consumers to come up with designs. Do fashion companies now become platforms? What role do designers have? Do they become redundant in the process?
This brings into question the topic of open innovation too. Are these new designs owned by the consumer or the fashion brand?
An intriguing space! A lot to think about here.
You highlighted the challenge of picking the goose from amongst the ducks. Gauging bias and credibility is very challenging – not just for machines, but for humans too. You mentioned the difficulties machines have cutting through creative uses of language to side-step the algorithms, but in my view the gap will narrow and it will become increasingly difficult to bypass algorithms as they improve over time.
To address your question, it becomes a war of capabilities. Who has the better engineers – the ones building fake-news engines or the ones trying to weed out fake news? I would argue it’s the latter: the good guys should win this battle.
Great article that accurately reflects the struggle we faced at YouTube. With over 400 hours of content uploaded every minute, it’s virtually impossible for humans to manually review and remove videos that don’t meet community guidelines. That’s where machines have to step in and offer a helping hand – sifting through videos, pinpointing ones that should be removed and escalating ones that need further review by a human. A great example of machines and humans working together.
Beyond the scale of videos uploaded, livestreams offer an additional challenge. In my view, it would be possible to apply a similar approach to live videos as to on-demand videos, with the caveat that live videos may need a time lag. This time lag would allow the video to be continuously screened by YouTube’s systems to ensure community guidelines are being met. Further protections could be put in place, such as requiring creators to earn the platform’s trust before gaining the ability to livestream. This could take the form of a ‘clean history’ or a sizable subscriber base that indicates some level of credibility.
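The time-lag idea can be sketched as a small buffer: chunks of the live stream sit in a queue for a fixed delay and are screened before they are broadcast. This is a minimal Python sketch, not YouTube’s actual system; `violates_guidelines` is a hypothetical stand-in for a real trained classifier.

```python
from collections import deque

def violates_guidelines(chunk: str) -> bool:
    # Hypothetical content check: a real system would run a trained
    # classifier here. This toy version flags a marker string.
    return "BANNED" in chunk

def stream_with_lag(incoming_chunks, lag=3):
    """Broadcast a live stream behind a fixed lag (in chunks),
    screening each chunk before it leaves the buffer."""
    buffer = deque()
    broadcast = []
    removed_msg = "[removed - escalated for human review]"
    for chunk in incoming_chunks:
        buffer.append(chunk)
        if len(buffer) > lag:
            candidate = buffer.popleft()
            # Flagged chunks are withheld and escalated; clean
            # chunks go out with the delay intact.
            broadcast.append(removed_msg if violates_guidelines(candidate) else candidate)
    # Flush whatever is still buffered once the stream ends.
    while buffer:
        candidate = buffer.popleft()
        broadcast.append(removed_msg if violates_guidelines(candidate) else candidate)
    return broadcast
```

The design choice the sketch captures is the trade-off: a longer lag gives the screening system more time per chunk, at the cost of the stream feeling less “live”.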
Finally, we must not forget the power of the community in helping police platforms. With the right culture and systems in place, users on YouTube can be empowered to contribute to actively policing the platform, flagging videos for review when needed. On top of supporting policing, this would also feed the ML algorithms very helpful training data to fine-tune their video-spotting capabilities.
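One way that feedback loop might look: community flags nominate videos, human reviewers give verdicts, and only confirmed verdicts become training labels for the model. A minimal sketch, with hypothetical data shapes:

```python
def build_training_labels(flags, reviewer_verdicts, min_flags=2):
    """Turn community flags plus human review outcomes into
    {video_id: label} training pairs for a moderation model.

    `flags` maps video_id -> number of user flags received;
    `reviewer_verdicts` maps reviewed video_id -> True if the video
    actually violated guidelines. Only videos that cross the flag
    threshold AND have a human verdict yield a confirmed label.
    """
    labels = {}
    for video_id, count in flags.items():
        if count >= min_flags and video_id in reviewer_verdicts:
            # The human verdict, not the raw flag count, becomes
            # the ground-truth label the model trains on.
            labels[video_id] = reviewer_verdicts[video_id]
    return labels
```

Gating labels on a human verdict matters: raw flags alone are noisy (brigading, mistakes), so the community nominates and humans confirm before anything reaches the training set.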