Andrew Baxter's Profile
Activity Feed
“Would relying solely on machine learning for drug discovery cause Pfizer to fail to identify an important combination of therapeutics that may be the cure to certain types of cancer?”
I think machine learning is a powerful tool in the hands of researchers in the field of drug discovery; however, I do not believe it will ever be usable on its own. Data strewn across “millions of scientific papers among thousands of journals” is simply impossible for a team of humans at Pfizer to analyze effectively. I agree that the power of machine learning here lies not just in the ability to read all of these papers, but in generating patterns and predicting relationships and hypotheses that must then be evaluated by experienced human scientists. It would not surprise me at all if, as you said, machine learning aids in discovering meaningful relationships that existing drug molecules have with other diseases and genes, relationships that are currently invisible to us.
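To make that idea concrete, here is a toy sketch of Swanson-style literature-based discovery, the simplest version of the pattern-mining described above: if a drug co-occurs with a gene in some papers, and that gene co-occurs with a disease in others, the indirect drug-disease link becomes a hypothesis for human scientists to evaluate. The entity lists and abstracts below are invented for illustration; a real system would run named-entity recognition over millions of papers.

```python
# Toy Swanson-style literature-based discovery: drug -> gene -> disease.
# All entities and "abstracts" are invented examples, not real data.
from collections import defaultdict
from itertools import product

DRUGS = {"metformin", "aspirin"}
GENES = {"AMPK", "COX2"}
DISEASES = {"melanoma", "glioma"}

abstracts = [
    "metformin activates AMPK in hepatocytes",
    "AMPK signalling suppresses glioma growth",
    "aspirin inhibits COX2 expression",
    "COX2 overexpression is observed in melanoma",
]

# Count drug-gene and gene-disease co-occurrences within each abstract.
drug_gene = defaultdict(int)
gene_disease = defaultdict(int)
for text in abstracts:
    words = set(text.split())
    for d, g in product(DRUGS & words, GENES & words):
        drug_gene[(d, g)] += 1
    for g, x in product(GENES & words, DISEASES & words):
        gene_disease[(g, x)] += 1

# Score indirect drug -> disease hypotheses by shared intermediate genes.
hypotheses = defaultdict(int)
for (d, g), n1 in drug_gene.items():
    for (g2, x), n2 in gene_disease.items():
        if g == g2:
            hypotheses[(d, x)] += n1 * n2

for (d, x), score in sorted(hypotheses.items(), key=lambda kv: -kv[1]):
    print(f"candidate hypothesis: {d} -> {x} (evidence score {score})")
```

Even this crude version surfaces links (metformin -> glioma, aspirin -> melanoma) that no single paper states outright, which is exactly the kind of output that still needs an experienced scientist to vet.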
I think the Community for Open Antimicrobial Drug Discovery is a fantastic example of open innovation. It's well accepted that the rise of antibiotic-resistant pathogens represents an existential threat to humanity, so it's excellent to hear that a solution to this threat is being driven by CO-ADD when big pharma have been sluggish to react. However, I agree that the biggest challenge here is carrying these molecules through the development process to become medicines that save lives. I worry that partnering with a large pharmaceutical firm may compromise CO-ADD's unique mission of protecting humanity against drug-resistant microbes if these molecules are not aligned with big pharma's current short-term financial goals.
There is a lot of open innovation in the realm of mitigating climate change. This has generated many ideas, but implementing and scaling them has proven difficult. The analogy between this article and climate change is that the pain felt by society now is low, so there is little financial incentive for businesses to implement these ideas. By the time the pain of drug resistance or climate change is truly felt, however, it may be too late to act.
The gulf between the current state of additive manufacturing and what is required on a Just-In-Time production line is quite startling: 30 seconds per part versus 2 hours per part. I wonder how quickly this time will come down as we make advances in robotics, 3-D printing, and materials science. Right now it seems like competing against scale-driven manufacturing processes in the auto industry is a battle additive manufacturing cannot win. I'm also wondering what innovations in additive manufacturing are required to minimize the post-processing that parts need after printing.
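A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how wide that gulf is:

```python
# Back-of-the-envelope arithmetic on the cycle-time gulf described above.
# Figures are the ones quoted in the discussion, not measured data.
takt_seconds = 30            # the line pulls one part every 30 seconds
print_seconds = 2 * 60 * 60  # one additive part takes ~2 hours

# Either run many printers in parallel...
printers_needed = -(-print_seconds // takt_seconds)  # ceiling division
# ...or shrink print time by the same factor.
speedup = print_seconds / takt_seconds

print(f"~{printers_needed} printers in parallel, or a {speedup:.0f}x speedup,")
print("would be needed just to match the line's takt time.")
```

Roughly 240 machines per part number, before any post-processing, is why head-to-head competition with stamping and molding looks so lopsided today.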
The real opportunity looks to be an additive manufacturing machine capable of producing all of Toyota's low-volume supply parts that are not subject to the strict Just-In-Time requirements. Eliminating the need for tooling seems like the biggest win here. There appears to be a case for 'made-to-order' parts for the very low-volume lines belonging to the out-of-production vehicles described above.
It's interesting to think about what other public services could be improved with crowdsourced information gathering. Your parallel with Waze made me think of public transport, especially in large metropolitan areas with extensive networks such as New York or London. I'm imagining crowdsourcing information through the “Transport for London” mobile app on how congested certain services are, along with cleanliness and maintenance issues picked up by customers. Transport users could drop tags or highlight specific buses or train cars, identifying services and areas that need improvement. Since public transport systems are often a target of terrorist attacks, this could be a way of building a habit of vigilance and reporting that paves the way for crowdsourcing security ideas.
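As a rough illustration of what one such rider report might look like, here is a minimal sketch; the app feature, field names, and categories are all hypothetical:

```python
# Hypothetical data model for a crowdsourced transit report, as imagined
# above for a "Transport for London"-style app. All fields are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Category(Enum):
    CROWDING = "crowding"
    CLEANLINESS = "cleanliness"
    MAINTENANCE = "maintenance"
    SECURITY = "security"

@dataclass
class RiderReport:
    category: Category
    line: str        # e.g. "Victoria"
    vehicle_id: str  # the bus or train car the rider tagged
    severity: int    # 1 (minor) .. 5 (urgent)
    note: str = ""
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example report a rider might submit from the platform.
report = RiderReport(Category.MAINTENANCE, "Victoria", "car-4521",
                     severity=3, note="door on carriage 3 not closing fully")
print(report)
```

Aggregating thousands of these tags per day is what would let the operator spot which services and vehicles need attention first.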
My major concern with using crowdsourcing for national security is that the resulting data would likely be extremely noisy and full of cultural bias and racial prejudice.
Schlumberger stepping into the realm of machine learning and artificial intelligence feels like it could be a watershed moment for the oil and gas industry. Many rigs I've worked on use technology and techniques that are over 30 years old; in some spaces it feels as if the industry is languishing in the Stone Age. Schlumberger have always pushed the boundary of oilfield technology, but this is something special.
Schlumberger’s DELFI platform seeks to bring artificial intelligence, data analytics, and automation together to aid complex modeling, simulation, analysis, and forecasting across the entire exploration and production life cycle. The goal of this technology is to improve operational efficiency and deliver optimized production at the lowest cost per barrel [1]. I agree with your point that Schlumberger are uniquely placed to do this, as they have access to a huge amount of data from oil and gas operators worldwide. I think their competitive advantage will be maintained by some of the most cutting-edge oilfield services software and hardware, which will perfectly complement their new machine learning offering. Another thing in Schlumberger's favor is that they are a familiar brand in an industry that is sometimes reluctant to embrace small new players, and innovation in general. One of the biggest risks I see is the sensitivity surrounding data in the oil and gas industry; proprietary reservoir and well data is extremely closely guarded. Schlumberger will need to convince their clients that this data remains safeguarded after it is fed to an AI that generates analysis for all of its clients.
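Purely as a sketch of the design space, with no claim about how DELFI actually works, here is one way that isolation question could be framed: each client decides whether raw data may leave its silo, or whether only aggregated model updates may be pooled (a federated-learning-style compromise). All client names and fields below are invented.

```python
# Illustrative framing of the data-isolation concern raised above.
# Nothing here reflects DELFI's actual architecture.
from dataclasses import dataclass

@dataclass
class TrainingPolicy:
    client: str
    share_raw_data: bool       # may raw well/reservoir data leave the silo?
    share_model_updates: bool  # may aggregated model updates be pooled?

policies = [
    TrainingPolicy("operator_a", share_raw_data=False, share_model_updates=True),
    TrainingPolicy("operator_b", share_raw_data=False, share_model_updates=False),
]

for p in policies:
    if p.share_raw_data:
        mode = "pooled training on raw data"
    elif p.share_model_updates:
        mode = "federated-style: updates pooled, raw data stays on site"
    else:
        mode = "fully siloed: per-client model only"
    print(f"{p.client}: {mode}")
```

Whichever point on that spectrum a vendor chooses, making the policy explicit and auditable is likely a precondition for operators handing over their most closely guarded data.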
[1] “DELFI Cognitive E&P Environment,” Schlumberger, https://www.software.slb.com/delfi, accessed November 2018.