Big data steps towards the creation of personalised medicine

February 21, 2014 − by Suzanne Elvidge − in Big data, Data analytics, Drug development

Personalised medicine, the ability to tailor treatment to the individual patient, is a growing field. Because it has the potential to make clinical trials more effective and to help physicians pick the best treatment first time, it could improve outcomes for patients and cut costs for pharmaceutical companies and payors.

One of the key drivers for personalised medicine is the creation of new treatments that are better targeted and more effective. Techniques such as molecular simulation can be used to estimate the potential activity of a drug against a target, enabling lead optimisation in drug discovery, personalisation of treatment, and stratification of patients both in clinical trials and in everyday practice.

In a collaboration across the Atlantic, researchers from University College London (UK) and Rutgers University (USA) have looked at the use of big data from molecular simulation to predict the activity of potential personalised drug treatments targeted against HIV protease. The results have been published in the Journal of Chemical Theory and Computation and were reported at the annual meeting of the American Association for the Advancement of Science (AAAS).

HIV protease plays an important role in the lifecycle of HIV, and its protein structure (and therefore its activity) varies slightly from individual to individual. The researchers used molecular simulation to model the shape of the HIV protease produced by viruses from different individuals (based on each virus’s gene sequence) and to rank nine FDA-approved HIV-1 protease inhibitors by how strongly each was predicted to bind.

“We show that it’s possible to take a genomic sequence from a patient; use that to build the accurate, patient-specific, three-dimensional structure of the patient’s protein; and then match that protein to the best drug available from a set. In other words, to rank those drugs – to be able to say to a doctor ‘this drug is the one that’s going to bind most efficiently to that site. The other ones, less so’,” Peter Coveney, director of the Centre for Computational Science at University College London, told the BBC.
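As a rough illustration of the workflow Coveney describes, the Python sketch below mocks the two expensive stages, patient-specific structure building and per-drug binding scoring, so that only the overall flow (viral sequence in, ranked drug list out) survives. The function names and scores are placeholders, and the inhibitor list is illustrative rather than taken from the paper.

# Hedged, illustrative sketch of the sequence-to-ranking workflow described
# above. build_patient_structure() and predicted_binding_free_energy() stand
# in for the real (and very expensive) modelling and simulation stages.
INHIBITORS = [
    "saquinavir", "ritonavir", "indinavir", "nelfinavir", "amprenavir",
    "lopinavir", "atazanavir", "tipranavir", "darunavir",
]  # illustrative list, not necessarily the nine used in the study

def build_patient_structure(viral_sequence: str) -> str:
    """Mock: derive a patient-specific protease model from the gene sequence."""
    return f"protease-model-{abs(hash(viral_sequence)) % 1000}"

def predicted_binding_free_energy(structure: str, drug: str) -> float:
    """Mock score; in the study this step takes many hours of simulation.
    More negative means stronger predicted binding."""
    return -(abs(hash((structure, drug))) % 100) / 10.0

def rank_drugs(viral_sequence: str) -> list[tuple[str, float]]:
    structure = build_patient_structure(viral_sequence)
    scores = [(d, predicted_binding_free_energy(structure, d)) for d in INHIBITORS]
    return sorted(scores, key=lambda pair: pair[1])  # strongest binder first

if __name__ == "__main__":
    for drug, score in rank_drugs("ATG...patient viral sequence..."):
        print(f"{drug:<12} predicted binding score {score:6.1f}")

In the real pipeline, each call to the scoring step corresponds to many hours of simulation on a supercomputer; the final sort is the only cheap part.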

The ranking required around 50 simulations of these models; each simulation ran on around a hundred processor cores for 12 to 18 hours, generating vast quantities of data. These complex datasets then required post-processing and analysis.
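Taking those figures at face value, a back-of-the-envelope calculation (built only on the numbers quoted above) puts a single patient’s ranking in the tens of thousands of core-hours:

# Rough cost of one patient ranking, from the figures quoted above:
# ~50 simulations, each on ~100 cores, each running 12 to 18 hours.
simulations, cores_per_run = 50, 100
low = simulations * cores_per_run * 12   # 60,000 core-hours
high = simulations * cores_per_run * 18  # 90,000 core-hours
print(f"roughly {low:,} to {high:,} core-hours per patient")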

While this is only a proof-of-concept study, future advances in computing power and big data analysis could make these kinds of calculations quick and cheap enough to provide practical support for doctors deciding which drugs to offer their patients.

While GenoKey wasn’t involved in this research, the company is working on tools and techniques that allow the mining of complex datasets to make personalised medicine, including personalised preventive medicine, a reality.
