Researchers report that a machine-learning tool, trained on large datasets of chemical-safety data, is so good at predicting certain kinds of toxicity that it now rivals, and occasionally outperforms, expensive animal-testing studies.
Computer models have been used across industry and academia for decades to predict toxicity. These models usually incorporate the chemical structure of a molecule, an understanding of its mode of action within the body, and data from animal tests or in vitro studies. In addition, the toxic effects of novel compounds can be predicted using a method called 'read-across', in which untested compounds are compared with structurally or biologically similar molecules whose effects are known. However, regulators still want these predictions to be corroborated by animal studies.
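The read-across idea can be sketched as a nearest-neighbour vote over molecular fingerprints: an untested compound inherits the hazard label that dominates among its most similar characterized analogues. The sketch below is a minimal illustration, not the procedure regulators actually use; the set-based fingerprints and the `read_across` helper are assumptions for demonstration only.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between binary fingerprints stored as sets of 'on' bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def read_across(query_fp, knowns, k=3):
    """Predict a hazard label for an untested compound by majority vote
    among its k most structurally similar characterized neighbours."""
    ranked = sorted(knowns, key=lambda c: tanimoto(query_fp, c["fp"]), reverse=True)
    votes = sum(1 for c in ranked[:k] if c["toxic"])
    return votes > k / 2

# Toy fingerprints (sets of structural-feature indices), purely illustrative.
knowns = [
    {"fp": {1, 2, 3, 5}, "toxic": True},
    {"fp": {1, 2, 4},    "toxic": True},
    {"fp": {7, 8, 9},    "toxic": False},
    {"fp": {6, 8, 9},    "toxic": False},
]
print(read_across({1, 2, 3}, knowns, k=3))  # nearest analogues are toxic
```

In practice real fingerprints (e.g. structural keys computed by cheminformatics toolkits) replace the toy bit sets, and expert judgment rather than a bare vote decides whether the analogy is justified.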
To improve on such software, scientists from the Johns Hopkins University Bloomberg School of Public Health in Baltimore, Maryland, created a huge database built on information collected by the European Chemicals Agency (ECHA) as part of a 2007 law known as REACH, which requires companies to register safety information on most chemicals marketed within the European Union. In 2016, this database covered 9,800 chemicals backed by nearly 800,000 animal tests. As of May 2018, the closing date for registrations, the ECHA had collected information on more than 20,000 compounds.
The team used data on 866,000 chemical properties and hazards to train the computer model to predict health hazards and chemical properties. The authors call the resulting model, which combines conventional chemical similarity with supervised learning, RASAR (read-across structure-activity relationship). In effect, the software mimics how a toxicologist views and analyzes a new chemical compound, but in an automated fashion. This draws attention to the novel possibilities of big data in the field of chemical safety.
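One way to read "chemical similarity plus supervised learning" is that similarity scores to known toxic and non-toxic compounds become the input features of a trained classifier, rather than the final answer as in plain read-across. The sketch below is an assumption-laden simplification: the `rasar_features` design, the toy fingerprints, and the tiny perceptron standing in for the study's classifier are all illustrative, not the published method.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between binary fingerprints stored as sets of bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

def rasar_features(fp, toxic_refs, safe_refs):
    """RASAR-style feature vector: similarity to the closest known toxic
    and the closest known non-toxic reference compound (a simplification)."""
    return (max(tanimoto(fp, r) for r in toxic_refs),
            max(tanimoto(fp, r) for r in safe_refs))

def train_perceptron(xs, ys, epochs=50, lr=0.1):
    """Tiny supervised learner standing in for the study's classifier."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(xs, ys):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 nudges the boundary otherwise
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(fp, toxic_refs, safe_refs, w, b):
    x1, x2 = rasar_features(fp, toxic_refs, safe_refs)
    return w[0] * x1 + w[1] * x2 + b > 0

# Toy fingerprints (sets of structural-feature indices), purely illustrative.
toxic_refs = [{1, 2, 3}, {1, 2, 4}]
safe_refs = [{7, 8, 9}, {6, 8}]
train_fps = [{1, 2, 3, 5}, {1, 4}, {7, 9}, {6, 8, 9}]
labels = [1, 1, 0, 0]
xs = [rasar_features(fp, toxic_refs, safe_refs) for fp in train_fps]
w, b = train_perceptron(xs, labels)
print(classify({1, 2}, toxic_refs, safe_refs, w, b))
```

The design point is the division of labour: similarity search supplies interpretable features, and the supervised layer learns how much weight each similarity deserves, which is what lets the approach scale across hundreds of thousands of records.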
Scientists say that such computer models could replace six common safety and toxicity tests conducted on millions of animals every year, such as putting chemical compounds into rabbits' eyes to check for irritation, or feeding chemicals to mice or rats to determine lethal doses. Because these tests account for nearly 57% of toxicity testing worldwide, the number of animals used in toxicity testing would drop sharply.
The National Institute of Environmental Health Sciences (NIEHS) is working on validating the technique. Once the validation is complete, the United States Environmental Protection Agency (US EPA) will evaluate the results and determine whether they can be used to inform assessments of chemicals evaluated under the Toxic Substances Control Act (TSCA). If the evaluation is favorable, such models could be used to inform screening-level hazard determinations or to prioritize large numbers of substances.
In the EU, the ECHA has encouraged companies to avoid animal tests by using read-across methods wherever possible. However, the screening approach has its shortcomings: while it can predict simple toxic effects such as irritation, complex endpoints such as sterility or cancer are still out of reach. For now, such predictions cannot stand alone and need to be complemented by certain animal tests.