Article

ACS Sustainable Chem. Eng., Just Accepted Manuscript. DOI: 10.1021/acssuschemeng.7b03393. Publication Date (Web): December 13, 2017.


Better Metrics for “Sustainable by Design” - Towards an in silico Green Toxicology for Green(er) Chemistry

Alexandra Maertens(a) and Hans Plugge(b)

(a) Center for Alternatives to Animal Testing, Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe St, Baltimore, MD. [email protected]

(b) Verisk 3E (formerly 3E Company), 4520 East-West Highway, Suite 440, Bethesda, MD 20814, USA

KEYWORDS: Risk Assessment, Hazard Assessment, Green Chemistry, Green Toxicology, Computational Toxicology, QSAR

ABSTRACT

Sustainable, green chemistry cannot succeed without a clear way of measuring what makes a chemical “greener”. One central tenet of green chemistry is the avoidance of toxicity, yet the metrics needed to design a less toxic alternative or to avoid problematic lifecycle issues are woefully lacking. Toxicologists have limited tools available to give R&D chemists the guidance they need to design less toxic chemicals: current metrics for the most part cannot assay toxicity in a time frame that is useful for the business cycle, and the methodologies typically used for toxicology endpoints provide little information that chemists can act on to improve a design's sustainability. Here, we discuss both the problems with existing metrics for assessing toxicity and sustainability, and the steps necessary to harness big data and machine learning for better hazard and risk assessment, with more sophisticated metrics based on in vitro and in silico testing.

Background and Significance

R&D chemists have traditionally considered function foremost when designing new chemicals, while other considerations came later - with the result that hazard, risk, and lifecycle assessments were often among the last considerations before a product went to market, or were considered only after a chemical was in commerce. Information and methodologies are not easily available to guide such decisions, and the testing typically necessary to make an informed decision is both expensive and slow - the typical rodent carcinogenicity test takes 5 years. In addition, such extensive testing is, or was, often not “required” as part of the product introduction/registration process; much of the long-term animal testing was instead driven by human epidemiology or short-term testing data. As a result, most industrial chemicals have either no data or very limited data from which to judge their hazard.[1] The Toxic Substances Control Act (TSCA) inventory includes 83,000 chemicals, yet toxicology data is available for only 3%.[2] Exposure, risk, and lifecycle data are even more sparse, as they typically are not within the mandate of regulatory agencies. The latter situation should change with the implementation of the “new” TSCA, which now does consider exposure, and hence risk, as part of the Pre-Manufacturing Notice (PMN) and risk-evaluation processes. The EU's Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) program also requires “available” toxicology data, but places severe limitations on the generation of new mammalian data as part of its registration process. REACH also encourages the development of more benign alternatives,[3,4] as does the “new” TSCA.

Even if there were no humane considerations and no regulations (TSCA, US) or restrictions (REACH, EU) concerning animal testing, gathering in vivo hazard data on all existing substances would be impossibly expensive and would exceed the capacity of research facilities - to say nothing of the challenges presented by mixtures or by novel, complex chemistries such as nanomaterials. Moreover, it is not even clear that gathering such data would provide useful results for humans: many compounds in coffee, for example, are positive in at least one assay for carcinogenicity, yet there is no evidence that the daily dose of coffee has increased cancer rates.[5] A further strike against the current paradigm is that animal models are in some respects a “black box” - a chronic or subchronic repeat-dose study typically reports pathology but gives no clue to the molecular initiating event, and hence provides little information that can help guide chemists in designing safer alternatives. Bisphenol A (BPA), for example, was positive in several endocrine-disruptor screens and was therefore presumed to interact with the estrogen receptor. On the basis of negative results in these same assays, Bisphenol S (BPS) was designated as a more benign alternative. Unfortunately, molecular mechanistic studies revealed that at least one additional molecular mechanism of BPA toxicity involves the Estrogen Related Receptor Gamma (ESRRG), to which BPS also binds - and in fact more potently than BPA.[6] BPS thus became a regrettable substitution for BPA.

Another example is the most commonly used tool for skin sensitization - the Local Lymph Node Assay (LLNA) - which has a less than 50 percent correlation with human skin sensitization[7] and relatively low reproducibility for a test that is considered the gold standard.[8] It is quite likely that more accurate predictions could be made by combining information from assays that can establish the likelihood of a molecular initiating event (e.g., the Direct Peptide Reactivity Assay, DPRA[9]) with assays of the cellular response (KeratinoSens and h-CLAT).[10,11]

Therefore, there is an acute need for tools and metrics that can more accurately assess hazard, exposure, risk, and lifecycle for chemicals. Such tools need to provide human-relevant data to allow an informed decision on design, manufacturing, and use - preferably using universally accepted methodologies at a reasonable cost. Such tools are unlikely to depend heavily on animal testing, but rather on computational approaches that take advantage of all the available data (leading to a better understanding of molecular mechanisms of toxicity), and on better metrics that take into consideration the complex use-case of the chemical in question. In short, green chemistry needs a “green toxicology”[12,13] that can leverage the era of big data and more sophisticated computer models, resulting in better metrics.

Big Data

For some, big data means anything too large to fit in an Excel spreadsheet. However, the hallmark of big data is not just an increase in scale, but data that are generated faster, in greater variety, and at higher value than previously available; more subjectively, “big data” has to be data that is more than the sum of its parts. While other fields have confidently marched into the era of big data, toxicology has unfortunately been slow to see much progress. It has been hampered by working with relatively small data sets - for years, skin sensitization models were limited to training data sets that typically contained fewer than 200 chemicals.[14,15] Such data sets will by definition have a limited applicability domain and hence difficulty predicting toxicity for novel chemistries; little confidence can be placed in such methodologies for assessing hazard or guiding more benign molecular design. Some more sophisticated molecular toxicology models for skin sensitization can take up to a week of computer time to reach more reliable assessments.

However, with the availability of the REACH data set, PubChem, and ToxCast, hazard assessment is genuinely entering the “big data” era: data sets of 10,000 chemicals per endpoint are now available (Table 1).[16,17] The origin of the data is highly variable: HSDB data is derived mostly from peer-reviewed sources, whereas ECHA data is an agglomeration of individual submissions. As a result, the ECHA data may suffer from noise, i.e., highly variable data quality; part of this is addressed via self-selected quality scores based on the Klimisch system. The noise level and quality of the ToxCast data is directly based on the QA/QC parameters of the experimental methodology - the data exist but the quality is not absolutely stated.[18] Noise levels will thus need to be carefully addressed, as will simple errors such as transposed units, i.e., 20 mg/kg becomes 20,000 mg/kg and vice versa.
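As a minimal illustration of this kind of curation, the sketch below flags dose entries that sit orders of magnitude away from the other values reported for the same chemical and endpoint - the typical signature of a unit transposition. The record layout, the example values, and the ratio threshold are all hypothetical, not the schema or QC rule of any of the databases named above.

```python
from statistics import median

def flag_unit_transpositions(records, ratio=100.0):
    """Flag dose values that differ from the per-chemical, per-endpoint
    median by more than `ratio` in either direction - the typical
    signature of a unit-transposition error (e.g., 20 mg/kg entered
    as 20,000 mg/kg)."""
    by_key = {}
    for chemical, endpoint, dose in records:
        by_key.setdefault((chemical, endpoint), []).append(dose)

    suspects = []
    for (chemical, endpoint), doses in by_key.items():
        if len(doses) < 3:
            continue  # too few independent entries to judge an outlier
        m = median(doses)
        for dose in doses:
            if dose > m * ratio or dose < m / ratio:
                suspects.append((chemical, endpoint, dose, m))
    return suspects

# Hypothetical LD50 entries for one substance, nominally all in mg/kg:
records = [("CAS-000-00-0", "LD50_oral_rat", 18.0),
           ("CAS-000-00-0", "LD50_oral_rat", 22.0),
           ("CAS-000-00-0", "LD50_oral_rat", 25.0),
           ("CAS-000-00-0", "LD50_oral_rat", 20000.0)]  # likely transposed
print(flag_unit_transpositions(records))
# [('CAS-000-00-0', 'LD50_oral_rat', 20000.0, 23.5)]
```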


Data curation thus becomes an important issue, although the sheer size of the databases may limit the resulting uncertainty. While from the perspective of data mining these are small data sets, their relative breadth has drastically changed our perspective on the possibilities inherent in such data, and has led to the development of tools to easily analyze legacy data on a large scale.[11] Equally important, we now have at our disposal increasingly rich data sets with which to elucidate the molecular mechanisms of toxicity and to prioritize chemicals - e.g., ToxCast.[18] Given both the quality and quantity of dose-response data in large-scale studies of altered gene expression (transcriptomics), and our ability to use more sophisticated methods to tease out pathways of toxicity,[19] our understanding is steadily increasing. Exposure science has lagged behind; the development of exposure data sets[20] and the promise shown by EWAS (environment-wide association studies)[21] and exposome studies[22] will enhance our ability to focus on risk at the population level from real-world exposure scenarios (including co-exposures). Life-cycle analysis has yet to substantially benefit from big data approaches, as there are no regulatory drivers per se: universally agreed-upon parameters and the data necessary for such an approach are rarely generated or stored by manufacturers, let alone made available to assessors.

Modeling

Computational approaches have been successfully applied as a screening-level methodology for ecotoxicology - e.g., Ecosar,[23] which calculates both physical-chemical properties and potential ecotoxicity values from a SMILES string. These approaches are well accepted by most regulatory agencies, notwithstanding their less than stellar reliability and accuracy.[24] Similarly, some expert-based rule systems that look for known structural motifs - e.g., OncoLogic[25] for predicting cancer hazard - have vastly streamlined hazard screening and are generally accepted by regulatory agencies (a minimal sketch of this style of alert matching appears at the end of this section). One extension of this expert-based approach is the set of regulatory rules adopted by most agencies for new polymers.[26] The sheer complexity of polymers, as well as the fact that most polymers in commerce are mixtures, precludes simple models or SAR-based approaches. Instead, polymers are grouped into classes based on the range of molecular weights, possible impurities, the presence or absence of functional groups, and physical-chemical properties correlated with a known hazard (e.g., swellability). Such an approach will likely prove necessary for nanomaterials, which present a similar problem with even less physical-chemical characterization data. One proposal has been to group nanomaterials into common categories based on similar structural elements and size distributions,[27] while other strategies have focused on categorization according to exposure and use scenarios (such as inhalation) that are linked to specific biological outcomes.[28] Such models have the advantage of regulatory acceptance; however, they are fairly simple and depend on neither sophisticated models nor large data sets, and they have a fairly narrow applicability domain as well as a limited set of well-characterized mechanisms of toxicity.
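The sketch below illustrates the general mechanics of such an expert rule system, assuming the RDKit cheminformatics library is available; the two SMARTS alerts are illustrative placeholders and do not reproduce OncoLogic's actual rule base.

```python
from rdkit import Chem

# Hypothetical structural alerts (SMARTS patterns) standing in for an
# expert system's curated rule base; real systems encode many more
# motifs, each with a mechanistic rationale attached.
ALERTS = {
    "aromatic nitro group": "[a][N+](=O)[O-]",
    "epoxide ring": "C1OC1",
}

def structural_alerts(smiles):
    """Return the names of any structural alerts the molecule triggers."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"unparseable SMILES: {smiles}")
    return [name for name, smarts in ALERTS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

print(structural_alerts("c1ccc(cc1)[N+](=O)[O-]"))  # nitrobenzene: 1 alert
print(structural_alerts("CCO"))                     # ethanol: no alerts
```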
Recently, the availability of larger, “big data” sets has vastly improved our ability to extend models to a broader universe of chemicals. Big data provide both an estimate of hazard and an estimate of confidence in that prediction, as well as demonstrating the limits of animal data. This area of research is perhaps most mature for skin sensitization, where several high-confidence models have been developed that outperform the traditional animal test - whether by focusing on human data,[29] by 3D molecular modeling,[30] or by using a wide-ranging similarity approach across the entire toxicological universe of REACH-registered chemicals.[15] As larger data sets covering a wider variety of endpoints are developed, it is likely that data-gap filling will become possible for other endpoints as well; moreover, a comprehensive understanding of which types of chemicals are relatively data-poor can guide a smarter use of testing resources.

Metrics

Metrics lie at the heart of a sustainable methodology for assessing hazard and risk. Until very recently, metrics - such as they were - consisted of externally inconsistent systems limited to very elementary scoring, i.e., low-medium-high buckets (with an occasional “very high”).[31] Unfortunately, in a comparison matrix of 8 systems by 7 chemicals, none of the commercial systems consistently scored alike, not even those from the same originating organization.[31] A major source of the differences between these metrics systems is their underlying assumptions. The least modulated of these systems are based solely on the presence of a chemical on lists of interest, e.g., a TLV list; obviously, the sensitivity of such a metric is (a minimal sketch of such list-based scoring follows)
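A minimal sketch of list-based scoring, under stated assumptions: the list names, weights, and memberships below are invented for illustration, and real systems differ in both which lists they consult and how they weight them.

```python
# Hypothetical authoritative lists, weights, and memberships (by CAS number).
LIST_WEIGHTS = {"TLV": 1.0, "IARC_carcinogen": 3.0, "EU_SVHC": 2.0}
LIST_MEMBERS = {
    "TLV":             {"50-00-0"},   # formaldehyde (illustrative entry)
    "IARC_carcinogen": {"50-00-0"},
    "EU_SVHC":         set(),
}

def list_based_score(cas):
    """Weighted sum of list hits - note that no toxicology data are
    consulted at all, only list membership."""
    return sum(weight for name, weight in LIST_WEIGHTS.items()
               if cas in LIST_MEMBERS[name])

print(list_based_score("50-00-0"))  # 4.0 - flagged by two of three lists
print(list_based_score("67-63-0"))  # 0.0 - but absence is not evidence of safety
```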


highly dependent on the number of lists consulted and their individual weighting. Absence from lists does not impute a lack of toxicity: lists may simply not exist for an endpoint of concern, and it may take years if not decades for chemicals to be added to a (new) list - endocrine disruptors, for example. Closely related chemicals are also often not included on the same lists, e.g., BPA analogues. All of these lags in data implementation significantly affect the accuracy of these metrics, although their simplicity makes them generally faster and cheaper, and hence more widely available.

On the other end of the metrics scale are full-blown assessments, which are not so much a metric as a tome. The cost often far exceeds $1,000,000 per chemical (without animal testing), which for the 30,000-100,000 industrial chemicals in commerce (in quantities >1 ton) amounts to an unsustainable investment; animal testing would most likely more than double that cost per chemical. The infrastructure for an endeavor on that scale simply does not exist, and adding in the need and desire to limit the use of animals in deriving toxicological characteristics, and to develop alternatives to animal testing, constrains resources even further.

The metrics lying in the middle of this spectrum have the highest potential: medium accuracy at a reasonable cost. Even advanced data generation, whether through high-throughput systems or molecular toxicology, has the potential to be very economical at the envisioned scale. All of these metrics are still screening metrics, but they have some (varying) basis in scientific data: the scoring does not depend on list systems but on (raw) scientific data, curated to some variable extent. Most of these metrics, however, still use the bucket system - low, medium, high, (very high) - and are thus qualitative. As a result, comparisons across multiple endpoints (e.g., acute toxicity and mutagenicity) are limited to a worst-case assumption, i.e., one high score makes the whole chemical score high. Assessing or scoring chemical mixtures is effectively out of reach unless one again succumbs to the highest-denominator method, i.e., one high component scores the entire mixture high. As most chemicals are present and used in commerce as mixtures, these limitations severely restrict the real-life application of such screening methods.

The most expensive of these methods, at around $5,000-plus per chemical, is GreenScreen, which basically consists of mini-assessments with a more limited review and conclusions, but which still results in a bucket score for each endpoint.[32] Scivera and Verisk 3E have to some degree automated the screening hazard assessment process, thus lowering cost. Scivera's approach uses curated data to derive bucket scores for individual endpoints, but in a much more automated form.[33] Verisk 3E's GreenScore methodology both scores hazard in unitless quantitative values and assesses mixtures, using logarithmic transformations of raw scientific data.[34] With the exception of 3E GreenScore, however, all of the systems discussed so far are simple hazard-assessment screening systems: they look only at the inherent hazard of a chemical, independent of exposure, and lack a good approach to assessing mixtures. In the simple hazard mixture approach discussed above, 0.01% formaldehyde would be weighted higher than 99.99% isopropanol, so the mixture's hazard metric would score as high (or carcinogenic) - contrary to a truly weighted approach, as the sketch below makes concrete.
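A minimal sketch of the contrast, under stated assumptions: the component hazard scores below are invented values on a 0-100 scale, and a production system such as GreenScore would apply logarithmic transformations of the underlying data rather than the plain linear weighting shown here.

```python
# Hypothetical mixture: (component, weight fraction, hazard score 0-100).
mixture = [
    ("formaldehyde", 0.0001, 95.0),
    ("isopropanol",  0.9999, 20.0),
]

def worst_case_score(components):
    """Highest-denominator rule: one bad actor scores the whole mixture."""
    return max(score for _, _, score in components)

def weighted_score(components):
    """Concentration-weighted rule: each component contributes in
    proportion to its fraction of the mixture."""
    return sum(fraction * score for _, fraction, score in components)

print(worst_case_score(mixture))  # 95.0 - the mixture scores as formaldehyde
print(weighted_score(mixture))    # ~20.0 - dominated by the 99.99% isopropanol
```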
Risk (as hazard × exposure) would be an even more accurate metric. The difference between hazard and risk can be illustrated with acetic acid. While acetic acid is inherently hazardous (it is strongly corrosive to both skin and eyes), on average we all eat roughly 5 mL of a 4% solution just about every day: ingesting a teaspoon of vinegar obviously incurs no risk, whereas 4 gallons of glacial acetic acid spilled from 30 feet up would of course incur a major one.

Two relatively new exposure systems have been proposed.[35,36] Verisk 3E GreenScore uses ConsExpo exposure equations and REACH exposure scenarios to derive exposure screening values for scenarios with multiple real-life variables.[36] These exposure screening values can then be used to derive relative risk screening values, which together with hazard screening scores can be used in alternatives assessments; this also allows the use of exposure surrogates, such as emissions parameters, to derive facility-wide risk screening estimates. The other exposure-based package uses a semi-quantitative approach to derive exposure and hence risk estimates, specifically for use in alternatives assessment;[35] it considers both human health hazards and environmental impacts in order to facilitate decision making.
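A minimal sketch of the arithmetic behind a relative risk screening value, with invented unitless scores; real systems derive the exposure term from scenario equations (e.g., ConsExpo-style models) rather than from a single number.

```python
def risk_screening_value(hazard_score, exposure_score):
    """Relative risk as the product of unitless hazard and exposure
    screening scores - the simplest reading of risk = hazard x exposure."""
    return hazard_score * exposure_score

# Acetic acid, hypothetically scored: high inherent hazard (corrosive),
# but a trivial dietary exposure yields a low risk screening value;
# a bulk-spill scenario flips the exposure term, and hence the risk.
print(risk_screening_value(hazard_score=80.0, exposure_score=0.01))  # 0.8
print(risk_screening_value(hazard_score=80.0, exposure_score=50.0))  # 4000.0
```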


One problem with all the metrics discussed above is data gaps. At best we have data sets for 15,000 existing chemicals, and most of these are very incomplete indeed; new chemicals, of course, have no data at all. Both REACH and TSCA contain various restrictions on the use of animal testing to generate new data, and although new approaches, whether in vitro or in silico, are rapidly being developed, only very few endpoints (e.g., skin sensitization) have well-developed alternative methods. The most common problem is that of correlating existing “old-school” toxicology with newly developed metrics, e.g., acute toxicity versus high-throughput screening. In addition, most regulations and detailed assessment schemes still revolve around old-school toxicity data - after all, how is one to interpret a positive response in a liver enzyme assay when deriving a Reference Dose (RfD)?

In the interim, read-across methodologies are used to transfer known classical toxicology data from one chemical to an alike-looking chemical of similar structure and functional groups; the alike-looking chemical is then assigned the same potency and effects as the originating chemical. Various computational methods have been deployed to rationalize read-across from an art into a science,[37] and again, some endpoints (e.g., acute toxicity) have better reliability than others (a minimal sketch of the similarity step behind read-across appears at the end of this section). New in silico methodologies are in various phases of development, with varying degrees of progress. Within the next decade, all of these methodologies are expected to mature to the point where the only remaining task will be correlating classical, old-school toxicology data with the computational methodologies and in vitro tests. Risk assessment and management will then be based entirely on in vitro and in silico metrics, and will not just identify risk once a chemical is ready for commerce, but will guide chemists toward the design of more benign and sustainable chemicals.
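A minimal sketch of that similarity step, assuming the RDKit library; the analogue set, its endpoint values, the fingerprint choice, and the similarity threshold are all illustrative assumptions rather than any regulatory read-across framework.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical source chemicals (SMILES) with illustrative oral-rat
# LD50 values in mg/kg; a real read-across would justify each analogue.
KNOWN = {"CCO": 7060.0, "CCCO": 1870.0, "c1ccccc1O": 317.0}

def fingerprint(smiles):
    return AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles), radius=2, nBits=2048)

def read_across(target_smiles, threshold=0.3):
    """Return (analogue, value, similarity) for the most similar source
    chemical, or None if nothing clears the Tanimoto threshold."""
    target_fp = fingerprint(target_smiles)
    best = max(KNOWN, key=lambda s: DataStructs.TanimotoSimilarity(
        target_fp, fingerprint(s)))
    similarity = DataStructs.TanimotoSimilarity(target_fp, fingerprint(best))
    return (best, KNOWN[best], similarity) if similarity >= threshold else None

# 1-butanol ("CCCCO") should read across from its nearest alcohol analogue.
print(read_across("CCCCO"))
```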

Conclusion - Next Steps

What, then, are the next steps toward realizing a vision of green toxicology metrics? It is useful for regulatory toxicology to look at the in silico toxicology that has become standard in the pharmaceutical industry, where models for ADME[38] as well as clinical endpoints[39,40] have proven their usefulness; the combination of large data sets, computational approaches, and an understanding of molecular mechanisms of toxicity is very powerful. Moreover, in contrast to consumer chemicals (where safety assessment is the last phase before market), within the pharmaceutical sciences safety is considered at every step of development, and the drug development process can be seen as one that works to optimize efficiency and improve safety.[11]

Clearly, while data sets have grown substantially, there is a need for more data - not just in terms of chemicals and endpoints, but richer data on effect levels and molecular mechanisms. Data collection needs to extend to exposure so that risk can be considered more methodically, specifically in humans, and the process of collecting the data sets necessary for life-cycle considerations needs to be initiated. At the same time, the focus should be not just on growing the data but on ensuring that the data are of high quality - neither machine-learning approaches nor metrics can be accurate in the presence of profoundly noisy data, although mathematical modeling may be able to address precision and hence noise. An additional key component is making data sets - especially training data sets - readily available to the community. Better methodologies need to be developed for incorporating non-traditional data (e.g., HTS such as ToxCast, and in chemico screens) into hazard and risk (screening) assessments, most likely through the development of (common) mechanistic pathways of toxicity. Most existing (screening) methodologies rely on read-across or structure-activity relationships; these offer little insight into the molecular mechanisms of toxicity and therefore no clear way to guide benign design or to analyze the hazard of novel chemistries. Additionally, an understanding of the necessary tradeoff between the accuracy and the cost of metrics and their supporting data must be an essential part of developing a sustainable green toxicology paradigm. Finally, as long as regulatory agencies consider an animal test the “gold standard” of hazard assessment, our ability to accurately predict human risk will remain stalled in the century in which such tests were developed.

ASSOCIATED CONTENT


AUTHOR INFORMATION

Corresponding Author
*Alexandra Maertens

(a) Center for Alternatives to Animal Testing, Environmental Health and Engineering, Johns Hopkins Bloomberg School of Public Health, 615 N. Wolfe St, Baltimore, MD

Author Contributions
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.

Notes/Conflict of Interest
HP owns stock in Verisk Analytics, Verisk 3E's parent company.

ABBREVIATIONS

BPA, Bisphenol A; BPS, Bisphenol S; ESRRG, Estrogen Related Receptor Gamma; HTS, High-Throughput Systems; LLNA, Local Lymph Node Assay; PMN, Pre-Manufacturing Notice; REACH, Registration, Evaluation, Authorisation and Restriction of Chemicals; RfD, Reference Dose; SAR, Structure-Activity Relationship; TSCA, Toxic Substances Control Act

REFERENCES

(1) Hartung, T. Toxicology for the twenty-first century. Nature 2009, 460 (7252), 208–212. DOI: 10.1038/460208a.
(2) Hartung, T. Food for thought...on alternative methods for chemical safety testing. ALTEX 2010, 27 (1), 3–14.
(3) European Chemicals Agency. ECHA (The European Chemicals Agency). In Encyclopedia of Toxicology, 3rd ed.; Wexler, P., Ed.; Academic Press: Oxford, 2014; pp 263–264.
(4) Nabholz, J. V.; Miller, P.; Zeeman, M. Environmental Risk Assessment of New Chemicals Under the Toxic Substances Control Act, TSCA Section Five. In Environmental Toxicology and Risk Assessment.
(5) Basketter, D. A.; Clewell, H.; Kimber, I.; Rossi, A.; Blaauboer, B.; Burrier, R.; Daneshian, M.; Eskes, C.; Goldberg, A.; Hasiwa, N.; et al. A roadmap for the development of alternative (non-animal) methods for systemic toxicity testing - t4 report. ALTEX 2012, 29 (1), 3–91. DOI: 10.14573/altex.2012.1.003.
(6) Okada, H.; Tokunaga, T.; Liu, X.; Takayanagi, S.; Matsushima, A.; Shimohigashi, Y. Direct evidence revealing structural elements essential for the high binding ability of bisphenol A to human estrogen-related receptor-gamma. Environ. Health Perspect. 2008, 116 (1), 32–38. DOI: 10.1289/ehp.10587.
(7) Alves, V. M.; Capuzzi, S. J.; Muratov, E.; Braga, R. C.; Thornton, T.; Fourches, D.; Strickland, J.; Kleinstreuer, N.; Andrade, C. H.; Tropsha, A. QSAR models of human data can enrich or replace LLNA testing for human skin sensitization. Green Chem. 2016, 18 (24), 6501–6515. DOI: 10.1039/C6GC01836J.


(8) Luechtefeld, T.; Maertens, A.; Russo, D. P.; Rovida, C.; Zhu, H.; Hartung, T. Analysis of publically available skin sensitization data from REACH registrations 2008-2014. ALTEX 2016, 33 (2), 135–148. DOI: 10.14573/altex.1510055.
(9) OECD. Test No. 442C: In Chemico Skin Sensitisation: Direct Peptide Reactivity Assay (DPRA); OECD Guidelines for the Testing of Chemicals, Section 4; OECD Publishing, 2015.
(10) Urbisch, D.; Mehling, A.; Guth, K.; Ramirez, T.; Honarvar, N.; Kolle, S.; Landsiedel, R.; Jaworska, J.; Kern, P. S.; Gerberick, F.; et al. Assessing skin sensitization hazard in mice and men using non-animal test methods. Regul. Toxicol. Pharmacol. 2015, 71 (2), 337–351. DOI: 10.1016/j.yrtph.2014.12.008.
(11) Crawford, S. E.; Hartung, T.; Hollert, H.; Mathes, B.; van Ravenzwaay, B.; Steger-Hartmann, T.; Studer, C.; Krug, H. F. Green Toxicology: a strategy for sustainable chemical and material development. Environ. Sci. Eur. 2017, 29 (1), 16. DOI: 10.1186/s12302-017-0115-z.
(12) Maertens, A.; Anastas, N.; Spencer, P. J.; Stephens, M.; Goldberg, A.; Hartung, T. Green toxicology. ALTEX 2014, 31 (3), 243–249. DOI: 10.14573/altex.1406181.
(13) Anastas, N. D. Green Toxicology. In Green Techniques for Organic Synthesis and Medicinal Chemistry; 2012; pp 1–23.
(14) Jaworska, J. S.; Natsch, A.; Ryan, C.; Strickland, J.; Ashikaga, T.; Miyazawa, M. Bayesian integrated testing strategy (ITS) for skin sensitization potency assessment: a decision support system for quantitative weight of evidence and adaptive testing strategy. Arch. Toxicol. 2015, 89 (12), 2355–2383. DOI: 10.1007/s00204-015-1634-2.
(15) Luechtefeld, T.; Maertens, A.; McKim, J. M.; Hartung, T.; Kleensang, A.; Sá-Rocha, V. Probabilistic hazard assessment for skin sensitization potency by dose-response modeling using feature elimination instead of quantitative structure-activity relationships. J. Appl. Toxicol. 2015, 35 (11), 1361–1371. DOI: 10.1002/jat.3172.
(16) Luechtefeld, T.; Maertens, A.; Russo, D. P.; Rovida, C.; Zhu, H.; Hartung, T. Global analysis of publicly available safety data for 9,801 substances registered under REACH from 2008-2014. ALTEX 2016, 33 (2), 95–109. DOI: 10.14573/altex.1510052.
(17) Zhu, H.; Zhang, J.; Kim, M. T.; Boison, A.; Sedykh, A.; Moran, K. Big data in chemical toxicity research: the use of high-throughput screening assays to identify potential toxicants. Chem. Res. Toxicol. 2014, 27 (10), 1643–1651. DOI: 10.1021/tx500145h.
(18) Dix, D. J.; Houck, K. A.; Martin, M. T.; Richard, A. M.; Setzer, R. W.; Kavlock, R. J. The ToxCast program for prioritizing toxicity testing of environmental chemicals. Toxicol. Sci. 2007, 95 (1), 5–12. DOI: 10.1093/toxsci/kfl103.
(19) Pendse, S. N.; Maertens, A.; Rosenberg, M.; Roy, D.; Fasani, R. A.; Vantangoli, M. M.; Madnick, S. J.; Boekelheide, K.; Fornace, A. J.; Odwin, S.-A.; et al. Information-dependent enrichment analysis reveals time-dependent transcriptional regulation of the estrogen pathway of toxicity. Arch. Toxicol. 2017, 91 (4), 1749–1762. DOI: 10.1007/s00204-016-1824-6.
(20) Neveu, V.; Moussy, A.; Rouaix, H.; Wedekind, R.; Pon, A.; Knox, C.; Wishart, D. S.; Scalbert, A. Exposome-Explorer: a manually-curated database on biomarkers of exposure to dietary and environmental factors. Nucleic Acids Res. 2017, 45 (D1), D979–D984. DOI: 10.1093/nar/gkw980.
(21) Patel, C. J.; Bhattacharya, J.; Butte, A. J. An Environment-Wide Association Study (EWAS) on type 2 diabetes mellitus. PLoS One 2010, 5 (5), e10746. DOI: 10.1371/journal.pone.0010746.
(22) Shaffer, R. M.; Smith, M. N.; Faustman, E. M. Developing the Regulatory Utility of the Exposome: Mapping Exposures for Risk Assessment through Lifestage Exposome Snapshots (LEnS). Environ. Health Perspect. 2017, 125 (8), 085003. DOI: 10.1289/EHP1250.
(23) Clements, R. G.; Nabholz, J. V. ECOSAR: a computer program and user's guide for estimating the ecotoxicity of industrial chemicals based on structure activity relationships; EPA-748-R-93-002; US Environmental Protection Agency, Office of Pollution Prevention and Toxics: Washington, DC, 1994.
(24) Melnikov, F.; Kostal, J.; Voutchkova-Kostal, A.; Zimmerman, J. B.; Anastas, P. T. Assessment of predictive models for estimating the acute aquatic toxicity of organic chemicals. Green Chem. 2016, 18 (16), 4432–4445. DOI: 10.1039/C6GC00720A.
(25) Woo, Y. T.; Lai, D. Y. OncoLogic: a mechanism-based expert system for predicting the carcinogenic potential of chemicals. Predictive Toxicology 2005, 385–413.
(26) Boethling, R. S.; Nabholz, J. V. Environmental assessment of polymers under the US Toxic Substances Control Act; United States Environmental Protection Agency, 1996.
(27) Walser, T.; Studer, C. Sameness: The regulatory crux with nanomaterial identity and grouping schemes for hazard assessment. Regul. Toxicol. Pharmacol. 2015, 72 (3), 569–571. DOI: 10.1016/j.yrtph.2015.05.031.
(28) Godwin, H.; Nameth, C.; Avery, D.; Bergeson, L. L.; Bernard, D.; Beryt, E.; Boyes, W.; Brown, S.; Clippinger, A. J.; Cohen, Y.; et al. Nanomaterial categorization for assessing risk potential to facilitate regulatory decision-making. ACS Nano 2015, 9 (4), 3409–3417. DOI: 10.1021/acsnano.5b00941.
(29) Braga, R. C.; Alves, V. M.; Muratov, E. N.; Strickland, J.; Kleinstreuer, N.; Tropsha, A.; Andrade, C. H. Pred-Skin: A Fast and Reliable Web Application to Assess Skin Sensitization Effect of Chemicals. J. Chem. Inf. Model. 2017, 57 (5), 1013–1017. DOI: 10.1021/acs.jcim.7b00194.


(30) Kostal, J.; Voutchkova-Kostal, A. CADRE-SS, an in Silico Tool for Predicting Skin Sensitization Potential Based on Modeling of Molecular Interactions. Chem. Res. Toxicol. 2016, 29 (1), 58–64. DOI: 10.1021/acs.chemrestox.5b00392.
(31) Panko, J. M.; Hitchcock, K.; Fung, M.; Spencer, P. J.; Kingsbury, T.; Mason, A. M. A comparative evaluation of five hazard screening tools. Integr. Environ. Assess. Manag. 2017, 13 (1), 139–154. DOI: 10.1002/ieam.1757.
(32) Full GreenScreen Method. https://www.greenscreenchemicals.org/learn/full-greenscreen-method (accessed Sep 21, 2017).
(33) Rinkevich, J.; Beattie, P.; McLoughlin, C. Making Green Chemistry Accessible to Consumer Product Decision-makers: Chemical Screening and Alternatives Assessment for Everyone, 2017.
(34) Plugge, H. GreenScore Hazard Assessment of Mixtures: A Novel Methodology, 2017.
(35) Arnold, S. M.; Greggs, B.; Goyak, K. O.; Landenberger, B. D.; Mason, A. M.; Howard, B.; Zaleski, R. T. A quantitative screening-level approach to incorporate chemical exposure and risk into alternative assessment evaluations. Integr. Environ. Assess. Manag. 2017. DOI: 10.1002/ieam.1926.
(36) Plugge, H. A New Tool for the IH Toolbox: Automated Exposure Screening. American Industrial Hygiene Association, 2017.
(37) European Chemicals Agency. Read-Across Assessment Framework (RAAF).
(38) Gleeson, M. P.; Hersey, A.; Hannongbua, S. In-silico ADME models: a general assessment of their utility in drug discovery applications. Curr. Top. Med. Chem. 2011, 11 (4), 358–381. DOI: 10.2174/156802611794480927.
(39) Valerio, L. G., Jr; Balakrishnan, S.; Fiszman, M. L.; Kozeli, D.; Li, M.; Moghaddam, S.; Sadrieh, N. Development of cardiac safety translational tools for QT prolongation and torsade de pointes. Expert Opin. Drug Metab. Toxicol. 2013, 9 (7), 801–815. DOI: 10.1517/17425255.2013.783819.
(40) Valerio, L. G., Jr. In silico toxicology for the pharmaceutical sciences. Toxicol. Appl. Pharmacol. 2009, 241 (3), 356–370. DOI: 10.1016/j.taap.2009.08.022.

Tables

Table 1. “Big Data” Sources in Toxicology

Data Source    # of Chemicals    # of Endpoints
ECHA(a)        16,596(b)         > 100(c)
ToxCast(d)     > 9,000           > 1,000(e)
HSDB(f)        > 5,800           >> 17(g)
ACToR(h)       456,918           > 1,000(i)

(a) European Chemicals Agency, Registered Substances: https://echa.europa.eu/information-on-chemicals/registered-substances
(b) 16,596 unique substances in 62,987 dossiers as of 10/24/2017.
(c) Number of endpoints possible for each chemical; actual number varies widely.
(d) US Environmental Protection Agency: https://www.epa.gov/chemical-research/toxcast-dashboard
(e) Mostly biochemical/enzyme assays; actual number of assays varies widely.
(f) US National Library of Medicine, Hazardous Substances Data Bank: https://www.nlm.nih.gov/pubs/factsheets/hsdbfs.html
(g) Broad categories; over 50 endpoints possible.
(h) ACToR: https://actor.epa.gov/actor/home.xhtml
(i) Data include ToxCast, the Endocrine Disruptor Screening Program, and, for some chemicals, exposure estimates.


FOR TABLE OF CONTENTS ONLY

Describes Green Toxicology as a sustainable practice
