
Transformational machine learning: Learning how to learn from many related scientific problems.

Olier, I, Orhobor, OI, Dash, T, Davis, AM, Soldatova, LN, Vanschoren, J and King, RD (2021) Transformational machine learning: Learning how to learn from many related scientific problems. Proceedings of the National Academy of Sciences, 118 (49). ISSN 0027-8424

Text: Transformational machine learning Learning how to learn from many related scientific problems.pdf - Published Version (1 MB)
Available under License Creative Commons Attribution.

Abstract

Almost all machine learning (ML) is based on representing examples using intrinsic features. When there are multiple related ML problems (tasks), it is possible to transform these features into extrinsic features by first training ML models on other tasks and letting them each make predictions for each example of the new task, yielding a novel representation. We call this transformational ML (TML). TML is very closely related to, and synergistic with, transfer learning, multitask learning, and stacking. TML is applicable to improving any nonlinear ML method. We tested TML using the most important classes of nonlinear ML: random forests, gradient boosting machines, support vector machines, k-nearest neighbors, and neural networks. To ensure the generality and robustness of the evaluation, we utilized thousands of ML problems from three scientific domains: drug design, predicting gene expression, and ML algorithm selection. We found that TML significantly improved the predictive performance of all the ML methods in all the domains (4 to 50% average improvements) and that TML features generally outperformed intrinsic features. Use of TML also enhances scientific understanding through explainable ML. In drug design, we found that TML provided insight into drug target specificity, the relationships between drugs, and the relationships between target proteins. TML leads to an ecosystem-based approach to ML, where new tasks, examples, predictions, and so on synergistically interact to improve performance. To contribute to this ecosystem, all our data, code, and our ∼50,000 ML models have been fully annotated with metadata, linked, and openly published using Findability, Accessibility, Interoperability, and Reusability principles (∼100 Gbytes).
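The core transformation described in the abstract can be sketched briefly: train one model per related task on the shared intrinsic features, then represent each example of a new task by the vector of predictions those models make for it. The following is a minimal illustration of that idea, assuming scikit-learn random forests and synthetic data; the names (related_tasks, tml_features, and so on) are hypothetical and not taken from the authors' published code.

```python
# Minimal sketch of transformational ML (TML): extrinsic features are the
# predictions of models trained on related tasks. Illustrative only; uses
# synthetic data rather than the paper's drug-design or gene-expression sets.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Several related tasks sharing the same intrinsic feature space,
# e.g. activities of compounds against different protein targets.
n_features = 20
related_tasks = [
    (rng.normal(size=(200, n_features)), rng.normal(size=200))
    for _ in range(10)
]

# Step 1: train one model per related task on its intrinsic features.
task_models = []
for X_task, y_task in related_tasks:
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_task, y_task)
    task_models.append(model)

def tml_features(X):
    """Extrinsic (TML) representation: each example is described by the
    predictions the related-task models make for it."""
    return np.column_stack([m.predict(X) for m in task_models])

# Step 2: represent the new task's examples extrinsically and train on that.
X_new, y_new = rng.normal(size=(150, n_features)), rng.normal(size=150)
final_model = RandomForestRegressor(n_estimators=100, random_state=0)
final_model.fit(tml_features(X_new), y_new)
print(final_model.predict(tml_features(X_new[:3])))
```

Since the abstract notes that TML is synergistic with stacking, one natural variant of this sketch would concatenate the extrinsic TML features with the original intrinsic features before fitting the final model; the paper reports that TML features alone generally outperformed intrinsic ones.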

Item Type: Article
Uncontrolled Keywords: AI; drug design; multitask learning; stacking; transfer learning
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Divisions: Computer Science & Mathematics
Publisher: National Academy of Sciences
Date Deposited: 08 Feb 2022 10:31
Last Modified: 08 Feb 2022 10:45
DOI or ID number: 10.1073/pnas.2108013118
URI: https://researchonline.ljmu.ac.uk/id/eprint/16251