RT Conference Proceedings
JF 2015 IEEE 31st International Conference on Data Engineering (ICDE)
YR 2015
SP 1191
TI Efficient sample generation for scalable meta learning
A1 Sebastian Schelter
A1 Juan Soto
A1 Volker Markl
A1 Douglas Burdick
A1 Berthold Reinwald
A1 Alexandre Evfimievski
K1 Training
K1 Partitioning algorithms
K1 Electronic mail
K1 Indexes
K1 Distributed databases
K1 Data models
K1 Predictive models

AB Meta learning techniques such as cross-validation and ensemble learning are crucial for applying machine learning to real-world use cases. These techniques first generate samples from input data, and then train and evaluate machine learning models on these samples. For meta learning on large datasets, the efficient generation of samples becomes problematic, especially when the data is stored in a distributed, block-partitioned representation and processed on a shared-nothing cluster. We present a novel, parallel algorithm for efficient sample generation from large, block-partitioned datasets in a shared-nothing architecture. This algorithm executes in a single pass over the data and minimizes inter-machine communication. The algorithm supports a wide variety of sample generation techniques through an embedded user-defined sampling function. We illustrate how to implement distributed sample generation for popular meta learning techniques such as hold-out tests, k-fold cross-validation, and bagging using our algorithm, and present an experimental evaluation on datasets with billions of datapoints.
PB IEEE Computer Society, [URL:http://www.computer.org]
LA English
DO 10.1109/ICDE.2015.7113367
LK http://doi.ieeecomputersociety.org/10.1109/ICDE.2015.7113367