Computer Vision News - July 2016

Top Ranking Methods

The challenge turned out to be much harder than expected: of hundreds of participants, only a few passed the first rounds. The strategy suggested by the organizers was the following (a minimal scikit-learn sketch of this recipe appears at the end of this section):

- reduce the search space with "filter" methods;
- reduce the number of hyper-parameters by using versions of the algorithms that optimize them with "embedded" methods;
- use an ensemble method to grow an ever-improving ensemble until time runs out.

"We don't have better algorithms. We just have more data" - Peter Norvig, Chief Scientist @ Google

The overall winning team, led by F. Hutter, who co-developed SMAC (Sequential Model-based Algorithm Configuration) and the Auto-WEKA software, delivered a new tool called auto-sklearn: it builds heterogeneous ensembles of predictors based on scikit-learn pipelines (frequently used by other teams too), using a combination of meta-learning and Bayesian hyper-parameter optimization (a usage sketch follows below). Their paper is here and their slides are here. We wish to thank them for allowing us to republish the images from their paper (credit: Efficient and Robust Automated Machine Learning, Feurer et al., Advances in Neural Information Processing Systems 28: NIPS 2015).

Figure: The winning team's approach to AutoML. They add two components to Bayesian hyperparameter optimization of an ML framework: meta-learning for initializing the Bayesian optimizer, and automated ensemble construction from configurations evaluated during optimization [AAD Freiburg team]

Figure: Structured configuration space. Squared boxes denote parent hyperparameters, whereas boxes with rounded edges are leaf hyperparameters. Grey boxes mark the active hyperparameters, which form an example configuration and machine learning pipeline. Each pipeline comprises one feature preprocessor, one classifier and up to three data preprocessor methods, plus the respective hyperparameters [AAD Freiburg team]

Another notable contribution came from the Intel team, whose method is based on gradient boosting of trees built on a random subspace that is dynamically adjusted to reflect learned feature relevance (a rough sketch of the idea follows below).

Several teams made their solutions publicly available, and the organizers wish to encourage this practice in future challenges. For instance, the source code of auto-sklearn is available under an open source license following this link.
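To make the organizers' recipe concrete, here is a minimal sketch in Python with scikit-learn. The toy dataset, the one-minute budget and the choice of LogisticRegressionCV as the "embedded" learner are our own illustrative assumptions, not taken from the challenge.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

# Toy data standing in for a challenge dataset.
X, y = make_classification(n_samples=2000, n_features=100, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# 1) "Filter" method: cheap univariate scoring shrinks the search space.
selector = SelectKBest(f_classif, k=20).fit(X_train, y_train)
X_train_f = selector.transform(X_train)
X_valid_f = selector.transform(X_valid)

# 2) "Embedded" method: LogisticRegressionCV tunes its own regularization
#    strength internally, removing one hyper-parameter from the outer search.
# 3) Grow an ever-improving ensemble until the time budget is spent.
rng = np.random.RandomState(0)
ensemble = []
deadline = time.time() + 60  # illustrative one-minute budget

while time.time() < deadline and len(ensemble) < 25:
    member = LogisticRegressionCV(Cs=5, cv=3)
    # Bagging-style resampling keeps the ensemble members diverse.
    idx = rng.choice(len(X_train_f), size=len(X_train_f), replace=True)
    ensemble.append(member.fit(X_train_f[idx], y_train[idx]))

# Average the members' probabilistic votes.
proba = np.mean([m.predict_proba(X_valid_f) for m in ensemble], axis=0)
print("validation accuracy:", (proba.argmax(axis=1) == y_valid).mean())
```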
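For readers who want to try the winning tool themselves, a basic usage sketch of auto-sklearn follows. The budgets are placeholders, and the parameter names (time_left_for_this_task, per_run_time_limit, initial_configurations_via_metalearning, ensemble_size) follow the 0.x API, so they may differ in other releases.

```python
import autosklearn.classification

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3600,  # overall budget in seconds (illustrative)
    per_run_time_limit=360,        # cap on any single pipeline evaluation
    initial_configurations_via_metalearning=25,  # warm-start the optimizer
    ensemble_size=50,              # post-hoc ensemble of evaluated models
)
automl.fit(X_train, y_train)       # X_train, y_train: your own arrays
predictions = automl.predict(X_valid)
```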
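The structured configuration space shown in the second figure can be written down with the ConfigSpace library that SMAC and auto-sklearn build on. The toy space below, with one parent hyperparameter and two conditional leaves, is our own illustration rather than the team's actual space.

```python
from ConfigSpace import ConfigurationSpace
from ConfigSpace.conditions import EqualsCondition
from ConfigSpace.hyperparameters import (CategoricalHyperparameter,
                                         UniformFloatHyperparameter)

cs = ConfigurationSpace()

# Parent hyperparameter: which classifier the pipeline uses.
clf = CategoricalHyperparameter("classifier", ["random_forest", "linear_svm"])
cs.add_hyperparameter(clf)

# Leaf hyperparameters, each one active only under its parent's value.
rf_feat = UniformFloatHyperparameter("rf:max_features", 0.1, 1.0)
svm_c = UniformFloatHyperparameter("svm:C", 1e-3, 1e3, log=True)
cs.add_hyperparameters([rf_feat, svm_c])
cs.add_condition(EqualsCondition(rf_feat, clf, "random_forest"))
cs.add_condition(EqualsCondition(svm_c, clf, "linear_svm"))

# Sampling only ever activates the hyperparameters of the chosen branch.
print(cs.sample_configuration())
```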
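Finally, a loose sketch of the idea behind the Intel entry, as we understand it from the description above: boosting regression trees on a random feature subspace whose sampling probabilities are re-weighted by the relevance learned so far. Every detail here (loss, tree depth, update rule) is a guess for illustration only, not the team's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=1000, n_features=50, random_state=1)
y_signed = 2.0 * y - 1.0          # {-1, +1} targets
n_features = X.shape[1]

rng = np.random.RandomState(1)
relevance = np.full(n_features, 1.0 / n_features)  # start uniform
F = np.zeros(len(y))              # current additive prediction
lr, subspace_size = 0.1, 15

for _ in range(50):
    # Draw a feature subspace, biased toward features that proved relevant.
    subspace = rng.choice(n_features, size=subspace_size,
                          replace=False, p=relevance)
    # Fit the next tree on the gap between labels and squashed predictions.
    residual = y_signed - np.tanh(F)
    tree = DecisionTreeRegressor(max_depth=3, random_state=1)
    tree.fit(X[:, subspace], residual)
    F += lr * tree.predict(X[:, subspace])
    # Dynamically adjust the subspace distribution using the tree's own
    # feature importances as the relevance signal.
    relevance[subspace] += lr * tree.feature_importances_
    relevance /= relevance.sum()

print("training accuracy:", ((F > 0).astype(int) == y).mean())
```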
