Designing the architecture of deep neural networks and choosing their hyperparameters is a difficult problem, often requiring trial and error by hand, since the networks are essentially black-box algorithms. We are developing a genetic (evolutionary) algorithm, with transfer learning across generations, that automatically optimizes neural network architectures and hyperparameters for any given dataset by leveraging high-performance parallel computing. We plan to scale this approach to supercomputers with thousands of GPUs, such as Blue Waters.
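
To illustrate the kind of evolutionary loop involved, here is a minimal sketch in Python. The search space, the truncation-selection scheme, and the toy surrogate fitness function are illustrative assumptions, not the project's actual design; in the real system, each fitness evaluation would train a candidate network on its own GPU, and children could additionally inherit trained weights from their parents (the transfer learning across generations mentioned above).

import random

# Hypothetical hyperparameter search space; the project's actual genes are not specified here.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8, 16],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_genome():
    # A genome is one hyperparameter configuration.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(genome):
    # Placeholder surrogate: the real system would train the network this
    # genome describes and return its validation accuracy. Each such
    # evaluation is independent, so a generation parallelizes across GPUs.
    return -abs(genome["num_layers"] - 8) - abs(genome["units"] - 128) / 64.0

def crossover(a, b):
    # Uniform crossover: each gene is taken from either parent. In the real
    # system, the child could also inherit trained weights from a parent.
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(genome, rate=0.2):
    # With probability `rate`, resample a gene from the search space.
    return {k: (random.choice(v) if random.random() < rate else genome[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(pop_size=20, generations=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half as parents (truncation selection).
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]
        # Refill the population with mutated offspring of random parent pairs.
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())

Running the sketch prints the best configuration found under the surrogate; swapping the placeholder fitness for actual network training is what makes thousands of GPUs useful, since every candidate in a generation can be evaluated concurrently.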
Project Members: Daniel George, Eliu Huerta