Optimizing Neural Network Architectures via Parallel Computing with Evolutionary/Genetic Algorithms

Designing the architecture of deep neural networks and choosing their hyperparameters is a difficult problem that often requires manual trial and error, since the networks are essentially black boxes. We are developing a genetic (evolutionary) algorithm, with transfer learning across generations, to automatically optimize neural network architectures and hyperparameters for any given dataset by leveraging high-performance parallel computing. We plan to scale this approach to supercomputers with thousands of GPUs, such as Blue Waters.
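A minimal sketch of how such an evolutionary search might be structured, assuming a simple genome of (number of layers, units per layer, log learning rate). A toy surrogate stands in for the real fitness function, which would be the validation accuracy of a trained network; all names, parameter ranges, and rates here are illustrative assumptions, not the project's actual encoding.

```python
import random

def fitness(genome):
    """Toy surrogate for validation accuracy; in the real system each
    evaluation would train a network (ideally one per GPU in parallel)."""
    layers, units, lr_exp = genome
    # Surrogate peaks at ~4 layers, ~128 units, learning rate ~1e-3.
    return -((layers - 4) ** 2 + ((units - 128) / 32) ** 2 + (lr_exp + 3) ** 2)

def random_genome(rng):
    return (rng.randint(1, 8), rng.choice([32, 64, 128, 256]), rng.uniform(-5, -1))

def mutate(genome, rng):
    layers, units, lr_exp = genome
    if rng.random() < 0.3:
        layers = max(1, layers + rng.choice([-1, 1]))
    if rng.random() < 0.3:
        units = rng.choice([32, 64, 128, 256])
    if rng.random() < 0.3:
        lr_exp += rng.gauss(0, 0.5)
    return (layers, units, lr_exp)

def crossover(a, b, rng):
    # Uniform crossover: each gene comes from either parent.
    return tuple(x if rng.random() < 0.5 else y for x, y in zip(a, b))

def evolve(generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluations are independent, so in a parallel deployment
        # each one would be dispatched to its own GPU or compute node.
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 4]  # truncation selection
        population = parents + [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
    return max(population, key=fitness)

best = evolve()
```

Keeping the top parents each generation (elitism) mirrors the idea of transfer learning across generations: promising candidates survive intact rather than being re-derived from scratch.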

Project Members: Daniel George, Eliu Huerta

Gravity Group
1205 W. Clark St.
Urbana, Illinois 61801
Email: kindrtnk@illinois.edu