Artificial neural networks – the result of researchers’ efforts to mimic the processes that take place in the human brain – have been a focus of extensive research for some time. By the time of the Isaac Newton Institute programme, the field was at a crucial juncture. In the 1990s it had become clear that the future of machine learning hinged not so much on neurobiology as on statistics and probability theory. Artificial neural networks “learn” by examining large sets of training data, but these data sets are rarely clean: they come with errors and variability. To make the most of such noisy data, it is important to quantify and understand the uncertainty it contains. The recognition that these probabilistic aspects of neural networks required a sound mathematical footing led to the NNM programme.
The programme organisers understood that they needed to bring together experts from computer science and statistics, as well as from other relevant fields such as physics and dynamical systems. The Isaac Newton Institute, with its focus on interdisciplinary research, was the ideal location to provide the time and space to exchange ideas.