Computational Neuroscience and Anthropogeny
Neuroscience has made great strides in the last decade following the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a science and engineering grand challenge that has greatly accelerated research on large-scale recordings from neurons and reconstructions of neural circuits. Large-scale neural network models have in turn inspired major advances in artificial intelligence. These network models have been trained on massive data sets to recognize objects in images, caption photographs, and translate text between languages. The most recent advance has been the emergence of pre-trained foundation language models, trained with self-supervision, that can be adapted through fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This brings artificial systems one step closer to the extraordinary versatility of human language. Language models like Generative Pre-trained Transformer 3 (GPT-3) and, more recently, Language Model for Dialogue Applications (LaMDA) can carry on dialogs with humans on many topics after minimal priming with a few examples. However, reactions to these models have varied widely, and there is ongoing debate about whether these large language models (LLMs) understand what they are saying or exhibit signs of intelligence. I will present examples of these dialogs and let the audience decide for themselves.
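The "priming" mentioned above refers to few-shot prompting: a handful of demonstration question-answer pairs is placed in the model's input ahead of a new query, with no change to the model's weights. The sketch below is illustrative only; the example pairs and the `send_to_model` stub are hypothetical placeholders, not any particular model's API.

```python
# A minimal sketch of "priming with a few examples" (few-shot prompting).
# The demonstration pairs and the send_to_model() stub below are hypothetical
# placeholders; this only illustrates how a few demonstrations are
# concatenated with a new query into a single prompt for a language model.

FEW_SHOT_EXAMPLES = [
    ("What is the capital of France?", "Paris."),
    ("What is the capital of Japan?", "Tokyo."),
]

def build_prompt(question: str) -> str:
    """Concatenate demonstration Q/A pairs, then append the new question."""
    lines = [f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

def send_to_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM completion endpoint."""
    raise NotImplementedError("Replace with a real model API call.")

if __name__ == "__main__":
    # Print the assembled prompt; the demonstrations condition the model's
    # continuation without any fine-tuning.
    print(build_prompt("What is the capital of Chile?"))
```

The key point is that no gradient updates occur: the same pre-trained model handles many tasks simply because the prompt shows it the pattern to continue.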
| Attachment | Size |
|---|---|
| 2022_11_19_09_Sejnowski.mp4 | 979.33 MB |