Harding Distinguished Postgraduate Scholars Programme

 

  Alan Ansell   aja63@cam.ac.uk

  New Zealand

  Theoretical and Applied Linguistics, Churchill College

  PhD thesis: Learning to Remember

  Research interests
  1. Computational Linguistics
  2. Deep Learning
  3. Distributed representations of meaning
  4. Neural models of memory

 

Recent deep learning “contextualisation” methods have enabled the creation of computational models which can “understand” short spans of natural language much better than was previously possible. Contextualisation methods are effective largely because they are “self-supervised”: they can learn generalisable knowledge about a language solely by “reading” a large amount of text written in that language. However, they currently lack a satisfactory model of memory, a crucial faculty for language comprehension in humans, and therefore cannot integrate information across large spans of text. My research will focus on developing better memory models, which can learn in a self-supervised manner to extract relevant information from text, store it over the long term, and support efficient querying. If successful, such a model could be applied to improve the performance of a wide range of natural language processing applications, such as translators, digital assistants and search engines.
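To make the idea of self-supervision concrete, here is a toy sketch (not the author's actual method, and far simpler than real contextualisation models): the training signal is derived from the text itself by hiding a word and learning to predict it from its neighbours, so no human-labelled data is needed. The function names and the count-based predictor are illustrative assumptions only.

```python
from collections import Counter, defaultdict

def train(corpus, window=2):
    """Self-supervised 'training': count how often each word appears
    near each context word, using raw text as its own supervision."""
    context_counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, target in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    # The hidden "label" (target) comes from the text itself.
                    context_counts[words[j]][target] += 1
    return context_counts

def predict_masked(model, words, masked_index, window=2):
    """Predict a hidden word from its surrounding context by letting
    each context word vote for likely fillers."""
    votes = Counter()
    lo, hi = max(0, masked_index - window), min(len(words), masked_index + window + 1)
    for j in range(lo, hi):
        if j != masked_index:
            votes.update(model[words[j]])
    best, _ = votes.most_common(1)[0]
    return best

corpus = [
    "the cat sat on the mat",
    "the cat sat on the sofa",
    "the dog sat on the mat",
]
model = train(corpus)
sentence = "the cat sat on the mat".split()
print(predict_masked(model, sentence, 2))  # prints "sat"
```

Real contextualisation models replace the co-occurrence counts with deep neural networks, but the principle is the same: the prediction task is generated automatically from unannotated text.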

 

Who or what inspired you to pursue your research interests?

I was a keen chess player and Latin scholar at high school. Playing around with computer chess programs made me wonder whether translating between languages could also be computerised. I discovered that while humans can understand language effortlessly, we find it almost impossible to describe how we understand language. I have been fascinated by the question of how to write algorithms capable of “understanding” language ever since. I am fortunate to have learned from a succession of exceptional teachers who have encouraged my curiosity and inspired me to pursue this interest.