2022
Communicative Efficiency or Iconic Learning: Do Acquisition and Communicative Pressures Interact to Shape Colour-Naming Systems?
Balint Gyevnar, Gautier Dagan, Coleman Haley, Shangmin Guo, & Frank Mollica. Entropy.
How to encode arbitrarily complex morphology in word embeddings, no corpus needed
Lane Schwartz, Coleman Haley, & Francis Tyers. Proceedings of the First Workshop on NLP Applications to Field Linguistics.
2021
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, R. Thomas McCoy, Yichen Jiang, Coleman Haley, Roland Fernandez, Hamid Palangi, Jianfeng Gao, & Paul Smolensky. LoResMT 2021.
Deep neural networks easily learn unnatural infixation and reduplication patterns
Coleman Haley & Colin Wilson. SCiL 2021.
Morphology Matters: A Multilingual Language Modeling Analysis
Hyunji Hayley Park, Katherine J. Zhang, Coleman Haley, Kenneth Steimel, Han Liu, & Lane Schwartz. Transactions of the Association for Computational Linguistics.
2020
Invertible Tree Embeddings using a Cryptographic Role Embedding Scheme
Coleman Haley & Paul Smolensky. COLING 2020.
This is a BERT. Now there are several of them. Can they generalize to novel words?
Coleman Haley. BlackboxNLP 2020.
Neural Polysynthetic Language Modeling
Lane Schwartz, Francis Tyers, Lori Levin, Christo Kirov, Patrick Littell, Chi-kiu Lo, Emily Prud'hommeaux, Hyunji Hayley Park, Kenneth Steimel, Rebecca Knowles, Jeffrey Micher, Lonny Strunk, Han Liu, Coleman Haley, Katherine J. Zhang, Robbie Jimmerson, Vasilisa Andriyanets, Aldrian Obaja Muis, Naoki Otani, Jong Hyuk Park, & Zhisong Zhang. Preprint.