Open Vocabulary Learning on Source Code with a Graph-Structured Cache
Abstract
Machine learning models that take computer program source code as input typically use Natural Language Processing (NLP) techniques. However, a major challenge is that code is written using an open, rapidly changing vocabulary due to, e.g., the coinage of new variable and method names. Reasoning over such a vocabulary is not something for which most NLP methods are designed. We introduce a Graph-Structured Cache to address this problem; this cache contains a node for each new word the model encounters with edges connecting each word to its occurrences in the code. We find that combining this graph-structured cache strategy with recent Graph-Neural-Network-based models for supervised learning on code improves the models' performance on a code completion task and a variable naming task — with over 100% relative improvement on the latter — at the cost of a moderate increase in computation time.
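The core idea the abstract describes, a cache with one node per out-of-vocabulary word and edges from that node to each of the word's occurrences in the code, can be sketched as a small data structure. This is a hypothetical illustration of the concept only, not the authors' implementation; the class and method names are invented for this sketch.

```python
from collections import defaultdict

class GraphStructuredCache:
    """Sketch of a graph-structured cache: out-of-vocabulary words become
    cache nodes, each linked by edges to its occurrences in the code."""

    def __init__(self, vocabulary):
        self.vocabulary = set(vocabulary)      # closed, pretrained vocabulary
        self.occurrences = defaultdict(list)   # word node -> occurrence edges

    def add_token(self, word, location):
        """Record a token; only open-vocabulary words get cache nodes."""
        if word not in self.vocabulary:
            # add an edge from the word's cache node to this occurrence
            self.occurrences[word].append(location)

    def cache_nodes(self):
        return list(self.occurrences.keys())

# Usage: in-vocabulary tokens are ignored; new identifiers become nodes.
cache = GraphStructuredCache(vocabulary={"int", "return", "if"})
cache.add_token("myCounter", location=("Example.java", 12))
cache.add_token("myCounter", location=("Example.java", 40))
cache.add_token("int", location=("Example.java", 12))
print(cache.cache_nodes())             # ['myCounter']
print(len(cache.occurrences["myCounter"]))  # 2
```

In the paper's setting these cache nodes and occurrence edges are added to the program graph consumed by a Graph Neural Network, letting the model reason about identifiers it never saw during training.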
Additional Information
© 2019 by the author(s). Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Many thanks to Miltos Allamanis and Hyokun Yun for their advice and useful conversations.
Attached Files
Published - cvitkovic19b.pdf
Submitted - 1810.08305.pdf
Supplemental Material - cvitkovic19b-supp.pdf
Additional details
- Eprint ID
- 94180
- Resolver ID
- CaltechAUTHORS:20190327-085810844
- Created
- 2019-03-28 (from EPrint's datestamp field)
- Updated
- 2023-06-02 (from EPrint's last_modified field)