Published April 2019
Journal Article

On the Uncertainty of Information Retrieval in Associative Memories

Abstract

We (people) are memory machines. Our decision processes, emotions, and interactions with the world around us are based on and driven by associations to our memories. This natural association paradigm will become critical in future memory systems, in which the key question will not be "How do I store more information?" but rather "Do I have the relevant information? How do I retrieve it?" The focus of this paper is to make a first step in this direction. We define and solve a very basic problem in associative retrieval. Given a word W, the words in the memory that are t-associated with W are the words in the ball of radius t around W. More generally, given a set of words, say W, X, and Y, the words that are t-associated with {W, X, Y} are those in the memory that are within distance t from all three words. Our main goal is to study the maximum size of the t-associated set as a function of the number of input words and the minimum distance of the words in memory; we call this value the uncertainty of an associative memory. In this work we consider the Hamming distance and derive the uncertainty of the associative memory that consists of all the binary vectors, for an arbitrary number of input words. In addition, we study the retrieval problem: how do we compute the t-associated set given the input words? We note that this paradigm is a generalization of the sequence reconstruction problem proposed by Levenshtein (2001). In that model, a word is transmitted over multiple channels, and a decoder receives all the channel outputs and decodes the transmitted word. Levenshtein computed the minimum number of channels that guarantees successful decoding; this value happens to be the uncertainty of an associative memory with two input words.
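The definition above is concrete enough to sketch directly. The following brute-force Python snippet (an illustration of the definition, not an algorithm from the paper; the function names are our own) enumerates all binary words of length n and keeps those within Hamming distance t of every input word, i.e., the intersection of the radius-t balls around the inputs:

```python
from itertools import product

def hamming(u, v):
    """Hamming distance: number of coordinates where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def t_associated(inputs, t, n):
    """All binary words of length n within distance t of every input word,
    i.e., the intersection of the Hamming balls of radius t around the inputs."""
    return [w for w in product((0, 1), repeat=n)
            if all(hamming(w, x) <= t for x in inputs)]

# Two input words at distance 2 in {0,1}^4; the 1-associated set is the
# intersection of the two radius-1 balls.
inputs = [(0, 0, 0, 0), (1, 1, 0, 0)]
assoc = t_associated(inputs, t=1, n=4)
# assoc == [(0, 1, 0, 0), (1, 0, 0, 0)]
```

Here the size of the 1-associated set is 2: only the two words that flip exactly one of the coordinates where the inputs disagree lie within distance 1 of both. The maximum of this size over all valid inputs is what the abstract calls the uncertainty of the associative memory.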

Additional Information

© 2018 IEEE. This research was supported in part by the ISEF Foundation, the Lester Deutsch Fellowship, the National Science Foundation under Grant CCF-1116739, and the NSF Expeditions in Computing Program under Grant CCF-0832824. Some of the results in this paper were presented at the IEEE International Symposium on Information Theory, Cambridge, MA, July 1 – 6, 2012. The authors thank Tuvi Etzion for helpful discussions on anticodes. They also thank the three anonymous reviewers and the associate editor Joerg Kliewer for their valuable comments and suggestions, which have contributed to the clarity of the paper and its presentation.
