Abstract
Neuromorphic computing has several characteristics that make it a compelling paradigm for post-Moore computing. These characteristics include intrinsic parallelism, inherent scalability, collocated processing and memory, and event-driven computation. While these characteristics impart energy efficiency to neuromorphic systems, they also come with their own set of challenges. One of the biggest challenges in neuromorphic computing is establishing the theoretical underpinnings of the computational complexity of neuromorphic algorithms. In this paper, we take the first steps towards defining the space and time complexity of neuromorphic algorithms. Specifically, we describe a model of neuromorphic computation and state the assumptions that govern the computational complexity of neuromorphic algorithms. Next, we present a theoretical framework for defining the computational complexity of a neuromorphic algorithm. We explicitly define what space and time complexity mean in the context of neuromorphic algorithms based on our model of neuromorphic computation. Finally, we leverage our approach to define the computational complexities of six neuromorphic algorithms: the constant function, the successor function, the predecessor function, the projection function, a neuromorphic sorting algorithm, and a neighborhood subgraph extraction algorithm.