Sparse Symmetric Format for Tucker Decomposition...

by Shruti Shivakumar, Jiajia Li, Ramakrishnan Kannan, Srinivas Aluru
Journal Name: IEEE Transactions on Parallel and Distributed Systems
Page Numbers: 1743–1756

Tensor-based methods are receiving renewed attention in recent years due to their prevalence in diverse real-world applications. There is considerable literature on tensor representations and algorithms for tensor decompositions, both for dense and sparse tensors. Many applications in hypergraph analytics, machine learning, psychometry, and signal processing result in tensors that are both sparse and symmetric, making them an important class for further study. Similar to the critical Tensor Times Matrix chain operation (TTMc) on general sparse tensors, the Sparse Symmetric Tensor Times Same Matrix chain (S3TTMc) operation is compute and memory intensive due to the high tensor order and the associated factorial explosion in the number of non-zeros. We present the novel Compressed Sparse Symmetric (CSS) format for sparse symmetric tensors, along with an efficient parallel algorithm for the S3TTMc operation. We theoretically establish that S3TTMc on CSS achieves a better memory versus run-time trade-off than state-of-the-art implementations, and visualize the variation of the performance gap over the parameter space. Our experimental findings confirm these results, achieving up to 2.72× speedup on synthetic and real datasets. We also showcase the scaling of the algorithm on different test architectures to highlight the effect of machine characteristics on algorithm performance.
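To give a rough sense of why symmetry matters for storage, a symmetric tensor can be represented by keeping only one canonical (sorted) index tuple per permutation class of indices. The sketch below is a hypothetical toy layout, not the paper's CSS format: it counts how many full-tensor entries each stored non-zero stands for, which is the "factorial explosion" referred to above.

```python
from collections import Counter
from math import factorial

# Hypothetical compressed storage for a symmetric 3rd-order tensor
# (illustrative only, not the CSS format from the paper): each non-zero
# is keyed by its sorted index tuple, which represents all permutations.
nonzeros = {            # canonical (sorted) index -> value
    (0, 1, 2): 3.0,     # all indices distinct
    (1, 1, 4): 2.0,     # one repeated index
    (5, 5, 5): 1.0,     # fully repeated: a single diagonal entry
}

def multiplicity(idx):
    """Number of distinct full-tensor entries a canonical tuple represents:
    N! divided by the factorials of the repeat counts of each index."""
    m = factorial(len(idx))
    for count in Counter(idx).values():
        m //= factorial(count)
    return m

# Expanding the compressed representation back to the full tensor
# multiplies the non-zero count by up to order-factorial per entry.
total_full_entries = sum(multiplicity(i) for i in nonzeros)
print(total_full_entries)  # 6 + 3 + 1 = 10 full-tensor non-zeros from 3 stored
```

Here three stored entries stand for ten entries of the dense symmetric tensor; at higher tensor orders the per-entry blow-up grows as N!, which is why operating directly on a compressed symmetric representation, as S3TTMc does on CSS, can pay off.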