Published April 1, 2019 | Submitted
Report | Open

Tensor vs Matrix Methods: Robust Tensor Decomposition under Block Sparse Perturbations

Abstract

Robust tensor CP decomposition involves decomposing a tensor into low-rank and sparse components. We propose a novel non-convex iterative algorithm with guaranteed recovery. It alternates between low-rank CP decomposition through gradient ascent (a variant of the tensor power method) and hard thresholding of the residual. We prove convergence to the globally optimal solution under natural incoherence conditions on the low-rank component and a bounded level of sparse perturbations. We compare our method with natural baselines that apply robust matrix PCA either to the flattened tensor or to the matrix slices of the tensor. Our method can provably handle a far greater level of perturbation when the sparse tensor is block-structured, as naturally occurs in many applications such as activity detection in videos. Our experiments validate these findings. Thus, we establish that tensor methods can tolerate a higher level of gross corruptions than matrix methods.
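The sketch below illustrates the alternating scheme the abstract describes: fit a low-rank CP component with tensor power-method-style updates, then hard-threshold the residual to estimate the sparse corruption. It is a minimal illustration restricted to a rank-1, third-order tensor; the function names (robust_cp, hard_threshold, rank1_power_update) and the parameters tau, n_iters, n_power are assumptions for this example, not the paper's exact algorithm or settings.

    # Minimal sketch (assumed names/parameters), not the paper's implementation.
    import numpy as np

    def hard_threshold(R, tau):
        """Keep only residual entries whose magnitude exceeds tau (sparse estimate)."""
        return R * (np.abs(R) > tau)

    def rank1_power_update(T, u, v, w, n_power=10):
        """A few tensor power-method-style updates for a rank-1 CP fit of T."""
        for _ in range(n_power):
            u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
            v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
            w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
        lam = np.einsum('ijk,i,j,k->', T, u, v, w)
        return lam, u, v, w

    def robust_cp(T, tau, n_iters=20):
        """Alternate a low-rank CP fit of T - S with hard thresholding of T - L."""
        S = np.zeros_like(T)
        u, v, w = (np.random.randn(n) for n in T.shape)
        for _ in range(n_iters):
            lam, u, v, w = rank1_power_update(T - S, u, v, w)
            L = lam * np.einsum('i,j,k->ijk', u, v, w)   # current low-rank estimate
            S = hard_threshold(T - L, tau)               # current sparse estimate
        return L, S

The matrix baselines mentioned in the abstract would instead run robust matrix PCA on a flattening of T (e.g., reshaping a d x d x d tensor to d x d^2) or on its individual matrix slices; the point of the paper is that the tensor-native alternation above tolerates a higher level of block-sparse corruption.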

Additional Information

Animashree Anandkumar is supported in part by a Microsoft Faculty Fellowship, NSF Career Award CCF-1254106, ONR Award N00014-14-1-0665, ARO YIP Award W911NF-13-1-0084, and AFOSR YIP FA9550-15-1-0221. Yang Shi is supported by NSF Career Award CCF-1254106 and ONR Award N00014-15-1-2737. Niranjan is supported by NSF BigData Award IIS-1251267 and ONR Award N00014-15-1-2737.

Attached Files

Submitted - 1510.04747.pdf (366.2 kB)
md5:611e47370cae47b5a23962e9c7ed81f0

Additional details

Created: August 20, 2023
Modified: October 20, 2023