Low-rank matrix factorization can reveal fundamental structure in data. For example, joint-PCA applied to multiple datasets can find a joint, lower-dimensional representation of the data. Recently, other matrix factorization methods with similar aims have been introduced for multi-dataset analysis, e.g., the shared response model (SRM) and hyperalignment (HA). We provide a comparison of these methods with joint-PCA that highlights their similarities and differences. We identify necessary and sufficient conditions under which the solution sets of SRM and HA can be derived from joint-PCA. In particular, if there exist a common template and a set of generalized rotation matrices through which the datasets can be exactly aligned to the template, then, for any number of features, the SRM and HA solutions can be readily derived from the joint-PCA of the datasets. Not surprisingly, this assumption fails to hold for complex multi-datasets, e.g., multi-subject fMRI datasets. We show that if the desired conditions are not satisfied, joint-PCA can easily overfit to the training data when the dimension of the projected space is high (≳ 50). We also examine how well low-dimensional matrix factorizations can be computed with gradient-descent-type algorithms implemented in Google's TensorFlow library.
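As a rough illustration of the last point, the following is a minimal sketch of computing a low-rank factorization X ≈ UV by gradient descent on the squared Frobenius error. It uses NumPy rather than TensorFlow for self-containedness; all variable names, dimensions, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 30, 5  # illustrative sizes; k is the factorization rank

# Synthetic exactly rank-k data matrix.
X = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

# Small random initialization of the two factors.
U = 0.1 * rng.standard_normal((m, k))
V = 0.1 * rng.standard_normal((k, n))

lr = 0.01  # step size (illustrative choice)
err0 = np.linalg.norm(U @ V - X) / np.linalg.norm(X)

for _ in range(2000):
    R = U @ V - X            # residual of the current approximation
    U -= lr * (R @ V.T)      # gradient of 0.5 * ||R||_F^2 w.r.t. U
    V -= lr * (U.T @ R)      # gradient of 0.5 * ||R||_F^2 w.r.t. V

# Relative reconstruction error should drop well below the initial value.
err = np.linalg.norm(U @ V - X) / np.linalg.norm(X)
```

In a TensorFlow version, the two hand-written gradient updates would typically be replaced by automatic differentiation and a built-in optimizer, but the objective and iteration structure stay the same.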