We consider inference problems for high-dimensional (HD) functional data with a
dense number (T) of repeated measurements taken on a large number (p) of
variables from a small number (n) of experimental units. The spatial and
temporal dependence, the high dimensionality, and the dense repeated
measurements together make both theoretical study and computation challenging.
This paper has two
aims. The first is to resolve the theoretical and computational challenges in
detecting and identifying change points among covariance matrices from HD
functional data. The second is to provide computationally efficient,
tuning-free tools with guaranteed stochastic error control. The change point
detection procedure is formulated as a test of the homogeneity of the
covariance matrices across the repeated measurements.
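In schematic form (the notation here is ours, not taken verbatim from the
paper), with $\Sigma_t$ denoting the $p \times p$ covariance matrix at
measurement time $t$, the detection step tests
\[
H_0:\ \Sigma_1 = \Sigma_2 = \cdots = \Sigma_T
\quad \text{versus} \quad
H_1:\ \Sigma_\tau \neq \Sigma_{\tau+1} \ \text{for some } \tau \in \{1, \dots, T-1\},
\]
and the identification step estimates the locations $\tau$ at which the
covariance changes.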
The weak convergence of the stochastic process formed by the test statistics is established under the "large p, large T and small n"
setting. Under a mild set of conditions, our change point identification
estimator is shown to be consistent for change points at any location in the
sequence. Its rate of convergence depends on the data dimension, the sample size,
the number of repeated measurements, and the signal-to-noise ratio.
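To make the identification idea concrete, the following is a minimal,
self-contained sketch, not the paper's estimator: it locates a single
covariance change by scanning a Frobenius-norm CUSUM statistic over candidate
time points. All names and the simulated sizes (n, p, T, tau) are illustrative
assumptions.

```r
## Schematic covariance change point scan; illustrative only.
set.seed(1)
n <- 10; p <- 30; T <- 40; tau <- 25   # true change point (assumed)

# Simulate n units, each observed as a p-vector at T time points;
# the covariance doubles after time tau.
X <- array(rnorm(n * p * T), dim = c(n, p, T))
X[, , (tau + 1):T] <- sqrt(2) * X[, , (tau + 1):T]

# Pooled sample covariance of the p-vectors over a set of time points.
cov_at <- function(ts) {
  Z <- do.call(rbind, lapply(ts, function(t) X[, , t]))  # rows: unit-time obs
  cov(Z)
}

# Frobenius-norm CUSUM: distance between pre-t and post-t covariances,
# weighted so boundary points are not unduly favored.
cusum <- sapply(2:(T - 2), function(t) {
  d <- cov_at(1:t) - cov_at((t + 1):T)
  sqrt(t * (T - t) / T^2) * sqrt(sum(d^2))
})
tau_hat <- which.max(cusum) + 1        # map index back to a time point
cat("estimated change point:", tau_hat, "\n")
```

In the HD regime studied here, such a naive scan is exactly what becomes
statistically and computationally delicate, which is what the proposed test
statistics and fast algorithms are designed to address.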
We also show that our proposed computation algorithms significantly reduce computation
time and are applicable to real-world data such as fMRI data with a large
number of HD repeated measurements. Simulation results demonstrate both the
finite-sample performance and the computational effectiveness of the proposed
procedures. We observe that the empirical size of the test is well controlled
at the nominal level and that the locations of multiple change points can be
accurately identified. An application to fMRI data demonstrates that the
proposed methods can identify event boundaries in the preface of the movie
Sherlock. The proposed procedures are implemented in the R package TechPhD.