Given a set of points with their mutual distances or similarities, the task is to find clusters of near or similar points and to separate distant or dissimilar points. Spectral clustering methods are well known in both theory and applications. They have in common that they work with eigenvectors of matrices derived from the mutual distances or similarities of the points to be separated. As the literature shows, there is currently much interest in these methods.
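
As one concrete variant (not necessarily the exact algorithms discussed in the talk), a normalized spectral clustering pipeline might be sketched as follows; the Gaussian similarity, the bandwidth sigma and the use of k-means on the embedding are assumptions of this example, not prescriptions from the abstract:

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    def spectral_clustering(X, k, sigma=1.0):
        # Gaussian similarity matrix W built from pairwise squared distances
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-sq_dists / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)

        # Symmetrically normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        L_sym = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

        # Eigenvectors of the k smallest eigenvalues form the spectral embedding
        _, U = eigh(L_sym, subset_by_index=[0, k - 1])

        # Row-normalize the embedding and cluster it with k-means
        U = U / np.linalg.norm(U, axis=1, keepdims=True)
        return KMeans(n_clusters=k, n_init=10).fit_predict(U)
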
My focus is an analysis of the methods: in which sense they produce "good" clusterings, and in which cases they work better or worse. Moreover, I am interested in the connection to random walks, on the one hand to gain a better understanding of the algorithms from this point of view, and on the other hand to develop intuition for new, hopefully "better" algorithms. I have not worked on modelling issues such as finding good similarities, nor on implementation details such as the fast computation of eigenvectors or numerical problems. In the talk I will explain the methods, make precise what "good" means, explain the connection to random walks, and present several random walk properties that distinguish points in different clusters.
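
To make the random walk connection concrete (this is a standard relation, not the specific results of the talk; the toy similarity matrix below is an arbitrary assumption): the natural random walk on the similarity graph has transition matrix P = D^{-1} W, and the eigenvectors used by normalized spectral clustering are, up to the scaling D^{-1/2}, eigenvectors of P. A small numerical check of this relation:

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy symmetric similarity matrix with positive off-diagonal entries
    W = rng.random((6, 6))
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)

    d = W.sum(axis=1)
    P = W / d[:, None]                      # random walk transition matrix P = D^{-1} W
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(6) - D_inv_sqrt @ W @ D_inv_sqrt

    # If L_sym v = lam * v, then u = D^{-1/2} v satisfies P u = (1 - lam) u
    lam, V = np.linalg.eigh(L_sym)
    U = D_inv_sqrt @ V
    assert np.allclose(P @ U, U * (1.0 - lam))   # each column scaled by (1 - lam_i)
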