Homework 5a: 3D Shape Clustering (22 Points)

Chris Tralie

Learning Objectives

  • Practice "data wrangling," or loading in and organizing data in a way that's digestible in code
  • Implement an unsupervised learning technique to discover structure in data
  • Implement KMeans clustering

Overview

The purpose of this assignment is to give you practice implementing KMeans clustering and loading in and organizing data. Your task will be to cluster a collection of 40 3D shapes into 4 clusters. The shape categories are guitar, fighter jet, sedan, and shelves, and they are a subset of the Princeton Shape Benchmark. Each 3D shape is represented as a triangle mesh: a bunch of triangles stitched together to approximate a surface. Below is a table showing all of the meshes you'll be dealing with in this assignment. If you're curious, you can click on the name of a mesh to view it interactively in a 3D browser engine I made called ggslac.

Of course, in the spirit of unsupervised learning, we'll pretend we don't actually know what they are, and we'll attempt to cluster them into 4 categories which will hopefully reflect the true groups that we know about.

3D Shape Features

We will have to design features that summarize each shape. We will follow a simple but robust technique from a classical paper in computer graphics that defines 3D shape features known as shape distributions (Click here to view the paper). The idea is to create a histogram of the distances between many random pairs of points on the surface; the paper refers to this particular descriptor as D2. Each bin of the histogram becomes one dimension of the feature. Below is a 3D guitar (viewed using polyscope), along with a bunch of points sampled randomly by area on the surface, whose pairwise distances can be summarized in a histogram.

Below is a plot of the histograms of distances between points for all of the shapes in the collection.

We can see that there are trends within each class. Our goal will be to have our algorithm discover these trends and cluster the shapes accordingly.

Programming Tasks

Getting Started

Click here to download the data and some 3D shape processing utilities I wrote in Python. When you're finished, submit your Jupyter notebook and answers to the questions below, as well as an indication of who your buddy was if you had one.

Your task will be to implement KMeans clustering on these 3D shapes using the D2 features. There are smarter ways to compare histograms, but we'll just treat them as Euclidean vectors, as we have with all of our other features so far. So you can use the KMeans algorithm just as we did in class, except you'll be working in 100 dimensions instead of 2.

Below is code that you can use to generate a D2 histogram on the "guitar0.off" mesh file, assuming you're writing in a Jupyter notebook at the root of the starter code directory. We'll sample 10,000 points on each mesh and then sample a million pairwise distances between them, which we'll summarize in a histogram with 100 bins. (NOTE: The shapes have been normalized so that they're all roughly the same size, which makes these features "scale invariant".)
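Since the exact utility functions from the starter code aren't reproduced here, the sketch below shows roughly what that code might look like. The helper names load_off and sample_by_area are hypothetical stand-ins for the provided utilities, and the fixed histogram range of [0, 3] is an assumption that leans on the meshes having been normalized to roughly the same size.

```python
import numpy as np
import matplotlib.pyplot as plt
from utils import load_off, sample_by_area  # hypothetical helper names

def compute_d2(path, n_points=10000, n_pairs=1000000, n_bins=100):
    """Sketch of a D2 shape distribution: a histogram of distances
    between random pairs of points sampled by area on the surface."""
    VPos, ITris = load_off(path)               # vertices/triangles (assumed API)
    X = sample_by_area(VPos, ITris, n_points)  # (n_points, 3) samples (assumed API)
    # Pick a million random pairs of points and compute their distances
    i = np.random.randint(n_points, size=n_pairs)
    j = np.random.randint(n_points, size=n_pairs)
    dists = np.sqrt(np.sum((X[i] - X[j])**2, axis=1))
    # Bin into 100 bins over a fixed range so histograms line up across
    # shapes (the range is an assumption; shapes are pre-normalized)
    hist, _ = np.histogram(dists, bins=n_bins, range=(0, 3))
    return hist / np.sum(hist)  # normalize to a probability distribution

hist = compute_d2("shapes/guitar0.off")
plt.plot(hist)
plt.xlabel("Histogram bin")
plt.ylabel("Proportion of pairs")
plt.show()
```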

Task 1: Load Data (5 Points)

Your first task is to load in all 40 shapes in the dataset, which are in the shapes folder, and to compute the histogram for each one. You'll then keep these histograms in memory and use them in clustering.
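One possible way to organize this, reusing the hypothetical compute_d2 sketch above and assuming the meshes are .off files in the shapes folder:

```python
import glob
import numpy as np

# Gather all 40 mesh files (the .off extension is an assumption)
files = sorted(glob.glob("shapes/*.off"))
# Stack the D2 histograms into a (40, 100) feature matrix
feats = np.array([compute_d2(f) for f in files])
```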

Task 2: KMeans Clustering (10 Points)

Implement KMeans clustering and run it on the data with 4 clusters. Figure out a way to print out which shapes are in which clusters. Run the algorithm several times. Question: What happens when you run this multiple times with different random initializations? Do the clusters make sense to you?
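As a starting point, here is one possible sketch of KMeans on the (40, 100) feature matrix from Task 1. Initializing the centers at randomly chosen data points and running for a fixed number of iterations are choices of this sketch, not requirements; the names feats and files come from the loading step above.

```python
import numpy as np

def kmeans(X, k, n_iters=100):
    # Initialize centers at k distinct random data points
    centers = X[np.random.choice(len(X), k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest center
        # under Euclidean distance; d has shape (n_points, k)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        # Update step: move each center to the mean of its points
        for c in range(k):
            if np.any(labels == c):
                centers[c] = np.mean(X[labels == c], axis=0)
    return labels, centers

labels, centers = kmeans(feats, 4)
for c in range(4):
    members = [files[i] for i in np.where(labels == c)[0]]
    print("Cluster", c, ":", members)
```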

Task 3: Multiple Runs for Robustness (7 Points)

Recall that the loss function KMeans attempts to minimize is the sum of squared distances from each point to its assigned center. KMeans will always converge to a local minimum of this loss, but depending on the centers we started with, this may not be the global minimum over all possible clusterings. Therefore, a common trick is to run KMeans many times with different initial cluster centers and to return the clustering that achieves the minimum loss. Implement this strategy and verify that the resulting clusters group the shapes together better overall than a single run of KMeans with random initial centers, as in the previous task.
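A minimal sketch of this restart strategy, reusing the kmeans function from the previous task; the kmeans_loss helper and the number of restarts are assumptions of the sketch:

```python
import numpy as np

def kmeans_loss(X, labels, centers):
    # Sum of squared distances from each point to its assigned center
    return np.sum((X - centers[labels])**2)

best_loss = np.inf
best_labels = None
for _ in range(100):  # the number of restarts is a free parameter
    labels, centers = kmeans(feats, 4)
    loss = kmeans_loss(feats, labels, centers)
    if loss < best_loss:
        best_loss = loss
        best_labels = labels
```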