Recreating a Classic Result: ISOMAP, Eigenfaces, and a Simple Face Recognition Experiment

An attempt to reproduce, explore, and truly understand a few classic ideas in dimensionality reduction and face analysis—by doing them myself.


Some ideas in data science are so influential that you keep encountering them—in papers, lectures, and references—but they can still feel abstract until you actually work through them yourself. This project gave me the chance to reproduce one of those ideas, ISOMAP, which was originally presented in a high-impact Science paper, and then to extend that exploration to related ideas like eigenfaces and a very simple form of face recognition.

Rather than focusing on implementation details, this post is about what I observed, what started to make sense visually, and why these ideas suddenly felt intuitive once I saw the results.

Seeing High-Dimensional Data Differently

Face images are deceptively complex. A single grayscale image can be represented as thousands of numbers—one per pixel. When you stack hundreds of such images together, the resulting space is extremely high-dimensional, and human intuition struggles there.

The goal of dimensionality reduction is to compress that complexity into something we can reason about—often just two dimensions—while preserving meaningful structure. In the context of faces, that structure often relates to pose, lighting, and shadow patterns.
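To make the setup concrete, here is a minimal sketch of the vectorization step described above. The face data itself isn't included with this post, so random arrays stand in for the grayscale images; the shapes (64×64 pixels, 200 images) are illustrative assumptions, not the actual dataset's dimensions.

```python
import numpy as np

# Hypothetical stand-in data: 200 synthetic 64x64 grayscale "images".
# A real run would load actual face images here instead.
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))

# Flatten each image into one long row vector: one number per pixel.
X = images.reshape(len(images), -1)
print(X.shape)  # (200, 4096) -- each face lives in a 4096-dimensional space
```

Every dimensionality-reduction method below starts from a matrix like this `X`: rows are images, columns are pixels.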

What fascinated me most was seeing how different methods preserve (or fail to preserve) these structures.

Exploring Neighborhoods and Connectivity

A key idea behind ISOMAP is that similarity should be defined locally before it’s defined globally. Instead of assuming all images relate directly to each other, the method first builds a notion of neighborhoods—which images are considered “close.”

When visualizing this connectivity, it became clear that the choice of neighborhood scale dramatically changes the story the data tells. Too strict, and the data fragments into disconnected pieces. Too loose, and everything blurs together. Somewhere in between, the structure starts to feel coherent and interpretable.

This step alone made me appreciate how much modeling assumptions shape outcomes, even before any embedding is produced.
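The fragmentation effect is easy to check directly. This sketch uses scikit-learn's `kneighbors_graph` and SciPy's `connected_components` on random stand-in data (again, not the real face images): because a larger `k` only ever adds edges to the graph, the number of disconnected pieces can only shrink as the neighborhood scale loosens.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
X = rng.random((200, 50))  # synthetic stand-in for flattened images

# Too strict a neighborhood (small k) can fragment the graph;
# a looser one (large k) merges the pieces back together.
components = {}
for k in (2, 10):
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance")
    n_comp, _ = connected_components(graph, directed=False)
    components[k] = n_comp
    print(f"k={k}: {n_comp} connected component(s)")
```

ISOMAP needs a connected graph to compute geodesic distances between all pairs of points, which is why this check matters before any embedding is attempted.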

When the ISOMAP Embedding Starts to Make Sense

Once the data is embedded into two dimensions, something interesting happens: the plot starts to tell a story.

Without any labels, I could still see regions of the embedding that corresponded to similar lighting directions or face orientations. Nearby points often looked visually similar when I inspected the corresponding images. In some directions, the arrangement felt almost continuous—like moving smoothly from left-lit to frontal to right-lit faces.

That moment—when geometry, images, and intuition finally aligned—was incredibly satisfying. It echoed the kind of structure described in the original paper, but seeing it emerge from my own experiments made it feel real rather than theoretical.
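For readers who want to try this themselves, scikit-learn ships an `Isomap` estimator that wraps the whole pipeline (neighborhood graph, geodesic distances, embedding). Since the face images aren't bundled here, this sketch runs it on a synthetic curved sheet in 3-D, which plays the same role the face manifold does in the real experiment; the parameter values are illustrative.

```python
import numpy as np
from sklearn.manifold import Isomap

# Synthetic stand-in: points on a rolled-up 2-D sheet embedded in 3-D,
# loosely analogous to face images lying on a low-dimensional manifold.
rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, 300)
h = rng.uniform(0, 5, 300)
X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])

# Neighborhood graph -> geodesic distances -> 2-D embedding, in one call.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (300, 2)
```

Plotting `embedding` (and, for real faces, thumbnailing the images at their embedded coordinates) is what produces the pose-and-lighting layouts described above.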

Comparing with PCA: Same Goal, Different Perspective

I also explored a more familiar approach: PCA. Projecting the same faces into two principal components produced a visually different result.

PCA tended to emphasize global variations, especially lighting and background intensity. While useful, the resulting arrangement felt less aligned with gradual pose changes. In contrast, the ISOMAP embedding appeared to preserve smoother transitions that matched how I intuitively think about face variation.

This comparison helped clarify an important distinction for me:
linear methods capture dominant variance, while manifold-based methods can better preserve curved structure.
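The PCA side of the comparison is a two-liner with scikit-learn. As before, random data stands in for the flattened face images; the point of the sketch is the API shape, not the numbers.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((200, 4096))  # synthetic stand-in for flattened images

# Project onto the two directions of largest variance.
pca = PCA(n_components=2)
Z = pca.fit_transform(X)
print(Z.shape)  # (200, 2)

# How much of the total variance the two components capture.
print(pca.explained_variance_ratio_)
```

On real faces, the explained-variance ratios make the "dominant variance" behavior quantitative: the first components are typically dominated by global lighting rather than pose.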


Eigenfaces: Making PCA Tangible

Eigenfaces were one of the most visually intuitive parts of this exploration. Instead of thinking of PCA as an abstract projection, I could literally see what the main directions of variation looked like.

The leading eigenfaces often resembled:

  • lighting changes across the face

  • shadow shifts

  • strong contours around facial features

Rather than representing “faces,” these components represent how faces tend to change. Seeing that made PCA feel far less mysterious and far more interpretable.
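The trick that makes eigenfaces viewable is simply reshaping: each principal component is a vector with one entry per pixel, so it can be folded back into image shape and displayed. A sketch with stand-in data (64×64 images and 8 components are illustrative choices):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))       # synthetic stand-in faces
X = images.reshape(len(images), -1)      # (200, 4096)

pca = PCA(n_components=8).fit(X)

# Each row of components_ is a 4096-vector, i.e. one value per pixel,
# so it reshapes back into a 64x64 image: an "eigenface".
eigenfaces = pca.components_.reshape(-1, 64, 64)
print(eigenfaces.shape)  # (8, 64, 64)
```

Rendering each slice of `eigenfaces` with something like matplotlib's `imshow` is what reveals the lighting and contour patterns described above.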



A Simple Recognition Idea—and Why It Works (Sometimes)

Using these representations, I explored a very simple idea for distinguishing between two individuals based on how well a new image fits into each person’s learned representation.

In this controlled setting, the separation was quite clear. While this is far from a production-ready recognition system, it served as a powerful demonstration that structure matters: when the underlying representation is meaningful, even simple decision rules can work surprisingly well.
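One concrete reading of "how well a new image fits each person's learned representation" is PCA reconstruction error: fit a small subspace per person, project the new image into each, and pick the person whose subspace reconstructs it better. This is a hedged sketch of that idea on synthetic clusters standing in for two people's face images; the per-person PCA framing is my interpretation, and all names and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic stand-in: two "people" as two well-separated clusters.
person_a = rng.normal(0.0, 1.0, (50, 100))
person_b = rng.normal(5.0, 1.0, (50, 100))

# Learn a small PCA subspace for each person.
pca_a = PCA(n_components=5).fit(person_a)
pca_b = PCA(n_components=5).fit(person_b)

def reconstruction_error(pca, x):
    """Distance between x and its projection onto the PCA subspace."""
    x_hat = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return float(np.linalg.norm(x - x_hat))

# A new sample drawn from person A's distribution.
new_image = rng.normal(0.0, 1.0, 100)
err_a = reconstruction_error(pca_a, new_image)
err_b = reconstruction_error(pca_b, new_image)
print("predicted:", "A" if err_a < err_b else "B")
```

The decision rule is just a comparison of two numbers, which is the point: once the representation separates the classes, almost any rule on top of it works.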

A Personal Note

Working through ISOMAP, eigenfaces, and this face recognition experiment made me realize how much I’ve become addicted to image-related problems. In my previous work experience, I didn’t really have the opportunity to explore image recognition or computer vision. Watching abstract math turn into visible structure—poses lining up, lighting patterns emerging, faces separating—was genuinely exciting. This experience opened a door to an area I hadn’t explored before, and it left me wanting to learn more.

