Route Sparse Autoencoder to Interpret Large Language Models

Abstract

Mechanistic interpretability of large language models (LLMs) aims to uncover the internal processes of information propagation and reasoning. Sparse autoencoders (SAEs) have shown promise in this domain by extracting interpretable and monosemantic features. However, prior work primarily focuses on feature extraction from a single layer, failing to effectively capture activations that span multiple layers. In this paper, we introduce Route Sparse Autoencoder (RouteSAE), a new framework that integrates a routing mechanism with a shared SAE to efficiently extract features from multiple layers. It dynamically assigns weights to activations from different layers, incurring minimal parameter overhead while achieving high interpretability and flexibility for targeted feature manipulation. We evaluate RouteSAE through extensive experiments on Llama-3.2-1B-Instruct. Specifically, under the same sparsity constraint of 64, RouteSAE extracts 22.5% more features than baseline SAEs while achieving a 22.3% higher interpretability score. These results underscore the potential of RouteSAE as a scalable and effective method for LLM interpretability, with applications in feature discovery and model intervention. Our code is available at https://github.com/swei2001/RouteSAEs.
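To make the core idea concrete, here is a minimal, self-contained sketch of the routing-plus-shared-SAE structure the abstract describes: a router assigns weights to per-layer activations, the weighted mixture is encoded by a single shared TopK sparse autoencoder, and a sparse feature code is decoded back into the residual space. All names, dimensions, and the TopK sparsity mechanism are illustrative assumptions, not the paper's exact architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class RouteSAESketch:
    """Toy sketch (assumed names/dims): a router mixes activations from
    several layers, and one shared TopK SAE encodes the routed result."""

    def __init__(self, n_layers, d_model, d_features, k):
        self.k = k  # sparsity constraint: at most k active features
        self.W_route = rng.standard_normal((d_model, n_layers)) * 0.02
        self.W_enc = rng.standard_normal((d_model, d_features)) * 0.02
        self.W_dec = rng.standard_normal((d_features, d_model)) * 0.02
        self.b_enc = np.zeros(d_features)

    def forward(self, layer_acts):
        # layer_acts: (n_layers, d_model) activations for one token.
        # Router scores each layer, then softmax weights mix the layers
        # into a single routed activation.
        scores = layer_acts.mean(axis=0) @ self.W_route   # (n_layers,)
        weights = softmax(scores)
        routed = weights @ layer_acts                     # (d_model,)
        # Shared TopK SAE: keep only the k largest pre-activations.
        pre = routed @ self.W_enc + self.b_enc            # (d_features,)
        topk_idx = np.argsort(pre)[-self.k:]
        z = np.zeros_like(pre)
        z[topk_idx] = np.maximum(pre[topk_idx], 0.0)      # sparse code
        recon = z @ self.W_dec                            # reconstruction
        return weights, z, recon

sae = RouteSAESketch(n_layers=4, d_model=16, d_features=64, k=8)
acts = rng.standard_normal((4, 16))
weights, z, recon = sae.forward(acts)
```

Because the SAE is shared across layers, the only extra parameters relative to a single-layer SAE are the router weights, which matches the abstract's claim of minimal parameter overhead.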

Publication
In EMNLP 2025

Citation:

@inproceedings{RouteSAE2025,
  title     = {Route Sparse Autoencoder to Interpret Large Language Models},
  author    = {Shi, Wei and Li, Sihang and Liang, Tao and Wan, Mingyang and Ma, Guojun and Wang, Xiang and He, Xiangnan},
  booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing},
  pages     = {6801--6815},
  year      = {2025}
}
Wei Shi
Sihang Li
Prof. Xiang Wang
Prof. Xiangnan He