PaulPauls
I spent a lot of time and money on this rather big side project of mine, which attempts to replicate the mechanistic interpretability research on proprietary LLMs that was quite popular this year and produced great research papers from Anthropic [1], OpenAI [2], and DeepMind [3].
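For anyone unfamiliar with the technique those papers share: you capture a model's internal activations, train a sparse autoencoder that reconstructs them through an overcomplete bottleneck under a sparsity penalty, and then inspect the learned features, which tend to be far more interpretable than raw neurons. Here is a minimal PyTorch sketch of that core idea (my own illustration, not this project's actual code; the shapes, expansion factor, and L1 coefficient are placeholders):

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        # Maps d_model activations into an overcomplete, non-negative
        # feature space and reconstructs the original activation from it.
        def __init__(self, d_model: int, d_hidden: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_hidden)
            self.decoder = nn.Linear(d_hidden, d_model)

        def forward(self, x):
            f = torch.relu(self.encoder(x))   # sparse feature activations
            x_hat = self.decoder(f)           # reconstructed activation
            return x_hat, f

    def sae_loss(x, x_hat, f, l1_coeff=1e-3):
        # Reconstruction error plus an L1 term that pushes features toward sparsity.
        recon = (x - x_hat).pow(2).sum(dim=-1).mean()
        sparsity = f.abs().sum(dim=-1).mean()
        return recon + l1_coeff * sparsity

    # Illustrative only: pretend `acts` holds residual-stream activations
    # captured from one Llama 3.2 layer.
    acts = torch.randn(1024, 2048)
    sae = SparseAutoencoder(d_model=2048, d_hidden=8 * 2048)
    x_hat, f = sae(acts)
    loss = sae_loss(acts, x_hat, f)
    loss.backward()

The papers differ in the details (OpenAI's variant [2], for instance, replaces the plain L1 penalty with a TopK activation), but this reconstruction-plus-sparsity setup is the common core.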
I am quite proud of this project, and since I consider myself part of the target audience of Hacker News, I thought some of you might appreciate this open research replication as well. Happy to answer any questions or take any feedback.
Cheers
[1] https://transformer-circuits.pub/2024/scaling-monosemanticit...
[2] arXiv:2406.04093, Scaling and evaluating sparse autoencoders
[3] arXiv:2408.05147, Gemma Scope: Open Sparse Autoencoders Everywhere All At Once on Gemma 2