
Context Rot: How increasing input tokens impacts LLM performance

  • Thread starter: kellyhongsn

I work on research at Chroma, and I just published our latest technical report on context rot.
TL;DR: Model performance is non-uniform across context lengths, even for state-of-the-art models such as GPT-4.1, Claude 4, Gemini 2.5, and Qwen3.
This highlights the need for context engineering. Whether relevant information is present in a model's context is not all that matters; how that information is presented matters even more.
Here is the complete open-source codebase to replicate our results: GitHub - chroma-core/context-rot.

Comments URL: Context Rot: How increasing input tokens impacts LLM performance | Hacker News

Points: 207

# Comments: 46
