AI labs frequently update their models after launch, which can result in "nerfs" such as increased censorship, excessive quantization, or behavioral degradation. The LMSYS Arena tests model performance through API endpoints, revealing trends that may not be visible in consumer chat interfaces due to added system prompts and safety filters.
mayerwin.github.io
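To make the distinction concrete, here is a minimal sketch of probing a model directly through an OpenAI-compatible API endpoint, without the system prompt a consumer chat interface would inject. The model name, prompts, and log file below are placeholders for illustration, not part of the original article or the LMSYS Arena methodology.

```python
# Minimal sketch: send fixed prompts straight to an OpenAI-compatible API
# endpoint (no injected system prompt) and log the raw completions, so
# behavior can be compared across runs over time. Model name, prompts,
# and the log path are hypothetical placeholders.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBE_PROMPTS = [
    "Summarize the plot of Hamlet in two sentences.",
    "Write a limerick about a cat who codes in Python.",
]


def probe(model: str) -> list[dict]:
    """Run the same fixed prompts against the raw endpoint and record the replies."""
    records = []
    for prompt in PROBE_PROMPTS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],  # no system prompt added
            temperature=0,  # keep output as stable as possible for comparison
        )
        records.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": model,
            "prompt": prompt,
            "completion": resp.choices[0].message.content,
        })
    return records


if __name__ == "__main__":
    # Append each run to a JSONL log; diffing runs over weeks can surface
    # post-launch behavioral changes ("nerfs") that a chat UI would mask.
    with open("probe_log.jsonl", "a") as f:
        for rec in probe("gpt-4o"):  # placeholder model name
            f.write(json.dumps(rec) + "\n")
```

Because the requests bypass the consumer interface, any drift in the logged completions reflects changes to the served model itself rather than to wrapper prompts or safety filters.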