Claude 2.1 - 200k tokens!?!

Anthropic has released Claude 2.1, an update that includes a number of new features and capabilities, but most impressively, it supports an unprecedented 200k-token context window.
Anthropic notes a decrease in hallucinations with 2.1 compared to 2.0, including fewer false claims when comprehending and summarizing provided content. In practice, this means the model is less likely to agree with a prompt asking whether a particular document supports a particular conclusion when the content itself does not. This is critical for document summarization and sentiment analysis.
Combined with dramatically improved factual recall across large context windows - obviously key to the above - Claude 2.1 should make it significantly easier than before to leverage its industry-leading context size effectively.
The full list of features and improvements from their announcement:
- Increased context window: 200k tokens, or about 150k words
- Developer Workbench for testing and tuning responses
- System Prompts for response structuring and guidance
- API Tool Use (limited access)
- Decrease in Hallucination rates (2x or more!)
- Increase in comprehension and response accuracy
- Decreased prices
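One of the listed features, System Prompts, changes how prompts to Claude 2.1 are assembled: system-level instructions are placed as plain text above the conversation turns rather than inside the first user message. Below is a minimal sketch of that structure; the helper name and example strings are illustrative, so check Anthropic's documentation for current usage.

```python
# Claude's text-completion format alternates "\n\nHuman:" and
# "\n\nAssistant:" turns; in 2.1, a system prompt is prepended as
# plain text before the first Human turn.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt with a system prompt on top.

    Hypothetical helper for illustration; an SDK would normally
    handle this formatting for you.
    """
    return f"{system}{HUMAN_PROMPT} {user}{AI_PROMPT}"

prompt = build_prompt(
    "Answer only using the provided document.",
    "Does the attached report support the stated conclusion?",
)
```

The resulting string is what gets sent as the `prompt` of a completion request; the model's reply begins after the trailing `Assistant:` marker.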
Try Claude 2.1 at https://claude.ai/chats or in their new Workbench at https://console.anthropic.com/workbench/
Claude 2.1 and more are available in the Moire Medley Chat at https://medley.moire.ai

(Image of a bunch of tokens provided by DALL-E 3)