Claude 2

by Anthropic

 
The model was just released, and our first tests suggested that it might be better than the other models we had tested before. That is why we wanted to try it out right away, even before API access is available from Germany.

Main use cases: Can be used similarly to GPT-4 for any form of language generation, for example creative content creation, text summarization, text editing, in-depth dialog, understanding complex contexts, or coding (a short API sketch follows this overview).

Input length: 100,000 tokens (approx. 300 pages of continuous text)

Languages: performs best in English, but also works in at least 43 other languages

Model size: ~130 billion parameters
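
Since Claude 2 can also be reached programmatically, here is a minimal sketch of how a call could look with Anthropic's Python SDK once API access is available. The model identifier "claude-2", the placeholder API key, and the example prompt are assumptions for illustration, not part of our tests.

```python
import anthropic

# Assumption: an API key with access to Claude 2 is available.
client = anthropic.Anthropic(api_key="YOUR_API_KEY")

# Single-turn completion: Claude expects the Human/Assistant prompt format,
# which the SDK exposes as the HUMAN_PROMPT and AI_PROMPT constants.
completion = client.completions.create(
    model="claude-2",              # assumed model identifier
    max_tokens_to_sample=300,      # upper bound on the generated answer
    prompt=f"{anthropic.HUMAN_PROMPT} Summarize the following text in three sentences: ...{anthropic.AI_PROMPT}",
)

print(completion.completion)  # the generated text
```

Thanks to the 100,000-token context window, even very long documents can be passed in a single prompt like this, without splitting them up first.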
