added

New Extremely Large Model!

Our new and improved xlarge model delivers better generation quality and 4x faster prediction speed. It now supports a maximum length of 2048 tokens, as well as frequency and presence penalties.
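As a minimal sketch of how the new features might be used together, the snippet below assembles a generation request that stays within the 2048-token limit and sets both penalties. The field names (`max_tokens`, `frequency_penalty`, `presence_penalty`) and the payload shape are illustrative assumptions, not a confirmed API schema.

```python
# Sketch of a generation request for the xlarge model.
# Field names below are assumptions for illustration only.

def build_generate_payload(prompt,
                           max_tokens=256,
                           frequency_penalty=0.2,
                           presence_penalty=0.1):
    """Assemble a hypothetical generation request using the new xlarge features."""
    if max_tokens > 2048:
        # The new model caps generation at 2048 tokens.
        raise ValueError("xlarge supports a maximum length of 2048 tokens")
    return {
        "model": "xlarge",
        "prompt": prompt,
        "max_tokens": max_tokens,
        # Penalizes tokens in proportion to how often they have already
        # appeared, discouraging verbatim repetition.
        "frequency_penalty": frequency_penalty,
        # Flat penalty on any token that has appeared at least once,
        # nudging the model toward new material.
        "presence_penalty": presence_penalty,
    }

payload = build_generate_payload("Write a product description for a smart mug.")
```

The two penalties address different failure modes: the frequency penalty scales with repetition count, while the presence penalty applies equally to anything already mentioned.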