
Our enterprise-grade multimodal assistant, carefully designed with privacy, security, and efficiency in mind. Yasa comes with a myriad of exciting features, including long-context document processing, fast natively optimized retrieval-augmented generation, multilingual support (20 languages), a search engine interface, and a code interpreter.



import reka

# Authenticate with your Reka API key
reka.API_KEY = "your-api-key"

# Ask Yasa a question and print the model's reply
response = reka.chat("What is the capital of the UK?")
print(response["text"])  # The capital of the UK is London.

Powered by a single unified model, Yasa-1 has rich understanding of the multimodal world we live in, giving it extended capabilities beyond text-only assistants. Use it to generate ideas for creative tasks, get answers to basic questions, or derive insights from your multimodal data. Try it here or see the documentation here.
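Because Yasa-1 can reason over images and other media as well as text, a single call can ground a question in your own content. The snippet below is a minimal sketch in the same style as the example above; the media_url parameter and the example image URL are illustrative assumptions, not confirmed SDK arguments.

import reka

reka.API_KEY = "your-api-key"

# Hypothetical multimodal call: media_url is an assumed parameter name,
# shown only to illustrate pairing an image with a text prompt.
response = reka.chat(
    "What is shown in this picture?",
    media_url="https://example.com/photo.jpg",
)
print(response["text"])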

Frequently Asked Questions

Please register your interest by contacting us via this form.

Yasa supports 20 languages. Please reach out for more details.

We design proprietary algorithms involving retrieval, fine-tuning, self-supervised instruction tuning, and reinforcement learning to adapt our model to your datasets.

Yes.

We provide a suite of model sizes to meet your deployment needs.

License one of our models and download the weights via a few simple commands. Use our inference code to run the model within your own infrastructure or any private cloud; our training code supports on-premise adaptation to your datasets.

Yes, users can interact with our models via API calls. We take privacy very seriously and apply best practices to keep your data secure.
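As a rough illustration of building the API into an application, the sketch below wraps the reka.chat call shown earlier with basic error handling and retries; the ask_yasa helper and its retry policy are illustrative assumptions, not part of the SDK.

import time
import reka

reka.API_KEY = "your-api-key"

def ask_yasa(question: str, retries: int = 3) -> str:
    # Illustrative helper (not part of the SDK): retry transient failures
    # with a short exponential backoff before giving up.
    for attempt in range(retries):
        try:
            return reka.chat(question)["text"]
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)

print(ask_yasa("What is the capital of the UK?"))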

Please contact us for pricing via this form.