Introducing Yasa

Yasa is our enterprise-grade multimodal assistant, carefully designed with privacy, security, and efficiency in mind. We train Yasa to read text, images, videos, and tabular data, with more modalities to come. Use it to generate ideas for creative tasks, get answers to basic questions, or derive insights from your internal data. Try it here.

import reka

reka.API_KEY = "your-api-key"

response = reka.chat("What is the capital of the UK?")

print(response["text"]) # The capital of the UK is London.

Generate, train, compress, or deploy on-premise with a few simple commands. Use our proprietary algorithms to personalize our model to your data and use cases. See the documentation here.

Frequently Asked Questions

How can I get access to Yasa?

Please register your interest by contacting us via this form.

Which languages does Yasa support?

Yasa supports 20 languages. Please reach out for more details.

How do you adapt Yasa to my data and use cases?

We design proprietary algorithms involving retrieval, fine-tuning, self-supervised instruction tuning, and reinforcement learning to tune our model on your datasets.


What model sizes are available?

We provide a suite of model sizes to meet your deployment needs.

Can I deploy Yasa on-premise?

License one of our models and download the weights via a few simple commands. Use our inference code to run the model within your infrastructure or any private cloud. Our training code supports on-premise adaptation to your datasets.

Can I use Yasa via an API?

Yes, users can interact with our models via API calls. We take privacy very seriously and apply best practices to keep your data secure.

How much does it cost?

Please contact us for pricing via this form.