Multimodal Assistant

We train our language model RekaLM to read text, images, and tabular data, with more modalities to come. We use it to power Yasa—our multimodal assistant—which you can use to generate ideas for creative tasks, get answers to basic questions, or derive insights from text and images.

import reka

reka.api.API_KEY = "APIKEY"

completion = reka.completion("What is the capital of the UK?")

print(completion)  # "The capital of the UK is London"

Generate, distill, or deploy on-premise with a few simple commands. Use our proprietary algorithms to personalize our model to your data and use cases.

Frequently Asked Questions

How do I get access?

Please register your interest by contacting us via this form.

Which languages do you support?

While our model can generate text in many languages, our focus has been on English, with support for more languages coming soon.

How do you personalize the model to my data?

We design proprietary algorithms and self-supervised instruction tuning objectives to tune our model on your datasets.
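To make the idea of an instruction tuning objective concrete, here is a minimal, generic sketch of the standard masked language-modeling loss used in instruction tuning, where the loss is computed only over response tokens. All names and numbers are illustrative assumptions, not Reka's proprietary objective.

```python
def masked_nll(token_logprobs, loss_mask):
    # Negative log-likelihood averaged over positions where loss_mask is 1.
    # In instruction tuning, the mask typically covers only the response
    # tokens, so the model is not penalized on the instruction text itself.
    selected = [-lp for lp, m in zip(token_logprobs, loss_mask) if m]
    return sum(selected) / len(selected)

# Hypothetical per-token log-probs for "<instruction> <response>":
logprobs = [-0.9, -1.2, -0.3, -0.1]  # last two tokens are the response
mask = [0, 0, 1, 1]                  # train only on the response tokens
print(masked_nll(logprobs, mask))    # ~0.2
```

The mask is what distinguishes instruction tuning from plain language modeling: the instruction is conditioning context, not a prediction target.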


Do you offer smaller, cheaper models?

Our largest model is trained to perform many tasks in a zero-shot setting. For users who need only a narrower set of tasks, we provide API functions to compress the model via distillation. The lightweight model is often as performant as the largest model we offer (or better than a non-personalized version of it), easier to work with and update, and cheaper to deploy on commodity hardware.
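For readers unfamiliar with distillation, here is a minimal, generic sketch of its core objective: a KL divergence between temperature-softened teacher and student output distributions. This is the textbook formulation, not Reka's proprietary compression pipeline; all values are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution,
    # exposing more of the teacher's "dark knowledge" about non-top classes.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over softened distributions: the student is
    # trained to reproduce the teacher's full output distribution.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # 0.0: student matches
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # larger: student disagrees
```

Because the student only has to cover the tasks the user cares about, it can be far smaller than the teacher while matching it on that narrower distribution.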

Can I deploy the model on-premise?

License one of our models and download the weights via a few simple API commands. Use our inference code to run the model within your infrastructure or any private cloud. The inference code also supports on-premise finetuning if you have GPUs.

Can I use the model via an API, and is my data secure?

Yes, users can interact with our models via API calls. We take privacy very seriously and apply best practices to keep your data secure.

What does it cost?

Please contact us for pricing via this form.