We work on advancing science and AI research in the following areas.

Artificial General Intelligence
General-purpose multimodal agents with universal inputs and outputs.

Self-improving Agents
Proactive knowledge agents that continuously self-improve and stay up to date without supervision.

Internationalization
AI for everyone, regardless of background, culture, or social norms.

Efficiency
AI that can be deployed effectively and cost-efficiently.

Past Research

Before Reka, our team was involved in some of the greatest breakthroughs in AI.

AlphaStar (Nature, DeepMind Blog)
Co-authored by Dani Yogatama

AlphaCode (Science, DeepMind Blog)
Co-authored by Cyprien de Masson d’Autume

Bard (Sundar Pichai’s Blogpost)
Yi Tay as core contributor

DeepSpeech-2 (Paper)
Co-authored by Dani Yogatama

Gopher: Scaling Language Models: Methods, Analysis and Insights from Training Gopher (DeepMind Blog)
Co-authored by Cyprien de Masson d’Autume

Gmail Smart Reply (Paper)
Led by Matt Henderson

Flan-2: Scaling Instruction-Finetuned Models (Google AI Blog)
Yi Tay as core contributor

LASER: Language-Agnostic SEntence Representations (Meta AI Blog, Paper)
Led by Mikel Artetxe

MoE-LM: Efficient Large Language Modeling with Mixture of Experts (Paper)
Co-led by Mikel Artetxe

OPT: Open Pretrained Transformer Language Models (Paper)
Co-authored by Mikel Artetxe

PaLM: Scaling Language Modeling with Pathways (Google AI Blog)
Co-authored by Yi Tay

PaLM-2: Technical Report (Announcement at Google I/O)
Co-led by Yi Tay (modeling & architecture)

PaLI-X: On Scaling up a Multilingual Vision and Language Model (Paper)
Co-authored by Yi Tay

UL2: Unifying Language Learning Paradigms (Google AI blog, Flan-UL2 release)
Co-led by Yi Tay

Unsupervised Machine Translation (Paper 1, Paper 2, Paper 3)
Led by Mikel Artetxe

Task-Oriented Dialogue (Paper 1, Paper 2)
Led by Qi Liu

ViT-22B: Scaling Vision Transformers to 22B Parameters (Google AI Blog)
Co-authored by Yi Tay

XGLM: Few-Shot Learning with Multilingual Language Models (Paper)
Co-authored by Mikel Artetxe