We are excited to release the first version of our multimodal assistant Yasa-1, a language assistant with visual and auditory sensors that can take actions via code execution.
We are also sharing that our funding round was led by DST Global Partners and Radical Ventures, with participation from strategic partners including Snowflake Ventures.