
Kakao unveils multimodal large language model Honeybee


South Korean tech giant Kakao Corp. said Friday it has developed a multimodal large language model (MLLM) named Honeybee in a bid to expand its presence in the artificial intelligence market.

During an AI strategy meeting hosted by the Ministry of Science and ICT, Kakao’s CEO nominee Chung Shin-a revealed that her company has completed the development of Honeybee.

This upgraded large language model goes beyond conventional text understanding by incorporating image comprehension capabilities.

As an MLLM, Honeybee can process images and text simultaneously, allowing it to answer questions about content that combines the two, according to Kakao.
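For context, the sketch below illustrates what such a mixed image-and-text query to a multimodal model typically looks like. It is a minimal, hypothetical example: the loader helpers, checkpoint path, and wrapper class are placeholders, not Honeybee's actual interface, which is documented in Kakao's GitHub repository.

```python
# Hypothetical sketch of a mixed image-and-text query to an MLLM.
# The model/processor objects and checkpoint path are placeholders,
# not Honeybee's actual API.
from PIL import Image


class MultimodalAssistant:
    """Minimal wrapper around a hypothetical image+text model."""

    def __init__(self, model, processor):
        self.model = model          # vision-language model (placeholder)
        self.processor = processor  # joint image/text preprocessor (placeholder)

    def ask(self, image_path: str, question: str) -> str:
        image = Image.open(image_path).convert("RGB")
        # Encode the image and the question into a single multimodal input.
        inputs = self.processor(images=image, text=question)
        # Generate a free-form text answer conditioned on both modalities.
        output_ids = self.model.generate(**inputs, max_new_tokens=128)
        return self.processor.decode(output_ids)


# Example usage with hypothetical loader functions:
# model, processor = load_model("path/to/checkpoint"), load_processor("path/to/checkpoint")
# assistant = MultimodalAssistant(model, processor)
# print(assistant.ask("receipt.jpg", "What is the total amount on this receipt?"))
```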

Kakao said it has released Honeybee and its inference code on GitHub, the online software development platform and open-source community, to help advance MLLM research globally.

Source: Yonhap News Agency