META to Unveil State-of-the-art Multi-modal LLM Chameleon

Meta Platforms (META) is working on Chameleon, an innovative multi-modal large language model designed to handle tasks across domains with an unprecedented level of integration. According to Meta’s paper, Chameleon can perform tasks that until now required separate models: it demonstrates state-of-the-art performance in both image captioning and visual question answering (VQA) and remains competitive on text-only tasks. With competition in generative AI heating up, Chameleon represents a major leap forward and a preview of what Meta will pit against the best that any frontier lab has to offer. Although it has not yet been released, Chameleon underscores Meta’s commitment to driving AI technology forward.

Innovative Early-Fusion Approach in Multimodal Models

Traditionally, multimodal foundation models adopt a “late fusion” approach, in which separate models process different modalities and their outputs are combined for inference. Although this approach is effective, it limits how well the model can integrate information across modalities and generate interleaved sequences of images and text.

Chameleon, by contrast, follows an “early-fusion token-based mixed-modal” architecture designed to learn from a mix of images, text, code, and other modalities from scratch. Chameleon converts images into discrete tokens, much as language models convert words into tokens, and uses a unified vocabulary of text, code, and image tokens. This allows the same transformer architecture to be applied to sequences that contain both image and text tokens.
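
To make the early-fusion idea concrete, here is a minimal PyTorch sketch of how text tokens and quantized image codes might be mapped into one shared vocabulary and fed to a single transformer. All names, vocabulary sizes, and the dummy quantizer are illustrative assumptions based on the description above, not Meta’s actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary sizes; real systems choose these during training.
TEXT_VOCAB_SIZE = 32_000
IMAGE_VOCAB_SIZE = 8_192
UNIFIED_VOCAB_SIZE = TEXT_VOCAB_SIZE + IMAGE_VOCAB_SIZE


class ImageQuantizer(nn.Module):
    """Stand-in for a learned image tokenizer (e.g. a VQ-style encoder)
    that maps an image to a short sequence of discrete code indices."""

    def __init__(self, num_codes: int = IMAGE_VOCAB_SIZE, tokens_per_image: int = 64):
        super().__init__()
        self.num_codes = num_codes
        self.tokens_per_image = tokens_per_image

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Placeholder: a real tokenizer would encode pixels into learned
        # codebook indices; here we just emit dummy codes of the right shape.
        return torch.randint(0, self.num_codes, (images.shape[0], self.tokens_per_image))


def to_unified_ids(text_ids: torch.Tensor, image_codes: torch.Tensor) -> torch.Tensor:
    """Shift image codes past the text vocabulary so both modalities live in
    one shared id space, then concatenate into a single mixed-modal sequence."""
    return torch.cat([text_ids, image_codes + TEXT_VOCAB_SIZE], dim=1)


# One transformer backbone consumes the unified sequence end to end.
embed = nn.Embedding(UNIFIED_VOCAB_SIZE, 512)
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)

text_ids = torch.randint(0, TEXT_VOCAB_SIZE, (1, 16))       # tokenized caption text
image_codes = ImageQuantizer()(torch.rand(1, 3, 256, 256))   # "tokenized" image
mixed = to_unified_ids(text_ids, image_codes)

out = backbone(embed(mixed))
print(out.shape)  # torch.Size([1, 80, 512]) -- the same weights handle both modalities
```

The key point of the sketch is that once images are expressed as discrete tokens in the same id space as text, no modality-specific branch is needed downstream.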

Researchers say the model most similar to Chameleon is Google Gemini, which also uses an early-fusion token-based strategy. However, unlike Gemini, which uses separate image decoders during generation, Chameleon processes and generates tokens end-to-end. “Chameleon’s unified token space allows it to seamlessly reason over and generate interleaved image and text sequences without modality-specific components,” according to the researchers.

Training and Performance of Chameleon

Chameleon is trained in two stages on a dataset of 4.4 trillion tokens comprising text, image-text pairs, and interleaved sequences of images and text. The authors trained 7-billion- and 34-billion-parameter variants for more than 5 million hours of Nvidia A100 80GB GPU time.

The experiments indicate that Chameleon can perform a variety of text-only and multimodal tasks. On visual question answering and image captioning benchmarks, Chameleon-34B achieved state-of-the-art results, outperforming models such as Flamingo, IDEFICS, and LLaVA-1.5.

Multimodal models often see degraded performance on single-modality tasks, like text-only prompts. However, Chameleon remains competitive, matching the performance of models like Mixtral 8x7B and Gemini-Pro in commonsense reasoning and reading comprehension.

Additionally, Chameleon performs particularly well in mixed-modal reasoning and generation, especially on prompts that demand interleaved text and images. Human evaluations indicate a strong preference for the multimodal documents generated by Chameleon.

Conclusion

These early fusion techniques could lead to much more advanced models, particularly with the addition of more modalities. Robotics startups are already testing the use of language models in robot control systems. An exciting future research direction would be exploring how early fusion can enhance robotics foundation models.

FAQs

1. What is a multi-modal large language model?

A multi-modal large language model is an AI system that processes and integrates information from various modalities such as text, images, and code. Unlike traditional models that handle only one data type, these models can understand and generate mixed-modal outputs.

2. How does early fusion differ from late fusion in multi-modal models?

Early fusion integrates different modalities at the beginning of the learning process, allowing the model to process them together. Late fusion, on the other hand, processes each modality separately and combines the outputs at the end. Early fusion can improve integration and performance on tasks requiring mixed-modal reasoning.
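
For illustration, the sketch below contrasts the two approaches with small, hypothetical PyTorch modules: the late-fusion model encodes each modality separately and merges features only at the output head, while the early-fusion model embeds one interleaved token sequence and processes it with a single backbone. The module names and dimensions are assumptions for the example and do not correspond to any specific production model.

```python
import torch
import torch.nn as nn


class LateFusion(nn.Module):
    """Each modality gets its own encoder; features are combined only at the end."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.text_encoder = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)
        self.image_encoder = nn.Linear(dim, dim)   # stand-in vision encoder
        self.head = nn.Linear(2 * dim, dim)        # fusion happens here, at the output

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        _, text_h = self.text_encoder(text_feats)
        image_h = self.image_encoder(image_feats).mean(dim=1)
        return self.head(torch.cat([text_h[-1], image_h], dim=-1))


class EarlyFusion(nn.Module):
    """Both modalities are tokenized into one sequence and processed jointly."""

    def __init__(self, vocab: int = 40_000, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, mixed_token_ids: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.embed(mixed_token_ids))


# Late fusion: separate feature streams, merged after encoding.
late = LateFusion()
print(late(torch.rand(1, 16, 256), torch.rand(1, 64, 256)).shape)   # torch.Size([1, 256])

# Early fusion: one interleaved mixed-modal token sequence from the start.
early = EarlyFusion()
print(early(torch.randint(0, 40_000, (1, 80))).shape)               # torch.Size([1, 80, 256])
```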

3. What is the significance of Chameleon’s unified token space?

The unified token space in Chameleon allows it to seamlessly process and generate interleaved sequences of text and images without requiring separate modality-specific components. This design enhances the model’s ability to handle mixed-modal tasks effectively.

4. Why is the integration of language models into robotics control systems important?

Integrating language models into robotics control systems can enable more natural and intuitive interactions between humans and robots. This can lead to advancements in autonomous robotics, making robots more capable of understanding and responding to complex instructions and performing diverse tasks.

