The Latest on Hallucinations in MLLMs
Multimodal Large Language Models (MLLMs) such as GPT-4V and LLaVA have shown impressive abilities: they can understand images and generate text grounded in them.
However, they are not perfect. One major issue is hallucination: the model generates text that contradicts, or is unsupported by, the image content. Let's explore this problem and learn how to use MLLMs safely and effectively.