Meta Segment Anything Model 3 (SAM 3) and Segment Anything Playground
Meta has launched SAM 3, an AI model that unifies detection, segmentation, and tracking of objects in images and videos using text, exemplar, and visual prompts. Unlike earlier models restricted to a fixed set of labels, SAM 3 accepts open-vocabulary prompts such as short noun phrases, making it far more flexible. Alongside the model, Meta is introducing the Segment Anything Playground, a user-friendly platform for experimenting with SAM 3's capabilities without any technical expertise.
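The promptable interface is easiest to see in code. The sketch below is illustrative only: the `sam3` package, `build_sam3`, `Sam3Predictor`, and every argument are hypothetical stand-ins for whatever the released API actually exposes, and the checkpoint and image paths are placeholders.

```python
# Illustrative sketch only: the "sam3" module, build_sam3, Sam3Predictor,
# and all arguments below are hypothetical names, not the released API.
from PIL import Image

from sam3 import build_sam3, Sam3Predictor  # hypothetical package/classes

model = build_sam3(checkpoint="sam3_base.pt")  # placeholder checkpoint path
predictor = Sam3Predictor(model)

image = Image.open("street_scene.jpg")  # placeholder image

# Open-vocabulary text prompt: ask for every instance matching a short
# noun phrase, not just a single object.
masks = predictor.predict(image, text_prompt="yellow school bus")

# Exemplar prompt: a box around one instance asks the model to find
# all visually similar objects in the same image.
masks = predictor.predict(image, exemplar_boxes=[(40, 60, 320, 240)])
```

The key conceptual difference from earlier Segment Anything models is that a single prompt can yield masks for all matching instances, rather than segmenting one object per click or box.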
The release includes model checkpoints, evaluation datasets (SA-Co), fine-tuning code, and a new 3D reconstruction suite (SAM 3D), which powers features like Facebook Marketplace’s “View in Room” for home decor visualization. SAM 3 is also integrated into Meta apps like Instagram’s Edits and Vibes, enabling creators to apply dynamic effects easily.
Meta developed a hybrid AI-and-human annotation pipeline to build a large, diverse training dataset, accelerating labeling while maintaining quality. The model architecture pairs the Meta Perception Encoder with a DETR-based detector, achieving state-of-the-art segmentation accuracy at fast inference speeds.
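To make that architectural description concrete, here is a toy PyTorch sketch, not Meta's implementation, of how a prompt-conditioned, DETR-style detector can be wired: an image tower and a text tower (standing in for the Perception Encoder) feed a transformer decoder whose learned object queries each predict a box and a prompt-match score. All dimensions and module choices are invented for illustration.

```python
# Conceptual sketch (not Meta's code): a prompt-conditioned, DETR-style
# detector. All module names and sizes here are invented.
import torch
import torch.nn as nn

class ToyPromptableDetector(nn.Module):
    def __init__(self, dim=256, num_queries=100):
        super().__init__()
        # Stand-ins for the Perception Encoder's image and text towers.
        self.image_proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.text_proj = nn.Linear(512, dim)
        # DETR-style decoder: learned object queries cross-attend to
        # image tokens plus the prompt embedding.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(dim, 4)    # (cx, cy, w, h) per query
        self.score_head = nn.Linear(dim, 1)  # does this query match the prompt?

    def forward(self, image, text_embedding):
        tokens = self.image_proj(image).flatten(2).transpose(1, 2)  # B x N x dim
        prompt = self.text_proj(text_embedding).unsqueeze(1)        # B x 1 x dim
        memory = torch.cat([tokens, prompt], dim=1)
        q = self.queries.unsqueeze(0).expand(image.size(0), -1, -1)
        decoded = self.decoder(q, memory)
        return self.box_head(decoded).sigmoid(), self.score_head(decoded)

model = ToyPromptableDetector()
boxes, scores = model(torch.randn(1, 3, 224, 224), torch.randn(1, 512))
print(boxes.shape, scores.shape)  # (1, 100, 4) and (1, 100, 1)
```

The design point this illustrates is that the detector does not classify against a fixed label list; each query is scored against the prompt embedding, which is what makes open-vocabulary prompting possible.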
SAM 3 is already supporting scientific work in wildlife monitoring and ocean exploration through open datasets such as SA-FARI and FathomNet. The model still struggles with zero-shot generalization to fine-grained, domain-specific concepts and with long, compositional language prompts, and work to extend these capabilities is ongoing.
Meta encourages the AI community to build on SAM 3 and use the publicly available resources to foster innovation in visual AI. The Segment Anything Playground offers tools for practical editing, creative experimentation, and research, showcasing SAM 3’s broad potential across industries.

