Spatial reasoning is fundamental to interacting with and navigating physical environments for embodied AI applications like robotics. However, data samples suitable for learning these capabilities are rare in AI pretraining datasets. Don't be limited by what your model can do out-of-the-box: curate any image dataset from the Hugging Face Hub for spatial VQA with tools for scene understanding.
VLMs trained using VQASynth 🎹 can:
- estimate 3D distances between objects in an image
- describe distances colloquially and convert between common units of measurement
- answer queries about the orientation and spatial relationships between objects
- base responses on consistent references like floors and surfaces
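For example, a tuned model might handle an exchange like this (a hypothetical illustration, not actual pipeline output):

> **Q:** How far is the forklift from the pallet of boxes near the left wall?
> **A:** Measuring along the floor plane, the forklift is about 2.1 meters (roughly 7 feet) from the pallet.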
Fusing semantic and metric data into templated VQA chat yields samples for instruction-tuning Vision Language Models with low-rank adapters, enhancing their baseline spatial reasoning capabilities. VQASynth 🎹 provides an open-source reproduction of SpatialVLM, which describes a 3D scene reconstruction pipeline and prompt templates for enhancing the spatial reasoning abilities of VLMs, including:
- Semantic filtering with CLIP to normalize the image distribution and attributes
- Metric depth estimation with ZoeDepth to lift the 2D image to a 3D point cloud
- Object-level captioning with FlexCap for precise 2D region proposal
- Plane-fitting with RANSAC for consistent 3D reference coordinates (sketched below)
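As a minimal illustration of the plane-fitting step, here's a sketch using Open3D's built-in RANSAC plane segmentation. The point cloud file name is a placeholder, and this is not the pipeline's exact implementation:

```python
import open3d as o3d

# Load a point cloud lifted from a metric depth estimate (hypothetical file name)
pcd = o3d.io.read_point_cloud("warehouse_points.ply")

# Fit the dominant plane (e.g., the floor) with RANSAC
plane_model, inliers = pcd.segment_plane(
    distance_threshold=0.01,  # max point-to-plane distance for inliers (meters)
    ransac_n=3,               # points sampled per candidate plane
    num_iterations=1000,
)
a, b, c, d = plane_model  # plane equation: ax + by + cz + d = 0
print(f"Floor plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

# Split the cloud into the reference surface and everything above it
floor = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)
```

Anchoring distances to a fitted floor plane is what lets the generated QA pairs reference consistent coordinates across objects in the scene.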
Initial VQASynth 🎹 pipelines prompted LLaVA for JSON-formatted, detailed object-level captions, or extracted tags using RAM. Accordingly, we evaluated caption/tag-based region proposal with publicly available models like CLIPSeg and GroundingDINO.
🪶 Faster & lighter using Florence-2 for detailed image captions and region proposal grounded on text captions (see the sketch below).
📏 Improves metric depth estimation speed & accuracy by replacing ZoeDepth with DepthPro.
🔍 SAM2 replaces SAM in the localization refinement stage.
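For the Florence-2 stage, usage follows the pattern from its model card; below is a minimal sketch of caption-grounded region proposal. The image file and grounding phrase are placeholders, and this isn't necessarily how VQASynth wires the model internally:

```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("warehouse_rgb.jpg")

# Ground a caption phrase to bounding boxes in the image
task = "<CAPTION_TO_PHRASE_GROUNDING>"
inputs = processor(text=task + "a pallet of boxes", images=image, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
    )
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
result = processor.post_process_generation(
    raw, task=task, image_size=(image.width, image.height)
)
print(result)  # {'<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [...], 'labels': [...]}}
```

The returned boxes can then seed the segmentation and localization refinement stages downstream.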
Before running the demo scripts, ensure you have the following installed:
- Python 3.10 or later
- Docker, Docker Compose V2
- NVIDIA Container Toolkit
CLIPSeg-based SpatialVLM data processing (recommended):
```bash
cd tests/data_processing/
docker build -f clipseg_data_processing.dockerfile -t vqasynth:clipseg-dataproc-test .
docker run --gpus all -v /path/to/output/:/path/to/output vqasynth:clipseg-dataproc-test --input_image="warehouse_rgb.jpg" --output_dir "/path/to/output"
```
GroundingDINO-based SpatialVLM data processing:
```bash
cd tests/data_processing/
docker build -f groundingDino_data_processing.dockerfile -t vqasynth:dino-dataproc-test .
docker run --gpus all -v /path/to/output/:/path/to/output vqasynth:dino-dataproc-test --input_image="warehouse_rgb.jpg" --output_dir "/path/to/output"
```
The scripts will produce 3D point clouds, segmented images, labels, and prompt examples for a test image.
The main pipeline uses Docker Compose to process a Hugging Face dataset into a VQA dataset that includes spatial relations between objects. The resulting dataset follows conventions for training models like LLaVA. We recommend an A10 GPU or larger for processing.
Make sure to update the config.yaml file with the following details: an output directory path, the repository ID of the dataset to process, and a dataset name for storing the results on the hub. You can also optionally set `include_tags` and/or `exclude_tags` as comma-separated lists in the config file to filter the dataset by tags. If no tags are provided, no filtering is applied.
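For reference, a config.yaml might look like the sketch below. Apart from `include_tags` and `exclude_tags`, the field names here are illustrative assumptions; check the sample config in the repo for the exact schema:

```yaml
# Hypothetical values; field names other than include_tags/exclude_tags are assumptions
output_dir: /path/to/output              # where intermediate and final artifacts are written
source_repo_id: remyxai/example-images   # hypothetical Hugging Face dataset to process
target_repo_name: vqasynth-spatial-vqa   # hypothetical name for the processed dataset on the hub
include_tags: warehouse,indoor           # optional: keep only samples with these tags
exclude_tags: blurry                     # optional: drop samples with these tags
```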
Then launch the pipeline with:
```bash
# Authenticate to push to hub
huggingface-cli login

# Run the pipeline
cd /path/to/VQASynth
bash run.sh
```
In your designated output directory, you'll find a JSON file, `processed_dataset.json`, containing the formatted dataset.
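Each record follows the LLaVA conversation convention; a hypothetical entry might look like this (values are illustrative, not actual pipeline output):

```json
{
  "id": "warehouse_0001",
  "image": "warehouse_0001.jpg",
  "conversations": [
    {
      "from": "human",
      "value": "<image>\nHow far is the pallet jack from the loading dock door?"
    },
    {
      "from": "gpt",
      "value": "The pallet jack is roughly 3.2 meters from the loading dock door."
    }
  ]
}
```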
Here's a sample of warehouse images captioned with spatial relationships like the examples above:
```bash
wget https://remyx.ai/assets/vqasynth/vqasynth_warehouse_spaces.zip

# Data is formatted for LLaVA fine-tuning
unzip vqasynth_warehouse_spaces.zip
```
Once the dataset is ready, you can follow this resource on fine-tuning LLaVA.
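As a rough starting point, a minimal LoRA setup with the PEFT library might look like the following sketch; the base checkpoint and hyperparameters are illustrative assumptions, not the settings used for the released models:

```python
from peft import LoraConfig, get_peft_model
from transformers import LlavaForConditionalGeneration

# Hypothetical base model; any LLaVA-style checkpoint could be substituted
model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Illustrative LoRA hyperparameters for the attention projections
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor applied to the updates
    target_modules=["q_proj", "v_proj"],   # language-model attention projections to adapt
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Tuning only low-rank adapters keeps the memory footprint small enough to fine-tune on the same class of GPU recommended for data processing.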
Try SpaceMantis in the HF Space or SpaceLLaVA in Discord.
We've hosted some notebooks for visualizing and experimenting with the techniques included in this repo.
| Notebook | Description | Launch |
|---|---|---|
| Spatial Reasoning with Point Clouds | Visualize point clouds and evaluate spatial relationships | |
This project was inspired by and utilizes concepts discussed in the following research paper:
```bibtex
@article{chen2024spatialvlm,
  title   = {SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities},
  author  = {Chen, Boyuan and Xu, Zhuo and Kirmani, Sean and Ichter, Brian and Driess, Danny and Florence, Pete and Sadigh, Dorsa and Guibas, Leonidas and Xia, Fei},
  journal = {arXiv preprint arXiv:2401.12168},
  year    = {2024},
  url     = {https://arxiv.org/abs/2401.12168},
}
```