diff --git a/image_assets/video_demo/CT-Abdomen_short.mp4 b/image_assets/video_demo/CT-Abdomen_short.mp4
new file mode 100644
index 0000000..a758185
Binary files /dev/null and b/image_assets/video_demo/CT-Abdomen_short.mp4 differ
diff --git a/image_assets/video_demo/CT-COVID.mp4 b/image_assets/video_demo/CT-COVID.mp4
new file mode 100644
index 0000000..90a6105
Binary files /dev/null and b/image_assets/video_demo/CT-COVID.mp4 differ
diff --git a/image_assets/video_demo/CT-nodule.mp4 b/image_assets/video_demo/CT-nodule.mp4
new file mode 100644
index 0000000..a27e84f
Binary files /dev/null and b/image_assets/video_demo/CT-nodule.mp4 differ
diff --git a/image_assets/video_demo/Cat.mp4 b/image_assets/video_demo/Cat.mp4
new file mode 100644
index 0000000..a68a9ea
Binary files /dev/null and b/image_assets/video_demo/Cat.mp4 differ
diff --git a/image_assets/video_demo/Demo-example3.mp4 b/image_assets/video_demo/Demo-example3.mp4
deleted file mode 100755
index 01ab239..0000000
Binary files a/image_assets/video_demo/Demo-example3.mp4 and /dev/null differ
diff --git a/image_assets/video_demo/Demo-upload3.mp4 b/image_assets/video_demo/Demo-upload3.mp4
deleted file mode 100755
index 61016ef..0000000
Binary files a/image_assets/video_demo/Demo-upload3.mp4 and /dev/null differ
diff --git a/image_assets/video_demo/MRI-Brain-T1Gd.mp4 b/image_assets/video_demo/MRI-Brain-T1Gd.mp4
new file mode 100644
index 0000000..de6dc72
Binary files /dev/null and b/image_assets/video_demo/MRI-Brain-T1Gd.mp4 differ
diff --git a/image_assets/video_demo/Pathology_all_cells.mp4 b/image_assets/video_demo/Pathology_all_cells.mp4
new file mode 100644
index 0000000..2ee9936
Binary files /dev/null and b/image_assets/video_demo/Pathology_all_cells.mp4 differ
diff --git a/image_assets/video_demo/Pathology_prompts.mp4 b/image_assets/video_demo/Pathology_prompts.mp4
new file mode 100644
index 0000000..61d3b2d
Binary files /dev/null and b/image_assets/video_demo/Pathology_prompts.mp4 differ
diff --git a/index.html b/index.html
index 31b17f8..0f1a275 100644
--- a/index.html
+++ b/index.html
@@ -233,43 +233,61 @@

-
+

Everything

On image segmentation, we showed that BiomedParse is broadly applicable, outperforming state-of-the-art methods on 102,855 test image-mask-label triples across 9 imaging modalities.

-
+
+
+ +
-
+
-
+

Everywhere

BiomedParse is also able to identify invalid user inputs describing objects that do not exist in the image. On object detection, which aims to locate a specific object of interest, BiomedParse again attained state-of-the-art performance, especially on objects with irregular shapes.

-
+
+
+ +
-
+

All at Once

-

On object recognition, which aims to identify all objects in a given image along with their semantic types, we showed that \ourmethod can simultaneously segment and label all biomedical objects in an image without any user-provided input

+

On object recognition, which aims to identify all objects in a given image along with their semantic types, we showed that BiomedParse can simultaneously segment and label all biomedical objects in an image without any user-provided input.

-
- +
+ +
+
+
@@ -605,11 +623,11 @@

Related Work

  • Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan and Jianfeng Gao.
- • Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing by Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon.
+ • Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing by Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon.
- • BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs by Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Andrea Tupini, Yu Wang, Matt Mazzola, Swadheen Shukla, Lars Liden, Jianfeng Gao, Matthew P. Lungren, Tristan Naumann, Sheng Wang, Hoifung Poon.
+ • BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs by Sheng Zhang, Yanbo Xu, Naoto Usuyama, Hanwen Xu, Jaspreet Bagga, Robert Tinn, Sam Preston, Rajesh Rao, Mu Wei, Naveen Valluri, Cliff Wong, Andrea Tupini, Yu Wang, Matt Mazzola, Swadheen Shukla, Lars Liden, Jianfeng Gao, Matthew P. Lungren, Tristan Naumann, Sheng Wang, Hoifung Poon.
- • BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys by Yu Gu, Jianwei Yang, Naoto Usuyama, Chunyuan Li, Sheng Zhang, Matthew P. Lungren, Jianfeng Gao, Hoifung Poon.
+ • BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys by Yu Gu, Jianwei Yang, Naoto Usuyama, Chunyuan Li, Sheng Zhang, Matthew P. Lungren, Jianfeng Gao, Hoifung Poon.