I did `pip install -r requirements.txt`, then ran:

```python
import torch
from lavis.models import load_model_and_preprocess
from lavis.processors import load_processor

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

caption = "Merlion near marina bay."

# Load model and preprocessors
# model, vis_processors, text_processors = load_model_and_preprocess("blip_image_text_matching", "base", device=device, is_eval=True)
model, vis_processors, text_processors = load_model_and_preprocess("blip_image_text_matching", "large", device=device, is_eval=True)
```
I got this error:
```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[12], line 2
      1 import torch
----> 2 from lavis.models import load_model_and_preprocess
      3 from lavis.processors import load_processor
      5 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

File /workspace/InstructBLIP_PEFT/lavis/__init__.py:16
     13 from lavis.common.registry import registry
     15 from lavis.datasets.builders import *
---> 16 from lavis.models import *
     17 from lavis.processors import *
     18 from lavis.tasks import *

File /workspace/InstructBLIP_PEFT/lavis/models/__init__.py:49
     46 from lavis.models.blip2_models.blip2_t5_instruct_qformer_llm_lora import Blip2T5InstructQformerLLMLoRA
     47 from lavis.models.blip2_models.blip2_vicuna_instruct_qformer_llm_lora import Blip2VicunaInstructQformerLLMLoRA
---> 49 from lavis.models.blip_diffusion_models.blip_diffusion import BlipDiffusion
     51 from lavis.models.pnp_vqa_models.pnp_vqa import PNPVQA
     52 from lavis.models.pnp_vqa_models.pnp_unifiedqav2_fid import PNPUnifiedQAv2FiD

File /workspace/InstructBLIP_PEFT/lavis/models/blip_diffusion_models/blip_diffusion.py:29
     27 from lavis.models.base_model import BaseModel
     28 from lavis.models.blip2_models.blip2_qformer import Blip2Qformer
---> 29 from lavis.models.blip_diffusion_models.modeling_ctx_clip import CtxCLIPTextModel
     30 from lavis.models.blip_diffusion_models.utils import numpy_to_pil, prepare_cond_image
     31 from lavis.models.blip_diffusion_models.ptp_utils import (
     32     LocalBlend,
     33     P2PCrossAttnProcessor,
     34     AttentionRefine,
     35 )

File /workspace/InstructBLIP_PEFT/lavis/models/blip_diffusion_models/modeling_ctx_clip.py:13
     11 from transformers.modeling_outputs import BaseModelOutputWithPooling
     12 from transformers.models.clip.configuration_clip import CLIPTextConfig
---> 13 from transformers.models.clip.modeling_clip import (
     14     CLIPEncoder,
     15     CLIPPreTrainedModel,
     16     _expand_mask,
     17 )
     20 class CtxCLIPTextModel(CLIPPreTrainedModel):
     21     config_class = CLIPTextConfig

ImportError: cannot import name '_expand_mask' from 'transformers.models.clip.modeling_clip' (/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py)
```
I think this might help you with this issue :) salesforce/LAVIS#571