diff --git a/assets/airbyte_with_milvus_1.png b/assets/airbyte_with_milvus_1.png
new file mode 100644
index 000000000..8c648d9e8
Binary files /dev/null and b/assets/airbyte_with_milvus_1.png differ
diff --git a/assets/airbyte_with_milvus_2.png b/assets/airbyte_with_milvus_2.png
new file mode 100644
index 000000000..16bfb96dc
Binary files /dev/null and b/assets/airbyte_with_milvus_2.png differ
diff --git a/assets/airbyte_with_milvus_3.png b/assets/airbyte_with_milvus_3.png
new file mode 100644
index 000000000..8de0da1d3
Binary files /dev/null and b/assets/airbyte_with_milvus_3.png differ
diff --git a/assets/airbyte_with_milvus_4.png b/assets/airbyte_with_milvus_4.png
new file mode 100644
index 000000000..cb8e39bf1
Binary files /dev/null and b/assets/airbyte_with_milvus_4.png differ
diff --git a/assets/airbyte_with_milvus_5.png b/assets/airbyte_with_milvus_5.png
new file mode 100644
index 000000000..64381dd23
Binary files /dev/null and b/assets/airbyte_with_milvus_5.png differ
diff --git a/assets/airbyte_with_milvus_6.png b/assets/airbyte_with_milvus_6.png
new file mode 100644
index 000000000..960e1fb16
Binary files /dev/null and b/assets/airbyte_with_milvus_6.png differ
diff --git a/site/en/integrations/integrate_with_airbyte.md b/site/en/integrations/integrate_with_airbyte.md
new file mode 100644
index 000000000..b376caf4d
--- /dev/null
+++ b/site/en/integrations/integrate_with_airbyte.md
@@ -0,0 +1,186 @@
+---
+id: integrate_with_airbyte.md
+summary: Airbyte is an open-source data movement infrastructure for building extract and load (EL) data pipelines. It is designed for versatility, scalability, and ease of use. Airbyte’s connector catalog comes “out-of-the-box” with over 350 pre-built connectors. These connectors can be used to start replicating data from a source to a destination in just a few minutes.
+title: "Airbyte: Open-Source Data Movement Infrastructure"
+---
+
+# Airbyte: Open-Source Data Movement Infrastructure
+
+Airbyte is an open-source data movement infrastructure for building extract and load (EL) data pipelines. It is designed for versatility, scalability, and ease of use. Airbyte’s connector catalog comes “out-of-the-box” with over 350 pre-built connectors. These connectors can be used to start replicating data from a source to a destination in just a few minutes.
+
+## Major Components of Airbyte
+
+### 1. Connector Catalog
+- **350+ Pre-Built Connectors**: Airbyte’s connector catalog comes “out-of-the-box” with over 350 pre-built connectors. These connectors can be used to start replicating data from a source to a destination in just a few minutes.
+- **No-Code Connector Builder**: You can easily extend Airbyte’s functionality to support your custom use cases through tools [like the No-Code Connector Builder](https://docs.airbyte.com/connector-development/connector-builder-ui/overview).
+
+### 2. The Platform
+Airbyte’s platform provides all the horizontal services required to configure and scale data movement operations, available as [cloud-managed](https://airbyte.com/product/airbyte-cloud) or [self-managed](https://airbyte.com/product/airbyte-enterprise).
+
+### 3. The User Interface
+Airbyte features a UI, [PyAirbyte](https://docs.airbyte.com/using-airbyte/pyairbyte/getting-started) (Python library), [API](https://docs.airbyte.com/api-documentation), and [Terraform Provider](https://docs.airbyte.com/terraform-documentation) to integrate with your preferred tooling and approach to infrastructure management.
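+
+For example, PyAirbyte lets you drive a connector from a few lines of Python. The snippet below is a minimal sketch based on the PyAirbyte quickstart; the `source-faker` connector and its `count` option are placeholders for whichever source and configuration you actually use:
+
+```python
+import airbyte as ab
+
+# Pull a small sample from a demo source; swap in the connector and config you need.
+source = ab.get_source("source-faker", config={"count": 100}, install_if_missing=True)
+source.check()               # verify the configuration
+source.select_all_streams()  # or select_streams([...]) for a subset
+result = source.read()       # records land in a local cache you can iterate over
+```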
+
+With Airbyte, users can integrate data sources into a Milvus cluster for similarity search.
+
+## Before You Begin
+You will need:
+- Zendesk account (or another data source you want to sync data from)
+- Airbyte account or local instance
+- OpenAI API key
+- Milvus cluster
+- Python 3.10 installed locally
+
+## Set Up Milvus Cluster
+
+If you have already deployed a K8s cluster for production, you can skip this step and proceed directly to [deploy Milvus Operator](https://milvus.io/docs/install_cluster-milvusoperator.md#Deploy-Milvus-Operator). If not, you can follow [the steps](https://milvus.io/docs/install_cluster-milvusoperator.md#Create-a-K8s-Cluster) to deploy a Milvus cluster with Milvus Operator.
+
+Individual entities (in our case, support tickets and knowledge base articles) are stored in a “collection” — after your cluster is set up, you need to create a collection. Choose a suitable name and set the Dimension to 1536 to match the vector dimensionality generated by the OpenAI embeddings service.
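+
+If you prefer to create the collection programmatically rather than through a UI, a minimal pymilvus sketch could look like the following. The collection name `zendesk` and the field names are assumptions that simply need to match what you configure in the Airbyte destination later, and the endpoint and token are placeholders:
+
+```python
+from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections
+
+# Connect with the endpoint and token of your cluster (placeholders).
+connections.connect(uri="https://<your-milvus-endpoint>", token="<your-token>")
+
+fields = [
+    FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True),
+    # 1536 matches the dimensionality of OpenAI's text-embedding-ada-002 embeddings.
+    FieldSchema(name="vector", dtype=DataType.FLOAT_VECTOR, dim=1536),
+]
+schema = CollectionSchema(fields=fields, enable_dynamic_field=True)
+collection = Collection(name="zendesk", schema=schema)
+```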
+
+After creation, record the endpoint and [authentication](https://milvus.io/docs/authenticate.md?tab=docker) info.
+
+## Set Up Connection in Airbyte
+
+Our database is ready, so let’s move some data over! To do this, we need to configure a connection in Airbyte. Either sign up for an Airbyte cloud account at [cloud.airbyte.com](https://cloud.airbyte.com) or fire up a local instance as described [in the documentation](https://docs.airbyte.com/using-airbyte/getting-started/).
+
+### Set Up Source
+
+Once your instance is running, we need to set up the connection — click “New connection” and pick the “Zendesk Support” connector as the source. After clicking the “Test and Save” button, Airbyte will check whether the connection can be established.
+
+On Airbyte cloud, you can easily authenticate by clicking the Authenticate button. When using a local Airbyte instance, follow the directions outlined on the [documentation](https://docs.airbyte.com/integrations/sources/zendesk-support#airbyte-open-source-enable-api-token-access-and-generate-a-token) page.
+
+### Set Up Destination
+
+If everything is working correctly, the next step is to set up the destination to move data to. Here, pick the “Milvus” connector.
+
+The Milvus connector does three things:
+- **Chunking and Formatting** - Split Zendesk records into text and metadata. If the text is larger than the specified chunk size, records are split up into multiple parts that are loaded into the collection individually. The splitting of text (or chunking) can, for example, happen in the case of large support tickets or knowledge articles. By splitting up the text, you can ensure that searches always yield useful results.
+
+Let’s go with a chunk size of 1000 tokens and text fields of body, title, description, and subject, as these will be present in the data we will receive from Zendesk.
+
+- **Embedding** - Machine learning models transform the text chunks produced by the processing step into vector embeddings that you can then search by semantic similarity. To create the embeddings, you must supply the OpenAI API key. Airbyte will send each chunk to OpenAI and add the resulting vector to the entities loaded into your Milvus cluster.
+- **Indexing** - Once you have vectorized the chunks, you can load them into the database. To do so, insert the information you got when setting up your cluster and collection in your Milvus cluster.
+
+Clicking “Test and save” will check whether everything is lined up correctly (valid credentials, the collection exists and has the same vector dimensionality as the configured embedding, etc.). Conceptually, the connector’s three steps fit together as in the sketch below.
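+
+The following is purely a conceptual sketch, not Airbyte’s actual implementation: it shows, for a single record, how chunking, embedding, and indexing relate. The helper name and the character-based splitting are illustrative assumptions (Airbyte counts tokens and handles the configured text and metadata fields for you):
+
+```python
+import openai
+import pymilvus
+
+
+def index_record(collection: pymilvus.Collection, text: str, metadata: dict, chunk_size: int = 1000):
+    # 1. Chunking: naive fixed-size split; Airbyte splits the configured text fields by token count.
+    chunks = [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]
+    for chunk in chunks:
+        # 2. Embedding: one vector per chunk from the OpenAI embeddings API.
+        vector = openai.Embedding.create(input=chunk, model="text-embedding-ada-002")["data"][0]["embedding"]
+        # 3. Indexing: store the chunk text, its metadata, and the vector in Milvus
+        #    (assumes the collection allows dynamic fields for the metadata keys).
+        collection.insert([{"vector": vector, "text": chunk, **metadata}])
+```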
+
+### Set Up Stream Sync Flow
+The last step before data is ready to flow is selecting which “streams” to sync. A stream is a collection of records in the source. Zendesk supports a large number of streams that are not relevant to our use case, so let’s select only “tickets” and “articles” and disable all others to save bandwidth and make sure only the relevant information shows up in searches. You can select which fields to extract from the source by clicking the stream name. The “Incremental | Append + Deduped” sync mode means that subsequent connection runs keep Zendesk and Milvus in sync while transferring minimal data (only the articles and tickets that have changed since the last run).
+
+As soon as the connection is set up, Airbyte will start syncing data. It can take a few minutes for the data to appear in your Milvus collection.
+
+If you select a replication frequency, Airbyte will run regularly to keep your Milvus collection up to date with changes to Zendesk articles and newly created issues.
+
+### Check Flow
+You can check in the Milvus cluster UI how the data is structured in the collection by navigating to the playground and executing a “Query Data” query with a filter set to “_ab_stream == \”tickets\””. As you can see in the Result view, each record coming from Zendesk is stored as a separate entity in Milvus with all the specified metadata. The text chunk the embedding is based on is shown as the “text” property; this is the text that was embedded using OpenAI and is what we will search on.
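+
+If you prefer to verify this from code instead of the UI, a small pymilvus sketch along these lines should show the same records (the collection name and the `MILVUS_URL`/`MILVUS_TOKEN` environment variables are the ones used later in this tutorial):
+
+```python
+import os
+
+import pymilvus
+
+pymilvus.connections.connect(uri=os.environ["MILVUS_URL"], token=os.environ["MILVUS_TOKEN"])
+collection = pymilvus.Collection("zendesk")
+collection.load()
+
+# Fetch a few ticket entities and print the stream name plus the start of the embedded text.
+for record in collection.query(expr='_ab_stream == "tickets"', output_fields=["_ab_stream", "text"], limit=3):
+    print(record["_ab_stream"], "-", record["text"][:80])
+```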
+
+## Build a Streamlit App Querying the Collection
+Our data is ready — now we need to build the application to use it. In this case, the application will be a simple support form for users to submit support cases. When the user hits submit, we will do two things:
+- Search for similar tickets submitted by users of the same organization
+- Search for knowledge base articles that might be relevant to the user
+
+In both cases, we will leverage semantic search using OpenAI embeddings. To do this, the description of the problem the user entered is also embedded and used to retrieve similar entities from the Milvus cluster. If there are relevant results, they are shown below the form.
+
+### Set up UI environment
+You will need a local Python installation as we will use Streamlit to implement the application.
+
+First, install Streamlit, the Milvus client library, and the OpenAI client library locally:
+```shell
+pip install streamlit pymilvus openai
+```
+To render a basic support form, create a Python file `basic_support_form.py`:
+```python
+import streamlit as st
+
+with st.form("my_form"):
+ st.write("Submit a support case")
+ text_val = st.text_area("Describe your problem")
+
+ submitted = st.form_submit_button("Submit")
+ if submitted:
+ # TODO check for related support cases and articles
+ st.write("Submitted!")
+```
+To run your application, use `streamlit run`:
+```shell
+streamlit run basic_support_form.py
+```
+This will render a basic form. The code for this example can also be found on [GitHub](https://github.com/airbytehq/tutorial-similarity-search/blob/main/1_basic_support_form.py).
+
+### Set up backend query service
+Next, let’s check for existing open tickets that might be relevant. To do this, we embed the text the user entered using OpenAI, then do a similarity search on our collection, filtering for tickets that are still open. If there is one with a very low distance between the supplied text and an existing ticket, let the user know and don’t submit the new ticket:
+```python
+import streamlit as st
+import os
+import pymilvus
+import openai
+
+
+with st.form("my_form"):
+ st.write("Submit a support case")
+ text_val = st.text_area("Describe your problem?")
+
+ submitted = st.form_submit_button("Submit")
+ if submitted:
+ import os
+ import pymilvus
+ import openai
+
+ org_id = 360033549136 # TODO Load from customer login data
+
+ pymilvus.connections.connect(uri=os.environ["MILVUS_URL"], token=os.environ["MILVUS_TOKEN"])
+ collection = pymilvus.Collection("zendesk")
+
+ embedding = openai.Embedding.create(input=text_val, model="text-embedding-ada-002")['data'][0]['embedding']
+
+ results = collection.search(data=[embedding], anns_field="vector", param={}, limit=2, output_fields=["_id", "subject", "description"], expr=f'status == "new" and organization_id == {org_id}')
+
+ st.write(results[0])
+ if len(results[0]) > 0 and results[0].distances[0] < 0.35:
+ matching_ticket = results[0][0].entity
+ st.write(f"This case seems very similar to {matching_ticket.get('subject')} (id #{matching_ticket.get('_id')}). Make sure it has not been submitted before")
+ else:
+ st.write("Submitted!")
+
+```
+Several things are happening here:
+- The connection to the Milvus cluster is set up.
+- The OpenAI service is used to generate an embedding of the description the user entered.
+- A similarity search is performed, filtering results by the ticket status and the organization id (as only open tickets of the same organization are relevant).
+- If there are results and the distance between the embedding vectors of the existing ticket and the newly entered text is below a certain threshold, call out this fact.
+
+To run the new app, you need to set the environment variables for OpenAI and Milvus first:
+```shell
+export MILVUS_TOKEN=...
+export MILVUS_URL=https://...
+export OPENAI_API_KEY=sk-...
+
+streamlit run app.py
+```
+When trying to submit a ticket that already exists, this is how the result will look. The code for this example can also be found on [GitHub](https://github.com/airbytehq/tutorial-similarity-search/blob/main/2_open_ticket_check.py).
+
+### Show more relevant information
+As you can see in the green debug output (hidden in the final version), two tickets matched our search (in status new, from the current organization, and close to the embedding vector). However, the first (relevant) one ranked higher than the second (irrelevant in this situation), which is reflected in the lower distance value. This relationship is captured in the embedding vectors without directly matching words, as a regular full-text search would.
+
+To wrap it up, let’s show helpful information after the ticket gets submitted to give the user as much relevant information upfront as possible.
+
+To do this, we are going to do a second search after the ticket gets submitted to fetch the top-matching knowledge base articles:
+```python
+        ......
+
+        else:
+            # TODO Actually send out the ticket
+            st.write("Submitted!")
+            article_results = collection.search(data=[embedding], anns_field="vector", param={}, limit=5, output_fields=["title", "html_url"], expr=f'_ab_stream == "articles"')
+            st.write(article_results[0])
+            if len(article_results[0]) > 0:
+                st.write("We also found some articles that might help you:")
+                for hit in article_results[0]:
+                    if hit.distance < 0.362:
+                        st.write(f"* [{hit.entity.get('title')}]({hit.entity.get('html_url')})")
+
+```
+If there is no open support ticket with a high similarity score, the new ticket gets submitted and relevant knowledge base articles are shown below the form. The code for this example can also be found on [GitHub](https://github.com/airbytehq/tutorial-similarity-search/blob/main/3_relevant_articles.py).
+
+## Conclusion
+While the UI shown here is not an actual support form but an example to illustrate the use case, the combination of Airbyte and Milvus is a very powerful one: it makes it easy to load text from a wide variety of sources (from databases like Postgres, to APIs like Zendesk or GitHub, to completely custom sources built using Airbyte's SDK or visual connector builder) and index it in embedded form in Milvus, a powerful vector search engine that can scale to huge amounts of data.
+
+Airbyte and Milvus are open source and completely free to use on your infrastructure, with cloud offerings to offload operations if desired.
+
+Beyond the classical semantic search use case illustrated in this article, the same general setup can also be used to build a question-answering chatbot using Retrieval-Augmented Generation (RAG), recommender systems, or ways to make advertising more relevant and efficient.
\ No newline at end of file
diff --git a/site/en/integrations/langchain/basic_usage_langchain.md b/site/en/integrations/langchain/basic_usage_langchain.md
new file mode 100644
index 000000000..d958b084b
--- /dev/null
+++ b/site/en/integrations/langchain/basic_usage_langchain.md
@@ -0,0 +1,335 @@
+---
+id: basic_usage_langchain.md
+summary: This notebook shows how to use functionality related to the Milvus vector database.
+title: Use Milvus as a Vector Store
+---
+
+# Use Milvus as a Vector Store
+
+>[Milvus](https://milvus.io/docs/overview.md) is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
+
+This notebook shows how to use functionality related to the Milvus vector database.
+
+## Setup
+
+You'll need to install `langchain-milvus` with `pip install -qU langchain-milvus` to use this integration.
+
+
+
+```python
+%pip install -qU langchain_milvus
+```
+
+The latest version of pymilvus comes with a local vector database, Milvus Lite, which is good for prototyping. If you have a large amount of data, such as more than a million documents, we recommend setting up a more performant Milvus server on [Docker or Kubernetes](https://milvus.io/docs/install_standalone-docker.md#Start-Milvus).
+
+### Credentials
+
+No credentials are needed to use the `Milvus` vector store.
+
+## Initialization
+
+
+
+```python
+# | output: false
+# | echo: false
+from langchain_openai import OpenAIEmbeddings
+
+embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
+```
+
+
+```python
+from langchain_milvus import Milvus
+
+# The easiest way is to use Milvus Lite where everything is stored in a local file.
+# If you have a Milvus server you can use the server URI such as "http://localhost:19530".
+URI = "./milvus_example.db"
+
+vector_store = Milvus(
+ embedding_function=embeddings,
+ connection_args={"uri": URI},
+)
+```
+
+### Compartmentalize the data with Milvus Collections
+
+You can store different unrelated documents in different collections within the same Milvus instance to maintain the context.
+
+Here's how you can create a new collection:
+
+
+```python
+from langchain_core.documents import Document
+
+vector_store_saved = Milvus.from_documents(
+ [Document(page_content="foo!")],
+ embeddings,
+ collection_name="langchain_example",
+ connection_args={"uri": URI},
+)
+```
+
+And here is how you can retrieve that stored collection:
+
+
+```python
+vector_store_loaded = Milvus(
+ embeddings,
+ connection_args={"uri": URI},
+ collection_name="langchain_example",
+)
+```
+
+## Manage vector store
+
+Once you have created your vector store, you can interact with it by adding and deleting different items.
+
+### Add items to vector store
+
+We can add items to our vector store by using the `add_documents` function.
+
+
+```python
+from uuid import uuid4
+
+from langchain_core.documents import Document
+
+document_1 = Document(
+ page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
+ metadata={"source": "tweet"},
+)
+
+document_2 = Document(
+ page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
+ metadata={"source": "news"},
+)
+
+document_3 = Document(
+ page_content="Building an exciting new project with LangChain - come check it out!",
+ metadata={"source": "tweet"},
+)
+
+document_4 = Document(
+ page_content="Robbers broke into the city bank and stole $1 million in cash.",
+ metadata={"source": "news"},
+)
+
+document_5 = Document(
+ page_content="Wow! That was an amazing movie. I can't wait to see it again.",
+ metadata={"source": "tweet"},
+)
+
+document_6 = Document(
+ page_content="Is the new iPhone worth the price? Read this review to find out.",
+ metadata={"source": "website"},
+)
+
+document_7 = Document(
+ page_content="The top 10 soccer players in the world right now.",
+ metadata={"source": "website"},
+)
+
+document_8 = Document(
+ page_content="LangGraph is the best framework for building stateful, agentic applications!",
+ metadata={"source": "tweet"},
+)
+
+document_9 = Document(
+ page_content="The stock market is down 500 points today due to fears of a recession.",
+ metadata={"source": "news"},
+)
+
+document_10 = Document(
+ page_content="I have a bad feeling I am going to get deleted :(",
+ metadata={"source": "tweet"},
+)
+
+documents = [
+ document_1,
+ document_2,
+ document_3,
+ document_4,
+ document_5,
+ document_6,
+ document_7,
+ document_8,
+ document_9,
+ document_10,
+]
+uuids = [str(uuid4()) for _ in range(len(documents))]
+
+vector_store.add_documents(documents=documents, ids=uuids)
+```
+
+
+
+
+ ['b0248595-2a41-4f6b-9c25-3a24c1278bb3',
+ 'fa642726-5329-4495-a072-187e948dd71f',
+ '9905001c-a4a3-455e-ab94-72d0ed11b476',
+ 'eacc7256-d7fa-4036-b1f7-83d7a4bee0c5',
+ '7508f7ff-c0c9-49ea-8189-634f8a0244d8',
+ '2e179609-3ff7-4c6a-9e05-08978903fe26',
+ 'fab1f2ac-43e1-45f9-b81b-fc5d334c6508',
+ '1206d237-ee3a-484f-baf2-b5ac38eeb314',
+ 'd43cbf9a-a772-4c40-993b-9439065fec01',
+ '25e667bb-6f09-4574-a368-661069301906']
+
+
+
+### Delete items from vector store
+
+
+```python
+vector_store.delete(ids=[uuids[-1]])
+```
+
+
+
+
+ (insert count: 0, delete count: 1, upsert count: 0, timestamp: 0, success count: 0, err count: 0, cost: 0)
+
+
+
+## Query vector store
+
+Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it during the running of your chain or agent.
+
+### Query directly
+
+#### Similarity search
+
+Performing a simple similarity search with filtering on metadata can be done as follows:
+
+
+```python
+results = vector_store.similarity_search(
+ "LangChain provides abstractions to make working with LLMs easy",
+ k=2,
+ filter={"source": "tweet"},
+)
+for res in results:
+ print(f"* {res.page_content} [{res.metadata}]")
+```
+
+ * Building an exciting new project with LangChain - come check it out! [{'pk': '9905001c-a4a3-455e-ab94-72d0ed11b476', 'source': 'tweet'}]
+ * LangGraph is the best framework for building stateful, agentic applications! [{'pk': '1206d237-ee3a-484f-baf2-b5ac38eeb314', 'source': 'tweet'}]
+
+
+#### Similarity search with score
+
+You can also search with score:
+
+
+```python
+results = vector_store.similarity_search_with_score(
+ "Will it be hot tomorrow?", k=1, filter={"source": "news"}
+)
+for res, score in results:
+ print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
+```
+
+ * [SIM=21192.628906] The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees. [{'pk': 'fa642726-5329-4495-a072-187e948dd71f', 'source': 'news'}]
+
+
+For a full list of all the search options available when using the `Milvus` vector store, you can visit the [API reference](https://api.python.langchain.com/en/latest/vectorstores/langchain_milvus.vectorstores.milvus.Milvus.html).
+
+### Query by turning into retriever
+
+You can also transform the vector store into a retriever for easier usage in your chains.
+
+
+```python
+retriever = vector_store.as_retriever(search_type="mmr", search_kwargs={"k": 1})
+retriever.invoke("Stealing from the bank is a crime", filter={"source": "news"})
+```
+
+
+
+
+ [Document(metadata={'pk': 'eacc7256-d7fa-4036-b1f7-83d7a4bee0c5', 'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]
+
+
+
+## Usage for retrieval-augmented generation
+
+For guides on how to use this vector store for retrieval-augmented generation (RAG), see the following sections:
+
+- [Tutorials: working with external knowledge](https://python.langchain.com/v0.2/docs/tutorials/#working-with-external-knowledge)
+- [How-to: Question and answer with RAG](https://python.langchain.com/v0.2/docs/how_to/#qa-with-rag)
+- [Retrieval conceptual docs](https://python.langchain.com/v0.2/docs/concepts/#retrieval)
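+
+As a minimal sketch of the pattern (the prompt wording and the `gpt-4o-mini` model name are illustrative assumptions), the vector store created above can back a small RAG chain like this:
+
+```python
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import ChatPromptTemplate
+from langchain_core.runnables import RunnablePassthrough
+from langchain_openai import ChatOpenAI
+
+retriever = vector_store.as_retriever(search_kwargs={"k": 2})
+prompt = ChatPromptTemplate.from_template(
+    "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
+)
+
+
+def format_docs(docs):
+    # Concatenate retrieved documents into a single context string.
+    return "\n\n".join(doc.page_content for doc in docs)
+
+
+rag_chain = (
+    {"context": retriever | format_docs, "question": RunnablePassthrough()}
+    | prompt
+    | ChatOpenAI(model="gpt-4o-mini")
+    | StrOutputParser()
+)
+
+rag_chain.invoke("What happened at the bank?")
+```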
+
+### Per-User Retrieval
+
+When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see each other's data.
+
+Milvus recommends using [partition_key](https://milvus.io/docs/multi_tenancy.md#Partition-key-based-multi-tenancy) to implement multi-tenancy. Here is an example:
+> The partition key feature is not available in Milvus Lite. If you want to use it, you need to start a Milvus server from [Docker or Kubernetes](https://milvus.io/docs/install_standalone-docker.md#Start-Milvus).
+
+
+```python
+from langchain_core.documents import Document
+
+docs = [
+ Document(page_content="i worked at kensho", metadata={"namespace": "harrison"}),
+ Document(page_content="i worked at facebook", metadata={"namespace": "ankush"}),
+]
+vectorstore = Milvus.from_documents(
+ docs,
+ embeddings,
+ connection_args={"uri": URI},
+ drop_old=True,
+ partition_key_field="namespace", # Use the "namespace" field as the partition key
+)
+```
+
+To conduct a search using the partition key, you should include either of the following in the boolean expression of the search request:
+
+`search_kwargs={"expr": ' == "xxxx"'}`
+
+`search_kwargs={"expr": ' == in ["xxx", "xxx"]'}`
+
+Do replace `` with the name of the field that is designated as the partition key.
+
+Milvus routes each search to the partition that corresponds to the specified partition key, filters entities according to the partition key, and searches among the filtered entities.
+
+
+
+```python
+# This will only get documents for Ankush
+vectorstore.as_retriever(search_kwargs={"expr": 'namespace == "ankush"'}).invoke(
+ "where did i work?"
+)
+```
+
+
+
+
+ [Document(page_content='i worked at facebook', metadata={'namespace': 'ankush'})]
+
+
+
+
+```python
+# This will only get documents for Harrison
+vectorstore.as_retriever(search_kwargs={"expr": 'namespace == "harrison"'}).invoke(
+ "where did i work?"
+)
+```
+
+
+
+
+ [Document(page_content='i worked at kensho', metadata={'namespace': 'harrison'})]
+
+
+
+## API reference
+
+For detailed documentation of all `Milvus` vector store features and configurations head to the API reference: https://api.python.langchain.com/en/latest/vectorstores/langchain_milvus.vectorstores.milvus.Milvus.html
diff --git a/site/en/integrations/integrate_with_langchain.md b/site/en/integrations/langchain/integrate_with_langchain.md
similarity index 100%
rename from site/en/integrations/integrate_with_langchain.md
rename to site/en/integrations/langchain/integrate_with_langchain.md
diff --git a/site/en/integrations/langchain/milvus_hybrid_search_retriever.md b/site/en/integrations/langchain/milvus_hybrid_search_retriever.md
new file mode 100644
index 000000000..f0a91d3df
--- /dev/null
+++ b/site/en/integrations/langchain/milvus_hybrid_search_retriever.md
@@ -0,0 +1,331 @@
+---
+id: milvus_hybrid_search_retriever.md
+summary: This notebook shows how to use functionality related to the Milvus vector database.
+title: Milvus Hybrid Search Retriever
+---
+
+# Milvus Hybrid Search Retriever
+
+## Overview
+
+> [Milvus](https://milvus.io/docs) is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible, and provides a consistent user experience regardless of the deployment environment.
+
+This will help you get started with the Milvus Hybrid Search [retriever](/docs/concepts/#retrievers), which combines the strengths of both dense and sparse vector search. For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).
+
+See also the Milvus Multi-Vector Search [docs](https://milvus.io/docs/multi-vector-search.md).
+
+### Integration details
+
+| Retriever | Self-host | Cloud offering | Package |
+| :--- | :--- | :---: | :---: |
+| [MilvusCollectionHybridSearchRetriever](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html) | ✅ | ❌ | langchain_milvus |
+
+
+
+## Setup
+
+If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:
+
+
+```python
+# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
+# os.environ["LANGSMITH_TRACING"] = "true"
+```
+
+### Installation
+
+This retriever lives in the `langchain-milvus` package. This guide requires the following dependencies:
+
+
+```python
+%pip install --upgrade --quiet pymilvus[model] langchain-milvus langchain-openai
+```
+
+
+```python
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import PromptTemplate
+from langchain_core.runnables import RunnablePassthrough
+from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever
+from langchain_milvus.utils.sparse import BM25SparseEmbedding
+from langchain_openai import ChatOpenAI, OpenAIEmbeddings
+from pymilvus import (
+ Collection,
+ CollectionSchema,
+ DataType,
+ FieldSchema,
+ WeightedRanker,
+ connections,
+)
+```
+
+### Start the Milvus service
+
+Please refer to the [Milvus documentation](https://milvus.io/docs/install_standalone-docker.md) to start the Milvus service.
+
+After starting Milvus, you need to specify your Milvus connection URI.
+
+
+```python
+CONNECTION_URI = "http://localhost:19530"
+```
+
+### Prepare OpenAI API Key
+
+Please refer to the [OpenAI documentation](https://platform.openai.com/account/api-keys) to obtain your OpenAI API key, and set it as an environment variable.
+
+```shell
+export OPENAI_API_KEY=
+```
+
+
+### Prepare dense and sparse embedding functions
+
+Let's make up 10 fake descriptions of novels. In actual production, this could be a large amount of text data.
+
+
+```python
+texts = [
+ "In 'The Whispering Walls' by Ava Moreno, a young journalist named Sophia uncovers a decades-old conspiracy hidden within the crumbling walls of an ancient mansion, where the whispers of the past threaten to destroy her own sanity.",
+ "In 'The Last Refuge' by Ethan Blackwood, a group of survivors must band together to escape a post-apocalyptic wasteland, where the last remnants of humanity cling to life in a desperate bid for survival.",
+ "In 'The Memory Thief' by Lila Rose, a charismatic thief with the ability to steal and manipulate memories is hired by a mysterious client to pull off a daring heist, but soon finds themselves trapped in a web of deceit and betrayal.",
+ "In 'The City of Echoes' by Julian Saint Clair, a brilliant detective must navigate a labyrinthine metropolis where time is currency, and the rich can live forever, but at a terrible cost to the poor.",
+ "In 'The Starlight Serenade' by Ruby Flynn, a shy astronomer discovers a mysterious melody emanating from a distant star, which leads her on a journey to uncover the secrets of the universe and her own heart.",
+ "In 'The Shadow Weaver' by Piper Redding, a young orphan discovers she has the ability to weave powerful illusions, but soon finds herself at the center of a deadly game of cat and mouse between rival factions vying for control of the mystical arts.",
+ "In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.",
+ "In 'The Clockwork Kingdom' by Augusta Wynter, a brilliant inventor discovers a hidden world of clockwork machines and ancient magic, where a rebellion is brewing against the tyrannical ruler of the land.",
+ "In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.",
+ "In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.",
+]
+```
+
+We will use the [OpenAI Embedding](https://platform.openai.com/docs/guides/embeddings) to generate dense vectors, and the [BM25 algorithm](https://en.wikipedia.org/wiki/Okapi_BM25) to generate sparse vectors.
+
+Initialize dense embedding function and get dimension
+
+
+```python
+dense_embedding_func = OpenAIEmbeddings()
+dense_dim = len(dense_embedding_func.embed_query(texts[1]))
+dense_dim
+```
+
+
+
+
+ 1536
+
+
+
+Initialize sparse embedding function.
+
+Note that the output of the sparse embedding is a set of sparse vectors, which represent the indices and weights of the keywords of the input text.
+
+
+```python
+sparse_embedding_func = BM25SparseEmbedding(corpus=texts)
+sparse_embedding_func.embed_query(texts[1])
+```
+
+
+
+
+ {0: 0.4270424944042204,
+ 21: 1.845826690498331,
+ 22: 1.845826690498331,
+ 23: 1.845826690498331,
+ 24: 1.845826690498331,
+ 25: 1.845826690498331,
+ 26: 1.845826690498331,
+ 27: 1.2237754316221157,
+ 28: 1.845826690498331,
+ 29: 1.845826690498331,
+ 30: 1.845826690498331,
+ 31: 1.845826690498331,
+ 32: 1.845826690498331,
+ 33: 1.845826690498331,
+ 34: 1.845826690498331,
+ 35: 1.845826690498331,
+ 36: 1.845826690498331,
+ 37: 1.845826690498331,
+ 38: 1.845826690498331,
+ 39: 1.845826690498331}
+
+
+
+### Create Milvus Collection and load data
+
+Initialize connection URI and establish connection
+
+
+```python
+connections.connect(uri=CONNECTION_URI)
+```
+
+Define field names and their data types
+
+
+```python
+pk_field = "doc_id"
+dense_field = "dense_vector"
+sparse_field = "sparse_vector"
+text_field = "text"
+fields = [
+ FieldSchema(
+ name=pk_field,
+ dtype=DataType.VARCHAR,
+ is_primary=True,
+ auto_id=True,
+ max_length=100,
+ ),
+ FieldSchema(name=dense_field, dtype=DataType.FLOAT_VECTOR, dim=dense_dim),
+ FieldSchema(name=sparse_field, dtype=DataType.SPARSE_FLOAT_VECTOR),
+ FieldSchema(name=text_field, dtype=DataType.VARCHAR, max_length=65_535),
+]
+```
+
+Create a collection with the defined schema
+
+
+```python
+schema = CollectionSchema(fields=fields, enable_dynamic_field=False)
+collection = Collection(
+ name="IntroductionToTheNovels", schema=schema, consistency_level="Strong"
+)
+```
+
+Define index for dense and sparse vectors
+
+
+```python
+dense_index = {"index_type": "FLAT", "metric_type": "IP"}
+collection.create_index("dense_vector", dense_index)
+sparse_index = {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"}
+collection.create_index("sparse_vector", sparse_index)
+collection.flush()
+```
+
+Insert entities into the collection and load the collection
+
+
+```python
+entities = []
+for text in texts:
+    entity = {
+        dense_field: dense_embedding_func.embed_documents([text])[0],
+        sparse_field: sparse_embedding_func.embed_documents([text])[0],
+        text_field: text,
+    }
+    entities.append(entity)
+collection.insert(entities)
+collection.load()
+```
+
+## Instantiation
+
+Now we can instantiate our retriever, defining search parameters for sparse and dense fields:
+
+
+```python
+sparse_search_params = {"metric_type": "IP"}
+dense_search_params = {"metric_type": "IP", "params": {}}
+retriever = MilvusCollectionHybridSearchRetriever(
+ collection=collection,
+ rerank=WeightedRanker(0.5, 0.5),
+ anns_fields=[dense_field, sparse_field],
+ field_embeddings=[dense_embedding_func, sparse_embedding_func],
+ field_search_params=[dense_search_params, sparse_search_params],
+ top_k=3,
+ text_field=text_field,
+)
+```
+
+In the input parameters of this retriever, we use a dense embedding and a sparse embedding to perform hybrid search on the two fields of this collection, and use `WeightedRanker` for reranking. Finally, the top 3 documents will be returned.
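+
+If you would rather fuse results by rank than by weighted score, pymilvus also provides an `RRFRanker` that can be swapped in. The following is a sketch under that assumption, reusing the collection, fields, and embedding functions defined above:
+
+```python
+from pymilvus import RRFRanker
+
+rrf_retriever = MilvusCollectionHybridSearchRetriever(
+    collection=collection,
+    rerank=RRFRanker(),  # reciprocal rank fusion instead of weighted score fusion
+    anns_fields=[dense_field, sparse_field],
+    field_embeddings=[dense_embedding_func, sparse_embedding_func],
+    field_search_params=[dense_search_params, sparse_search_params],
+    top_k=3,
+    text_field=text_field,
+)
+```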
+
+## Usage
+
+
+```python
+retriever.invoke("What are the story about ventures?")
+```
+
+
+
+
+ [Document(page_content="In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.", metadata={'doc_id': '449281835035545843'}),
+ Document(page_content="In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.", metadata={'doc_id': '449281835035545845'}),
+ Document(page_content="In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.", metadata={'doc_id': '449281835035545846'})]
+
+
+
+## Use within a chain
+
+Initialize ChatOpenAI and define a prompt template
+
+
+```python
+llm = ChatOpenAI()
+
+PROMPT_TEMPLATE = """
+Human: You are an AI assistant, and provides answers to questions by using fact based and statistical information when possible.
+Use the following pieces of information to provide a concise answer to the question enclosed in <question> tags.
+
+<context>
+{context}
+</context>
+
+<question>
+{question}
+</question>
+
+Assistant:"""
+
+prompt = PromptTemplate(
+ template=PROMPT_TEMPLATE, input_variables=["context", "question"]
+)
+```
+
+Define a function for formatting documents
+
+
+```python
+def format_docs(docs):
+ return "\n\n".join(doc.page_content for doc in docs)
+```
+
+Define a chain using the retriever and other components
+
+
+```python
+rag_chain = (
+ {"context": retriever | format_docs, "question": RunnablePassthrough()}
+ | prompt
+ | llm
+ | StrOutputParser()
+)
+```
+
+Perform a query using the defined chain
+
+
+```python
+rag_chain.invoke("What novels has Lila written and what are their contents?")
+```
+
+
+
+
+ "Lila Rose has written 'The Memory Thief,' which follows a charismatic thief with the ability to steal and manipulate memories as they navigate a daring heist and a web of deceit and betrayal."
+
+
+
+Drop the collection
+
+
+```python
+collection.drop()
+```
+
+## API reference
+
+For detailed documentation of all `MilvusCollectionHybridSearchRetriever` features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/retrievers/langchain_milvus.retrievers.milvus_hybrid_search.MilvusCollectionHybridSearchRetriever.html).
diff --git a/site/en/menuStructure/en.json b/site/en/menuStructure/en.json
index de28f31e1..872f0ddf1 100644
--- a/site/en/menuStructure/en.json
+++ b/site/en/menuStructure/en.json
@@ -1218,9 +1218,28 @@
},
{
"label": "LangChain",
- "id": "integrate_with_langchain.md",
+ "id": "langchain",
"order": 6,
- "children": []
+ "children": [
+ {
+ "label": "Basic Usage",
+ "id": "basic_usage_langchain.md",
+ "order": 1,
+ "children": []
+ },
+ {
+ "label": "RAG with LangChain",
+ "id": "integrate_with_langchain.md",
+ "order": 2,
+ "children": []
+ },
+ {
+ "label": "Hybrid Search",
+ "id": "milvus_hybrid_search_retriever.md",
+ "order": 3,
+ "children": []
+ }
+ ]
},
{
"label": "HayStack",
diff --git a/site/en/tutorials/build-rag-with-milvus.md b/site/en/tutorials/build-rag-with-milvus.md
index 2733fdd35..ced9f07e4 100644
--- a/site/en/tutorials/build-rag-with-milvus.md
+++ b/site/en/tutorials/build-rag-with-milvus.md
@@ -280,3 +280,4 @@ print(response.choices[0].message.content)
## Quick Deploy
To learn about how to start an online demo with this tutorial, please refer to [the example application](https://github.com/milvus-io/bootcamp/tree/master/bootcamp/tutorials/quickstart/apps/rag_search_with_milvus).
+
diff --git a/site/en/tutorials/hybrid_search_with_milvus.md b/site/en/tutorials/hybrid_search_with_milvus.md
index ef5bc8dd4..7a24a94dd 100644
--- a/site/en/tutorials/hybrid_search_with_milvus.md
+++ b/site/en/tutorials/hybrid_search_with_milvus.md
@@ -10,8 +10,6 @@ title: Hybrid Search with Milvus
-# Hybrid Search with Dense and Sparse Vectors in Milvus
-
In this tutorial, we will demonstrate how to conduct hybrid search with [Milvus](https://milvus.io/docs/multi-vector-search.md) and [BGE-M3 model](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). BGE-M3 model can convert text into dense and sparse vectors. Milvus supports storing both types of vectors in one collection, allowing for hybrid search that enhances the result relevance.
Milvus supports Dense, Sparse, and Hybrid retrieval methods:
diff --git a/site/en/tutorials/image_similarity_search.md b/site/en/tutorials/image_similarity_search.md
index d55f2ea0c..e4fbc5058 100644
--- a/site/en/tutorials/image_similarity_search.md
+++ b/site/en/tutorials/image_similarity_search.md
@@ -186,7 +186,7 @@ display(concatenated_image)
-
+![png](image_search_with_milvus_files/image_search_with_milvus_14_1.png)
@@ -206,3 +206,4 @@ We can see that most of the images are from the same category as the search imag
To learn about how to start an online demo with this tutorial, please refer to [the example application](https://github.com/milvus-io/bootcamp/tree/master/bootcamp/tutorials/quickstart/apps/image_search_with_milvus).
+
diff --git a/site/en/tutorials/multimodal_rag_with_milvus.md b/site/en/tutorials/multimodal_rag_with_milvus.md
index bbeba2c19..c2ddd50a1 100644
--- a/site/en/tutorials/multimodal_rag_with_milvus.md
+++ b/site/en/tutorials/multimodal_rag_with_milvus.md
@@ -39,6 +39,7 @@ If you are using Google Colab, to enable dependencies just installed, you may ne
The following command will download the example data and extract to a local folder "./images_folder" including:
- **images**: A subset of [Amazon Reviews 2023](https://github.com/hyp1231/AmazonReviews2023) containing approximately 900 images from the categories "Appliance", "Cell_Phones_and_Accessories", and "Electronics".
+
- **leopard.jpg**: An example query image.
@@ -119,9 +120,9 @@ for image_path in tqdm(image_list, desc="Generating image embeddings: "):
print("Number of encoded images:", len(image_dict))
```
- Generating image embeddings: 100%|██████████| 145/145 [00:03<00:00, 43.54it/s]
+ Generating image embeddings: 100%|██████████| 900/900 [00:20<00:00, 44.08it/s]
- Number of encoded images: 145
+ Number of encoded images: 900
@@ -169,10 +170,8 @@ milvus_client.insert(
```
-
-
{'insert_count': 900,
- 'ids': [451503448502042624, 451503448502042625, ..., 451503448502042768],
+ 'ids': [451537887696781312, 451537887696781313, ..., 451537887696782211],
'cost': 0}
@@ -451,8 +450,8 @@ best_img = best_img.resize((150, 150))
best_img.show()
```
- Reasons:
- The most suitable item for the user's query intent is index 6 because the instruction specifies a phone case with the theme of the image, which is a leopard. The phone case with index 6 has a thematic design resembling the leopard pattern, making it the closest match to the user's request for a phone case with the image theme.
+ Reasons: The most suitable item for the user's query intent is index 6 because the instruction specifies a phone case with the theme of the image, which is a leopard. The phone case with index 6 has a thematic design resembling the leopard pattern, making it the closest match to the user's request for a phone case with the image theme.
+
@@ -462,4 +461,4 @@ best_img.show()
### Quick Deploy
-To learn about how to start an online demo with this tutorial, please refer to [the example application](https://github.com/milvus-io/bootcamp/tree/master/bootcamp/tutorials/quickstart/apps/multimodal_rag_with_milvus).
\ No newline at end of file
+To learn about how to start an online demo with this tutorial, please refer to [the example application](https://github.com/milvus-io/bootcamp/tree/master/bootcamp/tutorials/quickstart/apps/multimodal_rag_with_milvus).