From 70c122b7b5072541f00f8a44ec394c491d89d3bc Mon Sep 17 00:00:00 2001
From: Fanit Kolchina
Date: Thu, 5 Dec 2024 13:57:09 -0500
Subject: [PATCH] Doc review

Signed-off-by: Fanit Kolchina
---
 _analyzers/tokenizers/thai.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/_analyzers/tokenizers/thai.md b/_analyzers/tokenizers/thai.md
index 4999c420b4..d1d83ff07e 100644
--- a/_analyzers/tokenizers/thai.md
+++ b/_analyzers/tokenizers/thai.md
@@ -7,13 +7,13 @@ nav_order: 140
 
 # Thai tokenizer
 
-The `thai` tokenizer is designed for tokenizing Thai language text. As words in Thai language are not separated by spaces, the tokenizer must identify word boundaries based on language-specific rules.
+The `thai` tokenizer is designed for tokenizing Thai language text. Because words in Thai language are not separated by spaces, the tokenizer must identify word boundaries based on language-specific rules.
 
 ## Example usage
 
-The following example request creates a new index named `thai_index` and configures an analyzer with `thai` tokenizer:
+The following example request creates a new index named `thai_index` and configures an analyzer with a `thai` tokenizer:
 
-```
+```json
 PUT /thai_index
 {
   "settings": {
@@ -45,7 +45,7 @@ PUT /thai_index
 
 ## Generated tokens
 
-Use the following request to examine the tokens generated using the created analyzer:
+Use the following request to examine the tokens generated using the analyzer:
 
 ```json
 POST /thai_index/_analyze