Commit
add thai tokenizer docs
Signed-off-by: Anton Rubin <[email protected]>
AntonEliatra committed Oct 10, 2024
1 parent 76486a4 commit 2dfae7c
Showing 2 changed files with 109 additions and 1 deletion.
2 changes: 1 addition & 1 deletion _analyzers/tokenizers/index.md
@@ -2,7 +2,7 @@
layout: default
title: Tokenizers
nav_order: 60
has_children: false
has_children: true
has_toc: false
---

108 changes: 108 additions & 0 deletions _analyzers/tokenizers/thai.md
@@ -0,0 +1,108 @@
---
layout: default
title: Thai
parent: Tokenizers
nav_order: 140
---

# Thai tokenizer

The `thai` tokenizer is designed for tokenizing Thai text. Because words in Thai are not separated by spaces, the tokenizer must identify word boundaries based on language-specific rules.

## Example usage

The following example request creates a new index named `thai_index` and configures an analyzer with a `thai` tokenizer:

```json
PUT /thai_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "thai_tokenizer": {
          "type": "thai"
        }
      },
      "analyzer": {
        "thai_analyzer": {
          "type": "custom",
          "tokenizer": "thai_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "thai_analyzer"
      }
    }
  }
}
```
{% include copy-curl.html %}
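
As a quick usage check, you can index a document containing Thai text into the `content` field. This is a minimal sketch; the document ID `1` and the sample sentence are arbitrary:

```json
PUT /thai_index/_doc/1
{
  "content": "ฉันชอบไปเที่ยวที่เชียงใหม่"
}
```
{% include copy-curl.html %}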

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /thai_index/_analyze
{
"analyzer": "thai_analyzer",
"text": "ฉันชอบไปเที่ยวที่เชียงใหม่"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
"tokens": [
{
"token": "ฉัน",
"start_offset": 0,
"end_offset": 3,
"type": "word",
"position": 0
},
{
"token": "ชอบ",
"start_offset": 3,
"end_offset": 6,
"type": "word",
"position": 1
},
{
"token": "ไป",
"start_offset": 6,
"end_offset": 8,
"type": "word",
"position": 2
},
{
"token": "เที่ยว",
"start_offset": 8,
"end_offset": 14,
"type": "word",
"position": 3
},
{
"token": "ที่",
"start_offset": 14,
"end_offset": 17,
"type": "word",
"position": 4
},
{
"token": "เชียงใหม่",
"start_offset": 17,
"end_offset": 26,
"type": "word",
"position": 5
}
]
}
```
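
Because the analyzer splits the text into individual Thai words, a match query can find a document by any of those words. The following sketch assumes the sample document shown earlier was indexed and searches for `เชียงใหม่`, one of the generated tokens:

```json
GET /thai_index/_search
{
  "query": {
    "match": {
      "content": "เชียงใหม่"
    }
  }
}
```
{% include copy-curl.html %}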
