Varnam has a tokenizer that converts Malayalam (or any other Indian language) text to manglish patterns. While learning, Varnam builds a database of such `pattern => word` mappings. This makes the database huge. Varnam makes Malayalam suggestions by looking the pattern up in this database (the learnings database); if it can't find one, it uses the tokenizer to construct the word.
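For context, here is a minimal sketch of the current flow as I understand it; the table name `patterns` and the `tokenize` callback are placeholders, not the actual Varnam schema or API:

```python
import sqlite3

def suggest(pattern, learnings_db, tokenize):
    """Hypothetical sketch: look the pattern up in the learned
    pattern -> word mappings first, fall back to the tokenizer."""
    conn = sqlite3.connect(learnings_db)
    rows = conn.execute(
        "SELECT word FROM patterns WHERE pattern = ?", (pattern,)
    ).fetchall()
    conn.close()
    if rows:
        return [word for (word,) in rows]  # learned suggestions
    return [tokenize(pattern)]             # fall back to pure tokenization
```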
I want to know why this approach wasn't chosen, @navaneeth:
- No need for a `pattern => word` DB (the learnings file). Instead, we only need a word dictionary.
- Add more patterns to the VST (Varnam Symbol Table), with prioritized letters, e.g. `n => ന, ണ`. A capitalized `N` will always give `ണ`. So `pani` will give suggestions in priority order: `പനി, പണി`. Currently these two outputs are produced only if the learnings DB has `pani` assigned to both words.
- When an input, say `pani`, is given to Varnam, it should tokenize it to `പനി` and `പണി` using just the VST, and then look up the word dictionary for words starting with `പനി` and `പണി` to give additional suggestions (see the sketch after this list).
- For English words like "Cricket", tokenization will give bad results; in such cases we could perhaps keep a `pattern => word` DB like the current learnings DB.
- By doing so, the size of the learnings database can be reduced a lot.
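A rough sketch of the proposed flow, assuming a plain dictionary table `words` with a single `word` column and a hypothetical `tokenize_all` helper that returns every VST expansion of the input in priority order (neither exists in Varnam under these names):

```python
import sqlite3

def suggest_from_dictionary(pattern, dict_db, tokenize_all):
    """Hypothetical sketch: tokenize with the VST only, then
    prefix-search a plain word dictionary for each candidate."""
    candidates = tokenize_all(pattern)       # e.g. "pani" -> ["പനി", "പണി"]
    conn = sqlite3.connect(dict_db)
    suggestions = []
    for prefix in candidates:                # keep the VST priority order
        suggestions.append(prefix)           # the tokenized word itself
        rows = conn.execute(
            "SELECT word FROM words WHERE word LIKE ? || '%' ORDER BY word",
            (prefix,),
        ).fetchall()
        suggestions.extend(w for (w,) in rows if w not in suggestions)
    conn.close()
    return suggestions
```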
We can experiment with this approach. I don't remember why it wasn't chosen in the first place. Maybe because tokenizing `pani` emits more tokens, and the complexity of identifying the right pattern out of all the combinations could be slow. It would be difficult to do all that and still provide real-time suggestions matching the speed of typing.
However, we should really implement a prototype and evaluate how it performs.
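To make the speed concern concrete, here is a small illustration (not Varnam code) of how per-token alternatives multiply into combinations that would all need to be ranked:

```python
from itertools import product

# Hypothetical per-token alternatives for "pani"; the real VST entries may differ.
token_alternatives = [
    ["പ"],        # pa
    ["ന", "ണ"],   # n has two plausible consonants
    ["ി"],        # i
]

combinations = ["".join(parts) for parts in product(*token_alternatives)]
print(combinations)  # ['പനി', 'പണി'] -- 1 * 2 * 1; longer words grow multiplicatively
```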
I tried to see whether multiple suggestions for a pattern can be produced from the VST, but it's complex and I can't figure out how to do it. There is a `VARNAM_MATCH_ALL` flag for Malayalam -> manglish patterns (which is used in learning), but not for manglish -> Malayalam combinations. So this is a dead end for me.
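If the VST is the usual SQLite file, one way to prototype this without touching the match flags might be to query the symbol table directly; the `symbols` table and its `pattern`, `value1`, and `priority` columns below are assumptions about the schema and may not match the real VST:

```python
import sqlite3

def vst_values_for_pattern(vst_path, pattern):
    """Hypothetical direct query: list every value stored for a manglish
    pattern, highest priority first, bypassing the tokenizer API."""
    conn = sqlite3.connect(vst_path)
    rows = conn.execute(
        "SELECT value1 FROM symbols WHERE pattern = ? ORDER BY priority DESC",
        (pattern,),
    ).fetchall()
    conn.close()
    return [v for (v,) in rows]

# e.g. vst_values_for_pattern("ml.vst", "n") might return ["ന", "ണ"]
```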