
Issue with Greek characters & searchNormalize #279

Open
nikolasr200 opened this issue Sep 5, 2023 · 7 comments
Labels
enhancement New feature or request

Comments

@nikolasr200

Hi,

I seem to be facing an issue with Greek characters and searchNormalize. Greek uses accents as well, so if I set searchNormalize to true, no results at all are returned from the list. If I set searchNormalize to false, search results are returned, as long as accented vowels are entered with their accent.

Example, given the list:

Ένα
Δύο
Τρία
Τέσσερα

If I set searchNormalize to true and search with "α", I get no results.

If I set searchNormalize to false and search with the term "α", I correctly get:

Ένα
Τρία
Τέσσερα

But with searchNormalize set to false, if I search with the term "Ε" (no accent) I only get:

Τέσσερα

and not Ένα as well.

So setting searchNormalize to true seems (in my case) to break the search completely.
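For what it's worth, here is a small sketch of what may be going on. This is my assumption, based on the commented-out `NON_WORD_REGEX = /[^\w]/g` visible in the proposals below, not the library's confirmed code path: `\w` only matches ASCII word characters, so a `\w`-based strip deletes Greek letters entirely, while NFD decomposition plus removal of only the combining marks keeps them.

```javascript
// Hypothetical reproduction of two normalization strategies (not the library's actual code).

// Strategy 1: strip everything that is not an ASCII word character (\w).
// Greek letters are deleted, so every option normalizes to "" and nothing matches.
function normalizeAsciiOnly(text) {
  return text.toLowerCase().normalize('NFD').replace(/[^\w]/g, '');
}

// Strategy 2: NFD-decompose, then remove only combining marks (U+0300..U+036F).
// Accents are dropped but the Greek base letters survive.
function normalizeCombiningMarks(text) {
  return text.toLowerCase().normalize('NFD').replace(/[\u0300-\u036f]/g, '');
}

const options = ['Ένα', 'Δύο', 'Τρία', 'Τέσσερα'];

console.log(options.map(normalizeAsciiOnly));      // ['', '', '', ''] — nothing left to match
console.log(options.map(normalizeCombiningMarks)); // ['ενα', 'δυο', 'τρια', 'τεσσερα']

// With strategy 2, both "α" and the unaccented "Ε" match as expected:
console.log(options.filter(o => normalizeCombiningMarks(o).includes('α')));
console.log(options.filter(o => normalizeCombiningMarks(o).includes(normalizeCombiningMarks('Ε'))));
```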

@gnbm gnbm added the enhancement New feature or request label Sep 6, 2023
@gnbm
Collaborator

gnbm commented Sep 12, 2023

Hello @nikolasr200
I've run a couple of tests changing the function normalizeString to:

static normalizeString(text) {
    // Matches Unicode combining diacritical marks (what NFD splits accents into)
    const NON_WORD_REGEX = /[\u0300-\u036f]/g;

    // Normalize the text to lowercase and remove combining accents
    let normalizedText = text
        .toLowerCase()
        .normalize('NFD')
        .replace(NON_WORD_REGEX, '');

    // Define a mapping of characters to remove accents for
    const accentMappings = {
        'a': ['à', 'á', 'â', 'ã', 'ä', 'å'],
        'e': ['è', 'é', 'ê', 'ë', 'έ', 'ε'],
        'i': ['ì', 'í', 'î', 'ï'],
        'o': ['ò', 'ó', 'ô', 'õ', 'ö'],
        'u': ['ù', 'ú', 'û', 'ü'],
        'c': ['ç'],
        'g': ['ğ'],
        'n': ['ñ'],
    };

    // Replace accented characters with their non-accented equivalents
    for (const baseChar in accentMappings) {
        const accentedChars = accentMappings[baseChar].join('');
        const regex = new RegExp(`[${accentedChars}]`, 'g');
        normalizedText = normalizedText.replace(regex, baseChar);
    }

    return normalizedText;
}
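For a quick sanity check, this version can be exercised as a standalone function against the strings from the issue (the wrapper function and option list below are just test scaffolding, not library code):

```javascript
// Standalone copy of the proposed normalizeString, for testing outside the class
function normalizeString(text) {
  const NON_WORD_REGEX = /[\u0300-\u036f]/g; // combining diacritical marks
  let normalizedText = text.toLowerCase().normalize('NFD').replace(NON_WORD_REGEX, '');
  const accentMappings = {
    'a': ['à', 'á', 'â', 'ã', 'ä', 'å'],
    'e': ['è', 'é', 'ê', 'ë', 'έ', 'ε'],
    'i': ['ì', 'í', 'î', 'ï'],
    'o': ['ò', 'ó', 'ô', 'õ', 'ö'],
    'u': ['ù', 'ú', 'û', 'ü'],
    'c': ['ç'], 'g': ['ğ'], 'n': ['ñ'],
  };
  for (const baseChar in accentMappings) {
    const regex = new RegExp(`[${accentMappings[baseChar].join('')}]`, 'g');
    normalizedText = normalizedText.replace(regex, baseChar);
  }
  return normalizedText;
}

const options = ['Ένα', 'Δύο', 'Τρία', 'Τέσσερα'];

// Searching "α" matches the three options containing α
console.log(options.filter(o => normalizeString(o).includes(normalizeString('α'))));
// Searching unaccented "Ε" now also matches Ένα (both ε and έ map to "e")
console.log(options.filter(o => normalizeString(o).includes(normalizeString('Ε'))));
```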

Another alternative would be:

static normalizeString(text) {
    // const NON_WORD_REGEX = /[^\w]/g;
    // return text.normalize("NFD").replace(NON_WORD_REGEX, "");

    // Note: this strips every character outside a-z, 0-9 and whitespace, so any
    // letter not covered by accentMappings (e.g. most Greek letters) is removed
    const NON_WORD_REGEX = /[^a-z0-9\s]/g;

    // Define a mapping of accented characters to their base equivalents
    const accentMappings = {
        'à': 'a', 'á': 'a', 'â': 'a', 'ã': 'a', 'ä': 'a', 'å': 'a',
        'è': 'e', 'é': 'e', 'ê': 'e', 'ë': 'e', 'έ': 'e', 'ε': 'e',
        'ì': 'i', 'í': 'i', 'î': 'i', 'ï': 'i',
        'ò': 'o', 'ó': 'o', 'ô': 'o', 'õ': 'o', 'ö': 'o',
        'ù': 'u', 'ú': 'u', 'û': 'u', 'ü': 'u',
        'ç': 'c',
        'ğ': 'g',
        'ñ': 'n',
    };

    // Create a regex pattern to match the accented characters
    // (note the template-literal backticks — without them this line is a syntax error)
    const accentPattern = new RegExp(`[${Object.keys(accentMappings).join('')}]`, 'g');

    // Normalize the text to lowercase, replace accents, then drop non-word characters
    const normalizedText = text
        .toLowerCase()
        .normalize('NFD')
        .replace(accentPattern, char => accentMappings[char] || char)
        .replace(NON_WORD_REGEX, '');

    return normalizedText;
}

Could you run some tests on top of this to make sure it won't break the current use cases and that it fulfils the requirements you mentioned? I'm also worried about performance in these scenarios.

cc: @sa-si-dev

@abenhamdine
Contributor

abenhamdine commented Sep 12, 2023

IMHO, it would probably be a good idea to allow passing a searchCompareValues function, to be able to fully customize the search.
That way one could implement fuzzy search, custom normalization, etc.
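Purely as an illustration of that idea (the name searchCompareValues, its signature, and the surrounding wrapper are all hypothetical — nothing like this exists in the library today):

```javascript
// Hypothetical default matcher: lowercase, NFD-decompose, strip combining marks,
// then do a plain substring check.
function defaultCompare(optionText, searchTerm) {
  const normalize = s => s.toLowerCase().normalize('NFD').replace(/[\u0300-\u036f]/g, '');
  return normalize(optionText).includes(normalize(searchTerm));
}

// The library would call the user-supplied hook (if provided) instead of its
// built-in matcher, so callers could plug in fuzzy search, Greek-aware
// normalization, a third-party diacritics library, etc.
function matchOptions(options, searchTerm, compare = defaultCompare) {
  return options.filter(option => compare(option, searchTerm));
}

console.log(matchOptions(['Ένα', 'Δύο', 'Τρία', 'Τέσσερα'], 'Ε'));
```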

@gnbm
Collaborator

gnbm commented Sep 12, 2023

IMHO, it would probably be a good idea to allow passing a searchCompareValues function, to be able to fully customize the search. That way one could implement fuzzy search, custom normalization, etc.

I personally see that as a new feature (not least to stay backwards compatible); here I'd just try to cover new use cases without further impact.
But feel free to propose that via a PR.

@gnbm
Collaborator

gnbm commented Sep 22, 2023

Hi @nikolasr200
Did you get the chance to test the alternatives I suggested?
I would like more input before moving forward with a PR.
Cheers

@nikolasr200
Author

Hi @gnbm, sorry, I haven't had the time to run any tests so far. I believe your approach is in the right direction, and I'll try to test it ASAP.

@abenhamdine
Contributor

I think it's quite complex to handle all diacritics, and it would be better not to reinvent the wheel: for example, this package is well maintained and tested: https://github.com/motss/normalize-diacritics
That's why I would prefer to be able to apply a custom normalization function.

@gnbm
Collaborator

gnbm commented Sep 22, 2023

I think it's quite complex to handle all diacritics, and it would be better not to reinvent the wheel: for example, this package is well maintained and tested: https://github.com/motss/normalize-diacritics That's why I would prefer to be able to apply a custom normalization function.

Feel free to add that to the library via a PR so that we can avoid breaking changes, and @sa-si-dev might be able to review it.
