
ENOTEMPTY: directory not empty occurs when installing models #222

Open
wacastel opened this issue Mar 22, 2023 · 5 comments
wacastel commented Mar 22, 2023

The following error occurs when attempting to install the llama and alpaca models on Apple Silicon running macOS Ventura:

ERROR [Error: ENOTEMPTY: directory not empty, rename '/Users/<HOME_DIR>/dalai/tmp/models' -> '/Users/<HOME_DIR>/dalai/llama/models'] {
  errno: -66,
  code: 'ENOTEMPTY',
  syscall: 'rename',
  path: '/Users/<HOME_DIR>/dalai/tmp/models',
  dest: '/Users/<HOME_DIR>/dalai/llama/models'
}

The fix is to remove only the original models path before the rename, not the entire engine.home directory.


tleers commented Mar 22, 2023

Can you try #223?


luisee commented Mar 22, 2023

> Can you try #223?

Hello, I updated Dalai with npx [email protected] setup but I still get the same error.


hatlem commented Mar 22, 2023

> Can you try #223?

> Hello, I updated Dalai with npx [email protected] setup but I still get the same error.

You are not alone. I have tried different things for hours and am still getting the same errors.

wacastel (Author) commented

I just tried my fix on a clean install and it worked. I'll keep investigating and see if I can come up with a fix. Appreciate the feedback.


luisee commented Mar 22, 2023

> Can you try #223?

> Hello, I updated Dalai with npx [email protected] setup but I still get the same error.

> You are not alone. I have tried different things for hours and am still getting the same errors.

The problem is that even after upgrading with npx [email protected] setup, npx keeps running the old cached version.

You can test by running npx dalai@latest llama install 7B or npx [email protected] llama install 7B
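If npx keeps picking up a stale copy, clearing its package cache forces a fresh fetch of the pinned version. A minimal sketch, assuming the cache lives under ~/.npm/_npx (npm 7+); it uses a throwaway HOME so nothing real is touched:

```shell
# Hypothetical illustration: point HOME at a temp dir and fake a stale
# npx cache entry so the cleanup step can be shown safely.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/.npm/_npx/stale-dalai"

# The actual cache-clearing step: remove npx's package cache so the next
# `npx [email protected] ...` run downloads the pinned version instead of
# reusing the stale cached copy.
rm -rf "$HOME/.npm/_npx"

[ -d "$HOME/.npm/_npx" ] && echo "cache still present" || echo "cache cleared"
```

After clearing the cache, re-run the pinned install command from the comment above.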

mirroredkube pushed a commit to mirroredkube/dalai that referenced this issue Mar 26, 2023
* Improved quantize script

I improved the quantize script by adding error handling and allowing multiple models to be selected for quantization at once on the command line. I also converted it to Python for generality and extensibility.

* Fixes and improvements based on Matt's observations

Fixed and improved many things in the script based on the reviews by @mattsta. The parallelization suggestion is still to be revised, but code for it was added (commented out).

* Small fixes to the previous commit

* Corrected to use the original glob pattern

The original Bash script uses a glob pattern to match files that have endings such as ...bin.0, ...bin.1, etc. That has been translated correctly to Python now.

* Added support for Windows and updated README to use this script

New code to set the name of the quantize script binary depending on the platform has been added (quantize.exe if working on Windows) and the README.md file has been updated to use this script instead of the Bash one.

* Fixed a typo and removed shell=True in the subprocess.run call

Fixed a typo in the new filenames of the quantized models and removed the shell=True parameter from the subprocess.run call, as it was conflicting with the list of arguments.

* Corrected previous commit

* Small tweak: changed the name of the program in argparse

This was making the automatic help message suggest the program's usage as literally "$ Quantization Script [arguments]". It should now read something like "$ python3 quantize.py [arguments]".
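For reference, the usage string argparse prints is controlled by the prog parameter of ArgumentParser. A minimal sketch of the tweak described in that commit; the parser below is illustrative, not the actual quantize.py:

```python
import argparse

# Setting prog controls the program name shown in usage/help text.
# Without it, argparse falls back to sys.argv[0], which is how the odd
# "Quantization Script [arguments]" usage line could appear.
parser = argparse.ArgumentParser(
    prog="quantize.py",
    description="Quantize one or more model files.",
)
parser.add_argument("models", nargs="+", help="model files to quantize")

print(parser.format_usage().strip())
```

Running the script then shows a usage line beginning with "usage: quantize.py" instead of the old placeholder name.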