Is it necessary to store file content in db.json for large blog? #3271
Yeah @ahuigo, that is an ongoing discussion: what to do with large sites. Any ideas? Just keep it in memory, or what?
Some ideas about decreasing the build time of Hexo. For example, instead of a full `hexo g`, a pre-commit hook could rebuild only the posts that were added, changed, or deleted since the last build (sample output of the `git diff-index` command is shown after the script):

```python
from subprocess import getoutput

# db.json layout: {build_meta: {'last_time': '2018-09-29...'}, files_meta: {...}}
dbinfo = parse('db.json')

cmd = 'git diff-index --cached --name-status --diff-filter=ACMRD HEAD -- ./_posts'
output = getoutput(cmd).strip()
if output:
    # Split the staged changes into modified files and deleted files
    modified_blogs = {}
    delete_blogs = []
    for line in output.split('\n'):
        status, path = line.split('\t')
        if status == 'D':
            delete_blogs.append(path)
            continue
        blog = parseBlog(path)
        modified_blogs[path] = blog['meta']

    # Deleted files: remove the generated HTML and their tag/category entries
    for path in delete_blogs:
        file_meta = dbinfo['files_meta'].get(path)
        if file_meta is None:
            continue
        getoutput(f'rm public/{path}.html')
        hexo_delete_tags(file_meta)
        hexo_delete_category(file_meta)

    # Added & updated files: regenerate only these (incremental building)
    for path, file_meta in modified_blogs.items():
        hexo_generate_html(path)
        hexo_add_update_tags(file_meta)
        hexo_add_update_category(file_meta)

    # Save the updated metadata back to db.json
    hexo_update_db('db.json', modified_blogs, delete_blogs)
```
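For context on the parsing step above: `git diff-index --name-status` prints one change per line as a status letter, a tab, and the path; renames and copies (status `R`/`C`) carry a similarity score and two paths, so a plain two-field split can fail on them. A minimal, Hexo-independent sketch of that parsing:

```python
from subprocess import getoutput

# Example output of `git diff-index --cached --name-status HEAD -- ./_posts`
# (fields are tab-separated in the real output):
#   A     _posts/new-post.md
#   M     _posts/edited-post.md
#   D     _posts/removed-post.md
#   R100  _posts/old-name.md    _posts/new-name.md   (renames report two paths)
out = getoutput('git diff-index --cached --name-status HEAD -- ./_posts').strip()
for line in out.splitlines():
    if not line:
        continue
    fields = line.split('\t')
    status, path = fields[0][0], fields[-1]  # 'R100' -> 'R'; take the new path for renames
    print(status, path)
```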
I've written a script to generate a static blog (it's only for my own use, not for Hexo): https://github.com/ahuigo/a/blob/master/tool/pre-commit
See also hexojs/warehouse#13
I'll close this issue, because the major performance overhead of Hexo is not reading or writing. See #2579 (comment)
I have nearly 800 markdown files, and that makes db.json grow to about 20 MB. I don't think it is necessary to store content within db.json.
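To check where those 20 MB actually go, one could break db.json down by collection. A minimal sketch, assuming db.json follows the warehouse layout with a top-level `models` object mapping collection names (Post, Page, Cache, ...) to arrays of documents; the exact schema varies between Hexo/warehouse versions, so treat this purely as illustrative:

```python
import json

with open('db.json', encoding='utf-8') as f:
    db = json.load(f)

# Assumed layout: {"meta": {...}, "models": {"Post": [...], "Cache": [...], ...}}
for name, docs in db.get('models', {}).items():
    size_mb = len(json.dumps(docs)) / 1e6
    print(f'{name:<12} {len(docs):>5} docs  {size_mb:8.2f} MB')
```

If the post collection dominates because it embeds rendered content, that would support keeping only metadata (plus a cache key) in db.json rather than the full content.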