possible to filter-out some files? #165
Comments
I'd suggest managing what is logged from the rsyslog/syslog-ng configuration (which is the proper place to implement filters for what goes into the logs). Anyway, the main point is that I would rather avoid overcomplicating the logic in log2ram when it's already possible to leverage other system tools that are already available. This reduces potential maintenance issues and avoids duplicating the same type of logic in multiple places. Possible solution: |
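The original snippet is not preserved above; as a minimal sketch of that kind of filter, assuming the unwanted entries actually pass through rsyslog (the file name, the tag "noisy-app" and the pattern are placeholders):

```
# /etc/rsyslog.d/30-drop-noise.conf  (hypothetical file name)
# Discard messages from a given program before they reach any log file
:programname, isequal, "noisy-app"    stop

# Or discard by message content
:msg, contains, "debug dump"          stop
```

After editing, reload the daemon with `sudo systemctl restart rsyslog` for the rule to take effect.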
Thank you very much for your care and thorough explanation :) |
Hi Barabba, The chain to generate a log can be either:
application → log file
or
application → syslog (rsyslog/syslog-ng) → log file
In the first case it's the application's job to define both the log location and the granularity of the log. In the second case, again, the application should manage the granularity of the log and rsyslog/syslog-ng should take care of the location (and optionally of the granularity, but if possible that should be managed only by the application).
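A quick way to see the second chain in action (a sketch; the tag "my-app" is just a placeholder) is the standard `logger` utility, which hands a message to syslog and lets rsyslog decide where it lands:

```sh
# Send a test message through the syslog chain
logger -t my-app "hello from my-app"

# With a default Debian/Raspbian rsyslog configuration it ends up in /var/log/syslog
grep my-app /var/log/syslog | tail -n 1
```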
Regarding your unwanted log files: it should be quite easy to find what is creating them (from their names, *.dmp, they look more like data dumps, or at least they seem to have a different purpose than logs, so the proper place for them is probably a "backup directory" or /tmp, not with the system logs).

Concerning your question about reboot: on any standard systemd event, such as start/stop/restart, log2ram syncs the content of the ramdisk to /var/log on "/" (the SD card or whatever drive the system is installed on).

P.S.: If your target is only to save the SD by reducing the number of files written at sync time, that will not change a lot: log2ram is already saving your SD; an additional file is just a single additional write to the disk once per day (with the base configuration of log2ram). The issue with log writes to the SD is that normally there are multiple writes per second; one file written per day won't change the life span of your SD significantly, while an application like a database will do much more damage ;). |
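If you want to confirm that the sync really happens around shutdown, a sketch assuming a systemd-based log2ram install:

```sh
# Check that the service is active and look at its recent messages
systemctl status log2ram
journalctl -u log2ram --no-pager -n 20

# Stopping the service goes through the same sync-to-disk path as a reboot
sudo systemctl stop log2ram && sudo systemctl start log2ram
```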
As @xtvdata has already explained, using the syslog engine helps to filter logs. Azlux |
Thank you mates, it's much clearer now. I suggest you include this information (copying this text would be enough) in the program's documentation, so in the future there may be fewer questions like this. About the .dmp files, I have no idea which app creates them; I have Node-RED with some modules, maybe one of them, plus Python and other services installed, so it could be anything... but I understand that log2ram can't filter them out. You are Linux gurus: do you know if there is any program that keeps track of how many bytes are written to the SD and which process is generating them? Such a program should work in RAM only and must not log to the SD. In the past I've used one (I don't remember which now) that caused the opposite problem: it was logging so much that in 3 months I got a fault on the SD :( |
Hi, for your specific case, Node-RED (or better, one of its installed modules) could be responsible for those files. If your Node-RED is handling a lot of messages and you have left a debug option enabled in a module configuration (or if one of those modules is in beta and has debug turned on by default due to the development stage), it might very well generate a huge amount of logs. To find which process is responsible for disk usage you can use atop. P.S.: in addition, remember that it is very important that you also set up correct rules for logrotate. |
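A few standard options for tracking per-process disk writes (run them only for short debugging sessions, as noted further down, since these tools themselves cost resources):

```sh
# Accumulated I/O per process; only show tasks actually doing I/O (needs root)
sudo iotop -o -a

# Per-process disk read/write rates every 5 seconds (pidstat is in the sysstat package)
pidstat -d 5

# Lifetime write counters for a single process (replace 1234 with the PID you suspect)
cat /proc/1234/io
```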
Hi mate, thanks for all the info! What do you mean by "rotate"? Do you mean being backed up onto the SD every X hours, like log2ram is supposed to do? Do you mean then that only the default files without subdirectories are backed up? Strange, because I see subdirectories there... About atop: I once tried iotop and my SD got burned out in about one month. My RB3 works really quietly, there shouldn't be processes that require resources; there is only Node-RED, and verbose logs are deactivated. Anyway, this stuff writes its logs to the SD and that is pretty dangerous: it traces everything and generates lots of writes when not configured properly. I suppose I'll avoid it and focus on what I can slim down. I have another question: do you know if the Node-RED file module is always able to write to and read from /var/log? I want to implement an internal log, so I did a chmod 777 /var/log to drag a file there with the terminal; will this setting persist in the future, or after a reboot will I have 755 again? |
Hi,
logrotate is a standard Linux tool, installed in any distribution I know, which takes care of rotating logs. For each managed log file, logrotate periodically (e.g. every night or every week) renames that file, adding ".1" to the name; if a .1 file already exists, logrotate renames it to .2, and so on. Eventually, depending on the logrotate configuration, it can also gzip the copies and delete old files (you can decide how many rotated files you want to keep before deletion). See here for some more details (Ubuntu and Debian are very similar): https://www.digitalocean.com/community/tutorials/how-to-manage-logfiles-with-logrotate-on-ubuntu-16-04
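As an illustration of such a rule (the file name myapp.log and the values are placeholders, not something log2ram ships), a per-file policy usually lives in /etc/logrotate.d/:

```
# /etc/logrotate.d/myapp  (hypothetical file)
/var/log/myapp.log {
    # rotate once a week, keep 4 old copies, gzip them
    weekly
    rotate 4
    compress
    # don't fail if the file is missing, skip rotation when it is empty
    missingok
    notifempty
}
```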
"atop" and "iotop" are not tools to run continuously; they should be activated ONLY to track down the origin of issues. Besides killing the SD card, they also eat a fair amount of resources…
Node-RED file access is constrained only by file permissions. You can write or read anywhere in the file system, assuming that the user running Node-RED can access the location.

P.S.: if you want to implement some kind of customized log in Node-RED, you should also consider sending the messages to syslog via a TCP/UDP port or even a Unix socket. In this way the custom log entries would end up in /var/log/syslog, but you can also split them out to a dedicated file by configuring a one-line rule in the rsyslog configuration. This would give you a truly standard logging flow that could even be rerouted easily to a separate server used to store logs (you just need to add a forward rule to the rsyslog config and all entries will flow to the log server via the local network, though the details may vary a lot depending on the kind of log server you decide to use).

P.P.S.: burning an SD in one month seems a bit extreme… all of my SD cards have lasted several years (I still have one RPi1 with its first SD card working…), however I'm moving to SSDs as much as possible… if you write so much to the SD that you burn it, the system load will probably be quite high just from managing the I/O (see top's iowait). |
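A sketch of that one-line split rule plus the optional forward (the file names and the tag "nodered-app" are assumed placeholders, not part of log2ram or Node-RED):

```
# /etc/rsyslog.d/40-nodered.conf  (hypothetical)
# Write everything logged with the tag "nodered-app" to its own file, then stop
:programname, isequal, "nodered-app"   /var/log/nodered-app.log
& stop

# Optional: also forward all entries to a remote syslog server over UDP
#*.*   @logserver.local:514
```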
Hi, I'm trying to reduce writes to the SD, while keeping the system logging what is happening for debugging. There are some files full of zeros which are useless for me to store; can I create a list of undesired files that I don't want to save? For example *.dmp
Every time I do a sudo reboot now, does log2ram store the logs before the reboot?
Thanks