Hello Jorrit,

I need your help. I am receiving around 120K flows per second, but I am only able to parse 3K flows per second. I have a 24-core (dual-processor) physical server with 64 GB of RAM dedicated solely to Logstash and Elasticsearch, and it is running at almost 85 percent CPU utilization.

I have already mailed you my Logstash configuration, along with the pcap and template_cache files. Is there anything you can suggest to increase the flow rate?
You are nowhere close to having the resources required to collect 120K flows/sec. Depending on your exact requirements (retention period, high availability, peak vs. average rates, etc.), you will need at least an 8-12 node Elasticsearch cluster (considerably more for retention periods longer than a few days) and a similar number of dedicated Logstash nodes.
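As a rough back-of-the-envelope (the per-event size here is an assumption, just to show the scale): 120K flows/sec is about 10.4 billion events per day, and at an assumed ~300 bytes per indexed event that is roughly 3 TB of index per day before replication. That is why a cluster of that size is only the starting point.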
Even with increased resources, you will need to tune Linux for optimal UDP throughput, as well as a number of Logstash parameters.
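As a sketch (illustrative values, not a drop-in config; size the buffers and worker counts to your own traffic and hardware):

```
# /etc/sysctl.d/99-netflow.conf -- enlarge kernel UDP receive buffers so
# bursts of flow packets are not dropped before Logstash can read them
net.core.rmem_max=33554432
net.core.rmem_default=33554432
net.core.netdev_max_backlog=10000

# logstash.yml -- use all cores, and larger batches per pipeline worker
pipeline.workers: 24
pipeline.batch.size: 512
```

The UDP input also has its own worker and buffer settings:

```
input {
  udp {
    port                 => 2055        # example NetFlow port
    workers              => 4           # parallel UDP reader threads
    queue_size           => 16384       # in-memory packet queue depth
    receive_buffer_bytes => 33554432    # should match net.core.rmem_max
    codec                => netflow
  }
}
```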
Whether the data is flows, logs, or anything else, 120K events per second will require the help of someone with experience handling that volume of data.