Releases: SIDN/entrada
2.4.8
This ENTRADA release contains a compatibility fix for handling pcaps created with tcpdump >= 4.99, which added support for the Linux SLL2 link type.
#193 Added support for Linklayer type LINUX_SLL2
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.4.5
This ENTRADA release contains multiple fixes and updates.
#184 Wrong header offset when decoding pcap that uses Linux Cooked Capture (SLL)
#183 Update JDK to version 17 LTS and dependencies
#182 Update AWS libs (incl Athena)
#181 Add option to set hdfs permission for uploaded data
#180 Bump postgresql from 42.2.25 to 42.3.3
#178 HIGH/CRITICAL CVE
#176 include hadoop-client dependency in jar
#174 AWS IRSA support on EKS
#170 support prometheus micrometer
#168 AWS s3 iterator issue when more than 1000 files are present under prefix
#166 support for pcap.bz2
#162 table compaction bug when using non-standard table names
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.4.2
This ENTRADA release contains an updated version of Log4J2.
#161 Fix for CVE-2021-44228 (Log4J2 Vulnerability) bug
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.4.0
This ENTRADA release is focused on improving data processing performance.
Depending on the compression type, file size, and available CPU/RAM resources, the performance increase may be 100% or more.
#151 Add config for packet queue size
#152 Improve TCP packet decoding
#153 Fix request cache cleanup and persistence
#154 Update libraries and increase performance
#155 Add configuration option for Parquet file size and rowgroup size
#156 Add multicore processing
#157 Change TCP-handshake metric from median to avg
#158 Change default metric retention from 10s to 60s
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.1.8
This ENTRADA release fixes multiple issues.
#147 Stop PII purge and table compaction running at the same time
#145 Update outdated maven dependencies
#141 Archived data deleted prematurely
#140 Closed HDFS client connection
#139 ICMP PII-purge script invalid column bug
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.1.7
This ENTRADA release fixes an issue with OpenDNS IP address ranges.
Fixes
#137 OpenDNS resolver IP addresses URL changed
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.1.6
This ENTRADA release was made possible with the help of DNS.BE.
New features
Added columns for the IP DF (Don't Fragment) flag and the ICMP "Packet Too Big" Next-Hop MTU value.
#132 ICMP Type = 3, Code = 4 : Add support for Next-Hop MTU
#133 Add to DNS table: IPv4 DF flag support (for responses)
Fixes
#136 Ignoring oldest pcap and not newest pcap
#134 memory leak due to trailing Hadoop configuration
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.1.5
This release changes the way that newly generated Parquet files are uploaded to the database.
Before version 2.1.5, it worked like this:
Parquet files are generated and uploaded to the database after all the input files have been processed.
This means that when processing bulk data, a lot of disk space is required on the processing system and there is a long delay before data is added to the database for analysis.
Starting with version 2.1.5, it works like this:
New Parquet files are uploaded as soon as they are closed (when the maximum number of rows has been written to the file), not when all of the input PCAP data has been processed.
This prevents a large number of Parquet files from accumulating on the local system and using up storage capacity.
It also removes the long delay before Parquet data shows up in the database.
Improvement
#131 Upload parquet files at regular interval even if processing is not done yet
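For illustration only, a minimal sketch of the rolling-writer pattern described above, not ENTRADA's actual code: rows go into a local Parquet file, and as soon as an assumed maximum row count is reached the file is closed and handed to an uploader right away, instead of waiting for all pcap input to be processed. The schema, row limit, file paths and upload step are placeholder assumptions.

```java
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetWriter;

// Sketch of "upload on close": roll to a new Parquet file after MAX_ROWS rows
// and upload the closed file immediately.
public class RollingParquetWriter {

  private static final long MAX_ROWS = 1_000_000; // placeholder; configurable in ENTRADA

  // Placeholder schema; the real DNS table has many more columns.
  private static final Schema SCHEMA = SchemaBuilder.record("DnsRow")
      .fields().requiredString("qname").requiredInt("rcode").endRecord();

  private ParquetWriter<GenericRecord> writer;
  private String currentFile;
  private long rows;
  private int fileSeq;

  public void write(GenericRecord row) throws IOException {
    if (writer == null) {
      // Start a new local Parquet file for the next batch of rows.
      currentFile = "/tmp/entrada-" + (fileSeq++) + ".parquet";
      writer = AvroParquetWriter.<GenericRecord>builder(new Path(currentFile))
          .withSchema(SCHEMA)
          .build();
      rows = 0;
    }
    writer.write(row);
    if (++rows >= MAX_ROWS) {
      rollAndUpload(); // file is "full": close and upload now, don't wait for end of input
    }
  }

  public void rollAndUpload() throws IOException {
    if (writer != null) {
      writer.close();
      upload(currentFile); // hand the closed file to the uploader immediately
      writer = null;
    }
  }

  private void upload(String localFile) {
    // Hypothetical placeholder: push the file to HDFS/S3 and delete the local copy.
    System.out.println("uploading " + localFile);
  }
}
```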
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.1.4
This release contains fixes and changes to the Postgres connection pool config.
Fixes
#126 Memory leak when not closing org/apache/hadoop/fs/FileSystem.java
The internal cache of the HDFS FileSystem caused a memory leak over time because the FileSystem was not closed correctly after use; a sketch of the close-after-use pattern follows the list below.
#127 getting rate limited by AWS due to excessive GetQueryExecution calls by Simba Athena JDBC driver
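A minimal sketch of the close-after-use pattern behind #126, assumed for illustration rather than taken from ENTRADA itself: create a non-cached FileSystem instance and close it when the work is done, so instances cannot pile up behind Hadoop's internal FileSystem cache. The namenode URI and file paths are placeholders.

```java
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadSketch {

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // newInstance() bypasses Hadoop's shared FileSystem cache; the returned
    // instance must be closed by the caller, which try-with-resources guarantees.
    try (FileSystem fs = FileSystem.newInstance(URI.create("hdfs://namenode:8020"), conf)) {
      fs.copyFromLocalFile(new Path("/tmp/example.parquet"),
          new Path("/user/entrada/example.parquet"));
    } // FileSystem is closed here, even if the copy fails
  }
}
```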
Other
#129 Support for more database connection pool options
This release was made possible with the help of dns.be.
NOTE: If you are using Impala+SSL then make sure to have "IMPALA_SSL=1" in your Docker-compose config.
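A minimal sketch of what that looks like in a docker-compose file; the service name, image name and tag are assumptions, only the IMPALA_SSL=1 variable comes from this note:

```yaml
services:
  entrada:
    image: sidnlabs/entrada:2.1.4   # assumed image name/tag, see Docker Hub
    environment:
      - IMPALA_SSL=1                # enable SSL for the Impala connection
```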
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.
2.1.2
This is a fix release: it fixes a memory leak and therefore reduces memory requirements.
Fixes
#126 Memory leak when not closing org/apache/hadoop/fs/FileSystem.java
The internal cache of the HDFS FileSystem caused a memory leak over time because the FileSystem was not closed correctly after use.
Other
#123 v2.1.1 problem to connect with Hadoop
Changed the default config to not use SSL for Impala, as this is also the default config for the Cloudera Hadoop distribution.
NOTE: If you are using Impala+SSL then make sure to have "IMPALA_SSL=1" in your Docker-compose config.
Deploy ENTRADA using Docker, the image can be found on Docker Hub.
See the ENTRADA website for upgrade/installation details.