HDDS-10697. Add Page to stream Recon logs #7253
What changes were proposed in this pull request?
HDDS-10697. Add Page to stream Recon logs
Please describe your PR in detail:
Backend
We first read the log file using a RandomAccessFile, which lets us efficiently fetch blocks of data and seek quickly to any file position.
The data is read in blocks of 4096 bytes (4 KB), a common file-system block size.
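As a rough illustration (the class and method names here are hypothetical, not the PR's actual code), the block read could look like:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch, not the PR's actual code.
final class BlockReader {
  private static final int BLOCK_SIZE = 4096;

  // Seek to the requested offset and read (up to) one block.
  static byte[] readBlock(RandomAccessFile file, long offset) throws IOException {
    file.seek(offset);                 // O(1): just moves the file pointer
    byte[] buf = new byte[BLOCK_SIZE];
    int read = file.read(buf);         // may be shorter than BLOCK_SIZE near EOF
    if (read <= 0) {
      return new byte[0];              // offset at or past end of file
    }
    byte[] block = new byte[read];
    System.arraycopy(buf, 0, block, 0, read);
    return block;
  }
}
```

seek only moves the file pointer, which is what makes jumping to arbitrary offsets cheap.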
Block Data Operations:
Once a block is read, we split its data into individual lines wherever a Line Feed (newline) character occurs.
These lines are stored in a List (say, lines) that acts as a buffer for quickly fetching lines as required.
We keep an int pointer to the last line in the list that was read, and fetch new blocks based on the following conditions:
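A minimal sketch of that buffer, with hypothetical names (a real implementation also has to stitch together lines that are cut in half at block boundaries):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical line buffer; names are illustrative.
final class LineBuffer {
  private final List<String> lines = new ArrayList<>();
  private int lastReadIndex = -1;      // pointer to the last line that was read

  // Split a block on the Line Feed character and append the lines.
  void addBlock(byte[] block) {
    String text = new String(block, StandardCharsets.UTF_8);
    for (String line : text.split("\n", -1)) {  // -1 keeps a trailing partial line
      lines.add(line);
    }
  }

  boolean hasNext() {
    return lastReadIndex + 1 < lines.size();
  }

  String next() {
    return lines.get(++lastReadIndex);
  }
}
```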
Line Data Operations:
The buffered lines are then parsed into instances of Event.
An Event is the representation of a log line. Each Event has the following fields:
We also use a look-ahead, reading both the current event and the next event.
If the next event has prevLines, those lines belong to the current event: continuation lines that were read as part of the next event are really the trailing lines of the current one.
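To make the look-ahead concrete, here is a hedged sketch; offset, message, and prevLines come from this description, and anything else about the real Event class is an assumption:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Event shape: offset, message and prevLines appear in this
// description; any further fields in the real class are not shown here.
final class Event {
  long offset;                                   // where the event starts in the file
  String message = "";                           // parsed log message
  List<String> prevLines = new ArrayList<>();    // lines read ahead of the event header
}

final class LookAhead {
  // Lines captured as prevLines of the *next* event are really the trailing
  // lines of the *current* event, so fold them back into it.
  static Event merge(Event current, Event next) {
    if (next != null && !next.prevLines.isEmpty()) {
      for (String line : next.prevLines) {
        current.message += System.lineSeparator() + line;
      }
      next.prevLines.clear();
    }
    return current;
  }
}
```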
Each line is run through a parser, which creates the corresponding Event object.
The events are then added to a Deque of Events.
The deque determines where each event is added, depending on the direction in which we are requesting data:
if we are requesting data forwards, new events are added at the end;
if we are requesting data backwards, new events are added at the beginning of the queue.
The first offset is the offset of the first event in the queue, and the last offset is the offset of the last event.
This queue is then returned as part of the response.
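A sketch of the direction-aware deque (reusing the hypothetical Event from above; the FORWARD/REVERSE naming follows the frontend description below):

```java
import java.util.ArrayDeque;
import java.util.Deque;

enum Direction { FORWARD, REVERSE }

// Hypothetical window over the log, reusing the Event sketch above.
final class EventWindow {
  private final Deque<Event> events = new ArrayDeque<>();

  void add(Event event, Direction direction) {
    if (direction == Direction.FORWARD) {
      events.addLast(event);    // reading forwards: append at the end
    } else {
      events.addFirst(event);   // reading backwards: prepend at the beginning
    }
  }

  // Assumes the deque is non-empty.
  long firstOffset() { return events.getFirst().offset; }
  long lastOffset()  { return events.getLast().offset; }
}
```

An ArrayDeque gives O(1) insertion at both ends, which is why a deque rather than a plain list fits bidirectional scrolling.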
File Descriptions
LogEndpoint
This file acts as the API provider, receiving requests and sending responses.
LogFetcher/LogFetcherImpl
Takes care of fetching the actual logs and assembling the data to send as the response.
LogEventReader
Works on top of LogReader to perform event-based operations, e.g. reading a line, parsing it into an Event, and then performing the necessary operations on it, such as setting the offset and building the proper message for the event.
LogReader
Actual reader used to read the log file.
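One hedged way these pieces could compose, with all signatures assumed rather than taken from the PR:

```java
import java.io.IOException;
import java.util.Deque;

// Hypothetical layering; the names match the file descriptions above,
// but every signature is an assumption.
interface LogReader {
  String readLine() throws IOException;      // raw access to the log file
}

interface LogEventReader {
  Event readEvent() throws IOException;      // turns lines into Event instances
}

interface LogFetcher {
  // Builds the deque of events for one request window;
  // LogEndpoint exposes this over HTTP.
  Deque<Event> fetch(long offset, Direction direction, int count) throws IOException;
}
```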
Frontend
We display the data in a table and attach an IntersectionObserver to the second-to-last row, which should sit outside the table's current viewport.
Once that row scrolls into the viewport, the IntersectionObserver API detects it and we request the next set of log lines from the backend, i.e. direction FORWARD with the lastOffset field.
To fetch the previous data we track the scroll position: if the user scrolls past the top of the table's viewport, we know they are scrolling upwards and fetch new data with direction REVERSE and the firstOffset field.
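On the backend side, the requests the frontend issues could map onto an endpoint shaped roughly like this (the path and parameter names are guesses, not the PR's actual API):

```java
import java.util.Deque;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

// Hypothetical endpoint shape; the real path and parameter names may differ.
@Path("/log")
public class LogEndpointSketch {

  @GET
  @Path("/read")
  public Response read(@QueryParam("direction") Direction direction,
                       @QueryParam("offset") long offset) {
    // FORWARD passes the lastOffset of the previous response;
    // REVERSE passes the firstOffset.
    Deque<Event> events = null; // fetched via LogFetcher in the sketch above
    return Response.ok(events).build();
  }
}
```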
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-10697
How was this patch tested?
The patch was tested manually for the new changes and the UI.
Unit tests were run to verify there are no breaking changes.
!!! To Test changes locally !!!
Add the following lines to the docker-config file present under the ozone folder in the final build directory. This stops the logs from being written to the Docker STDOUT; instead they are appended to the log file under /var/log/hadoop/ in the Docker container.
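The exact lines belong to the PR and are not reproduced here; purely as an illustration, redirecting log4j output to a file via Ozone's docker-config convention (the LOG4J.PROPERTIES_ prefix) would look something like the following, where the appender name and file name are assumptions:

```
LOG4J.PROPERTIES_log4j.rootLogger=INFO, FILE
LOG4J.PROPERTIES_log4j.appender.FILE=org.apache.log4j.FileAppender
LOG4J.PROPERTIES_log4j.appender.FILE.File=/var/log/hadoop/ozone.log
LOG4J.PROPERTIES_log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
LOG4J.PROPERTIES_log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n
```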