An Elasticsearch logging flow built from a small client library that talks to a lightweight H3 server, which runs pino with the pino-elasticsearch transport
Logger.PoC.mp4
- This is an intermediate solution to a logging problem we're facing.
- The service can be scaled in as many instances as needed.
- If it becomes a bottleneck, we can move the backpressure from the application to a broker using `pino-mq` or `pino-kafka`
- The goal is for the client and its API to remain unchanged, so we can later swap the logging transport for whatever we want
See the console logger in action live on StackBlitz (open the page, then the console)
- Install the client from npm with `npm install pino-logger-client --save`
- Import it with `import { Logger } from 'pino-logger-client'`
- Instantiate and configure the logger class with `const logger = new Logger(API_URL, LoggerName);`, where `LoggerName` is the optional name that will appear in the logs
- Use its methods: `logger.info|warn|error|success('message')`
- The logger also registers some global error listeners, which can be unregistered with `logger.unregisterListeners()`
- Preferably, run the backend somewhere in your infrastructure and configure the logger with its URL
- When `process.env.NODE_ENV` is not `production`, any log will also appear in the browser console
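To make the client behavior above concrete, here is a hypothetical sketch of what a client like this might do internally: format a log entry, mirror it to the console outside production, and POST it to the API. The class and field names here are illustrative, not the library's actual internals.

```typescript
// Hypothetical sketch of a minimal console-and-HTTP logger client.
type Level = 'info' | 'warn' | 'error' | 'success';

interface LogEntry {
  level: Level;
  name: string;
  message: string;
  time: number;
}

class SketchLogger {
  constructor(private apiUrl: string, private name = 'default') {}

  private async send(level: Level, message: string): Promise<LogEntry> {
    const entry: LogEntry = { level, name: this.name, message, time: Date.now() };
    // Mirror to the browser/Node console outside production.
    if (process.env.NODE_ENV !== 'production') {
      console.log(`[${entry.name}] ${level}: ${message}`);
    }
    try {
      // Ship the entry to the H3 backend; endpoint shape is assumed.
      await fetch(this.apiUrl, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(entry),
      });
    } catch {
      // Swallow network errors so logging never crashes the app.
    }
    return entry;
  }

  info(msg: string) { return this.send('info', msg); }
  warn(msg: string) { return this.send('warn', msg); }
  error(msg: string) { return this.send('error', msg); }
  success(msg: string) { return this.send('success', msg); }
}
```

Usage mirrors the real library's documented surface: `const logger = new SketchLogger(API_URL, 'my-app'); logger.info('started');`.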
The prerequisites are a properly set up Node and Docker environment.
- `cd api && npm run dev-stack` will start the H3 API server along with Elasticsearch and Kibana instances
- In a different terminal, `cd client && npm run example-prod` will build the client with `microbundle` and serve the example `index.html` file