This article introduces the architecture design of tRPC-Cpp; it is written against framework v1.0.0.
The main content is divided into three parts:
- Introduce the overall architecture design and code directory structure of tRPC-Cpp, to give a general picture of the framework;
- Introduce the server and client workflows of tRPC-Cpp, to explain how the server and the client work;
- Introduce the pluggable design of tRPC-Cpp, to explain how the plugin mechanism is implemented;
The overall architecture of tRPC-Cpp is designed as follows:
The overall architecture consists of two parts: the framework core and the plugins. As shown in the figure above, the dotted box is tRPC: the red solid box in the middle is the framework core, and the blue boxes are the plugin part.
The framework core can be divided into three layers:
- Runtime: consists of the thread model and the io model, and is responsible for scheduling the framework's cpu tasks and io operations. The thread model currently supports the ordinary thread model (io/handle separated or merged) and the M:N coroutine model (fiber thread model). The io model currently supports the reactor model for network io and the asyncIO model for disk io (based on io-uring, currently only supported in the merged thread model);
- Transport: responsible for data transmission and protocol encoding/decoding. The framework currently supports tcp/udp/unix-socket and uses the tRPC protocol to carry RPC messages; other protocols can be supported through codec plugins;
- RPC: encapsulates the service and service proxy entities, provides the RPC interfaces, and supports synchronous, asynchronous, one-way, and streaming calls;
In addition, the framework provides an admin management interface that makes it convenient for users or operating platforms to manage services. The admin interface includes functions such as updating configuration, viewing the version, modifying the log level, and viewing framework runtime information; it also supports user-defined commands to meet customized needs (see the sketch below).
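For example, a user-defined admin command can be registered from the business code. The sketch below is illustrative only: it assumes an `AdminHandlerBase` extension point, a `RegisterCmd` registration call, and a hypothetical `/cmds/my_status` path; exact signatures and header paths may differ across framework versions:

```cpp
#include "trpc/admin/admin_handler.h"

// Hypothetical handler that reports a custom status field as JSON.
class MyStatusHandler : public ::trpc::AdminHandlerBase {
 public:
  MyStatusHandler() { description_ = "print my service status"; }

  void CommandHandle(::trpc::http::HttpRequestPtr req, ::rapidjson::Value& result,
                     ::rapidjson::Document::AllocatorType& alloc) override {
    result.AddMember("status", "ok", alloc);  // returned to the caller as JSON
  }
};

// Registered from the TrpcApp business subclass, e.g. inside Initialize():
//   RegisterCmd(::trpc::http::OperationType::GET, "/cmds/my_status",
//               std::make_shared<MyStatusHandler>());
```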
The main directory structure of the tRPC-Cpp framework is designed as follows:
It is mainly composed of three parts: the framework core implementation, the service governance plugin implementations, and the auxiliary tool implementations.
The implementation of framework core mainly includes the following modules:
- server: provides the service implementation, supporting multiple services per server, service registration and unregistration, graceful shutdown, etc.;
- client: provides a concurrency-safe, general-purpose client implementation responsible for rpc calls, service discovery, load balancing, circuit breaking, encoding/decoding, and custom filters; all of these parts support plugin extensions;
- codec: provides codec-related interfaces, allowing the framework to be extended with additional protocols, serialization methods, data compression methods, etc.;
- transport: provides network transmission capabilities, supporting tcp/udp/ssl/unix-socket;
- runtime: provides the framework's runtime environment, encapsulating the thread model and io model; it supports the m:n coroutine (fiber) model and the io/handle separated and merged thread models (see the fiber sketch after this list);
- filter: provides the definition of custom filters, enabling extensions such as tracing, metrics, logreplay, etc.;
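As a taste of the fiber runtime, the sketch below starts a task on a fiber and waits for it with a latch. It assumes the `StartFiberDetached` and `FiberLatch` APIs and their header paths, and it must run inside the fiber thread model:

```cpp
#include "trpc/coroutine/fiber.h"
#include "trpc/coroutine/fiber_latch.h"

// Runs a task on the fiber runtime and waits for it to finish. Wait()
// suspends only the current fiber, not the underlying OS thread.
void RunOnFiber() {
  ::trpc::FiberLatch latch(1);
  ::trpc::StartFiberDetached([&latch] {
    // ... business logic, e.g. issue concurrent downstream calls ...
    latch.CountDown();  // signal completion
  });
  latch.Wait();  // resumes after CountDown() is called
}
```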
The service governance plugin implementation mainly includes the following modules:
- naming: provides service registration (registry), service discovery (selector), load balancing (loadbalance), circuit breaking (circuitbreaker), and related capabilities, used to connect to various name service systems;
- config: provides interfaces for reading configuration, supporting local configuration files, remote configuration centers, etc.; it allows plugin extensions for different file formats and different configuration centers, and supports reloading and watching configuration updates;
- metrics: provides the ability to collect and report monitoring data; it supports common single-dimensional reporting, such as counter and gauge, as well as multi-dimensional reporting, and allows different monitoring systems to be connected by extending the Sink interface;
- logging: provides a general log collection interface, allowing the log implementation to be extended through plugins, for example to output logs to a remote system;
- tracing: provides distributed tracing capabilities, allowing data to be reported to tracing systems through plugins;
The overall interaction process of tRPC-Cpp is as follows:
The server startup process roughly includes the following processes:
- Instantiate and start the business subclass that inherits `TrpcApp`;
- Read the framework configuration file (specified by --config); this configuration includes the global, server, service, client, plugin, and other configuration information;
- Read the global configuration, then initialize and start the framework runtime, which includes the threading model, network model, etc.;
- Read the server/service and client configuration, then initialize TrpcServer and TrpcClient;
- Read the plugin configuration and complete the initialization logic of the various plugins;
- Run the `Initialize` method of the `TrpcApp` business subclass to complete the initialization of the business code, including any initialization logic of the business itself;
- Call `RegisterService` to complete service registration and start listening for network requests;
- At this point the service has started normally and waits for clients to establish connections (a minimal end-to-end sketch follows this list).
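Putting these steps together, a minimal server sketch looks like the following. It assumes a `GreeterServiceImpl` generated from a proto file and a service name that matches the `server/service` section of the framework configuration file:

```cpp
#include <memory>

#include "trpc/common/trpc_app.h"

namespace example {

class HelloWorldServer : public ::trpc::TrpcApp {
 public:
  // Called by the framework after the config, runtime, and plugins are ready.
  int Initialize() override {
    // The name must match the service name in the framework config file.
    RegisterService("trpc.test.helloworld.Greeter",
                    std::make_shared<GreeterServiceImpl>());
    return 0;
  }

  // Called during shutdown to release resources created by the business.
  void Destroy() override {}
};

}  // namespace example

int main(int argc, char** argv) {
  example::HelloWorldServer server;
  server.Main(argc, argv);  // parses --config, starts runtime and plugins
  server.Wait();            // blocks until the server exits
  return 0;
}
```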
The request processing generally includes the following processes:
- The server transport calls the Acceptor to wait for the client to establish a connection;
- The client initiates a connection request; the server transport accepts it and obtains a tcp connection;
- The server transport decides how to dispatch the connection according to the current runtime environment (whether it is the fiber runtime);
- If it is the fiber runtime, a fiber scheduling group is selected, and its Fiber Reactor processes the connection;
- If not, an io thread is selected, and its Reactor processes the connection;
- The packet-receiving logic starts. The server transport continuously reads requests according to the codec protocol, compression method, and serialization method. For each request, a msg containing the request's ServerContext is created and handed over to the upper service layer for further processing;
- The service finds the registered handler function according to the rpc name in the msg and calls it;
- Before the handler function is called, the filterchain is executed first; the rpc handler function we registered runs after the filterchain;
- Serialize, compress, and encode the response result, then return the packet to the client. The response can be returned synchronously or asynchronously;
- In the synchronous method (the default), the result is processed and returned to the client directly after the rpc handler function finishes;
- In the asynchronous method, the packet is not returned when the rpc handler function finishes; instead, the user actively calls the framework's interface to return it after the asynchronous business logic completes (see the sketch after this list);
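A minimal sketch of the asynchronous method, assuming a Greeter service generated from a proto file and a hypothetical `DoAsyncWork` helper that invokes a callback when the business logic is done; `SetResponse(false)` tells the framework not to reply automatically, and `SendUnaryResponse` returns the packet later:

```cpp
#include <functional>
#include <string>

// Hypothetical asynchronous helper: runs work elsewhere, then calls `done`.
void DoAsyncWork(const std::string& input,
                 std::function<void(const std::string&)> done);

::trpc::Status GreeterServiceImpl::SayHello(
    ::trpc::ServerContextPtr context,
    const ::trpc::test::helloworld::HelloRequest* request,
    ::trpc::test::helloworld::HelloReply* reply) {
  // Tell the framework not to send the response when this function returns.
  context->SetResponse(false);

  DoAsyncWork(request->msg(), [context](const std::string& result) {
    ::trpc::test::helloworld::HelloReply rsp;
    rsp.set_msg(result);
    // Actively return the packet to the client once the work is done.
    context->SendUnaryResponse(::trpc::kSuccStatus, rsp);
  });

  return ::trpc::kSuccStatus;
}
```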
The service exit process generally includes the following steps:
- Unregister all services from the name service to prevent future traffic from being routed to this node;
- Disable readable events on connections, including: a. disabling the readable event on the listening socket, to avoid creating new connections; b. disabling the readable event on connected sockets, to avoid receiving new requests from them;
- Wait for all requests already received by the server to be processed (to avoid waiting indefinitely, the default maximum waiting time is 5 seconds; it can be configured through server::stop_max_wait_time);
- Close the server's network connections;
- Call the `Destroy` method of the `TrpcApp` business subclass to stop dynamic resources created by the business (for example, threads started by the business);
- Stop dynamic resources created by the plugins (for example, threads started inside the plugins);
- Stop the framework's runtime environment;
- Release the resources inside the framework's runtime environment;
- Release the internal resources of TrpcServer;
- Release the internal resources of TrpcClient;
- The program exits.
The client call process generally includes the following steps:
- When sending a request, first assemble the various call parameters;
- Execute the pre-logic of the client filters;
- Find the list of ip:port endpoints for the called service name through the service discovery plugin;
- Select a suitable ip:port through the load balancing algorithm and circuit breaker policy;
- Serialize, compress, and encode the request data;
- Prepare to establish a connection to the selected ip:port; the ClientTransport first checks whether a corresponding available connection exists and creates one if not;
- After obtaining the connection, send the request data and wait for the response (in connection multiplexing mode, multiple requests may be sent concurrently over the same connection, and requests and responses are matched through seqno);
- Receive the response data, then decode, decompress, and deserialize it, and submit it to the upper layer for processing;
- Execute the post-logic of the client filters (a minimal calling sketch follows this list).
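A minimal sketch of a synchronous client call, assuming a `GreeterServiceProxy` generated from a proto file, a service name configured under the `client` section of the framework configuration, and the header paths shown:

```cpp
#include "trpc/client/make_client_context.h"
#include "trpc/client/trpc_client.h"

int CallGreeter() {
  // Obtain (or lazily create) a proxy for the target service; service
  // discovery, load balancing, and circuit breaking happen behind it.
  auto proxy =
      ::trpc::GetTrpcClient()->GetProxy<::trpc::test::helloworld::GreeterServiceProxy>(
          "trpc.test.helloworld.Greeter");

  auto context = ::trpc::MakeClientContext(proxy);
  context->SetTimeout(1000);  // per-call timeout in milliseconds

  ::trpc::test::helloworld::HelloRequest request;
  request.set_msg("hello");
  ::trpc::test::helloworld::HelloReply reply;

  // Synchronous call: encodes and sends the request, waits for the response.
  ::trpc::Status status = proxy->SayHello(context, request, &reply);
  return status.OK() ? 0 : -1;
}
```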
The pluggable architecture of tRPC is based on two mechanisms: an interface-based plugin factory and AOP-based filters.
The tRPC core framework adopts interface-based programming: framework functions are abstracted into a series of plugins that are registered in the plugin factory, and the tRPC framework connects these plugins together to assemble the complete functionality. The plugin model can be divided into the following three parts:
- Framework: the framework only defines the standard interfaces, without any plugin implementation, and is decoupled from concrete implementations;
- Plugin implementation: a plugin is realized by encapsulating a concrete system according to the framework's standard interface;
- User: business developers only need to use the plugins they need;
A typical plugin implementation that connects the framework to a name service system is as follows:
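To make the pattern concrete, here is a self-contained, simplified sketch of the interface-plus-factory idea. The types are illustrative, not the framework's actual class names: the framework defines an abstract `Selector` interface and a factory, and the plugin wraps a concrete name service behind that interface:

```cpp
#include <map>
#include <memory>
#include <string>
#include <utility>

// Framework side: standard interface only, no concrete implementation.
class Selector {
 public:
  virtual ~Selector() = default;
  virtual std::string Name() const = 0;
  // Resolve a service name to one "ip:port" endpoint.
  virtual std::string Select(const std::string& service_name) = 0;
};

// Framework side: a plugin factory keyed by plugin name.
class SelectorFactory {
 public:
  static SelectorFactory& Instance() {
    static SelectorFactory factory;
    return factory;
  }
  void Register(std::shared_ptr<Selector> selector) {
    selectors_[selector->Name()] = std::move(selector);
  }
  std::shared_ptr<Selector> Get(const std::string& name) {
    auto it = selectors_.find(name);
    return it != selectors_.end() ? it->second : nullptr;
  }

 private:
  std::map<std::string, std::shared_ptr<Selector>> selectors_;
};

// Plugin side: encapsulates a concrete name service behind the interface.
class MyNameServiceSelector : public Selector {
 public:
  std::string Name() const override { return "my_name_service"; }
  std::string Select(const std::string& service_name) override {
    // A real plugin would query the name service system here.
    return "127.0.0.1:12345";
  }
};

// User side: register the plugin once at startup; the framework then looks
// it up by the name given in the configuration.
// SelectorFactory::Instance().Register(std::make_shared<MyNameServiceSelector>());
```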
To make the framework more extensible, it supports interceptors (filters), which draw on the AOP programming idea from Java.
The specific implementation places hook points (buried points) in the framework's request processing flow and inserts a series of filters at those points, so that business-defined functions can be realized, such as metrics monitoring, log collection, tracing, and custom logic like overload protection and parameter preprocessing.
The workflow of the interceptor filter is as follows:
The ultimate purpose of the interceptor is to decouple business logic from the framework so that each can be developed cohesively; different plugin implementations can be added or replaced through configuration, without modifying the source code. A server-side filter sketch follows.
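As an illustration, the sketch below outlines a server-side filter based on the `MessageServerFilter` extension interface. The header paths and the exact filter point names are assumptions and may vary across framework versions:

```cpp
#include <string>
#include <vector>

#include "trpc/filter/filter.h"
#include "trpc/server/server_context.h"

// A simple timing filter that runs around the rpc handler invocation.
class TimingFilter : public ::trpc::MessageServerFilter {
 public:
  std::string Name() override { return "timing"; }

  // The buried points this filter subscribes to.
  std::vector<::trpc::FilterPoint> GetFilterPoint() override {
    return {::trpc::FilterPoint::SERVER_PRE_RPC_INVOKE,
            ::trpc::FilterPoint::SERVER_POST_RPC_INVOKE};
  }

  void operator()(::trpc::FilterStatus& status, ::trpc::FilterPoint point,
                  const ::trpc::ServerContextPtr& context) override {
    if (point == ::trpc::FilterPoint::SERVER_PRE_RPC_INVOKE) {
      // e.g. record a start timestamp in the context
    } else {
      // e.g. compute the elapsed time and report it via a metrics plugin
    }
    status = ::trpc::FilterStatus::CONTINUE;  // let the chain proceed
  }
};
```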