EpochCoordinator is a ThreadSafeRpcEndpoint that tracks offsets and epochs (coordinates epochs) by handling messages (in fire-and-forget one-way and request-response two-way modes) from…FIXME

EpochCoordinator is created (using the create factory method) when ContinuousExecution is requested to run a streaming query in continuous mode.
Message | Description |
---|---|
CommitPartitionEpoch | Sent out (in one-way asynchronous mode) exclusively when ContinuousWriteRDD is requested to compute a partition (after all rows of the current epoch have been written out) |
GetCurrentEpoch | Sent out (in request-response synchronous mode) exclusively when the EpochMarkerGenerator thread is requested to run |
IncrementAndGetEpoch | Sent out (in request-response synchronous mode) exclusively when ContinuousExecution is requested to run a streaming query in continuous mode (from the separate epoch update thread) |
ReportPartitionOffset | Sent out (in one-way asynchronous mode) exclusively when ContinuousQueuedDataReader is requested for the next row and reaches an epoch boundary |
SetReaderPartitions | Sent out (in request-response synchronous mode) exclusively when DataSourceV2ScanExec leaf physical operator is requested for the input RDDs. The number of partitions is exactly the number of input partitions of the ContinuousReader |
SetWriterPartitions | Sent out (in request-response synchronous mode) exclusively when WriteToContinuousDataSourceExec leaf physical operator is requested to execute |
StopContinuousExecutionWrites | Sent out (in request-response synchronous mode) exclusively when ContinuousExecution is requested to run a streaming query in continuous mode (when the query execution is finishing) |
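As an illustration of the two modes, the following sketch shows how a sender could talk to the coordinator using the Spark-internal EpochCoordinatorRef helper. The epochCoordinatorId value is a made-up placeholder and the message classes are private to Spark, so treat this as a sketch of the flow rather than user-facing code.

```scala
import org.apache.spark.SparkEnv
import org.apache.spark.sql.execution.streaming.continuous.{
  EpochCoordinatorRef, GetCurrentEpoch, IncrementAndGetEpoch}

// Look up the coordinator's RpcEndpointRef by the coordinator id
// (a placeholder here; ContinuousExecution generates the real one per run).
val epochCoordinatorId = "epoch-coordinator-id"
val coordinator = EpochCoordinatorRef.get(epochCoordinatorId, SparkEnv.get)

// Request-response (two-way) messages go through askSync and are handled by receiveAndReply.
val currentEpoch = coordinator.askSync[Long](GetCurrentEpoch)
val nextEpoch = coordinator.askSync[Long](IncrementAndGetEpoch)

// Fire-and-forget (one-way) messages go through send and are handled by receive, e.g.
// coordinator.send(CommitPartitionEpoch(partitionId, epoch, writerCommitMessage))
// coordinator.send(ReportPartitionOffset(partitionId, epoch, partitionOffset))
```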
Tip
|
Enable ALL logging level for org.apache.spark.sql.execution.streaming.continuous.EpochCoordinator logger to see what happens inside.

Add the following line to conf/log4j.properties:

log4j.logger.org.apache.spark.sql.execution.streaming.continuous.EpochCoordinator=ALL

Refer to Logging. |
receive: PartialFunction[Any, Unit]
Note
|
receive is part of the RpcEndpoint Contract in Apache Spark to receive messages in fire-and-forget one-way mode.
|
receive handles the following messages:

* CommitPartitionEpoch
* ReportPartitionOffset

With the queryWritesStopped flag turned on, receive simply swallows messages and does nothing.
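A simplified sketch of the handler is shown below. The partitionCommits and partitionOffsets registries are assumptions about the internal bookkeeping, so this is a fragment for illustration rather than the exact Spark sources.

```scala
// Simplified sketch of the one-way message handler (a fragment of the endpoint class).
override def receive: PartialFunction[Any, Unit] = {
  // Once StopContinuousExecutionWrites has been handled, drop every message.
  case _ if queryWritesStopped => // do nothing

  case CommitPartitionEpoch(partitionId, epoch, message) =>
    // Remember the writer commit message for (epoch, partition) and try to resolve the epoch.
    partitionCommits.put((epoch, partitionId), message)
    resolveCommitsAtEpoch(epoch)

  case ReportPartitionOffset(partitionId, epoch, offset) =>
    // Remember the reported offset for (epoch, partition) and try to resolve the epoch.
    partitionOffsets.put((epoch, partitionId), offset)
    resolveCommitsAtEpoch(epoch)
}
```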
receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit]
Note
|
receiveAndReply is part of the RpcEndpoint Contract in Apache Spark to receive and reply to messages in request-response two-way mode.
|
receiveAndReply handles the following messages:

* GetCurrentEpoch
* IncrementAndGetEpoch
* SetReaderPartitions
* SetWriterPartitions
* StopContinuousExecutionWrites
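A simplified sketch of the handler is shown below. The currentDriverEpoch, numReaderPartitions, numWriterPartitions and queryWritesStopped registries are assumptions about the internal bookkeeping, so this is a fragment for illustration rather than the exact Spark sources.

```scala
// Simplified sketch of the two-way message handler (a fragment of the endpoint class).
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
  case GetCurrentEpoch =>
    // Reply with the epoch currently tracked on the driver.
    context.reply(currentDriverEpoch)

  case IncrementAndGetEpoch =>
    // Advance to the next epoch and reply with it.
    currentDriverEpoch += 1
    context.reply(currentDriverEpoch)

  case SetReaderPartitions(numPartitions) =>
    numReaderPartitions = numPartitions
    context.reply(())

  case SetWriterPartitions(numPartitions) =>
    numWriterPartitions = numPartitions
    context.reply(())

  case StopContinuousExecutionWrites =>
    // From now on, receive drops all incoming messages.
    queryWritesStopped = true
    context.reply(())
}
```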
resolveCommitsAtEpoch(epoch: Long): Unit
resolveCommitsAtEpoch…FIXME
Note
|
resolveCommitsAtEpoch is used exclusively when EpochCoordinator is requested to handle CommitPartitionEpoch and ReportPartitionOffset messages.
|
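Pending the FIXME above, a plausible shape of resolveCommitsAtEpoch is sketched below, assuming the partitionCommits and partitionOffsets registries from the receive sketch: an epoch is committed only once every writer partition has committed and every reader partition has reported its offset for that epoch.

```scala
// Sketch only: commit an epoch when all writer commits and all reader offsets are in.
private def resolveCommitsAtEpoch(epoch: Long): Unit = {
  val commitsForEpoch =
    partitionCommits.collect { case ((e, _), msg) if e == epoch => msg }
  val offsetsForEpoch =
    partitionOffsets.collect { case ((e, _), off) if e == epoch => off }

  if (commitsForEpoch.size == numWriterPartitions &&
      offsetsForEpoch.size == numReaderPartitions) {
    // Every partition is done with this epoch, so it can be committed downstream.
    commitEpoch(epoch, commitsForEpoch)
  }
}
```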
commitEpoch(
  epoch: Long,
  messages: Iterable[WriterCommitMessage]): Unit
commitEpoch…FIXME
Note
|
commitEpoch is used exclusively when EpochCoordinator is requested to resolveCommitsAtEpoch.
|
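Pending the FIXME above, a minimal sketch of commitEpoch: commit the epoch to the StreamWriter first and only then record it with the ContinuousExecution, so that a restart in between does not lose the epoch. The ordering is an assumption about the intent, not a statement of the exact sources.

```scala
// Sketch only: forward the commit to the writer, then record it in the query.
private def commitEpoch(epoch: Long, messages: Iterable[WriterCommitMessage]): Unit = {
  writer.commit(epoch, messages.toArray)
  query.commit(epoch)
}
```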
EpochCoordinator takes the following to be created:

* StreamWriter
* ContinuousReader
* ContinuousExecution
* Start epoch
* SparkSession
* RpcEnv

EpochCoordinator initializes the internal properties.
create(
  writer: StreamWriter,
  reader: ContinuousReader,
  query: ContinuousExecution,
  epochCoordinatorId: String,
  startEpoch: Long,
  session: SparkSession,
  env: SparkEnv): RpcEndpointRef
create simply creates a new EpochCoordinator and requests the RpcEnv to register an RPC endpoint as EpochCoordinator-[id] (where id is the given epochCoordinatorId).

create prints out the following INFO message to the logs:

Registered EpochCoordinator endpoint
Note
|
create is used exclusively when ContinuousExecution is requested to run a streaming query in continuous mode.
|
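A sketch of what create boils down to, mirroring the description above (the exact sources may differ slightly): instantiate the endpoint and register it with the RpcEnv under the EpochCoordinator-[id] name.

```scala
def create(
    writer: StreamWriter,
    reader: ContinuousReader,
    query: ContinuousExecution,
    epochCoordinatorId: String,
    startEpoch: Long,
    session: SparkSession,
    env: SparkEnv): RpcEndpointRef = {
  // Create the endpoint and register it with the RpcEnv under a per-query name.
  val coordinator = new EpochCoordinator(writer, reader, query, startEpoch, session, env.rpcEnv)
  val ref = env.rpcEnv.setupEndpoint(s"EpochCoordinator-$epochCoordinatorId", coordinator)
  logInfo("Registered EpochCoordinator endpoint")
  ref
}
```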
Name | Description |
---|---|
queryWritesStopped | Flag that indicates whether to drop messages (true) or not (false). Default: false. Turned on (true) exclusively when EpochCoordinator is requested to handle a StopContinuousExecutionWrites message. |