To see the biggest differences please consult the changelog.
- #7116 Improve missing Flink Kafka Source / Sink TypeInformation
  - We dropped support for the old `ConsumerRecord` constructor supported by Flink 1.14 / 1.15
  - If you used Kafka source/sink components in your scenarios, the state of these scenarios won't be restored
- Button name for 'test adhoc' was renamed from `test-with-form` to `adhoc-testing`. If you are using a custom button config, remember to update the button type to `type: "adhoc-testing"` in `processToolbarConfig`.
- Scenario Activity audit log is available
  - logger name: `scenario-activity-audit`; it is optional and does not have to be configured. It uses the MDC context. Example configuration in `logback.xml`:

    ```xml
    <logger name="scenario-activity-audit" level="INFO" additivity="false">
        <appender-ref ref="STDOUT_FOR_SCENARIO_ACTIVITY_AUDIT"/>
    </logger>
    <appender name="STDOUT_FOR_SCENARIO_ACTIVITY_AUDIT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - [scenarioId=%X{scenarioId}][version=%X{scenarioVersionId}][user=%X{username}] %msg%n</Pattern>
        </encoder>
    </appender>
    ```
- #6979 Add `type: "activities-panel"` to the `processToolbarConfig`, which replaces the removed `{ type: "versions-panel" }`, `{ type: "comments-panel" }` and `{ type: "attachments-panel" }`
- #6971 `DeploymentManagerDependencies` API changes:
  - Added field `scenarioActivityManager: ScenarioActivityManager`
    - `scenarioActivityManager` can be used by any `DeploymentManager` to save scenario activities in the Nu database
    - there is a `NoOpScenarioActivityManager` implementation available (if needed for tests etc.)
- #6971 `DeploymentManager` compatible API changes:
  - `DeploymentManager` may now optionally extend `ManagerSpecificScenarioActivitiesStoredByManager`
  - managers extending that trait may internally handle some manager-specific `ScenarioActivities` and may be queried about those custom activities
- #6695 `SingleTypingResult` API changes:
  - Renamed `objType` to `runtimeObjType`, which indicates the current object at runtime.
- #6766 Process API changes:
  - Field `ScenarioWithDetails.labels` was added
  - Field `ScenarioWithDetails.tags` was removed (it had the same value as `labels` and was not used)
- #6988 Removed unused API classes: `MultiMap`, `TimestampedEvictableStateFunction`. `MultiMap` was incorrectly handled by Flink's default Kryo serializer, so if you want to copy it to your code, you should write and register a proper serializer.
- #7162 When a component declares that it requires a parameter with either the `SpelTemplateParameterEditor` or `SqlParameterEditor` editor, then at runtime the expression evaluation result will use the new `TemplateEvaluationResult` class instead of `String`. To access the previous `String`, use the `TemplateEvaluationResult.renderedTemplate` method.
- #6944 New endpoint `/api/scenarioTesting/{scenarioName}/adhoc/validate`
- #6766
  - Process API changes:
    - PUT `/api/processes/{processName}` - optional `scenarioLabels` field added
  - Migration API changes:
    - POST `/api/migrate` supports v2 request format (with `scenarioLabels` field)
- #7021 Definitions API changes:
  - GET `/api/processDefinitionData/*`
    - added optional query param `enrichedWithUiConfig`
    - added `requiredParam` property to the response for parameter config at `components['component-id'].parameters[*]`
- #6958 Added a message size limit in the "Kafka" exceptionHandler: `maxMessageBytes`. Its default value reflects Kafka's default size limit of 1 MB (`max.message.bytes`); you need to increase it if your error topic allows for larger messages. Remember to add some margin for Kafka protocol overhead (100 bytes should be enough).
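A corresponding configuration entry might look like the sketch below. Only `maxMessageBytes` and Kafka's 1 MB default come from the note above; the `exceptionHandler` block layout, the `type` value and the topic name are illustrative assumptions, so check your model configuration for the exact structure:

```hocon
# Sketch only: raising the error-topic message size limit.
# Assumes the error topic itself allows 5 MB (max.message.bytes = 5242880).
exceptionHandler {
  type: Kafka                # assumed handler type name
  topic: "my-errors-topic"   # hypothetical topic name
  maxMessageBytes: 5242780   # 5 MB minus ~100 B margin for protocol overhead
}
```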
- #6692 Kryo serializers for `UnmodifiableCollection`, `scala.Product` etc. are registered based on the class of the Serializer instead of an instance of the Serializer. If you have values that were serialized by these Serializers in some state, the state won't be restored after the upgrade.
- #6952 Improvement: TypeInformation support for `scala.Option`: if you used `CaseClassTypeInfoFactory` with case classes that contain the `Option` type, the state won't be restored after the upgrade.
- #6805 Updated Flink 1.18.1 -> 1.19.1. Due to backwards incompatible changes in this Flink version, Nussknacker 1.18 will not work with Flink versions pre-1.19 out of the box. If you want to keep using Flink pre-1.19 with the current Nussknacker, please refer to the compatibility-providing plugins in https://github.com/TouK/nussknacker-flink-compatibility.
- #6912 Improvement: Make `TimeMeasuringService` usable with other `Lifecycle` traits
  - Services that use `TimeMeasuringService` must be rebuilt
- Performance optimizations:
  - #7058 Add missing Flink TypeInformation for better serialization
    - In case of using base (bounded and unbounded) Flink components, the state will probably not be compatible
    - `FlinkCustomNodeContext.typeInformationDetection` has been removed, please use `TypeInformationDetection.instance` instead
    - `FlinkCustomNodeContext.forCustomContext` has been removed, please use `TypeInformationDetection.instance.forValueWithContext` instead
  - #7097 Flink base types registration mechanism
    - In case of using the types `java.time.LocalDate`, `java.time.LocalTime`, `java.time.LocalDateTime` with the `CaseClassTypeInfo` mechanism, the state will probably be lost
- #7113 Scala 2.13 was updated to 2.13.15; you should update your `flink-scala-2.13` to 1.1.2
- #7187 JSON decoding in the `request` source (request-response processing mode) and in the `kafka` source (streaming processing mode): for small decimal numbers, either `Integer` or `Long` (depending on the number size) is used instead of `BigDecimal`. This change should be transparent in most cases, as this value was mostly used after a `#CONV.toNumber()` invocation, which will still return a `Number`.
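The "smallest type that fits" idea behind this decoding change can be illustrated standalone with plain JDK classes. This is an illustrative sketch, not Nussknacker's actual decoder code:

```java
// Illustrative sketch: prefer Integer, then Long, falling back to BigDecimal,
// mirroring how the sources now decode small whole numbers.
import java.math.BigDecimal;

public class NumberDecoding {
    static Number decode(String raw) {
        BigDecimal d = new BigDecimal(raw);
        if (d.scale() <= 0) { // whole number
            try {
                return d.intValueExact();  // fits in an Integer
            } catch (ArithmeticException tooBigForInt) { /* fall through */ }
            try {
                return d.longValueExact(); // fits in a Long
            } catch (ArithmeticException tooBigForLong) { /* fall through */ }
        }
        return d; // fractions and huge numbers stay BigDecimal
    }

    public static void main(String[] args) {
        System.out.println(decode("42").getClass().getSimpleName());
        System.out.println(decode("5000000000").getClass().getSimpleName());
        System.out.println(decode("3.14").getClass().getSimpleName());
    }
}
```

All three results are still `Number`s, which is why code calling `#CONV.toNumber()` keeps working.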
- #6248 Removed the implicit conversion from string to SpeL expression (`pl.touk.nussknacker.engine.spel.Implicits`). The conversion should be replaced by `pl.touk.nussknacker.engine.spel.SpelExtension.SpelExpresion.spel`.
- #6282 If you relied on the default value of the `topicsExistenceValidationConfig.enabled` setting, be aware that topics will now be validated by default (Kafka's `auto.create.topics.enable` setting is only considered in case of Sinks). Create proper topics manually if needed.
- Component API changes:
  - #6711 `SingleComponentConfig` changed to `ComponentConfig` for better domain naming. Associated functions and objects also changed to `...ComponentConfig...`.
  - #6418 Improvement: Pass implicit nodeId to `EagerServiceWithStaticParameters.returnType`
    - Now the `returnType` method from `EagerServiceWithStaticParameters` requires an implicit nodeId param
  - #6462 `CustomStreamTransformer.canHaveManyInputs` field was removed. You don't need to implement any other method in replacement, just remove this method.
  - #6340 `TestRecordParser` trait used in the `SourceTestSupport` trait changed to work on lists instead of single records - its `parse` method now takes `List[TestRecord]` instead of a single `TestRecord` and returns a list of results instead of a single result.
  - #6520 `ExplicitTypeInformationSource` trait was removed - now `TypeInformation` produced by a `SourceFunction` passed to `StreamExecutionEnvironment.addSource` is detected based on `TypingResult` (thanks to `TypeInformationDetection`)
    - `BlockingQueueSource.create` takes a `ClassTag` implicit parameter instead of `TypeInformation`
    - `EmitWatermarkAfterEachElementCollectionSource.create` takes a `ClassTag` implicit parameter instead of `TypeInformation`
    - `CollectionSource`'s `TypeInformation` implicit parameter was removed
    - `EmptySource`'s `TypeInformation` implicit parameter was removed
  - #6545 `FlinkSink.prepareTestValue` was replaced by `prepareTestValueFunction` - a parameterless method returning a function. Thanks to that, `FlinkSink` is not serialized during test data preparation.
- #6711 `TypingResult` API changes:
  - #6436 Changes to `TypingResult` of SpEL expressions that are maps or lists:
    - `TypedObjectTypingResult.valueOpt` now returns a `java.util.Map` instead of `scala.collection.immutable.Map`
      - NOTE: selection (`.?`) or operations from the `#COLLECTIONS` helper cause the map to lose track of its keys/values, reverting its `fields` to an empty Map
    - SpEL list expressions are now typed as `TypedObjectWithValue`, with the `underlying` `TypedClass` equal to the `TypedClass` before this change, and with `value` equal to a `java.util.List` of the elements' values.
      - NOTE: selection (`.?`), projection (`.!`) or operations from the `#COLLECTIONS` helper cause the list to lose track of its values, reverting it to a value-less `TypedClass` like before the change
  - #6566 `TypedObjectTypingResult.fields` are backed by a `ListMap` to keep the correct `RowTypeInfo` fields order. If #5457 migrations were applied, it should be a transparent change
    - Removed deprecated `TypedObjectTypingResult.apply` methods - the `Typed.record` factory method should be used instead
    - the `Typed.record` factory method takes `Iterable` instead of `Map`
  - #6570 `TypingResult.canBeSubclassOf` generic parameter checking related changes. Generic parameters of `Typed[java.util.Map[X, Y]]`, `Typed[java.util.List[X]]`, `Typed[Array[X]]` were checked as if they were either covariant or contravariant. Now they are checked more strictly - depending on the collection characteristic:
    - the `Key` parameter of `Typed[java.util.Map[Key, Value]]` is treated as invariant
    - the `Value` parameter of `Typed[java.util.Map[Key, Value]]` is treated as covariant
    - the `Element` parameter of `Typed[java.util.List[Element]]` is treated as covariant
    - the `Element` parameter of `Typed[Array[Element]]` is treated as covariant
- #6503 `FlinkTestScenarioRunner` cleanups:
  - `runWithDataAndTimestampAssigner` method was removed. Instead, `timestampAssigner` was added as an optional parameter to `runWithData`
  - new `runWithDataWithType` was added, allowing tests to use types other than classes, e.g. records
- #6567 Removed the ability to set Flink's execution mode in sources (`TableSource`, `CollectionSource`) and in the `FlinkTestScenarioRunner.runWithData` method. Now you can configure it under `modelConfig.executionMode` or, for test purposes, through the `FlinkTestScenarioRunnerBuilder.withExecutionMode` method.
- #6610 Added the Flink node context as a parameter to `BasicFlinkSink`. Now one can use `FlinkCustomNodeContext` in order to build a sink in the `BasicFlinkSink#toFlinkFunction` method.
- #6635 #6643 `TypingResult` `TypeInformation` related changes:
  - `TypingResultAwareTypeInformationCustomisation` API was removed
  - `FlinkCustomNodeContext.typeInformationDetection` is deprecated - use `TypeInformationDetection.instance` instead
  - `FlinkCustomNodeContext.valueWithContextInfo.forCustomContext` is deprecated - use `TypeInformationDetection.instance.forValueWithContext` instead
- #6640 `BestEffort*Encoder` naming changes:
  - All `BestEffort*Encoder` classes renamed to fit the `To<TargetFormat>(SchemaBased)Encoder` naming schema
  - `JsonToNuStruct` renamed to `FromJsonDecoder` (to fit the `From<SourceFormat>Decoder` naming schema)
  - `ToJsonEncoder` renamed to `ToJsonEncoderCustomisation`
  - `ToJsonBasedOnSchemaEncoder` renamed to `ToJsonSchemaBasedEncoderCustomisation`
- #6586 From now on, the SQL enricher automatically converts types as shown below:
  - `java.sql.Array` -> `java.util.List`
  - `java.sql.Time` -> `java.time.LocalTime`
  - `java.sql.Date` -> `java.time.LocalDate`
  - `java.sql.Timestamp` -> `java.time.Instant`
  - `java.sql.Clob` -> `java.lang.String`
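The date/time conversions above correspond to standard JDK bridge methods, which can be demonstrated standalone (this only illustrates the target types; it is not the enricher's code):

```java
// Standard JDK conversions matching the enricher's java.sql -> java.time mapping.
import java.sql.Date;
import java.sql.Time;
import java.sql.Timestamp;
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalTime;

public class SqlTypeConversions {
    public static void main(String[] args) {
        LocalTime t = Time.valueOf("10:15:30").toLocalTime();   // java.sql.Time -> LocalTime
        LocalDate d = Date.valueOf("2024-01-31").toLocalDate(); // java.sql.Date -> LocalDate
        Instant i = Timestamp.from(Instant.EPOCH).toInstant();  // java.sql.Timestamp -> Instant
        System.out.println(t);
        System.out.println(d);
        System.out.println(i);
    }
}
```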
- #6437 Removed the deprecated operation for creating a scenario: POST `/api/processes/{name}/{category}`. POST `/api/processes` should be used instead.
- #6213 Improvement: Load resource config only in test context
  - `WithConfig` from `test-utils` modules behaviour changes: now it only parses the given config, without resolving reference configs, system env variables, etc.
- #6635 `globalParameters.useTypingResultTypeInformation` parameter was removed. Now we always use TypingResultTypeInformation
- #6797 `AVRO_USE_STRING_FOR_STRING_TYPE` environment variable is not supported anymore - we always use String for the String type in Avro. If you didn't set up this environment variable, no action is needed
- #6527 Changes to `TypingResult` of SpEL expressions that are maps or lists:
  - `TypedObjectTypingResult.valueOpt` now returns a `java.util.Map` instead of `scala.collection.immutable.Map`
    - NOTE: selection (`.?`) or operations from the `#COLLECTIONS` helper cause the map to lose track of its keys/values, reverting its `fields` to an empty Map
  - SpEL list expressions are now typed as `TypedObjectWithValue`, with the `underlying` `TypedClass` equal to the `TypedClass` before this change, and with `value` equal to a `java.util.List` of the elements' values.
    - NOTE: selection (`.?`), projection (`.!`) or operations from the `#COLLECTIONS` helper cause the list to lose track of its values, reverting it to a value-less `TypedClass` like before the change
- #6184 Removed the `Remote[]` string part from the forwarded username for scenario creation and updates. The `processes` and `process_versions` tables won't store usernames with this part anymore in the `createdBy` and `modifiedBy` columns.
- #6053 Added impersonation mechanism:
  - `OverrideUsername` permission was renamed to `Impersonate` and is now used as a global permission.
  - `AuthManager` is now responsible for authentication and authorization. `AuthenticationResources` now handles only plugin-specific authentication. This leads to the following changes in the `AuthenticationResources` API:
    - `authenticate()` returns `AuthenticationDirective[AuthenticatedUser]` and not `Directive1[AuthenticatedUser]`
    - `authenticate(authCredentials)` receives a `PassedAuthCredentials` parameter type instead of `AuthCredentials`, as anonymous access is no longer part of the `AuthenticationResources` logic
    - `authenticationMethod()` returns `EndpointInput[Option[PassedAuthCredentials]]` instead of `EndpointInput[AuthCredentials]`. The `Option[PassedAuthCredentials]` should hold the value that will be passed to the mentioned `authenticate(authCredentials)`.
  - `AuthenticationResources` extends the `AnonymousAccessSupport` trait:
    - `AnonymousAccessSupport` has one method, `getAnonymousRole()`, which returns the anonymous role name. If you do not want an anonymous access mechanism for your authentication method, you can extend your `AuthenticationResources` implementation with the `NoAnonymousAccessSupport` trait.
  - `AuthenticationResources` has a field `impersonationSupport` of type `ImpersonationSupport`:
    - `ImpersonationSupport` is a trait stating whether the authentication method supports impersonation. If you don't want impersonation support, you can assign the `NoImpersonationSupport` object to it. If you wish to have it, assign the `ImpersonationSupported` abstract class to it and implement the `getImpersonatedUserData(impersonatedUserIdentity)` method, which returns the user data required to impersonate a user by their `identity`.
  - `AnonymousAccess` extending `AuthCredentials` was renamed to `NoCredentialsProvided`. It does not represent anonymous access to the designer anymore but simply represents passing no credentials.
  - `AuthenticationConfiguration` has one additional Boolean property, `isAdminImpersonationPossible`, which defines whether admin users can be impersonated by users with the `Impersonate` permission. The property is set to `false` by default for `BasicAuthenticationConfiguration`, `OAuth2Configuration` and `DummyAuthenticationConfiguration`.
- #6087 #6155 `DeploymentManager` API changes:
  - `DMRunDeploymentCommand.savepointPath` was replaced by `updateStrategy: DeploymentUpdateStrategy`
    - In places where `savepointPath = None` was passed, `DeploymentUpdateStrategy.ReplaceDeploymentWithSameScenarioName(StateRestoringStrategy.RestoreStateFromReplacedJobSavepoint)` should be passed
    - In places where `savepointPath = Some(path)` was passed, `DeploymentUpdateStrategy.ReplaceDeploymentWithSameScenarioName(StateRestoringStrategy.RestoreStateFromCustomSavepoint(path))` should be passed
  - `DMValidateScenarioCommand.updateStrategy` was added
    - In every place, `DeploymentUpdateStrategy.ReplaceDeploymentWithSameScenarioName(StateRestoringStrategy.RestoreStateFromReplacedJobSavepoint)` should be passed
  - `deploymentSynchronisationSupport` field was added for the purpose of status synchronisation. If the synchronisation mechanism is not used in the context of a given DM, you should return the `NoDeploymentSynchronisationSupport` object. The synchronisation mechanism is used by the `/api/deployments/{deploymentId}/status` endpoint. Other endpoints don't use it.
- #6249 `TopicName` trait was introduced and is used in the context of a specialized topic name (for Kafka sources and sinks). Moreover, the `UnspecializedTopicName` case class was added and is used in places where the specialization is unknown/not needed.
- #6082 Default Influx database was changed from `esp` to `nussknacker_metrics`
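If your designer configuration queries Influx for counts, the database name is the value to update. A sketch follows; the key names (`countsSettings`, `influxUrl`, `database`) are assumptions for illustration, so verify them against your actual configuration:

```hocon
# Sketch only: switching to the new default Influx database name
countsSettings {
  influxUrl: "http://influxdb:8086/query"  # hypothetical address
  database: "nussknacker_metrics"          # previously "esp"
}
```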
- #5609 #5795 #5837 #5798 Refactoring around DeploymentManager's actions:
  - Custom Actions:
    - `CustomAction`, `CustomActionParameter` and `CustomActionResult` moved from the `extension-api` to the `deployment-manager-api` module
    - `CustomActionResult.req` was removed
    - `CustomAction` was renamed to `CustomActionDefinition`
    - `CustomActionRequest` (from the `extension-api`) was renamed to `CustomActionCommand`
    - `CustomActionRequest` has an additional comment parameter (like the deploy and cancel actions)
  - Other "action" methods - all methods operating on a scenario (or its deployment) were replaced by case classes and one method handling them all: `processCommand(command)`:
    - `validate` - `DMValidateScenarioCommand`
    - `deploy` - `DMRunDeploymentCommand`
    - `cancel` with `deploymentId` argument - `DMCancelDeploymentCommand`
    - `cancel` without `deploymentId` argument - `DMCancelScenarioCommand`
    - `stop` with `deploymentId` argument - `DMStopDeploymentCommand`
    - `stop` without `deploymentId` argument - `DMStopScenarioCommand`
    - `savepoint` - `DMMakeScenarioSavepointCommand`
    - `test` - `DMTestScenarioCommand`
  - "Action type" was renamed to "action name". Loosened the restriction on the name of the action:
    - `ProcessActionType` (enum with fixed values) is replaced with `ScenarioActionName`
    - in `ProcessAction`, attribute `actionType` renamed to `actionName`
    - in table `process_actions`, column `action_type` is renamed to `action_name`
  - `DeploymentManagerDependencies.deploymentService` was split into `deployedScenariosProvider` and `actionService`
  - Events renamed:
    - `OnDeployActionSuccess` renamed to `OnActionSuccess`
    - `OnDeployActionFailed` renamed to `OnActionFailed`
- #5762 For the Flink-based TestRunner scenario builder, you should replace the last component, which was `testResultService`, with `testResultSink`
- #5783 Return type of the `allowedProcessingMode` method in the `Component` trait has been changed to the `AllowedProcessingModes` type, which is one of:
  - `AllowedProcessingModes.All` in case all processing modes are allowed
  - `AllowedProcessingModes.SetOf(nonEmptySetOfAllowedProcessingModes)` in case only a set of processing modes is allowed
- #5757 Refactored API around `FlinkSource`
  - Added `StandardFlinkSource` with more granular additional traits replacing the need for `FlinkIntermediateRawSource`
  - Removed `BasicFlinkSource` and `FlinkIntermediateRawSource`. Sources extending these traits should now extend `StandardFlinkSource`. For reference on how to migrate, see the changes in `FlinkKafkaSource` or `CollectionSource`
  - Renamed `FlinkSource`'s `sourceStream` method to `contextStream`
  - Removed `EmptySourceFunction`
- #5757 Added support for bounded sources and Flink runtime mode in Flink tests
  - `CollectionSource` now takes Flink's `Boundedness` (default `Unbounded`) and `RuntimeExecutionMode` (default `None`) as parameters. It's encouraged to set the `Boundedness` to bounded if applicable
  - `Boundedness` and `RuntimeExecutionMode` can also be set in `FlinkTestScenarioRunner` in the new overloaded `runWithData` method
- #5744 Extracted unbounded stream specific components into a separate module:
  - Components `periodic`, `union-memo`, `previousValue`, aggregates, joins and `delay` from `base` were moved into the `base-unbounded` module. They are now built as `flinkBaseUnbounded.jar` under `work/components/flink/flinkBaseUnbounded.jar`.
  - Configuration of the tumbling windows aggregate offset is changed at the ComponentProvider level: `components.base.aggregateWindowsConfig.tumblingWindowsOffset` should now be set as `components.baseUnbounded.aggregateWindowsConfig.tumblingWindowsOffset`
  - If you previously specified the base component jar explicitly in `modelConfig.classPath` as `components/flink/flinkBase.jar` and want to retain the unbounded specific components, you need to add `components/flink/flinkBaseUnbounded.jar` explicitly.
- #5887 When using a custom DesignerConfig, ensure that long text elements like 'generate file' are positioned in the last row to prevent excessive spacing between elements.
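If you maintain the classpath explicitly, the change can be sketched like this; the two base jar paths come from the entry above, while the model jar entry is illustrative:

```hocon
# Sketch: retaining the unbounded components after the module split
modelConfig {
  classPath: [
    "model/defaultModel.jar",                  # illustrative model jar
    "components/flink/flinkBase.jar",
    "components/flink/flinkBaseUnbounded.jar"  # add explicitly to keep unbounded components
  ]
}
```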
- #5574 Removed the support for pluggable expression languages: `ExpressionConfig.languages` removed
- #5724 Improvements: Run Designer locally
  - Introduced `JAVA_DEBUG_PORT` to run the Designer locally with remote debugging capability
  - Removed `SCALA_VERSION`, please use `NUSSKNACKER_SCALA_VERSION` instead
- #5824 Decision Table parameters rename:
  - "Basic Decision Table" -> "Decision Table"
  - "Expression" -> "Match condition"
- #5881 `nussknacker-interpreter` module was renamed to `nussknacker-scenario-compiler`
- #5875 Added a configurable idle timeout to the Flink Kafka source with a default value of 3 minutes. You can configure this timeout in the Kafka component config at `idleTimeout.duration` or disable it at `idleTimeout.enabled`. You can learn about idleness in the Flink general docs and the Kafka connector-specific docs
- #5875 Removed the `useNamingStrategyForConsumerGroupId` feature flag allowing for disabling namespaced Kafka consumer groups
- #5848 Introduced a new method for handling colors, aimed at simplifying customization. Now, all colors are centrally stored in a single location. Refer to README.md for details on theme colors customization.
- #5914 Removed dev-specific configuration files `dev-application.conf` and `dev-tables-definition.sql` from public distribution artifacts
- #5271 Changed `AdditionalUIConfigProvider.getAllForProcessingType` API to be more in line with FragmentParameter:
  - `SingleComponentConfigWithoutId` renamed to `ComponentAdditionalConfig`
  - field `params: Map[String, ParameterConfig]` changed to `parameterConfigs: Map[String, ParameterAdditionalUIConfig]`
  - `ParameterAdditionalUIConfig` is handled analogously to `FragmentParameter` (except for `valueCompileTimeValidation`, which isn't yet handled)
    - `ParameterConfig.defaultValue` -> `ParameterAdditionalUIConfig.initialValue`
    - `ParameterConfig.hintText` -> `ParameterAdditionalUIConfig.hintText`
    - most of the capabilities of `ParameterConfig.editor` and `ParameterConfig.validators` are covered by `ParameterAdditionalUIConfig.valueEditor` and `ParameterAdditionalUIConfig.valueCompileTimeValidation`
- #5285 Changes around scenario id/name fields:
  - `CanonicalProcess.id` of type `String` was replaced by the `name` field of type `ProcessName`
  - `CanonicalProcess.withProcessId` was renamed to `withProcessName`
  - `ScenarioWithDetails.id` was removed (it had the same value as `name`)
  - `ScenarioWithDetails.processId` changed its type to `Option[ProcessId]` and will always have a `None` value
  - `ComponentUsagesInScenario.id` was removed (it had the same value as `name`)
  - `ComponentUsagesInScenario.processId` was removed
  - `ListenerScenarioWithDetails.id` was removed (it had the same value as `name`)
  - `ValidatedDisplayableProcess.id` of type `String` was replaced by the `name` field of type `ProcessName`
  - `DisplayableProcess.id` of type `String` was replaced by the `name` field of type `ProcessName`; the `processName` field is removed
  - deprecated `AsyncExecutionContextPreparer.prepareExecutionContext` was removed
  - `AsyncExecutionContextPreparer.prepare` now takes `ProcessName` instead of `String`
- #5288 #5474 RemoteEnvironment / ModelMigration changes:
  - `ProcessMigration.failOnNewValidationError` was removed - it wasn't used anywhere anymore
  - `RemoteEnvironment.testMigration` result types changes:
    - `shouldFailOnNewErrors` field was removed - it wasn't used anywhere anymore
    - `converted` field was replaced by the `processName` field, which was the only information that was used
  - `RemoteEnvironment.migrate` takes `ScenarioParameters` instead of `category`
- #5361 `Parameter` has a new, optional `labelOpt` field, which allows specifying the label presented to the user without changing the identifier used in the scenario graph json (`Parameter.name`)
- #5356 Changes in `AdditionalUIConfigProvider.getAllForProcessingType` now require a model reload to take effect.
- #5393 #5444
  - Changes around metadata removal from the REST API requests and responses:
    - `DisplayableProcess` was renamed to `ScenarioGraph`
    - `ScenarioGraph` fields that were removed: `name`, `processingType`, `category` - all these fields were already in `ScenarioWithDetails`
    - `ProcessProperties` field removed: `isFragment` - this field was already in `ScenarioWithDetails`
    - `ScenarioWithDetails` field `json.validationResult` was moved to the top level of `ScenarioWithDetails`
    - `ScenarioWithDetails` field `json` was renamed to `scenarioGraph` and its type changed to `ScenarioGraph`
    - `ValidatedDisplayableProcess` was renamed to `ScenarioGraphWithValidationResult`
    - `ScenarioGraphWithValidationResult`: all scenario graph fields were replaced by one `scenarioGraph: DisplayableProcess` field
  - Migration mechanisms (`RemoteEnvironment` and `TestModelMigrations`) use `ScenarioWithDetailsForMigrations` instead of `ScenarioWithDetails`
- #5424 Naming cleanup around `ComponentId`/`ComponentInfo`:
  - `ComponentInfo` was renamed to `ComponentId`
  - `ComponentId` was renamed to `DesignerWideComponentId`
  - the new `ComponentId` is serialized in json to a string in the format `$componentType-$componentName` instead of separate fields (`name` and `type`)
  - `NodeComponentInfo.componentInfo` was renamed to `componentId`
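The new serialized id is simply the component type and name joined with a hyphen; a minimal standalone illustration (the component type and name values are made up):

```java
// Builds the flat $componentType-$componentName id format described above.
public class ComponentIdFormat {
    static String componentId(String componentType, String componentName) {
        return componentType + "-" + componentName;
    }

    public static void main(String[] args) {
        System.out.println(componentId("service", "clientHttpService"));
    }
}
```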
- #5438 Removed the sealed trait `CustomActionError`; now `CustomActionResult` is always used
- #5465 #5457 Typed related changes:
  - `CommonSupertypeFinder` shouldn't be created directly anymore - the predefined `CommonSupertypeFinder.*` variables should be used instead, in most cases just `CommonSupertypeFinder.Default`
  - `TypedObjectTypingResult.apply`: removed the legacy factory method taking `List[(String, TypingResult)]` - the variant with `Map` should be used
  - `TypedObjectTypingResult.apply`: removed the legacy factory method taking `TypedObjectDefinition` - the variant with `Map` should be used
  - `TypedObjectTypingResult.apply` is deprecated - `Typed.record(...)` should be used instead. It will be removed in future releases
  - `TypedObjectDefinition` was removed
  - `Typed.empty` was removed; `TypedUnion` now handles only >= 2 types
  - `Typed.apply(vararg...)` was replaced by `Typed.apply(NonEmptyList)` and `Typed.apply(firstType, secondType, restOfTypesVaraarg...)`. If you have a list of types and you are not sure how to translate it to `TypingResult`, you can try to use `Typed.fromIterableOrUnknownIfEmpty`, but it is not recommended - see the docs next to it.
  - `TypedUnion` is not a case class anymore, but it is still serializable - if it was used in Flink state, the state will probably not be compatible
- #5517 Legacy `OnFinished` listener-api event was removed
- #5474 The `Component` class now needs to specify `allowedProcessingModes`. Most of the implementations (`CustomStreamTransformer`, `Service`, `SinkFactory`) have a default wildcard (`None`). For `SourceFactory` you need to specify which `ProcessingMode` this source supports. There are predefined traits: `UnboundedStreamComponent`, `BoundedStreamComponent`, `RequestResponseComponent`, `AllProcessingModesComponent` that can be mixed into the component
- #5474 Changes around new scenario metadata (aka "parameters"):
  - `ScenarioWithDetails`: added `processingMode` and `engineSetupName` fields
- #5522, #5521, #5519 `DeploymentManager` API related changes:
  - In the `DeploymentManager`:
    - `DeploymentManager.getProcessState(ProcessIdWithName, Option[ProcessAction])` became final. You should implement the `resolve` method instead. It does the same, only the `List[StatusDetails]` is already determined.
    - Method `DeploymentManager.getProcessStates` signature was changed and now requires an implicit `freshnessPolicy: DataFreshnessPolicy`
    - Trait `AlwaysFreshProcessState` and method `getFreshProcessStates` were removed; instead, please use `getProcessStates` with the `DataFreshnessPolicy.Fresh` policy
    - Managers `FlinkStreamingRestManager` and `FlinkRestManager` require a new parameter: `scenarioStateCacheTTL: Option[FiniteDuration]`
  - In the `DeploymentManagerProvider`:
    - New methods were added: `defaultEngineSetupName` and `engineSetupIdentity`. They have default implementations; you should consider replacing them with your own
    - A new, overloaded `createDeploymentManager` was added. In the new one, most of the parameters were bundled into the `DeploymentManagerDependencies` class, which makes it easier to pass these dependencies to delegates. Also, this method returns `ValidatedNel[String, DeploymentManager]`, so you can return errors that will be visible to users, e.g. invalid configuration. The old one is deleted.
    - Method `createDeploymentManager` signature was changed and now requires a new parameter: `scenarioStateCacheTTL: Option[FiniteDuration]`
- #5526 Refactored namespaces:
  - Removed the `ObjectNaming` SPI
  - Removed logging when using a naming strategy
  - Replaced `ObjectNaming` with a single `NamingStrategy`, which prepares a name with a prefix from the `namespace` key from `ModelConfig` or returns the original name if the value is not configured
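The described prefix-or-passthrough behaviour can be sketched in isolation. This is a hypothetical re-implementation of the idea only; the class shape and the `_` separator are assumptions, not the actual Nussknacker code:

```java
// Hypothetical sketch of a namespace-prefixing naming strategy:
// prefix the original name when a namespace is configured, else pass it through.
import java.util.Optional;

public class NamingStrategySketch {
    static String prepareName(Optional<String> namespace, String name) {
        return namespace.map(ns -> ns + "_" + name).orElse(name); // "_" separator is assumed
    }

    public static void main(String[] args) {
        System.out.println(prepareName(Optional.of("team1"), "orders-topic"));
        System.out.println(prepareName(Optional.empty(), "orders-topic"));
    }
}
```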
- #5535 `ProcessingTypeConfig.classpath` now contains raw `String` entries instead of `URL`s. The `String`-to-`URL` converting logic is now inside `ModelClassLoader.apply`
- #5505 Anonymous access functionality for the Tapir-based API:
  - `AuthenticationResources` & `AnonymousAccess` traits were changed to be able to introduce the anonymous access feature
  - `AuthCredentials` class was changed too
- #5373 #5694 Changes related to `Component`s and `LazyParameter`s:
  - `LazyParameter` can be evaluated on request thanks to its `evaluate` method
  - `Params` data class was introduced as a replacement for runtime parameter values defined as `Map[String, Any]`. The `Params` data class, in its extraction methods, assumes that a parameter with the given name exists in the underlying Map.
  - `TypedExpression` was removed from the `BaseDefinedParameter` hierarchy in favour of `TypingResult`
  - `TypedExpression` doesn't depend on `ExpressionTypingInfo` anymore
  - `ServiceInvoker` refactoring (the parameters map was removed, a context is passed to its method)
  - `ProcessListener` interface changed slightly
  - `ParameterWithExtractor` util was replaced with `ParameterDeclaration`
  - classes renaming:
    - `LazyParameterInterpreter` to `LazyParameterInterpreter`
    - `GenericNodeTransformation` to `DynamicComponent`
    - `SingleInputGenericNodeTransformation` to `SingleInputDynamicComponent`
    - `JoinGenericNodeTransformation` to `JoinDynamicComponent`
    - `JavaGenericTransformation` to `JavaDynamicComponent`
    - `JavaGenericSingleTransformation` to `JavaSingleInputDynamicComponent`
    - `JavaGenericJoinTransformation` to `JavaJoinDynamicComponent`
    - `JavaSourceFactoryGenericTransformation` to `JavaSourceFactoryDynamicComponent`
    - `GenericContextTransformationWrapper` to `DynamicComponentWrapper`
    - `SingleGenericContextTransformationWrapper` to `SingleInputDynamicComponentWrapper`
    - `SourceFactoryGenericContextTransformationWrapper` to `SourceFactoryDynamicComponentWrapper`
    - `JoinGenericContextTransformationWrapper` to `JoinDynamicComponentWrapper`
  - type `NodeTransformationDefinition` (inside `DynamicComponent`) renamed to `ContextTransformationDefinition`
- #5641 `PeriodicProcessDeployment`/`DeploymentWithJarData`/`PeriodicProcess` now take a type parameter, `CanonicalProcess` or `Unit`, to point out whether they contain the scenario json.
- #5656 `pl.touk.nussknacker.engine.api.expression.Expression#language` method returns a `Language` trait instead of a `String`
- #5707 `ParameterName` data class was introduced. It replaces `String` in all places where it's used as a parameter name
- #5754 Fix for the broken encoding mechanism in tests from file with Avro format; reverts [0d9b600](https://github.com/TouK/nussknacker/commit/0d9b600)
  - Classes `ResultsCollectingListener`, `TestResults`, `ExpressionInvocationResult`, `ExternalInvocationResult` depend on `T`
  - `TestResults.nodeResults` uses `ResultContext` instead of `Context`
  - `TestResults.exceptions` uses `ExceptionResult` instead of `NuExceptionInfo`
  - Added `variableEncoder` to `ResultsCollectingListenerHolder.registerRun`
- #5280 #5368 Changes in the definition API:
  - The `/api/processDefinitionData/componentIds` endpoint is removed
  - `/api/processDefinitionData/*` response changes:
    - The `services`, `sourceFactories`, `sinkFactories`, `customStreamTransformers` and `fragmentInputs` map fields were replaced by one `components` map with keys in the `$componentType-$componentName` format, moved to the top level of the response
    - The `typesInformation` field was renamed to `classes`, moved to the top level of the response, and the nested `clazzName` inside each element was extracted
    - The `componentsConfig` field was removed - now all information about components is available in the `components` field
    - The `nodeId` field inside `edgesForNodes` was renamed to `componentId` in the flat `$componentType-$componentName` format
    - The `defaultAsyncInterpretation` field was removed
- #5285 Changes around scenario id/name fields:
  - `/api/process(Details)/**` endpoints:
    - the `id` field was removed (it had the same value as `name`)
    - the `processId` field always returns `null`
    - the `.json.id` field was renamed to `.json.name`
  - `/api/components/*/usages` endpoint:
    - the `id` field was removed (it had the same value as `name`)
    - the `processId` field was removed
  - `/api/processes/**/activity/attachments` - the `processId` field was removed
  - `/api/processes/**/activity/comments` - the `processId` field was removed
  - GET `processes/$name/$version/activity/attachments` - the `$version` segment is removed now
- #5393 Changes around metadata removal from the REST API requests and responses:
  - `/api/processValidation` was changed to `/api/processValidation/$scenarioName`, and the request type changed
  - `/api/testInfo/*` was changed to `/api/testInfo/$scenarioName/*`, and the request format changed following the code API changes
  - `/api/processManagement/generateAndTest/$samples` was changed to `/api/processManagement/generateAndTest/$scenarioName/$samples`
  - `/api/processesExport/*` was changed to `/api/processesExport/$scenarioName/*`, and the response format changed following the code API changes
  - the `/api/processes/import/$scenarioName` response was changed into `{"scenarioGraph": {...}, "validationResult": {...}}`
  - GET `/api/processes/*` and `/api/processesDetails/*` response formats changed following the code API changes
  - PUT `/api/processes/$scenarioName` request field was changed from `process` to `scenarioGraph`
  - the `/api/adminProcessManagement/testWithParameters/$scenarioName` request field was changed from `displayableProcess` to `scenarioGraph`
- #5424 Naming cleanup around `ComponentId`/`ComponentInfo`:
  - Endpoints returning test results (`/api/processManagement/test*`) now return `nodeId` instead of `nodeComponentInfo`
  - `/processDefinitionData/*` response: the `type` field was replaced by `componentId` inside the `.componentGroups[].components[]` path
- #5462 The `/api/processes/category/*` endpoint was removed
- #5474 The POST `/api/processes/$scenarioName/$category?isFragment=$isFragment` resource became deprecated. It is replaced by POST `/processes` with the fields: `name`, `isFragment`, `forwardedUserName`, `category`, `processingMode`, `engineSetupName`. The last three fields are optional. Please switch to the new API, because the old API will be removed in version 1.15.
- POST `/api/nodes/$scenarioName/validation` response: objects in the `validationErrors` array can have `details` of the error
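For illustration, a request body for the new POST `/processes` endpoint could look like the sketch below. The field names come from the note above; the values are hypothetical, and the last three fields may be omitted:

```json
{
  "name": "my-scenario",
  "isFragment": false,
  "forwardedUserName": null,
  "category": "Default",
  "processingMode": null,
  "engineSetupName": null
}
```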
- #5297 `componentsUiConfig` key handling change: the `$processingType-$componentType-$componentName` format was replaced by the `$componentType-$componentName` format
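A sketch of what this rename means in configuration; the component name and its settings are hypothetical, only the key format comes from the note above:

```hocon
componentsUiConfig {
  # before (no longer supported): "$processingType-$componentType-$componentName"
  # "streaming-service-customerService" { docsUrl: "https://example.com/docs" }

  # after: "$componentType-$componentName"
  "service-customerService" {
    docsUrl: "https://example.com/docs"
  }
}
```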
- #5323 Support for the legacy categories configuration format was removed. In the new format, you should specify the `category` field inside each scenario type.
field inside each scenario type. - #5419 Support for system properties was removed from model configuration (they aren't resolved and added to merged configuration)
- #5474 You have to ensure that, in every scenario type's model `classPath`, every jar contains only components with non-colliding processing modes, and that at least one component has a processing mode other than the wildcard. On the other hand, starting from this version you can use the same category for many scenario types; you only have to ensure that they have components with different processing modes or a different deployment configuration.
- #5558 The `processToolbarConfig` toolbar with `type: "process-info-panel"` no longer accepts the `buttons` property. It only displays scenario information now. However, a new toolbar with `type: "process-actions-panel"` has been introduced, which does accept the `buttons` property and renders actions similar to the old `type: "process-info-panel"`.
- #5515 #5474 The Helm chart now has two preconfigured scenario types (`streaming` and `request-response`) instead of one (`default`). Because of that, scenarios created using the previous version of the Helm chart will have an invalid configuration in the database. To fix that, you have to manually connect to the database and execute an SQL statement: `UPDATE processes SET processing_type = 'given-scenario-type' WHERE processing_type = 'default';`
- #4287 Cats Effect 3 bump. Be careful with the IO monad mode; we provide an experimental way to create an `IORuntime` for the Cats engine.
- #5432 Kafka client, Confluent Schema Registry Client and Avro bump
- #5447 JDK downgraded from 17 to 11 in lite runner image for scala 2.13
- #5465 Removed the `strictTypeChecking` option and the `SupertypeClassResolutionStrategy.Union` used behind it
used behind it - #5517 Removed legacy mechanism marking scenario finished based on the fact that the last action was deploy and job was finished. The new mechanism leverage deployment id which was introduced in #4462 in 1.11 version.
- #5474 The mechanism allowing migration between two environments uses the new scenario-creating API by default. In case the secondary environment is in a version < 1.14, you should switch the `secondaryEnvironment.useLegacyCreateScenarioApi` flag on.
- #5526 Added namespacing of the Kafka consumer group id in both engines. If you have namespaces configured, the consumer group id will be prefixed with the `namespace` key from the model config - in that case a consumer group migration may be necessary, for example to retain consumer offsets. For gradual migration, this behaviour can be disabled by setting `useNamingStrategyInConsumerGroups = false` in `KafkaConfig`. Note that the `useNamingStrategyInConsumerGroups` flag is intended to be removed in the future.
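A sketch of temporarily disabling the new behaviour during gradual migration; the exact location of `KafkaConfig` in your configuration depends on your setup:

```hocon
kafka {
  # temporary opt-out from prefixing consumer group ids with the namespace;
  # this flag is intended to be removed in the future
  useNamingStrategyInConsumerGroups: false
}
```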
- #4988 The method definition `def authenticationMethod(): Auth[AuthCredentials, _]` was changed to `def authenticationMethod(): EndpointInput[AuthCredentials]`
- #4860 `DeploymentManagerProvider` implementations have to implement the method `def scenarioPropertiesConfig(config: Config): Map[String, ScenarioPropertyConfig]` instead of `def additionalPropertiesConfig(config: Config): Map[String, AdditionalPropertyConfig]`
- #4919 Improvement: support for handling runtime exceptions in `FlinkTestScenarioRunner`: the `TestProcess.exceptions` type changed from `List[ExceptionResult[T]]` to `List[NuExceptionInfo[_ <: Throwable]]`
- #4912 Changes in scenario details:
  - `pl.touk.nussknacker.restmodel.processdetails.BaseProcessDetails[_]` and `pl.touk.nussknacker.restmodel.processdetails.BasicProcess` used in rest resources were merged into `pl.touk.nussknacker.restmodel.scenariodetails.ScenarioWithDetails`
  - `pl.touk.nussknacker.restmodel.processdetails.BaseProcessDetails[_]` used in `pl.touk.nussknacker.ui.listener.services.PullProcessRepository` was moved into `listener-api` and renamed into `pl.touk.nussknacker.ui.listener.ListenerScenarioWithDetails`
  - `pl.touk.nussknacker.restmodel.processdetails.ProcessDetails` and `pl.touk.nussknacker.restmodel.processdetails.ValidatedProcessDetails` type aliases are not available anymore - you should probably use `ScenarioWithDetails` in these places
  - `pl.touk.nussknacker.restmodel.processdetails.ProcessVersion` was moved into `pl.touk.nussknacker.engine.api.process.ScenarioVersion`
  - `pl.touk.nussknacker.restmodel.processdetails.StateActionsTypes` was moved into `ProcessActionType.StateActionsTypes`
- #4959 The `listener-api` module became decoupled from the `restmodel` module. Some classes were moved to the `extensions-api` module to make it possible:
  - the `pl.touk.nussknacker.restmodel.displayedgraph` package was renamed to `pl.touk.nussknacker.engine.api.displayedgraph`
  - `pl.touk.nussknacker.restmodel.displayedgraph.ValidatedDisplayableProcess` was moved to the `pl.touk.nussknacker.restmodel.validation` package
  - `pl.touk.nussknacker.restmodel.process.ProcessingType` was moved to the `pl.touk.nussknacker.engine.api.process` package
  - `pl.touk.nussknacker.restmodel.scenariodetails.ScenarioVersion` was moved to the `pl.touk.nussknacker.engine.api.process` package
- #4745 Added a `ScenarioBuilder` method to create fragments with a specified input node id instead of taking a default from the fragment id
- #4745 Added more errors for scenario and node id validation, and changed the names and messages of existing ones
- #4928 #5028 The `Validator.isValid` method now takes `expression: Expression, value: Option[Any]` instead of `value: String`, which was not really a value, but an expression. The straightforward migration is to change the method definition and use `expression.expression` instead of `value` if your validator depends on the raw expression. If your validator was doing quasi-evaluation, for example trimming `'` to get a string, you can just take `value` and cast it to the desired class.
  - `LiteralNumberValidator` is removed; to achieve the same result, use `CompileTimeEvaluableValueValidator` with a parameter of `Number` type
  - `LiteralIntegerValidator` is considered deprecated and will be removed in the future; to achieve the same result, use `CompileTimeEvaluableValueValidator` with a parameter of `Integer` type
  - `LiteralRegExpParameterValidator` is renamed to `RegExpParameterValidator`
  - the annotation `pl.touk.nussknacker.engine.api.validation.Literal` was renamed to `pl.touk.nussknacker.engine.api.validation.CompileTimeEvaluableValue`
- #5079 `AuthCredentials` is moved to `pl.touk.nussknacker.security` in `extensions-api`
- #5103
  - Values of `ExpressionConfig.globalImports` and `ExpressionConfig.dictionaries` aren't wrapped with `WithCategories` anymore
  - The `WithCategories.apply` variant with a `categories` vararg is replaced by a version with a head `category` and a tail `categories` vararg. The previous version was commonly, and wrongly, used as "an object without categories specified", but in fact it was "an object with an empty categories list", which means the object should never be visible. To create "an object without categories specified", use `WithCategories.anyCategory`. If you want to pass just a list of categories, use `WithCategories(value, Some(list), SingleComponentConfig.zero)`
- #5171 Changes in `ComponentType` values:
  - In `ComponentType` values:
    - Built-in components' artificial component types (`Filter`, `Split`, `Switch`, `Variable`, `MapVariable`) were replaced by the `BuiltIn` type
    - `Processor` and `Enricher` component types were replaced by `Service`
    - `Fragments` was replaced by `Fragment`
    - `CustomNode` was replaced by `CustomComponent`
  - In `ComponentInfo`: the order of parameters was swapped and their names changed: `componentType` -> `type`, `componentName` -> `name`
- #5209 Now `TestScenarioRunner` doesn't load components from `ComponentProvider` automatically. Instead, it loads a predefined set of components; the rest of them you need to pass using the `withExtraComponents` method. Components loaded automatically:
  - `TestScenarioRunner.liteBased` - from the `base` provider
  - `TestScenarioRunner.kafkaLiteBased` - from the `base` and `kafka` providers
  - `TestScenarioRunner.requestResponseBased` - from the `base` and `requestResponse` providers
  - `TestScenarioRunner.flinkBased` - from the `base` provider
  - `TestScenarioRunner` now also uses global variables from the default model
- #4956 Refactor: cleanup of TestResults
  - Changed the signature of the `DeploymentManager.test` method and removed the `variableEncoder` param
  - Classes `TestResults`, `ExpressionInvocationResult`, `ExternalInvocationResult` don't depend on `T`
  - Class `NodeResult` is removed. Instead, `Context` is used directly
  - Removed `variableEncoder` from `ResultsCollectingListenerHolder.registerRun`
  - Removed `ResultContext`; please use `Context` instead of it
- #5240 Simpler result types in `TestScenarioRunner`:
  - `RunResult` and `RunUnitResult` have no generic parameter anymore
  - `RunResult` and its descendants have no `success` method anymore - for `RunListResult`, `successes` should be used instead
- #4745 Changed the `api/properties/*/validation` endpoint request type:
  - Replaced `processProperties` with `additionalFields`
  - Added an `id` field for the scenario or fragment id
- #5039 #5052 Changes in endpoints:
  - `api/parameters/*/suggestions` request: `variables` is renamed to `variableTypes`, and it should now contain only local variables
  - `api/processes/**` response: `.json.validationResult.nodeResults.variableTypes` doesn't contain global variables types anymore
  - `api/processDefinitionData/*` response: `.processDefinition.globalVariables` is removed
  - `api/parameters/*/validate` request: `scenarioName` is removed, `processProperties` is removed
- #4860 In file-based configuration, the field `scenarioTypes.<scenarioType>.additionalPropertiesConfig` is renamed to `scenarioTypes.<scenarioType>.scenarioPropertiesConfig`
- #5077 In SQL enricher configuration, `connectionProperties` was changed to `dataSourceProperties`
- #4901 Improvements to TestScenarioRunner:
  - Changes in `FlinkProcessRegistrar.register`: passing `resultCollector` instead of `testRunId`
- #5033 Scala 2.13 was updated to 2.13.12; you may update your `flink-scala-2.13` to 1.1.1 (it's not required, the new version is binary-compatible)
- #5059 #5100 The categories configuration doesn't allow configuring multiple categories for the same scenario type anymore. If you have such a case, you have to extract separate scenario types and assign each category to its own scenario type. Because of this change, the configuration of categories was also removed from the Components configuration
- #4953 Stricter validation in base components:
  - Boolean expressions in `Switch` and `Filter` nodes are required to be not-null values
  - Variable values in `MapVariable`, `FragmentOutput` and `Variable` are mandatory
  - Field names in `MapVariable` and `FragmentOutput` are required to be unique
- #4698 Due to a change in program argument encoding, all scheduled batch scenarios handled by the periodic DM must be cancelled before the upgrade
- #4574 Improvements in `KafkaClient` and `RichKafkaConsumer` in kafka-test-utils:
  - `RichKafkaConsumer.consumeWithJson` needs a json decoder
  - removed `RichKafkaConsumer.consumeWithConsumerRecord`; use `RichKafkaConsumer.consumeWithJson` instead of it
  - `RichKafkaConsumer.defaultSecondsToWait` renamed to `RichKafkaConsumer.DefaultSecondsToWait`
  - `KafkaClient.sendMessage` accepts generic content with a json encoder
- #4583 `DeploymentManager` has new variants of the `cancel` and `stop` methods, taking `DeploymentId` next to `ProcessName`. They will be used with the batch processing mechanism (periodic DM), so it is necessary to implement them only if your DM will be wrapped by `PeriodicDeploymentManager`
- #4685 In the `AuthenticationResources` trait, two new methods were added that have to be implemented in the child classes: `def authenticationMethod(): Auth[AuthCredentials, _]` and `def authenticate(authCredentials: AuthCredentials): Future[Option[AuthenticatedUser]]`. The first one tells what authentication method will be used (it's for Tapir-based API purposes), and the latter one is the authentication action itself. The `def authenticate(): Directive1[AuthenticatedUser]` should be treated as deprecated. It's used in the NU APIs which are still Akka HTTP-based. When we get rid of Akka HTTP, it will be removed.
- #4762 Renamed `RegExpParameterValidator` to `LiteralRegExpParameterValidator`
- #4697 Changed the `api/parameters/*/validate` and `api/parameters/*/suggestions` endpoints:
  - Use `processingType` instead of `processName`
  - Added a `scenarioName` parameter to `ParametersValidationRequest`, used in `api/parameters/*/validate`
- #4602 Cleaning subprocess usages after the NU 1.11 release:
  - Removed the `isSubprocess` endpoint param; use `isFragment` from now on.
  - Removed backward compatibility for subprocess fields.
- #4492 Allow testing fragments using the ad-hoc testing method. By default, NU enables that feature, but if you have some custom `processToolbarConfig` settings then you may want to remove the `hidden: { fragment: true }` flag for `type: "test-with-form"`, `type: "test-counts"` and `type: "test-hide"` inside `processToolbarConfig -> "test-panel"`.
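A sketch of the relevant fragment of `processToolbarConfig`; the surrounding panel layout is hypothetical, the button types come from the note above:

```hocon
processToolbarConfig {
  defaultConfig {
    topRight: [
      {
        type: "test-panel"
        buttons: [
          # drop `hidden: { fragment: true }` from these buttons
          # to enable ad-hoc testing of fragments
          { type: "test-with-form" }
          { type: "test-counts" }
          { type: "test-hide" }
        ]
      }
    ]
  }
}
```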
- #4295 The `DeploymentManager.getProcessState(ProcessName)` method now returns `List[StatusDetails]` instead of `Option[StatusDetails]`. If you are a consumer of this API and want to keep the same behaviour as previously, you can use the `InconsistentStateDetector.extractAtMostOneStatus` method for that. Notice that in the future the visibility of this method may be changed to private.
- #4462 The `StatusDetails.deploymentId` field changed its type to `Option[DeploymentId]`. It contains the internal designer's deployment id. It is used to match a deployment on the runtime side with a deployment action on the designer side in the periodic DM, for the purpose of correct status synchronization. If you want to have it filled, you should pass the correct value in `DeploymentData.deploymentId`. The current value of `deploymentId: Option[ExternalDeploymentId]` was moved into the `externalDeploymentId` field. `ProcessAction` has a new field - `id`. The `ProcessStateDefinitionManager.processState` variant of methods with multiple parameters was removed - you should use the `ProcessStateDefinitionManager.processState(StatusDetails)` variant instead.
- #4353 Removed the isCancelled/isDeployed flags based on `ProcessAction`; `ProcessAction.action` renamed to `actionType`. Trait `Process` is removed.
- #4484
  - `ProcessIdWithName` moved from the package `pl.touk.nussknacker.restmodel.process` to `pl.touk.nussknacker.engine.api.process`
  - The `DeploymentManager.getProcessState(ProcessName, Option[ProcessAction])` method now takes `ProcessIdWithName` as an argument instead of `ProcessName`. The same with `PostprocessingProcessStatus.postprocess`.
- #4454 Renamed 'subprocess' to 'fragment' along with all endpoints (with backward compatibility). The `isSubprocess` query parameter is renamed to `isFragment`; `isSubprocess` will be removed in 1.12.0
- #4462 The process state API returns `externalDeploymentId` instead of `deploymentId`.
- #4514 `AkkaHttpBackend` in the designer is replaced by `AsyncHttpClientFutureBackend`. To use a custom http client configuration, use an `ahc.properties` file and make sure it is available in the classpath.
- #4352 `TypedObjectTypingResult#fields` are no longer ordered; fields will be sorted for presentation (see `TypedObjectTypingResult#display`)
- #4294 `HttpRemoteEnvironmentConfig` allows you to pass the `passUsernameInMigration` flag (default true). When set to true, migration attaches the username in the form of `Remote[userName]` while migrating to the secondary environment. To use the old migration endpoint, set it to false.
- #4278 Now the expression compiler and the code suggestions mechanism reuse the same types, extracted based on the model. Before the change, types in the compiler were extracted lazily. Because of this change, some expressions may stop compiling. You may need to add `WithExplicitTypesToExtract` to some of your `SourceFactory` implementations. See extending classes for examples on how to implement it.
- #4290 Renamed predicates used in `ClassExtractionSettings`:
  - `ClassMemberPatternPredicate` renamed to `MemberNamePatternPredicate`
  - `AllMethodNamesPredicate` renamed to `AllMembersPredicate`
- #4299, #4300 `StateStatus` is identified by its name. `ProcessState` serialization uses this name as the serialized state value. The sealed trait `StateStatus` is unsealed; all members are replaced by corresponding `SimpleStateStatus` state definitions, and custom statuses are defined within each `ProcessStateDefinitionManager`. `ProcessAction` is moved from restmodel to extensions-api, package engine.api.deployment.
- #4339 Improvements: don't fetch state for an archived/unarchived scenario; return a state computed based on the last state action. In `BaseProcessDetails` we provide the `lastStateAction` field, which can influence the presented state of the scenario. We currently use it to distinguish between cancel / not_deployed and to detect inconsistent states between the designer and the engine.
- #4302 State inconsistency detection was moved from the designer to `DeploymentManager`. `DeploymentManager.getProcessState`, for internal purposes, returns `Option[StatusDetails]`, which is based on the job status from the deployment manager (instead of `Option[ProcessState]`, which contains UI info). There is a separate `getProcessState` that returns `ProcessState`, which is a status from the engine resolved via `InconsistentStateDetector` and formatted with UI-related details. `PeriodicProcessEvent` uses `StatusDetails` instead of `ProcessState`. Constants defined in `ProblemStateStatus` are renamed to match UpperCamelCase formatting.
- #4350 The `StateStatus.isDuringDeploy`, `StateStatus.isFinished`, `StateStatus.isFailed`, `StateStatus.isRunning` and `ProcessState.isDeployed` methods were removed. Instead, you should compare the status with a specific status.
methods were removed. Instead, you should compare status with specific status. - #4357 Changed structure of
MetaData
inCanonicalProcess
-TypeSpecificData
automatically migrated toProcessAdditionalFields
- Example MetaData structure before migration:
{ "id": "scenarioName", "typeSpecificData": { "parallelism": 1, "spillStateToDisk": true, "checkpointIntervalInSeconds": null, "type": "StreamMetaData" }, "additionalFields": { "description": null, "properties": { "someCustomProperty": "someCustomValue" } } }
- Example MetaData structure after migration:
{ "id": "scenarioName", "additionalFields": { "description": null, "properties": { "parallelism" : "1", "spillStateToDisk" : "true", "useAsyncInterpretation" : "", "checkpointIntervalInSeconds" : "", "someCustomProperty": "someCustomValue" }, "metaDataType": "StreamMetaData" } }
- #4283 For the OIDC provider, the `accessTokenIsJwt` config property is introduced, with the default value `false`. Please mind that previous Nussknacker versions assumed its value was true if `authentication.audience` was defined.
- #4357 `TypeSpecificData` properties are now configured in `DeploymentManagerProvider`:
  - The main configuration is done through `additionalPropertiesConfig`, like other additional properties
  - Initial values overriding the defaults from the main configuration can be set in `metaDataInitializer`
- #4305 `scala-compiler` and `scala-reflect` are now included in `flink-scala`, so you can simplify your deployment by removing them and updating to the new `flink-scala` JAR (this doesn't introduce any functional changes)
- #4350 The `delete` action is available only for archived scenarios. Before the change, it was only checked that the scenario was not running
- #4030 Changes for the purpose of local testing of the designer with other urls than on the engine side:
  - `ProcessingTypeConfig.modelConfig` now contains `ConfigWithUnresolvedVersion` instead of `Config`. The old `Config` value is in `ConfigWithUnresolvedVersion.resolved`
  - `ModelConfigLoader.resolveInputConfigDuringExecution` takes `ConfigWithUnresolvedVersion` instead of `Config`. Use `ConfigWithUnresolvedVersion.apply` for an easy transition between those classes
- #3997 Removal of the obsolete `subprocessVersions`. It affects `MetaData`, `ProcessMetaDataBuilder` and `DisplayableProcess` properties.
properties. - #4122, #4132, #4179, #4189
- Use
ProcessStateDefinitionManager.stateDefinitions
to describe states: 1) their default properties 2) how the states are presented in filter-by-status options.
(see an example of basic definitions inSimpleProcessStateDefinitionManager
andSimpleStateStatus
). - State defaults and allowed actions are moved to
SimpleStateStatus
,FlinkStateStatus
,PeriodicStateStatus
,EmbeddedStateStatus
andK8sStateStatus
from corresponding state-definition-managers (see exampleFlinkProcessStateDefinitionManager
). - Type
CustomStateStatus.name
renamed toStatusName
ProcessResources
exposes new endpoint/api/procecesses/statusDefinitions
- Within the base set of statuses used in Embedded, Flink, K8 and Periodic mode (
SimpleStateStatus
), statusesFailing
,Failed
,Error
,Warning
,FailedToGet
andMulipleJobsRunning
are replaced by oneProblemStateStatus
which is parametrized by specific message.ProblemStateStatus
provides several builder methods, one for each corresponding removed state. Those builders allow to preserve the exact moments when each state appears in the scenario lifecycle. - Displayed tooltip and description of
ProblemStateStatus
have the same value. - Removed
SimpleStateStatus.Unknown
- Removed status
FailedStateStatus
. UseProblemStateStatus
instead. - Status configuration for icon, tooltip and description is obligatory.
- Use
- #4104 `DeploymentManager.findJobStatus` was renamed to `getProcessState`. A new `DataFreshnessPolicy` parameter was added. The returned type was changed to `WithDataFreshnessStatus[T]`, where `T` is the previous value and `cached: Boolean` is additional information that should be provided. If you provide a `DeploymentManager` which communicates remotely with some service, and you want to use the standard built-in caching for `ProcessState`, wrap your `DeploymentManager` using `CachingProcessStateDeploymentManager.wrapWithCachingIfNeeded` in your `DeploymentManagerProvider`. Thanks to that, caching will be handled as expected, and your `DeploymentManager` should just extend `AlwaysFreshProcessState`, which provides the same interface as the previous one, with only the method name changed. In particular, when you use `PeriodicDeploymentManagerProvider`, `delegate` should already return a `DeploymentManager` wrapped by the caching mechanism.
- #4131 `Parameter.defaultValue` now holds `Option[Expression]` instead of `Option[String]`. You have to wrap a `String` with `Expression.spel()`
- #4224 If you're using Flink with Nussknacker built with Scala 2.13, add this jar to the `lib` dir of your Flink installation. It's our implementation of `org.apache.flink.runtime.types.FlinkScalaKryoInstantiator` (sources are here), which is needed to properly (de)serialize Flink state when using Scala 2.13. Hopefully it's a temporary solution, until Flink becomes really Scala-free and gets rid of the `FlinkScalaKryoInstantiator` class or allows having it in the job code (not in Flink libs).
- #4190 Introduced the possibility to configure an offset in `FlinkComponentsProvider` (`components.base.aggregateWindowsConfig.tumblingWindowsOffset`, by default 0) for aggregates with tumbling windows. You might want to set it up especially when you want your daily windows to be aligned to your timezone, if it's not UTC. See the example in the Flink docs
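A sketch of the offset configuration; the key path comes from the note above, while the duration format shown is an assumption:

```hocon
components.base {
  aggregateWindowsConfig {
    # shift daily tumbling windows, e.g. to align midnight-to-midnight
    # windows to a non-UTC timezone
    tumblingWindowsOffset: "1 hour"
  }
}
```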
- #4122, #4132 Changes in state definitions:
  - In `ProcessStateDefinitionManager`, the default behaviour of the methods `statusTooltip`, `statusDescription` and `statusIcon` is to return the default properties defined via `stateDefinitions`. It is not necessary to override those methods when all definitions have fixed default properties.
  - To introduce custom status properties or extensions to the basic definitions, use `OverridingProcessStateDefinitionManager`.
  - `OverridingProcessStateDefinitionManager` allows specifying a delegate (previously only `SimpleProcessStateDefinitionManager` was available) and custom state definitions.
  - The parameter `delegate` in `OverridingProcessStateDefinitionManager` has no default value; it should be provided explicitly.
  - There is additional validation when all processing types are reloaded from the configuration: a check whether the state definitions configuration of all processing types is correct (see the comment in `ProcessStateDefinitionService`).
- #3997 Due to the removal of the deprecated field `subprocessVersions`, migration of scenarios from and to a remote environment between Nussknacker versions 1.9.0+ and older will not be possible. Use export and import as a workaround while working between older and newer versions.
- #3675 Improvements: normalize kafka components' param names:
  - Renamed the kafka topic param name from `topic` to `Topic`
  - Renamed the kafka value param name from `value` to `Value`
- #3924
  - Fixup: `{}` is now interpreted as an "allow everything schema", not as an "object schema". Object schemas have to have `"type": "object"` declared.
  - `Unknown` is now allowed on sinks in both validation modes if the output schema is the "everything allowed schema".
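To illustrate the fixup: after this change, a schema that should accept only objects must declare the type explicitly, while a bare `{}` accepts any value. For example:

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" }
  }
}
```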
- #3924 Changes to the `SwaggerTyped` hierarchy:
  - `SwaggerMap(valuesType)` -> `SwaggerObject(Map.empty, additionalProperties = AdditionalPropertiesEnabled(valuesType))`
  - `AdditionalPropertiesSwaggerTyped` -> `AdditionalPropertiesEnabled`
  - `AdditionalPropertiesWithoutType` -> `AdditionalPropertiesEnabled(SwaggerAny)`
  - `SwaggerRecursiveSchema`/`SwaggerUnknownFallback` -> `SwaggerAny`
- #3835 Removed Signals and QueryableState. This change affects:
  - Configuration
  - Components and DeploymentManager API
  - REST API
- #3823, #3836, #3843 Scenarios with multiple sources can be tested from file:
  - `TestDataGenerator#generateTestData` returns JSON test records instead of raw bytes. Test records are serialized to a file by the designer
  - A test record can optionally contain a timestamp, which is used to sort records generated by many sources
  - `TestDataParser` was replaced with `TestRecordParser`, which turns a single JSON test record into a source record
  - The `TestData.newLineSeparated` helper was removed. Scenario test records have to be created explicitly. Each scenario test record has an assigned source
  - `DeploymentManager#test` takes `ScenarioTestData` instead of `TestData`
  - Designer configuration `testDataSettings.testDataMaxBytes` was renamed to `testDataMaxLength`
- #3916 Designer configuration `environmentAlert.cssClass` was renamed to `environmentAlert.color`
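The rename in designer configuration, sketched with hypothetical values:

```hocon
environmentAlert {
  content: "TEST ENVIRONMENT"
  # before: cssClass: "..."
  color: "green"
}
```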
- #3922 Bumps: jwks: 0.19.0 -> 0.21.3, jackson: 2.11.3 -> 2.13.4
- #3929 From now on, the `SchemaId` value class is used in every place where a schema id was represented as an Int. For conversion between `SchemaId` and `Int`, use `SchemaId.fromInt` and `SchemaId.asInt`. Use `ConfluentUtils.toSchemaWithMetadata` instead of `SchemaWithMetadata.apply` for conversion between Confluent's `SchemaMetadata` and our `SchemaWithMetadata`.
- #3948 Now we are less dependent on the Confluent schema registry. To make it possible, some kafka universal/avro components were refactored. The most important change in the public API:
  - `ConfluentSchemaBasedSerdeProvider.universal` was replaced by `UniversalSchemaBasedSerdeProvider.create`

  Some other, internal changes:
  - Non-Confluent classes renamed and moved to the desired packages
  - Extracted a new class, `SchemaIdFromMessageExtractor`, to make the Confluent logic explicit, and moved it to the top level
  - Extracted `SchemaValidator` to make the Confluent logic explicit and to be able to compose
  - Some renames: `ConsumerRecordUtils` -> `KafkaRecordUtils`
  - `RecordDeserializer` -> `AvroRecordDeserializer` (also inheritance replaced by composition)
  - `(De)SerializerFactory` - easier abstractions
  - `ConfluentSchemaRegistryFactory` is not necessary now - removed
- #3701 Right now, access in SpEL to a non-existing field on `TypedMap` won't throw an exception; it will just return `null`
- #3727 Improvements: change the RR Sink validation way:
  - Added the `Value validation mode` param to the RR response component
  - We no longer support the `nullable` param from Everit schema. Nullable schemas are supported by a union with null, e.g. `["null", "string"]`
- #3768 The `request-response-embedded` and `streaming-lite-embedded` DeploymentManager types were replaced by one `lite-embedded` DeploymentManager type with two modes, `streaming` and `request-response`, like it is done in the `lite-k8s` case
- #3560, #3595 Removed the dependency on `flink-scala`. In particular:
  - Switched from using `scala.DataStream` to `datastream.DataStream`. Some tools exclusive to scala datastreams are available in `engine.flink.api.datastream`
  - Scala-based `TypeInformation` derivation is no longer used; for the remaining cases the `flink-scala-utils` module is provided (it will probably be removed in the future)
- #3680 The `SubprocessRef::outputVariableNames` type is changed from `Option[Map[String,String]]` with default `None` to `Map[String,String]` with default `Map.empty`
- #3692 Renamed `mockedResult` to `externalInvocation` in test results collectors.
- #3606 Removed nussknacker-request-response-app. As a replacement you can use:
  - nussknacker-request-response-app in version <= 1.6
  - the Lite K8s engine with the request-response processing mode
  - the `lite-embedded` Deployment Manager with the request-response processing mode
- #3610 Removed deprecated code. For details see changes in pull request.
- #3607 Request-response jsonSchema-based encoder:
  - `ValidationMode` was moved to the package `pl.touk.nussknacker.engine.api.validation` in `nussknacker-components-api`
  - `BestEffortJsonSchemaEncoder` was moved to the package `pl.touk.nussknacker.engine.json.encode` in `nussknacker-json-utils`
- #3738 Kafka client libraries upgraded to 3.2.3. If using an older Flink version, make sure to use 2.8.x client libraries. For Flink versions 1.15.0-1.15.2 include also the fixed `KafkaMetricWrapper`
- #3668 Method `runWithRequests` of `RequestResponseTestScenarioRunner` (returned by `TestScenarioRunner.requestResponseBased()`) now returns `ValidatedNel` with scenario compilation errors instead of throwing an exception in that case
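The shape of the new contract can be sketched with the standard library alone; here `Either` with an error list plays the role of `ValidatedNel`, and all names are illustrative rather than the actual Nussknacker signatures:

```scala
// Stdlib sketch: Either stands in for the cats.data.ValidatedNel that the
// real API returns. Compilation problems come back as a value, not a throw.
def compileAndRun(scenarioValid: Boolean): Either[List[String], String] =
  if (scenarioValid) Right("response")
  else Left(List("InvalidScenario"))

// Callers now pattern-match on the result instead of catching exceptions.
def describe(result: Either[List[String], String]): String = result match {
  case Right(resp) => s"ok: $resp"
  case Left(errs)  => s"compilation failed: ${errs.mkString(", ")}"
}
```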
- #3576 `/processes` endpoint without query parameters returns all scenarios - the previous behaviour was to return only unarchived ones. To fetch only unarchived scenarios, the `isArchived=false` query parameter has to be passed.
- #3824 Due to data serialization fix, Flink scenarios using Kafka sources with schemas may be incompatible and may need to be restarted with clean state.
- #3440 Feature: allow to define fragment outputs. Using fragments in scenarios has changed: each output defined in the fragment now has to be given an output name.
- #3370 Feature: scenario node category verification on validation. From now on, importing a scenario with nodes from categories other than the scenario's category will not be allowed.
- #3436 Division by zero will cause a validation error. Tests that rely on `1/0` to generate exceptions should be changed to code like `1/{0, 1}[0]`
- #3473 `JsonRequestResponseSinkFactory` provides also a 'raw editor'; to turn it on, add `SinkRawEditorParamName -> "true"`
- #3608 Use `ZonedDateTime` for the `date-time` JsonSchema format and `OffsetTime` for the `time` format.
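In practice this means decoded values keep their offset; a small `java.time` illustration (values are made up):

```scala
import java.time.{OffsetTime, ZonedDateTime}

// After this change, JSON Schema "date-time" values map to ZonedDateTime
// and "time" values to OffsetTime, so the offset survives decoding.
val dateTime = ZonedDateTime.parse("2022-07-10T12:30:00+02:00")
val time     = OffsetTime.parse("12:30:00+02:00")
```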
- #3406 Migration from Scalatest 3.0.8 to Scalatest 3.2.10 - if necessary, see the Scalatest migration guides, https://www.scalatest.org/release_notes/3.1.0 and https://www.scalatest.org/release_notes/3.2.0
- #3431 Renamed `helper-utils` to `default-helpers`, separated `MathUtils` from `components-utils` to `math-utils`, removed dependencies from `helper-utils`
- #3420 `DeploymentManagerProvider.typeSpecificInitialData` takes deployment `Config` now
- #3493, #3582 Added methods `DeploymentManagerProvider.additionalPropertiesConfig` and `DeploymentManagerProvider.additionalValidators`
- #3506 Changed `LocalDateTime` to `Instant` in `OnDeployActionSuccess` in `listener-api`
- #3513 Replace `EspProcess` with `CanonicalProcess` in all parts of the API except for the compiler.
- #3545 `TestScenarioRunner.flinkBased` should be used instead of `NuTestScenarioRunner.flinkBased`. Before this, you need to `import pl.touk.nussknacker.engine.flink.util.test.FlinkTestScenarioRunner._`
- #3386 Changed the `CustomProcessValidator` `validate` method. It now receives `CanonicalProcess` instead of `DisplayableProcess` and returns `ValidatedNel[ProcessCompilationError, Unit]` instead of `ValidationResult`. Moved `CustomProcessValidator` from module `nussknacker-restmodel` in package `validation` to `nussknacker-extensions-api`.
- #3586 Module `nussknacker-ui` was renamed to `nussknacker-designer`, `ui.conf` was renamed to `designer.conf`, `defaultUiConfig.conf` was renamed to `defaultDesignerConfig.conf`
- #3506 Dates returned by the REST API (createdAt, modifiedAt, createDate) are now returned in Zulu time, with timezone indication. This affects e.g. `/api/processes`, `/api/processes/{scenarioId}`, `/api/processes/{scenarioId}/activity`
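Client code parsing these fields can rely on standard ISO-8601 instant parsing; a minimal illustration (the concrete timestamp is made up):

```scala
import java.time.Instant

// A createdAt value as now rendered by the REST API: Zulu (UTC) time with
// an explicit timezone indicator.
val createdAt = Instant.parse("2022-09-01T12:00:00Z")
```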
- #3542 Node additional info path renamed from `nodes/{scenarioId}/additionalData` to `nodes/{scenarioId}/additionalInfo`
- #3471, #3553 `RequestResponseMetaData(path)` is changed to `RequestResponseMetaData(slug)`. The `V1_033__RequestResponseUrlToSlug` migration is ready for that; the change also applies to the Scenario DSL.
- #3513 Scenario DSL returns `CanonicalProcess` instead of `EspProcess`.
- #3630 `SubprocessOutput` changed to `SubprocessUsageOutput`, changes in `OutputVar` definition
- #3425 Deployment Manager for `request-response-embedded` configuration parameters changed:
  - `interface` -> `http.interface`
  - `port` -> `http.port`
  - `definitionMetadata` -> `request-response.definitionMetadata`
- #3502 Refactor of `KafkaProperties`: the `kafkaAddress` property has been deprecated. Please provide `kafkaProperties."bootstrap.servers"` instead
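A sketch of the new style, assuming a typical `kafka` configuration block (adjust the nesting to your own layout):

```hocon
kafka {
  # deprecated:
  # kafkaAddress: "broker1:9092,broker2:9092"
  kafkaProperties {
    "bootstrap.servers": "broker1:9092,broker2:9092"
  }
}
```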
- #3441 Updated Flink 1.14.5 -> 1.15.2. Some Flink artefacts no longer have a Scala version. Tests using Flink may need to disable checkpointing or reduce the time between checkpoints to prevent timeouts or long waits for tasks to finish.
- #2992 `deploySettings` changed to `deploymentCommentSettings`; when specified, it now requires you to also specify the field `validationPattern`; specifying `exampleComment` is optional.
  - `commentSettings` fields modified: `matchExpression` changed to `substitutionPattern`, `link` changed to `substitutionLink`.
- #3165 Config is not exposed over http (GET /api/app/config/) by default. To enable it, set configuration `enableConfigEndpoint` to `true`.
- #3223 OAuth2 configuration `defaultTokenExpirationTime` changed to `defaultTokenExpirationDuration`
- #3263 Batch periodic scenarios carry their processing type to distinguish scenarios with different categories. For existing scenarios the processing type is migrated to `default`. Set `deploymentManager.processingType` to `default` or update the periodic scenarios table with the actual processing type value - ideally it should be the same value as the periodic engine key in `scenarioTypes`.
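A hypothetical configuration fragment illustrating the first option (exact nesting depends on your deployment manager configuration):

```hocon
deploymentManager {
  # should match the periodic engine key used in scenarioTypes
  processingType: "default"
}
```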
- #2992 `OnDeployActionSuccess` in `ProcessChangeEvent` now requires an instance of `Option[Comment]` instead of `Option[String]` as the parameter with deploymentComment information. Added abstract class `Comment` in `listener-api`.
- #3136 Improvements: Lite Kafka testkit
  - `ConfluentUtils.serializeRecordToBytesArray` replaced by `ConfluentUtils.serializeDataToBytesArray`
  - `ConfluentUtils.deserializeSchemaIdAndRecord` replaced by `ConfluentUtils.deserializeSchemaIdAndData`
- #3178 Improvements: more complex test scenario runner result:
  - Right now each method from `TestScenarioRunner` should return `ValidatedNel[ProcessCompilationError, RunResult[R]]` where:
    - Invalid is a representation of process compilation errors
    - Valid is a representation of positive and negative scenario run results
- #3255 `TestReporter` util class is safer to use in parallel tests; methods require passing the scenario name
- #3265 #3288 #3297 #3299 #3309 #3316 #3322 #3328 #3330 Changes related to UniversalKafkaSource/Sink:
  - `RuntimeSchemaData` is generic - parametrized by `ParsedSchema` (AvroSchema and JsonSchema are supported).
  - `NkSerializableAvroSchema` renamed to `NkSerializableParsedSchema`
  - `SchemaWithMetadata` wraps `ParsedSchema` instead of `AvroSchema`.
  - `SchemaRegistryProvider` refactoring:
    - rename `SchemaRegistryProvider` to `SchemaBasedSerdeProvider`
    - decouple `SchemaRegistryClientFactory` from `SchemaBasedSerdeProvider`
  - `KafkaAvroKeyValueDeserializationSchemaFactory` renamed to `KafkaSchemaBasedKeyValueDeserializationSchemaFactory`
  - `KafkaAvroValueSerializationSchemaFactory` renamed to `KafkaSchemaBasedValueSerializationSchemaFactory`
  - `KafkaAvroKeyValueSerializationSchemaFactory` renamed to `KafkaSchemaBasedKeyValueSerializationSchemaFactory`
- #3253 `DeploymentManager` has a separate `validate` method, which should perform initial scenario validation and return reasonably quickly (while deploy can e.g. make a Flink savepoint etc.)
- #3313 Generic types handling changes:
  - `Typed.typedClass(Class[_], List[TypingResult])` is not available anymore. You should use the more explicit `Typed.genericTypeClass` instead
  - We check the count of generic parameters in `Typed.genericTypeClass` - a wrong number will cause an exception now
  - We populate generic parameters with the correct number of `Unknown` in non-generic aware versions of `Typed` factory methods like `Typed.apply` or `Typed.typedClass`
- #3071 More strict Avro schema validation:
  - `ValidationMode.allowOptional` was removed; please use `ValidationMode.lax` instead
  - `ValidationMode.allowRedundantAndOptional` was removed; please use `ValidationMode.lax` instead
  - Changes of `ValidationMode`: fields `acceptUnfilledOptional` and `acceptRedundant` were removed
- #3376 `FlinkKafkaSource.flinkSourceFunction`, `FlinkKafkaSource.createFlinkSource` and `DelayedFlinkKafkaConsumer.apply` take an additional argument, `FlinkCustomNodeContext`, now
- #3272 `KafkaZookeeperServer` renamed to `EmbeddedKafkaServer`, `zooKeeperServer` field changed type to `Option` and is hidden now.
- #3365 Numerous renames:
  - module `nussknacker-avro-components-utils` -> `nussknacker-schemed-kafka-components-utils`
  - module `nussknacker-flink-avro-components-utils` -> `nussknacker-flink-schemed-kafka-components-utils`
  - package `pl.touk.nussknacker.engine.avro` -> `pl.touk.nussknacker.engine.schemedkafka`
  - object `KafkaAvroBaseComponentTransformer` -> `KafkaUniversalComponentTransformer`
- #3412 More strict filtering of method types. Methods with parameters or results like `Collection[IllegalType]` are no longer available in SpEL.
- #3542 Numerous renames:
  - trait `NodeAdditionalInfo` -> `AdditionalInfo`
  - class `MarkdownNodeAdditionalInfo` -> `MarkdownAdditionalInfo`
  - trait `NodeAdditionalInfoProvider` -> `AdditionalInfoProvider` - the SPI provider's configuration files must be renamed from `pl.touk.nussknacker.engine.additionalInfo.NodeAdditionalInfoProvider` to `pl.touk.nussknacker.engine.additionalInfo.AdditionalInfoProvider`
  - method `AdditionalInfoProvider.additionalInfo` renamed to `nodeAdditionalInfo` and new method added: `propertiesAdditionalInfo`
- #3169 API endpoint `/api/app/healthCheck` returning a short JSON answer with "OK" status is now not secured - before the change it required an authenticated user with the "read" permission.
- #3187 #3224 Choice component replaces the Switch component. The "Default" choice edge type, `exprVal` and `expression` are now deprecated. For existing usages, you don't need to change anything. For new usages, if you want to extract a value, e.g. to simplify choice conditions, you need to define a new local variable before the choice using the variable component. The "Default" choice edge type can be replaced by adding a "true" condition at the end of the list of conditions
- #3328 Due to the addition of support for different schema types (AvroSchema and JsonSchema for now), the serialization format of `NkSerializableParsedSchema` has changed. Flink state compatibility of scenarios which use Avro sources or sinks has been broken.
- #3365 Due to renames (see section `Code API changes`) Flink state compatibility of scenarios which use Avro sources or sinks has been broken.
- #3249 #3250 Some Kafka related libraries were bumped: Confluent 5.5 -> 7.2, avro 1.9 -> 1.11, kafka 2.4 -> 3.2. It may influence your custom components if you depend on the `kafka-components-utils` or `avro-components-utils` module
- #3376 Behavior of Flink's Kafka deserialization error handling was changed - now, instead of a job failure, the invalid message is omitted and the configured `exceptionHandler` mechanism is used.
- `security.rolesClaim` changed to `security.rolesClaims`, type changed to list of strings
- `kafka.schemaRegistryCacheConfig` configuration entry was added - it was hardcoded before. The default value of `kafka.schemaRegistryCacheConfig.availableSchemasExpirationTime` was changed from 1 minute to 10 seconds, which will cause more frequent schema cache invalidation
- #3031 Attachments are now stored in the database (see more in section `Other changes`). `attachmentsPath` was removed. Optional config `attachments.maxSizeInBytes` was introduced with a default value of 10mb
- #2983 Extract Permission to extensions-api
  - Moved `pl.touk.nussknacker.ui.security.api.Permission` (security module) to `pl.touk.nussknacker.security.Permission` (extension-api module)
- #3040 Deprecated `pl.touk.nussknacker.engine.api.ProcessListener.sinkInvoked` method. Switch to the more general `endEncountered` method.
- #3076 New implicit parameter `componentUseCase: ComponentUseCase` was added to the `invoke()` method of all services extending `EagerServiceWithStaticParameters`
- #3031 Attachments are now stored in the database. As this feature was rarely used, automatic migration of attachments from disk to db is not provided. To stay consistent, the db table `process_attachments` had to be truncated.
- #3029 `KafkaConfig` has a new field `schemaRegistryCacheConfig: SchemaRegistryCacheConfig`. Flink state compatibility has been broken.
- #3116 Refactor `SchemaRegistryClientFactory` so it takes a dedicated config object instead of `KafkaConfig`. This change minimizes the chance of future Flink state compatibility breaks. `SchemaIdBasedAvroGenericRecordSerializer` is serialized in Flink state, so we provide it now with as few dependencies as necessary. Flink state compatibility has been broken again.
- #3363 Kafka consumer no longer sets `auto.offset.reset` to `earliest` by default. For default configuration files, you can use the `KAFKA_AUTO_OFFSET_RESET` env variable to easily change this setting.
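If you relied on the old behaviour, it can also be restored explicitly in configuration; a sketch assuming the standard `kafkaProperties` passthrough:

```hocon
kafka {
  kafkaProperties {
    "auto.offset.reset": "earliest"
  }
}
```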
- #2741 #2841 Removal of the custom scenario provider entails some API changes:
  - Replace `ProcessDeploymentData` by `CanonicalProcess` (as VO)
  - Replace scenario jsonString by `CanonicalProcess` at `DeploymentManager`, `ProcessConfigEnricherInputData`
- #2773 Using `VersionId` / `ProcessId` / `ProcessName` instead of `Long` or `String`: the `PullProcessRepository` API was changed, right now we use `VersionId` instead of `Long`
- #2830 `RunMode` is renamed to `ComponentUseCase` and the `Normal` value is split into: `EngineRuntime`, `Validation`, `ServiceQuery`, `TestDataGeneration`. `RunMode.Test` becomes `ComponentUseCase.TestRuntime`
- #2825, #2868 #2912 API modules changes:
  - Extracted new modules:
    - `nussknacker-scenario-api` with all scenario API parts from `api` and `interpreter`
    - `nussknacker-components-api` (and `nussknacker-lite-components-api`, `nussknacker-flink-components-api` etc.), which contain the API for creating components
    - `nussknacker-common-api` - base value classes shared between `scenario-api` and `components-api`, like `NodeId`, `Metadata` etc.
    - `nussknacker-extensions-api` - API of extensions other than components
  - Because of that, some changes in code were also introduced:
    - `NodeId` moved from `pl.touk.nussknacker.engine.api.context.ProcessCompilationError` to `pl.touk.nussknacker.engine.api`
    - `NodeExpressionId`, `DefaultExpressionId` and `branchParameterExpressionId` moved from `pl.touk.nussknacker.engine.api.context.ProcessCompilationError` to `pl.touk.nussknacker.engine.graph.expression`
    - `JobData` no longer contains `DeploymentData`, which is not accessible for components anymore
    - `DisplayJson`, `WithJobData`, `MultiMap` moved to `utils`
    - Some methods from API classes (e.g. `Parameter.validate`) and classes (`InterpretationResult`) moved to interpreter
    - `DeploymentManagerProvider.createDeploymentManager` now takes `BaseModelData` as an argument instead of `ModelData`. If you want to use this data to invoke the scenario, you should cast it to an invokable representation via: `import ModelData._; modelData.asInvokableModelData`
- #2878 #2898 #2924 Cleaning up of `-utils` modules
  - Extracted internal classes, not intended to be used in extensions, to the nussknacker-internal-utils module
  - Extracted component classes, not used directly by runtime/designer, to the nussknacker-components-utils module
  - Extracted kafka component classes, not used directly by lite-kafka-runtime/kafka-test-utils, to nussknacker-kafka-components-utils
  - Moved some classes that are in fact part of the API to -api modules (e.g. `ToJsonEncoder`)
  - Module renames:
    - nussknacker-avro-util to nussknacker-avro-components-utils
    - nussknacker-flink-avro-util to nussknacker-flink-avro-components-utils
    - nussknacker-flink-kafka-util to nussknacker-flink-kafka-components-utils
    - nussknacker-flink-util to nussknacker-flink-components-utils
    - nussknacker-request-response-util to nussknacker-request-response-components-utils
    - nussknacker-model-util to nussknacker-helpers-utils
  - Minor changes in code:
    - Use `val docsConfig = new DocsConfig(config); import docsConfig._` instead of `implicit val docsConfig = (...); import DocsConfig._`
    - Some component-specific methods are not available from `KafkaUtils`. Instead, they are available from `KafkaComponentsUtils`
    - `ToJsonEncoder.encoder` takes an `Any => Json` function instead of `BestEffortJsonEncoder` as a parameter
- #2907 Hide some details of metrics in `utils-internal` (`InstantRateMeter`, `InstantRateMeterWithCount`); use the method added to `MetricsProviderForScenario`
- #2916 Changes in `ProcessState` API:
  - Six similar methods creating `ProcessState` based on `StateStatus` and some other details were merged into one.
    - Methods removed:
      - Two variants of `ProcessState.apply` taking `ProcessStateDefinitionManager` as a parameter
      - `SimpleProcessState.apply`
      - Two variants of `ProcessStatus.simple`
      - `ProcessStatus.createState` taking `ProcessStateDefinitionManager` as a parameter
    - Method added: `ProcessStateDefinitionManager.processState` with some default parameter values
  - The `ProcessStatus` class is removed entirely. All methods returning `ProcessState` moved to `SimpleProcessStateDefinitionManager`, and `previousState: Option[ProcessState]` was removed from it. If you want to keep the previous state's deployment details and only change "status details", just use the `processState.withStatusDetails` method
  - `ProcessState`, `CustomAction` and their dependencies moved from `nussknacker-deployment-manager-api` to `nussknacker-scenario-deployment-api`; the `restmodel` module does not depend on `deployment-manager-api` anymore
- #2969 Action `ProcessActionType.Deploy` is now available by default for scenarios in the `SimpleStateStatus.DuringDeploy` state. Mind this if you depend on `OverridingProcessStateDefinitionManager` or `SimpleProcessStateDefinitionManager`, and specifically on their `statusActions` method. As an exception, the implementation for Flink, `FlinkProcessStateDefinitionManager`, stays the same as before (only `ProcessActionType.Cancel` is possible in this state), but this may be unified in the future.
- #2886 This change can break previous flink snapshot compatibility. Restoring state from previous snapshot asserts that restored serializer UID matches current serializer UID. This change ensures that in further release deployments UIDs persisted within snapshots are not re-generated in runtime.
- #2950 Remove the `MATH` helper, use `NUMERIC` methods (they work better with some number type conversions)
- #2483 `COUNTS_URL` environment variable is now `INFLUXDB_URL`, without the `query` path part.
- #2493 Kafka configuration should be moved to the components provider configuration - look at `components.kafka` in dev-application.conf for an example
- #2624 Default name for the `process` tag is now `scenario`. This affects metrics and count functionalities. Please update your Flink/Telegraf setup accordingly (see nussknacker-quickstart for details). If you still want to use the `process` tag (e.g. you have a lot of dashboards), please set the `countsSettings.metricsConfig.scenarioTag` setting to `process`. Also, the dashboard links format changed; see the documentation for details.
- #2645 Default models `genericModel.jar` and `liteModel.jar` were merged into `defaultModel.jar`; `managementSample.jar` was renamed to `devModel.jar`. If you use `defaultModel.jar`, it's important to include `flinkExecutor.jar` explicitly on the model classpath.
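The `countsSettings` override mentioned above could look like this (hypothetical fragment; key names as given in the entry):

```hocon
countsSettings {
  metricsConfig {
    # keep the pre-rename tag name for existing dashboards
    scenarioTag: "process"
  }
}
```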
- #2564 Flink union now takes only 'Output expression' parameters for branches (previously 'value' parameter), output variable must be of the same type, if you want to distinguish source branch in output variable please use map variable, example in Basic Nodes docs.
- #2554 Maven artifact `nussknacker-kafka-flink-util` became `nussknacker-flink-kafka-util` and `nussknacker-avro-flink-util` became `nussknacker-flink-avro-util`. The general naming convention is `nussknacker-$runtimeType-$moduleName`. Components inside the distribution changed layout to `components(/$runtimeType)/componentName.jar`, e.g. `components/flink/kafka.jar` or `components/openapi.jar`. `KafkaSource` became `FlinkKafkaSource`, `ConsumerRecordBasedKafkaSource` became `FlinkConsumerRecordBasedKafkaSource`, `KafkaSink` became `FlinkKafkaSink`, `KafkaAvroSink` became `FlinkKafkaAvroSink`
- #2535, #2625, #2645 Rename `standalone` to `request-response`:
  - Renamed modules and artifacts
  - `StandaloneMetaData` is now `RequestResponseMetaData`
  - Move `request-response` modules to `base` dir.
  - `standalone` in package names changed to `requestresponse`
  - `Standalone` in class/variable names changed to `RequestResponse`
  - `DeploymentManager/Service` uses a dedicated format of status DTO, instead of the ones from `deployment-manager-api`
  - Removed old, deprecated `jarPath` settings, in favour of `classPath` used in other places
  - Extracted `nussknacker-lite-request-response-components` module
- #2582 `KafkaUtils.toProducerProperties` sets up only basic properties now (`bootstrap.servers` and serializers) - before the change it was setting options which are not always a good choice (for transactional producers they weren't)
- #2600 `ScenarioInterpreter`, `ScenarioInterpreterWithLifecycle` now take an additional generic parameter: `Input`. `ScenarioInterpreter.invoke` takes `ScenarioInputBatch`, which now contains a list of `SourceId -> Input` instead of `SourceId -> Context`. Logic of `Context` preparation should be done in `LiteSource` instead of before the `ScenarioInterpreter.invoke` invocation. It means that `LiteSource` also takes this parameter and has a new method `createTransformation`.
- #2635 `ContextInitializer.initContext` now takes `ContextIdGenerator` instead of `nodeId` and returns just a function with the strategy of context initialization instead of a serializable function with `Lifecycle`. To use it with the Flink engine, use the `FlinkContextInitializingFunction` wrapper.
- #2649 `DeploymentManagerProvider` takes the new `ProcessingTypeDeploymentService` class as an implicit parameter
- #2564 'UnionParametersMigration' available to migrate parameter name from 'value' to 'Output expression' - please turn it on if you are using a 'union'-like component
- #2645 Simplify structure of available models (implementations of `ProcessConfigCreator`). `defaultModel.jar` and components should be used instead of custom implementations of `ProcessConfigCreator`; the only exception is when one wants to customize `ExpressionConfig`. Also, the `nussknacker-flink-engine` module became `nussknacker-flink-executor`.
- #2651 `ValidationContext.clearVariables` now clears also the parent reference. Important when invoked inside fragments.
- #2673 `KafkaZookeeperUtils` renamed to `KafkaTestUtils`; it doesn't depend on ZooKeeper anymore.
- #2686 `ServiceWithStaticParameters` renamed to `EagerServiceWithStaticParameters`.
- #2695 `nodeId` replaced with `NodeComponentInfo` in `NuExceptionInfo` - a simple wrapper class which holds the same `nodeId` and also `componentInfo`. Migration is straightforward, just put `nodeId` into the new case class:
  - `NuExceptionInfo(None, exception, context)` => stays the same
  - `NuExceptionInfo(Some(nodeId), exception, context)` => `NuExceptionInfo(Some(NodeComponentInfo(nodeId, None)), exception, context)`
    - if an exception is thrown inside the component, additional information can be provided:
      - for base components (like `filter` or `split`): `NodeComponentInfo.forBaseNode("nodeId", ComponentType.Filter)`
      - for others: `NodeComponentInfo("nodeId", ComponentInfo("kafka-avro-source", ComponentType.Source))`
  - The same migration has to be applied to the `ExceptionHandler.handling()` method.
- #2824 'ProcessSplitterMigration' available to migrate node name from 'split' to 'for-each' (see #2781) - please turn it on if you are using the 'split' component
:::info
Summary:

- A lot of internal refactoring was made to separate code/API specific to Flink. If your deployment has custom components, pay special attention to:
  - `Lifecycle` management
  - Kafka components
  - Differences in artifacts and packages
- Some of the core dependencies: cats, cats-effect and circe were upgraded. It affects mainly code, but it may also have an impact on state compatibility and performance.
- Default Flink version was bumped to 1.14 - see https://github.com/TouK/nussknacker-flink-compatibility on how to run Nu on older Flink versions.
- Execution of SpEL expressions is now checked more strictly, due to security considerations. These checks can be overridden with a custom `ExpressionConfig`.
:::

:::info
Apart from that:

- minor configuration naming changes
- removal of a few minor, not documented features (e.g. SQL Variable)
:::
- #2208 Upgrade of cats, cats-effects, circe. An important nuisance: we didn't upgrade sttp, so we cannot depend on `"com.softwaremill.sttp.client" %% "circe"`. Instead, the code is copied. Make sure you don't include the sttp-circe integration as a transitive dependency, but use the class from http-utils instead.
- #2176 `EnrichDeploymentWithJarDataFactory` was replaced with `ProcessConfigEnricher`.
- #2278 SQL Variable is removed
- #2280 Added optional `defaultValue` field to `Parameter`. In `GenericNodeTransformation` it can be set to `None` - values will be determined automatically.
- #2289 Savepoint path in the `/api/adminProcessManagement/deploy` endpoint is passed as a `savepointPath` parameter instead of a path segment.
- #2293 Enhancement: change `nodeCategoryMapping` configuration to `componentsGroupMapping`
- #2301 #2620 `GenericNodeTransformation.initialParameters` was removed - now `GenericNodeTransformation.contextTransformation` is used instead. To make the Admin tab -> Invoke service form work, use the `WithLegacyStaticParameters` trait
- #2409 `JsonValidator` is now not determined by default based on `JsonParameterEditor` but must be explicitly defined by the `@JsonValidator` annotation
- #2304 Upgrade to Flink 1.14. Pay attention to Flink dependencies - in some (e.g. runtime) there is no longer a Scala version.
- #2295 `FlinkLazyParameterFunctionHelper` allows (and sometimes requires) correct exception handling
- #2307 Changed `nussknacker-kafka` module name to `nussknacker-kafka-util`
- #2310 Changed `nussknacker-process` module name to `nussknacker-flink-engine`
- #2300 #2343 Enhancement: refactor and improvements at components group:
  - Provided `ComponentGroupName` as VO
  - `SingleNodeConfig` was renamed to `SingleComponentConfig` and moved from the `pl.touk.nussknacker.engine.api.process` package to `pl.touk.nussknacker.engine.api.component`
  - Configuration `category` in node configuration was replaced by `componentGroup`
  - Configuration `nodes` in model configuration was replaced by `componentsUiConfig`
  - Additional refactor: `ProcessToolbarService` moved from the `pl.touk.nussknacker.ui.service` package to `pl.touk.nussknacker.ui.process`
  - `DefinitionPreparer` was renamed to `ComponentDefinitionPreparer`
  - `NodesConfigCombiner` was removed
  - REST API /api/processDefinitionData/* response JSON was changed:
    - `nodesToAdd` was renamed to `componentGroups`
    - `posibleNode` was renamed to `components`
    - `nodesConfig` was renamed to `componentsConfig`
    - the config `icon` property from `componentsConfig` right now should be relative to `http.publicPath`, e.g. `/assets/components/Filter.svg` (before it was just `Filter.svg`), or a url (with `http`/`https`)
- #2346 Remove `endResult` from `Sink` in graph.
  - `Sink` no longer defines the `testOutput` method - it should be handled by the respective implementations
  - Change in the definition of `StandaloneSink`, previously `StandaloneSinkWithParameters`, as output always has to be computed with sink parameters now
  - Changes in the definition of `FlinkSink`, to better handle capturing test data
  - Removal of the `.sink` method in `GraphBuilder` - use `.emptySink` if suitable
- #2331 `KafkaAvroBaseTransformer` companion object renamed to `KafkaAvroBaseComponentTransformer`; `KryoGenericRecordSchemaIdSerializationSupport` renamed to `GenericRecordSchemaIdSerializationSupport`
- #2305 Enhancement: change `processingTypeToDashboard` configuration to `scenarioTypeToDashboard`
- #2296 Scenarios & Fragments have separate TypeSpecificData implementations. Also, we removed the `isSubprocess` field from process JSON, and respectively from the MetaData constructor. See the corresponding db migration `V1_031__FragmentSpecificData.scala`
- #2368 `WithCategories` now takes categories as an `Option[List[String]]` instead of `List[String]`. You should wrap the given list of categories with `Some(...)`. `None` means that the component will be available in all categories.
- #2360 `union`, `union-memo` and `dead-end` components were extracted from `model/genericModel.jar` to `components/baseComponents.jar`. If you have your own `application.conf` which changes `scenarioTypes`, you should add a `"components/baseComponents.jar"` entry to the `classPath` array
- #2337 Extract base engine from standalone
  - Common functionality of the base engine (i.e. microservice based, without Flink) is extracted to `base-api` and `base-runtime`
  - new API for custom components (`pl.touk.nussknacker.engine.baseengine.api.customComponentTypes`)
  - `StandaloneProcessInterpreter` becomes `StandaloneScenarioEngine`
  - Replace `Either[NonEmptyList[Error], _]` with `ValidatedNel[Error, _]` as the return type
  - `StandaloneContext` becomes `EngineRuntimeContext`
- #2349 `queryable-state` module was removed, `FlinkQueryableClient` was moved to `nussknacker-flink-manager`. `PrettyValidationErrors`, `CustomActionRequest` and `CustomActionResponse` moved from `nussknacker-ui` to `nussknacker-restmodel`.
- #2361 Removed `security` dependency from `listener-api`. `LoggedUser` replaced with a dedicated class in `listener-api`.
- #2385 Deprecated `CustomStreamTransformer.clearsContext` was removed. Use
  ```scala
  @MethodToInvoke
  def execute(...) =
    ContextTransformation
      .definedBy(ctx => Valid(ctx.clearVariables ...))
      .implementedBy(...)
  ```
  instead.
- #2348 #2459 #2486 #2490 #2496 #2536 Introduce `KafkaDeserializationSchema` and `KafkaSerializationSchema` traits to decouple from the flink dependency. Move `KeyedValue` to `nussknacker-util`, move `SchemaRegistryProvider` to `utils/avro-util`. To move between nussknacker's/flink's Kafka(De)serializationSchema use `wrapToFlink(De)serializationSchema` from `FlinkSerializationSchemaConversions`. `SchemaRegistryProvider` and `ConfluentSchemaRegistryProvider` are now in the `nussknacker-avro-util` module. `FlinkSourceFactory` is gone - use `SourceFactory` instead. `KafkaSourceFactory`, `KafkaAvroSourceFactory`, `KafkaSinkFactory`, `KafkaAvroSinkFactory`, and `ContextIdGenerator` do not depend on flink. Extracted `KafkaSourceImplFactory`, `KafkaSinkImplFactory` and `KafkaAvroSinkImplFactory`, which deliver the implementation of a component (after all validations and parameters evaluation). Use respectively `FlinkKafkaSourceImplFactory`, `FlinkKafkaSinkImplFactory` and `FlinkKafkaAvroSinkImplFactory` to deliver flink implementations. Moved non-flink specific serializers, deserializers, `BestEffortAvroEncoder`, `ContextIdGenerator`s and `RecordFormatter`s to kafka-util/avro-util. `KafkaDelayedSourceFactory` is now `DelayedKafkaSourceFactory`. `FixedRecordFormatterFactoryWrapper` moved to `RecordFormatterFactory`.
- #2477 `FlinkContextInitializer` and `FlinkGenericContextInitializer` merged into `ContextInitializer`; `BasicFlinkContextInitializer` and `BasicFlinkGenericContextInitializer` merged into `BasicContextInitializer`. All of them moved to the `pl.touk.nussknacker.engine.api.process` package. `ContextInitializer.validationContext` returns `ValidatedNel` - before this change, errors during context initialization weren't accumulated. `ContextInitializingFunction` is now a Scala function instead of Flink's `MapFunction`. You should wrap it with `RichLifecycleMapFunction` to make sure it is opened correctly by Flink. `InputMeta` was moved to the `kafka-util` module.
- #2389 #2391 `deployment-manager-api` module was extracted, and `DeploymentManagerProvider`, `ProcessingTypeData` and `QueryableClient` were moved from `interpreter` into it. `DeploymentManager`, `CustomAction` and `ProcessState` were moved from `api` to `deployment-manager-api`. `ProcessingType` was moved to the `rest-model` package.
- #2393 Added `ActorSystem`, `ExecutionContext` and `SttpBackend` into `DeploymentManagerProvider.createDeploymentManager`. During clean-ups the `nussknacker-http-utils` dependency on `async-http-client-backend-future` was also removed, and `SttpBackend` was added to the `CountsReporterCreator.createReporter` arguments.
- #2397 Common `EngineRuntimeContext` lifecycle and `MetricsProvider`. This may cause runtime consequences - make sure your custom services/listeners invoke `open`/`close` correctly - especially in complex inheritance scenarios.
  - `Lifecycle` now has `EngineRuntimeContext` as a parameter, `JobData` is embedded in it.
  - `TimeMeasuringService` replaces `GenericTimeMeasuringService`; Flink/Standalone flavours of `TimeMeasuringService` are removed
  - `EngineRuntimeContext` and `MetricsProvider` moved to base API, `RuntimeContextLifecycle` moved to base API as `Lifecycle`
  - `GenericInstantRateMeter` is now `InstantRateMeter`
  - Flink `RuntimeContextLifecycle` should be replaced in most cases by `Lifecycle`
  - In the Flink engine, `MetricsProvider` (obtained with `EngineRuntimeContext`) should be used in most places instead of `MetricUtils`
- #2486
Context.withInitialId
is deprecated now - useEngineRuntimeContext.contextIdGenerator
instead.EngineRuntimeContext
can be accessible viaFlinkCustomNodeContext.convertToEngineRuntimeContext
- #2377 #2534 Removed
clazz
fromSourceFactory
. Remove generic parameter fromSource
andSourceFactory
. The return type of a source should be provided either by:returnType
field of@MethodToInvoke
ContextTransformation
APIGenericNodeTransformer
APISourceFactory.noParam
- #2453 Custom actions for
PeriodicDeploymentManager
now can be defined and implemented outside this class, inPeriodicCustomActionsProvider
created byPeriodicCustomActionsProviderFactory
. If you do not need them, just passPeriodicCustomActionsProviderFactory.noOp
to object'sPeriodicDeploymentManager
factory method. - #2501
nussknacker-baseengine-components
module renamed tonussknacker-lite-base-components
- #2221 ReflectUtils
fixedClassSimpleNameWithoutParentModule
renamed tosimpleNameWithoutSuffix
- #2495 TypeSpecificDataInitializer trait changed to TypeSpecificDataInitializ
- #2245
FailedEvent
has been specified inFailedOnDeployEvent
andFailedOnRunEvent
- #1439 #2090 Upgrade to Flink 1.13.
setTimeCharacteristic
is deprecated, and should be handled automatically by Flink.UserClassLoader
was removed, use appropriate Flink objects or context ClassLoader.- RocksDB configuration is turned on by
rocksdb.enable
instead ofrocksdb.checkpointDataUri
which is not used now.
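In HOCON form this could look like the following sketch (only the `rocksdb.enable` key comes from the entry above; the nesting is an assumption):

```hocon
# rocksdb.enable replaces the old rocksdb.checkpointDataUri setting
rocksdb {
  enable: true
}
```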
- #2133 SQL Variable is hidden in generic model, please look at comment in
defaultModelConfig.conf
- #2152
schedulePropertyExtractor
parameter ofPeriodicDeploymentManagerProvider
was changed to a factory, replace with a lambda creating the original property extractor. - #2159
useTypingResultTypeInformation
option is now enabled by default - #2108 Changes in
ClassExtractionSettings
:- Refactor of classes defining extraction rules,
TypedClass
has privateapply
method, please useTyped.typedClass
- Fewer classes/methods are accessible in SpEL; in particular Scala collections, the internal time API, and methods returning or taking parameters of excluded classes
- Changes in
OAuth2
security components:- refactoring of
OpenIdConnectService
, now it's namedGenericOidcService
(it's best to useOidcService
, which can handle most of the configuration automatically)
- New security settings, in particular new flags in
ExpressionConfig
:strictMethodsChecking
staticMethodInvocationsChecking
methodExecutionForUnknownAllowed
dynamicPropertyAccessAllowed
spelExpressionExcludeList
- #2101 Global permissions can be arbitrary string, for admin user it's not necessary to return global permissions
- #2182 To avoid classloader leaks during SQL
DriverManager
registration, HSQLDB (used e.g. for SQL Variable) is no longer included in model jars, it should be added in Flinklib
dir
-
#1479
ProcessId
andVersionId
moved to API included inProcessVersion
, remove spuriousProcessId
andProcessVersionId
in restmodel. -
#1422 Removed
ServiceReturningType
andWithExplicitMethod
, useEagerServiceWithStaticParameters
,EnricherContextTransformation
orSingleInputGenericNodeTransformation
-
#1845
AuthenticatorData
has been renamed toAuthenticationResources
and changed into a trait,apply
construction has been preserved.AuthenticatorFactory
and itscreateAuthenticator
method has been renamed toAuthenticationProvider
andcreateAuthenticationResources
. It is recommended to rename the main class of any custom authentication module to<Something>AuthenticationProvider
accordingly. -
#1542
KafkaConfig
now has new parametertopicsExistenceValidationConfig
. WhentopicsExistenceValidationConfig.enabled = true
Kafka sources/sinks validate whether the provided topic exists when the cluster is configured withauto.create.topics.enable=false
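A minimal sketch of enabling the check (the `topicsExistenceValidationConfig.enabled` key comes from the entry above; nesting it under `kafka` is an assumption):

```hocon
kafka {
  topicsExistenceValidationConfig {
    # fail fast when a scenario references a topic that does not exist
    enabled: true
  }
}
```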
-
#1416
OAuth2Service
has changed. You can still use your old implementation by importingOAuth2OldService
with an alias.OAuth2ServiceFactory.create
method now accepts implicit parameters for anExecutionContext
andsttp.HttpBackend
. You can ignore them to maintain previous behaviour, but it is always better to use them instead of locally defined ones. -
#1346
AggregatorFunction
now takes type of stored state that can beimmutable.SortedMap
(previous behaviour) orjava.util.Map
(using Flink's serialization) andvalidatedStoredType
parameter for providing betterTypeInformation
for aggregated values -
#1343
FirstAggregator
changed serialized state, it is not compatible,Aggregator
trait has new methodcomputeStoredType
-
#1352 and #1568 The AvroStringSettings class has been introduced, which allows controlling whether the Avro type
string
is represented byjava.lang.String
(also in runtime) orjava.lang.CharSequence
(implemented in runtime byorg.apache.avro.util.Utf8
). This setting is available through environment variableAVRO_USE_STRING_FOR_STRING_TYPE
- default istrue
. Keep in mind that this setting is global: it applies to all processes running on Flink, and changing the value requires restarting the TaskManager.
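For example, to switch Avro strings to `CharSequence`/`Utf8` (the variable name comes from the entry above; it must be visible to the TaskManager process, so remember the required restart):

```shell
# false => Avro "string" is represented by java.lang.CharSequence
# (org.apache.avro.util.Utf8 at runtime); true (the default) => java.lang.String
export AVRO_USE_STRING_FOR_STRING_TYPE=false
echo "$AVRO_USE_STRING_FOR_STRING_TYPE"
```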
#1361 Lazy variables are removed; you should use standard enrichers for those cases. Their handling has been a source of many problems, and they made it harder to reason about the execution of a process.
-
#1373 Creating
ClassLoaderModelData
directly is not allowed, useModelData.apply
with plain config, wrapping with ModelConfigToLoad by yourself is not needed. -
#1406
ServiceReturningType
is deprecated in favour ofEagerService
-
#1445
RecordFormatter
now handlesTestDataSplit
for Kafka sources. It is required inKafkaSource
creation, instead ofTestDataSplit
-
#1433 Pass DeploymentData to process via JobData, additional parameters to deployment methods are needed. Separate
ExternalDeploymentId
fromDeploymentId
(generated by NK) -
#1466
ProcessManager.deploy
can returnExternalDeploymentId
-
- Slight change of API of
StringKeyedValueMapper
- Change of semantics of some parameters of
AggregatorFunction
,AggregatorFunctionMixin
(storedAggregateType becomes aggregateElementType)
-
#1405 'KafkaAvroSink' requires more generic 'AvroSinkValue' as value parameter
-
- Change of
FlinkSource
API: sourceStream produces stream of initializedContext
(DataStream[Context]
) This initialization step was previously performed withinFlinkProcessRegistrar.registerSourcePart
. Now it happens explicitly within the flink source. FlinkIntermediateRawSource
is used as an extension to flink sources, it prepares source with typical stream transformations (add source function, set uid, assign timestamp, initializeContext
)FlinkContextInitializer
is used to initializeContext
. It provides map function that transforms raw event (produced by flink source function) intoContext
variable. Default implementation ofFlinkContextInitializer
, seeBasicFlinkContextInitializer
, sets the raw event value to a single "input" variable.- For sources based on
GenericNodeTransformation
it allows to initializeContext
with more than one variable. Default implementation of initializer, seeBasicFlinkGenericContextInitializer
, provides default definition of variables as aValidationContext
with single "input" variable. The implementation requires to provide separately the definition of "input" variable type (TypingResult
). SeeGenericSourceWithCustomVariablesSample
. - To enable "test source" functionality, a source needs to be extended with
SourceTestSupport
. - For flink sources that use
TestDataParserProvider
switch toFlinkSourceTestSupport
(which is used to provide "test source" functionality for flink sources). - Old
TestDataParserProvider
is renamed toSourceTestSupport
- To enable the test data generator for a "test source", a source needs to be extended with both
SourceTestSupport
andTestDataGenerator
. What was related to "test source" functionality and was obsolete inFlinkSource
is now extracted toFlinkSourceTestSupport
. FlinkCustomNodeContext
has access toTypeInformationDetection
, which allows getting TypeInformation for the node stream, mapped from the ValidationContext.- For kafka sources
RecordFormatter
parses raw test data toConsumerRecord
which fits into deserializer (instead ofProducerRecord
that required another transformation). - Definitions of names of common
Context
variables are moved toVariableConstants
(instead ofInterpreter
).
-
#1497 Changes in
PeriodicProcessManager
, changePeriodicProperty
toScheduleProperty
-
- trait
KafkaAvroDeserializationSchemaFactory
uses both key and value ClassTags and schemas (instead of value-only), check the order of parameters. - ClassTag is provided in params in avro key-value deserialization schema factory:
KafkaAvroKeyValueDeserializationSchemaFactory
BaseKafkaAvroSourceFactory
is able to read both key and value schema determiner to build proper DeserializationSchema (support for keys is not fully introduced in this change)
-
#1514
ExecutionConfigPreparer
has different method parameter -JobData
, which has more info than previous parameters -
#1532
TypedObjectTypingResult#fields
now usesscala.collection.immutable.ListMap
to keep field order
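The practical effect can be illustrated with plain `scala.collection.immutable.ListMap` (the field names below are purely illustrative):

```scala
import scala.collection.immutable.ListMap

// ListMap preserves insertion order, so the order of fields in
// TypedObjectTypingResult is now stable; a default Map gives no such guarantee.
val fields = ListMap("id" -> "Long", "name" -> "String", "age" -> "Int")
assert(fields.keys.toList == List("id", "name", "age"))
```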
#1546
StandaloneCustomTransformer
now takes a list ofContext
objects, to process them in one go -
#1557 Some classes from standalone engine were moved to standalone api to remove engine to (model) utils dependency:
StandaloneContext
,StandaloneContextLifecycle
,MetricsProvider
-
#1558
FlinkProcessRegistrar
takes configuration directly fromFlinkProcessCompiler
(this can affect some tests setup) -
#1631 Introduction of
nussknacker.config.locations
property, drop use of standardconfig.file
property. Model configuration no longer has direct access to root UI config. -
- Replaced
KafkaSourceFactory
with source based onGenericNodeTransformation
, which gives access to setup ofValidationContext
andContext
initialization. To migrateKafkaSourceFactory
:- provide deserializer factory (source factory requires deserialization to
ConsumerRecord
):- use
ConsumerRecordDeserializationSchemaFactory
with currentDeserializationSchema
as a value deserializer, add key deserializer (e.g. org.apache.kafka.common.serialization.StringDeserializer) - or use
FixedValueDeserializationSchemaFactory
with simple key-as-string deserializer
- provide RecordFormatterFactory
- use
ConsumerRecordToJsonFormatterFactory
for whole key-value-and-metadata serialization - or, for value-only-and-without-metadata scenario, you can use current
RecordFormater
wrapped inFixedRecordFormatterFactoryWrapper
- provide timestampAssigner that is able to extract time from
ConsumerRecord[K, V]
- Removed
BaseKafkaSourceFactory
with multiple topics support: useKafkaSourceFactory
instead, see "source with two input topics" test case - Removed
SingleTopicKafkaSourceFactory
: useKafkaSourceFactory
with customprepareInitialParameters
,contextTransformation
andextractTopics
to alter parameter list and provide constant topic value. TypingResultAwareTypeInformationCustomisation
is moved to package pl.touk.nussknacker.engine.flink.api.typeinformation
Example of source with value-only deserialization and custom timestampAssigner:
// provide new deserializer factory with old schema definition for event's value val oldSchema = new EspDeserializationSchema[SampleValue](bytes => io.circe.parser.decode[SampleValue](new String(bytes)).toOption.get) val schemaFactory: KafkaDeserializationSchemaFactory[ConsumerRecord[String, SampleValue]] = new FixedValueDeserializationSchemaFactory(oldSchema) // ... provide timestampAssigner that extracts timestamp from SampleValue.customTimestampField // ... or use event's metadata: record.timestamp() def timestampExtractor(record: ConsumerRecord[String, SampleValue]): Long = record.value().customTimestampField val watermarkHandler = StandardTimestampWatermarkHandler.boundedOutOfOrderness[ConsumerRecord[String, SampleValue]](timestampExtractor, java.time.Duration.ofMinutes(10L)) val timestampAssigner: Option[TimestampWatermarkHandler[ConsumerRecord[String, SampleValue]]] = Some(watermarkHandler) // ... provide RecordFormatterFactory that allows to generate and parse test data with key, headers and other metadata val formatterFactory: RecordFormatterFactory = new ConsumerRecordToJsonFormatterFactory[String, SampleValue] // ... and finally val sourceFactory = new KafkaSourceFactory[String, SampleValue](schemaFactory, timestampAssigner, formatterFactory, dummyProcessObjectDependencies)
-
#1651
KafkaAvroSourceFactory
provides additional #inputMeta variable with event's metadata.- That source now has key and value type parameters. Those parameters are relevant for sources handling
SpecificRecord
s. ForGenericRecords
use explicitlyKafkaAvroSourceFactory[Any, Any]
. SpecificRecordKafkaAvroSourceFactory
extends wholeKafkaAvroSourceFactory
with context validation and initialization- New flag in
KafkaConfig
:useStringForKey
determines whether the event's key should be interpreted as an ordinary String (which is the default scenario). It is used in deserialization and for generating/parsing test data.
now provides factories to produce SchemaRegistryClient and RecordFormatter.- For
ConfluentSchemaRegistryProvider
KafkaConfig and ProcessObjectDependencies (which contains KafkaConfig data) are no longer required. That configuration is required by the factories at the moment the requested objects are created, which happens inKafkaAvroSourceFactory
(and that makes that all objects withinKafkaAvroSourceFactory
see the same kafka configuration). - Removed:
BaseKafkaAvroSourceFactory
, the class is incorporated intoKafkaAvroSourceFactory
to provide a flexible approach to creating a KafkaSource with ReturningType
for generic types (this is defined by ValidationContext, see alsoKafkaContextInitializer
that allows to return more than one variable)KafkaAvroValueDeserializationSchemaFactory
(source requires deserialization toConsumerRecord[K, V]
, there are only deserializers based onKafkaAvroKeyValueDeserializationSchemaFactory
)ConfluentKafkaAvroDeserializationSchemaFactory
, useConfluentKeyValueKafkaAvroDeserializationFactory
TupleAvroKeyValueKafkaAvroDeserializerSchemaFactory
, this approach is deprecated due to #inputMeta variable that contains key data
To migrate
KafkaAvroSourceFactory
:- Provide
KafkaConfig
with correctuseStringForKey
flag value. By default we want to handle keys as ordinary Strings, and all topics related to such a config require only value schema definitions (key schemas are ignored). For the specific scenario where a complex key with its own schema is provided, this flag is false and all topics related to this config require both key and value schema definitions. Example of default KafkaConfig override:override protected def prepareKafkaConfig: KafkaConfig = super.prepareKafkaConfig.copy(useStringForKey = false)
- provide your own
SchemaRegistryProvider
(or useConfluentSchemaRegistryProvider
) - custom RecordFormatter can be wrapped in
FixedRecordFormatterFactoryWrapper
(or keepConfluentAvroToJsonFormatterFactory
) - provide timestampAssigner that is able to extract time from
ConsumerRecord[K, V]
(see example above)
-
#1741 Minor changes in
KafkaUtils
,NonTransientException
usesInstant
instead ofLocalDateTime
-
#1806 Remove old, deprecated API:
EvictableState
,RichEvictableState
- useEvictableStateFunction
checkpointInterval
- usecheckpointConfig.checkpointInterval
- old versions of
sampleTransformers
- use newer ones MiniClusterExecutionEnvironment.runningJobs()
- useflinkMiniClusterHolder.runningJobs()
-
#1807 Removed
jdbcServer
, please use Postgres for production-ready setups -
- RecordFormatterFactory instead of one, uses two type parameters: K, V
- ConfluentAvroToJsonFormatter is produced by ConfluentAvroToJsonFormatterFactory
- ConfluentAvroToJsonFormatter produces test data in valid JSON format, does not use Separator
- ConfluentAvroMessageFormatter has asJson method instead of writeTo
- ConfluentAvroMessageReader has readJson method instead of readMessage Example test data object:
{"keySchemaId":null,"valueSchemaId":1,"consumerRecord":{"key":null,"value":{"first":"Jan","last":"Kowalski"},"topic":"testAvroRecordTopic1","partition":0,"offset":0,"timestamp":1624279687756,"timestampType":"CreateTime","headers":{},"leaderEpoch":0}}
-
#1663 Default
FlinkExceptionHandler
implementations are deprecated, useConfigurableExceptionHandler
instead. -
#1731 RocksDB config's flag
incrementalCheckpoints
is turned on by default. -
#1825 Default dashboard renamed from
flink-esp
tonussknacker-scenario
-
#1836 Change default
kafka.consumerGroupNamingStrategy
toprocessId-nodeId
. -
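If you want to keep the previous behaviour, the old strategy can be set explicitly (assuming `processId` was the previous default, as entry #919 below suggests):

```hocon
kafka {
  # restore the pre-change consumer group naming
  consumerGroupNamingStrategy: processId
}
```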
#1357 Run mode added to nodes.
ServiceInvoker
interface was extended with new, implicitrunMode
parameter. -
#1886 aggregate-sliding with emitWhenEventLeft = true, aggregate-tumbling and aggregate-session components no longer emit the full context of variables defined before the node (for performance reasons, and because it wasn't obvious which context was emitted). If you want to emit some information other than the aggregated value and key (available via the new
#key
variable), you should use#AGG.map
expression inaggregateBy
. -
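As a sketch, an aggregation carrying an extra field could look like this (the field names are purely illustrative; check the aggregate component documentation for the exact split between the aggregator and aggregateBy parameters):

```
aggregator:  #AGG.map({total: #AGG.sum, firstEvent: #AGG.first})
aggregateBy: {total: #input.amount, firstEvent: #input}
```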
#1910
processTypes
renamed toscenarioTypes
. You can still use oldprocessTypes
configuration. Old configuration will be removed in version0.5.0
. -
Various naming changes:
-
#1313 Kafka Avro API passes
KafkaConfig
duringTypeInformation
determining -
#1305 Kafka Avro API passes
RuntimeSchemaData
instead ofSchema
in various places -
#1304
SerializerWithSpecifiedClass
was moved toflink-api
module. -
#1044 Upgrade to Flink 1.11. Current watermark/timestamp mechanisms are deprecated in Flink 1.11, new API
TimestampWatermarkHandler
is introduced, withLegacyTimestampWatermarkHandler
as wrapper for previous mechanisms. -
#1244
Parameter
has new parameter 'variablesToHide' withSet
of variable names that will be hidden before parameter's evaluation -
#1159 #1170 Changes in
GenericNodeTransformation
API:- Now
implementation
takes additional parameter with final state value determined duringcontextTransformation
DefinedLazyParameter
andDefinedEagerParameter
holdsexpression: TypedExpression
instead ofreturnType: TypingResult
DefinedLazyBranchParameter
andDefinedEagerBranchParameter
holdsexpressionByBranchId: Map[String, TypedExpression]
instead ofreturnTypeByBranchId: Map[String, TypingResult]
-
- Now
SimpleSlidingAggregateTransformerV2
andSlidingAggregateTransformer
is deprecated in favour ofSlidingAggregateTransformerV2
- Now
SimpleTumblingAggregateTransformer
is deprecated in favour ofTumblingAggregateTransformer
- Now
SumAggregator
,MaxAggregator
andMinAggregator
don't change the type of the aggregated value (previously it was changed to Double) - Now
SumAggregator
,MaxAggregator
andMinAggregator
return null instead of0D
/Double.MaxValue
/Double.MinValue
for the case when no element was added beforegetResult
-
#1149 FlinkProcessRegistrar refactor (can affect test code)
-
#1166
model.conf
should be renamed todefaultModelConfig.conf
-
#1218 FlinkProcessManager is no longer bundled in ui uber-jar. In docker/tgz distribution
-
#1255 Moved displaying
Metrics tab
tocustomTabs
-
#1257 Improvements: Flink test util package
- Added methods:
cancelJob
,submitJob
,listJobs
,runningJobs
toFlinkMiniClusterHolder
- Deprecated:
runningJobs
, fromMiniClusterExecutionEnvironment
- Removed:
getClusterClient
fromFlinkMiniClusterHolder
interface, because of flink compatibility at Flink 1.9 - Renamed:
FlinkStreamingProcessRegistrar
toFlinkProcessManager
-
#1303 TypedObjectTypingResult has a new field: additionalInfo
-
#1104 Creation of
FlinkMiniCluster
is now extracted fromStoppableExecutionEnvironment
. You should create it using e.g.:val flinkMiniClusterHolder = FlinkMiniClusterHolder(FlinkTestConfiguration.configuration(parallelism)) flinkMiniClusterHolder.start()
and then create environment using:
flinkMiniClusterHolder.createExecutionEnvironment()
. At the end you should cleanup
flinkMiniClusterHolder
by:flinkMiniClusterHolder.stop()
.
FlinkMiniClusterHolder
should be created once for test class - it is thread safe and resource expensive.MiniClusterExecutionEnvironment
on the other hand should be created for each process. It is not thread safe because the underlyingStreamExecutionEnvironment
is not. You can useFlinkSpec
to achieve that. -
pl.touk.nussknacker.engine.queryablestate.QueryableClient
was moved fromqueryableState
module topl.touk.nussknacker.engine.api.queryablestate
package inapi
modulepl.touk.nussknacker.engine.queryablestate.QueryableState
was moved topl.touk.nussknacker.engine.api.queryablestate
- CustomTransformers from
pl.touk.nussknacker.engine.flink.util.transformer
inflinkUtil
module were moved to newflinkModelUtil
module. pl.touk.nussknacker.engine.testing.EmptyProcessConfigCreator
was moved frominterpreter
module topl.touk.nussknacker.engine.util.process
package inutil
module
-
#1039 Generic parameter of
LazyParameter[T]
has upper bound AnyRef now to avoid problems with bad type extraction. It caused changesAny
toAnyRef
in a few places - mainlyFlinkLazyParameterFunctionHelper
andFlinkCustomStreamTransformation
-
#1039
FlinkStreamingProcessRegistrar.apply
has a new parameter of typeExecutionConfigPreparer
. In production code you should passExecutionConfigPreparer.defaultChain()
there and in test code you should passExecutionConfigPreparer.unOptimizedChain()
. See scaladocs for more info. If you already have done some Flink'sExecutionConfig
set up before you've registered the process, you should consider creating your own chain usingExecutionConfigPreparer.chain()
. -
#1039
FlinkSourceFactory
doesn't takeTypeInformation
type class as a generic parameter now. Instead of this, it takesClassTag
.TypeInformation
is determined during source creation.typeInformation[T]
method was moved fromBasicFlinkSource
toFlinkSource
because there still must be some place to determine it for test purposes.
#965 'aggregate' node in generic model was renamed to 'aggregate-sliding'
-
#922 HealthCheck API has new structure, naming and JSON responses:
- old
/healthCheck
is moved to/healthCheck/process/deployment
- old
/sanityCheck
is moved to/healthCheck/process/validation
- top level
/healthCheck
indicates general "app-is-running" state
-
#879 Metrics use variables by default, see docs to enable old mode, suitable for graphite protocol. To use old way of sending:
- put
globalParameters.useLegacyMetrics = true
in each model configuration (to configure metrics sending in Flink) - put:
countsSettings { user: ... password: ... influxUrl: ... metricsConfig { nodeCountMetric: "nodeCount.count" sourceCountMetric: "source.count" nodeIdTag: "action" countField: "value" } }
-
Introduction to KafkaAvro API: #871, #881, #903, #981, #989, #998, #1007, #1014, #1034, #1041
API for KafkaAvroSourceFactory
was changed:
KafkaAvroSourceFactory
old way:
val clientFactory = new SchemaRegistryClientFactory
val source = new KafkaAvroSourceFactory(
new AvroDeserializationSchemaFactory[GenericData.Record](clientFactory, useSpecificAvroReader = false),
clientFactory,
None,
processObjectDependencies = processObjectDependencies
)
KafkaAvroSourceFactory
new way :
val schemaRegistryProvider = ConfluentSchemaRegistryProvider(processObjectDependencies)
val source = new KafkaAvroSourceFactory(schemaRegistryProvider, processObjectDependencies, None)
Provided new API for Kafka Avro Sink:
val kafkaAvroSinkFactory = new KafkaAvroSinkFactory(schemaRegistryProvider, processObjectDependencies)
Additional changes:
- Bump up confluent package to 5.5.0
- (Refactor Kafka API) Moved
KafkaSourceFactory
topl.touk.nussknacker.engine.kafka.sink
package - (Refactor Kafka API) Changed
BaseKafkaSourceFactory
, now it requiresdeserializationSchemaFactory: KafkaDeserializationSchemaFactory[T]
- (Refactor Kafka API) Moved
KafkaSinkFactory
topl.touk.nussknacker.engine.kafka.source
package - (Refactor Kafka API) Renamed
SerializationSchemaFactory
toKafkaSerializationSchemaFactory
- (Refactor Kafka API) Renamed
DeserializationSchemaFactory
toKafkaDeserializationSchemaFactory
- (Refactor Kafka API) Renamed
FixedDeserializationSchemaFactory
toFixedKafkaDeserializationSchemaFactory
- (Refactor Kafka API) Renamed
FixedSerializationSchemaFactory
toFixedKafkaSerializationSchemaFactory
- (Refactor Kafka API) Removed
KafkaSerializationSchemaFactoryBase
- (Refactor Kafka API) Replaced
KafkaKeyValueSerializationSchemaFactoryBase
byKafkaAvroKeyValueSerializationSchemaFactory
(it handles only avro case now) - (Refactor Kafka API) Removed
KafkaDeserializationSchemaFactoryBase
- (Refactor Kafka API) Replaced
KafkaKeyValueDeserializationSchemaFactoryBase
byKafkaAvroKeyValueDeserializationSchemaFactory
(it handles only avro case now) - (Refactor KafkaAvro API) Renamed
AvroDeserializationSchemaFactory
toConfluentKafkaAvroDeserializationSchemaFactory
and moved toavro.schemaregistry.confluent
package - (Refactor KafkaAvro API) Renamed
AvroKeyValueDeserializationSchemaFactory
toConfluentKafkaAvroDeserializationSchemaFactory
and moved toavro.schemaregistry.confluent
package - (Refactor KafkaAvro API) Renamed
AvroSerializationSchemaFactory
toConfluentAvroSerializationSchemaFactory
and moved toavro.schemaregistry.confluent
package - (Refactor KafkaAvro API) Renamed
AvroKeyValueSerializationSchemaFactory
toConfluentAvroKeyValueSerializationSchemaFactory
and moved toavro.schemaregistry.confluent
package - (Refactor KafkaAvro API) Removed
FixedKafkaAvroSourceFactory
andFixedKafkaAvroSinkFactory
(now we don't support fixed schema) - (Refactor Kafka API) Replaced
topics: List[String]
byList[PreparedKafkaTopic]
and removedprocessObjectDependencies
inKafkaSource
Be aware that we are using Avro 1.9.2 instead of Flink's default 1.8.2 (for the purpose of Java time logical type conversions).
- #1013 Expression evaluation is synchronous now. It shouldn't cause any problems (all languages were synchronous anyway), but some internal code may have to change.
- #957 Custom node
aggregate
fromgeneric
model has changed parameter fromwindowLengthInSeconds
towindowLength
with human-friendly duration input. If you have used it in a process, you need to enter the correct value again. - #954
TypedMap
is no longer a case class wrapping a Scala Map. If you have done pattern matching on it, you should usecase typedMap: TypedMap => typedMap.asScala
instead.
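A migration sketch (the helper function is hypothetical; `TypedMap` and `asScala` come from the entry above, and the import path may differ in your version):

```scala
import pl.touk.nussknacker.engine.api.typed.TypedMap // path may differ in your version

// hypothetical helper showing the new pattern-matching style
def fieldsOf(value: Any): Map[String, Any] = value match {
  // before: case TypedMap(fields) => fields  -- no longer compiles
  case typedMap: TypedMap => typedMap.asScala.toMap
  case _                  => Map.empty
}
```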
- #930
DeeplyCheckingExceptionExtractor
was moved fromnussknacker-flink-util
module tonussknacker-util
module. - #919
KafkaSource
constructor now doesn't takeconsumerGroup
. Instead, it computesconsumerGroup
on its own based onkafka.consumerGroupNamingStrategy
inmodelConfig
(default set toprocessId
). You can also override it byoverriddenConsumerGroup
optional parameter. Due to these changes,KafkaConfig
has new, optional parameterconsumerGroupNamingStrategy
. - #920
KafkaSource
constructor now takesKafkaConfig
instead of using one that was parsed byBaseKafkaSourceFactory.kafkaConfig
. Also if you parse Typesafe Config toKafkaSource
on your own, now you should use dedicated methodKafkaConfig.parseConfig
to avoid further problems when parsing strategy will be changed. - #914
pl.touk.nussknacker.engine.api.definition.Parameter
has deprecated main factory method withruntimeClass
parameter. NowisLazyParameter
should be passed instead. Also,runtimeClass
was removed from the variants of factory methods prepared for easy testing (optional
method and so on).
- #755 Default async execution context does not depend on parallelism.
asyncExecutionConfig.parallelismMultiplier
has been deprecated and should be replaced withasyncExecutionConfig.workers
. 8 is a sane default value.
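In model configuration this might look like the following (the key names come from the entry above; the nesting is an assumption):

```hocon
asyncExecutionConfig {
  # fixed-size worker pool, no longer derived from parallelism
  workers: 8
}
```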
flinkConfig
andprocessConfig
) is removed.processTypes
configuration should be used from now on. Example:becomes:flinkConfig {...} processConfig {...}
processTypes { "type e.g. streaming" { deploymentConfig { type: "flinkStreaming" PUT HERE PROPERTIES OF flinkConfig FROM OLD CONFIG } modelConfig { classPath: PUT HERE VALUE OF flinkConfig.classPath FROM OLD CONFIG PUT HERE PROPERTIES OF processConfig FROM OLD CONFIG } } }
- #763 Some API traits (ProcessManager, DictRegistry DictQueryService, CountsReporter) now extend
java.lang.AutoCloseable
. - Old way of additional properties configuration should be replaced by the new one, which is now mapped to
Map[String, AdditionalPropertyConfig]
. Example in your config:becomes:additionalFieldsConfig: { mySelectProperty { label: "Description" type: "select" isRequired: false values: ["true", "false"] } }
additionalPropertiesConfig { mySelectProperty { label: "Description" defaultValue: "false" editor: { type: "FixedValuesParameterEditor", possibleValues: [ {"label": "Yes", "expression": "true"}, {"label": "No", "expression": "false"} ] } } }
- #588 #882
FlinkSource
API changed, current implementation is nowBasicFlinkSource
- #839 #882
FlinkSink
API changed, current implementation is nowBasicFlinkSink
- #841
ProcessConfigCreator
API changed; note that currently all process objects are invoked withProcessObjectDependencies
as a parameter. The APIs ofKafkaSinkFactory
,KafkaSourceFactory
, and all their implementations were changed.Config
is available as property ofProcessObjectDependencies
instance. - #863
restUrl
indeploymentConfig
need to be preceded with protocol. Host with port only is not allowed anymore. - Rename
grafanaSettings
tometricsSettings
in configuration.
- Upgrade to Flink 1.7
- Refactor of custom transformations, dictionaries, unions, please look at samples in example or generic to see API changes
- Considerable changes to authorization configuration, please look at sample config to see changes
- Circe is now used by default instead of Argonaut, but still can use Argonaut in Displayable
- Changes in CustomStreamTransformer implementation, LazyInterpreter became LazyParameter, please look at samples to see changes in API
- Upgrade to Flink 1.4
- Change of format of Flink cluster configuration
- Parameters of sources and sinks are expressions now - automatic update of DB is available
- Change of configuration of Grafana dashboards
- Custom processes are defined in main configuration file now