
Encountered a 'null' in 'list', this is illegal in Besom #518

Open
jchapuis opened this issue Jun 14, 2024 · 4 comments
@jchapuis

Hello there 👋 we are stuck with the confluent provider due to the following trace.

It could be that the provider is misbehaving. However, in our case we are not interested in this particular list value that can't be decoded, only in some other properties of the result type.

Would there be anything you can propose as a workaround? This is blocking for us at the moment.

[error] 	at besom.internal.ResourceOps.decodeResponse$1$$anonfun$1(ResourceOps.scala:91)
[error] 	at besom.internal.Result.flatMap$$anonfun$1(Result.scala:166)
[error] 	at besom.internal.Result.runM$$anonfun$7(Result.scala:273)
[error] 	at besom.internal.Runtime.flatMapBothM$$anonfun$1$$anonfun$1(Result.scala:118)
[error] 	at besom.internal.FutureRuntime.flatMapBoth$$anonfun$1(Result.scala:404)
[error] 	at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:477)
[error] 	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)
[error] 	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
[error] 	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
[error] 	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
[error] 	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
[error] 	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
[error] Caused by: besom.internal.DecodingError: [confluentcloud:index/getKafkaCluster:getKafkaCluster().standards] confluentcloud:index/getKafkaCluster:getKafkaCluster().standards: Encountered an error when deserializing an option
[error] 	at besom.internal.DecodingError$.apply(codecs.scala:80)
[error] 	at besom.internal.Decoder$.besom$internal$Decoder$$anon$7$$_$decode$$anonfun$3$$anonfun$3(codecs.scala:265)
[error] 	at besom.util.NonEmptyVector.map(Validated.scala:167)
[error] 	at besom.util.Validated.lmap(Validated.scala:47)
[error] 	at besom.util.Validated$.lmap$$anonfun$1(Validated.scala:137)
[error] 	at besom.internal.Result.map$$anonfun$1(Result.scala:205)
[error] 	... 11 more
[error] Caused by: besom.internal.DecodingError: [confluentcloud:index/getKafkaCluster:getKafkaCluster().standards] confluentcloud:index/getKafkaCluster:getKafkaCluster().standards: Encountered an error when deserializing a list
[error] 	at besom.internal.DecodingError$.apply(codecs.scala:80)
[error] 	at besom.internal.Decoder$.besom$internal$Decoder$$anon$8$$_$decode$$anonfun$4$$anonfun$3(codecs.scala:296)
[error] 	... 15 more
[error] Caused by: besom.internal.DecodingError: [confluentcloud:index/getKafkaCluster:getKafkaCluster().standards] Encountered a 'null' in 'list', this is illegal in Besom, please file an issue on GitHub
[error] 	at besom.internal.DecodingError$.apply(codecs.scala:81)
[error] 	at besom.internal.DecoderHelpers.accumulatedOutputDataOrErrors(codecs.scala:791)
[error] 	at besom.internal.DecoderHelpers.accumulatedOutputDataOrErrors$(codecs.scala:588)
[error] 	at besom.internal.Decoder$.accumulatedOutputDataOrErrors(codecs.scala:193)
[error] 	at besom.internal.Decoder$.besom$internal$Decoder$$anon$8$$_$decode$$anonfun$4$$anonfun$1$$anonfun$2(codecs.scala:289)
[error] 	at scala.collection.IterableOnceOps.foldLeft(IterableOnce.scala:644)
[error] 	at scala.collection.IterableOnceOps.foldLeft$(IterableOnce.scala:670)
[error] 	at scala.collection.AbstractIterable.foldLeft(Iterable.scala:933)
[error] 	at besom.internal.Decoder$$anon$8.decode$$anonfun$4$$anonfun$1(codecs.scala:289)
[error] 	at besom.internal.OutputData.traverseValidatedResult(OutputData.scala:99)
[error] 	at besom.internal.Decoder$$anon$8.decode$$anonfun$4(codecs.scala:293)
[error] 	at besom.util.Validated$.flatMap$$anonfun$1(Validated.scala:112)
@jchapuis
Author

The value that trips besom is the following:

"standards" -> Value(kind = ListValue(value = ListValue(values = Vector(Value(kind = NullValue(value = NULL_VALUE))))))

The full structure (edited for privacy):

invokeResponse(
  return = Some(
    value = Struct(
      fields = HashMap(
        "networks" -> Value(
          kind = ListValue(
            value = ListValue(
              values = Vector(Value(kind = StructValue(value = Struct(fields = Map("id" -> Value(kind = StringValue(value = "")))))))
            )
          )
        ),
        "availability" -> Value(kind = StringValue(value = "SINGLE_ZONE")),
        "region" -> Value(kind = StringValue(value = "eu-west-1")),
        "restEndpoint" -> Value(kind = StringValue(value = "[...:443](...)")),
        "freights" -> Value(kind = ListValue(value = ListValue(values = Vector()))),
        "displayName" -> Value(kind = StringValue(value = "shared")),
        "byokKeys" -> Value(
          kind = ListValue(
            value = ListValue(
              values = Vector(Value(kind = StructValue(value = Struct(fields = Map("id" -> Value(kind = StringValue(value = "")))))))
            )
          )
        ),
        "basics" -> Value(kind = ListValue(value = ListValue(values = Vector()))),
        "standards" -> Value(kind = ListValue(value = ListValue(values = Vector(Value(kind = NullValue(value = NULL_VALUE)))))),
        "id" -> Value(kind = StringValue(value = "...")),
        "enterprises" -> Value(kind = ListValue(value = ListValue(values = Vector()))),
        "bootstrapEndpoint" -> Value(kind = StringValue(value = "...")),
        "cloud" -> Value(kind = StringValue(value = "AWS")),
        "environment" -> Value(kind = StructValue(value = Struct(fields = Map("id" -> Value(kind = StringValue(value = "...")))))),
        "dedicated" -> Value(kind = NullValue(value = NULL_VALUE)),
        "kind" -> Value(kind = StringValue(value = "Cluster")),
        "apiVersion" -> Value(kind = StringValue(value = "cmk/v2")),
        "rbacCrn" -> Value(
          kind = StringValue(
            value = "..."
          )
        )
      )
    )
  ),
  failures = Vector()
)
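To illustrate why a single null poisons the whole field, here is a minimal, hypothetical model of the decoding (not Besom's actual code): a decoder for a typed list has no case for null elements, so the error for one element fails the entire list.

```scala
// Hypothetical, simplified model of the failure (not Besom's real decoder).
sealed trait Value
final case class StringValue(s: String)                  extends Value
final case class StructValue(fields: Map[String, Value]) extends Value
final case class ListValue(values: Vector[Value])        extends Value
case object NullValue                                    extends Value

// Decode a single element expected to be a struct.
def decodeStruct(v: Value): Either[String, Map[String, Value]] = v match {
  case StructValue(fields) => Right(fields)
  case other               => Left(s"expected struct, got $other")
}

// Decode a typed list: the fold short-circuits on the first bad element,
// so one NullValue turns the whole list (and the whole result) into a Left.
def decodeList[A](v: Value)(elem: Value => Either[String, A]): Either[String, Vector[A]] =
  v match {
    case ListValue(vs) =>
      vs.foldLeft[Either[String, Vector[A]]](Right(Vector.empty)) { (acc, el) =>
        acc.flatMap(xs => elem(el).map(x => xs :+ x))
      }
    case other => Left(s"expected list, got $other")
  }
```

With this model, `decodeList(ListValue(Vector(NullValue)))(decodeStruct)` yields `Left("expected struct, got NullValue")`, mirroring how the single null element in `standards` fails the whole response.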
@lbialy
Collaborator

lbialy commented Jun 17, 2024

Hi @jchapuis, yeah, this is in fact the provider misbehaving (or just the schema being borked). I just checked and 1.47.0 is the latest version, so there was an update of the provider itself (the latest released for Besom is 1.42.0-core.0.3). I can publish 1.47.0-core.0.3 for you, but I don't think it will really solve the issue.

The problem here is that for output-type classes (usually datatypes that live inside Output-typed fields of resources, but also return values from Pulumi function calls) we do all-or-nothing deserialization, so even if just one of the subfields is broken, it affects the whole deserialization logic - it's just `Either[Exception, A]`, so you get an `A` or an error. This turns out to be a huge problem in the end, and the scope for 0.4 contains a planned change where each field would be a separate `Output[A]`, as that would allow granular failures at the per-field level. Unfortunately, until this is done, I don't really have a way to overcome this that is not a huge hack.
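A rough sketch of that difference, with made-up types rather than Besom's actual API: under all-or-nothing decoding one broken field discards the entire result, while keeping a separate success/failure per field lets callers still read the healthy fields.

```scala
// Hypothetical sketch, not Besom's API: contrast all-or-nothing decoding
// with the per-field granularity planned for 0.4.
final case class Raw(fields: Map[String, Either[String, String]])

// All-or-nothing: a single Left poisons the whole result,
// even for callers that never touch the broken field.
def decodeAll(raw: Raw): Either[String, Map[String, String]] =
  raw.fields.foldLeft[Either[String, Map[String, String]]](Right(Map.empty)) {
    case (acc, (k, v)) => acc.flatMap(m => v.map(x => m + (k -> x)))
  }

// Per-field: every field carries its own success or failure,
// so "standards" failing would not block reading "id" or "region".
def decodePerField(raw: Raw): Map[String, Either[String, String]] =
  raw.fields
```

For example, with `Raw(Map("id" -> Right("lkc-abc"), "standards" -> Left("null in list")))`, `decodeAll` returns a `Left` and the whole result is lost, while `decodePerField(raw)("id")` is still `Right("lkc-abc")`.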

I do have a hack, however.

If you are interested in solving this quick and dirty, you could clone our repo and publish a hotfixed artifact of the confluentcloud package locally (you will need the tooling mentioned in CONTRIBUTING.md!):

1. Run `just publish-local-all`.
2. Generate the package locally with `just cli packages local confluent` - this pulls stuff from GitHub metadata (hence the need to set up a `GITHUB_TOKEN` env var) and uses our codegen to generate the codebase.
3. Go to `.out/codegen/confluentcloud/1.47.0/` and modify the file `src/index/GetKafkaClusterResult.scala` - just comment out the lines that contain `standards`, so one field and two extension methods.
4. Publish the modified package locally (or even publicly, if you have an org on Maven Central and want the hotfixed package to be available on CI and such). If you would like to preserve our package naming scheme, you just have to run `scala-cli publish local . --organization org.virtuslab --name besom-confluentcloud --project-version=1.47.0-core.0.3-HOTFIX`, I think. For anything public it would be necessary to adjust the values in `project.scala`.

@jchapuis
Author

@lbialy excellent, thanks a lot for the quick answer - makes sense, and good idea on the granular field access. Thanks also for the procedure for custom packages, great to know. I might follow that, or just hardcode the values I'm missing for the moment; let's see.

@lbialy
Collaborator

lbialy commented Jun 17, 2024

@jchapuis you might want to check out a tag (v0.3.2, for example) and publish from that detached head so that you aren't affected by any changes on the main branch that might not be compatible anymore (for now there's nothing merged that would be like that, but it's better to be safe than NoSuchMethodException).
