NullPointerException in ZKAsyncMultiMap on shutdown of a cluster member node #96

radai-rosenblatt opened this issue Apr 22, 2020 · 0 comments
radai-rosenblatt commented Apr 22, 2020

I'm running an integration test where I spin up an embedded ZK and 2 clustered Vert.x instances.
At the end of the test I shut them both down.
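
Roughly, the setup looks like the sketch below. This is not the real test (which I can't share); it assumes curator-test's TestingServer for the embedded ZK, the JsonObject-config constructor of ZookeeperClusterManager with the "zookeeperHosts" key, and hypothetical class/method names:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.json.JsonObject;
import io.vertx.spi.cluster.zookeeper.ZookeeperClusterManager;
import org.apache.curator.test.TestingServer;

import java.util.concurrent.CompletableFuture;

public class TwoNodeClusterSketch {
  public static void main(String[] args) throws Exception {
    TestingServer zk = new TestingServer();  // embedded ZK on a random port

    Vertx node1 = startClusteredNode(zk.getConnectString());
    Vertx node2 = startClusteredNode(zk.getConnectString());

    // ... register clustered event bus consumers, run the actual test ...

    // shutdown at the end of the test -- the NPE below shows up on the node
    // that is still alive while the other one is leaving the cluster
    node1.close();
    node2.close();
    zk.close();
  }

  private static Vertx startClusteredNode(String zkHosts) throws Exception {
    JsonObject zkConfig = new JsonObject().put("zookeeperHosts", zkHosts);
    ZookeeperClusterManager mgr = new ZookeeperClusterManager(zkConfig);
    CompletableFuture<Vertx> started = new CompletableFuture<>();
    Vertx.clusteredVertx(new VertxOptions().setClusterManager(mgr), ar -> {
      if (ar.succeeded()) started.complete(ar.result());
      else started.completeExceptionally(ar.cause());
    });
    return started.get();
  }
}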

On shutdown I get the following exception:

08:54:33.779 [vert.x-eventloop-thread-16] FATAL com.linkedin.mario.server.MarioApplication - m1 vertx hit uncaught exception during RUNNING
java.lang.NullPointerException: null
	at io.vertx.spi.cluster.zookeeper.impl.ZKAsyncMultiMap.lambda$null$24(ZKAsyncMultiMap.java:189) ~[vertx-zookeeper-3.8.5.jar:3.8.5]
	at java.lang.Iterable.forEach(Iterable.java:75) ~[?:1.8.0_172]
	at io.vertx.spi.cluster.zookeeper.impl.ZKAsyncMultiMap.lambda$removeAllMatching$26(ZKAsyncMultiMap.java:187) ~[vertx-zookeeper-3.8.5.jar:3.8.5]
	at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_172]
	at io.vertx.spi.cluster.zookeeper.impl.ZKAsyncMultiMap.removeAllMatching(ZKAsyncMultiMap.java:186) ~[vertx-zookeeper-3.8.5.jar:3.8.5]
	at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.lambda$setClusterViewChangedHandler$12(ClusteredEventBus.java:274) ~[vertx-core-3.8.5.jar:3.8.5]
	at io.vertx.core.impl.HAManager.lambda$checkSubs$12(HAManager.java:520) ~[vertx-core-3.8.5.jar:3.8.5]
	at io.vertx.core.impl.HAManager.lambda$runOnContextAndWait$13(HAManager.java:529) ~[vertx-core-3.8.5.jar:3.8.5]
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:369) ~[vertx-core-3.8.5.jar:3.8.5]
	at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38) ~[vertx-core-3.8.5.jar:3.8.5]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510) [netty-common-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518) [netty-transport-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [netty-common-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.42.Final.jar:4.1.42.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]

This happens on the still-alive Vert.x instance when the 1st instance shuts down.

The actual null comes from treeCache.getCurrentChildren(keyPath) in ZKAsyncMultiMap.removeAllMatching():

@Override
  public void removeAllMatching(Predicate<V> p, Handler<AsyncResult<Void>> completionHandler) {
    List<Future> futures = new ArrayList<>();
    Optional.ofNullable(treeCache.getCurrentChildren(mapPath)).ifPresent(childDataMap -> {  
      childDataMap.keySet().forEach(partKeyPath -> {
        String keyPath = mapPath + "/" + partKeyPath;
        treeCache.getCurrentChildren(keyPath).keySet().forEach(valuePath -> {  <--- HERE
          String fullPath = keyPath + "/" + valuePath;
          Optional.ofNullable(treeCache.getCurrentData(fullPath))
            .filter(childData -> Optional.of(childData.getData()).isPresent())
            .ifPresent(childData -> {
              try {
                V value = asObject(childData.getData());
                if (p.test(value)) {
                  futures.add(remove(keyPath, value, fullPath));
                }
              } catch (Exception e) {
                futures.add(Future.failedFuture(e));
              }
            });
        });
      });
      //
      CompositeFuture.all(futures).compose(compositeFuture -> {
        Future<Void> future = Future.future();
        future.complete();
        return future;
      }).setHandler(completionHandler);
    });
  }

At that point keyPath is /asyncMultiMap/__vertx.subs/__VERTX_ZK_TTL_HANDLER_ADDRESS.

This only happens ~30% of the time, so it appears to be a race.

The contents of the treeCache under "/asyncMultiMap/__vertx.subs" at that point are 3 nodes, all clustered event bus addresses ("topics") for my application.
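
Just as a sketch of the kind of guard I'd expect (not a tested patch), the inner getCurrentChildren call could get the same null check the outer one already has, so a key whose children vanish between the two lookups (e.g. while a member is shutting down) is skipped instead of throwing. Assumes java.util.Map and java.util.Collections are imported:

      childDataMap.keySet().forEach(partKeyPath -> {
        String keyPath = mapPath + "/" + partKeyPath;
        // the key may have been removed between the two getCurrentChildren
        // calls, in which case this lookup returns null -- just skip it
        Optional.ofNullable(treeCache.getCurrentChildren(keyPath))
          .map(Map::keySet)
          .orElse(Collections.emptySet())
          .forEach(valuePath -> {
            // ... existing body unchanged ...
          });
      });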


Version

vertx-zookeeper 3.8.5

Context

See above.

Do you have a reproducer?

The reproducer is in code I can't share, sadly.

Steps to reproduce

  1. ...
  2. ...
  3. ...
  4. ...

Extra

  • JVM 1.8.0_172 (per the stack trace above)