From a122069d9051c50c0e409047704d545472e48370 Mon Sep 17 00:00:00 2001 From: Jose Carranza Date: Fri, 3 Nov 2023 08:29:27 +0100 Subject: [PATCH] create module vertx-web-validation making some progress rename methods names, modify router.get path and price logic modify SHOPLIS1_URL to d it simpler include vertx-web-validation in README pass @Observes Router router to the validateHandlerSoppingList remove reactive routes in pom , they are deprecated improve readability in README adapt method without @ROUTE reactive improve tests add handling error and test parameter missing in request rm quarkus-resteasy-reactive dependency use static final variables and include error tests validations include /filterByArrayItem and /createShoppingList logic and test coverage and modify README extend of VertxWebValidationIT on OpenshiftVertxWebValidationIT rename OpenShift class and fix/rename failed RestService name for openshift deployment to lowercase valid --- README.md | 169 +++++++++--------- http/vertx-web-validation/pom.xml | 27 +++ .../ts/vertx/web/validation/ShopResource.java | 36 ++++ .../ts/vertx/web/validation/ShoppingList.java | 60 +++++++ .../validation/ValidationHandlerOnRoutes.java | 159 ++++++++++++++++ .../OpenShiftVertxWebValidationIT.java | 7 + .../web/validation/VertxWebValidationIT.java | 124 +++++++++++++ pom.xml | 1 + 8 files changed, 502 insertions(+), 81 deletions(-) create mode 100644 http/vertx-web-validation/pom.xml create mode 100644 http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShopResource.java create mode 100644 http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShoppingList.java create mode 100644 http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ValidationHandlerOnRoutes.java create mode 100644 http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/OpenShiftVertxWebValidationIT.java create mode 100644 
http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/VertxWebValidationIT.java diff --git a/README.md b/README.md index 895194a313..e6d75346ff 100644 --- a/README.md +++ b/README.md @@ -66,7 +66,7 @@ By default, all your tests are running on bare metal (JVM / Dev mode), but you c All of these profiles are not mutual exclusive, indeed we encourage you to combine these profiles in order to run complex scenarios. -**Example:** +**Example:** To run in OpenShift a native version of root, security and SQL modules and also run knative scenarios of those modules @@ -99,19 +99,19 @@ By default [Quarkus-test-framework](https://github.com/quarkus-qe/quarkus-test-f **Example:** -User: `Run http-minimum module in OpenShift.` +User: `Run http-minimum module in OpenShift.` ```shell -mvn clean verify -Dall-modules -Dopenshift -pl http/http-minimum +mvn clean verify -Dall-modules -Dopenshift -pl http/http-minimum ``` **NOTE:** here we are combining two profiles, profile `openshift` in order to trigger OpenShift execution mode and property `all-modules` to enable `http-modules` profile, where `http/http-minimum` is located. -### OpenShift & Native +### OpenShift & Native Please read [OpenShift](#OpenShift) section first and login into OCP. -When we are running a [native compilation](https://quarkus.io/guides/building-native-image) the flow is the same as the regular way, the only difference is that we need to compile our application first with GraalVM/Mandrel in order to generate the binary application. To do that we will add the flag `-Dnative` to our maven command. +When we are running a [native compilation](https://quarkus.io/guides/building-native-image) the flow is the same as the regular way, the only difference is that we need to compile our application first with GraalVM/Mandrel in order to generate the binary application. To do that we will add the flag `-Dnative` to our maven command. 
You have a choice of using locally installed GraalVM or a Docker base image in order to generate native executable. #### OpenShift & Native via Docker @@ -121,7 +121,7 @@ You have a choice of using locally installed GraalVM or a Docker base image in o User: `Deploy in Openshift and run http-minimum module in native mode.` ```shell -mvn clean verify -Dall-modules -Dnative -Dopenshift -pl http/http-minimum +mvn clean verify -Dall-modules -Dnative -Dopenshift -pl http/http-minimum ``` Quarkus test framework will reuse the Native binary generated by Maven to run the test, except if the scenario provides a build property, then it will generate a new native executable. @@ -150,7 +150,7 @@ Be sure that GraalVM is installed by running the following command, otherwise yo User: `Deploy in OpenShift the module http-minimum compiled with my local GraalVM in order to build my application` ```shell -mvn clean verify -Popenshift -Dall-modules -Dquarkus.package.type=native -pl http/http-minimum +mvn clean verify -Popenshift -Dall-modules -Dquarkus.package.type=native -pl http/http-minimum ``` ### Bare metal @@ -166,7 +166,7 @@ docker run hello-world User: `Run http-minimum module.` ```shell -mvn clean verify -Dall-modules -pl http/http-minimum +mvn clean verify -Dall-modules -pl http/http-minimum ``` #### Bare metal & Native @@ -178,21 +178,21 @@ Same as [OpenShift & Native](#OpenShift--Native) scenarios, Quarkus test framewo User: `Run http-minimum module in native mode.` ```shell -mvn clean verify -Dall-modules -Dnative -pl http/http-minimum +mvn clean verify -Dall-modules -Dnative -pl http/http-minimum ``` All the above example for OpenShift are also valid for Bare metal, just remove the flag `-Dopenshift` and play with [native image generation properties](https://quarkus.io/guides/building-native-image#configuration-reference) ### Additional notes -Have a look at the main `pom.xml` file and pay attention to some useful areas as how the scenarios are categorized by 
topics/profiles, or some global properties as `quarkus.platform.version` that could be overwritten by a flag. +Have a look at the main `pom.xml` file and pay attention to some useful areas as how the scenarios are categorized by topics/profiles, or some global properties as `quarkus.platform.version` that could be overwritten by a flag. **Example:** As a user I would like to run all core modules of Quarkus `2.2.3.Final` ```shell -mvn clean verify -Droot-modules -Dquarkus.platform.version=2.2.3.Final +mvn clean verify -Droot-modules -Dquarkus.platform.version=2.2.3.Final ``` Since this is standard Quarkus configuration, it's possible to override using a system property. @@ -225,7 +225,7 @@ And also, the same user `qe` should have access to the `openshift-user-workload- oc adm policy add-role-to-user edit qe -n openshift-user-workload-monitoring ``` -These requirements are necessary to verify the `micrometer/prometheus` and `micrometer/prometheus-kafka` tests. +These requirements are necessary to verify the `micrometer/prometheus` and `micrometer/prometheus-kafka` tests. - the OpenShift user must have permission to create Operators: @@ -290,7 +290,7 @@ When creating new branch please ensure following items: We use a Quarkus QE Test Framework to verify this test suite. For further information about it, please go to [here](https://github.com/quarkus-qe/quarkus-test-framework). -## Name Convention +## Name Convention For bare metal testing, test classes must be named `*IT`, executed by Failsafe. OpenShift tests should be named `OpenShift*IT`. @@ -347,7 +347,7 @@ This module covers basic scenarios about HTTP servlets under `quarkus-undertow` - Undertow web.xml configuration ### `http/jakarta-rest` -Simple bootstrap project created by *quarkus-maven-plugin* +Simple bootstrap project created by *quarkus-maven-plugin* ### `http/jakarta-rest-reactive` RESTEasy Reactive equivalent of `http/jakarta-rest`. Tests simple and multipart endpoints. 
@@ -367,7 +367,7 @@ This module will setup a very minimal configuration (only `quarkus-resteasy`) an
- Two endpoints to get the value of the previous endpoints using the rest client interface.
### `http/rest-client-reactive`
-Reactive equivalent of the http/rest-client module.
+Reactive equivalent of the http/rest-client module.
Exclusions: XML test. Reason: https://quarkus.io/blog/resteasy-reactive/#what-jax-rs-features-are-missing
### `http/hibernate-validator`
@@ -382,7 +382,7 @@ It also verifies multiple deployment strategies like:
- Using OpenShift quarkus extension and Docker Build strategy
### `http/management`
-Verifies, that management interface (micrometer metrics and health endpoints) can be hosted on a separate port
+Verifies that the management interface (micrometer metrics and health endpoints) can be hosted on a separate port
#### Additions
* *@Deprecated* annotation has been added for test regression purposes to ensure `java.lang` annotations are allowed for resources
@@ -402,6 +402,13 @@ Vert.x Mutiny webClient exploratory test.
Also see http/vertx-web-client/README.md
+### `http/vertx-web-validation`
+Ensures that you can deploy a simple Quarkus application with predefined routes, using the schema parser from vertx-json-schema and the Vert.x Router approach,
+ incorporating web validation configuration through ValidationHandlerBuilder. One of the goals of the vertx-web-validation functionality is to validate parameters and bodies of incoming requests.
+
+It also verifies multiple deployment strategies like:
+- Using Quarkus OpenShift extension
+
### `http/graphql`
This module covers some basic scenarios around GraphQL.
@@ -431,7 +438,7 @@ Tests:
- Test the health endpoints responses.
- Test greeting resource endpoint response.
- Reproducer for [QUARKUS-662](https://issues.redhat.com/browse/QUARKUS-662): "Injection of HttpSession throws UnsatisfiedResolutionException during the build phase" is covered by the test `InjectingScopedBeansResourceTest` and `NativeInjectingScopedBeansResourceIT`. -- Test to cover the functionality of the Fallback feature and ensure the associated metrics are properly updated. +- Test to cover the functionality of the Fallback feature and ensure the associated metrics are properly updated. ### `config` Checks that the application can read configuration from a ConfigMap and a Secret. @@ -458,7 +465,7 @@ Module that covers the logging functionality using JBoss Logging Manager. The fo - Usage of `quarkus-logging-json` extension - Inject the `Logger` instance in beans - Inject a `Logger` instance using a custom category -- Setting up the log level property for logger instances +- Setting up the log level property for logger instances - Check default `quarkus.log.min-level` value ### `sql-db/hibernate` @@ -466,10 +473,10 @@ Module that covers the logging functionality using JBoss Logging Manager. The fo This module contains Hibernate integration scenarios. The features covered: -* Reproducer for [14201](https://github.com/quarkusio/quarkus/issues/14201) and - [14881](https://github.com/quarkusio/quarkus/issues/14881): possible data loss bug in hibernate. This is covered under +* Reproducer for [14201](https://github.com/quarkusio/quarkus/issues/14201) and + [14881](https://github.com/quarkusio/quarkus/issues/14881): possible data loss bug in hibernate. This is covered under the Java package `io.quarkus.qe.hibernate.items`. -- Reproducer for [QUARKUS-661](https://issues.redhat.com/browse/QUARKUS-661): `@TransactionScoped` Context does not call +- Reproducer for [QUARKUS-661](https://issues.redhat.com/browse/QUARKUS-661): `@TransactionScoped` Context does not call `@Predestroy` on `TransactionScoped` beans. 
This is covered under the Java package `io.quarkus.qe.hibernate.transaction`. ### `sql-db/hibernate-fulltext-search` @@ -493,7 +500,7 @@ There are actually coverage scenarios `sql-app` directory: - `mysql`: same for MysQL - `mariadb`: same for MariaDB - `mssql`: same for MSSQL -- `oracle`: The same case as the others, but for Oracle, only JVM mode is supported. Native mode is not covered due to a bug in Quarkus, which causes it to fail when used in combination with other JDBC drivers (see `OracleDatabaseIT`). OpenShift scenario is also not supported due to another bug (see `OpenShiftOracleDatabaseIT`). +- `oracle`: The same case as the others, but for Oracle, only JVM mode is supported. Native mode is not covered due to a bug in Quarkus, which causes it to fail when used in combination with other JDBC drivers (see `OracleDatabaseIT`). OpenShift scenario is also not supported due to another bug (see `OpenShiftOracleDatabaseIT`). All the tests deploy an SQL database directly into OpenShift, alongside the application. This might not be recommended for production, but is good enough for test. @@ -553,13 +560,13 @@ Base application: - Define a REST resource `DataSourceResource` that provides info about the datasources. Additional tests: -- Rest Data with Panache test according to https://github.com/quarkus-qe/quarkus-test-plans/blob/main/QUARKUS-976.md +- Rest Data with Panache test according to https://github.com/quarkus-qe/quarkus-test-plans/blob/main/QUARKUS-976.md Additional UserEntity is a simple Jakarta Persistence entity that was created with aim to avoid inheritance of PanacheEntity methods and instead test the additional combination of Jakarta Persistence entity + PanacheRepository + PanacheRepositoryResource, where PanacheRepository is a facade class. Facade class can override certain methods to change the default behaviour of the PanacheRepositoryResource methods. 
-- AgroalPoolTest, will cover how the db pool is managed in terms of IDLE-timeout, max connections and concurrency. +- AgroalPoolTest, will cover how the db pool is managed in terms of IDLE-timeout, max connections and concurrency. ### `sql-db/reactive-rest-data-panache` @@ -578,10 +585,10 @@ invalid input, filtering, sorting, pagination. Verifies Quarkus transaction programmatic API, JDBC object store and transaction recovery. Base application contains REST resource `TransferResource` and three main services: `TransferTransactionService`, `TransferWithdrawalService` -and `TransferTopUpService` which implement various bank transactions. The main scenario is implemented in `TransactionGeneralUsageIT` +and `TransferTopUpService` which implement various bank transactions. The main scenario is implemented in `TransactionGeneralUsageIT` and checks whether transactions and rollbacks always done in full. -OpenTelemetry JDBC instrumentation test coverage is also placed here. JDBC tracing is tested for all supported +OpenTelemetry JDBC instrumentation test coverage is also placed here. JDBC tracing is tested for all supported databases in JVM mode, native mode and OpenShift. Smoke tests for DEV mode are using PostgreSQL. Smallrye Context Propagation cooperation with OpenTelemetry in DEV mode is also placed in this module. @@ -593,9 +600,9 @@ Authorization is based on roles, restrictions are defined using common annotatio ### `security/bouncycastle-fips` -Verify `bouncy castle FIPS` integration with Quarkus-security. +Verify `bouncy castle FIPS` integration with Quarkus-security. Bouncy castle providers: -- BCFIPS +- BCFIPS - BCFIPSJSSE ### `security/form-authn` @@ -628,7 +635,7 @@ Authorization is based on URL patterns, and Keycloak is used for defining and en A simple Keycloak realm with 1 client (protected application), 2 users, 2 roles and 2 protected resources is provided in `test-realm.json`. 
### `security/keycloak-authz-reactive` -QUARKUS-1257 - Verifies authenticated endpoints with a generic body in parent class +QUARKUS-1257 - Verifies authenticated endpoints with a generic body in parent class Verifies token-based authn and URL-based authz. Authentication is OIDC, and Keycloak is used for issuing and verifying tokens. Authorization is based on URL patterns, and Keycloak is used for defining and enforcing restrictions. @@ -660,7 +667,7 @@ Restrictions are defined using common annotations (`@RolesAllowed` etc.). ### `security/keycloak-multitenant` -Verifies that we can use a multitenant configuration using JWT, web applications and code flow authorization in different tenants. +Verifies that we can use a multitenant configuration using JWT, web applications and code flow authorization in different tenants. Authentication is OIDC, and Keycloak is used. Authorization is based on roles, which are configured in Keycloak. @@ -668,7 +675,7 @@ A simple Keycloak realm with 1 client (protected application), 2 users and 2 rol ### `security/keycloak-oidc-client-basic` -Verifies authorization using `OIDC Client` extension as token generator. +Verifies authorization using `OIDC Client` extension as token generator. Keycloak is used for issuing and verifying tokens. Restrictions are defined using common annotations (`@RolesAllowed` etc.). @@ -686,7 +693,7 @@ Applications: - OIDC logout flow Test cases: -- When calling `/ping` or `/pong` endpoints without bearer token, then it should return 401 Unauthorized. +- When calling `/ping` or `/pong` endpoints without bearer token, then it should return 401 Unauthorized. - When calling `/ping` or `/pong` endpoints with incorrect bearer token, then it should return 401 Unauthorized. - When calling `/ping` endpoint with valid bearer token, then it should return 200 OK and "ping pong" as response. - When calling `/pong` endpoint with valid bearer token, then it should return 200 OK and "pong" as response. 
@@ -694,9 +701,9 @@ Test cases: Variants: - Using REST endpoints (quarkus-resteasy extension) - Using Reactive endpoints (quarkus-resteasy-mutiny extension) -- Using Lookup authorization via `@ClientHeaderParam` annotation +- Using Lookup authorization via `@ClientHeaderParam` annotation - Using `OIDC Client Filter` extension to automatically acquire the access token from Keycloak when calling to the RestClient. -- Using `OIDC Token Propagation` extension to propagate the tokens from the source REST call to the target RestClient. +- Using `OIDC Token Propagation` extension to propagate the tokens from the source REST call to the target RestClient. ### `security/keycloak-oidc-client-reactive` @@ -711,8 +718,8 @@ Reactive twin of the `security/keycloak-oidc-client-extended`, extends `security ### `securty/oidc-client-mutual-tls` -Verifies OIDC client can be authenticated as part of the `Mutual TLS` (`mTLS`) authentication process -when OpenID Connect Providers requires so. Keycloak is used as a primary OIDC server and Red Hat SSO +Verifies OIDC client can be authenticated as part of the `Mutual TLS` (`mTLS`) authentication process +when OpenID Connect Providers requires so. Keycloak is used as a primary OIDC server and Red Hat SSO is used for OpenShift scenarios. Test cases: @@ -738,21 +745,21 @@ This test doesn't run on OpenShift (yet). ### `security/vertx-jwt` In order to test Quarkus / Vertx extension security, we have set up an HTTP server with Vertx [Reactive Routes](https://quarkus.io/guides/reactive-routes#using-the-vert-x-web-router). -Basically Vertx it's an event loop that handler any kind of request as an event (Async and non-blocking). In this case the events are going to be generated by an HTTP-client, for example a browser. -This event is going to be managed by a Router (Application.class), that based on some criteria, will dispatch these events to an existing handler. 
+Basically, Vert.x is an event loop that handles any kind of request as an event (async and non-blocking). In this case the events are going to be generated by an HTTP client, for example a browser.
+This event is going to be managed by a Router (Application.class) that, based on some criteria, will dispatch these events to an existing handler.
-When a handler ends with a request, could reply a response or could propagate this request to the next handler (Handler chain approach). By this way you can segregate responsibilities between handlers.
-In our case we are going to have several handlers.
+When a handler is done with a request, it can reply with a response or propagate the request to the next handler (handler chain approach). This way you can segregate responsibilities between handlers.
+In our case we are going to have several handlers.
Example:
```
this.router.get("/secured")
-        .handler(CorsHandler.create("*"))
-        .handler(LoggerHandler.create())
-        .handler(JWTAuthHandler.create(authN))
-        .handler(authZ::authorize)
-        .handler(rc -> secure.helloWorld(rc));
+        .handler(CorsHandler.create("*"))
+        .handler(LoggerHandler.create())
+        .handler(JWTAuthHandler.create(authN))
+        .handler(authZ::authorize)
+        .handler(rc -> secure.helloWorld(rc));
```
* CorsHandler: add cross origin headers to the HTTP response
@@ -764,10 +771,10 @@ this.router.get("/secured")
### service-binding/postgresql-crunchy-classic and service-binding/postgresql-crunchy-reactive
Modules verifying Quarkus `kubernetes-service-binding` extension is able to inject application projection service
-binding from a PostgreSQL cluster created by Crunchy Postgres operator.
+binding from a PostgreSQL cluster created by Crunchy Postgres operator.
Binding is verified for both classic and reactive SQL clients (`quarkus-jdbc-postgresql` and `quarkus-reactive-pg-client`).
-The module requires a cluster with Kubernetes API >=1.21 to work with Red Hat Service Binding Operator and Crunchy +The module requires a cluster with Kubernetes API >=1.21 to work with Red Hat Service Binding Operator and Crunchy Postgres v5 (this means OCP 4.7 and upwards.) This module requires an installed Crunchy Postgres Operator v5 and Red Hat Service Binding Operator. @@ -783,8 +790,8 @@ Verifies Stork integration in order to provide service discovering and round-rob * Pung: is a simple endpoint that returns "pung" as a string * Pong: is a simple endpoint that returns "pong" as a string * PongReplica: is a "Pong service" replica, that is deployed in another physical service -* Ping: is the main client microservice that will use `pung` and `pong` (Pong and PongReplica) services. The service -discovery will be done by Stork, and the request dispatching between "pong" services is going to be done by Stork load balancer. +* Ping: is the main client microservice that will use `pung` and `pong` (Pong and PongReplica) services. The service +discovery will be done by Stork, and the request dispatching between "pong" services is going to be done by Stork load balancer. ### Service-discovery/stork-custom @@ -811,7 +818,7 @@ Jaeger is deployed in an "all-in-one" configuration, and the OpenShift test veri Testing OpenTelemetry with Jaeger components - Extension `quarkus-opentelemetry` - responsible for traces generation in OpenTelemetry format and export into OpenTelemetry components (opentelemetry-agent, opentelemetry-collector) - + Scenarios that test proper traces export to Jaeger components, context propagation, OpenTelemetry SDK Autoconfiguration and CDI injection of OpenTelemetry beans. See also `monitoring/opentelemetry/README.md` @@ -825,9 +832,9 @@ There is a PrimeNumberResource that checks whether an integer is prime or not. T Where `{uniqueId}` is an unique identifier that is calculated at startup time to uniquely identify the metrics of the application. 
-This module also covers the usage of `MeterRegistry` and `MicroProfile API`:
-
-- The `MeterRegistry` approach includes three scenarios:
+This module also covers the usage of `MeterRegistry` and `MicroProfile API`:
+
+- The `MeterRegistry` approach includes three scenarios:
`simple`: single call will increment the counter.
`forloop`: will increment the counter a number of times.
`forloop parallel`: will increment the counter a number of times using a parallel flow.
@@ -882,11 +889,11 @@ Verifies KafkaSSL integration. This module cover a simple Kafka producer/consume
### `messaging/kafka-streams-reactive-messaging`
-Verifies that `Quarkus Kafka Stream` and `Quarkus SmallRye Reactive Messaging` extensions works as expected.
+Verifies that the `Quarkus Kafka Stream` and `Quarkus SmallRye Reactive Messaging` extensions work as expected.
-There is an EventsProducer that generate login status events every 100ms.
-A Kafka stream called `WindowedLoginDeniedStream` will aggregate these events in fixed time windows of 3 seconds.
-So if the number of wrong access excess a threshold, then a new alert event is thrown. All aggregated events(not only unauthorized) are persisted.
+There is an EventsProducer that generates login status events every 100ms.
+A Kafka stream called `WindowedLoginDeniedStream` will aggregate these events in fixed time windows of 3 seconds.
+So if the number of wrong accesses exceeds a threshold, a new alert event is thrown. All aggregated events (not only unauthorized ones) are persisted.
- Quarkus Graceful Shutdown for Kafka connectors
@@ -896,22 +903,22 @@ The test will confirm that no messages are lost when the `grateful-shutdown` is
- Reactive Kafka and Kafka Streams SSL
- Auto-detect serializers and deserializers for the Reactive Messaging Kafka Connector
-All current tests are running under a secured Kafka by SSL.
-Kafka streams pipeline is configured by `quarkus.kafka-streams.ssl` prefix property, but reactive Kafka producer/consumer is configured by `kafka` prefix as you can see on `SslStrimziKafkaTestResource`
+All current tests are running under a secured Kafka by SSL.
+The Kafka streams pipeline is configured by the `quarkus.kafka-streams.ssl` prefix property, but the reactive Kafka producer/consumer is configured by the `kafka` prefix, as you can see in `SslStrimziKafkaTestResource`
### `messaging/kafka-confluent-avro-reactive-messaging`
-- Verifies that `Quarkus Kafka` + `Apicurio Kakfa Registry`(AVRO) and `Quarkus SmallRye Reactive Messaging` extensions work as expected.
+- Verifies that `Quarkus Kafka` + `Apicurio Kafka Registry` (AVRO) and `Quarkus SmallRye Reactive Messaging` extensions work as expected.
-There is an EventsProducer that generate stock prices events every 1s. The events are typed by an AVRO schema.
-A Kafka consumer will read these events serialized by AVRO and change an `status` property to `COMPLETED`.
-The streams of completed events will be exposed through an SSE endpoint.
+There is an EventsProducer that generates stock price events every 1s. The events are typed by an AVRO schema.
+A Kafka consumer will read these events serialized by AVRO and change a `status` property to `COMPLETED`.
+The streams of completed events will be exposed through an SSE endpoint.
### `messaging/kafka-strimzi-avro-reactive-messaging`
- Verifies that `Quarkus Kafka` + `Apicurio Kakfa Registry`(AVRO) and `Quarkus SmallRye Reactive Messaging` extensions work as expected.
-There is an EventsProducer that generate stock prices events every 1s. The events are typed by an AVRO schema.
+There is an EventsProducer that generates stock price events every 1s. The events are typed by an AVRO schema.
A Kafka consumer will read these events serialized by AVRO and change an `status` property to `COMPLETED`.
The streams of completed events will be exposed through an SSE endpoint.
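The fixed-window aggregation described for `messaging/kafka-streams-reactive-messaging` above can be sketched in plain Java. This is a minimal illustration only — class, method, and field names are hypothetical, and the real module implements this with Kafka Streams over login-status records rather than timestamp/flag pairs:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the idea behind WindowedLoginDeniedStream: count denied
// login events per fixed 3-second window and raise an alert for any
// window whose count exceeds a threshold. Hypothetical names.
class WindowedLoginDenied {

    static final long WINDOW_MILLIS = 3_000;

    // Each event is {timestampMillis, deniedFlag}; deniedFlag 1 = denied.
    // Returns denied counts keyed by window start time.
    static Map<Long, Long> countDeniedPerWindow(List<long[]> events) {
        Map<Long, Long> windows = new TreeMap<>();
        for (long[] e : events) {
            if (e[1] == 1) {
                long windowStart = (e[0] / WINDOW_MILLIS) * WINDOW_MILLIS;
                windows.merge(windowStart, 1L, Long::sum);
            }
        }
        return windows;
    }

    // Emit an "alert" (the window start) for every window whose denied
    // count exceeds the threshold.
    static List<Long> alertWindows(Map<Long, Long> windows, long threshold) {
        return windows.entrySet().stream()
                .filter(w -> w.getValue() > threshold)
                .map(Map.Entry::getKey)
                .toList();
    }
}
```

Three denied events inside the first 3-second window and one in the next would, with a threshold of 2, produce an alert only for the first window — the same shape of behavior the `WindowedLoginDeniedStream` tests assert against.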
@@ -920,7 +927,7 @@ The streams of completed events will be exposed through an SSE endpoint.
### `messaging/kafka-producer`
-This scenario is focus on issues related only to Kafka producer.
+This scenario is focused on issues related only to the Kafka producer.
Verifies that Kafka producer doesn't block the main thread and also doesn't takes more time than `mp.messaging.outgoing..max.block.ms`, and also doesn't retry more times than `mp.messaging.outgoing..retries`
@@ -952,14 +959,14 @@ It contains three applications:
#### `todo-demo-app`
-This test produces an S2I source deployment config for OpenShift with [todo-demo-app](https://github.com/quarkusio/todo-demo-app)
+This test produces an S2I source deployment config for OpenShift with [todo-demo-app](https://github.com/quarkusio/todo-demo-app)
serving a simple todo checklist. The code for this application lives outside of the test suite's codebase.
The test verifies that the application with a sample of libraries is buildable and deployable via supported means.
#### `quarkus-workshop-super-heroes`
-This test produces an S2I source deployment config for OpenShift with
+This test produces an S2I source deployment config for OpenShift with
[Quarkus Super heroes workshop](https://github.com/quarkusio/quarkus-workshops) application.
The code for this application lives outside of the test suite's codebase.
@@ -1012,7 +1019,7 @@ Covers two areas related to Spring Web:
- CRUD endpoints.
- Custom error handlers.
- Cooperation with Qute templating engine.
-
+
### `spring/spring-web-reactive`
Covers two areas related to Spring Web Reactive:
- Proper behavior of SmallRye OpenAPI with Mutiny method signatures - correct content types in OpenAPI endpoint output (`/q/openapi`).
@@ -1022,7 +1029,7 @@ Covers two areas related to Spring Web:
- CRUD endpoints.
- Custom error handlers.
- Cooperation with Qute templating engine.
- Verify functionality of methods with transactional annotation @ReactiveTransactional - Verify functionality of methods with transactional method (.withTransactional) - + ### `spring/spring-cloud-config` Verifies that we can use an external Spring Cloud Server to inject configuration in our Quarkus applications. @@ -1037,45 +1044,45 @@ Current limitations: ### `infinispan-client` -Verifies the way of the sharing cache by Datagrid operator and Infinispan cluster and data consistency after failures. +Verifies the way of the sharing cache by Datagrid operator and Infinispan cluster and data consistency after failures. Verifies cache entries serialization, querying and cache eviction. #### Prerequisites - Datagrid operator installed in `datagrid-operator` namespace. This needs cluster-admin rights to install. -- The operator supports only single-namespace so it has to watch another well-known namespace `datagrid-cluster`. +- The operator supports only single-namespace so it has to watch another well-known namespace `datagrid-cluster`. This namespace must be created by "qe" user or this user must have access to it because tests are connecting to it. - These namespaces should be prepared after the Openshift installation - See [Installing Data Grid Operator](https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.1/html/running_data_grid_on_openshift/installation) -Tests create an Infinispan cluster in the `datagrid-cluster` namespace. Cluster is created before tests by `infinispan_cluster_config.yaml`. -To allow parallel runs of tests this cluster is renamed for every test run - along with configmap `infinispan-config`. The configmap contains -configuration property `quarkus.infinispan-client.hosts`. Value of this property is a path to the infinispan cluster from test namespace, -its structure is `infinispan-cluster-name.datagrid-cluster-namespace.svc.cluster.local:11222`. 
It is because the testsuite uses dynamically generated +Tests create an Infinispan cluster in the `datagrid-cluster` namespace. Cluster is created before tests by `infinispan_cluster_config.yaml`. +To allow parallel runs of tests this cluster is renamed for every test run - along with configmap `infinispan-config`. The configmap contains +configuration property `quarkus.infinispan-client.hosts`. Value of this property is a path to the infinispan cluster from test namespace, +its structure is `infinispan-cluster-name.datagrid-cluster-namespace.svc.cluster.local:11222`. It is because the testsuite uses dynamically generated namespaces for tests. So this path is needed for the tests to find Infinispan cluster in another namespace. The Infinispan cluster needs 2 special secrets - tls-secret with TLS certificate and connect-secret with the credentials. -TLS certificate is a substitution of `secrets/signing-key` in openshift-service-ca namespace, which "qe" user cannot use (doesn't have rights on it). +TLS certificate is a substitution of `secrets/signing-key` in openshift-service-ca namespace, which "qe" user cannot use (doesn't have rights on it). Clientcert secret is generated for "qe" from the tls-secret mentioned above. -Infinispan client tests use the cache directly with `@Inject` and `@RemoteCache`. Through the Jakarta REST endpoint, we send data into the cache and retrieve it through another Jakarta REST endpoint. +Infinispan client tests use the cache directly with `@Inject` and `@RemoteCache`. Through the Jakarta REST endpoint, we send data into the cache and retrieve it through another Jakarta REST endpoint. The next tests are checking a simple fail-over - first client (application) fail, then Infinispan cluster (cache) fail. Tests kill first the Quarkus pod then Infinispan cluster pod and then check data. For the Quarkus application, pod killing is used the same approach as in configmap tests. 
 For the Infinispan cluster, pod killing is done by updating its YAML snippet and redeploying it with zero replicas. By default, when the Infinispan server is down and the application can't open a connection, it tries to connect again, up to 10 times (max_retries), and gives up after 60 s (connect_timeout). Because of that, we use a `hotrod-client.properties` file in which max_retries and connect_timeout are reduced. Without this, the application would still be trying to connect to the Infinispan server for the next 10 minutes, and the incremented number could appear later.

-The last three tests are for testing of the multiple client access to the cache. We simulate the second client by deploying the second deployment config, Service, and Route for these tests. These are copied from the `openshift.yml` file. 
+The last three tests cover multiple clients accessing the cache. We simulate the second client by deploying a second deployment config, Service, and Route for these tests. These are copied from the `openshift.yml` file.

 ### `cache/caffeine`

 Verifies the `quarkus-cache` extension using `@CacheResult`, `@CacheInvalidate`, `@CacheInvalidateAll` and `@CacheKey`.
-It covers different usages: 
+It covers different usages:
 1. from an application scoped service
 2. from a request scoped service
 3. from a blocking endpoint
-4. from a reactive endpoint 
+4. from a reactive endpoint

 ### `cache/spring`

 Verifies the `quarkus-spring-cache` extension using `@Cacheable`, `@CacheEvict` and `@CachePut`.
-It covers different usages: 
+It covers different usages:
 1. from an application scoped service
 2. from a request scoped service
 3. from a REST controller endpoint (using `@RestController`)
diff --git a/http/vertx-web-validation/pom.xml b/http/vertx-web-validation/pom.xml
new file mode 100644
index 0000000000..a3859d72f7
--- /dev/null
+++ b/http/vertx-web-validation/pom.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>io.quarkus.ts.qe</groupId>
+        <artifactId>parent</artifactId>
+        <version>1.0.0-SNAPSHOT</version>
+        <relativePath>../..</relativePath>
+    </parent>
+
+    <artifactId>vertx-web-validation</artifactId>
+    <packaging>jar</packaging>
+    <name>Quarkus QE TS: HTTP: Vert.x-Web-Validation</name>
+
+    <dependencies>
+        <dependency>
+            <groupId>io.quarkus</groupId>
+            <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>io.vertx</groupId>
+            <artifactId>vertx-web-validation</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>io.vertx</groupId>
+            <artifactId>vertx-json-schema</artifactId>
+        </dependency>
+    </dependencies>
+</project>
diff --git a/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShopResource.java b/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShopResource.java
new file mode 100644
index 0000000000..d02040b597
--- /dev/null
+++ b/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShopResource.java
@@ -0,0 +1,36 @@
+package io.quarkus.ts.vertx.web.validation;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.UUID;
+
+import jakarta.ws.rs.GET;
+import jakarta.ws.rs.Path;
+import jakarta.ws.rs.Produces;
+import jakarta.ws.rs.core.MediaType;
+
+@Path("/shoppinglist")
+@Produces(MediaType.APPLICATION_JSON)
+public class ShopResource {
+
+    private List<ShoppingList> shoppingList;
+
+    private List<ShoppingList> createSampleProductList() {
+        shoppingList = new ArrayList<>();
+        shoppingList.add(new ShoppingList(UUID.randomUUID(), "ListName1", 25,
+                new ArrayList<>(Arrays.asList("Carrots", "Water", "Cheese", "Beer"))));
+        shoppingList.add(new ShoppingList(UUID.randomUUID(), "ListName2", 80,
+                new ArrayList<>(Arrays.asList("Meat", "Wine", "Almonds", "Potatoes", "Cake"))));
+        return shoppingList;
+    }
+
+    @GET
+    public List<ShoppingList> get() {
+        if (shoppingList == null) {
+            createSampleProductList();
+        }
+        return shoppingList;
+    }
+
+}
diff --git a/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShoppingList.java b/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShoppingList.java
new file mode 100644
index 0000000000..45ca3d5d16
--- /dev/null
+++ b/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ShoppingList.java
@@ -0,0 +1,60 @@
+package io.quarkus.ts.vertx.web.validation;
+
+import java.util.ArrayList;
+import java.util.UUID;
+
+public class ShoppingList {
+
+    public UUID id;
+
+    public String name;
+
+    public ArrayList<String> products;
+
+    public double price;
+
+    public ShoppingList(UUID id, String name, double price, ArrayList<String> products) {
+        this.id = id;
+        this.name = name;
+        this.price = price;
+        this.products = products;
+    }
+
+    public ArrayList<String> getProducts() {
+        return products;
+    }
+
+    public void setProducts(ArrayList<String> products) {
+        this.products = products;
+    }
+
+    public UUID getId() {
+        return id;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    public double getPrice() {
+        return price;
+    }
+
+    public void setPrice(double price) {
+        this.price = price;
+    }
+
+    @Override
+    public String toString() {
+        return String.format(
+                "Shopping list{id=%s, name=%s, products=%s, price=%s}",
+                getId(),
+                getName(),
+                getProducts().toString(),
+                getPrice());
+    }
+}
diff --git a/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ValidationHandlerOnRoutes.java b/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ValidationHandlerOnRoutes.java
new file mode 100644
index 0000000000..a512a9f7dc
--- /dev/null
+++ b/http/vertx-web-validation/src/main/java/io/quarkus/ts/vertx/web/validation/ValidationHandlerOnRoutes.java
@@ -0,0 +1,159 @@
+package io.quarkus.ts.vertx.web.validation;
+
+import static io.vertx.ext.web.validation.builder.Parameters.param;
+import static io.vertx.json.schema.common.dsl.Schemas.arraySchema;
+import static io.vertx.json.schema.common.dsl.Schemas.numberSchema;
+import static io.vertx.json.schema.common.dsl.Schemas.objectSchema;
+import static io.vertx.json.schema.common.dsl.Schemas.stringSchema;
+import static io.vertx.json.schema.draft7.dsl.Keywords.maximum;
+
+import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.stream.Collectors;
+
+import jakarta.annotation.PostConstruct;
+import jakarta.enterprise.context.ApplicationScoped;
+import jakarta.enterprise.event.Observes;
+import jakarta.inject.Inject;
+import jakarta.ws.rs.core.HttpHeaders;
+import jakarta.ws.rs.core.MediaType;
+
+import io.netty.handler.codec.http.HttpResponseStatus;
+import io.vertx.core.Vertx;
+import io.vertx.core.json.Json;
+import io.vertx.core.json.JsonArray;
+import io.vertx.ext.web.Router;
+import io.vertx.ext.web.handler.BodyHandler;
+import io.vertx.ext.web.validation.BadRequestException;
+import io.vertx.ext.web.validation.BodyProcessorException;
+import io.vertx.ext.web.validation.ParameterProcessorException;
+import io.vertx.ext.web.validation.RequestParameters;
+import io.vertx.ext.web.validation.RequestPredicate;
+import io.vertx.ext.web.validation.RequestPredicateException;
+import io.vertx.ext.web.validation.ValidationHandler;
+import io.vertx.ext.web.validation.builder.Bodies;
+import io.vertx.ext.web.validation.builder.Parameters;
+import io.vertx.ext.web.validation.builder.ValidationHandlerBuilder;
+import io.vertx.json.schema.SchemaParser;
+import io.vertx.json.schema.SchemaRouter;
+import io.vertx.json.schema.SchemaRouterOptions;
+import io.vertx.json.schema.common.dsl.ObjectSchemaBuilder;
+
+@ApplicationScoped
+public class ValidationHandlerOnRoutes {
+    // TODO: when Quarkus uses Vert.x version 4.4.6 we can use SchemaRepository instead of SchemaParser with SchemaRouter
+    // private SchemaRepository schemaRepository = SchemaRepository.create(new JsonSchemaOptions().setDraft(Draft.DRAFT7).setBaseUri(BASEURI));
+    private SchemaParser schemaParser;
+    private SchemaRouter schemaRouter;
+
+    @Inject
+    Vertx vertx;
+
+    private Router router;
+
+    @Inject
+    ShopResource shopResource;
+
+    private static final String ERROR_MESSAGE = "{\"error\": \"%s\"}";
+    private static final String SHOPPINGLIST_NOT_FOUND = "Shopping list not found in the list or does not exist with that name or price";
+
+    @PostConstruct
+    void initialize() {
+        router = Router.router(vertx);
+        router.route().handler(BodyHandler.create());
+        schemaParser = createSchema();
+        validateHandlerSoppingList(router);
+    }
+
+    private SchemaParser createSchema() {
+        schemaRouter = SchemaRouter.create(vertx, new SchemaRouterOptions());
+        schemaParser = SchemaParser.createDraft7SchemaParser(schemaRouter);
+        return schemaParser;
+    }
+
+    public void validateHandlerSoppingList(@Observes Router router) {
+        AtomicReference<String> queryAnswer = new AtomicReference<>();
+        router.get("/filterList")
+                .handler(ValidationHandlerBuilder
+                        .create(schemaParser)
+                        .queryParameter(param("shoppingListName", stringSchema()))
+                        .queryParameter(param("shoppingListPrice", numberSchema().with(maximum(100)))).build())
+                .handler(routingContext -> {
+                    RequestParameters parameters = routingContext.get(ValidationHandler.REQUEST_CONTEXT_KEY);
+                    String shoppingListName = parameters.queryParameter("shoppingListName").getString();
+                    Double totalPrice = parameters.queryParameter("shoppingListPrice").getDouble();
+
+                    // Logic to list shoppingList based on shoppingListName and totalPrice
+                    String shoppingListFound = fetchProductDetailsFromQuery(shoppingListName, totalPrice);
+                    queryAnswer.set(shoppingListFound);
+
+                    if (queryAnswer.get().equalsIgnoreCase(SHOPPINGLIST_NOT_FOUND)) {
+                        routingContext.response().setStatusCode(HttpResponseStatus.NOT_FOUND.code());
+                    }
+
+                    routingContext.response().putHeader(HttpHeaders.CONTENT_TYPE, MediaType.TEXT_PLAIN).end(queryAnswer.get());
+                }).failureHandler(routingContext -> {
+                    // Error handling:
+                    if (routingContext.failure() instanceof BadRequestException ||
+                            routingContext.failure() instanceof ParameterProcessorException ||
+                            routingContext.failure() instanceof BodyProcessorException ||
+                            routingContext.failure() instanceof RequestPredicateException) {
+
+                        String errorMessage = routingContext.failure().toString();
+                        routingContext.response()
+                                .setStatusCode(HttpResponseStatus.BAD_REQUEST.code())
+                                .putHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON)
+                                .end(String.format(ERROR_MESSAGE, errorMessage));
+                    }
+                });
+        // Create a ValidationHandlerBuilder with explodedParam and arraySchema to filter by array items
+        ObjectSchemaBuilder bodySchemaBuilder = objectSchema()
+                .property("shoppingListName", stringSchema());
+        ValidationHandlerBuilder
+                .create(schemaParser)
+                .body(Bodies.json(bodySchemaBuilder));
+        router.get("/filterByArrayItem")
+                .handler(
+                        ValidationHandlerBuilder
+                                .create(schemaParser)
+                                .queryParameter(Parameters.explodedParam("shoppingArray", arraySchema().items(stringSchema())))
+                                .body(Bodies.json(bodySchemaBuilder))
+                                .build())
+                .handler(routingContext -> {
+                    RequestParameters parameters = routingContext.get(ValidationHandler.REQUEST_CONTEXT_KEY);
+                    JsonArray myArray = parameters.queryParameter("shoppingArray").getJsonArray();
+                    // Retrieve the list of all shoppingLists matching the query array
+                    List<ShoppingList> shoppingLists = fetchProductDetailsFromArrayQuery(myArray);
+
+                    routingContext.response().putHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON)
+                            .end(Json.encodeToBuffer(shoppingLists));
+                });
+        // Allow creating a new item
+        router.post("/createShoppingList").handler(
+                ValidationHandlerBuilder
+                        .create(schemaParser)
+                        .predicate(RequestPredicate.BODY_REQUIRED)
+                        .queryParameter(param("shoppingListName", stringSchema()))
+                        .queryParameter(param("shoppingListPrice", numberSchema().with(maximum(100))))
+                        .build())
+                .handler(routingContext -> {
+                    routingContext.response().setStatusCode(HttpResponseStatus.OK.code()).end("Shopping list created");
+                });
+
+    }
+
+    public List<ShoppingList> fetchProductDetailsFromArrayQuery(JsonArray myArray) {
+        return shopResource.get().stream()
+                .filter(shoppingList -> myArray.contains(shoppingList.getName()))
+                .collect(Collectors.toList());
+    }
+
+    public String fetchProductDetailsFromQuery(String name, Double price) {
+        return shopResource.get().stream()
+                .filter(product -> name.equalsIgnoreCase(product.getName()) && price.equals(product.getPrice()))
+                .map(ShoppingList::toString)
+                .findFirst()
+                .orElse(SHOPPINGLIST_NOT_FOUND);
+    }
+
+}
diff --git a/http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/OpenShiftVertxWebValidationIT.java b/http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/OpenShiftVertxWebValidationIT.java
new file mode 100644
index 0000000000..49497491f4
--- /dev/null
+++ b/http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/OpenShiftVertxWebValidationIT.java
@@ -0,0 +1,7 @@
+package io.quarkus.ts.vertx.web.validation;
+
+import io.quarkus.test.scenarios.OpenShiftScenario;
+
+@OpenShiftScenario
+public class OpenShiftVertxWebValidationIT extends VertxWebValidationIT {
+}
diff --git a/http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/VertxWebValidationIT.java b/http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/VertxWebValidationIT.java
new file mode 100644
index 0000000000..5117c6cd31
--- /dev/null
+++ b/http/vertx-web-validation/src/test/java/io/quarkus/ts/vertx/web/validation/VertxWebValidationIT.java
@@ -0,0 +1,124 @@
+package io.quarkus.ts.vertx.web.validation;
+
+import static org.hamcrest.MatcherAssert.assertThat;
+import static org.hamcrest.Matchers.containsString;
+import static org.hamcrest.Matchers.equalTo;
+
+import org.apache.http.HttpStatus;
+import org.junit.jupiter.api.Test;
+
+import io.quarkus.test.bootstrap.RestService;
+import io.quarkus.test.scenarios.QuarkusScenario;
+import io.quarkus.test.services.QuarkusApplication;
+import io.restassured.http.ContentType;
+import io.restassured.response.Response;
+
+@QuarkusScenario
+public class VertxWebValidationIT {
+    @QuarkusApplication
+    static RestService restserviceapp = new RestService();
+
+    private static final String SHOPPING_LIST_URL = "/shoppinglist";
+    private static final String LIST_NAME1 = "name=ListName1, products=[Carrots, Water, Cheese, Beer], price=25.0";
+    private static final String LIST_NAME2 = "name=ListName2, products=[Meat, Wine, Almonds, Potatoes, Cake], price=80.0";
+    private static final String FILTER_BY_NAME_PRICE_URL_1 = "/filterList?shoppingListName=ListName1&shoppingListPrice=25";
+    private static final String FILTER_BY_NAME_PRICE_URL_2 = "/filterList?shoppingListName=ListName2&shoppingListPrice=80";
+    private static final String FILTER_BY_WRONG_NAME_PRICE_URL = "/filterList?shoppingListName=ListName35&shoppingListPrice=25";
+    private static final String ONLY_FILTER_BY_NAME = "/filterList?shoppingListName=ListName1";
+    private static final String PRICE_OUT_OF_RANGE = "/filterList?shoppingListName=ListName1&shoppingListPrice=125";
+    private static final String FILTER_BY_LIST_NAMES = "/filterByArrayItem?shoppingArray=ListName1&shoppingArray=ListName2";
+
+    private static final String ERROR_PARAMETER_MISSING = "ParameterProcessorException";
+
+    @Test
+    void checkShoppingListUrl() {
+        Response response = restserviceapp.given()
+                .get(SHOPPING_LIST_URL)
+                .then()
+                .statusCode(HttpStatus.SC_OK)
+                .contentType(ContentType.JSON)
+                .extract()
+                .response();
+
+        assertThat(response.getBody().jsonPath().getString("name"), containsString("[ListName1, ListName2]"));
+    }
+
+    @Test
+    void checkNamePriceParams() {
+        Response response = restserviceapp
+                .given()
+                .get(FILTER_BY_NAME_PRICE_URL_1)
+                .then()
+                .statusCode(HttpStatus.SC_OK).extract().response();
+        assertThat(response.asString(), containsString(LIST_NAME1));
+        Response response2 = restserviceapp
+                .given()
+                .get(FILTER_BY_NAME_PRICE_URL_2)
+                .then()
+                .statusCode(HttpStatus.SC_OK).extract().response();
+        assertThat(response2.asString(), containsString(LIST_NAME2));
+    }
+
+    @Test
+    void checkWrongNamePriceParams() {
+        Response response = restserviceapp
+                .given()
+                .get(FILTER_BY_WRONG_NAME_PRICE_URL)
+                .then()
+                .statusCode(HttpStatus.SC_NOT_FOUND).extract().response();
+        assertThat(response.asString(),
+                containsString("Shopping list not found in the list or does not exist with that name or price"));
+    }
+
+    @Test
+    void checkParameterMissingError() {
+        Response response = restserviceapp
+                .given()
+                .get(ONLY_FILTER_BY_NAME)
+                .then()
+                .statusCode(HttpStatus.SC_BAD_REQUEST)
+                .extract()
+                .response();
+        assertThat(response.asString(), containsString(ERROR_PARAMETER_MISSING));
+        assertThat(response.asString(), containsString("Missing parameter shoppingListPrice in QUERY"));
+    }
+
+    @Test
+    void checkPriceOutOfRangeError() {
+        Response response = restserviceapp
+                .given()
+                .get(PRICE_OUT_OF_RANGE)
+                .then()
+                .statusCode(HttpStatus.SC_BAD_REQUEST)
+                .extract()
+                .response();
+        assertThat(response.asString(), containsString(ERROR_PARAMETER_MISSING));
+        assertThat(response.asString(), containsString("value should be <= 100.0"));
+    }
+
+    @Test
+    void checkFilterByArrayListName() {
+        Response response = restserviceapp
+                .given()
+                .get(FILTER_BY_LIST_NAMES)
+                .then()
+                .statusCode(HttpStatus.SC_OK).extract().response();
+        assertThat(response.getBody().jsonPath().getString("name"), containsString("[ListName1, ListName2]"));
+        assertThat(response.getBody().jsonPath().getString("price"), equalTo("[25.0, 80.0]"));
+    }
+
+    @Test
+    void createShoppingList() {
+        restserviceapp.given()
+                .contentType(ContentType.JSON)
+                .body("{}")
+                .queryParam("shoppingListName", "MyList3")
+                .queryParam("shoppingListPrice", 50)
+                .when()
+                .post("/createShoppingList")
+                .then()
+                .statusCode(HttpStatus.SC_OK)
+                .body(equalTo("Shopping list created"));
+    }
+
+}
diff --git a/pom.xml b/pom.xml
index 221daae908..d17753912b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -466,6 +466,7 @@
         <module>http/rest-client-reactive</module>
         <module>http/servlet-undertow</module>
         <module>http/vertx-web-client</module>
+        <module>http/vertx-web-validation</module>
         <module>http/hibernate-validator</module>
         <module>http/graphql</module>
         <module>http/graphql-telemetry</module>
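As a rough, framework-free sketch of the constraints the `/filterList` route above enforces (two required query parameters, `shoppingListPrice` capped by `maximum(100)`), the following standalone class is an illustration only, not the module's actual code; the error strings mirror the ones asserted in `VertxWebValidationIT`:

```java
import java.util.Optional;

public class FilterListValidationSketch {

    // Mirrors the ValidationHandler rules: shoppingListName is a required string
    // parameter, shoppingListPrice a required number with maximum(100).
    // Returns an error message for invalid input, or empty when validation passes.
    static Optional<String> validate(String name, Double price) {
        if (name == null) {
            return Optional.of("Missing parameter shoppingListName in QUERY");
        }
        if (price == null) {
            return Optional.of("Missing parameter shoppingListPrice in QUERY");
        }
        if (price > 100.0) {
            return Optional.of("value should be <= 100.0");
        }
        return Optional.empty(); // request would pass validation
    }

    public static void main(String[] args) {
        System.out.println(validate("ListName1", 25.0));  // prints Optional.empty
        System.out.println(validate("ListName1", null));  // price missing -> 400 in the real route
        System.out.println(validate("ListName1", 125.0)); // out of range -> 400 in the real route
    }
}
```

In the real module this logic lives in the `ValidationHandler` built by `ValidationHandlerBuilder`, which short-circuits the route and triggers the `failureHandler` instead of returning an `Optional`.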