diff --git a/site/content/attachment_files/web-console/login-12.3.png b/site/content/attachment_files/web-console/login-12.3.png
index a323a763d..67fd47ae8 100644
Binary files a/site/content/attachment_files/web-console/login-12.3.png and b/site/content/attachment_files/web-console/login-12.3.png differ
diff --git a/site/content/xap/12.3/admin/memoryxtend-rocksdb-ssd.markdown b/site/content/xap/12.3/admin/memoryxtend-rocksdb-ssd.markdown
index 17335ffdf..6f65e68cb 100644
--- a/site/content/xap/12.3/admin/memoryxtend-rocksdb-ssd.markdown
+++ b/site/content/xap/12.3/admin/memoryxtend-rocksdb-ssd.markdown
@@ -7,11 +7,13 @@ weight: 200
---
# Introduction
-XAP MemoryXtend for Flash/SSD delivers built-in high speed persistence leveraging local or attached SSD devices and all-flash-arrays (AFA). It delivers low latency write and read performance, as well as fast data recovery. XAP MemoryXtend for Flash/SSD is based on {{%exurl "RocksDB""http://rocksdb.org/"%}} which is a persistent key/value store optimized for fast storage environments.
+
+XAP MemoryXtend for Flash/SSD delivers built-in, high-speed persistence that leverages local or attached SSD devices and all-flash arrays (AFA). It provides low-latency write and read performance, as well as fast data recovery. XAP MemoryXtend for Flash/SSD is based on {{%exurl "RocksDB""http://rocksdb.org/"%}}, which is a persistent key/value store optimized for fast storage environments.
# Architecture and Components
-When configured for Flash/SSD, the MemoryXtend architecture tiers the storage of each space partition instance across two components: a space partition instance (managed JVM heap) and an embedded key/value store (the blob store) as shown in the diagram down below.
+
+When configured for Flash/SSD, the MemoryXtend architecture tiers the storage of each Space partition instance across two components: a Space partition instance (managed JVM heap) and an embedded key/value store (the blobstore) as shown in the diagram below.
{{%align center%}}
![image](/attachment_files/blobstore/memoryxtend-rocksdb-architecture.png)
@@ -19,23 +21,20 @@ When configured for Flash/SSD, the MemoryXtend architecture tiers the storage of
## Space Partition Instance
-The space partition instance is a JVM heap which acts as a LRU cache against the underlying blob store. This tier in the architecture stores indexes, space class metadata, transactions, replciation redo log, leases and statistics.
-Upon a space read operation, if the object exists in the JVM heap (i.e. a cache hit) it will be immediately returned to the space proxy client. Otherwise, the space will load it from the underlying blob store and place it on the JVM heap (known as a cache miss).
-
-
-## Blob Store
-The blob store is based on a log-structured merge tree architecture (similar to popular NoSQL databases such as: {{%exurl "HBase""https://hbase.apache.org/"%}}, {{%exurl "BigTable""https://cloud.google.com/bigtable/"%}}, or {{%exurl "Cassandra""https://cassandra.apache.org/"%}}). There are three main components in the blob store:
-
-- MemTable: An in-memory data structure (residing on off-heap RAM) where all incoming writes are stored. When a MemTable fills up, itis flushed to a SST file on storage.
-- Log: A write ahead log (WAL) which serializes MemTable operations to persistent medium as log files. In the event of failure, WAL files can be used to recover the key/value store to its consistent state, by reconstructing the MemTable from teh logs.
-- Sorted String Table (SST) files: SSTable is a data structure (residing on disk) to efficiently store a large data footprint while optimizing for high throughput, sequential read/write workloads. When a MemTable fills up, it is flushed to a SST file on storage and the corresponding write ahead log file can be deleted.
-
+The Space partition instance is a JVM heap that acts as an LRU cache against the underlying blobstore. This tier in the architecture stores indexes, Space class metadata, transactions, replication redo logs, leases, and statistics.
+Upon a Space read operation, if the object exists in the JVM heap (a cache hit), it is immediately returned to the Space proxy client. Otherwise, the Space loads it from the underlying blobstore and places it on the JVM heap (a cache miss).
+## Blobstore
+The blobstore is based on a log-structured merge tree architecture (similar to popular NoSQL databases such as {{%exurl "HBase""https://hbase.apache.org/"%}}, {{%exurl "BigTable""https://cloud.google.com/bigtable/"%}}, or {{%exurl "Cassandra""https://cassandra.apache.org/"%}}). There are three main components in the blobstore:
+- MemTable: An in-memory data structure (residing on off-heap RAM) where all incoming writes are stored. When a MemTable fills up, it is flushed to an SST file on storage.
+- Log: A write-ahead log (WAL) that serializes MemTable operations to a persistent medium as log files. In the event of failure, WAL files can be used to recover the key/value store to its consistent state, by reconstructing the MemTable from the logs.
+- Sorted String Table (SST) files: SSTable is a data structure (residing on disk) that efficiently stores a large data footprint while optimizing for high-throughput, sequential read/write workloads. When a MemTable fills up, it is flushed to an SST file on storage, and the corresponding WAL file can be deleted.
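+
+To make the MemTable/WAL/SST flow concrete, here is a minimal sketch against the raw RocksDB Java API (`org.rocksdb`) that the blobstore builds on. It illustrates only the underlying key/value store, not XAP's own wiring; the path, keys, and class name are illustrative:
+
+```java
+import org.rocksdb.Options;
+import org.rocksdb.RocksDB;
+import org.rocksdb.RocksDBException;
+
+public class BlobstoreKeyValueSketch {
+    public static void main(String[] args) throws RocksDBException {
+        RocksDB.loadLibrary();
+        try (Options options = new Options().setCreateIfMissing(true);
+             RocksDB db = RocksDB.open(options, "/mnt/db1")) {
+            // The write lands in the MemTable and the WAL first;
+            // flushed MemTables become SST files on disk.
+            db.put("stock-1".getBytes(), "GS".getBytes());
+            // Reads are served from the MemTable or the SST files.
+            byte[] value = db.get("stock-1".getBytes());
+            System.out.println(new String(value));
+        }
+    }
+}
+```
+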
# Configuration and Deployment
-Any existing XAP space can be configured to integrate a blob store with it. As with a typical processing unit, configuration is done through `pu.xml` or code. For example:
+
+Any existing XAP Space can be configured to integrate a blobstore with it. As with a typical Processing Unit, configuration is done through `pu.xml` or code. For example:
{{%tabs%}}
{{%tab "Namespace"%}}
@@ -105,22 +104,22 @@ The following tables describes the configuration options used in `rocksdb-blob-s
| Property | Description | Default | Use |
|:-----------------------|:----------------------------------------------------------|:--------|:--------|
-| paths | A comma-separated array of mount paths used for each space partition's blob store. Th number of paths in the array shall correspond to the number of partition instances in the space (primaries and backups). For instance, for a two-partition space with no backups, `/mnt/db1` corresponds to the first partition, while `/mnt/db2` to the second one. | | required |
-| mapping-dir | A directory on the filesystem which contains the "partition to blob store" mapping file. This file is automatically generated by MemoryXtend. In the event of a parition re-location, the mapping file will be updated. | | required |
+| paths | A comma-separated array of mount paths used for each Space partition's blobstore. The number of paths in the array should correspond to the number of partition instances in the Space (primaries and backups). For instance, for a two-partition Space with no backups, `/mnt/db1` corresponds to the first partition, and `/mnt/db2` to the second one. | | required |
+| mapping-dir | A directory in the file system that contains the "partition to blobstore" mapping file. This file is automatically generated by MemoryXtend. In the event of a partition re-location, the mapping file is updated. | | required |
| central-storage | Specifies whether the deployment strategy is for [central storage](./memoryxtend-rocksdb-ssd.html#central-storage) (i.e. SAN configuration) or [local storage](./memoryxtend-rocksdb-ssd.html#local-storage) on each grid machine (default)| false | optional |
-| db-options | Specifies the tuning parameters for the persistent data store in the underlying blob store. This includes SST formats, compaction settings and flushes. See [Performance Tuning](./memoryxtend-rocksdb-ssd.html#performance-tuning) section for details. | | optional |
-| data-column-family-options | Specifies the tuning parameters for the LSM logic and memory tables. See [Performance Tuning](./memoryxtend-rocksdb-ssd.html#performance-tuning) section for details.| | optional |
-| blob-store-handler | BlobStore implementation | | required |
-| cache-entries-percentage | On-Heap cache stores objects in their native format.This cache size determined based on the percentage of the GSC JVM max memory(-Xmx). If `-Xmx` is not specified the cache size default to `10000` objects. This is an LRU based data cache.(*)| 20% | optional |
-| avg-object-size-KB | Average object size in KB. avg-object-size-bytes and avg-object-size-KB cannot be used together. | 5 | optional |
-| avg-object-size-bytes | Average object size in bytes. avg-object-size-bytes and avg-object-size-KB cannot be used together. | 5000 | optional |
-| persistent | data is written to flash, space will perform recovery from flash if needed. | | required |
-| blob-store-cache-query | one or more SQL queries that determine which objects will be stored in cache | | optional |
+| db-options | Specifies the tuning parameters for the persistent data store in the underlying blobstore. This includes SST formats, compaction settings and flushes. See the [Performance Tuning](./memoryxtend-rocksdb-ssd.html#performance-tuning) section for details. | | optional |
+| data-column-family-options | Specifies the tuning parameters for the LSM logic and memory tables. See the [Performance Tuning](./memoryxtend-rocksdb-ssd.html#performance-tuning) section for details.| | optional |
+| blob-store-handler | The blobstore implementation to use. | | required |
+| cache-entries-percentage | The on-heap cache stores objects in their native format. The cache size is determined as a percentage of the GSC JVM max memory (`-Xmx`). If `-Xmx` is not specified, the default cache size is `10000` objects. This is an LRU-based data cache. (*) | 20% | optional |
+| avg-object-size-KB | Average object size, in KB. `avg-object-size-bytes` and `avg-object-size-KB` cannot be used together. | 5 | optional |
+| avg-object-size-bytes | Average object size, in bytes. `avg-object-size-bytes` and `avg-object-size-KB` cannot be used together. | 5000 | optional |
+| persistent | Data is written to flash memory, and the Space performs recovery from flash memory if needed. | | required |
+| blob-store-cache-query | One or more SQL queries that determine which objects will be stored in cache. | | optional |
**Calculating cache-entries-percentage**
-The purpose of this settings is to set the maximum number of objects that a GSC heap can hold before it starts evicting. Here is the formula for calculating it:
+This setting defines the maximum number of objects that a GSC heap can hold before it starts evicting. Here is the formula for calculating it:
`Number of objects = ((GSC Xmx) * (cache-entries-percentage/100))/average-object-size-KB`
@@ -134,26 +133,28 @@ N = {10GB * 1024 * 1024) * (20/100) } / 2
## Custom Caching
-**Data Recovery on Restart**
-The MemoryXtend architecture allows for data persisted on the blob store to be available for the data grid upon restart. To enable this, all that's needed is to enable the `persistent` option on the blobstore policy.
+### Data Recovery on Restart
+
+The MemoryXtend architecture allows data persisted on the blobstore to be available to the data grid upon restart. To enable this, set the `persistent` option in the blobstore policy.
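+For example, a minimal sketch of such a configuration, assuming the `rocksdb-blob-store` namespace element and the attributes listed in the configuration table above (the `blob-store` prefix and bean ID are illustrative):
+
+```xml
+<blob-store:rocksdb-blob-store id="myBlobStore"
+    paths="/mnt/db1,/mnt/db2"
+    mapping-dir="/tmp/mapping"
+    persistent="true"/>
+```
+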
-**Blob Store Cache Custom Queries**
+### Blobstore Cache Custom Queries
+
+The `blob-store-cache-query` option enables customizing the cache contents. You can define a set of SQL queries, so that only objects that match the queries:
-The `blob-store-cache-query` option provides a way of customizing the cache contents. By defining a set of SQL criteria, only objects that fit the queries:
-- Will pre-load into the JVM heap upon data grid initialization/restart.
-- Will be stored in the JVM heap after space operations.
+- Are pre-loaded into the JVM heap upon data grid initialization/restart.
+- Are stored in the JVM heap after Space operations.
-This guarantees any subsequent read request will hit RAM, thereby providing predictable latency (avoiding cache misses).
+This guarantees that any subsequent read requests will hit RAM, providing predictable latency (and avoiding cache misses).
-This customization is useful when read latencies for specific class type (e.g. hot data, current day stocks) need to be predictable upfront.
+This customization is useful when read latencies for specific class types (such as hot data or current-day stocks) need to be predictable up front.
-**Lazy Load**
+### Lazy Load
-If no custom queries are defined, data will be lazily loaded. In this approach, no data is loaded into the JVM heap upon a restart. MemoryXtend saves only indexes in RAM and the rest of the objects on disk. As read throughput increases from clients, most of the data will eventually load into the data grid RAM tier. This is a preferred approach when the volume of data persisted on flash far exceeds what can fit into memory.
+If no custom queries are defined, the "lazy load" approach is used and no data is loaded into the JVM heap upon restart. MemoryXtend keeps only the indexes in RAM, and the rest of the objects are stored on disk. As read throughput from clients increases, most of the data eventually loads into the data grid RAM tier. This is the preferred approach when the volume of data persisted on flash memory far exceeds what can fit into memory.
**Example**
@@ -205,7 +206,7 @@ In the example below we are loading `Stock` instances where the name=a1000 and `
{{%/tab%}}
{{%/tabs%}}
-When the logging `com.gigaspaces.cache` is turned on the following output is generated:
+When the `com.gigaspaces.cache` logging is turned on, the following output is generated:
```bash
2016-12-26 07:57:56,378 INFO [com.gigaspaces.cache] - BlobStore internal cache recovery:
@@ -298,13 +299,13 @@ This configuration allows each space partition instance (primary or backup) to u
## Central Storage
-This deployment strategy works well with {{%exurl "storage area networks (SAN)" "http://en.wikipedia.org/wiki/Storage_area_network"%}}, which means that the disk drive devices are installed in a remote storage array but behave as if they're attached the the local machine. Most storage networks use the iSCSI or Fibre Channel protocol for communication between servers and disk drive devices. This configuration breaks the coupling between a partition instance and a local machin device, thereby allowing seamless relocation of paritions across data grid machines.
+This deployment strategy works well with {{%exurl "storage area networks (SAN)" "http://en.wikipedia.org/wiki/Storage_area_network"%}}, which means that the disk drive devices are installed in a remote storage array but behave as if they're attached to the local machine. Most storage networks use the iSCSI or Fibre Channel protocol for communication between servers and disk drive devices. This configuration breaks the coupling between a partition instance and a local machine device, allowing seamless relocation of partitions across data grid machines.
{{%align center%}}
![image](/attachment_files/blobstore/memoryxtend-central-storage.png)
{{%/align%}}
-Tiering storage between space partition instances and attached storage can be applied across one or more storage arrays, as shown in the configurations below:
+Tiering storage between Space partition instances and attached storage can be applied across one or more storage arrays, as shown in the configurations below:
### Single Storage Array
@@ -327,11 +328,11 @@ The following example deployes a 2 partitions space with a single backup (2,1) i
```
-### Two storage arrays
+### Two Storage Arrays
{{%section%}}
{{%column width="80%" %}}
-The following example deployes a 2 partitions space with a single backup (2,1) in the following manner:
+The following example deploys a 2-partition Space with a single backup (2,1) in the following manner:
- `/mnt1/db1` will be mounted to the 1st primary.
- `/mnt1/db2` will be mounted to the 2nd primary.
@@ -351,6 +352,7 @@ The following example deployes a 2 partitions space with a single backup (2,1) i
# Performance Tuning
## Persistent Data Store Tuning Parameters
+
XAP uses the default DBOptions class `com.com.gigaspaces.blobstore.rocksdb.config.XAPDBOptions`.
{{%refer%}}
@@ -360,13 +362,14 @@ A list of configuration properties can be found in the {{%exurl "org.rocksdb.DB
| Property | Description | Value |
|:-----------------------|:----------------------------------------------------------|:--------|
-| createIfMissing | Configure whether to create the database if it is missing. Note that this value is always overriden with `true`. | true |
-| maxBackgroundCompactions | Specifies the maximum number of concurrent background compaction jobs, submitted to the default LOW priority thread pool.
If you're increasing this, also consider increasing number of threads in LOW priority thread pool | 8 |
-| maxBackgroundFlushes | Specifies the maximum number of concurrent background flush jobs.
If you're increasing this, also consider increasing number of threads in HIGH priority thread pool. | 8 |
-| maxOpenFiles | Number of open files that can be used by the DB. You may need to increase this if your database has a large working set. Value -1 means files opened are always kept open. | -1 |
-| tableFormatConfig | Set the config for table format.
Default is BlockBasedTableConfig with - noBlockCache = opposite of `useCache`
- blockCacheSize = `cacheSize`
- blockSize = `blockSize`
- filter = BloomFilter(10,false)
- formatVersion = 0
The highlighted values are taken from the `rocksdb-blob-store` namespace / `RocksDBBlobStoreConfigurer` if provided, otherwise the following defaults will be used: - useCache = true
- cacheSize = 100MB
- blockSize = 16KB
If a custom tableFormatConfig is provided, the values from the namespace/configurer are ignored. | |
+| createIfMissing | Configure whether to create the database if it is missing. This value is always overridden with `true`. | true |
+| maxBackgroundCompactions | Specifies the maximum number of concurrent background compaction jobs, submitted to the default LOW priority thread pool.
If you're increasing this, also consider increasing the number of threads in the LOW priority thread pool. | 8 |
+| maxBackgroundFlushes | Specifies the maximum number of concurrent background flush jobs.
If you're increasing this, also consider increasing the number of threads in the HIGH priority thread pool. | 8 |
+| maxOpenFiles | Number of open files that can be used by the database. You may need to increase this if your database has a large working set. When the value is set to -1, files that are opened are always kept open. | -1 |
+| tableFormatConfig | Set the configuration for the table format.
The default is BlockBasedTableConfig with - noBlockCache = opposite of `useCache`
- blockCacheSize = `cacheSize`
- blockSize = `blockSize`
- filter = BloomFilter(10,false)
- formatVersion = 0
The highlighted values are taken from the `rocksdb-blob-store` namespace / `RocksDBBlobStoreConfigurer` if provided, otherwise the following defaults are used: - useCache = true
- cacheSize = 100MB
- blockSize = 16KB
If a custom tableFormatConfig is provided, the values from the namespace/configurer are ignored. | |
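+
+For orientation, here is a hedged sketch of these defaults expressed directly against the raw RocksDB Java API (`org.rocksdb.Options`); XAP applies them through its `XAPDBOptions` wrapper, so this is illustrative rather than the actual wiring:
+
+```java
+import org.rocksdb.BlockBasedTableConfig;
+import org.rocksdb.BloomFilter;
+import org.rocksdb.Options;
+
+Options options = new Options()
+        .setCreateIfMissing(true)       // always overridden to true
+        .setMaxBackgroundCompactions(8) // jobs in the LOW priority thread pool
+        .setMaxBackgroundFlushes(8)     // jobs in the HIGH priority thread pool
+        .setMaxOpenFiles(-1);           // -1: opened files are always kept open
+options.setTableFormatConfig(new BlockBasedTableConfig()
+        .setNoBlockCache(false)                 // useCache = true
+        .setBlockCacheSize(100L * 1024 * 1024)  // cacheSize = 100 MB
+        .setBlockSize(16 * 1024)                // blockSize = 16 KB
+        .setFilter(new BloomFilter(10, false))
+        .setFormatVersion(0));
+```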
## LSM Logic and MemTables Tuning Parameters
+
Below are the values for the default class `com.com.gigaspaces.blobstore.rocksdb.config.XAPColumnFamilyOptions`.
{{%refer%}}
@@ -376,16 +379,16 @@ A list of configuration properties can be found in the [org.rocksdb.ColumnFamil
| Property | Description | Value |
|:-----------------------|:----------------------------------------------------------|:--------|
| writeBufferSize | Amount of data to build up in memory (backed by an unsorted log on disk)
before converting to a sorted on-disk file. Should be in bytes. | 64 MB |
-| levelZeroSlowdownWritesTrigger | Soft limit on number of level-0 files. We start slowing down writes at this point.
A value < 0 means that no writing slow down will be triggered by number of files in level-0. | 8 |
+| levelZeroSlowdownWritesTrigger | Soft limit on number of level-0 files. XAP begins to slow down writes at this point.
A value < 0 means that no writing slowdown is triggered by the number of files in level-0. | 8 |
| maxWriteBufferNumber | The maximum number of write buffers that are built up in memory. | 4 |
| targetFileSizeBase | The target file size for compaction, should be in bytes. | 64 MB |
| softPendingCompactionBytesLimit | The soft limit to impose on compaction. | 0 |
| hardPendingCompactionBytesLimit | The hard limit to impose on compaction. | 0 |
-| levelCompactionDynamicLevelBytes | If true, RocksDB will pick target size of each level dynamically. | false |
-| maxBytesForLevelBase | The upper-bound of the total size of level-1 files in bytes. | 512 MB |
-| compressionPerLevel | Sets the compression policy for each level | [NO_COMPRESSION,
NO_COMPRESSION,
SNAPPY_COMPRESSION] |
-| mergeOperatorName | Set the merge operator to be used for merging two merge operands of the same key. | put |
-| fsync | By default, each write returns after the data is pushed into the operating system. The transfer from operating system memory to the underlying persistent storage happens asynchronously. When configuring sync to true each write operation not return until the data being written has been pushed all the way to persistent storage.
This parameter should be set to true while storing data to filesystem like ext3 that can lose files after a reboot.
Default is false. If this property is set, the `fsync` that is passed to the `rocksdb-blob-store` namespace/`RocksDBBlobStoreConfigurer` (if any) will be ignored. | false |
+| levelCompactionDynamicLevelBytes | If true, RocksDB will pick the target size of each level dynamically. | false |
+| maxBytesForLevelBase | The upper limit of the total size of level-1 files in bytes. | 512 MB |
+| compressionPerLevel | Sets the compression policy for each level. | [NO_COMPRESSION,
NO_COMPRESSION,
SNAPPY_COMPRESSION] |
+| mergeOperatorName | Sets the merge operator to be used for merging two merge operands of the same key. | put |
+| fsync | By default, each write returns after the data is pushed into the operating system. The transfer from operating system memory to the underlying persistent storage happens asynchronously. When `fsync` is set to true, each write operation doesn't return until the data being written has been pushed all the way to persistent storage.
This parameter should be set to true when storing data to file systems like ext3, which can lose files after a reboot.
If this property is set, the `fsync` that is passed to the `rocksdb-blob-store` namespace/`RocksDBBlobStoreConfigurer` (if any) is ignored. | false |
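+
+Similarly, a hedged sketch of these LSM/MemTable defaults against the raw RocksDB `ColumnFamilyOptions` API (illustrative only; XAP applies them through `XAPColumnFamilyOptions`):
+
+```java
+import java.util.Arrays;
+
+import org.rocksdb.ColumnFamilyOptions;
+import org.rocksdb.CompressionType;
+
+ColumnFamilyOptions cfOptions = new ColumnFamilyOptions()
+        .setWriteBufferSize(64L * 1024 * 1024)       // 64 MB MemTable before flush
+        .setLevelZeroSlowdownWritesTrigger(8)        // soft limit on level-0 files
+        .setMaxWriteBufferNumber(4)
+        .setTargetFileSizeBase(64L * 1024 * 1024)    // 64 MB compaction target
+        .setLevelCompactionDynamicLevelBytes(false)
+        .setMaxBytesForLevelBase(512L * 1024 * 1024) // 512 MB level-1 upper limit
+        .setCompressionPerLevel(Arrays.asList(
+                CompressionType.NO_COMPRESSION,
+                CompressionType.NO_COMPRESSION,
+                CompressionType.SNAPPY_COMPRESSION))
+        .setMergeOperatorName("put");
+```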
## Examples
diff --git a/site/content/xap/12.3/dev-java/event-processing.markdown b/site/content/xap/12.3/dev-java/event-processing.markdown
index 9f14406ea..c08801dea 100644
--- a/site/content/xap/12.3/dev-java/event-processing.markdown
+++ b/site/content/xap/12.3/dev-java/event-processing.markdown
@@ -7,36 +7,13 @@ weight: 1500
---
-This section will guide you through the event processing APIs and configuration on top of the Space.
+This section describes the event processing APIs and how to configure them on top of the Space. The relevant APIs include the [Notify Container](./notify-container-overview.html), which wraps the Space data event session API with an event container abstraction, and the [Polling Container](./polling-container-overview.html), which allows you to perform polling receive operations against the Space.
+Events received by the polling and notify containers are handled by the [Event Listener](./data-event-listener.html), which is a Space data event listener, and by the [Event Exception Listener](./event-exception-handler.html).
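+
+As a quick taste of these APIs, here is a hedged sketch of a polling container built with OpenSpaces' `SimplePollingContainerConfigurer`; the `Order` class is a hypothetical Space class:
+
+```java
+import org.openspaces.core.GigaSpace;
+import org.openspaces.events.adapter.SpaceDataEvent;
+import org.openspaces.events.polling.SimplePollingContainerConfigurer;
+import org.openspaces.events.polling.SimplePollingEventListenerContainer;
+
+public class OrderProcessor {
+
+    @SpaceDataEvent
+    public Order processOrder(Order order) {
+        // Handle the taken Order; a non-null return value is written back to the Space.
+        return order;
+    }
+
+    public static SimplePollingEventListenerContainer start(GigaSpace gigaSpace) {
+        return new SimplePollingContainerConfigurer(gigaSpace)
+                .template(new Order())                 // receive (take) matching entries
+                .eventListenerAnnotation(new OrderProcessor())
+                .pollingContainer();
+    }
+}
+```
+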
-
+XAP supports both non-ordered Entries and FIFO-ordered Entries when performing Space operations; see [FIFO Ordering](./fifo-overview.html) for details. XAP also includes [JMS message support](./messaging-support.html) that is built on top of the core Space architecture.
-
-{{% fpanel%}}
-[Notify Container](./notify-container-overview.html){{}}
-The notify event container wraps the space data event session API with event container abstraction.
-
-[Polling Container](./polling-container-overview.html){{}}
-Allows you to perform polling receive operations against the space.
-
-[Event Listener](./data-event-listener.html){{}}
-Describe the common Space Data Event Listener and its different adapters.
-
-[Event Exception Listener](./event-exception-handler.html){{}}
-Describe the common Exception Event Listener and its different adapters.
-
-[FIFO Ordering](./fifo-overview.html){{}}
-XAP supports both non-ordered Entries and FIFO ordered Entries when performing space operations.
-
-[JMS Integration](./messaging-support.html){{}}
-XAP provide a JMS implementation, built on top of the core spaces architecture.
-{{%/fpanel%}}
-
-
-
-
-#### Additional Resources
+# Additional Resources
{{%youtube "GwLfDYgl6f8" "Event Processing"%}}
diff --git a/site/content/xap/12.3/dev-java/mule-esb.markdown b/site/content/xap/12.3/dev-java/mule-esb.markdown
index dfe4b424c..22d2066b8 100644
--- a/site/content/xap/12.3/dev-java/mule-esb.markdown
+++ b/site/content/xap/12.3/dev-java/mule-esb.markdown
@@ -6,28 +6,12 @@ parent: none
weight: 1700
---
-XAP comes with comprehensive support for Mule v3.7. It allows you to use the Space as a Mule external transport, enabling receiving and dispatching of POJO messages over the Space.
+XAP comes with comprehensive support for Mule v3.7. This allows you to use the Space as a Mule external transport, so you can receive and dispatch POJO messages over the Space using an [Event Container](./mule-event-container-transport.html).
An additional transport called os-queue allows you to replace the Mule VM transport with highly available inter VM transport over the Space.
-A Mule application can be packaged and run as a Processing Unit within one of the SLA-driven Processing Unit containers.
-
+A Mule application can be packaged and run as a [Processing Unit](./mule-processing-unit.html) within one of the SLA-driven Processing Unit containers.
-{{%fpanel%}}
-
-[Event Container](./mule-event-container-transport.html){{}}
-XAP's event container transport uses event components that allow you to send and receive POJO messages over the Space using Mule.
-
-[Processing Unit](./mule-processing-unit.html){{}}
-The Mule Processing Unit allows you to run Mule within a Processing Unit, thus leveraging all of the Processing Unit and SLA-driven container capabilities.
-
-[Queue Provider](./mule-queue-provider.html){{}}
-The XAP queue provider is used for internal space-based communication between services managed by Mule.
-
-{{%/fpanel%}}
-
-
-
-## Additional Resources
+# Additional Resources
{{%exurl "Mule Site" "http://www.mulesoft.org"%}}
diff --git a/site/content/xap/12.3/dev-java/space-based-remoting-overview.markdown b/site/content/xap/12.3/dev-java/space-based-remoting-overview.markdown
index 78764eae7..8fe709d32 100644
--- a/site/content/xap/12.3/dev-java/space-based-remoting-overview.markdown
+++ b/site/content/xap/12.3/dev-java/space-based-remoting-overview.markdown
@@ -8,26 +8,33 @@ weight: 1600
-Remoting allows you to use remote invocations of POJO services, with the space as the transport layer.
+Remoting allows you to use remote invocations of POJO services, with the Space as the transport layer. Spring provides support for [various remoting technologies](http://static.springframework.org/spring/docs/2.0.x/reference/remoting.html). XAP uses the same concepts to provide remoting, using the Space as the underlying protocol.
-
+Some benefits of using the Space as the transport layer include:
-{{%fpanel%}}
+- **High availability** -- the Space by its nature (based on the cluster topology) is highly available, so remote invocations get this feature automatically when using the Space as the transport layer.
+- **Load-balancing** -- when using a Space with a partitioned cluster topology, each remote invocation is automatically directed to the appropriate partition (based on its routing handler), providing automatic load balancing.
+- **Performance** -- remote invocations are represented in fast internal OpenSpaces objects, providing fast serialization and transport over the net.
+- **Asynchronous execution** -- by its nature, remoting support is asynchronous, allowing for much higher throughput of remote invocations. OpenSpaces allows you to use asynchronous execution using Futures, and also provides synchronous support (built on top of it).
-[Overview](./space-based-remoting.html){{}}
-Remoting services overview.
+The OpenSpaces API supports two types of remoting, distinguished by the underlying implementation used to send the remote call. The first is called [Executor-Based Remoting](./executor-based-remoting.html), and the second is called [Event-Driven Remoting](./event-driven-remoting.html).
-[Executor based remoting](./executor-based-remoting.html){{}}
-Executor Remoting allows you to use remote invocations of POJO services, with the space as the transport layer using OpenSpaces Executors.
+# Choosing the Correct Remoting Mechanism
+This section explains when you should choose to use each of the remoting implementations. As far as the calling code is concerned, the choice between the implementations is transparent and requires only configuration changes.
-[Event driven remoting](./event-driven-remoting.html){{}}
-Event Driven Remoting allows you to use remote invocations of POJO services, with the space as the transport layer using a polling container on the space side to process the invocations.
-{{%/fpanel%}}
+In most cases, you should choose [Executor-Based Remoting](./executor-based-remoting.html). It is based on the XAP [Task Executors](./task-execution-over-the-space.html) feature, and executes the method invocation by submitting a special kind of task that executes on the Space side by calling the invoked service. This option allows for synchronous and asynchronous invocation, map/reduce style invocations, and transparent invocation failover.
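+
+For example, a hedged client-side sketch using OpenSpaces' `ExecutorRemotingProxyConfigurer`; the `StockService` interface, its method, and the `gigaSpace` instance are illustrative:
+
+```java
+import org.openspaces.remoting.ExecutorRemotingProxyConfigurer;
+
+// A plain POJO service contract (hypothetical).
+public interface StockService {
+    double getPrice(String symbol);
+}
+
+// Client side: the proxy turns each call into a task executed on the Space side.
+StockService stocks = new ExecutorRemotingProxyConfigurer<StockService>(gigaSpace, StockService.class)
+        .proxy();
+double price = stocks.getPrice("GS");
+```
+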
+[Event-Driven Remoting](./event-driven-remoting.html) supports most of the above capabilities, but does not support map/reduce style invocations. In terms of implementation, it's based on the [Polling Container](./polling-container-overview.html) feature, which means that it writes an invocation entry to the Space, which is later consumed by a polling container. After taking the invocation entry from the Space, the polling container's event handler delegates the call to the Space-side service.
-
+The [Event-Driven Remoting](./event-driven-remoting.html) implementation is slower than [Executor-Based Remoting](./executor-based-remoting.html) because it requires 4 Space operations to complete a single remote call: write invocation entry by client --> take invocation entry by polling container --> write invocation result by polling container --> take invocation result by client. In contrast, [Executor-Based Remoting](./executor-based-remoting.html) only requires a single `execute()` call.
+
+However, there are two main scenarios where you should opt for [Event-Driven Remoting](./event-driven-remoting.html) over [Executor-Based Remoting](./executor-based-remoting.html):
+
+- When you would like the actual service not to be co-located with the Space. With [Executor-Based Remoting](./executor-based-remoting.html), the remote service implementation can only be located within the Space's JVM(s). With [Event-Driven Remoting](./event-driven-remoting.html), you can put the client on a remote machine and use the classic **Master/Worker pattern** for processing the invocation. This offloads the processing from the Space (at the expense of moving your service away from the data it might need to process).
+- When unexpected bursts of invocations are a probable scenario, using [Event-Driven Remoting](./event-driven-remoting.html) may prove worthwhile, because invocations are not processed as they occur; they are "queued" in the Space and are processed by the polling container when resources are available. By limiting the number of threads of the polling container, you can ensure that the invocations don't monopolize the CPU of the Space. (The [Alerts](./administrative-alerts.html) API can help monitor this situation.)
+
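+A hedged sketch of the client-side switch, assuming OpenSpaces' `EventDrivenRemotingProxyConfigurer` and the same illustrative `StockService`:
+
+```java
+import org.openspaces.remoting.EventDrivenRemotingProxyConfigurer;
+
+// The service implementation now runs wherever a polling container consumes
+// the invocation entries -- not necessarily inside the Space's JVM.
+StockService stocks = new EventDrivenRemotingProxyConfigurer<StockService>(gigaSpace, StockService.class)
+        .proxy();
+double price = stocks.getPrice("GS");
+```
+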
+# Additional Resources
-#### Additional Resources
{{%youtube "-07-0PXUoeM" "Space based remoting"%}}
diff --git a/site/content/xap/12.3/dev-java/space-based-remoting.markdown b/site/content/xap/12.3/dev-java/space-based-remoting.markdown
deleted file mode 100644
index 26890775d..000000000
--- a/site/content/xap/12.3/dev-java/space-based-remoting.markdown
+++ /dev/null
@@ -1,40 +0,0 @@
----
-type: post123
-title: Overview
-categories: XAP123, OSS
-parent: space-based-remoting-overview.html
-weight: 100
----
-
-
-
-
-Spring provides support for [various remoting technologies](http://static.springframework.org/spring/docs/2.0.x/reference/remoting.html). GigaSpaces uses the same concepts to provide remoting, using the space as the underlying protocol.
-
-Some benefits of using the space as the transport layer include:
-
-- **High availability** -- since the space by its nature (based on the cluster topology) is highly available, remote invocations get this feature automatically when using the space as the transport layer.
-- **Load-balancing** -- when using a space with a partitioned cluster topology, each remote invocation is automatically directed to the appropriate partition (based on its routing handler), providing automatic load-balancing.
-- **Performance** -- remote invocations are represented in fast internal OpenSpaces objects, providing fast serialization and transport over the net.
-- **Asynchronous execution** -- by its nature, remoting support is asynchronous, allowing for much higher throughput of remote invocations. OpenSpaces allows you to use asynchronous execution using Futures, and also provides synchronous support (built on top of it).
-
-
-
-The OpenSpaces API supports two types of remoting, distinguished by the underlying implementation used to send the remote call. The first is called [Executor Based Remoting](./executor-based-remoting.html), and the second is called [Event Driven Remoting](./event-driven-remoting.html).
-
-# Choosing the Correct Remoting Mechanism
-
-This section explains when you should choose to use each of the remoting implementations. Note that as far as the calling code is concerned, the choice between the implementations is transparent and requires only configuration changes.
-
-In most cases, you should choose [Executor Based Remoting](./executor-based-remoting.html). It is based on the GigaSpaces [Task Executors](./task-execution-over-the-space.html) feature, which means that it executes the method invocation by submitting a special kind of task which executes on the space side by calling the invoked service. It allows for synchronous and asynchronous invocation, map/reduce style invocations and transparent invocation failover.
-
-[Event Driven Remoting](./event-driven-remoting.html) supports most of the above capabilities, but does not support map/reduce style invocations. In terms of implementation, it's based on the [Polling Container](./polling-container-overview.html) feature, which means that it writes an invocation entry to the space which is later consumed by a polling container. Once taking the invocation entry from the space, the polling container's event handler delegates the call to the space-side service.
-
-The [Event Driven Remoting](./event-driven-remoting.html) implementation is slower than the [Executor Based Remoting](./executor-based-remoting.html) since it requires 4 space operations to complete a single remote call: write invocation entry by client --> take invocation entry by polling container --> write invocation result by polling container --> take invocation result by client. In contrast, [Executor Based Remoting](./executor-based-remoting.html) only requires a single `execute()` call.
-
-However, there are two main scenarios where you should prefer [Event Driven Remoting](./event-driven-remoting.html) on top of [Executor Based Remoting](./executor-based-remoting.html):
-
-- When you would like the actual service to not to be co-located with the space. With [Executor Based Remoting](./executor-based-remoting.html), the remote service implementation can only be located within the space's JVM(s). With [Event Driven Remoting](./event-driven-remoting.html), you can locate the client on a remote machine and use the classic **Master/Worker pattern** for processing the invocation. This offloads the processing from the space (at the expense of moving your service away from the data it might need to do the processing).
-- When unexpected bursts of invocations are a probable scenario, using [Event Driven Remoting](./event-driven-remoting.html) may prove worthwhile, since invocations are not processed as they occur; they are "queued" in the space and are processed by the polling container when resources are available. By limiting the number of threads of the polling container you can make sure the invocations do not maximize the CPU of the space. (The [Alerts](./administrative-alerts.html) API can help monitor this situation.)
-
-
diff --git a/site/content/xap/12.3/dev-java/task-execution-over-the-space.markdown b/site/content/xap/12.3/dev-java/task-execution-over-the-space.markdown
deleted file mode 100644
index fb413612e..000000000
--- a/site/content/xap/12.3/dev-java/task-execution-over-the-space.markdown
+++ /dev/null
@@ -1,634 +0,0 @@
----
-type: post123
-title: Overview
-categories: XAP123, OSS
-parent: task-execution-overview.html
-weight: 100
----
-
-{{%ssummary%}}{{%/ssummary%}}
-{{%section%}}
-{{%column width="70%" %}}
-XAP supports `Task` execution in an asynchronous manner, collocated with the Space (Processing Unit that started an embedded Space). `Tasks` can be executed directly on a specific cluster member using routing declarations. `Tasks` can also be executed in "broadcast" mode on all the primary cluster members concurrently and reduced to a single result on the client-side. `Tasks` are dynamic in terms of content and class definition. (The `Task` does not have to be available within the space classpath.)
-{{%/column%}}
-{{%column width="30%" %}}
-![Executors_task_flow_basic.jpg](/attachment_files/Executors_task_flow_basic.jpg)
-{{%/column%}}
-{{%/section%}}
-
-{{%note%}}
-Please note that this feature allows dynamic class loading the first time a task is executed. If your use case requires loading a class afterward, use static import or keep the type as a member, changing the task in runtime is not supported.
-{{%/note%}}
-
-
-
-
-# Task API
-
-The `Task` interface is defined as follows:
-
-
-```java
-public interface Task extends Serializable {
-
- /**
- * Computes a result, or throws an exception if unable to do so.
- *
- * @return computed result
- * @throws Exception if unable to compute a result
- */
- T execute() throws Exception;
-}
-```
-
-Here is a simple implementation of a task that accepts a value that will be returned in the execute phase.
-
-
-```java
-public class MyTask implements Task {
-
- private int value;
-
- public MyTask(int value) {
- this.value = value;
- }
-
- public Integer execute() throws Exception {
- return value;
- }
-}
-```
-
-Executing the task uses the `GigaSpace` API with a routing value of 2 (the second parameter):
-
-
-```java
-AsyncFuture future = gigaSpace.execute(new MyTask(2), 2);
-int result = future.get();
-```
-
-# Async API
-
-`Task` execution is asynchronous in nature, returning an `AyncFuture`. This allows the result to be retrieved at a later stage. `AsyncFuture` allows registration of an `AsyncFutureListener` that will execute specified logic when the `Task` completes.
-
-Here are the interfaces for both `AsyncFuture` and `AsyncFutureListener`:
-
-
-```java
-public interface AsyncFuture extends Future {
-
- void setListener(AsyncFutureListener listener);
-}
-
-public interface AsyncFutureListener {
-
- /**
- * A callback when a result of an async invocation arrives.
- */
- void onResult(AsyncResult result);
-}
-```
-
-Passing the listener can be done by setting it on the `AsyncFuture` or when executing a `Task` using the `GigaSpace` API as an additional parameter.
-
-`AsyncResult` can be used to extract the result or the exception of the execution:
-
-
-```java
-public interface AsyncResult {
-
- /**
- * Returns the result of the async invocation. Returns null
- * in case of an exception. {@link #getException()} should be checked for
- * successful execution.
- */
- T getResult();
-
- /**
- * In case of an async invocation failure, returns the exception causing it.
- * If the invocation is successful, this method returns null
.
- */
- Exception getException();
-}
-```
-
-# Task Routing
-
-When executing a single `Task`, there are several ways its routing can be controlled. Passing the routing information as a parameter to the execute command is the simplest form:
-
-
-```java
-AsyncFuture future = gigaSpace.execute(new MyTask(2), 2);
-int result = future.get();
-```
-
-Alternatively, it is sufficient to define a POJO property annotated `@SpaceRouting`. The value of that property will be used to route any `Tasks` defined in this way. For example:
-
-
-```java
-public void Order {
-
- // ...
-
- @SpaceRouting
- public Integer getOrderRouting() {
- // ...
- }
-
-}
-
-Order order = new Order();
-AsyncFuture future = gigaSpace.execute(new MyTask(2), order);
-int result = future.get();
-```
-
-Routing information can also be defined at the `Task`-level, in two ways:
-
-1. Provide an instance property and annotate the getter with the `@SpaceRouting` annotation.
-1. Implement the `TaskRoutingProvider` interface (for non annotations based configuration).
-
-{{%tabs%}}
-{{%tab " Annotation "%}}
-
-
-```java
-
-public class MyTask implements Task {
-
- private int value;
-
- public MyTask(int value) {
- this.value = value;
- }
-
- public Integer execute() throws Exception {
- return value;
- }
-
- @SpaceRouting
- public Integer routing() {
- return this.value;
- }
-}
-```
-
-{{% /tab %}}
-{{%tab " Interface "%}}
-
-
-```java
-
-public class MyTask implements Task implements TaskRoutingProvider {
-
- private int value;
-
- public MyTask(int value) {
- this.value = value;
- }
-
- public Integer execute() throws Exception {
- return value;
- }
-
- public Integer getRouting() {
- return this.value;
- }
-}
-```
-
-{{% /tab %}}
-{{% /tabs %}}
-
-Using either mechanism to define routing at the the `Task`-level removes the need for the routing parameter:
-
-
-```java
-AsyncFuture future = gigaSpace.execute(new MyTask(2));
-int result = future.get();
-```
-
-# DistributedTask API
-
-A `DistributedTask` is a `Task` that is executed more than once (concurrently). It returns a result that is the reduced product of all operations. This reduction is calculated in the `Task`'s `reduce(...)` method.
-
-{{% section %}}
-{{% column width="45%" %}}
-Phase 1 - Sending the Tasks to be executed:
-![DistributedTaskExecution_phase1.jpg](/attachment_files/DistributedTaskExecution_phase1.jpg)
-{{% /column %}}
-{{% column width="45%"%}}
-Phase 2 - Getting the results back to be reduced:
-![DistributedTaskExecution_phase2.jpg](/attachment_files/DistributedTaskExecution_phase2.jpg)
-{{% /column %}}
-{{% /section %}}
-
-Here is the `DistributedTask` API:
-
-
-```java
-public interface AsyncResultsReducer {
-
- R reduce(List> results) throws Exception;
-
-}
-
-public interface DistributedTask extends Task, AsyncResultsReducer {
-}
-```
-
-The distributed task interface extends both `Task` and `AsyncResultsReducer`. The `Task` interface is used to execute a specific execution of the distributed task (there will be several executions of it), and the `AsyncResultsReducer` is used to reduce the results of all the executions.
-
-Lets write a (very) simple example of a `DistributedTask`:
-
-
-```java
-public class MyDistTask implements DistributedTask {
-
- public Integer execute() throws Exception {
- return 1;
- }
-
- public Long reduce(List> results) throws Exception {
- long sum = 0;
- for (AsyncResult result : results) {
- if (result.getException() != null) {
- throw result.getException();
- }
- sum += result.getResult();
- }
- return sum;
- }
-}
-```
-
-`MyDistTask` returns `1` for each of its `execute` operations, and the reducer sums all of the executions. If there was an exception thrown during the `execute` operation (in our case, it will never happen), the exception will be throws back to the user during the `reduce` operation.
-
-A `DistributedTask` can be broadcast to all primary nodes of the cluster or routed selectively. Executing a distributed task on several nodes could be done as follows:
-
-
-```java
-AsyncFuture future = gigaSpace.execute(new MyDistTask(), 1, 4, 6, 7);
-long result = future.get(); // result will be 4
-```
-
-In this case, `MyDistTask` is executed concurrently and asynchronously on the nodes that correspond to routing values of `1`, `4`, `6`, and `7`.
-
-Broadcasting the execution to all current primary nodes can be done by simply executing **just** the `DistributedTask`. Here is an example:
-
-
-```java
-AsyncFuture future = gigaSpace.execute(new MyDistTask());
-long result = future.get(); // result will be the number of primary spaces
-```
-
-In this case, the `DistributedTask` is executed on all primary spaces of the cluster.
-
-## AsyncResultFilter
-
-When executing a distributed task, results arrive in an asynchronous manner and once all the results have arrived, the `AsyncResultsReducer` is used to reduce them. The `AsyncResultFitler` can be used to as a callback and filter mechanism to be invoked for each result that arrives.
-
-
-```java
-public interface AsyncResultFilter {
-
- /**
- * Controls what should be done with the results.
- */
- enum Decision {
-
- /**
- * Continue processing the distributed task.
- */
- CONTINUE,
-
- /**
- * Break out of the processing of the distributed task and move
- * to the reduce phase.
- */
- BREAK,
-
- /**
- * Skip this result and continue processing the rest of the results.
- */
- SKIP
- }
-
- /**
- * A callback invoked for each result that arrives as a result of a distributed task execution allowing
- * to access the result that caused this event, the events received so far, and the total expected results.
- */
- Decision onResult(AsyncResultFilterEvent event);
-}
-```
-
-The filter can be used to control if a result should be used or not (the `SKIP` decision). If a we have enough results and we can move to the reduce phase (the `BREAK` decision). Or, if we should continue accumulating results (the `CONTINUE` decision).
-
-The filter can also be used as a way to be identify that results have arrived and we can do something within our application as a result of that. Note, in this case, make sure that heavy processing should be performed on a separate (probably pooled) thread.
-
-# ExecutorBuilder API
-
-The executor builder API allows to combine several task executions (both distributed ones and non distributed ones) into a seemingly single execution (with a reduce phase). Think of the `ExecutorBuilder` as a cartridge that accumulates all the tasks to be executed, and then executes all of them at once giving back a reduced result (in a concurrent and asynchronous manner). Here is an example of the API:
-
-![executorBuilder.jpg](/attachment_files/executorBuilder.jpg)
-
-
-```java
-AsyncFuture future = gigaSpace.executorBuilder(new SumReducer(Integer.class))
- .add(new MyTask(2))
- .add(new MyOtherTask(), 3)
- .add(new MyDistTask())
- .execute();
-Integer result = future.get();
-```
-
-In the above case, there are several tasks that are "added" to the `ExecutorBuilder`, executed (in a similar manner to a single distributed task) and then reduced using a sum reducer that is provided when building the `ExecutorBuilder`.
-
-The `ExecutorBuilder` can also be passed an optional `AsyncResultFilter` if the reducer also implements it.
-
-{{% tip %}}
-See the [Elastic Distributed Risk Analysis Engine](/sbp/elastic-distributed-calculation-engine.html) for a full `ExecutorBuilder` API example.
-{{% /tip %}}
-
-# Space Injection
-
-The most common scenario for using executors is by interacting with the collocated Space on which the task is executed. A `GigaSpace` instance, which works against a collocated Space can be easily injected either using annotations or using an interface. Here is an example:
-
-{{%tabs%}}
-{{%tab " Annotation "%}}
-
-
-```java
-
-public class TemplateCountTask implements DistributedTask {
-
- private Object template;
-
- @TaskGigaSpace
- private transient GigaSpace gigaSpace;
-
- public TemplateCountTask(Object template) {
- this.template = template;
- }
-
- public Integer execute() throws Exception {
- return gigaSpace.count(template);
- }
-
- public Long reduce(List> results) throws Exception {
- long sum = 0;
- for (AsyncResult result : results) {
- if (result.getException() != null) {
- throw result.getException();
- }
- sum += result.getResult();
- }
- return sum;
- }
-}
-```
-
-{{% /tab %}}
-{{%tab " Interface "%}}
-
-
-```java
-
-public class TemplateCountTask implements DistributedTask, TaskGigaSpaceAware {
-
- private Object template;
-
- private transient GigaSpace gigaSpace;
-
- public TemplateCountTask(Object template) {
- this.template = template;
- }
-
- public void setGigaSpace(GigaSpace gigaSpace) {
- this.gigaSpace = gigaSpace;
- }
-
- public Integer execute() throws Exception {
- return gigaSpace.count(template);
- }
-
- public Long reduce(List> results) throws Exception {
- long sum = 0;
- for (AsyncResult result : results) {
- if (result.getException() != null) {
- throw result.getException();
- }
- sum += result.getResult();
- }
- return sum;
- }
-}
-```
-
-{{% /tab %}}
-{{% /tabs %}}
-
-## Injecting a Clustered Space Proxy
-
-You may use the `ApplicationContextAware` interface to inject a clustered proxy into the Task implementation. This is useful when the Task should access other partitions. See below example:
-
-
-```java
-public class MyTask implements Task, ApplicationContextAware {
-
- @TaskGigaSpace
- private transient GigaSpace colocatedSpace;
- private transient GigaSpace clusteredSpace;
-
- public MyTask() {
- }
-
- public void setApplicationContext(ApplicationContext applicationContext)
- throws BeansException {
- clusteredSpace= (GigaSpace) applicationContext.getBean("clusteredGigaSpace");
- }
-....
-}
-```
-
-where the pu.xml should have:
-
-
-```xml
-
-
-```
-
-# Task Resource Injection
-
-A task might need to make use of resources defined within the processing unit it is executed at (which are not the collocated Space). For example, have access to a bean defined within the collocated processing unit. A `Task` executed goes through the same lifecycle of a bean defined within a processing unit (except for the fact that it is not registered with a processing unit). Thanks to this fact, injecting resources can be done using annotations (`@Autowired` and `@Resource`) or lifecycle interfaces (such as `ApplicationContextAware`).
-
-In order to enable resource injection, the Task must either be annotated with `AutowireTask` or implement the marker interface `AutowireTaskMarker`. Here is an example of injecting a resource of type `OrderDao` registered under the bean name `orderDao`. The `OrderDao` is then used to count the number of orders for each node.
-
-
-```java
-@AutowireTask
-public class OrderCountTask implements DistributedTask {
-
- private Object template;
-
- @Resource(name = "orderDao")
- private transient OrderDao orderDao;
-
- public Integer execute() throws Exception {
- return orderDao.countOrders();
- }
-
- public Long reduce(List> results) throws Exception {
- long sum = 0;
- for (AsyncResult result : results) {
- if (result.getException() != null) {
- throw result.getException();
- }
- sum += result.getResult();
- }
- return sum;
- }
-}
-```
-
-(remember to add context:annotation-config to the pu)
-
-When enabling autowiring of tasks, OpenSpaces annotations/interface injection can also be used such as `ClusterInfo` injection.
-
-{{% info "Why use @TaskGigaSpace/TaskGigaSpaceAware when you can autowire using standard Spring? "%}}
-You can inject a collocated `GigaSpace` instance to the task using the `@TaskGigaSpace` annotation implementing the `TaskGigaSpaceAware` interface. However, you can also wire the task through standard Spring dependency injection using the `@AutowireTask` and `@Resource` annotations. However, there's a big difference between the two: the `@TaskGigaSpace` annotation and the `TaskGigaSpaceAware` interface are intentionally designed not to trigger the spring dependency resolution and injection process, since it can be quite costly in terms of performance if executed every time a task is submitted. Therefore, for the common case where you only need to inject the collocated `GigaSpace` instance to the task, it is recommended to use `@TaskGigaSpace` or `TaskGigaSpaceAware`.
-{{% /info %}}
-
-# Built in Reducers
-
-OpenSpaces comes with several built in reducers and distributed tasks that can be used to perform common reduce operations (such as Min, Max, Avg and Sum). For example, if you use a simple `Task`:
-
-
-```java
-public class MyTask implements Task {
-
- public Integer execute() throws Exception {
- return 1;
- }
-}
-```
-
-We can easily make a distributed task out of it that sums all the results using the `SumTask`:
-
-
-```java
-AsyncFuture future = gigaSpace.execute(new SumTask(Integer.class, new MyTask()));
-int result = future.get(); // returns the number of active cluster members
-```
-
-In the above case, `SumTask` is a distributed task that wraps a simple `Task`. It automatically implements the `reduce` operation by summing all the results. This execution will result in executing a distributed task against all the primaries.
-
-`SumTask` uses internally the `SumReducer` which is just implements `AsyncResultsReducer`. The reducer, by itself, can be used with APIs that just use a reducer, for example, the `ExecutorBuilder` construction.
-
-See the [Aggregators](./aggregators.html) section for more details.
-
-
-# Change code without restarts
-
-When executing a task over the space, the code is loaded from the remote client and cached for future executions.
-Since the code is cached, modifications are ignored, and users are forced to restart the space whenever they modify the code.
-
-Starting with 12.1, you can use the `@SupportCodeChange` annotation to tell the space your code has changed.
-The space can store multiple versions of the same task. This is ideal for supporting clients using different versions of a task.
-
-
-For example, start with annotating your task with @SupportCodeChange(id="1"), and when the code changes, set the annotation to @SupportCodeChange(id="2"), and the space will load the new task.
-
-
-{{%tabs%}}
-{{%tab "Task version 1"%}}
-
-```java
-import org.openspaces.core.executor.Task;
-
-import com.gigaspaces.annotation.SupportCodeChange;
-
-@SupportCodeChange(id="1")
-public class DynamicTask implements Task {
-
- @Override
- public Integer execute() throws Exception {
- return new Integer(1);
- }
-}
-```
-{{%/tab%}}
-
-{{%tab "Task version 2"%}}
-
-```java
-import org.openspaces.core.executor.Task;
-
-import com.gigaspaces.annotation.SupportCodeChange;
-
-@SupportCodeChange(id="2")
-public class DynamicTask implements Task {
-
- @Override
- public Integer execute() throws Exception {
- return new Integer(2);
- }
-}
-```
-{{%/tab%}}
-{{%/tabs%}}
-
-
-{{%refer%}}
-[Change code without restarts](./the-space-no-restart.html)
-{{%/refer%}}
-
-
-
-# Transactions
-
-Executors fully support transactions similar to other `GigaSpace` API. Once an `execute` operation is executed within a declarative transaction, it will automatically join it. The transaction itself is then passed to the node the task executed on and added declaratively to it. This means that **any** `GigaSpace` operation performed within the task `execute` operation will automatically join the transaction started on the **client** side.
-
-An exception thrown within the `execute` operation will not cause the transaction to rollback (since it might be a valid exception). Transaction commit/rollback is controlled just by the client the executed the task.
-
-{{% tip %}}
-When executing distributed tasks or tasks that executed on more than one node within the same execution should use the distributed transaction manager. Tasks that execute just on a single node can use the distributed transaction manager, but should use the local transaction manager.
-{{%/tip%}}
-
-{{% anchor ExecutorService %}}
-
-# ExecutorService
-
-OpenSpaces executors support allows to easily implement java.util.concurrent.ExecutorService which allows to support the `ExecutorService` API and executed `Callable` and `Runnable` as tasks within the Space. Here is an example of how to get an `ExecutorService` implementation based on OpenSpaces executors and use it:
-
-
-```java
-ExecutorService executorService = TaskExecutors.newExecutorService(gigaSpace);
-Future future = executorService.submit(new MyCallable());
-int result = future.get();
-```
-
-The `java.util.concurrent` support also comes with built in adapters from `Callable`/`Runnable` to `Task`/`DistributedTask`. The adapters are used internally to implement the `ExecutorService`, but can be used on their own. The adapters can be constructed easily using utility methods found within the `TaskExecutors` factory. Here is an example:
-
-
-```java
-// convert a simple callable to task
-Task task1 = TaskExecutors.task(new MyCallable());
-// convert a simple callable to distributed task
-DistributedTask task2 = TaskExecutors.task(new MyCallable(),
- new SumReducer(Integer.class));
-```
-
-
-{{% refer %}}
-The following [example](/sbp/map-reduce-pattern-executors-example.html) demonstrates how to use the `Task` Execution API
-{{% /refer %}}
-
-
-# Considerations
-
-If the Task `execute` method is called frequently or large complex objects are used as return types, it is recommended to implement optimized serialization such as `Externalizable` for the returned value object or use libraries such as {{%giturl "kryo" "https://github.com/EsotericSoftware/kryo"%}}.
-
-{{% refer %}}
-For more information see [Custom Serialization](./custom-serialization.html).
-{{% /refer %}}
diff --git a/site/content/xap/12.3/dev-java/task-execution-overview.markdown b/site/content/xap/12.3/dev-java/task-execution-overview.markdown
index 21bd47a24..a4bed543b 100644
--- a/site/content/xap/12.3/dev-java/task-execution-overview.markdown
+++ b/site/content/xap/12.3/dev-java/task-execution-overview.markdown
@@ -8,26 +8,633 @@ weight: 1400
-Task executors allow you to easily execute grid-wide tasks on the space using the XAP API.
+{{%section%}}
+{{%column width="70%" %}}
+XAP supports `Task` execution in an asynchronous manner, co-located with the Space (i.e. within the Processing Unit that started an embedded Space). `Tasks` can be executed directly on a specific cluster member using routing declarations. `Tasks` can also be executed in "broadcast" mode on all the primary cluster members concurrently, and reduced to a single result on the client side. `Tasks` are dynamic in terms of content and class definition. (The `Task` does not have to be available within the Space classpath.)
+{{%/column%}}
+{{%column width="30%" %}}
+![Executors_task_flow_basic.jpg](/attachment_files/Executors_task_flow_basic.jpg)
+{{%/column%}}
+{{%/section%}}
-
+{{%note "Note"%}}
+This feature allows dynamic class loading the first time a task is executed. If your use case requires loading a class after that, use a static import or keep the type as a member. Changing the task class at runtime is not supported.
+{{%/note%}}
+# Task API
-{{%fpanel%}}
+The `Task` interface is defined as follows:
-[Overview](./task-execution-over-the-space.html)
-Task executor overview.
-[Dynamic language tasks](./task-dynamic-language.html)
-XAP supports the execution of tasks using scripting languages like JavaScipt and Groovy. These can be defined dynamically using the JDK 1.6 dynamic languages support. The dynamic language support is based on the ordinary task executors and OpenSpaces remoting support.
+```java
+public interface Task<T extends Serializable> extends Serializable {
-[Metadata](./task-metadata.html)
-This section explains the different Task metadata.
-{{%/fpanel%}}
+ /**
+ * Computes a result, or throws an exception if unable to do so.
+ *
+ * @return computed result
+ * @throws Exception if unable to compute a result
+ */
+ T execute() throws Exception;
+}
+```
+The following is a simple implementation of a task that accepts a value and returns it from the `execute` method:
-
-#### Additional Resources
+```java
+public class MyTask implements Task<Integer> {
+
+ private int value;
+
+ public MyTask(int value) {
+ this.value = value;
+ }
+
+ public Integer execute() throws Exception {
+ return value;
+ }
+}
+```
+
+Executing the task uses the `GigaSpace` API with a routing value of 2 (the second parameter):
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.execute(new MyTask(2), 2);
+int result = future.get();
+```
+
+# Async API
+
+`Task` execution is asynchronous in nature, returning an `AsyncFuture`. This allows the result to be retrieved at a later stage. `AsyncFuture` allows registration of an `AsyncFutureListener` that will execute the specified logic when the `Task` completes.
+
+The following are the interfaces for both `AsyncFuture` and `AsyncFutureListener`:
+
+
+```java
+public interface AsyncFuture<T> extends Future<T> {
+
+    void setListener(AsyncFutureListener<T> listener);
+}
+
+public interface AsyncFutureListener<T> {
+
+    /**
+     * A callback when a result of an async invocation arrives.
+     */
+    void onResult(AsyncResult<T> result);
+}
+```
+
+The listener can be passed either by setting it on the `AsyncFuture`, or as an additional parameter when executing a `Task` using the `GigaSpace` API.
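+
+For example, a minimal sketch (reusing the `MyTask` class shown earlier) that registers a listener in both ways; the exception check inside the callback follows the `AsyncResult` contract shown below:
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.execute(new MyTask(2), 2);
+future.setListener(new AsyncFutureListener<Integer>() {
+    public void onResult(AsyncResult<Integer> result) {
+        if (result.getException() != null) {
+            // handle the failure
+            return;
+        }
+        int value = result.getResult();
+        // consume the result
+    }
+});
+
+// Alternatively, pass the listener as an additional execute(...) parameter:
+gigaSpace.execute(new MyTask(2), 2, new AsyncFutureListener<Integer>() {
+    public void onResult(AsyncResult<Integer> result) {
+        // same handling as above
+    }
+});
+```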
+
+`AsyncResult` can be used to extract the result or the exception of the execution:
+
+
+```java
+public interface AsyncResult<T> {
+
+ /**
+ * Returns the result of the async invocation. Returns null
+ * in case of an exception. {@link #getException()} should be checked for
+ * successful execution.
+ */
+ T getResult();
+
+ /**
+ * In case of an async invocation failure, returns the exception causing it.
+     * If the invocation is successful, this method returns null.
+     */
+ Exception getException();
+}
+```
+
+# Task Routing
+
+When executing a single `Task`, there are several ways its routing can be controlled. Passing the routing information as a parameter to the execute command is the simplest approach:
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.execute(new MyTask(2), 2);
+int result = future.get();
+```
+
+Alternatively, a POJO can be passed as the routing parameter. If it has a property annotated with `@SpaceRouting`, the value of that property is used to route the `Task`. For example:
+
+
+```java
+public class Order {
+
+ // ...
+
+ @SpaceRouting
+ public Integer getOrderRouting() {
+ // ...
+ }
+
+}
+
+Order order = new Order();
+AsyncFuture<Integer> future = gigaSpace.execute(new MyTask(2), order);
+int result = future.get();
+```
+
+Routing information can also be defined at the `Task` level, in two ways:
+
+1. Provide an instance property and annotate the getter with the `@SpaceRouting` annotation.
+1. Implement the `TaskRoutingProvider` interface (for non-annotation-based configuration).
+
+{{%tabs%}}
+{{%tab " Annotation "%}}
+
+
+```java
+
+public class MyTask implements Task<Integer> {
+
+ private int value;
+
+ public MyTask(int value) {
+ this.value = value;
+ }
+
+ public Integer execute() throws Exception {
+ return value;
+ }
+
+ @SpaceRouting
+ public Integer routing() {
+ return this.value;
+ }
+}
+```
+
+{{% /tab %}}
+{{%tab " Interface "%}}
+
+
+```java
+
+public class MyTask implements Task<Integer>, TaskRoutingProvider {
+
+ private int value;
+
+ public MyTask(int value) {
+ this.value = value;
+ }
+
+ public Integer execute() throws Exception {
+ return value;
+ }
+
+ public Integer getRouting() {
+ return this.value;
+ }
+}
+```
+
+{{% /tab %}}
+{{% /tabs %}}
+
+Using either mechanism to define routing at the `Task` level removes the need for the routing parameter:
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.execute(new MyTask(2));
+int result = future.get();
+```
+
+# DistributedTask API
+
+A `DistributedTask` is a `Task` that is executed more than once (concurrently). It returns a result that is the reduced product of all operations. This reduction is calculated in the `Task`'s `reduce(...)` method.
+
+{{% section %}}
+{{% column width="45%" %}}
+Phase 1 - Sending the Tasks to be executed:
+![DistributedTaskExecution_phase1.jpg](/attachment_files/DistributedTaskExecution_phase1.jpg)
+{{% /column %}}
+{{% column width="45%"%}}
+Phase 2 - Getting the results back to be reduced:
+![DistributedTaskExecution_phase2.jpg](/attachment_files/DistributedTaskExecution_phase2.jpg)
+{{% /column %}}
+{{% /section %}}
+
+Here is the `DistributedTask` API:
+
+
+```java
+public interface AsyncResultsReducer<T, R> {
+
+    R reduce(List<AsyncResult<T>> results) throws Exception;
+
+}
+
+public interface DistributedTask<T extends Serializable, R> extends Task<T>, AsyncResultsReducer<T, R> {
+}
+```
+
+The distributed task interface extends both `Task` and `AsyncResultsReducer`. The `Task` interface is used for each individual execution of the distributed task (there will be several such executions), and the `AsyncResultsReducer` is used to reduce the results of all the executions.
+
+Let's write a (very) simple example of a `DistributedTask`:
+
+
+```java
+public class MyDistTask implements DistributedTask<Integer, Long> {
+
+ public Integer execute() throws Exception {
+ return 1;
+ }
+
+    public Long reduce(List<AsyncResult<Integer>> results) throws Exception {
+        long sum = 0;
+        for (AsyncResult<Integer> result : results) {
+ if (result.getException() != null) {
+ throw result.getException();
+ }
+ sum += result.getResult();
+ }
+ return sum;
+ }
+}
+```
+
+`MyDistTask` returns `1` for each of its `execute` operations, and the reducer sums all of the executions. If an exception was thrown during the `execute` operation (in our case, it will never happen), the exception is thrown back to the user during the `reduce` operation.
+
+A `DistributedTask` can be broadcast to all primary nodes of the cluster or routed selectively. Executing a distributed task on several nodes can be done as follows:
+
+
+```java
+AsyncFuture<Long> future = gigaSpace.execute(new MyDistTask(), 1, 4, 6, 7);
+long result = future.get(); // result will be 4
+```
+
+In this case, `MyDistTask` is executed concurrently and asynchronously on the nodes that correspond to routing values of `1`, `4`, `6`, and `7`.
+
+Broadcasting the execution to all current primary nodes can be done by simply executing **just** the `DistributedTask`. Here is an example:
+
+
+```java
+AsyncFuture<Long> future = gigaSpace.execute(new MyDistTask());
+long result = future.get(); // result will be the number of primary spaces
+```
+
+In this case, the `DistributedTask` is executed on all primary Spaces of the cluster.
+
+## AsyncResultFilter
+
+When executing a distributed task, results arrive in an asynchronous manner. When all the results have arrived, the `AsyncResultsReducer` is used to reduce them. The `AsyncResultFilter` can be used as a callback and filter mechanism, invoked for each result as it arrives.
+
+
+```java
+public interface AsyncResultFilter<T> {
+
+ /**
+ * Controls what should be done with the results.
+ */
+ enum Decision {
+
+ /**
+ * Continue processing the distributed task.
+ */
+ CONTINUE,
+
+ /**
+ * Break out of the processing of the distributed task and move
+ * to the reduce phase.
+ */
+ BREAK,
+
+ /**
+ * Skip this result and continue processing the rest of the results.
+ */
+ SKIP
+ }
+
+ /**
+     * A callback invoked for each result that arrives from a distributed task execution, allowing access
+     * to the result that caused this event, the results received so far, and the total expected results.
+ */
+    Decision onResult(AsyncResultFilterEvent<T> event);
+}
+```
+
+The filter can be used to control whether:
+
+- A result should be used or not (the `SKIP` decision).
+- There are enough results to move to the reduce phase (the `BREAK` decision).
+- Results should continue to accumulate (the `CONTINUE` decision).
+
+The filter can also be used as a notification that results have arrived, so the application can react to each one. In this case, make sure that heavy processing is performed on a separate (probably pooled) thread.
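+
+The following is a minimal sketch of such a filter (the thresholds are illustrative; the event accessors correspond to the current result, the results received so far, and the total expected results mentioned above):
+
+
+```java
+public class HalfResultsFilter implements AsyncResultFilter<Integer> {
+
+    public Decision onResult(AsyncResultFilterEvent<Integer> event) {
+        // skip failed executions instead of aborting the whole task
+        if (event.getCurrentResult().getException() != null) {
+            return Decision.SKIP;
+        }
+        // once half of the expected results have arrived, move to the reduce phase
+        if (event.getReceivedResults().length >= event.getTotalExpectedResults() / 2) {
+            return Decision.BREAK;
+        }
+        return Decision.CONTINUE;
+    }
+}
+```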
+
+# ExecutorBuilder API
+
+The executor builder API allows combining several task executions (both distributed and non-distributed) into a seemingly single execution (with a reduce phase). Think of the `ExecutorBuilder` as a cartridge that accumulates all the tasks to be executed, and then executes all of them at once giving back a reduced result (in a concurrent and asynchronous manner). The following is an example of the API:
+
+![executorBuilder.jpg](/attachment_files/executorBuilder.jpg)
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.executorBuilder(new SumReducer(Integer.class))
+ .add(new MyTask(2))
+ .add(new MyOtherTask(), 3)
+ .add(new MyDistTask())
+ .execute();
+Integer result = future.get();
+```
+
+In the above case, there are several tasks that are "added" to the `ExecutorBuilder`, executed (in a similar manner to a single distributed task) and then reduced using a sum reducer that is provided when building the `ExecutorBuilder`.
+
+The `ExecutorBuilder` can also be passed an optional `AsyncResultFilter` if the reducer also implements it.
+
+{{% tip %}}
+See the [Elastic Distributed Risk Analysis Engine](/sbp/elastic-distributed-calculation-engine.html) for a full `ExecutorBuilder` API example.
+{{% /tip %}}
+
+# Space Injection
+
+The most common scenario for using executors is interacting with the co-located Space on which the task is executed. A `GigaSpace` instance that works against the co-located Space can easily be injected, either using annotations or using an interface. Here is an example:
+
+{{%tabs%}}
+{{%tab " Annotation "%}}
+
+
+```java
+
+public class TemplateCountTask implements DistributedTask<Integer, Long> {
+
+ private Object template;
+
+ @TaskGigaSpace
+ private transient GigaSpace gigaSpace;
+
+ public TemplateCountTask(Object template) {
+ this.template = template;
+ }
+
+ public Integer execute() throws Exception {
+ return gigaSpace.count(template);
+ }
+
+    public Long reduce(List<AsyncResult<Integer>> results) throws Exception {
+        long sum = 0;
+        for (AsyncResult<Integer> result : results) {
+ if (result.getException() != null) {
+ throw result.getException();
+ }
+ sum += result.getResult();
+ }
+ return sum;
+ }
+}
+```
+
+{{% /tab %}}
+{{%tab " Interface "%}}
+
+
+```java
+
+public class TemplateCountTask implements DistributedTask<Integer, Long>, TaskGigaSpaceAware {
+
+ private Object template;
+
+ private transient GigaSpace gigaSpace;
+
+ public TemplateCountTask(Object template) {
+ this.template = template;
+ }
+
+ public void setGigaSpace(GigaSpace gigaSpace) {
+ this.gigaSpace = gigaSpace;
+ }
+
+ public Integer execute() throws Exception {
+ return gigaSpace.count(template);
+ }
+
+    public Long reduce(List<AsyncResult<Integer>> results) throws Exception {
+        long sum = 0;
+        for (AsyncResult<Integer> result : results) {
+ if (result.getException() != null) {
+ throw result.getException();
+ }
+ sum += result.getResult();
+ }
+ return sum;
+ }
+}
+```
+
+{{% /tab %}}
+{{% /tabs %}}
+
+## Injecting a Clustered Space Proxy
+
+You can use the `ApplicationContextAware` interface to inject a clustered proxy into the task implementation. This is useful when the task should access other partitions. See the following example:
+
+
+```java
+public class MyTask implements Task<Integer>, ApplicationContextAware {
+
+ @TaskGigaSpace
+ private transient GigaSpace colocatedSpace;
+ private transient GigaSpace clusteredSpace;
+
+ public MyTask() {
+ }
+
+ public void setApplicationContext(ApplicationContext applicationContext)
+ throws BeansException {
+ clusteredSpace= (GigaSpace) applicationContext.getBean("clusteredGigaSpace");
+ }
+    // ...
+}
+```
+
+where the pu.xml should have:
+
+
+```xml
+<os-core:embedded-space id="space" space-name="mySpace"/>
+<os-core:giga-space id="clusteredGigaSpace" space="space" clustered="true"/>
+```
+
+# Task Resource Injection
+
+A task may need to use resources defined within the Processing Unit where it is executed (other than the co-located Space); for example, a bean defined within the co-located Processing Unit. An executed `Task` goes through the same lifecycle as a bean defined within a Processing Unit (except that it isn't registered with the Processing Unit). As such, injecting resources can be done using annotations (`@Autowired` and `@Resource`) or lifecycle interfaces (such as `ApplicationContextAware`).
+
+To enable resource injection, the task must either be annotated with `@AutowireTask` or implement the marker interface `AutowireTaskMarker`. The following is an example of injecting a resource of type `OrderDao` registered under the bean name `orderDao`. The `OrderDao` is then used to count the number of orders on each node.
+
+
+```java
+@AutowireTask
+public class OrderCountTask implements DistributedTask<Integer, Long> {
+
+
+ @Resource(name = "orderDao")
+ private transient OrderDao orderDao;
+
+ public Integer execute() throws Exception {
+ return orderDao.countOrders();
+ }
+
+    public Long reduce(List<AsyncResult<Integer>> results) throws Exception {
+        long sum = 0;
+        for (AsyncResult<Integer> result : results) {
+ if (result.getException() != null) {
+ throw result.getException();
+ }
+ sum += result.getResult();
+ }
+ return sum;
+ }
+}
+```
+
+(Remember to add `<context:annotation-config/>` to the Processing Unit's pu.xml.)
+
+When enabling autowiring of tasks, OpenSpaces annotations/interface injections can also be used, such as the `ClusterInfo` injection.
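+
+For example, a minimal sketch of an autowired task that also receives cluster information through the `ClusterInfoAware` lifecycle interface (the task name is illustrative):
+
+
+```java
+@AutowireTask
+public class PartitionIdTask implements Task<Integer>, ClusterInfoAware {
+
+    private transient ClusterInfo clusterInfo;
+
+    // invoked by the container before execute()
+    public void setClusterInfo(ClusterInfo clusterInfo) {
+        this.clusterInfo = clusterInfo;
+    }
+
+    public Integer execute() throws Exception {
+        return clusterInfo.getInstanceId();
+    }
+}
+```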
+
+{{% info "Why use @TaskGigaSpace/TaskGigaSpaceAware when you can autowire using standard Spring? "%}}
+You can inject a co-located `GigaSpace` instance to the task using the `@TaskGigaSpace` annotation or by implementing the `TaskGigaSpaceAware` interface. You can also wire the task through standard Spring dependency injection using the `@AutowireTask` and `@Resource` annotations.
+
+However, there's a big difference between the two: the `@TaskGigaSpace` annotation and the `TaskGigaSpaceAware` interface are intentionally designed not to trigger the Spring dependency resolution and injection process, because it can be quite costly in terms of performance if executed every time a task is submitted. Therefore, for the common case where you only need to inject the co-located `GigaSpace` instance into the task, it is recommended to use `@TaskGigaSpace` or `TaskGigaSpaceAware`.
+{{% /info %}}
+
+# Built-In Reducers
+
+OpenSpaces comes with several built-in reducers and distributed tasks that can be used to perform common reduce operations (such as `Min`, `Max`, `Avg` and `Sum`). For example, if you use a simple `Task`:
+
+
+```java
+public class MyTask implements Task<Integer> {
+
+ public Integer execute() throws Exception {
+ return 1;
+ }
+}
+```
+
+We can easily make a distributed task out of it that sums all the results using the `SumTask`:
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.execute(new SumTask(Integer.class, new MyTask()));
+int result = future.get(); // the number of primary cluster members
+```
+
+In the above case, `SumTask` is a distributed task that wraps a simple `Task`. It automatically implements the `reduce` operation by summing all the results. This execution results in executing a distributed task against all the primaries.
+
+`SumTask` uses the `SumReducer` internally, which just implements `AsyncResultsReducer`. The reducer by itself can be used with APIs that just use a reducer, for example, the `ExecutorBuilder` construction.
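+
+The following sketch reuses the `SumReducer` directly with the `ExecutorBuilder` (task names as in the examples above):
+
+
+```java
+AsyncFuture<Integer> future = gigaSpace.executorBuilder(new SumReducer(Integer.class))
+        .add(new MyTask())
+        .add(new MyTask())
+        .execute();
+int result = future.get(); // the sum of both executions
+```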
+
+See the [Aggregators](./aggregators.html) section for more details.
+
+
+# Changing Code without Restarts
+
+When executing a task over the Space, the code is loaded from the remote client and cached for future executions. Since the code is cached, modifications are ignored, and users are forced to restart the Space whenever they modify the code.
+
+Starting with 12.1, you can use the `@SupportCodeChange` annotation to tell the Space your code has changed. The Space can store multiple versions of the same task. This is ideal for supporting clients using different versions of a task.
+
+
+For example, start by annotating your task with `@SupportCodeChange(id="1")`. When the code changes, set the annotation to `@SupportCodeChange(id="2")`, and the Space will load the new task.
+
+
+{{%tabs%}}
+{{%tab "Task version 1"%}}
+
+```java
+import org.openspaces.core.executor.Task;
+
+import com.gigaspaces.annotation.SupportCodeChange;
+
+@SupportCodeChange(id="1")
+public class DynamicTask implements Task {
+
+ @Override
+ public Integer execute() throws Exception {
+ return new Integer(1);
+ }
+}
+```
+{{%/tab%}}
+
+{{%tab "Task version 2"%}}
+
+```java
+import org.openspaces.core.executor.Task;
+
+import com.gigaspaces.annotation.SupportCodeChange;
+
+@SupportCodeChange(id="2")
+public class DynamicTask implements Task {
+
+ @Override
+ public Integer execute() throws Exception {
+ return new Integer(2);
+ }
+}
+```
+{{%/tab%}}
+{{%/tabs%}}
+
+
+{{%refer%}}
+[Change code without restarts](./the-space-no-restart.html)
+{{%/refer%}}
+
+
+
+# Transactions
+
+Executors fully support transactions, similar to other `GigaSpace` APIs. When an `execute` operation is performed within a declarative transaction, it automatically joins it. The transaction itself is then passed to the node on which the task executes, and added to it declaratively. This means that **any** `GigaSpace` operation performed within the task `execute` operation automatically joins the transaction started on the **client** side.
+
+An exception thrown within the `execute` operation will not cause the transaction to roll back (because it might be a valid exception). Transaction commit/rollback is controlled just by the client that executed the task.
+
+{{% tip "Tip"%}}
+Distributed tasks, or tasks that are executed on more than one node within the same execution, should use the distributed transaction manager. Tasks that execute on a single node can use the distributed transaction manager, but should preferably use the local transaction manager.
+{{%/tip%}}
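+
+For example, a minimal sketch of executing a task within a declarative transaction (assuming a transaction manager, e.g. one created via `DistributedJiniTxManagerConfigurer`, is wired to the `GigaSpace` instance; names are illustrative):
+
+
+```java
+@Transactional(propagation = Propagation.REQUIRED)
+public void processOrders() throws Exception {
+    // Space operations inside MyTask.execute() join this client-side transaction
+    AsyncFuture<Integer> future = gigaSpace.execute(new MyTask(2), 2);
+    future.get(); // commit/rollback remains under this client's control
+}
+```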
+
+{{% anchor ExecutorService %}}
+
+# ExecutorService
+
+OpenSpaces executor support also provides an implementation of `java.util.concurrent.ExecutorService`, which allows executing `Callable` and `Runnable` instances as tasks within the Space. The following is an example of how to obtain an `ExecutorService` implementation based on OpenSpaces executors and use it:
+
+
+```java
+ExecutorService executorService = TaskExecutors.newExecutorService(gigaSpace);
+Future<Integer> future = executorService.submit(new MyCallable());
+int result = future.get();
+```
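+
+The `MyCallable` used above could look like the following minimal sketch; note that it must be `Serializable` so it can be shipped to the Space:
+
+
+```java
+import java.io.Serializable;
+import java.util.concurrent.Callable;
+
+public class MyCallable implements Callable<Integer>, Serializable {
+
+    public Integer call() throws Exception {
+        return 1;
+    }
+}
+```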
+
+The `java.util.concurrent` support also comes with built-in adapters from `Callable`/`Runnable` to `Task`/`DistributedTask`. The adapters are used internally to implement the `ExecutorService`, but can be used on their own. The adapters can be constructed using utility methods found within the `TaskExecutors` factory. See the following example:
+
+
+```java
+// convert a simple callable to a task
+Task task1 = TaskExecutors.task(new MyCallable());
+// convert a simple callable to a distributed task
+DistributedTask task2 = TaskExecutors.task(new MyCallable(),
+        new SumReducer(Integer.class));
+```
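+
+The adapted tasks can then be executed like any other task; for example (a sketch reusing `task2` from above):
+
+
+```java
+AsyncFuture future = gigaSpace.execute(task2); // broadcast to all primaries, reduced by the SumReducer
+```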
+
+
+{{% refer %}}
+The following [example](/sbp/map-reduce-pattern-executors-example.html) demonstrates how to use the `Task` Execution API.
+{{% /refer %}}
+
+
+# Considerations
+
+If the Task `execute` method is called frequently or large complex objects are used as return types, it is recommended to implement optimized serialization such as `Externalizable` for the returned value object or use libraries such as {{%giturl "kryo" "https://github.com/EsotericSoftware/kryo"%}}.
+
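+For example, a minimal sketch of an `Externalizable` return type (class and field names are illustrative):
+
+
+```java
+import java.io.Externalizable;
+import java.io.IOException;
+import java.io.ObjectInput;
+import java.io.ObjectOutput;
+
+public class CountResult implements Externalizable {
+
+    private long count;
+
+    // Externalizable requires a public no-arg constructor
+    public CountResult() {
+    }
+
+    public CountResult(long count) {
+        this.count = count;
+    }
+
+    public void writeExternal(ObjectOutput out) throws IOException {
+        out.writeLong(count);
+    }
+
+    public void readExternal(ObjectInput in) throws IOException {
+        count = in.readLong();
+    }
+}
+```
+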
+{{% refer %}}
+For more information, refer to [Custom Serialization](./custom-serialization.html).
+{{% /refer %}}
+
+# Additional Resources
{{%youtube "n7P4rnQN1gw" "Map Reduce"%}}