From 1a9a93c4d458b1a51bfed8a13f0a6c3e58ebbab4 Mon Sep 17 00:00:00 2001
From: Tim Koornstra <89044870+TimKoornstra@users.noreply.github.com>
Date: Mon, 2 Oct 2023 15:13:44 +0200
Subject: [PATCH] Update README.md

Fix double entries
---
 README.md | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/README.md b/README.md
index 49c02fb2..7c9f8017 100644
--- a/README.md
+++ b/README.md
@@ -157,7 +157,6 @@ Variable-size Graph Specification Language (VGSL) is a powerful tool that enable
 
 VGSL operates through short definition strings. For instance:
 
-`[None,64,None,1 Cr3,3,32 Mp2,2,2,2 Cr3,3,64 Mp2,2,2,2 Rc Fc64 D20 Lrs128 D20 Lrs64 D20 O1s92]`
 `None,64,None,1 Cr3,3,32 Mp2,2,2,2 Cr3,3,64 Mp2,2,2,2 Rc Fc64 D20 Lrs128 D20 Lrs64 D20 O1s92`
 
 In this example, the string defines a neural network with input layers, convolutional layers, pooling, reshaping, fully connected layers, LSTM and output layers. Each segment of the string corresponds to a specific layer or operation in the neural network. Moreover, VGSL provides the flexibility to specify the type of activation function for certain layers, enhancing customization.
@@ -166,10 +165,8 @@ In this example, the string defines a neural network with input layers, convolut
 
 | **Layer** | **Spec** | **Example** | **Description** |
 |--------------------|------------------------------------------------|--------------------|--------------------------------------------------------------------------------------------------------------|
-| Input | `[batch, height, width, depth]` | `None,64,None,1` | Input layer with variable batch_size & width, depth of 1 channel |
-| Input | `batch,height,width,depth]` | `None,64,None,1` | Input layer with variable batch_size & width, depth of 1 channel |
+| Input | `batch,height,width,depth` | `None,64,None,1` | Input layer with variable batch_size & width, depth of 1 channel |
 | Output | `O(2\|1\|0)(l\|s)<n>` | `O1s10` | Dense layer with a 1D sequence as output with 10 output classes and softmax |
-| Conv2D | `C(s\|t\|r\|e\|l\|m)<x>,<y>,[<s_x>,<s_y>],<d>` | `Cr,3,3,64` | Conv2D layer with Relu, a 3x3 filter, 1x1 stride and 64 filters |
 | Conv2D | `C(s\|t\|r\|e\|l\|m)<x>,<y>,[<s_x>,<s_y>],<d>` | `Cr3,3,64` | Conv2D layer with Relu, a 3x3 filter, 1x1 stride and 64 filters |
 | Dense (FC) | `F(s\|t\|r\|l\|m)<d>` | `Fs64` | Dense layer with softmax and 64 units |
 | LSTM | `L(f\|r)[s]<n>,[D<rate>,Rd<rate>]` | `Lf64` | Forward-only LSTM cell with 64 units |
@@ -177,21 +174,16 @@ In this example, the string defines a neural network with input layers, convolut
 | Bidirectional | `B(g\|l)<n>[D<rate>Rd<rate>]` | `Bl256` | Bidirectional layer wrapping a LSTM RNN with 256 units |
 | BatchNormalization | `Bn` | `Bn` | BatchNormalization layer |
 | MaxPooling2D | `Mp<x>,<y>,<s_x>,<s_y>` | `Mp2,2,1,1` | MaxPooling2D layer with 2x2 pool size and 1x1 strides |
-| AvgPooling2D | `Ap<x>,<y>,<s_x>,<s_y>` | `Ap2,2,2,2` | AveragePooling2D layer with 2x2 pool size and 1x1 strides |
-| Dropout | `D<rate>` | `Do25` | Dropout layer with `dropout` = 0.25 |
 | AvgPooling2D | `Ap<x>,<y>,<s_x>,<s_y>` | `Ap2,2,2,2` | AveragePooling2D layer with 2x2 pool size and 2x2 strides |
 | Dropout | `D<rate>` | `D25` | Dropout layer with `dropout` = 0.25 |
 | Reshape | `Rc` | `Rc` | Reshape layer returns a new (collapsed) tf.Tensor with a different shape based on the previous layer outputs |
-| ResidualBlock | `RB[d]<x>,<y>,<d>` | `RB3,3,64` | Residual Block with optional downsample. Has a kernel size of `<x>`,`<y>` and a depth of `<d>`. If [d] is provided, the block will downsample the input |
 | ResidualBlock | `RB[d]<x>,<y>,<d>` | `RB3,3,64` | Residual Block with optional downsample. Has a kernel size of `<x>`,`<y>` and a depth of `<d>`. If `d` is provided, the block will downsample the input |
 
 ### Layer Details
 #### Input
 
-- **Spec**: `[batch, height, width, depth]`
 - **Spec**: `batch,height,width,depth`
 - **Description**: Represents the input layer in TensorFlow, based on standard TF tensor dimensions.
-- **Example**: `None,64,None,1` creates a tf.layers.Input with a variable batch size, height of 64, variable width and a depth of 1 (input channels)
 - **Example**: `None,64,None,1` creates a `tf.layers.Input` with a variable batch size, height of 64, variable width and a depth of 1 (input channels)
 
 #### Output
@@ -202,7 +194,6 @@ In this example, the string defines a neural network with input layers, convolut
 
 #### Conv2D
 
-- **Spec**: `C(s|t|r|e|l|m)<x>,<y>,[<s_x>,<s_y>],<d>`
 - **Spec**: `C(s|t|r|e|l|m)<x>,<y>[,<s_x>,<s_y>],<d>`
 - **Description**: Convolutional layer using a `x`,`y` window and `d` filters. Optionally, the stride window can be set with (`s_x`, `s_y`).
 - **Examples**:
@@ -217,21 +208,18 @@ In this example, the string defines a neural network with input layers, convolut
 
 #### LSTM
 
-- **Spec**: `L(f|r)[s]<n>,[D<rate>,Rd<rate>]`
 - **Spec**: `L(f|r)[s]<n>[,D<rate>,Rd<rate>]`
 - **Description**: LSTM cell running either forward-only (`f`) or reversed-only (`r`), with `n` units. Optionally, the `rate` can be set for the `dropout` and/or the `recurrent_dropout`, where `rate` indicates a percentage between 0 and 100.
 - **Example**: `Lf64` creates a forward-only LSTM cell with 64 units.
 
 #### GRU
 
-- **Spec**: `G(f|r)[s]<n>,[D<rate>,Rd<rate>]`
 - **Spec**: `G(f|r)[s]<n>[,D<rate>,Rd<rate>]`
 - **Description**: GRU cell running either forward-only (`f`) or reversed-only (`r`), with `n` units. Optionally, the `rate` can be set for the `dropout` and/or the `recurrent_dropout`, where `rate` indicates a percentage between 0 and 100.
 - **Example**: `Gf64` creates a forward-only GRU cell with 64 units.
 
 #### Bidirectional
 
-- **Spec**: `B(g|l)<n>,[D<rate>,Rd<rate>]`
 - **Spec**: `B(g|l)<n>[,D<rate>,Rd<rate>]`
 - **Description**: Bidirectional layer wrapping either a LSTM (`l`) or GRU (`g`) RNN layer, running in both directions, with `n` units. Optionally, the `rate` can be set for the `dropout` and/or the `recurrent_dropout`, where `rate` indicates a percentage between 0 and 100.
 - **Example**: `Bl256` creates a Bidirectional RNN layer using a LSTM Cell with 256 units.
@@ -258,7 +246,6 @@ In this example, the string defines a neural network with input layers, convolut
 
 - **Spec**: `D<rate>`
 - **Description**: Regularization layer that sets input units to 0 at a rate of `rate` during training. Used to prevent overfitting.
-- **Example**: `Do50` creates a Dropout layer with a dropout rate of 0.5 (`D`/100).
 - **Example**: `D50` creates a Dropout layer with a dropout rate of 0.5 (`D`/100).
 
 #### Reshape
@@ -268,8 +255,6 @@ In this example, the string defines a neural network with input layers, convolut
 - **Example**: `Rc` applies a specific transformation: `layers.Reshape((-1, prev_layer_y * prev_layer_x))`.
 
 #### ResidualBlock
-- **Spec**: `RB[d]<x>,<y>,<d>`
-- **Description**: A Residual Block with a kernel size of `<x>`,`<y>` and a depth of `<d>`. If [d] is provided, the block will downsample the input. Residual blocks are used to allow for deeper networks by adding skip connections, which helps in preventing the vanishing gradient problem.
 - **Spec**: `RB[d]<x>,<y>,<d>`
 - **Description**: A Residual Block with a kernel size of `<x>`,`<y>` and a depth of `<d>`. If [d] is provided, the block will downsample the input. Residual blocks are used to allow for deeper networks by adding skip connections, which helps in preventing the vanishing gradient problem.
 - **Example**: `RB3,3,64` creates a Residual Block with a 3x3 kernel size and a depth of 64 filters.
@@ -400,5 +385,3 @@ This error usually indicates that there is a mismatch in the expected input dime
 To resolve this:
 - Ensure that your VGSL string for the LSTM layer has an `s` in it, which will make the layer return sequences. For instance, instead of "Lf128", use "Lfs128".
 - Re-run the script or command with the corrected VGSL string.
-
-
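
Note (not part of the patch): as a rough illustration of what the example VGSL string in the diff expands to, here is a minimal hand-written Keras sketch. The layer mapping follows the table above; choices such as `padding="same"`, the ReLU activation for `Fc64`, and the exact `Reshape` target are assumptions made for illustration, not the repository's actual VGSL model builder.

```python
# Hand-expanded sketch of:
#   None,64,None,1 Cr3,3,32 Mp2,2,2,2 Cr3,3,64 Mp2,2,2,2 Rc Fc64 D20 Lrs128 D20 Lrs64 D20 O1s92
# Assumptions (not from the patch): padding="same", ReLU for Fc64, and the Rc
# reshape collapsing height x depth into the feature axis per time step.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(64, None, 1))                                 # None,64,None,1
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)   # Cr3,3,32
x = layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(x)               # Mp2,2,2,2
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)        # Cr3,3,64
x = layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2))(x)               # Mp2,2,2,2
x = layers.Reshape((-1, 16 * 64))(x)                                       # Rc (height 64 -> 16 after pooling, 64 channels)
x = layers.Dense(64, activation="relu")(x)                                 # Fc64
x = layers.Dropout(0.2)(x)                                                 # D20
x = layers.LSTM(128, return_sequences=True, go_backwards=True)(x)          # Lrs128 (reversed, returns sequences)
x = layers.Dropout(0.2)(x)                                                 # D20
x = layers.LSTM(64, return_sequences=True, go_backwards=True)(x)           # Lrs64
x = layers.Dropout(0.2)(x)                                                 # D20
outputs = layers.Dense(92, activation="softmax")(x)                        # O1s92

model = models.Model(inputs, outputs)
model.summary()
```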