Commit 4b3330b

committed: update week 5
1 parent 09cd3a8

File tree: 8 files changed (+254, -254 lines)


doc/pub/week5/html/week5-bs.html

Lines changed: 11 additions & 11 deletions
@@ -806,11 +806,11 @@ <h2 id="example-of-how-we-can-up-a-model-without-a-specific-image" class="anchor
 
 <p>The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.</p>
 
-<p>As input, a CNN takes tensors of shape (image_height, image_width,
-color_channels), ignoring the batch size. If you are new to these
-dimensions, color_channels refers to (R,G,B). In this example, you
+<p>As input, a CNN takes tensors of shape (image$\_$height, image$\_$width,
+color$\_$channels), ignoring the batch size. If you are new to these
+dimensions, color$\_$channels refers to (R,G,B). In this example, you
 will configure the CNN to process inputs of shape (32, 32, 3) as an
-example. You can do this by passing the argument input_shape to our
+example. You can do this by passing the argument input$\_$shape to our
 first layer.
 </p>
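The paragraph changed in this hunk describes the (image_height, image_width, color_channels) input convention and the (32, 32, 3) example. As a plain-Python sketch (not the Keras API itself; the filter counts and layer order are illustrative assumptions), the shape propagation through such a Conv2D/MaxPooling2D stack can be traced like this:

```python
# Sketch: trace an input of shape (32, 32, 3) through a stack of
# valid-padding 3x3 convolutions and 2x2 max-pooling layers, mirroring
# the common Conv2D/MaxPooling2D pattern described in the text.
# The specific filter counts (32, 64, 64) are illustrative assumptions.

def conv2d_shape(shape, filters, kernel=3, stride=1):
    """Output shape of a valid-padding convolution."""
    h, w, _ = shape
    return ((h - kernel) // stride + 1, (w - kernel) // stride + 1, filters)

def maxpool_shape(shape, pool=2):
    """Output shape of a non-overlapping pooling layer."""
    h, w, c = shape
    return (h // pool, w // pool, c)

shape = (32, 32, 3)               # (image_height, image_width, color_channels)
shape = conv2d_shape(shape, 32)   # -> (30, 30, 32)
shape = maxpool_shape(shape)      # -> (15, 15, 32)
shape = conv2d_shape(shape, 64)   # -> (13, 13, 64)
shape = maxpool_shape(shape)      # -> (6, 6, 64)
shape = conv2d_shape(shape, 64)   # -> (4, 4, 64)
print(shape)
```

Note how the batch size never appears: only the per-image shape is threaded through the stack.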

@@ -1344,10 +1344,10 @@ <h3 id="schedulers" class="anchor">Schedulers </h3>
 <p>The code below shows object oriented implementations of the Constant,
 Momentum, Adagrad, AdagradMomentum, RMS prop and Adam schedulers. All
 of the classes belong to the shared abstract Scheduler class, and
-share the update_change() and reset() methods allowing for any of the
+share the update$\_$change() and reset() methods allowing for any of the
 schedulers to be seamlessly used during the training stage, as will
 later be shown in the fit() method of the neural
-network. Update_change() only has one parameter, the gradient
+network. The function Update$\_$change() only has one parameter, the gradient
 (\( \delta^{l}_{j}a^{l-1}_k \)), and returns the change which will be
 subtracted from the weights. The reset() function takes no parameters,
 and resets the desired variables. For Constant and Momentum, reset
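The scheduler interface this hunk documents, a shared abstract class whose subclasses all expose update_change() and reset(), can be sketched in plain Python. The attribute names and the exact Momentum update rule below are assumptions for illustration, not the course repository's code:

```python
# Sketch of the shared Scheduler interface described above: each
# scheduler takes the gradient and returns the change to be subtracted
# from the weights.  Constant and Momentum are shown; internals are
# assumptions, not the course code.

class Scheduler:
    def __init__(self, eta):
        self.eta = eta  # learning rate

    def update_change(self, gradient):
        raise NotImplementedError

    def reset(self):
        # For Constant and Momentum there is no state worth resetting.
        pass

class Constant(Scheduler):
    def update_change(self, gradient):
        return self.eta * gradient

class Momentum(Scheduler):
    def __init__(self, eta, momentum):
        super().__init__(eta)
        self.momentum = momentum
        self.change = 0.0

    def update_change(self, gradient):
        # Accumulate an exponentially weighted running change.
        self.change = self.momentum * self.change + self.eta * gradient
        return self.change

# The returned change is subtracted from the weights:
sched = Momentum(eta=0.1, momentum=0.9)
w = 1.0
w -= sched.update_change(0.5)   # change = 0.05, so w becomes 0.95
```

Because every scheduler honors the same two methods, the fit() routine can swap one for another without any other change, which is the seamlessness the text refers to.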
@@ -4098,8 +4098,8 @@ <h3 id="usage-of-cnn-code" class="anchor">Usage of CNN code </h3>
 </div>
 
 <p>Now that we have our CNN object, we can begin to add layers to it!
-Many of the add_layer functions have default values, for example
-add_Convolution2DLayer() has a default v_stride and h_stride of
+Many of the add$\_$layer functions have default values, for example
+add$\_$Convolution2DLayer() has a default v$\_$stride and h$\_$stride of
 1. However, these can of course be set to any value you please. Note
 that the input channels of a subsequent convolutional layer must equal
 the previous convolutional layer's feature maps.
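This hunk's paragraph states two constraints: v_stride and h_stride default to 1, and a layer's input channels must equal the previous layer's feature maps. A minimal sketch of both, where the (maps, in_channels, kernel_h, kernel_w) layout is an illustrative assumption rather than the course code's actual one:

```python
# Sketch: default strides of 1 and the channel-matching rule described
# in the text.  The kernel-shape layout (maps, in_channels, kh, kw) is
# an assumption for illustration.

def conv_output_hw(h, w, kernel=3, v_stride=1, h_stride=1):
    """Spatial output size of a valid convolution; strides default to 1."""
    return ((h - kernel) // v_stride + 1, (w - kernel) // h_stride + 1)

# Layer 1: 3 input channels (R,G,B) producing 16 feature maps.
kernel_shape_1 = (16, 3, 3, 3)
# Layer 2 consumes those 16 maps, so its in_channels must be 16.
kernel_shape_2 = (32, 16, 3, 3)
assert kernel_shape_1[0] == kernel_shape_2[1]  # channels line up

print(conv_output_hw(32, 32))                          # default strides
print(conv_output_hw(32, 32, v_stride=2, h_stride=2))  # downsampling
```

With the defaults, a 3x3 valid convolution shrinks each spatial dimension by only 2; raising the strides shrinks it much faster.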
@@ -4257,11 +4257,11 @@ <h3 id="usage-of-cnn-code" class="anchor">Usage of CNN code </h3>
 <p>The codebase allows for great flexibility in CNN
 architectures. Pooling layers can be added before, inbetween or after
 convolutional layers, but due to the great optimizations made within
-Convolution2DLayerOPT, we recommend using the v_stride and h_stride
-parameters in add_Convolution2DLayer() to reduce the dimentionality of
+Convolution2DLayerOPT, we recommend using the v$\_$stride and h$\_$stride
+parameters in add$\_$Convolution2DLayer() to reduce the dimentionality of
 the problem as the pooling layer is slow in comparison. To use the
 unoptimized version of Convolution2DLayer, simply pass optimized=False
-as an argument in add_Convolution2DLayer().
+as an argument in add$\_$Convolution2DLayer().
 </p>
 
 <p>If one wishes to perform binary classification using the CNN, simply
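The recommendation in this hunk, preferring v_stride/h_stride over a separate pooling layer, rests on both routes giving comparable downsampling. A small sketch (plain Python, with 3x3 valid convolutions and 2x2 pooling as assumed settings) makes the equivalence concrete:

```python
# Sketch: the two downsampling routes discussed above.
# Route A: stride-1 convolution followed by a 2x2 pooling layer.
# Route B: a single convolution with v_stride = h_stride = 2.
# Kernel and pool sizes are illustrative assumptions.

def conv_hw(h, w, kernel=3, stride=1):
    return ((h - kernel) // stride + 1, (w - kernel) // stride + 1)

def pool_hw(h, w, pool=2):
    return (h // pool, w // pool)

h = w = 32
route_a = pool_hw(*conv_hw(h, w))    # conv -> (30, 30), pool -> (15, 15)
route_b = conv_hw(h, w, stride=2)    # strided conv -> (15, 15)
print(route_a, route_b)
```

Both routes land on the same spatial size here, but route B does it in one pass, which is why the strided, optimized convolution is preferred when the pooling layer is slow.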

doc/pub/week5/html/week5-reveal.html

Lines changed: 11 additions & 11 deletions

@@ -592,11 +592,11 @@ <h2 id="example-of-how-we-can-up-a-model-without-a-specific-image">Example of ho
 
 <p>The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.</p>
 
-<p>As input, a CNN takes tensors of shape (image_height, image_width,
-color_channels), ignoring the batch size. If you are new to these
-dimensions, color_channels refers to (R,G,B). In this example, you
+<p>As input, a CNN takes tensors of shape (image$\_$height, image$\_$width,
+color$\_$channels), ignoring the batch size. If you are new to these
+dimensions, color$\_$channels refers to (R,G,B). In this example, you
 will configure the CNN to process inputs of shape (32, 32, 3) as an
-example. You can do this by passing the argument input_shape to our
+example. You can do this by passing the argument input$\_$shape to our
 first layer.
 </p>
 
@@ -1134,10 +1134,10 @@ <h3 id="schedulers">Schedulers </h3>
 <p>The code below shows object oriented implementations of the Constant,
 Momentum, Adagrad, AdagradMomentum, RMS prop and Adam schedulers. All
 of the classes belong to the shared abstract Scheduler class, and
-share the update_change() and reset() methods allowing for any of the
+share the update$\_$change() and reset() methods allowing for any of the
 schedulers to be seamlessly used during the training stage, as will
 later be shown in the fit() method of the neural
-network. Update_change() only has one parameter, the gradient
+network. The function Update$\_$change() only has one parameter, the gradient
 (\( \delta^{l}_{j}a^{l-1}_k \)), and returns the change which will be
 subtracted from the weights. The reset() function takes no parameters,
 and resets the desired variables. For Constant and Momentum, reset
@@ -3908,8 +3908,8 @@ <h3 id="usage-of-cnn-code">Usage of CNN code </h3>
 </div>
 
 <p>Now that we have our CNN object, we can begin to add layers to it!
-Many of the add_layer functions have default values, for example
-add_Convolution2DLayer() has a default v_stride and h_stride of
+Many of the add$\_$layer functions have default values, for example
+add$\_$Convolution2DLayer() has a default v$\_$stride and h$\_$stride of
 1. However, these can of course be set to any value you please. Note
 that the input channels of a subsequent convolutional layer must equal
 the previous convolutional layer's feature maps.
@@ -4068,11 +4068,11 @@ <h3 id="usage-of-cnn-code">Usage of CNN code </h3>
 <p>The codebase allows for great flexibility in CNN
 architectures. Pooling layers can be added before, inbetween or after
 convolutional layers, but due to the great optimizations made within
-Convolution2DLayerOPT, we recommend using the v_stride and h_stride
-parameters in add_Convolution2DLayer() to reduce the dimentionality of
+Convolution2DLayerOPT, we recommend using the v$\_$stride and h$\_$stride
+parameters in add$\_$Convolution2DLayer() to reduce the dimentionality of
 the problem as the pooling layer is slow in comparison. To use the
 unoptimized version of Convolution2DLayer, simply pass optimized=False
-as an argument in add_Convolution2DLayer().
+as an argument in add$\_$Convolution2DLayer().
 </p>
 
 <p>If one wishes to perform binary classification using the CNN, simply

doc/pub/week5/html/week5-solarized.html

Lines changed: 11 additions & 11 deletions

@@ -714,11 +714,11 @@ <h2 id="example-of-how-we-can-up-a-model-without-a-specific-image">Example of ho
 
 <p>The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.</p>
 
-<p>As input, a CNN takes tensors of shape (image_height, image_width,
-color_channels), ignoring the batch size. If you are new to these
-dimensions, color_channels refers to (R,G,B). In this example, you
+<p>As input, a CNN takes tensors of shape (image$\_$height, image$\_$width,
+color$\_$channels), ignoring the batch size. If you are new to these
+dimensions, color$\_$channels refers to (R,G,B). In this example, you
 will configure the CNN to process inputs of shape (32, 32, 3) as an
-example. You can do this by passing the argument input_shape to our
+example. You can do this by passing the argument input$\_$shape to our
 first layer.
 </p>
 
@@ -1252,10 +1252,10 @@ <h3 id="schedulers">Schedulers </h3>
 <p>The code below shows object oriented implementations of the Constant,
 Momentum, Adagrad, AdagradMomentum, RMS prop and Adam schedulers. All
 of the classes belong to the shared abstract Scheduler class, and
-share the update_change() and reset() methods allowing for any of the
+share the update$\_$change() and reset() methods allowing for any of the
 schedulers to be seamlessly used during the training stage, as will
 later be shown in the fit() method of the neural
-network. Update_change() only has one parameter, the gradient
+network. The function Update$\_$change() only has one parameter, the gradient
 (\( \delta^{l}_{j}a^{l-1}_k \)), and returns the change which will be
 subtracted from the weights. The reset() function takes no parameters,
 and resets the desired variables. For Constant and Momentum, reset
@@ -4006,8 +4006,8 @@ <h3 id="usage-of-cnn-code">Usage of CNN code </h3>
 </div>
 
 <p>Now that we have our CNN object, we can begin to add layers to it!
-Many of the add_layer functions have default values, for example
-add_Convolution2DLayer() has a default v_stride and h_stride of
+Many of the add$\_$layer functions have default values, for example
+add$\_$Convolution2DLayer() has a default v$\_$stride and h$\_$stride of
 1. However, these can of course be set to any value you please. Note
 that the input channels of a subsequent convolutional layer must equal
 the previous convolutional layer's feature maps.
@@ -4165,11 +4165,11 @@ <h3 id="usage-of-cnn-code">Usage of CNN code </h3>
 <p>The codebase allows for great flexibility in CNN
 architectures. Pooling layers can be added before, inbetween or after
 convolutional layers, but due to the great optimizations made within
-Convolution2DLayerOPT, we recommend using the v_stride and h_stride
-parameters in add_Convolution2DLayer() to reduce the dimentionality of
+Convolution2DLayerOPT, we recommend using the v$\_$stride and h$\_$stride
+parameters in add$\_$Convolution2DLayer() to reduce the dimentionality of
 the problem as the pooling layer is slow in comparison. To use the
 unoptimized version of Convolution2DLayer, simply pass optimized=False
-as an argument in add_Convolution2DLayer().
+as an argument in add$\_$Convolution2DLayer().
 </p>
 
 <p>If one wishes to perform binary classification using the CNN, simply

doc/pub/week5/html/week5.html

Lines changed: 11 additions & 11 deletions

@@ -791,11 +791,11 @@ <h2 id="example-of-how-we-can-up-a-model-without-a-specific-image">Example of ho
 
 <p>The 6 lines of code below define the convolutional base using a common pattern: a stack of Conv2D and MaxPooling2D layers.</p>
 
-<p>As input, a CNN takes tensors of shape (image_height, image_width,
-color_channels), ignoring the batch size. If you are new to these
-dimensions, color_channels refers to (R,G,B). In this example, you
+<p>As input, a CNN takes tensors of shape (image$\_$height, image$\_$width,
+color$\_$channels), ignoring the batch size. If you are new to these
+dimensions, color$\_$channels refers to (R,G,B). In this example, you
 will configure the CNN to process inputs of shape (32, 32, 3) as an
-example. You can do this by passing the argument input_shape to our
+example. You can do this by passing the argument input$\_$shape to our
 first layer.
 </p>
 
@@ -1329,10 +1329,10 @@ <h3 id="schedulers">Schedulers </h3>
 <p>The code below shows object oriented implementations of the Constant,
 Momentum, Adagrad, AdagradMomentum, RMS prop and Adam schedulers. All
 of the classes belong to the shared abstract Scheduler class, and
-share the update_change() and reset() methods allowing for any of the
+share the update$\_$change() and reset() methods allowing for any of the
 schedulers to be seamlessly used during the training stage, as will
 later be shown in the fit() method of the neural
-network. Update_change() only has one parameter, the gradient
+network. The function Update$\_$change() only has one parameter, the gradient
 (\( \delta^{l}_{j}a^{l-1}_k \)), and returns the change which will be
 subtracted from the weights. The reset() function takes no parameters,
 and resets the desired variables. For Constant and Momentum, reset
@@ -4083,8 +4083,8 @@ <h3 id="usage-of-cnn-code">Usage of CNN code </h3>
 </div>
 
 <p>Now that we have our CNN object, we can begin to add layers to it!
-Many of the add_layer functions have default values, for example
-add_Convolution2DLayer() has a default v_stride and h_stride of
+Many of the add$\_$layer functions have default values, for example
+add$\_$Convolution2DLayer() has a default v$\_$stride and h$\_$stride of
 1. However, these can of course be set to any value you please. Note
 that the input channels of a subsequent convolutional layer must equal
 the previous convolutional layer's feature maps.
@@ -4242,11 +4242,11 @@ <h3 id="usage-of-cnn-code">Usage of CNN code </h3>
 <p>The codebase allows for great flexibility in CNN
 architectures. Pooling layers can be added before, inbetween or after
 convolutional layers, but due to the great optimizations made within
-Convolution2DLayerOPT, we recommend using the v_stride and h_stride
-parameters in add_Convolution2DLayer() to reduce the dimentionality of
+Convolution2DLayerOPT, we recommend using the v$\_$stride and h$\_$stride
+parameters in add$\_$Convolution2DLayer() to reduce the dimentionality of
 the problem as the pooling layer is slow in comparison. To use the
 unoptimized version of Convolution2DLayer, simply pass optimized=False
-as an argument in add_Convolution2DLayer().
+as an argument in add$\_$Convolution2DLayer().
 </p>
 
 <p>If one wishes to perform binary classification using the CNN, simply
Binary file not shown (0 Bytes).

0 commit comments
