TABLE 2.

CNN Architecture, Activations, and Parameters

| Layer | Name | Activations | Parameters |
|---|---|---|---|
| 1 | Tensor input layer | [725,725,3] | |
| 2 | 2D convolution layer | [239,239,64] | Weights [11,11,3,64], bias [1,1,64] |
| 3 | Batch normalization | [239,239,64] | Offset and scale [1,1,64] |
| 4 | ReLU layer | [239,239,64] | |
| 5 | Max pooling layer | [119,119,64] | Size [3,3], stride [2,2], padding [0,0,0,0] |
| 6 | 2D convolution layer | [40,40,128] | Weights [5,5,64,128], bias [1,1,128] |
| 7 | Batch normalization | [40,40,128] | Offset and scale [1,1,128] |
| 8 | ReLU layer | [40,40,128] | |
| 9 | Max pooling layer | [19,19,128] | Size [3,3], stride [2,2], padding [0,0,0,0] |
| 10 | 2D convolution layer | [19,19,256] | Weights [3,3,128,256], bias [1,1,256] |
| 11 | Batch normalization | [19,19,256] | Offset and scale [1,1,256] |
| 12 | ReLU layer | [19,19,256] | |
| 13 | Max pooling layer | [9,9,256] | Size [3,3], stride [2,2], padding [0,0,0,0] |
| 14 | 2D convolution layer | [9,9,192] | Weights [3,3,256,192], bias [1,1,192] |
| 15 | Batch normalization | [9,9,192] | Offset and scale [1,1,192] |
| 16 | ReLU layer | [9,9,192] | |
| 17 | Max pooling layer | [4,4,192] | Size [3,3], stride [2,2], padding [0,0,0,0] |
| 18 | 2D convolution layer | [4,4,192] | Weights [3,3,192,192], bias [1,1,192] |
| 19 | Batch normalization | [4,4,192] | Offset and scale [1,1,192] |
| 20 | ReLU layer | [4,4,192] | |
| 21 | Max pooling layer | [1,1,192] | Size [3,3], stride [2,2], padding [0,0,0,0] |
| 22 | Fully connected layer | [1,1,192] | Weights [192,192], bias [192,1] |
| 23 | ReLU layer | [1,1,192] | |
| 24 | Dropout layer | [1,1,192] | Probability 0.5 |
| 25 | Fully connected layer | [1,1,86] | Weights [86,192], bias [86,1] |
| 26 | ReLU layer | [1,1,86] | |
| 27 | Dropout layer | [1,1,86] | Probability 0.5 |
| 28 | Fully connected layer | [1,1,2] | Weights [2,86], bias [2,1] |
| 29 | Softmax layer | [1,1,2] | |
| 30 | Classification layer | | Cross-entropy loss function |
  • 2D = 2-dimensional; ReLU = rectified linear unit.
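For readers who want to reproduce the network, the following is a minimal PyTorch sketch of the layer stack in Table 2. The convolution strides and paddings are assumptions inferred from the listed activation shapes (the table states them explicitly only for the pooling layers), the class name TableCNN is illustrative, and the final softmax and cross-entropy layers are folded into PyTorch's CrossEntropyLoss during training; the framework used in the source may differ.

```python
import torch
import torch.nn as nn

class TableCNN(nn.Module):
    """Sketch of the Table 2 architecture; conv strides/paddings are inferred."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=3),             # [725,725,3] -> [239,239,64]
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # -> [119,119,64]
            nn.Conv2d(64, 128, kernel_size=5, stride=3, padding=2), # -> [40,40,128] (stride/padding assumed)
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # -> [19,19,128]
            nn.Conv2d(128, 256, kernel_size=3, padding=1),          # -> [19,19,256]
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # -> [9,9,256]
            nn.Conv2d(256, 192, kernel_size=3, padding=1),          # -> [9,9,192]
            nn.BatchNorm2d(192),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # -> [4,4,192]
            nn.Conv2d(192, 192, kernel_size=3, padding=1),          # -> [4,4,192]
            nn.BatchNorm2d(192),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),                  # -> [1,1,192]
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),          # [1,1,192] -> 192 features
            nn.Linear(192, 192),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(192, 86),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(86, 2),      # logits; softmax + cross-entropy live in the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Shape check against the table's input size
model = TableCNN()
logits = model(torch.randn(1, 3, 725, 725))
print(logits.shape)  # torch.Size([1, 2])
```

Running the shape check confirms that the assumed strides and paddings reproduce every activation size listed in the table, from the [725,725,3] input down to the two-class output.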