DLPrimitives
Network JSON Format

The JSON network format consists of three fields: inputs, outputs and operators. Each of them is an array.
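
At the top level the structure therefore looks like this, with each array described in its own section below (the "..." are placeholders):

{
    "inputs":    [ ... ],
    "outputs":   [ ... ],
    "operators": [ ... ]
}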

Inputs

Each input is an object that carries the following fields: shape, name and an optional dtype.

For example:

"inputs": [
    {
        "shape": [ 64, 1, 28, 28 ],
        "name":  "data"
    },
    {
        "shape": [ 64 ],
        "name":  "label",
        "dtype": "int"
    }
],

Outputs

Each output is either a string with the name of the output tensor, or an object with a name field holding the tensor name and a loss_weight field holding the weight of the loss.

Note: if a tensor name starts with loss, it is considered a loss and back-propagation starts from it; the default loss weight is 1.0. If loss_weight is provided, the tensor is also considered a loss and participates in back-propagation.

For example:

"outputs" : [ 
    "prob", 
    { "name" : "cross_entropy_loss", "loss_weight": 1.0 }
]

Operators

Operators is the list of operators executed during forward and backward propagation, in the order provided. They create edges between nodes. The graph must be acyclic, with the exception of in-place operators (i.e. an operator may have a self edge).

Each operator has the following fields: name, type, inputs, outputs and an optional options object.

For example:

{
    "name": "fc2",
    "type": "InnerProduct",
    "inputs": ["fc1"],
    "outputs": ["fc2"],
    "options": {"outputs": 10}
}

Supported Operators and Options

SoftmaxWithLoss

No parameters at this point

NLLLoss

Negative Log Likelihood Loss. Expects log-probabilities as input.

Parameters:

MSELoss

Mean Squared Error loss. Expects two inputs of identical shape.
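
For instance, a minimal sketch of an MSELoss operator; the tensor names res and target are only illustrative, and the output is named loss so that it takes part in back-propagation:

{
    "name": "loss",
    "type": "MSELoss",
    "inputs": [ "res", "target" ],
    "outputs": [ "loss" ]
}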

Parameters:

Softmax

Parameters:

Activation

Parameters:
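
As a sketch, assuming the activation function is selected by an activation option holding one of the standard activation names listed below (the same key used inside the InnerProduct options of the MNIST example); tensor names are illustrative:

{
    "name": "act1",
    "type": "Activation",
    "inputs": [ "fc1" ],
    "outputs": [ "act1" ],
    "options": { "activation": "relu" }
}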

Elementwise

Parameters:

Pooling2D

Parameters

Note: kernel, stride and pad can each be either a single number for symmetric values or a pair of integers, the first for the height dimension and the second for the width.
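
For instance, a sketch using the symmetric single-number form for the geometry options; any other options (such as the pooling mode) are omitted here and should be taken from the parameter list, and tensor names are illustrative:

{
    "name": "pool1",
    "type": "Pooling2D",
    "inputs": [ "conv1" ],
    "outputs": [ "pool1" ],
    "options": {
        "kernel": 2,
        "stride": 2,
        "pad": 0
    }
}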

GlobalPooling

Parameters

InnerProduct

Parameters

Convolution2D

Parameters:

Note: kernel, stride, dilate and pad can each be either a single number for symmetric values or a pair of integers, the first for the height dimension and the second for the width.
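
For instance, a sketch using the pair form for an asymmetric kernel and padding; the options selecting the number of output channels and the like are omitted here and should be taken from the parameter list, and tensor names are illustrative:

{
    "name": "conv1",
    "type": "Convolution2D",
    "inputs": [ "data" ],
    "outputs": [ "conv1" ],
    "options": {
        "kernel": [ 5, 3 ],
        "stride": 1,
        "dilate": 1,
        "pad": [ 2, 1 ]
    }
}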

TransposedConvolution2D

Parameters:

Note: kernel, stride, dilate, pad and output_pad can each be either a single number for symmetric values or a pair of integers, the first for the height dimension and the second for the width.

BatchNorm

Parameters

Concat

Parameters

Slice

Parameters

For example: { "begin":1, "end":2","dim":1 } - slice green channel

Flatten

Flattens the shape to [batch, features]. No parameters.

Squeeze

Squeezes the shape by removing dimensions of size 1.

Reshape

Reshapes the tensor

Threshold

Computes x > threshold ? 1 : 0.

Parameters

Hardtanh

Computes max(min_val, min(max_val, x)).

Parameters
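
The min_val and max_val names come directly from the formula above; the values in this sketch are only illustrative, and the tensor names are made up:

{
    "name": "htanh1",
    "type": "Hardtanh",
    "inputs": [ "fc1" ],
    "outputs": [ "htanh1" ],
    "options": { "min_val": -1.0, "max_val": 1.0 }
}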

Standard Activations

The following are the standard activation names: relu, sigmoid, tanh, relu6, identity.

Example MNIST MLP Network

This is an example of a simple MLP network for MNIST training:

{
    "inputs": [
        { 
            "shape": [ 64,1,28,28 ],
            "name": "data"
        },
        {
            "shape": [64],
            "name": "label",
            "dtype" : "int"
        }
    ],
    "outputs" : [ 
        "prob", "loss" 
    ],
    "operators": [
        {
            "name": "fc1",
            "type": "InnerProduct",
            "inputs": [ "data" ],
            "outputs": [ "fc1" ],
            "options": {
                "outputs": 256,
                "activation": "relu"
            }
        },
        {
            "name": "fc2",
            "type": "InnerProduct",
            "inputs": ["fc1" ],
            "outputs": [ "fc2" ],
            "options": {
                "outputs": 10
            }
        },
        {
            "name": "prob",
            "type": "Softmax",
            "inputs": ["fc2"],
            "outputs": ["prob"]
        },
        {
            "name": "loss",
            "type": "SoftmaxWithLoss",
            "inputs": ["fc2","label"],
            "outputs": ["loss"]
        }
    ]
}