Troubleshooting Neural Network Conversion Errors

Application note description

This application note describes some common errors that can occur when converting neural network files and provides a list of supported layers.

Related articles

Preparing for use

Before you use your camera, we recommend that you review the following resources:

  • Camera Reference for the camera—HTML document containing specifications, EMVA imaging, installation guide, and technical reference for the camera model. Replace <PART-NUMBER> with your model's part number: <PART-NUMBER>/latest/Model/Readme.html
  • Getting Started Manual for the camera—provides information on installing components and software needed to run the camera.
  • Technical Reference for the camera—provides information on the camera’s specifications, features and operations, as well as imaging and acquisition controls.
  • Firmware updates—ensure you are using the most up-to-date firmware for the camera to take advantage of improvements and fixes.
  • Tech Insights—Subscribe to our bi-monthly email updates containing information on new knowledge base articles, new firmware and software releases, and Product Change Notices (PCN).

Common Errors

Whether you use the FLIR NeuroUtility (Windows) or mvNCCompile (Linux) to convert your inference network files, here are some common errors and ways to fix them:

Toolkit Error: Stage Details Not Supported

This error can occur if at least one of the layers being used in the network is unsupported.

  1. Find the named layer in your network file and note its type (the error message gives the layer name).
  2. Check the list of supported layer types (listed at the end of this application note).

This error can also mean that not all training code or placeholders were removed before the final conversion.
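The layer-type check in step 2 can be scripted. The sketch below extracts every `type:` field from a Caffe-style text prototxt and flags anything outside a supported set; the prototxt snippet and the abbreviated supported set are hypothetical examples, so substitute your own file and the full list at the end of this note:

```python
import re

# Hypothetical snippet of a deploy.prototxt; in practice, read your file:
# prototxt = open("deploy.prototxt").read()
prototxt = """
layer { name: "data"  type: "Input" }
layer { name: "conv1" type: "Convolution" }
layer { name: "crop1" type: "Crop" }
"""

# Abbreviated, hypothetical supported set -- use the full list of
# supported layers at the end of this application note.
supported = {"Input", "Convolution", "Pooling", "ReLU", "Softmax"}

# Collect every quoted type: field and report the ones not in the set.
types_used = re.findall(r'type:\s*"([^"]+)"', prototxt)
unsupported = sorted(set(types_used) - supported)
print(unsupported)  # -> ['Crop']
```

Any layer type reported here must be removed or replaced before the conversion can succeed.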

Toolkit error: Provided OutputNode/InputNode name does not exist or does not match with one contained in caffemodel file provided

This error can occur when at least one of the node names provided is incorrect. This can be as simple as having an incorrect capitalization or spelling, or the wrong node name entirely.
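To rule out a capitalization or spelling mistake, you can list the exact node names the model file contains and compare them, case-sensitively, against the input/output node names you pass to the converter. A minimal sketch for a Caffe-style text prototxt (the file contents shown are hypothetical):

```python
import re

# Hypothetical prototxt contents; in practice: open("deploy.prototxt").read()
prototxt = """
layer { name: "data" type: "Input" }
layer { name: "fc8"  type: "InnerProduct" }
layer { name: "prob" type: "Softmax" }
"""

# Every quoted name: field is a candidate node name. Compare these
# character for character against the names you give the conversion tool.
names = re.findall(r'name:\s*"([^"]+)"', prototxt)
print(names)  # -> ['data', 'fc8', 'prob']
```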

Toolkit Error: Parser not supported

This error can occur when an incorrect file location is provided, for example the path to the inference network file.

Setup Error: Not enough resources on Myriad to process this network

This error can occur when there is not enough memory on the Myriad for the number of layers in the inference network file. To resolve this:

  • Reduce the number of layers, or
  • Reduce the number of channels per layer

Image Injection

What is image injection?

Image injection is when you upload your own image to the camera. The camera then streams the uploaded image repeatedly, once per frame, and also runs inference on the uploaded image.

Why would I do this?

This can be an extremely helpful debugging tool if Firefly inference results are incorrect or unexpected. There are myriad reasons why this could happen, and image injection can help narrow down the cause of the discrepancy. The idea is to inject an image with a known class and a known confidence, then compare the known values to the values that Firefly outputs for this image.

For example, assume that I have trained a classification network to classify between cats and dogs. I convert this network to the Firefly format and upload it to the camera; however, when I point the camera at real cats and dogs, I get unexpected inference results. I can then take a picture of a cat from my training set and inject it into the camera. I know that on my host machine, the network predicts that this is a cat with 82% confidence. When I inject the image into the camera, I see that the inference result is "cat" with an 80% confidence. This tells me that the network was converted properly and something else is causing my inference results to be incorrect (for example, inconsistent lighting between the training images and the deployment environment).

How can I do this?

  1. The injected image needs to be in a 10-bit raw pixel format. Convert the source raw 8-bit image to a 10-bit pixel format using your own code. Here's an overview of how this can be done:
  2. For mono Images
    • Read in the desired image into an array of 8-bit pixels. This can be done with an image processing library such as OpenCV and its imread function. The input image can be of any file type but should have a bit depth of 8 bits.
    • Convert the image to a one channel mono (grayscale) image.
    • Create a new 16-bit array for the new 10-bit image.
    • For every pixel in the 8-bit image, copy it into the new 16-bit array and left bit shift it twice (that is, pixels_16bit[i] = pixels_8bit[i] << 2).
    • Save the new image from the 16-bit array as a .raw file.
  3. For Color Images
    1. Due to the complexity of injecting a color image (for a color variant Firefly camera), we have provided a complete Python example of how to do so, available here.
  4. Upload the image to the camera. Go to the File Access tab in SpinView, select Injected Image, and click Upload.
  5. Set the injected image width and height. Go to the Features tab in SpinView, search for "Injected", and set the Injected Image Width and Height nodes to the dimensions of the original input image.
  6. Stream the injected image. Set the Test Pattern Generator Selector to Pipeline Start and the Test Pattern to Injected Image.
  7. Turn on inference and begin acquisition.
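The mono conversion in step 2 can be sketched with only the Python standard library. In practice you would fill pixels_8bit from an image library such as OpenCV (cv2.imread with cv2.IMREAD_GRAYSCALE); the sample pixel values, output file name, and little-endian byte order below are assumptions, so check what your camera expects:

```python
import struct

# Hypothetical 8-bit mono pixels, flattened row-major; in practice this
# comes from your image library (e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE)).
pixels_8bit = [0, 1, 127, 128, 255]

# Copy each pixel into a 16-bit value and left bit shift it twice,
# mapping the 8-bit range 0-255 onto the 10-bit range 0-1020.
pixels_16bit = [p << 2 for p in pixels_8bit]

# Pack as 16-bit words (little-endian assumed) and write a headerless .raw file.
raw_bytes = struct.pack("<%dH" % len(pixels_16bit), *pixels_16bit)
with open("injected_image.raw", "wb") as f:
    f.write(raw_bytes)
```

The resulting .raw file is what you upload in step 4; remember to set the injected image width and height nodes to the dimensions of the original image.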

List of Supported Layers

The following supported layers are separated by what they have been tested on: TensorFlow, Classification, and Caffe. Where appropriate, we have noted any restrictions we have found when using a particular layer type.


  • Depth Convolution
    Restrictions: Input/output channel dimensions must match
  • Dilated Convolution
  • Max Pooling: Radix NxM with Stride S
    Note: 2x2 and 3x3 are optimized
  • Average Pooling: Radix NxM with Stride S, Global average pooling
    Note: 3x3 and 7x7 are optimized
  • Local Response Normalization
  • Batch Normalization (fused)
  • L2 Normalization
  • Input Layer
  • Fully Connected Layers (limited support)
  • ReLU-X, Leaky-ReLU
    Restrictions: Input/output storage order must be the same

    Restrictions: Input stride must be (in channels x 2)
  • ElmWise unit: supported operations - sum, prod, max
    Restrictions: Input/output must have the same storage order; input/output tensors must be the same size; only channel minor (YXZ) and interleaved (YZX) storage orders supported
  • Regular Convolution: 1x1s1, 3x3s1, 5x5s1, 7x7s1, 7x7s2, 7x7s4
    Restrictions (not applicable to 1x1s1):
      Width and height must be >= 8 pixels
      Output channels must be >= 8
      Input channels must be < K_MAX
  • Group Convolution: <1024 groups total

Lastly, TensorFlow object detection is unsupported on the Firefly-DL. To run object detection inference, you must use a supported Caffe network.
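As an illustration of the Regular Convolution restrictions above, a hypothetical helper can pre-check a layer's configuration before conversion. The function name and the K_MAX value are assumptions (this note does not state K_MAX); substitute the limit documented for your toolkit version:

```python
# Hypothetical pre-flight check for a Regular Convolution layer.
# kernel is (kh, kw); k_max is assumed -- use your toolkit's documented K_MAX.
def conv_is_supported(kernel, stride, width, height, in_ch, out_ch, k_max):
    # Kernel/stride combinations listed above: 1x1s1, 3x3s1, 5x5s1,
    # 7x7s1, 7x7s2, 7x7s4.
    supported_strides = {(1, 1): {1}, (3, 3): {1}, (5, 5): {1}, (7, 7): {1, 2, 4}}
    if kernel not in supported_strides or stride not in supported_strides[kernel]:
        return False
    if kernel == (1, 1):  # the restrictions below do not apply to 1x1s1
        return True
    # Width/height >= 8 pixels, output channels >= 8, input channels < K_MAX.
    return width >= 8 and height >= 8 and out_ch >= 8 and in_ch < k_max

print(conv_is_supported((3, 3), 1, 224, 224, 3, 64, k_max=512))  # True
print(conv_is_supported((3, 3), 1, 4, 4, 3, 64, k_max=512))      # False
```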