Gravio also supports software sensors. An example of a software sensor is a camera that detects how many people are in a room and triggers certain Actions when certain thresholds are reached. These sensors are built in software using artificial intelligence, with TensorFlow-based inference files.

Creating a Software Sensor

Adding a Camera

First you need to add an ONVIF-compatible camera to the network. Gravio supports the ONVIF standard (the camera needs to provide Profile T), which is very common in security camera settings. To add the camera, find it in your Devices window:

You will see the ONVIF-compatible camera(s) on your network. Note that Gravio can only detect ONVIF cameras that adhere to the ONVIF discovery standard. If you don’t see your camera, check your network or camera settings (a quick way to test discovery from a script is shown below):
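
If you are unsure whether your camera answers discovery requests at all, you can send a WS-Discovery probe yourself. The following Python sketch uses only the standard library and the standard WS-Discovery multicast address (239.255.255.250, port 3702). It is a diagnostic aid only and is not how Gravio itself discovers devices; any response printed simply means the device at that address is answering WS-Discovery.

    import socket
    import uuid

    # Minimal WS-Discovery probe for ONVIF network video transmitters.
    # Run this on the same network segment as the camera.
    probe = f"""<?xml version="1.0" encoding="UTF-8"?>
    <e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
                xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
                xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
                xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
      <e:Header>
        <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
        <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
        <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
      </e:Header>
      <e:Body>
        <d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe>
      </e:Body>
    </e:Envelope>"""

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    # 239.255.255.250:3702 is the standard WS-Discovery multicast address/port.
    sock.sendto(probe.encode("utf-8"), ("239.255.255.250", 3702))

    try:
        while True:
            _, addr = sock.recvfrom(65535)
            print(f"Discovery response from {addr[0]}")
    except socket.timeout:
        print("No more responses.")
    finally:
        sock.close()

If no responses arrive, the usual causes are a network issue (the camera is on a different subnet, or multicast is blocked) or discovery being disabled in the camera’s own settings.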

In this case we connect to an IODATA TS-NA220 and to a Sony IPELA camera.

On the following screen you can set the camera’s parameters:

  • The Device ID is fixed and is used to identify all information associated with this camera. It typically consists of the word Camera followed by the IP address.
  • Choose a display name that makes sense for your setup.
  • Pick either Picture or Video, depending on what you would like to capture. We recommend Picture, as it uses less space.
  • The interval sets how often a picture should be taken. Again, the less often you capture, the less space you need on your hard drive (a rough storage estimate follows this list).
  • Both of the above items are relevant if you decide to save the picture. The Save image tickbox determines whether the picture is kept after it has been processed. Leave it unticked to save space, or tick it to verify that the recognition is doing the right thing.
  • If you save the picture, Gravio can also draw a box around the detected area. For example, if you are detecting people, it will draw a frame around each detected person, which helps you verify that the visual recognition model works correctly. Here you can also see where the images are stored. Note that stored images can fill up your hard disk very quickly; you can go to “Settings” and, under “Disk Management”, set the media files to auto-delete after a suitable number of days.
  • The username and password are the authentication details used to access your camera.
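
As a rough guide for choosing the interval and retention settings, here is a quick back-of-the-envelope calculation. All the values are assumptions (in particular the 500 KB average frame size); check the size of the images your own camera actually produces.

    # Rough storage estimate for saved camera images (all values are assumptions).
    AVG_IMAGE_KB = 500        # average size of one saved frame
    INTERVAL_SECONDS = 10     # capture interval configured in Gravio
    RETENTION_DAYS = 7        # auto-delete period set under Disk Management

    images_per_day = 24 * 60 * 60 / INTERVAL_SECONDS      # 8,640 images per day
    mb_per_day = images_per_day * AVG_IMAGE_KB / 1024     # ~4,219 MB per day
    gb_total = mb_per_day * RETENTION_DAYS / 1024         # ~28.8 GB over 7 days
    print(f"~{mb_per_day:,.0f} MB per day, ~{gb_total:,.1f} GB over {RETENTION_DAYS} days")

Doubling the interval halves the storage requirement, so a longer interval plus a shorter auto-delete period is the easiest way to keep disk usage under control.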

Adding an Image Ingestion Folder

You can also define a folder on your computer into which you can drop image files. Gravio automatically detects when an image has been added to that folder and treats it like an image from a camera sensor (a small sketch for dropping files into the folder from a script follows the two options below).

There are two ways to use this:

1. You can run the ingested image through an AI model, in which case you have to deploy the AI model first. Then assign the AI model to an Area and Layer, and bind it to the newly created image ingestion folder. Images added to the folder are ingested immediately and run through the AI model; the results of the model’s detection appear in the Data Viewer.

If you choose to save the image and/or draw a frame around detections, the output is stored in your regular video output folder in HubKit/mediadata/

2. The second option is to trigger an Action with the dropped image, in which case tv.Data contains the path to the ingested file. You then use a “File Read” component to read the file and process it in the subsequent components via cv.Payload.
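
When dropping images into the ingestion folder from a script or another system, it helps to write them in a way that avoids Gravio seeing a half-written file. Here is a minimal sketch, assuming a hypothetical ingestion folder path: copy the image under a temporary name first, then rename it, since a rename on the same filesystem is atomic.

    import shutil
    import uuid
    from pathlib import Path

    # Hypothetical paths -- replace them with your own source image and the
    # ingestion folder you configured in Gravio.
    source = Path("snapshot.jpg")
    ingest_dir = Path("/path/to/gravio/ingest")

    # Copy under a temporary name first, then rename to the final image name.
    # The rename is atomic on the same filesystem, so the folder watcher never
    # picks up a partially written file.
    tmp_path = ingest_dir / f".{uuid.uuid4().hex}.tmp"
    shutil.copyfile(source, tmp_path)
    tmp_path.rename(ingest_dir / source.name)
    print(f"Dropped {source.name} into {ingest_dir}")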

Adding the AI/Image Recognition Model

Now that you have added the camera, you can add the image recognition models that should process the camera feed. You can find the available models in the Settings tab of Gravio Studio, in the “Image Inference Models” section:

On the right-hand side, you will see all the available models. Make sure Gravio has access to the internet, then press the “Deploy” button to the right of the model you require. This downloads the necessary model files and makes them available for the final setup. Please note that each model file can easily reach 200-300 megabytes, so be aware of this when downloading and storing models.

You can also create your own models; however, this is more advanced and needs a bit more bespoke work. Please don’t hesitate to get in touch with us in our Slack channel if you are interested in doing this: Join Gravio Slack
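
If you want a feel for what an inference file does before building your own, the sketch below runs a generic TensorFlow Lite detection model against a single test image. It is purely illustrative: the model and image paths are hypothetical, the output layout depends on the model you use, and it says nothing about how Gravio packages or loads custom models.

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Hypothetical paths -- substitute your own .tflite model and test image.
    MODEL_PATH = "person_detector.tflite"
    IMAGE_PATH = "test.jpg"

    interpreter = tf.lite.Interpreter(model_path=MODEL_PATH)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Resize the test image to the model's expected input shape [1, H, W, 3].
    _, height, width, _ = input_details[0]["shape"]
    image = Image.open(IMAGE_PATH).convert("RGB").resize((width, height))
    data = np.expand_dims(np.array(image, dtype=np.uint8), axis=0)
    if input_details[0]["dtype"] == np.float32:
        data = data.astype(np.float32) / 255.0  # some models expect normalised floats

    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()

    # Print the raw output tensors; for common detection models these are
    # bounding boxes, class indices, scores and a detection count.
    for out in output_details:
        print(out["name"], interpreter.get_tensor(out["index"]).shape)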

Adding the Layer and Connecting the Device and Trigger

Now that you have installed and connected the camera and deployed the AI model, you have a new layer available that you can add in the Area section of the Gravio Devices tab. Just add the layer as you would any other layer and pick the one named after the AI model. It will then behave like any other hardware sensor.

Add a layer to the area:

Give it a meaningful name. You now have a new Sensing Device Type based on the AI model; choose it and save:

Pick the camera feed you want to use with this model. Camera devices appear here automatically if they use the ONVIF standard and its discovery protocol (see above):

Enable the layer:

Now let’s add a trigger to the software sensor. Open the Trigger tab and create a new trigger. Give it a sensible name and pick the Area that contains the software sensor layer:

Under “Conditions” select the layer and the camera:

Then set the condition under which the trigger should fire:

And the Action it should execute:

Don’t forget to enable the trigger.

You are now ready to go: you have created a camera vision layer and a trigger. You can find more information about creating triggers in the Trigger Tab section.
You can find out more about how to set up an ONVIF camera by watching this video: https://www.youtube.com/watch?v=8dkSIz_5810
You can find out where the images are stored here.

Need more help with this?
Join our Slack community for help
