Color Calibration
- The goal is to make the important colours (currently green, white and background) as distinguishable as possible. A good camera calibration makes colour calibration (next step) much easier.
- Exposure - how long the camera sensor collects light for each frame. Higher values give a brighter image, but cause motion blur when the robot moves its head. The trade-off is between high exposure giving motion blur and low exposure giving a dark image. Aim to set this value as high as possible without causing significant motion blur.
- Gain - artificially boosts the brightness of an image, but introduces noise and artifacts in the process. Use this to compensate for a lower-than-ideal exposure. Usually the noise is not significant enough to worry about, even at max value, so crank it as high as you need. The added brightness helps make up for the low exposure.
- Saturation - how much colour is in the image. Low values give black and white images, high values give really intense colours that tend to blend together. Scrolling along these values and watching the colours change is the easiest way to find a good middle ground. (There should be a point (~200) where the ball changes to a much darker red, but the orange jerseys do not. Use this setting to maximise the difference between ball and jersey.)
- White balance - affects the warmth of white. Low white balance shows white as a blueish colour, whereas high white balance shows white as a more orange colour. Don't stress about this too much. Note: values are currently very hard to tune in offnao; the Support Tools Team must fix this. Turn off any auto settings.
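As a rough rule of thumb (an assumption for illustration, not a measured property of the Nao camera), image brightness scales approximately linearly with both exposure time and gain, so a drop in exposure can be compensated by a proportional increase in gain, at the cost of noise. A minimal sketch:

```python
def compensating_gain(old_exposure, old_gain, new_exposure):
    """Gain needed to keep brightness roughly constant after changing
    exposure, under the simple model brightness ~ exposure * gain.
    This is an illustrative rule of thumb, not the camera's actual
    response curve."""
    return old_gain * old_exposure / new_exposure

# Halving exposure (to cut motion blur) roughly doubles the gain needed.
print(compensating_gain(old_exposure=200, old_gain=1.0, new_exposure=100))
```

This is why the advice above pairs "set exposure as high as motion blur allows" with "crank gain as needed": the two settings trade off against each other.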
- Connect the robot to your computer directly using an Ethernet cable. It is hard to get vision data over the wireless.
- Start runswift on the robot and offnao on your computer.
- Connect to the robot by going File -> Connect to Nao.
- Make sure Raw Vision is ticked, and enter the IP address (192.168.XXX.XXX) of the robot in the menu. Press Enter.
- Click record at the bottom. You should now see some vision information and images in the vision tab.
- Manually move the robot around the field, facing both on and off the field in several directions. The aim is to cover all possible orientations the robot could be in during a game. Good points are facing inwards from all four corners of the field, and a 360-degree circle standing in the centre circle looking out.
- Save the raw images by going File -> Save File, saving as an .ofn file. The .ofn file can be reopened from the menu bar.
When selecting a pixel through the colour calibration system, it adds that pixel and a radius of nearby values in colour space to the colour you are selecting. With a Gaussian filter this radius is smaller; without it a larger radius is selected.
The colour calibration defines how real-world colours appear in the camera image. It is therefore important to do the colour calibration based on the camera's image, not what's in the real world.
It tells the robot what colours actually look like in the image: "this is white, this is green", etc.
- Go to the Calibration tab. You should see an image there.
- If you are starting a new calibration from scratch, click on "New Kernel".
- You can set what colours are shown in the overlay from the menu above.
- With a colour selected, start clicking the regions you want to assign. It should assign that colour and other colours similar to it.
- If you want the selection to pick up fewer colours, you can select Auto-weight Gaussians.
- To calibrate in further detail, scroll up and down on the image to zoom in and out.
- To save the file, click Save. This saves the colour information, which is pushed to the robot on the next nao_sync.
- [Green] Field
- Differentiate field and background
- Green should be underclassified so that there is almost no green above the field, but the field should be mostly green
- Background on the field is ok, provided it's mainly green
- In the calibration tab, tick "vision module" to display detected field edge points. These points should mostly lie on the field edge. The field edge line can be found in the Vision tab.
- [Orange] Ball + [Light Orange] Home Jersey
- Orange should be classified so that there is none on any jersey, none meaning absolutely zero
- The ball needs to have some orange on it, but doesn't need to be entirely orange
- In particular, don't worry about the top and bottom of the ball, which are often significantly lighter and darker respectively
- Undercalibrate orange. As long as the ball has SOME orange, it should get detected. False-positive balls are a much bigger problem.
- [Dark Blue] Away Jersey
- Should be underclassified
- Just make sure the jersey has some blue on it and that the field isn't covered in blue
- The Away team is always our competitors even if we are wearing our Away jersey.
- [White] Field Lines and Goal Post
- This should be easy
- The overlay white is a similar colour to the field line, so sometimes it can look classified. To get around this, turn on 'all colours + unclassified'
- Don't be surprised if lots of things are white (in particular robots)
- Do a good camera calibration first
- If two objects are the same colour in the image, colour calibrating them to be different colours is almost impossible. Instead, change the camera settings so they look different, then calibrate them appropriately.
- Good knowledge of the vision algorithms will tell you whether to under or over calibrate colours
- It is usually sufficient to do 1 camera/colour calibration for the entire team, since the cameras are usually the same across robots. In the past this wasn't always true, and we sometimes had one calibration for robots A,C,D, with another calibration for robots B, E, F.
- Calibration files (*.nnmc and *.nnmc.bz2) are found in /image/home/nao/data/
- Kernel information (*_kernel) is found in /utils/kernels/
- If you don't want to save the calibration kernel to the default location click "Save As".
- If you want to start a new colour calibration, select "New".
- There are two calibrations, one for each camera. When you click "save", it only saves the calibration information for the camera selected. Make sure to save them both.
- You can load a kernel which you have saved at another time by clicking load. It loads the kernel into the currently selected camera.
Let Kenji ([email protected]) know if anything doesn't make sense or if you have questions :)
We no longer have the Gaussian nearest neighbour method (we can look into adding it back if something like that would be useful).
We have two tabs for calibration (NNMC and Calibration). In both cases, we load up a dump file with raw images and either build upon a previously saved classification or start from scratch. Classification works by layering new colour definitions that override old ones. There is no option to undo actions, so save regularly (saving and loading is done via buttons; ignore the drop-down menus). The way I would go about it is to try something and, if it looks good, save it and proceed.
The classifications used by the robots are image/home/nao/data/top.nnmc.bz2 and image/home/nao/data/bot.nnmc.bz2
This is used for algorithmic sweeping classification of white and green.
It relies on VERY STRICT assumptions so ensure that these are met (dump file has appropriate images) when performing classification.
We assume that for the image, starting from the bottom of the image, we see green, a green->white edge followed by a white->green edge. All "green" and "white" pixels we see in between these edges are classified appropriately.
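The scan assumption above can be sketched as a toy example. This is illustrative only: the real code in GreenYUVClassifier.cpp works on YUV pixels and detected edges, not precomputed labels. Walking one column bottom-to-top, the initial run becomes green samples and the run between the two edges becomes white samples:

```python
def collect_column_samples(column):
    """column: coarse per-pixel labels for one image column, ordered
    bottom-to-top, e.g. ["green", "green", "white", "other", ...].
    Returns (green_rows, white_rows) under the stated assumption:
    a green run, a green->white edge, a white run, a white->green edge."""
    green, white = [], []
    i = 0
    while i < len(column) and column[i] == "green":  # run of field green
        green.append(i)
        i += 1
    while i < len(column) and column[i] == "white":  # run between the edges
        white.append(i)
        i += 1
    return green, white

print(collect_column_samples(["green", "green", "white", "white", "green"]))
# -> ([0, 1], [2, 3])
```

If the dump file violates the assumption (e.g. the column starts on a robot rather than the field), the toy above collects nothing useful, which is exactly why the images must be appropriate.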
Tick the button "Show detected edges?" for a mock run of what this classification would look like in the image. (Black is unclassified). Since some field colours match with black colours on the ball or off field colours, we roll that back with the Calibration tab.
I think regardless of which buttons you select ("Naive", "Sobel", "Robert's Cross", "isotropic", "Y", "U" and "V"), and what you see in the 3rd column, the algorithm uses Y and Sobel. The number entry on the next line represents the density of the image we use for calculating edges (the larger the number, the larger the edges tend to be). At the moment 1 is fine.
To actually run the classification, untick "Show detected edges?" and it will alter your nnmc file.
Some useful buttons are "Fill classification (green)" and "Fill classification (white)", which fill our classification so that it's more of a filled solid. (Somewhat broken, though multiple presses do work to some extent; still useful.) This is probably more useful for white, since we don't get many samples of white.
(If you can imagine the pixels in a 3D space, the button simply fills in any points between two white classified points as white)
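A one-axis version of that fill could look like the sketch below. This is hypothetical: the real button's behaviour lives in GreenYUVClassifier.cpp and may fill across more dimensions. For each (U, V) pair, every Y value between two already-white points is also marked white:

```python
def fill_white_along_y(white_points):
    """white_points: set of (y, u, v) tuples already classified white.
    Returns the filled set: for each (u, v) line, all y values between
    the minimum and maximum white y are also marked white."""
    by_uv = {}
    for (y, u, v) in white_points:
        by_uv.setdefault((u, v), []).append(y)
    filled = set()
    for (u, v), ys in by_uv.items():
        for y in range(min(ys), max(ys) + 1):
            filled.add((y, u, v))
    return filled

print(sorted(fill_white_along_y({(10, 0, 0), (13, 0, 0)})))
# -> [(10, 0, 0), (11, 0, 0), (12, 0, 0), (13, 0, 0)]
```

Filling like this helps because white samples are sparse: a few scattered white points become a solid region of the colour space.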
Using a Sobel gradient operator on Y, we generate an edge image. We then run Otsu's method on this to calculate an appropriate threshold for edges (pink pixels in the 3rd image). Since edges are harder to see towards the horizon, I've artificially decreased the threshold for a pixel to be an edge as we reach the higher portions of the image, so that almost everything becomes an edge and the scan effectively terminates.
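Otsu's method itself is standard: choose the threshold that maximises the between-class variance of the edge-magnitude histogram. A self-contained sketch (the actual implementation is in GreenYUVClassifier.cpp and may differ in detail):

```python
def otsu_threshold(values, bins=256):
    """Return the threshold maximising between-class variance.
    `values` are integer edge magnitudes in [0, bins)."""
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, weight_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        weight_bg += hist[t]            # pixels at or below t: non-edge
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg   # pixels above t: edge
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# A clearly bimodal histogram splits between the two modes.
print(otsu_threshold([10] * 50 + [200] * 50))
```

Lowering this threshold towards the top of the image, as described above, simply admits weaker gradients as edges near the horizon.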
All code can be found in GreenYUVClassifier.cpp and NNMCTab.cpp
This is used for fine-tuning our classification manually for white, green and black.
For this, any dump file is appropriate. You can typically follow the instructions we had prior to 2017.
You can zoom in by scrolling, colour buttons represent what we are classifying and overlays are self explanatory.
The "vision module", "Auto-Weight Gaussians" and "Undo" buttons are broken.
I think the general idea here is to prevent overclassification: if we see too much colour above the field boundary, we bring that back with Background or the appropriate colour.
The "Radius for classification" at the bottom gives you a sort of cube tool, if you like. If we classify a point as, say, Green, anything within the radius distance of that point in any dimension (Y, U or V) is also classified as Green (this is the Chebyshev distance: https://en.wikipedia.org/wiki/Chebyshev_distance).
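A sketch of that radius behaviour (illustrative only; the real lookup is a dense nnmc table over the YUV cube, modelled here as a plain dict): classifying one point also classifies every (Y, U, V) point whose Chebyshev distance from it is at most the radius, i.e. a cube of side 2*radius + 1.

```python
def classify_with_radius(table, point, colour, radius):
    """Mark `point` and every (y, u, v) within Chebyshev distance
    `radius` of it as `colour` in `table` (a dict (y,u,v) -> colour).
    Components are clamped to the valid 0..255 range."""
    y0, u0, v0 = point
    for y in range(max(0, y0 - radius), min(255, y0 + radius) + 1):
        for u in range(max(0, u0 - radius), min(255, u0 + radius) + 1):
            for v in range(max(0, v0 - radius), min(255, v0 + radius) + 1):
                table[(y, u, v)] = colour

table = {}
classify_with_radius(table, (128, 64, 64), "green", 1)
print(len(table))  # a 3x3x3 cube: 27 points
```

Because later clicks overwrite earlier ones in the table, this also matches the layering behaviour described above: new colour definitions override old ones.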