
visionhdl.BilateralFilter

Perform 2-D filtering of a pixel stream

Description

The visionhdl.BilateralFilter object filters images while preserving edges. Some applications of bilateral filtering are denoising while preserving edges, separating texture from illumination, and cartooning to enhance edges. The filter replaces each pixel at the center of a neighborhood by an average that is calculated using spatial and intensity Gaussian filters. The object determines the filter coefficients from:

  • Spatial location in the neighborhood (similar to a Gaussian blur filter)

  • Intensity difference from the neighborhood center value

The object provides two standard deviation parameters for independent control of the spatial and intensity coefficients.
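As a conceptual sketch, the weight for each neighborhood pixel is the product of a spatial Gaussian and an intensity Gaussian, normalized so the weights sum to one. The standard deviation values and pixel data below are hypothetical, and the object computes quantized fixed-point coefficients internally:

```matlab
% Bilateral weights for one hypothetical 3-by-3 neighborhood.
spatialStdDev = 0.5;
intensityStdDev = 0.5;
nbr = [0.39 0.40 0.40; 0.39 0.71 0.40; 0.40 0.40 0.39];  % normalized intensities
[x,y] = meshgrid(-1:1,-1:1);
spatialW = exp(-(x.^2 + y.^2)/(2*spatialStdDev^2));            % spatial distance term
intensityW = exp(-((nbr - nbr(2,2)).^2)/(2*intensityStdDev^2)); % intensity similarity term
w = spatialW .* intensityW;
w = w/sum(w(:));                   % normalize the weights
filtered = sum(sum(w .* nbr));     % weighted average replaces the center pixel
```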

To perform bilateral filtering of a pixel stream:

  1. Create the visionhdl.BilateralFilter object and set its properties.

  2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects?

Creation

Description


filt2d = visionhdl.BilateralFilter(Name,Value) returns a bilateral filter System object™. Set properties using name-value pairs. Enclose each property name in single quotes.

For example:

filt2d = visionhdl.BilateralFilter('CoefficientsDataType','Custom',...
                      'CustomCoefficientsDataType',numerictype(0,18,17))

Properties


Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them.

If a property is tunable, you can change its value at any time.

For more information on changing property values, see System Design in MATLAB Using System Objects.

NeighborhoodSize

Size of the image region used to compute the average, specified as an N-by-N pixel square.

SpatialStdDev

Spatial standard deviation target used to compute coefficients for the spatial Gaussian filter, specified as a positive real number. This parameter has no limits, but recommended values are from 0.1 to 10. At the high end, the distribution becomes flat and the coefficients are small. At the low end, the distribution peaks in the center and has small coefficients in the rest of the neighborhood. These boundary values also depend on the neighborhood size and the data type used for the coefficients.

IntensityStdDev

Intensity standard deviation target used to compute coefficients for the intensity Gaussian filter, specified as a positive real number. This parameter has no limits, but recommended values are from 0.1 to 10. At the high end, the distribution becomes flat and the coefficients are small. At the low end, the distribution peaks in the center and has small coefficients in the rest of the neighborhood. These boundary values also depend on the neighborhood size and the data type used for the coefficients.

When the intensity standard deviation is large, the bilateral filter acts more like a Gaussian blur filter, because the intensity Gaussian has a lower peak. Conversely, when the intensity standard deviation is smaller, edges in the intensity are preserved or enhanced.
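To see this effect numerically, compare the intensity weights produced at the two ends of the recommended range. This is a conceptual sketch with arbitrary values, not the object's internal fixed-point computation:

```matlab
% Intensity Gaussian weight as a function of intensity difference from the
% neighborhood center value, for a small and a large IntensityStdDev.
d = 0:0.1:0.5;                    % hypothetical intensity differences
wSmall = exp(-d.^2/(2*0.1^2));    % sigma = 0.1: weight falls off fast, so edges are preserved
wLarge = exp(-d.^2/(2*10^2));     % sigma = 10: weight stays near 1, so the filter acts like a blur
```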

PaddingMethod

Select one of these methods for padding the boundary of the input image.

  • 'Constant' — Interpret pixels outside the image frame as having a constant value.

  • 'Replicate' — Repeat the value of pixels at the edge of the image.

  • 'Symmetric' — Set the value of the padding pixels to mirror the edge of the image.

  • 'Reflection' — Set the value of the padding pixels to reflect around the pixel at the edge of the image.

  • 'None' — Exclude padding logic. The object does not set the pixels outside the image frame to any particular value. This option reduces the hardware resources that are used by the object and reduces the blanking that is required between frames. However, this option affects the accuracy of the output pixels at the edges of the frame. To maintain pixel stream timing, the output frame is the same size as the input frame. However, to avoid using pixels calculated from undefined padding values, mask off the n/2 pixels around the edge of the frame for downstream operations, where n is the size of the operation kernel. For more details, see Increase Throughput with Padding None.

For more information about these methods, see Edge Padding.
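The padding methods can be illustrated on a two-pixel-wide border of a 1-D signal. This is only a sketch of the indexing each method implies; the object applies the same idea in 2-D:

```matlab
v = [10 20 30 40];                 % pixels at the edge of a frame line
k = 2;                             % padding width
% 'Constant' with value 0:
constantPad  = [zeros(1,k), v, zeros(1,k)];               % [0 0 10 20 30 40 0 0]
% 'Replicate' repeats the edge pixel:
replicatePad = [repmat(v(1),1,k), v, repmat(v(end),1,k)]; % [10 10 10 20 30 40 40 40]
% 'Symmetric' mirrors, including the edge pixel:
symmetricPad = [v(k:-1:1), v, v(end:-1:end-k+1)];         % [20 10 10 20 30 40 40 30]
% 'Reflection' mirrors around the edge pixel, excluding it:
reflectPad   = [v(k+1:-1:2), v, v(end-1:-1:end-k)];       % [30 20 10 20 30 40 30 20]
```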

PaddingValue

Value used to pad the boundary of the input image, specified as an integer. The object casts this value to the same data type as the input pixel.

Dependencies

This parameter applies when you set PaddingMethod to 'Constant'.

LineBufferSize

Size of line memory buffer, specified as a positive integer. Choose a power of two that accommodates the number of active pixels in a horizontal line. If you specify a value that is not a power of two, the buffer uses the next largest power of two.
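For example, you can compute the next power of two for a given line width with nextpow2 (the line width here is a hypothetical value):

```matlab
activePixelsPerLine = 1280;                        % hypothetical active line width
lineBufferSize = 2^nextpow2(activePixelsPerLine);  % 2048, the next power of two
```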

Rounding mode used for fixed-point operations. When the input is any integer or fixed-point data type, the algorithm uses fixed-point arithmetic for internal calculations. This option does not apply when the input data type is single or double.

Overflow mode used for fixed-point operations. When the input is any integer or fixed-point data type, the algorithm uses fixed-point arithmetic for internal calculations. This option does not apply when the input data type is single or double.

CoefficientsDataType

Method for determining the data type of the filter coefficients. The coefficients usually require a data type with more precision than the input data type.

  • 'Custom' — Sets the data type of the coefficients to match the data type defined in the CustomCoefficientsDataType property.

  • 'Same as first input' — Sets the data type of the coefficients to match the data type of the pixelin argument.

CustomCoefficientsDataType

Data type for the filter coefficients, specified as numerictype(0,WL,FL), where WL is the word length and FL is the fraction length in bits.

Specify an unsigned data type that can represent values less than 1. The coefficients usually require a data type with more precision than the input data type. The object calculates the coefficients based on the neighborhood size and the values of IntensityStdDev and SpatialStdDev. Larger neighborhoods spread the Gaussian function such that each coefficient value is smaller. A larger standard deviation flattens the Gaussian so that the coefficients are more uniform in nature, and a smaller standard deviation produces a peaked response.

Note

If you try a data type and after quantization, more than half of the coefficients become zero, the object issues a warning. If all the coefficients are zero after quantization, the object issues an error. These messages mean that the object was unable to express the requested filter by using the data type specified. To avoid this issue, choose a higher-precision coefficient data type or adjust the standard deviation parameters.

Dependencies

This property applies when you set CoefficientsDataType to 'Custom'.
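As a short sketch of how quantization can zero out small coefficients (the values are illustrative, and the fi call requires Fixed-Point Designer; the object performs its own internal quantization):

```matlab
% Rapidly decaying normalized coefficients, as a small standard deviation produces.
c = exp(-(0:3).^2/(2*0.3^2));
c = c/sum(c);
cq = double(fi(c, 0, 8, 7));   % quantize to an unsigned 8-bit type with 7 fraction bits
% With this word length, all but the largest coefficient round to zero, so
% more than half of the coefficients are zero, the situation that would
% trigger the object's warning.
```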

OutputDataType

Method for determining the data type of the output pixels.

  • 'Same as first input' — Sets the data type of the output pixels to match the data type of pixelin.

  • 'Custom' — Sets the data type of the output pixels to match the data type defined in the CustomOutputDataType property.

CustomOutputDataType

Data type for the output pixels, specified as numerictype(S,WL,FL), where S is 1 for a signed type and 0 for an unsigned type, WL is the word length, and FL is the fraction length in bits. The filtered pixel values are cast to this data type.

Dependencies

This property applies when you set OutputDataType to 'Custom'.

Usage

Description


[pixelout,ctrlout] = filt2d(pixelin,ctrlin) returns the filtered pixel value and accompanying control signals.

This object uses a streaming pixel interface with a structure for frame control signals. This interface enables the object to operate independently of image size and format and to connect with other Vision HDL Toolbox™ objects. The object accepts and returns a scalar pixel value and control signals as a structure containing five signals. The control signals indicate the validity of each pixel and its location in the frame. To convert a pixel matrix into a pixel stream and control signals, use the visionhdl.FrameToPixels object. For a full description of the interface, see Streaming Pixel Interface.

Input Arguments


pixelin

Single image pixel in a pixel stream, specified as a scalar value representing intensity. Integer and fixed-point data types larger than 16 bits are not supported.

double and single data types are supported for simulation, but not for HDL code generation.

Data Types: uint8 | uint16 | int8 | int16 | fi | logical | double | single

ctrlin

Control signals accompanying the input pixel stream, specified as a pixelcontrol structure containing five logical data type signals. The signals describe the validity of the pixel and its location in the frame. For more details, see Pixel Control Structure.

Data Types: struct

Output Arguments


pixelout

Single image pixel in a pixel stream, returned as a scalar value representing intensity. Integer and fixed-point data types larger than 16 bits are not supported.

double and single data types are supported for simulation, but not for HDL code generation.

Data Types: uint8 | uint16 | int8 | int16 | fi | logical | double | single

ctrlout

Control signals accompanying the output pixel stream, returned as a pixelcontrol structure containing five logical data type signals. The signals describe the validity of the pixel and its location in the frame. For more details, see Pixel Control Structure.

Data Types: struct

Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:

release(obj)


  • step — Run System object algorithm

  • release — Release resources and allow changes to System object property values and input characteristics

  • reset — Reset internal states of System object

Examples


Load input image and create serializer and deserializer objects.

frmOrig = imread('rice.png');
frmActivePixels = 48;
frmActiveLines = 32;
frmIn = frmOrig(1:frmActiveLines,1:frmActivePixels);
figure
imshow(frmIn,'InitialMagnification',300)
title 'Input Image'

frm2pix = visionhdl.FrameToPixels(...
      'NumComponents',1,...
      'VideoFormat','custom',...
      'ActivePixelsPerLine',frmActivePixels,...
      'ActiveVideoLines',frmActiveLines,...
      'TotalPixelsPerLine',frmActivePixels+10,...
      'TotalVideoLines',frmActiveLines+10,...
      'StartingActiveLine',6,...
      'FrontPorch',5);
[~,~,numPixPerFrm] = getparamfromfrm2pix(frm2pix);

pix2frm = visionhdl.PixelsToFrame(...
      'NumComponents',1,...
      'VideoFormat','custom',...
      'ActivePixelsPerLine',frmActivePixels,...
      'ActiveVideoLines',frmActiveLines);

Write a function that creates and calls the System object™. You can generate HDL from this function.

Note: This object syntax runs only in R2016b or later. If you are using an earlier release, replace each call of an object with the equivalent step syntax. For example, replace myObject(x) with step(myObject,x).

function [pixOut,ctrlOut] = BilatFilt(pixIn,ctrlIn)
%BilatFilt
% Filters one pixel using the default spatial and intensity standard
% deviation, 0.5.
% pixIn and pixOut are scalar intensity values.
% ctrlIn and ctrlOut are structures that contain control signals associated
% with the pixel.
% You can generate HDL code from this function.

  persistent filt2d;
  if isempty(filt2d)
    filt2d = visionhdl.BilateralFilter(...
      'CoefficientsDataType','Custom',...
      'CustomCoefficientsDataType',numerictype(0,18,17));
  end
  [pixOut,ctrlOut] = filt2d(pixIn,ctrlIn);
end

Filter the image by calling the function for each pixel.

pixOutVec = zeros(numPixPerFrm,1,'uint8');
ctrlOutVec = repmat(pixelcontrolstruct,numPixPerFrm,1);

[pixInVec,ctrlInVec] = frm2pix(frmIn);
for p = 1:numPixPerFrm
    [pixOutVec(p),ctrlOutVec(p)] = BilatFilt(pixInVec(p),ctrlInVec(p));
end
[frmOut,frmValid] = pix2frm(pixOutVec,ctrlOutVec);

if frmValid
   figure;
   imshow(frmOut,'InitialMagnification',300)
   title 'Output Image'
end


Introduced in R2017b