[Original] NI Vision Development Module 2013 New Features

    Posted on 2013-8-7 15:11:55


    What's New in the NI Vision Development Module 2013
    Overview
    The NI Vision Development Module 2013 includes many new features and performance enhancements. This document provides an overview of the new algorithm and usability improvements and describes how these features can benefit you when you are implementing your vision system.
    Table of Contents
    • New Pattern Matching Algorithm
    • Object Tracking
    • OCR Improvements
    1. New Pattern Matching Algorithm
    Pattern matching is a commonly used technique to locate regions of an image that match a known reference pattern, referred to as a template. Pattern matching algorithms are some of the most important functions in machine vision because of their use in a wide variety of applications, including alignment, gauging, and inspection. The NI Vision Development Module 2013 adds a new pattern matching algorithm called pyramidal matching, which improves performance in images with blur or low contrast.

    Figure 1: Example of pattern matching with blur and low contrast
    Pyramidal matching improves the computation time of pattern matching by reducing the size of the image and template. In pyramidal matching, both the image and the template are sampled down to smaller spatial resolutions using Gaussian pyramids. This method keeps every other pixel in each dimension, so the image and the template are each reduced to one-fourth of their original size at every successive pyramid level.
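    The pyramid construction itself is easy to illustrate outside of NI Vision. The Python sketch below uses OpenCV's pyrDown purely as a stand-in for the Gaussian pyramid step described above (NI Vision builds these pyramids internally during learning and matching); the image file names are hypothetical.

```python
# Illustrative only: OpenCV's pyrDown stands in for the Gaussian pyramid step;
# NI Vision performs this internally during template learning and matching.
import cv2

image = cv2.imread("inspection.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

levels = 3
image_pyramid, template_pyramid = [image], [template]
for _ in range(levels):
    # pyrDown applies a Gaussian blur, then keeps every other row and column,
    # so each level holds roughly one-fourth of the pixels of the previous one.
    image_pyramid.append(cv2.pyrDown(image_pyramid[-1]))
    template_pyramid.append(cv2.pyrDown(template_pyramid[-1]))

for lvl, (img, tpl) in enumerate(zip(image_pyramid, template_pyramid)):
    print(f"level {lvl}: image {img.shape}, template {tpl.shape}")
```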

    Figure 2: Pyramid matching uses multiple levels to quickly refine searches.
    In the learning phase, the algorithm automatically computes the maximum pyramid level that can be used for the given template, and learns the data needed to represent the template and its rotated versions across all pyramid levels. The algorithm attempts to find an 'optimal' pyramid level (based on an analysis of the template data) that gives the fastest and most accurate match. The algorithm then iterates through each level of the pyramid, refining the match at each stage until the full resolution is used to give the best match while still achieving a speed boost. You can also choose to apply one final refinement stage to the match candidates to find sub-pixel accurate locations and sub-degree accurate angles. This stage relies on specially extracted edge and pixel information from the template and employs interpolation techniques to produce a highly accurate match location and angle.
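    To make the coarse-to-fine idea concrete, here is a minimal Python sketch of pyramid-based template matching, assuming OpenCV is available. It handles translation only and omits the rotation learning, optimal-level selection, and sub-pixel interpolation that the NI algorithm performs, so read it as an illustration of the refinement loop rather than the actual implementation.

```python
import cv2

def pyramid_match(image, template, levels=3):
    """Coarse-to-fine template matching sketch (translation only, no rotation)."""
    # Build Gaussian pyramids from full resolution (index 0) down to the coarsest level.
    imgs, tpls = [image], [template]
    for _ in range(levels):
        imgs.append(cv2.pyrDown(imgs[-1]))
        tpls.append(cv2.pyrDown(tpls[-1]))

    # Exhaustive correlation search only at the coarsest level, where it is cheap.
    res = cv2.matchTemplate(imgs[-1], tpls[-1], cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(res)

    # Walk back up the pyramid, refining the candidate inside a small search window.
    for lvl in range(levels - 1, -1, -1):
        img, tpl = imgs[lvl], tpls[lvl]
        th, tw = tpl.shape[:2]
        # Map the candidate to the finer level and clamp it to a valid position.
        x = min(x * 2, img.shape[1] - tw)
        y = min(y * 2, img.shape[0] - th)
        pad = 4  # +/- 4 pixel refinement window around the mapped candidate
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        x1 = min(x + tw + pad, img.shape[1])
        y1 = min(y + th + pad, img.shape[0])
        res = cv2.matchTemplate(img[y0:y1, x0:x1], tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, (dx, dy) = cv2.minMaxLoc(res)
        x, y = x0 + dx, y0 + dy
    return (x, y), score
```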

    2. Object Tracking
    The NI Vision Development Module 2013 introduces a new algorithm for object tracking, which tracks the location of an object over a sequence of images to determine how it is moving relative to other objects in the image. Object tracking has many uses in application areas such as:
    • Security and surveillance - In the surveillance industry, objects of interest such as people and vehicles can be tracked. Object tracking can be used for detecting trespassing or observing anomalies like unattended baggage.
    • Traffic management - The flow of traffic can be analyzed, and collisions detected.
    • Medicine - Cells can be tracked in medical images.
    • Industry - Defective items can be detected and tracked.
    • Robotics and navigation - Robots can follow the trajectory of an object. Robotic assistants can maneuver in a factory (for example, de-palletizing objects).
    • Human-computer interaction (HCI) - Users can be tracked in a gaming environment.
    • Object modeling - An object tracked from multiple perspectives can be used to create a partial 3D model of the object.
    • Bio-mechanics - Tracking body parts to interpret gestures or movements.

    Figure 3: Example of object tracking for a traffic monitoring application
    NI Vision implements two object tracking algorithms: mean shift and EM-based mean shift. Mean shift tracks a user-defined object by iteratively updating its location, while EM-based mean shift also adapts the object's shape and scale in each frame. Both algorithms are tolerant of gradual changes in the tracked object, including geometric transformations such as shifting, rotation, and scaling, as well as partial occlusion of the object.
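    As a rough illustration of the mean shift idea (not the NI Vision API, which is called from LabVIEW or Vision Assistant), the sketch below tracks a user-defined window with OpenCV's meanShift, using a hue histogram as the object model; the video file name and the initial window are assumed values.

```python
import cv2

cap = cv2.VideoCapture("traffic.avi")   # hypothetical video file
ok, frame = cap.read()

# Initial bounding box of the object to track (x, y, width, height) -- assumed known,
# e.g. drawn by the user or produced by a detection step.
track_window = (300, 200, 80, 60)
x, y, w, h = track_window

# Model the object by a hue histogram of the initial region.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves by less than 1 pixel.
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the histogram to get a likelihood map, then shift the window
    # toward the local density peak (the mean shift step).
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, criteria)
    print("object at", track_window)
```

    For comparison, OpenCV's CamShift variant additionally adapts the window size and orientation from frame to frame, which is loosely analogous to how the EM-based mean shift tracker adapts the object's shape and scale.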

    3. OCR Improvements
    Optical Character Recognition (OCR) provides machine vision functions you can use in an application to read text or characters in an image. The NI Vision Development Module 2013 brings improvements to OCR functionality, including multiline detection, tolerance of slight rotation, and better character segmentation.
    Multiline detection allows a user to set a single region of interest (ROI) enclosing multiple lines of text rather than specifying an ROI for each expected line. It uses particle analysis and clustering based on vertical overlap to detect the lines within the specified ROI. Users can explicitly set the number of lines expected, or the algorithm can automatically detect the number of lines and apply character segmentation to all of them. If multiple lines are detected and the expected number of lines is specified, the lines with the highest-ranked classification scores are returned.
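    A simplified sketch of this line-detection step is shown below in Python, where connected-component analysis stands in for NI Vision's particle analysis and blobs are clustered into lines by their vertical overlap; the overlap threshold is an assumed parameter, not one documented by NI.

```python
import cv2

def detect_lines(binary_roi, overlap_thresh=0.5):
    """Group character blobs into text lines by vertical overlap (simplified sketch)."""
    # Connected-component analysis stands in for NI Vision's particle analysis:
    # each component's bounding box is one candidate character.
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary_roi)
    boxes = [stats[i] for i in range(1, n)]          # skip the background label
    boxes.sort(key=lambda s: s[cv2.CC_STAT_TOP])

    lines = []                                       # each line is a list of boxes
    for box in boxes:
        top, h = box[cv2.CC_STAT_TOP], box[cv2.CC_STAT_HEIGHT]
        placed = False
        for line in lines:
            l_top = min(b[cv2.CC_STAT_TOP] for b in line)
            l_bot = max(b[cv2.CC_STAT_TOP] + b[cv2.CC_STAT_HEIGHT] for b in line)
            overlap = min(top + h, l_bot) - max(top, l_top)
            # Assign the blob to a line if it overlaps that line's vertical extent enough.
            if overlap > overlap_thresh * min(h, l_bot - l_top):
                line.append(box)
                placed = True
                break
        if not placed:
            lines.append([box])
    return lines
```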

    Figure 4: Multiline support reduces the need for a separate ROI for each line of text and detects the highest scoring lines.
    OCR reading functionality has also been improved to support detection and reading of lines and characters with slight rotations (±20°) and differing character heights. Character segmentation refers to the process of locating and separating each character in the image from the background and from other characters. This process applies to both the training and reading procedures and has a significant impact on the performance of the OCR application. OCR includes multiple threshold methods to separate the characters from the background and an AutoSplit algorithm to segment slanted, or italic, characters. A shortest-segment algorithm is also implemented to ensure valid segmentation even when characters are merged. The algorithm works in three steps (a simplified sketch follows the list):
    • Attempt to divide the characters by applying multiple shortest cut paths.
    • Choose the cuts that are closest to the maximum character width.
    • Intelligently choose the cuts which segment a character correctly based on classification during reading.
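    The Python sketch below approximates the first two steps, using vertical-projection minima in place of true shortest cut paths and omitting the classification-based selection of step three; the function name and max_char_width parameter are illustrative assumptions.

```python
import numpy as np

def candidate_cuts(binary_chars, max_char_width):
    """Pick cut columns for merged characters (simplified stand-in for shortest-cut paths).

    binary_chars: 2-D array, nonzero where there is ink.
    max_char_width: expected maximum character width in pixels.
    """
    # Step 1 (approximation): use vertical-projection minima as candidate cut columns
    # instead of true shortest cut paths through the merged characters.
    profile = binary_chars.sum(axis=0)
    candidates = [x for x in range(1, len(profile) - 1)
                  if profile[x] <= profile[x - 1] and profile[x] <= profile[x + 1]]

    # Step 2: keep the cut whose resulting segment width is closest to the expected
    # maximum character width, then repeat from that cut onward.
    cuts, last = [], 0
    while candidates:
        best = min(candidates, key=lambda x: abs((x - last) - max_char_width))
        cuts.append(best)
        last = best
        candidates = [x for x in candidates if x > last]
    # Step 3 (classification-based selection during reading) is omitted here.
    return cuts
```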

    Figure 5: Segmentation improvements ensure robust reading for OCR applications.


