Kolam




Kolam is a tool to visualize large datasets, notably biomedical and geospatial imagery.

Firefly




Firefly is a web-based tool for image analysis, tracking, and segmentation. It supports different annotation types such as points, lines, polygons, and polylines. It interacts with a database on a web server to provide an interactive platform for visualizing and editing tracking data.
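
As a rough illustration of how such annotations might be represented and sent to a server, the following Python sketch defines point, line, polyline, and polygon records and posts them as JSON. The field names and endpoint are hypothetical, not Firefly's actual API.

# Hypothetical sketch of annotation records a tool like Firefly might exchange
# with its web server; field names and endpoint are illustrative only.
import json
from urllib import request

annotations = [
    {"type": "point",    "frame": 12, "points": [[104.5, 88.0]]},
    {"type": "line",     "frame": 12, "points": [[10, 20], [110, 20]]},
    {"type": "polyline", "frame": 13, "points": [[5, 5], [40, 60], [90, 55]]},
    {"type": "polygon",  "frame": 13, "points": [[0, 0], [50, 0], [50, 50], [0, 50]]},
]

def post_annotations(url, records):
    """Send annotation records to the server as JSON (endpoint is hypothetical)."""
    data = json.dumps(records).encode("utf-8")
    req = request.Request(url, data=data, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status

# Example (requires a running server at this hypothetical endpoint):
# post_annotations("http://localhost:8000/api/annotations", annotations)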

RootFlow




RootFlow is a tool for biological motion estimation for plant root growth. It can be downloaded here.

pyTAG




pyTAG is an interactive, lightweight, Python-based desktop tool for ground-truth generation. pyTAG has three modes of ground-truth generation: Manual, Semi-Automatic, and Fully Automatic. It also allows its users to edit and review the generated ground truth.
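
A minimal sketch of how these three modes could be dispatched is shown below; the class and function names are hypothetical and do not reflect pyTAG's actual API.

# Illustrative sketch of pyTAG-style ground-truth generation modes;
# names are hypothetical, not pyTAG's actual API.
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"              # user draws every annotation
    SEMI_AUTOMATIC = "semi"        # a tracker proposes boxes, user corrects them
    FULLY_AUTOMATIC = "auto"       # a tracker annotates the whole sequence

def generate_ground_truth(frames, mode, tracker=None, ask_user=None):
    """Return one bounding box per frame according to the selected mode."""
    boxes = []
    for frame in frames:
        if mode is Mode.MANUAL:
            box = ask_user(frame)                    # manual tagging
        else:
            box = tracker.update(frame)              # automatic proposal
            if mode is Mode.SEMI_AUTOMATIC:
                box = ask_user(frame, proposal=box)  # user reviews/edits proposal
        boxes.append(box)
    return boxes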


[Software]

Cornea Detection and CNV Grading




This work proposes a robust automated approach to grading Cornea NeoVascularization (CNV) disease based on in-growth vessels. The figure describes the whole automated process. The intuition behind our work is to predict the grade of the corresponding cornea using vessel-specific features and a regression model. The first part of our algorithm separates the cornea region from the other parts of the mouse image, while the second part learns to predict a class, or grade, of the disease. For the first part, we use Mask R-CNN, a state-of-the-art deep learning network for biomedical segmentation, to detect the cornea region. A set of mouse images was selected and annotated to train the Mask R-CNN. The result is a binary mask in which the white region represents the cornea and the black region represents the background, covering all other parts such as the eyelid and lashes. Eliminating the region outside the cornea reduces errors caused by the texture and color of those parts and produces a more robust classifier. However, the Mask R-CNN binary mask is not always a proper circle, so we fit a circle to the binary mask to produce an optimal circular mask. The raw image is masked with this circular binary mask to extract the cornea region. A set of vessel-specific features is then generated based on multiscale Hessian eigenvalues, intensity, oriented second derivatives, and multiscale line detector responses, and used with a random forest model. Random forest is a supervised statistical learner that must first be trained on a set of cornea images with corresponding grades. The images are divided into 5 grades: No CNV Naive (0), No CNV (1), Mild CNV (2), Moderate CNV (3), and Severe CNV (4). We train a random forest regression model to grade images based on the generated features, and use the trained model to produce the grades of our testing data. The testing data are a set of images kept aside to assess the quality of our automated grading.
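
The following Python sketch illustrates two of the steps described above: fitting a circle to a binary cornea mask and training a random forest regressor on grade labels 0 to 4. It is a simplified reading of the pipeline, not the authors' implementation; the feature extraction is reduced to a placeholder and all function names are illustrative.

# Hedged sketch: circular-mask fitting plus random forest grading.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_circular_mask(binary_mask):
    """Fit a circle to the foreground pixels and return an ideal circular mask."""
    ys, xs = np.nonzero(binary_mask)
    cy, cx = ys.mean(), xs.mean()              # circle center = foreground centroid
    r = np.sqrt(binary_mask.sum() / np.pi)     # radius from foreground area
    yy, xx = np.ogrid[:binary_mask.shape[0], :binary_mask.shape[1]]
    return (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2

def extract_vessel_features(cornea_pixels):
    """Placeholder for the vessel-specific features (multiscale Hessian eigenvalues,
    intensity, oriented second derivatives, line-detector responses)."""
    return np.array([cornea_pixels.mean(), cornea_pixels.std()])

def train_grader(images, masks, grades):
    """Train a regressor on corneas with known CNV grades 0 (No CNV Naive) .. 4 (Severe)."""
    X = []
    for img, mask in zip(images, masks):
        circ = fit_circular_mask(mask)         # idealized circular cornea mask
        X.append(extract_vessel_features(img[circ]))
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.array(X), np.array(grades))
    return model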


[Software]

Presentations


ShareBoost: Boosting for Multi-View Learning with Performance Guarantees



Algorithms combining multi-view information are known to exponentially speed up classification and have been applied to many fields. However, they lack the ability to mine the most discriminant information sources (or data types) for making predictions. In this paper, we propose a boosting-based algorithm to address these problems. The proposed algorithm builds base classifiers independently from each data type (view), each of which provides a partial view of an object of interest. Unlike AdaBoost, where each view has its own re-sampling weight, our algorithm uses a single re-sampling distribution for all views at each boosting round. This distribution is determined by the view whose training error is minimal. This shared sampling mechanism restricts noise to individual views, thereby reducing sensitivity to noise. Furthermore, in order to establish performance guarantees, we introduce a randomized version of the algorithm in which a winning view is chosen probabilistically. As a result, it can be cast within a multi-armed bandit framework, which allows us to show that with high probability the algorithm seeks out the most discriminant views of the data for making predictions. We provide experimental results that show its performance against noise and competing techniques.
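
The sketch below illustrates the shared-sampling idea described above: one weight distribution over training examples is shared across views, and at each boosting round the view with the smallest weighted error updates it. This is a simplified reading of ShareBoost, not the authors' reference implementation, and the decision-stump base learner and update rule are AdaBoost-style assumptions.

# Simplified shared-sampling boosting sketch (illustrative, not the paper's code).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def shareboost_fit(views, y, rounds=10):
    """views: list of (n_samples, n_features) arrays, one per view; y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # single re-sampling distribution
    ensemble = []                            # list of (alpha, view_index, classifier)
    for _ in range(rounds):
        best = None
        for v, X in enumerate(views):
            clf = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            err = np.sum(w * (clf.predict(X) != y))
            if best is None or err < best[0]:
                best = (err, v, clf)         # winning view = minimal weighted error
        err, v, clf = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)          # AdaBoost-style vote weight
        w *= np.exp(-alpha * y * clf.predict(views[v]))
        w /= w.sum()                                   # shared update for all views
        ensemble.append((alpha, v, clf))
    return ensemble

def shareboost_predict(ensemble, views):
    score = sum(a * clf.predict(views[v]) for a, v, clf in ensemble)
    return np.sign(score)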

ShareBoost presentation


pyTAG: Python-based Interactive Training Data Generation for Visual Tracking Algorithms




Visual object tracking has always been an important topic in the computer vision community due to its wide range of applications in many domains. While a considerable number of unsupervised and supervised visual tracking algorithms have been developed, the visual tracking field continues to explore improved algorithms and challenging new applications such as multispectral object tracking, multi-object tracking, and tracking from moving platforms. Ground-truth-based evaluation of tracking algorithms is an essential component for measuring tracker performance. However, manual ground-truth generation by tagging objects of interest in video sequences is an extremely tedious and error-prone task that can be subjective, especially in complex scenes. In visual tracking, common challenges arising from the environment and object movements include occlusion of the objects of interest, splitting or merging of groups of objects, ID switches between objects of similar appearance, and target drift due to model update errors. These problems can cause tracking algorithms to lose the object of interest or to start tracking the wrong object. To achieve longer, more persistent, and more accurate tracking for intelligent scene perception, the new generation of tracking algorithms incorporates machine learning based approaches. These supervised tracking algorithms, especially deep convolutional neural networks with millions of parameters, require large training sets. Therefore, it is important to generate accurate training data and ground truth for tracking. In this study, a training data and ground-truth generation tool called pyTAG is implemented for visual tracking. The proposed tool's plugin structure allows integration, testing, and validation of different trackers. A tracker can be paused, resumed, forwarded, rewound, and re-initialized after it loses the object during training data generation. Most importantly, pyTAG allows users to change the object tracking method during ground-truth generation. This feature provides the flexibility to adapt to the challenges that occur during ground-truth generation by switching to whichever object tracking method performs best. This tool has been implemented to assist researchers in rapidly generating ground truth and training data, fixing annotations, and running and visualizing custom or existing object tracking techniques.
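
As a rough illustration of the plugin idea described above, the following Python sketch shows trackers sharing a common interface so the active tracker can be swapped and re-initialized mid-sequence. The class names are hypothetical and do not reflect pyTAG's actual plugin API.

# Minimal sketch of a tracker plugin interface with mid-sequence switching;
# illustrative only, not pyTAG's actual code.
from abc import ABC, abstractmethod

class TrackerPlugin(ABC):
    @abstractmethod
    def init(self, frame, box):
        """Initialize (or re-initialize) the tracker on a frame with a bounding box."""

    @abstractmethod
    def update(self, frame):
        """Return the predicted bounding box for the next frame."""

class AnnotationSession:
    """Drives a tracker over a sequence and lets the user switch trackers."""
    def __init__(self, tracker: TrackerPlugin):
        self.tracker = tracker
        self.boxes = {}                        # frame index -> bounding box

    def step(self, idx, frame):
        self.boxes[idx] = self.tracker.update(frame)
        return self.boxes[idx]

    def switch_tracker(self, new_tracker: TrackerPlugin, frame, box):
        """Swap in a different tracker (e.g. after drift) and re-initialize it."""
        self.tracker = new_tracker
        self.tracker.init(frame, box)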