We discussed object detection in my previous article, so this time around I’ll be showing another example of how to identify objects: Template Matching.
Let’s take a look at this reference image
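To make the idea concrete, here is a minimal sketch of template matching with scikit-image’s `match_template`, run on a synthetic array rather than the reference image above. The template is cropped from a known spot so we can verify the detector finds it:

```python
import numpy as np
from skimage.feature import match_template

# Synthetic grayscale "scene" and a template cropped from a known spot.
rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[5:8, 12:15].copy()

# match_template slides the template across the image and returns a
# normalized cross-correlation map; the peak marks the best match.
result = match_template(image, template)
y, x = np.unravel_index(np.argmax(result), result.shape)
print(y, x)  # top-left corner of the best match: (5, 12)
```

Because the crop matches the scene exactly, the correlation peak sits at the crop’s original top-left corner.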
From my previous post, we were able to segment images using multiple approaches such as blob detection and connected components. Have you ever thought that once these objects are identified, we can actually generate features from the labeled regions themselves?
Let’s take a look at the red blood cells from my blob detection article once again, and copy all the necessary code and outputs from there. What we will reference back to are the following:
Once these steps are done, we can then move on to the machine learning part.
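As a minimal sketch of the label-to-features step, here is how labeled connected components can be turned into a feature table with scikit-image’s `label` and `regionprops_table`. The binary mask below is a hypothetical stand-in, not the actual red blood cell data:

```python
import numpy as np
from skimage.measure import label, regionprops_table

# Hypothetical binary mask standing in for segmented cells.
mask = np.zeros((30, 30), dtype=bool)
mask[5:10, 5:10] = True       # a 5x5 "cell", area 25
mask[18:24, 15:25] = True     # a 6x10 "cell", area 60

# Label connected components, then turn each labeled object into a
# row of features usable by a machine learning model.
labels = label(mask)
features = regionprops_table(labels, properties=('label', 'area', 'eccentricity'))
areas = sorted(int(a) for a in features['area'])
print(areas)  # [25, 60]
```

Each key in `features` is a column, so the dictionary drops straight into a DataFrame as a design matrix.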
From my previous articles, I have discussed different image processing approaches, some of which are image cleaning and even object detection. For this article, we will be discussing more on transformation and feature generation for our images, namely: Homography and Texture Metrics.
The transformations that Homography can perform include rotation, translation, scaling, and even skewing. This is made possible by mapping coordinates between two planar projections.
An example of this is evident in our document scanner apps, where pieces of paper are transformed into a readable perspective. …
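A minimal sketch of the document-scanner idea with scikit-image’s `ProjectiveTransform`: the corner coordinates here are made up for illustration, and in scikit-image transforms they are given in (col, row) order:

```python
import numpy as np
from skimage import transform

# Four corners of a skewed "document" in the photo (hypothetical coords).
src = np.array([[40, 60], [480, 30], [500, 400], [20, 420]], dtype=float)
# Where those corners should land: a clean 400x300 rectangle.
dst = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=float)

# Estimate the homography that maps src onto dst (4 point pairs
# fully determine a projective transform).
tform = transform.ProjectiveTransform()
tform.estimate(src, dst)

# Applying the transform sends each source corner to its rectangle corner.
mapped = tform(src)
```

To actually rectify a photo you would then call `transform.warp(image, tform.inverse, output_shape=(300, 400))`, which resamples the skewed page into the flat rectangle.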
Over at my previous post, we learned how to identify objects of interest using blob detection and connected components. However, there will be times when we have to isolate specific objects of interest from our images. Let’s take a look at one of the most colorful pieces made by mankind: the Rubik’s Cube
For this story, we’ll be discussing two approaches to isolating specific objects of interest from this Cube: Thresholding and Color Image Segmentation
The first method will entail numerous trial-and-error steps to determine the perfect value for the object that…
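As a sketch of the automatic alternative to that trial and error, Otsu’s method picks a threshold directly from the image histogram. The image below is synthetic, not the actual Cube photo:

```python
import numpy as np
from skimage.filters import threshold_otsu

# Synthetic grayscale image: dark background with one bright object.
rng = np.random.default_rng(42)
image = rng.normal(0.2, 0.05, (50, 50))
image[20:35, 20:35] = rng.normal(0.8, 0.05, (15, 15))

# Otsu chooses the cutoff that best separates the two intensity groups,
# so no manual tuning is needed.
thresh = threshold_otsu(image)
mask = image > thresh
```

For the color-segmentation side, the analogous move is converting the image with `skimage.color.rgb2hsv` and thresholding the hue channel instead of raw intensity.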
I had my first class in Image Processing in November 2020, with the hopes of learning a lot about how images work, and how they can be transformed to deliver essential business value to the customer.
We discussed a lot of terms and references during the first meeting and I’m here to share with you some of the things that I have learned in class:
Image analysis enables the following:
Blobs are objects of interest in a given image. In order to detect blobs, they must appear as bright objects on a dark background to ensure that algorithms will be able to properly detect them. Three methods were discussed to detect blobs:
The main benefit of transforming images into blobs comes when our machine is able to detect the said…
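A minimal sketch of one such detector, the Laplacian of Gaussian, on a synthetic bright-on-dark image (scikit-image also ships `blob_dog` and `blob_doh` for the other classic approaches):

```python
import numpy as np
from skimage.draw import disk
from skimage.feature import blob_log

# Dark background with two bright circular blobs, per the definition above.
image = np.zeros((100, 100))
for center, radius in [((30, 30), 8), ((70, 60), 5)]:
    rr, cc = disk(center, radius)
    image[rr, cc] = 1.0

# Laplacian of Gaussian returns one (row, col, sigma) triple per blob;
# sigma encodes the blob's scale, so larger disks get larger sigmas.
blobs = blob_log(image, min_sigma=2, max_sigma=10, threshold=0.1)
print(len(blobs))  # 2
```

The sigma column is what makes blobs useful as features: it gives a size estimate for each detected object for free.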
Detecting and cleaning objects of interest in images can be done in multiple ways. For this post, we’ll be discussing two types: Spatial Filters and Morphological Operations.
Filters are basically matrices that apply a specific value to a pixel based on its neighbors. To apply these matrices, convolution is performed.
Let’s take a look at this cute little ornament and set it as our object of interest for this post:
from skimage.io import imread, imshow
from skimage.color import rgb2gray
import numpy as np

plant = rgb2gray(imread('medium/plant.jpg'))
The appearance of the image can be…
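To illustrate the convolution step itself, here is a minimal sketch with a 3x3 mean filter applied via `scipy.ndimage.convolve`; the single-pixel impulse image is just for demonstration:

```python
import numpy as np
from scipy.ndimage import convolve

# A 3x3 mean filter: each output pixel becomes the average of its
# 3x3 neighborhood in the input.
kernel = np.ones((3, 3)) / 9.0

# Tiny "image" containing a single bright pixel (an impulse).
image = np.zeros((5, 5))
image[2, 2] = 9.0

# Convolution spreads the impulse evenly over the 3x3 neighborhood.
smoothed = convolve(image, kernel, mode='constant')
print(smoothed[1:4, 1:4])  # approximately a 3x3 block of ones
```

Swapping the kernel swaps the effect: a Gaussian kernel blurs, while a Laplacian kernel highlights edges, all through the same convolution machinery.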
Image enhancements can include Fourier Transforms, White Balancing, Histogram Manipulation, and Contrast Stretching. I will discuss what I have learned here in this post.
Since images can be transformed into numbers, those numbers can also be thought of as a superposition of wave patterns. The resulting wave patterns, just like signals, have frequencies that can be analyzed using Fourier Transforms. As a result, Fourier Transforms can help data scientists remove artifacts from images.
Here, we use Marty from Back to the Future. On the left side is the grayscale image and to its right is its corresponding…
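A minimal sketch of the artifact-removal idea with NumPy’s FFT; the periodic stripe pattern here is synthetic, standing in for a real scanning artifact:

```python
import numpy as np

# Grayscale image corrupted by a periodic horizontal-stripe artifact.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
stripes = 0.5 * np.sin(2 * np.pi * np.arange(64) / 8)[:, None]
image = base + stripes

# Forward 2D FFT, shifted so the zero frequency sits at the center.
spectrum = np.fft.fftshift(np.fft.fft2(image))

# A stripe with period 8 shows up as two bright peaks 8 bins above and
# below the center along the vertical frequency axis; zero them out.
c = 64 // 2
cleaned_spectrum = spectrum.copy()
cleaned_spectrum[c - 8, c] = 0
cleaned_spectrum[c + 8, c] = 0

# Invert the FFT to recover the de-striped image.
cleaned = np.fft.ifft2(np.fft.ifftshift(cleaned_spectrum)).real
```

The appeal of working in the frequency domain is that a periodic artifact, which is spread across every pixel spatially, collapses into a couple of isolated peaks that can be surgically removed.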