Post on 20-Apr-2020
FENCE PROTECTION ALGORITHM USING CCTV CAMERAS
Faculty of Arts, Computing, Engineering and Sciences
FENCE PROTECTION ALGORITHM USING CCTV CAMERAS
By
RAMYA TUMMURU
(18041184)
M.Sc in COMPUTER AND NETWORK
ENGINEERING
(2009-2010)
SHEFFIELD HALLAM UNIVERSITY Page | 1
ACKNOWLEDGEMENT
My heartfelt gratitude and thanks to God, my parents, and the other family
members and friends who gave me the opportunity and support to complete
this project.
I wish to place on record my deep sense of gratitude and hearty thanks to
my project supervisor, Dr. Hussein Abdul-Rahman, for his constant
motivation, valuable help, and active interest throughout the
project work. I also extend my thanks to the other faculty members for their
cooperation during my course.
Finally, I would like to thank my friends for their cooperation in completing
this project.
ABSTRACT
The main aim of this project is to protect a fence through continuous
monitoring using CCTV cameras.
The project is developed to protect an area surrounded by a fence
from any intruders entering it. CCTV cameras are used to detect any
person or moving object entering the fenced perimeter of a designated
security area. An alarm starts immediately when the camera detects a
moving object entering the virtual zone around the fence. The system can
be used in many high-security areas to detect intruders efficiently. We
have used the MATLAB software for the development of this project.
Thresholding is done in order to distinguish the object of interest from its
background, and binary images are created from gray-scale images. A
median filter is used to remove any noise in the binary image, and then
labelling is done. Connected-components labelling scans an image and
groups its pixels into components based on pixel connectivity, i.e. all
pixels in a connected component share similar pixel intensity values and
are in some way connected with each other.
When developing this project, external disturbances such as shadows and
wind were considered and successfully handled to meet the expected
requirements.
CONTENTS
TITLE 1
ACKNOWLEDGEMENT 2
ABSTRACT 3
CHAPTER 1: INTRODUCTION 10
1.1 Aim 10
1.2 Objectives 10
1.3 Research methodology 11
1.4 Activities undertaken 11
1.5 Related work 12
1.6 Scope of work 13
1.7 Structure of dissertation 13
CHAPTER 2: LITERATURE REVIEW 14
2.1 Image processing 14
2.1.1 Applications of image processing 15
2.1.2 Benefits of image processing 16
2.2 MATLAB 16
2.2.1 MATLAB syntax 17
2.2.2 Variables 17
2.2.3 Vectors /matrices 18
2.2.4 Semicolon 20
2.2.5 Graphics 20
2.2.6 Structures 20
2.2.7 Function handles 21
2.2.8 Secondary programming 21
2.2.9 Simulink 21
2.2.10 Classes 21
2.2.11 Object- oriented programming 21
2.2.12 Interactions with other languages 22
2.3 Image processing in MATLAB 22
2.3.1 Image formats supported by MATLAB 23
2.3.2 Intensity image/gray scale image 23
2.3.3 Binary image 24
2.3.4 Indexed image 24
2.3.5 RGB image 24
2.3.6 Multiframe image 24
2.3.7 Converting between different formats 24
2.3.8 Conversion between double and uint8 25
2.3.9 How to read files 26
2.3.10 Reading and writing image files 26
2.3.11 Loading and saving variables in MATLAB 26
2.3.12 Loading and saving variables 27
2.3.13 Syntax for reading image 27
2.3.14 Syntax for displaying image 27
2.3.15 Syntax rgb to gray scale 28
2.3.16 To display image given on matrix form which does not require
any tool box 28
2.3.17 To display an image given in matrix form using image processing
tool box 29
CHAPTER 3: VIDEO CONTENT ANALYSIS 30
3.1 Video content analysis 30
3.1.1 Content based video retrieval 30
3.1.2 Applications of video content analysis 31
3.1.3 Professional and educational applications 31
3.1.4 Consumer domain applications 31
3.1.5 Feature extraction for content analysis 32
3.1.6 Structure analysis 33
3.2 People counting/footfall 34
3.3 Fence protection (Intruder detection) 35
3.3.1 Fence protection techniques 35
CHAPTER 4: ALGORITHMS 37
4.1 Thresholding 37
4.2 Adaptive Thresholding 39
4.3 Histogram 41
4.4 Connected components Labeling 47
4.5 Median filtering 51
4.6 Hybrid median filtering 53
4.7 Flowchart showing process how this project works 54
CHAPTER 5: RESULTS 56
CHAPTER 6: CONCLUSION 61
6.1 Conclusion 61
6.2 Future work 62
REFERENCES 63
LIST OF FIGURES:
Fig 2.1: Plotting of a sine wave function using MATLAB 20
Fig 3.1: Fence showing a virtual zone in red 36
Fig 4.1 Threshold, Density slicing 37
Fig 4.2 Adaptive thresholding or Dynamic thresholding 39
Fig 4.3 Histograms 42
Fig 4.4 Original image / Gray scale image 43
Fig 4.5 Binary image 43
Fig 4.6 Original image 44
Fig 4.7 Thresholded image 44
Fig 4.8 Threshold too low 45
Fig 4.9 Threshold too high 45
Fig 4.10 Effect of uneven illumination in single value thresholding 46
Fig 4.11 Adaptive thresholding 47
Fig 4.12: Original image 49
Fig 4.13: Result of applying geodesic operator 49
Fig 4.14: Original image 50
Fig 4.15: Thresholded image 50
Fig 4.16: Labels coded as gray values 50
Fig 4.17: Labels coded as colours 51
Fig 4.18: Labels coded as 8 different colours 51
Fig 4.19: Showing actual process of this project 54
Fig 5.1. (a) Green virtual zone as an intruder approaching the zone,
(b) Foreground objects in the scene. 56
Fig 5.2. (a) Green virtual zone as an intruder approaching the zone,
(b) Foreground objects in the scene. 57
Fig 5.3. (a) Green virtual zone as an intruder approaching the zone,
(b) Foreground objects in the scene. 57
Fig 5.4. (a) Green virtual zone as an intruder approaching the zone,
(b) Foreground objects in the scene. 58
Fig 5.5. (a) Red virtual zone as an intruder approaching the zone,
(b) Foreground objects in the scene. 58
Fig 5.6. (a) Red virtual zone as an intruder approaching the zone,
(b) Foreground objects in the scene. 59
Fig 5.7: The effect of shadow in the proposed algorithm. 60
CHAPTER 1
INTRODUCTION
1.1 AIM:
The main aim of this project is to protect a fence using CCTV cameras.
The project is developed to protect a designated security area from
any intruders entering it, through continuous monitoring with CCTV
cameras. The cameras are used to detect any person or moving object
entering the fenced perimeter of the designated security area. An alarm
starts immediately when a CCTV camera detects a moving object entering
the virtual zone around the fence. The system can be used in many
high-security areas to detect intruders efficiently. I have used MATLAB
for the development of this project, with thresholding, labelling and
median-filter algorithms. The project provides security by continuous
automatic monitoring. It also considers various types of fence protection
techniques and how this approach is more efficient than the other types of
fence protection.
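As a rough illustration of the processing chain just described, namely gray-scale conversion, thresholding, median filtering and connected-component labelling, a minimal MATLAB sketch might look as follows. This is not the project's actual code; it assumes the Image Processing Toolbox, and the file name and threshold value are illustrative.

```matlab
% Read one CCTV frame and convert it to a gray-scale intensity image.
frame = imread('frame1.jpg');      % illustrative file name
gray  = rgb2gray(frame);

% Threshold: separate the object of interest from the background.
bw = gray > 128;                   % illustrative threshold value

% Median filter: remove salt-and-pepper noise from the binary image.
bw = medfilt2(bw, [3 3]);

% Connected-components labelling: group foreground pixels into objects.
[labels, num] = bwlabel(bw);
if num > 0
    disp('Moving object detected - raise the alarm');
end
```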
1.2 OBJECTIVES:
• To introduce fence protection using CCTV cameras and explain how it is
implemented in the project.
• To explain image processing and how the images are monitored using
CCTV cameras.
• Research into image processing techniques and the algorithms used for
processing the images and identifying the intruder.
• Literature survey and research into similar journals, books, articles
and websites.
• Writing the MATLAB program for obtaining the result or output.
• Monitoring and preventing any intruder entering the fence using
CCTV cameras.
1.3 RESEARCH METHODOLOGY:
Research methodology is about the activity of the project. Research is
the application of systematic techniques and methods in pursuit of answers
to questions. It comprises gathering information from books, theories,
papers and individual methods. Most of the data mentioned in this project
has been gathered by survey. The literature survey and research have been
done using journals, books, articles, websites, etc. Practical research
has been done by learning about the techniques of image processing and
fence protection.
Some of the research methods followed in this project are:
• Survey
• Case study
• Practical research
• Quantitative research
1.4 ACTIVITIES UNDERTAKEN:
This project mainly focuses on how a fence can be protected from an
intruder using CCTV cameras.
• Analysing what fence protection techniques are being used in the
world at present.
• Research on image processing techniques and video content analysis,
and also on the other techniques implemented in this project.
• Wide research on the image processing techniques of thresholding and
labelling, and on using a median filter to reduce noise.
• Literature survey and research into similar technical papers, books,
articles and websites.
• Learning MATLAB functions for implementing and writing the code.
• Monitoring and controlling the fence using CCTV cameras, detecting
the intruder and ringing an alarm.
1.5 RELATED WORK:
To complete the project I have done research and a survey on the following.
I have studied the various fence protection techniques used in the world
today from:
• G. Honey (1998). Electronic Protection and Security Systems.
Chapter 3, page 111.
• John Fay, John J. Fay (1993). Encyclopedia of Security
Management: Techniques and Technology. Page 228.
• Robert L. O'Block, Joseph F. Donnermeyer (1991). Security and
Crime Prevention. Page 313.
I have studied image processing techniques from:
• Rafael C. Gonzalez, Richard Eugene Woods, Steven L. Eddins (2004).
Digital Image Processing Using MATLAB. Pearson Prentice Hall.
• Ting-Chung Poon, Partha P. Banerjee (2001). Contemporary Optical
Image Processing with MATLAB.
I have studied video content analysis from:
• Ajay Divakaran (2008). Multimedia Content Analysis: Theory and
Applications. Page 67.
• Klaus Krippendorff, Mary Angela Bock (2008). The Content Analysis
Reader.
• Sagarmay Deb (2004). Multimedia Systems and Content Based Image
Retrieval. Page 199.
I have studied thresholding and labelling from:
• Yue Hao (2005). Computational Intelligence and Security:
International Conference. Page 1055.
• Tarek Sobh, Khaled Elleithy, Asif Mahmood (2008). New Algorithms
and Techniques in Telecommunications, Automation. Page 193.
1.6 SCOPE OF WORK:
The project FENCE PROTECTION USING CCTV CAMERAS can be used in
many high-security areas, such as airports and banks, to prevent any
intruder entering the area.
1.7 STRUCTURE OF DISSERTATION:
The report is divided into several sections, and a brief overview of each
section is given here.
Chapter 2 - A brief review. This consists of an introduction to image
processing and MATLAB.
Chapter 3 - Requirements and approaches to the project: video content
analysis, fence protection techniques, and people counting (footfall) and
how it is used in security applications.
Chapter 4 - The methods implemented. This describes image processing
techniques such as thresholding, labelling and the median filter used to
remove noise, and the actual procedure of the project.
Chapter 5 - This consists of the results and conclusion obtained for the project.
Chapter 6 - This contains the bibliography and the list of websites used.
CHAPTER-2
LITERATURE REVIEW
2.1 IMAGE PROCESSING:
Manipulating and analyzing images using a computer is known as image
processing. Image processing techniques were first developed in the 1960s
through collaboration among a wide range of scientists and academics.
Image processing converts a digital or analog image signal into a
physical image; the output obtained can be an actual physical image or
something with the characteristics of an image. The most common type of
image processing is photography, but there is a wide range of image
processing operations in addition to this. The digital image processing
field has developed and created a new range of applications and tools,
such as face recognition software, medical image processing and remote
sensing. Specialized computer programs are used to enhance and correct
images. Algorithms applied to the actual data reduce signal distortion,
add light to an underexposed image and clarify fuzzy images. When image
processing techniques were first developed, equipment and processing
costs were very high, and scientists and academics focused mainly on the
development of medical imaging, character recognition and the creation of
high-quality images at the microscopic level. The cost of computing
equipment dropped by the 1970s, which made digital image processing more
practical. Film and software companies invested funds in the development
and enhancement of image processing, which created a new industry.
The image processing field continues to grow as the speed of computer
processing increases while the cost of storage memory continues to drop.
The acquisition of images, which produces the input image in the first
place, is referred to as imaging. Typical image processing operations are:
• Euclidean geometry transformations such as rotation, enlargement
and reduction.
• Color corrections such as brightness and contrast adjustments,
color mapping, color balancing, quantization, or color translation to
a different color space.
• Digital compositing or optical compositing, the combination of
two or more images, used in film-making to make a "matte".
• Interpolation, demosaicing, and recovery of a full image from a
raw image format using a Bayer filter pattern.
• Image registration, the alignment of two or more images.
• Image differencing and morphing.
• Image recognition, for example extracting text from an image
using optical character recognition, or checkbox and bubble values
using optical mark recognition.
• Image segmentation.
• High dynamic range imaging, obtained by combining multiple images.
• Geometric hashing for 2-D object recognition with affine invariance.
2.1.1 APPLICATIONS OF IMAGE PROCESSING ARE:
• Computer vision
• Optical sorting
• Augmented reality
• Face detection
• Feature detection
• Lane departure warning system
• Non-photorealistic rendering
• Medical image processing
• Microscope image processing
• Morphological image processing
• Remote sensing
2.1.2 BENEFITS OF IMAGE PROCESSING:
Three major benefits of digital image processing are:
• Consistent high image quality
• Low processing cost
• The ability to manipulate all aspects of the process
The Image Processing Toolbox provides a comprehensive set of reference-
standard algorithms and graphical tools for image processing, analysis,
visualization, and algorithm development. We can perform image
enhancement, image de-blurring, feature detection, noise reduction, image
segmentation, spatial transformations, and image registration. Many
functions in the toolbox are multi-threaded in order to take advantage of
multicore and multiprocessor computers.
The Image Processing Toolbox supports a diverse set of image types,
including high dynamic range, gigapixel resolution, ICC-compliant color,
and tomographic images. Graphical tools help us to explore an image,
examine a region of pixels, adjust the contrast, create contours or
histograms, and manipulate regions of interest (ROIs).
With the toolbox algorithms we can restore degraded images, detect and
measure features, analyze shapes and textures, and adjust the color
balance of images.
2.2 MATLAB:
MATLAB stands for Matrix Laboratory; it is a numerical computing
environment developed by MathWorks. MATLAB is a high-level language and
interactive software package for engineering and scientific computation.
It offers high performance and enables us to perform computationally
intensive tasks faster than traditional programming languages like C, C++
and Fortran.
MATLAB integrates matrix computation, numerical analysis, signal
processing, and graphics in an easy-to-use environment where problems and
solutions are expressed just as they are written mathematically. It
allows matrix manipulations, plotting of functions and data,
implementation of algorithms, creation of user interfaces, and
interfacing with programs written in other languages such as C, C++ and
Fortran. MATLAB is used by more than one million people across industry
and the academic world, with users from various backgrounds in
engineering, science, and economics.
MATLAB is intended primarily for numerical computing, but an optional
toolbox uses the MuPAD symbolic engine, which allows access to symbolic
computing capabilities. Simulink is an additional package which adds
graphical multi-domain simulation and model-based design for embedded and
dynamic systems.
MATLAB was first adopted by control design engineers, but it quickly
spread to many other domains. It is now also used in education, in
particular for the teaching of linear algebra and numerical analysis, and
is popular among scientists involved with image processing.
2.2.1 MATLAB SYNTAX :
The simplest way to execute MATLAB code is to type it at the prompt, >>,
in the Command Window, one of the elements of the MATLAB desktop, which
can also be used as an interactive mathematical shell. Sequences of
commands can be saved in a text file using the MATLAB editor, either as a
script or encapsulated into a function, extending the available commands.
2.2.2 VARIABLES:
MATLAB is a weakly, dynamically typed programming language in which
variables are defined using the assignment operator, =. It can be called
a weakly typed language because types are implicitly converted, and a
dynamically typed language because variables can be assigned without
declaring their type (except if they are to be treated as symbolic
objects) and their type can change. Values can come from constants, from
computation involving the values of other variables, or from the output
of a function.
MATLAB has many functions for rounding fractional values to integers:
• round(X): rounds to the nearest integer; values ending in .5 round to
the nearest integer away from zero
• fix(X): rounds to the nearest integer towards zero
• floor(X): rounds towards minus infinity, i.e. to the nearest integer
less than or equal to X
• ceil(X): rounds towards plus infinity, i.e. to the nearest integer
greater than or equal to X
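A quick comparison of the four functions on the same fractional values (plain MATLAB, no toolbox needed):

```matlab
x = [-2.7 -0.5 0.5 2.7];

round(x)   % nearest integer, halves away from zero: -3 -1  1  3
fix(x)     % towards zero:                           -2  0  0  2
floor(x)   % towards minus infinity:                 -3 -1  0  2
ceil(x)    % towards plus infinity:                  -2  0  1  3
```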
2.2.3 VECTORS /MATRICES:
In MATLAB, the "Matrix Laboratory", a matrix generally refers to a
2-dimensional array, and the language provides many convenient ways of
creating vectors, matrices, and multi-dimensional arrays. In the MATLAB
vernacular, a vector refers to a one-dimensional matrix, commonly
referred to as an array in other programming languages. In an m×n array,
m and n are greater than or equal to 1. Arrays with more than two
dimensions are referred to as multidimensional arrays.
MATLAB provides a simple way to define simple arrays using the syntax:
init:increment:terminator.
For instance:
>> array = 1:2:9
array =
1 3 5 7 9
This defines a variable named array (or assigns a new value to an
existing variable with the name array) which is an array consisting of
the values 1, 3, 5, 7, and 9. That is, the array starts at 1 (the init
value), increments from the previous value by 2 (the increment value),
and stops once it reaches 9 (the terminator value).
>> array = 1:3:9
array =
1 4 7
The increment value can be left out of this syntax, along with one of the
colons, to use the default value of 1.
>> ari = 1:5
ari =
1 2 3 4 5
This assigns to the variable named ari an array with the values 1, 2, 3,
4, and 5, since the default value of 1 is used as the increment.
Indexing is one-based, which is the usual convention for matrices in
mathematics, although it is not for some programming languages.
Matrices can be defined by separating the elements of a row with a blank
space or comma and using a semicolon to terminate each row. The list of
elements must be surrounded by square brackets, [ ]. Parentheses, ( ),
are used to access elements and subarrays; they are also used to denote a
function argument list.
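For example, a 3-by-3 matrix and a few ways of accessing its elements and sub-arrays:

```matlab
A = [1 2 3; 4 5 6; 7 8 9];   % rows end with ;, elements split by spaces

A(2,3)       % a single element: 6
A(2,:)       % the whole second row: 4 5 6
A(:,1)       % the whole first column
A(1:2,2:3)   % a 2-by-2 sub-array: [2 3; 5 6]
```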
Most MATLAB functions can accept matrices and will apply themselves to
each element. MATLAB includes standard "for" and "while" loops, but using
MATLAB's vectorized notation often produces code that is easier to read
and faster to execute. The code excerpted from the function magic.m
creates a magic square M for odd values of n.
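The excerpt itself does not appear in this copy of the text; the vectorized construction that magic.m uses for odd n is along the following lines (a reconstruction, so treat it as a sketch rather than the exact source):

```matlab
n = 5;                            % any odd order
[J, I] = meshgrid(1:n);           % index grids, no explicit loops
A = mod(I + J - (n + 3) / 2, n);
B = mod(I + 2 * J - 2, n);
M = n * A + B + 1;                % M is an n-by-n magic square
```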
2.2.4 SEMICOLON:
In many other languages the semicolon is used to terminate commands, but
in MATLAB the semicolon serves to suppress the output of the line that it
concludes.
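For instance:

```matlab
a = 3 + 4      % no semicolon: MATLAB echoes a = 7 to the screen
b = 3 + 4;     % semicolon: the assignment happens silently
disp(b)        % prints 7
```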
2.2.5 GRAPHICS:
The function plot can be used to produce a graph from two vectors x and y.
The code:
x = 0:pi/100:2*pi;
y = sin(x);
plot(y);
produces the following figure of the sine function.
Fig 2.1: Plotting of a sine wave function using MATLAB
2.2.6 STRUCTURES:
MATLAB also supports structure data types. Because all variables in
MATLAB are arrays, the name structure array is more adequate: each
element of the array has the same field names. MATLAB also supports
dynamic field names. Unfortunately, the MATLAB JIT does not support
MATLAB structures, so even the simple bundling of various variables into
a structure comes at a cost.
2.2.7 FUNCTION HANDLES:
MATLAB supports elements of the lambda calculus by introducing function
handles, i.e. references to functions, which are implemented either in .m
files or as nested and anonymous functions.
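A brief illustration of both kinds of handle (the names are illustrative):

```matlab
f = @sin;               % handle to an existing named function
g = @(x) x.^2 + 1;      % anonymous function defined inline

f(pi/2)                 % returns 1
g(3)                    % returns 10
cellfun(g, {1, 2})      % handles can be passed to other functions
```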
2.2.8 SECONDARY PROGRAMMING
Secondary programming carried out in MATLAB incorporates MATLAB's
standard code in a more user-friendly way, in order to represent a
function or system.
2.2.9 SIMULINK:
A secondary program which can be incorporated using MATLAB is Simulink.
It provides a way of creating equations of motion using an infrastructure
of clicking, dragging, and connecting blocks. These blocks can in turn be
used for many things, such as defining inputs, variables, EOMs and scopes.
2.2.10 CLASSES:
MATLAB also supports classes; however, the syntax and calling conventions
differ significantly from other languages because MATLAB does not have
reference data types. A method is called as
object.method();
A method normally cannot alter the variables of the object. In order to
create the impression that a method alters the state of a variable,
MATLAB toolboxes use the evalin() command, which has its own restrictions.
2.2.11 OBJECT-ORIENTED PROGRAMMING:
MATLAB supports object-oriented programming, including classes,
inheritance, virtual dispatch, packages, pass-by-value semantics, and
pass-by-reference semantics.
2.2.12 INTERACTIONS WITH OTHER LANGUAGES:
MATLAB is able to call functions and subroutines written in the C
programming language and Fortran. A wrapper function is created allowing
MATLAB data types to be passed and returned. Dynamically loadable object
files, termed "MEX-files" (MATLAB executables), can be created by
compiling such functions.
Libraries written in Java, ActiveX or .NET can also be called directly
from MATLAB, and many MATLAB libraries, such as the XML or SQL support,
are implemented as wrappers around Java or ActiveX libraries. Calling
MATLAB from Java is more complicated, but it can be done using a MATLAB
extension or an undocumented mechanism known as JMI (the Java-to-MATLAB
Interface), which should not be confused with the unrelated Java
interface of the same name, the Java Metadata Interface.
2.3 IMAGE PROCESSING IN MATLAB
A digital image can be thought of as instructions for how to color each
pixel. A digital image is composed of pixels, which can be thought of as
small dots on the screen. A typical image size is 512-by-512 pixels, and
it is convenient if the dimensions of the image are a power of 2; for
example, 2^9 = 512. Generally we say that an image composed of m pixels
in the vertical direction and n pixels in the horizontal direction is of
size m-by-n.
For example, an image of the format 512-by-1024 pixels contains
information about 524,288 pixels, which requires a lot of memory. Hence,
compressing images is essential for efficient image processing. Wavelet
analysis and Fourier analysis help significantly in compressing an image.
There are also computer-science tricks, such as entropy coding, for
reducing the amount of data required to store an image.
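The memory figure quoted above follows directly from the pixel count:

```matlab
npixels = 512 * 1024            % 524288 pixels in a 512-by-1024 image
bytes_as_uint8  = npixels * 1   % 1 byte per pixel stored as uint8
bytes_as_double = npixels * 8   % 8 bytes per pixel stored as double
```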
2.3.1 IMAGE FORMATS SUPPORTED BY MATLAB ARE:
The image formats below are supported by MATLAB:
• BMP
• JPEG
• HDF
• XWD
• TIFF
• PCX
The most widely used compression standard for images is JPEG, and images
on the Internet are mostly JPEG images. By looking at the suffix of a
stored image we can generally see what format it is stored in; an image
named myimage.jpg is stored in the JPEG format. To start working with an
image, or to perform an operation such as a wavelet transform on it, we
must convert it into a different format, so we must first read the image
stored as JPEG into MATLAB. The common formats to which images are
converted are:
• Intensity image or gray-scale image
• Binary image
• Indexed image
• RGB image
• Multiframe image
2.3.2 INTENSITY IMAGE/ GRAYSCALE IMAGE:
The gray-scale image is the type of image we mostly work with. An
intensity image represents the image as a matrix in which every element
has a value according to how bright or dark the pixel at the
corresponding position should be colored. There are two ways to represent
the number that gives the brightness of a pixel. The double class (data
type) assigns a floating-point number ("a number with decimals") between
0 and 1 to each pixel; the value 0 corresponds to black and the value 1
corresponds to white. The other class, uint8, assigns an integer between
0 and 255 to represent the brightness of a pixel; the value 0 corresponds
to black and the value 255 corresponds to white. Compared to the double
class, the uint8 class requires roughly only 1/8 of the storage. On the
other hand, many mathematical functions can only be applied to the double
class.
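The storage difference can be seen directly with whos (this assumes the Image Processing Toolbox for im2uint8; the variable names are illustrative):

```matlab
Id = rand(256);        % a 256-by-256 double image, values in [0,1]
Iu = im2uint8(Id);     % the same image as uint8, values in 0..255

whos Id Iu             % Id needs 8 bytes per pixel, Iu only 1
```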
2.3.3 BINARY IMAGE:
This image format also stores the image as a matrix, but it can only
color a pixel black or white, and nothing in between. It assigns a 0 for
black and a 1 for white.
2.3.4 INDEXED IMAGE:
A practical way of representing color images is the indexed image. An
indexed image stores the image as two matrices. The first matrix has the
same size as the image, with one number for each pixel. The second
matrix, called the color map, may have a size different from the image.
The numbers in the first matrix are instructions for which entries of the
color map must be used.
2.3.5 RGB IMAGE:
Another format for color images is the RGB image. It represents an image
with three matrices of sizes matching the image format, where each matrix
corresponds to one of the colors red, green or blue and gives
instructions for how much of that color a certain pixel should use.
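For example, the three matrices can be pulled out of an RGB image as the three planes of one array (the file name is illustrative):

```matlab
RGB = imread('myimage.jpg');   % an m-by-n-by-3 array
R = RGB(:,:,1);                % how much red each pixel uses
G = RGB(:,:,2);                % how much green
B = RGB(:,:,3);                % how much blue
size(RGB)                      % m  n  3
```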
2.3.6 MULTIFRAME IMAGE:
This format is very common in medical and biological imaging, where a
sequence of slices of a cell is studied. In such applications, where we
work with a sequence of images, the multiframe format is a convenient way
of handling them.
2.3.7 CONVERTING BETWEEN DIFFERENT FORMATS:
The following table shows how to convert between the different formats.
All the commands below require the Image Processing Toolbox. Type the
name of the image you wish to convert within the parentheses.
Operation: MATLAB command:
Convert intensity/indexed/RGB format into binary format: dither()
Convert intensity format into indexed format: gray2ind()
Convert indexed format into intensity format: ind2gray()
Convert indexed format into RGB format: ind2rgb()
Convert a regular matrix into intensity format by scaling: mat2gray()
Convert RGB format into intensity format: rgb2gray()
Convert RGB format into indexed format: rgb2ind()
The mat2gray command is useful if we have a matrix representing an image
but the values range over, say, 0 to 1000. The command mat2gray
automatically rescales all entries so that they fall between 0 and 255 if
we use the uint8 class, or between 0 and 1 if we use the double class.
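For instance, with values between 0 and 1000 as described:

```matlab
X = [0 250; 500 1000];   % gray values outside the usual range
I = mat2gray(X);         % rescaled: [0 0.25; 0.5 1] in the double class
```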
2.3.8 CONVERSION BETWEEN DOUBLE AND UINT8:
We store an image as a uint8 image because it requires far less memory
than double. When performing mathematical operations on an image, or
processing it, we should convert it into a double. Converting back and
forth between these two classes is very easy:
I=im2double(I);
converts an image named I from uint8 into double, and
I=im2uint8(I);
converts an image named I from double into uint8.
2.3.9 HOW TO READ FILES:
When we want to work with an image, it will generally be in the form of a
file. If we download an image from the web, it will usually be stored as
a JPEG file. Once we have processed an image, we may want to write it
back to a JPEG file so that we can post the processed image on the web.
This can be done using the commands imread and imwrite. These commands
also require the Image Processing Toolbox.
2.3.10 HOW TO READ AND WRITE THE IMAGE FILES:
For imread, within the parentheses we must type the name of the image
file we wish to read, within single quotes ' '. For imwrite, the first
argument within the parentheses is the name of the image variable we
have worked with; the second argument is the name of the file, and
hence the format, that we want to write the image to. The file name
must be put within single quotes ' '.
Operation                        Matlab command
Read an image                    imread()
Write an image to a file         imwrite( , )
After these commands, be sure to use a semicolon ; otherwise a lot of
numbers will scroll across the screen. imread and imwrite support all
the image formats supported by Matlab.
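A short sketch of reading and writing; 'img1.jpg' and 'img1_out.png'
are placeholder names, and any image file on the Matlab path will do:

```matlab
% 'img1.jpg' is a placeholder name; any image file on the path will do
I = imread('img1.jpg');       % read the image file into a matrix
imwrite(I, 'img1_out.png');   % write the matrix back out; the file
                              % extension selects the output format
```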
2.3.11 LOADING AND SAVING THE VARIABLES IN MATLAB:
Once we have read a file, we usually convert it into an intensity
image, i.e. a matrix, and work with this matrix. In order to continue
working with the matrix later, we need to save the matrix representing
the image. This can be done easily using the commands save and load,
which are very commonly
used Matlab commands and work independently of which toolboxes are
installed.
2.3.12 LOADING AND SAVING THE VARIABLES:
Operation                Matlab command
Save variable X          save X
Load variable X          load X
First download an image from the web and read it into Matlab, then
investigate its format and save the matrix representing the image.
Open Matlab and make sure that we are in the same directory as the
stored file.
2.3.13 SYNTAX FOR READING AN IMAGE:
I = imread('img1.jpg');   % read the image into the matrix I
whos                      % list the variables in the workspace
save I                    % save I to the file I.mat
ls                        % list the files in the current directory
There should now be a file named "I.mat" in our directory containing
the variable I.
2.3.14 SYNTAX FOR DISPLAYING AN IMAGE:
clear                      % clear the workspace
load I                     % load the I.mat file saved earlier
whos
imshow(I)
I = im2double(I);          % convert to double for processing
whos
% extract the red channel of the first 256-by-256 pixels
for i = 1:256
    for j = 1:256
        Ired(i,j) = I(i,j,1);
end
end
whos
imshow(Ired)
2.3.15 SYNTAX FOR CONVERTING RGB TO GRAY SCALE:
A = imread('img1.jpg');    % assumes an RGB image; file name is an example
A = rgb2gray(A);           % convert RGB to gray scale
whos
imshow(A)
Displaying an image in Matlab
Below are basic Matlab commands which do not require any toolbox for
displaying an image.
2.3.16 TO DISPLAY IMAGE GIVEN ON MATRIX FORM WHICH DOES
NOT REQUIRE ANY TOOLBOX:
Operation                                            Matlab command
Display an image represented as the matrix X         imagesc(X)
Adjust the brightness; s is a parameter with
-1 < s < 0 giving a darker image and
0 < s < 1 giving a brighter image                    brighten(s)
Change the colours to gray                           colormap(gray)
If our image is not displayed in gray scale even after converting it
into a gray-scale image, we use the command colormap(gray) in order
to "force" Matlab to use a gray scale when displaying it. The command
imshow can be used if Matlab is installed with the Image processing
toolbox.
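These commands can be tried together; magic(8) is just an invented
matrix standing in for image data:

```matlab
% Any matrix can stand in for image data here
X = magic(8);
imagesc(X)          % display the matrix as a scaled image
colormap(gray)      % "force" a gray-scale colour map
brighten(0.5)       % 0 < s < 1 gives a brighter image
```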
2.3.17 TO DISPLAY AN IMAGE GIVEN IN MATRIX FORM USING THE
IMAGE PROCESSING TOOLBOX:
Operation                                          Matlab command
Display an image represented as matrix X           imshow(X)
Zoom in using the left and right mouse buttons     zoom on
Turn off the zoom function                         zoom off
CHAPTER-3
VIDEO CONTENT ANALYSIS
3.1 VIDEO CONTENT ANALYSIS:
A number of terms are used in different industries and markets to
describe Video Content Analysis:
Analytics
Behaviour Recognition
Content Analysis
Concept Coding
Intelligent Video
Object Tracking
Smart CCTV
However, they all describe the real-time use of computer vision in a
security environment to monitor CCTV camera feeds and assist the guard
in his or her decision-making process.
Real-time content-based access to live video data requires
content-analysis applications that can process video streams in real
time with an acceptable error rate.
3.1.1. CONTENT-BASED VIDEO RETRIEVAL:
Video indexing should be analogous to text-document indexing
To facilitate fast and accurate content access to video data, we
should segment a video document into shots and scenes
We should extract keyframes or key sequences as index entries for
scenes or stories.
Four processes are involved in video-content analysis and indexing:
Feature extraction
Structure analysis
Abstraction
Indexing
3.1.2 APPLICATIONS OF VIDEO-CONTENT ANALYSIS:
We can broadly classify users into two extremes:
Nontechnical consumer
Trained, technical, professional corporate users who regularly
use the products
Professional and educational applications
Consumer domain application
3.1.3 PROFESSIONAL AND EDUCATIONAL APPLICATIONS:
Automated authoring of Web content
Searching and browsing large video archives
Easy access to educational material
Indexing and archiving multimedia presentation
3.1.4 CONSUMER DOMAIN APPLICATIONS:
The widest audience for video-content analysis is consumers
Differences between large archives and consumer domain
Video overview and access
Video content filtering
Video Content Analysis (VCA) is the name commonly given to the
automatic analysis of CCTV images to create meaningful information
about their content. The scope of VCA is certainly impressive, and
expanding all the time: it can now be applied to external and internal
intruder detection; the monitoring of plant or buildings for health
and safety; people counting; automatic traffic event and incident
detection; safety enhancements for public areas; smoke and fire
detection; and camera failure or sabotage detection. In theory, any
'behaviour' that can be seen and accurately defined on a video image
can be automatically identified and an alert raised.
There is little doubt that, when specified and installed correctly,
VCA can have a positive impact on the effectiveness and return on
investment of CCTV systems by adding enhanced or increased
capabilities to detect and analyse post-event video.
According to many researchers, video-content analysis and indexing
primarily involves four processes: feature extraction, structure
analysis, abstraction and indexing.
3.1.5 FEATURE EXTRACTION FOR CONTENT ANALYSIS:
Feature extraction is a critical process in content based video indexing. The
indexing schemes effectiveness depends upon the effectiveness of attributes
in the content representation. And easily extractable video features such as
color, shape, structure, layout, texture and the motion cannot be mapped
easily as semantic concepts like indoor and outdoor, the people, or scenes of
car-racing. Whereas In audio domain, features like the pitch, energy, and
bandwidth enables the audio segmentation and also classification. In a video
program visual content is a major source of information in the video-content
analysis the effective strategy is to use attributes which are extractable from
multimedia sources. Much valuable information is carried in other media
components like the text which is superimposed on the images, or it is
components, such as the text superimposed on the images or included as
closed captions, the audio, and the speech that accompanies the
pictorial component. For both consumer and professional applications,
combined and cooperative analysis of the components will be far more
effective in characterising the video programme. Examples of this type
of approach are the Informedia system [1], AT&T's Pictorial
Transcripts system [2-5], and Video Scout [6].
3.1.6 STRUCTURE ANALYSIS:
Structure analysis allows us to organise video data according to its
temporal structures and relations, and thus to build a table of
contents. Video structure parsing, the next step in overall
video-content analysis, is the process of extracting the temporal
structural information of video sequences or programmes.
It involves detecting temporal boundaries and identifying meaningful
segments of video. Many effective and robust algorithms have been
developed for video parsing [7-11], segmenting the video programme
into its temporal composition bricks. Ideally, these composition
bricks are organised in a hierarchy similar to film storyboards. The
top level consists of sequences or stories, composed of sets of
scenes; scenes are further partitioned into shots. Each shot contains
a sequence of continuously recorded frames representing a continuous
action in time or space. Using such structural information we can
automatically build a table of contents for a video programme.
The most important step in video structure parsing is segmenting the
video into individual scenes. A scene generally consists of a series
of consecutive shots grouped together from a narrative point of view,
because they are shot in the same location or share some thematic
content. Detecting video scenes is analogous to paragraphing in
text-document parsing, but it requires a higher level of content
analysis. Two approaches for automatically recognising programme
sequences are: one based on film production rules, and the other based
on a priori models of the
programme. Both, however, have had limited success, because the
logical layer of representation is based only on the subjective
semantics of the scenes or stories in the video, and no universal
definition or rigid structure exists for scenes and stories. Shots, in
contrast, are the actual physical basic layer, whose boundaries can be
determined by the editing points or by where the camera switches on
and off. Shots are therefore a very good choice as the basic unit for
video-content indexing, and fortunately they provide the basis for
constructing a video table of contents, analogous to sentences or
words in a text document. Shot-boundary detection algorithms that rely
only on the visual information contained in the video frames can
likewise segment a video into shots with similar visual content.
It is not possible to group shots into semantically meaningful
segments such as stories without incorporating information from the
other components of the video programme. Multimodal processing
algorithms, which process not only the video frames but also the
accompanying text, audio, and speech components, have proven effective
in achieving this goal.
3.2 PEOPLE COUNTING/ FOOTFALL:
Accurate people counting gives us powerful intelligence for strategic
planning. People-counting technology has transformed the way business
decisions are made. It is a practical way to get a clear picture of
occupancy, pedestrian flow and retail traffic. It also makes customer
behaviour transparent, revealing how visitors move around a site.
With the help of people-counting systems we can:
Immediately evaluate and adapt marketing activities according to
retail traffic.
Improve customer service and reduce staff costs by matching staff
levels to varying occupancy.
Make best use of low occupancy periods, for example with
maintenance activities.
Assess sales conversion rates and see how new product lines or
services affect footfall.
Identify high performing stores and pinpoint the reasons for their
success
A footfall dashboard puts you in control of the data…
• See total and average footfall to date and for any given period.
• Compare different periods over a number of years with graphs and
statistics.
• Automatically highlight key dates such as bank holidays.
• Create a journal to evaluate and compare your own special events
and marketing activities.
• Upload sales data to see sales conversion rates.
• Analyse data against number of staff working.
• View and compare parameters such as occupancy, footfall or dwell
time.
3.3 FENCE PROTECTION (INTRUDER DETECTION)
The installation of security devices for the purpose of detecting
entry into a designated security area is called fence protection.
Intruders can be detected and located anywhere along the virtual
fencing zone by CCTV cameras.
3.3.1 FENCE PROTECTION TECHNIQUES:
Deterring trespassing (9,000-volt safe-shock wire)
Detecting such an attempt and warning security personnel
Scaring away trespassers (very loud siren and floodlight)
Fig 3.1: Fence showing a virtual zone in red
CHAPTER-4
ALGORITHMS
4.1 THRESHOLDING:
In many vision applications, it is useful to be able to separate out the regions
of the image corresponding to objects in which we are interested, from the
regions of the image that correspond to background. Thresholding often
provides an easy and convenient way to perform this segmentation on the
basis of the different intensities or colors in the foreground and background
regions of an image.
In addition, it is often useful to be able to see what areas of an image consist
of pixels whose values lie within a specified range, or band of intensities or
colors. Thresholding can be used for this as well.
Fig 4.1 Threshold, Density slicing
Thresholding is a method used in image processing to convert a
gray-scale image to a binary image. In this process individual pixels
are marked as object pixels if their value is greater than some
threshold value; otherwise they are marked as background pixels. The
segmentation is determined by a parameter known as the intensity
threshold. Each pixel in the image is compared with this threshold:
if the pixel's intensity is higher than the threshold, the pixel is
set to white in the output; if it is less than the threshold, it is
set to
black. Multiple thresholds can be set in more sophisticated
implementations, so that a band of intensity values is set to white
while everything else is set to black. The object pixel value is 1
while the background pixel value is 0, and as a result a binary image
is obtained. The object must be brighter than the background.
Thresholding is thus a non-linear operation which converts a
gray-scale image into a binary image: two levels are assigned to the
pixels that are above or below the specified threshold value.
A threshold can be applied directly at the Matlab command line, since
a comparison yields a logical (binary) image:
Syntax:
myBinaryImage = myGrayImage > thresholdValue;
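A minimal sketch of this operation; the file name and the threshold
value 0.5 are placeholders to be chosen for the image at hand:

```matlab
% A minimal thresholding sketch; file name and threshold are placeholders
I = im2double(rgb2gray(imread('img1.jpg')));
thresholdValue = 0.5;
myBinaryImage = I > thresholdValue;   % logical 1 = object, 0 = background
imshow(myBinaryImage)
```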
The key parameter in this process is the selection of the threshold
value. There are many methods for choosing it, for example the mean or
the median value. The main consideration is that object pixels must be
brighter than background pixels.
It is more efficient to use an image-threshold operation that provides
various methods for finding the optimal threshold value for an image.
The following methods are used to determine the threshold value:
Automatically calculating a threshold value using an iterative
method.
Approximating the histogram of the image as a bimodal
distribution and choosing the midpoint value as the threshold
level.
Adaptive thresholding, which evaluates the threshold based on the
last 8 pixels in each row, using alternating rows. This method is
not supported when used as part of an image edge-detection
operation.
Fuzzy thresholding using entropy as a measure of fuzziness.
Fuzzy thresholding using a method that minimises a fuzziness
measure involving the mean gray levels of the object and
background.
The default method, where the /T flag must be used to specify a
threshold value.
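The iterative method in the first two bullets can be sketched as
follows; the file name is a placeholder, and the sketch assumes both
classes (object and background) are non-empty at each step:

```matlab
% Sketch of iterative threshold selection (midpoint of class means)
I = im2double(imread('img1.jpg'));    % placeholder file name
if size(I,3) == 3, I = rgb2gray(I); end
T = mean(I(:));                       % initial guess
for k = 1:100
    mu1 = mean(I(I > T));             % mean of (brighter) object pixels
    mu2 = mean(I(I <= T));            % mean of background pixels
    Tnew = (mu1 + mu2) / 2;           % midpoint of the two class means
    if abs(Tnew - T) < 1e-4, break, end
    T = Tnew;
end
BW = I > T;                           % final binary image
```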
4.2 ADAPTIVE THRESHOLDING
Fig 4.2 Adaptive thresholding or Dynamic thresholding
Thresholding can be used to segment an image by setting all pixels
whose intensity value is above a threshold to a foreground value and
all the remaining pixels to a background value.
A conventional thresholding operator generally uses a global threshold
for all pixels, whereas adaptive thresholding changes the threshold
value dynamically over the image. This more sophisticated version of
thresholding can accommodate changing lighting conditions in the
image, such as those that occur as a result of a shadow or a strong
illumination gradient.
In its simplest implementation, adaptive thresholding takes a
gray-scale or colour image as input and outputs a binary image
representing the segmentation. A threshold is calculated for each
pixel in the image: if the pixel value is below the threshold it is
set to the background value, otherwise it assumes the foreground
value.
Two main approaches for finding the threshold are: (i) the Chow and
Kaneko approach and (ii) local thresholding. Both methods assume that
smaller image regions are more likely to have
approximately uniform illumination, and hence are more suitable for
thresholding. The Chow and Kaneko approach divides an image into an
array of overlapping subimages and then finds the optimum threshold
for each subimage by investigating its histogram. The threshold for
each single pixel is found by interpolating the results of the
subimages. The main drawback of this method is that it is
computationally expensive, and as a result it is not appropriate for
real-time applications.
Another approach to finding a local threshold is to examine
statistically the intensity values of the local neighbourhood of each
pixel. Which statistic is most appropriate depends largely on the
input image. Simple and fast functions include:
The mean of the local intensity distribution,
The median value,
The mean of the minimum and maximum values.
The neighbourhood size has to be large enough to cover sufficient
foreground and background pixels, otherwise a poor threshold value is
chosen. On the other hand, choosing regions that are too large can
violate the assumption of approximately uniform illumination. This
method is computationally less intensive than the Chow and Kaneko
approach and produces good results for many applications.
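The local-mean variant can be sketched as follows; the window size w,
the offset C, and the file name are illustrative choices, not values
taken from the thesis:

```matlab
% Sketch of local-mean adaptive thresholding (parameters are illustrative)
I  = im2double(imread('img1.jpg'));      % placeholder file name
if size(I,3) == 3, I = rgb2gray(I); end
w  = 15;                                 % w-by-w neighbourhood window
mu = conv2(I, ones(w)/w^2, 'same');      % local mean around each pixel
C  = 0.02;                               % small offset below the local mean
BW = I > (mu - C);                       % per-pixel threshold
```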
Like global thresholding, adaptive thresholding can be used to
separate desirable foreground objects from the background based on the
difference in pixel intensities of each region. Global thresholding
uses a fixed threshold for all pixels in an image, and therefore works
only when the intensity histogram of the input image contains neatly
separated peaks corresponding to the desired subject and background.
As a result, it cannot deal with images containing a strong
illumination gradient.
Local adaptive thresholding, in contrast, selects an individual
threshold for each pixel depending on the range of intensity values in
its local neighbourhood, allowing the thresholding of an image whose
global intensity histogram does not contain distinctive peaks.
4.3 HISTOGRAM:
The histogram of an image generally refers to the histogram of its
pixel intensity values. It is a graph showing the number of pixels in
the image at each intensity value found in that image. For an 8-bit
gray-scale image there are 256 possible intensities, so the histogram
graphically displays 256 numbers showing the distribution of pixels
among those gray-scale values.
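Such a histogram can be computed directly; the file name is a
placeholder, and with the Image Processing Toolbox imhist(I) performs
the same computation:

```matlab
% Histogram of an 8-bit gray-scale image (file name is a placeholder)
I = imread('img1.jpg');
if size(I,3) == 3, I = rgb2gray(I); end
counts = zeros(1, 256);
for v = 0:255
    counts(v+1) = sum(I(:) == v);    % number of pixels at intensity v
end
bar(0:255, counts)                   % imhist(I) does this with the toolbox
```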
Not all images can be segmented neatly into foreground and background
using simple thresholding; the intensity histogram of an image
determines whether it can be segmented correctly.
If the foreground of an image can be separated on the basis of pixel
intensity, then the intensity of pixels within the foreground objects
must be distinctly different from the intensity of pixels within the
background. In such a case, we expect to see a distinct peak in the
histogram corresponding to the foreground objects, and threshold
values can be chosen to isolate this peak. If such a peak does not
exist, it is unlikely that simple thresholding will produce a good
segmentation; adaptive thresholding will be better in such a case.
Below are some typical histograms along with suitable choices of
threshold values.
Figure 4.3 (a) shows a classic bi-modal intensity distribution; this
image can be segmented successfully using a single threshold T1.
(b) is slightly more complicated: here we suppose that the central
peak represents the objects we are interested in, so threshold
segmentation requires two thresholds, T1 and T2.
In (c), the two peaks of the bi-modal distribution run together, so it
is almost impossible to segment this image successfully using a single
global threshold.
Thresholding is often useful in applications such as remote sensing,
where it is desirable to select out those regions whose pixels lie
within a specified range of values.
Fig 4.4 Original image / Gray scale image
Fig 4.5 Binary image
Imagine a poker-playing robot that needs to visually interpret the
cards in its hand.
Fig 4.6 Original image
Fig 4.7 Thresholded image
If the threshold is chosen wrongly, the results can be disastrous.
Fig 4.8 Threshold too low
Fig 4.9 Threshold too high
Fig 4.10 Effect of uneven illumination in single value thresholding
Uneven illumination can really upset a single-valued thresholding
scheme.
These images show the troublesome parts of the previous problem
further subdivided.
An approach to handling situations in which single-valued thresholding
will not work is to divide the image into subimages and threshold
these individually. Since the threshold for each pixel depends on its
location within the image, this technique is said to be adaptive.
The image below shows an example of adaptive thresholding applied to
the image shown previously.
Fig 4.11 Adaptive thresholding
As can be seen, success is mixed, but we can further subdivide the
troublesome subimages for more success.
4.4 CONNECTED COMPONENTS LABELING:
Connected-components labelling scans an image and groups its pixels
into components based on pixel connectivity. All pixels in a connected
component share similar intensity values and are in some way connected
to each other. Once all groups have been determined, each pixel is
labelled with a gray level or a colour according to the component it
was assigned to. This process of extracting and labelling the various
disjoint connected components of an image is central to many
image-analysis applications.
Connected-components labelling works by scanning the image pixel by
pixel, from left to right and top to bottom, to identify connected
pixel regions, i.e. regions of adjacent pixels that share the same set
of intensity values V. For a binary image V = {1}; for a gray-level
image V can take a range of values, for example
V = {51, 52, 53, ..., 77, 78, 79, 80}.
Connected-components labelling works on both binary and gray-level
images, and different measures of connectivity are possible. Assume a
binary input image and 8-connectivity. The labelling operator scans
the image by moving along a row until it comes to a point p (where p
denotes the pixel to be labelled at any stage of the scanning process)
for which V = {1}. When this is true, it examines the four neighbours
of p that have already been encountered in the scan: (i) the neighbour
to the left of p, (ii) the one above it, and (iii, iv) the two
upper-diagonal neighbours. Based on this information, p is labelled as
follows:
If all four neighbours are 0, assign a new label to p; else
if only one neighbour has V = {1}, assign its label to p; else
if more than one of the neighbours has V = {1}, assign one of their
labels to p and make a note of the equivalences.
After the scanning process is complete, the equivalent label pairs are
sorted into equivalence classes and a unique label is assigned to each
class. Finally, a second scan is made through the image, during which
each label is replaced by the label assigned to its equivalence class.
For display, the labels might be shown as different gray levels or
colours.
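The two scans above can be sketched as follows. This is a hedged
sketch, not the thesis's implementation: it uses 4-connectivity for
brevity (the text describes 8-connectivity), an invented binary image
BW, and an array-based union-find for the label equivalences.

```matlab
% Two-pass connected-components labelling sketch (4-connectivity)
BW = [1 1 0 0; 0 1 0 1; 0 0 0 1; 1 0 0 1];   % invented binary image
[rows, cols] = size(BW);
L = zeros(rows, cols);
parent = zeros(1, 0);                 % implicit union-find array
next = 1;
for i = 1:rows                        % first pass: provisional labels
  for j = 1:cols
    if BW(i,j)
      up = 0; if i > 1, up = L(i-1,j); end
      lf = 0; if j > 1, lf = L(i,j-1); end
      if up == 0 && lf == 0           % no labelled neighbour: new label
        L(i,j) = next; parent(next) = next; next = next + 1;
      elseif up == 0 || lf == 0       % exactly one labelled neighbour
        L(i,j) = max(up, lf);
      else                            % two labels: record the equivalence
        ru = up; while parent(ru) ~= ru, ru = parent(ru); end
        rl = lf; while parent(rl) ~= rl, rl = parent(rl); end
        parent(max(ru, rl)) = min(ru, rl);
        L(i,j) = min(ru, rl);
      end
    end
  end
end
for i = 1:rows                        % second pass: final labels
  for j = 1:cols
    if L(i,j) > 0
      r = L(i,j); while parent(r) ~= r, r = parent(r); end
      L(i,j) = r;
    end
  end
end
```

With the Image Processing Toolbox, bwlabel(BW) performs the same task.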
A collection of morphological operators exists for extracting
connected components and labelling them in various ways. A simple
method for extracting the connected components of an image combines
the operations of dilation and mathematical intersection. The former
identifies pixels that are part of a continuous region sharing a
common set of intensity values V = {1}, and the latter eliminates
dilations centred on
pixels with V = {0}. The structuring element used defines the desired
connectivity.
More sophisticated variants include a set of functions known as
geodesic functions, used for measuring the exact shape of distinct
objects in an image. These operators are based on the notion of the
geodesic distance d, defined as the shortest distance between two
points located within an image object such that the entire path
between the points is included in the object. One way to obtain this
is to apply a series of dilations of size 1. As an example, consider
the image
Fig 4.12: Original image
which shows a triangular block. Applying a geodesic operator to the
image produces a labelled image
Fig 4.13: Result of applying geodesic operator
in which the gray-level intensity labelling across the surface of the
block encodes geodesic distance; light pixels represent larger
distances.
Connected-component labelling commonly refers to the task of grouping
the connected pixels in an image. The basic approach is to scan the
image and assign labels to each pixel until the labels no longer
change. This basic approach is slow because the labels propagate only
one layer per iteration. There are two common strategies to speed up
this process. The most common strategy leads to a two-pass algorithm,
which uses a data structure to record label-equivalence information:
it scans the image once to assign provisional labels and discover the
equivalence information, and scans the image a second time to assign
the final labels. Another very successful strategy is the one-pass
algorithms, which find all connected pixels in one shot, for example
by recursively visiting all connected neighbours. One of the most
successful approaches in this category is the contour-tracing
algorithm by Chang et al., who also distribute an implementation of
their algorithm online. The most commonly used data structure for
recording the label-equivalence information in a two-pass algorithm is
the union-find data structure. Researchers have recognised the
possibility of implementing this data structure implicitly using an
array instead of pointers. We designed a set of algorithms that
enabled us to operate on this array efficiently, and proved that a
two-pass labelling algorithm with this implicit union-find data
structure is theoretically as good as the best known labelling
algorithms. We further show, through experimental measurements, that
it is in fact faster than the best known one-pass algorithms.
Fig 4.14: Original image
Fig 4.15: Thresholded image
Fig 4.16: Labels coded as gray values
Fig 4.17: Labels coded as colours
Fig 4.18: Labels coded as 8 different colours
4.5 MEDIAN FILTERING:
Smoothing (low-pass) filters reduce noise, but the underlying
assumption is that neighbouring pixels represent additional samples of
the same value as the reference pixel, i.e. that they represent the
same feature. At edges this assumption fails and features are blurred.
Such smoothing is a linear process, using convolution with weighting
kernels as the neighbourhood function. There are also nonlinear
neighbourhood operations that can be performed for noise reduction and
that do a better job of
preserving edges than simple smoothing filters. One such method is
median filtering. In a median filtering operation, the pixel values in
the neighbourhood window are ranked according to brightness
(intensity), and the middle value (the median) becomes the new value
for the central pixel. Median filters do an excellent job of rejecting
certain types of noise, in particular "shot" or impulse noise, in
which some individual pixels have extreme values.
In particular, compared to the smoothing filters examined thus far,
median filters offer three advantages:
No reduction in contrast across steps, since the output values
consist only of values present in the neighbourhood (no averages).
Median filtering does not shift boundaries, as can happen with
conventional smoothing filters (a contrast-dependent problem).
Since the median is less sensitive than the mean to extreme values
(outliers), those extreme values are more effectively removed.
The median is, in a sense, a more robust "average" than the mean, as
it is not affected by outliers. Since the output pixel value is one of
the neighbouring values, new "unrealistic" values are not created near
edges. Since edges are minimally degraded, median filters can be
applied repeatedly, if necessary.
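The ranking operation described above can be sketched as follows; the
file name is a placeholder, and with the Image Processing Toolbox
medfilt2(I) performs the same filtering:

```matlab
% Sketch of 3x3 median filtering (border pixels are left unchanged)
I = im2double(imread('img1.jpg'));    % placeholder file name
if size(I,3) == 3, I = rgb2gray(I); end
[rows, cols] = size(I);
J = I;
for i = 2:rows-1
  for j = 2:cols-1
    w = I(i-1:i+1, j-1:j+1);          % 3x3 neighbourhood window
    J(i,j) = median(w(:));            % ranked middle value replaces pixel
  end
end
```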
Considerations:
The median filter is more expensive to compute than a smoothing
filter. Clever algorithms can save time by making use of repeating
values as the neighbourhood window is slid across the image.
Median filters are nonlinear:
median[A(x) + B(x)] ≠ median[A(x)] + median[B(x)]
This must be taken into account if you plan on summing filtered
images.
Median Filtering
The median for each subgroup is determined.
These two values are then compared to the original pixel value, and
the median of these three values becomes the output value for the
pixel in the filtered image.
Larger neighbourhoods permit the definition of additional subgroup
orientations.
4.6 HYBRID MEDIAN FILTERING:
Median filters tend to erase lines narrower than half the width of the
neighbourhood, and they can also round off corners.
Hybrid median filters avoid these problems.
The hybrid median filter is a three-step ranking process that uses two
subgroups of a 5x5 neighbourhood.
These subgroups are drawn from pixels parallel to the image frame
edges, and at 45° to the edges, centred on the reference pixel.
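The three-step ranking can be sketched as follows; the 5x5 window w is
an invented example, and the subgroup shapes ("+" parallel to the
frame edges, "x" at 45°) follow the description above:

```matlab
% Hybrid median sketch: medians of the "+" and "x" subgroups of a 5x5
% window, combined with the centre pixel in a final median of three
w = magic(5) / 25;                    % invented 5x5 neighbourhood
plus  = [w(3,:) w(:,3)'];             % pixels parallel to the frame edges
cross = [diag(w)' diag(fliplr(w))'];  % pixels at 45 degrees to the edges
out = median([median(plus), median(cross), w(3,3)]);
```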
4.7 FLOWCHART SHOWING HOW THIS PROJECT WORKS IN GENERAL:
Fig 4.19: Flowchart of the process: the CCTV starts recording; the
video is read into Matlab; a threshold value is selected (and adjusted
if it turns out too low or too high); the image is thresholded; a
median filter is applied if there is any noise; labelling is
performed; when an intruder approaches the fence the virtual zone
appears red and alerts; if an intruder is detected entering the fence,
the alarm starts.
FENCE PROTECTION ALGORITHM USING CCTV CAMERAS
A fence is monitored continuously by CCTV cameras. I converted the images recorded by the CCTV to grayscale and then to binary, i.e., the images are thresholded to obtain binary images. If the selected threshold value is too low or too high, it is adjusted; the threshold must be set such that the region of interest is separated from the background. A median filter is then used to remove any noise, and labelling is done to identify the object based on pixel connectivity. Once the CCTV detects that the intruder has approached the virtual zone, it alerts security personnel; when the intruder actually enters the virtual zone and tries to cross the fence, the alarm starts.
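The processing chain just described can be sketched in a few lines. The example below uses Python with SciPy for illustration (the project itself uses MATLAB), and the default threshold value is an assumed example, not the value used in the project:

```python
import numpy as np
from scipy import ndimage

def detect_objects(gray_frame, threshold=0.5):
    """Threshold a grayscale frame, denoise with a median filter,
    and label the connected foreground components."""
    # 1. Threshold: pixels above the chosen value become foreground.
    #    In practice the threshold is tuned until the region of interest
    #    is cleanly separated from the background.
    binary = gray_frame > threshold

    # 2. A 3x3 median filter removes isolated salt-and-pepper noise.
    clean = ndimage.median_filter(binary.astype(np.uint8), size=3)

    # 3. Connected-component labelling groups pixels by connectivity.
    labels, n_objects = ndimage.label(clean)
    return labels, n_objects
```

In the real system this would run per video frame, and the labelled components would then be tested against the virtual zone.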
CHAPTER-5
RESULTS
The results of the algorithm explained in the previous chapter are illustrated and demonstrated here. The proposed algorithm was applied to protect a fence in my local area. A virtual zone was created to detect any intruder entering the fence area. This virtual zone appears green if nobody enters it; however, it immediately turns red when the zone is breached.
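The green/red zone status described here reduces to a simple membership test. Below is a minimal sketch, assuming the virtual zone is an axis-aligned rectangle in pixel coordinates and that `foreground` is the binary foreground mask (the names and coordinates are illustrative, not taken from the project code):

```python
import numpy as np

def zone_status(foreground, zone):
    """Return 'red' if any foreground pixel lies inside the virtual zone,
    otherwise 'green'. `zone` is (top, bottom, left, right) in pixels."""
    top, bottom, left, right = zone
    breached = foreground[top:bottom, left:right].any()
    return "red" if breached else "green"
```

In the full system, the foreground mask would come from the thresholding and labelling stages, and a "red" status would trigger the alarm.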
The following figures demonstrate how the algorithm successfully protects the fence. The demonstration is carried out by showing different frames and the zone status while an intruder approaches the zone, as seen in the figures below.
(a) (b)
Fig 5.1. (a) Green virtual zone as an intruder approaches the zone, (b) Foreground
objects in the scene.
Fig 5.1 shows the fence and a moving intruder; the virtual zone is green because he is away from it. (a) shows the original image and (b) shows the thresholded image.
(a) (b)
Fig 5.2. (a) Green virtual zone as an intruder approaches the zone, (b) Foreground
objects in the scene.
Fig 5.2 shows the virtual zone in green because the intruder has not yet approached it. (a) shows the original image and (b) shows the thresholded image.
(a) (b)
Fig 5.3. (a) Green virtual zone as an intruder approaches the zone, (b) Foreground
objects in the scene.
Fig 5.3 still shows the virtual zone in green because the intruder has not entered the fencing virtual zone. (a) shows the original image and (b) shows the thresholded image.
(a) (b)
Fig 5.4. (a) Green virtual zone as an intruder approaches the zone, (b) Foreground
objects in the scene.
Fig 5.4 also shows the virtual zone in green, as the intruder is not yet in the fencing virtual zone. (a) shows the original image and (b) shows the thresholded image.
(a) (b)
Fig 5.5. (a) Red virtual zone as the intruder enters the zone, (b) Foreground
objects in the scene.
Fig 5.5 shows the virtual zone in red because the intruder has entered it, which alerts security personnel. (a) shows the original image and (b) shows the thresholded image.
(a) (b)
Fig 5.6. (a) Red virtual zone as the intruder approaches the zone, (b) Foreground
objects in the scene.
Fig 5.6 shows the intruder trying to climb the fence. The virtual zone is red and the alarm starts ringing. (a) shows the original image and (b) shows the thresholded image.
As shown in the previous figures, the algorithm was capable of protecting the fence in this particular example. However, many challenges may still face this algorithm, such as the object's shadow, light variations and camera displacement. All of these can lead to wrong conclusions and false alarms. Fig 5.7 illustrates the effect of shadow on the proposed algorithm: as shown in the figure, a false alarm was set off when the shadow of the man entered the zone.
Fig 5.7: The effect of shadow in the proposed algorithm.
In fig 5.7 the effect of shadow can be clearly observed. Panels (a), (b) and (c) show the virtual zone in green because neither the intruder nor his shadow has entered the fencing virtual zone, whereas in (d) the virtual zone is shown in red even before the intruder enters it, because his shadow has entered the zone. This can cause the alarm to start even when the intruder does not try to enter the fence, simply because his shadow is mistaken for an object.
CHAPTER-6
CONCLUSION AND FUTURE
WORK
6.1 CONCLUSION:
In this project, I proposed and implemented a fence protection algorithm for use with CCTV cameras. The fence protection system was developed successfully: when an intruder tried to cross the fence, the CCTV monitored him and the alarm started. The camera detected the person immediately when he entered the virtual zone and prevented him from entering the fence by alerting security personnel with an alarm sound.
I took the recorded video and applied thresholding to separate the object of interest from the background, adjusting the threshold value until the object was clearly visible. To remove any noise I used a median filter, and I applied labelling in order to track the object and check whether it is inside the virtual zone.
As shown in the results, the proposed algorithm was successfully applied to protect a fence against any intruder trying to approach it. At this stage of the project, the current algorithm needs to be adapted to deal with further challenges such as light variations, shadows and many others.
This project is very useful in many high-security areas where monitoring cannot be done manually: a person cannot watch all the videos recorded by the cameras at the same time. Hence the system is designed so that when the alarm starts, every guard is alerted and can prevent an intruder from entering. It is mainly intended for secured places such as airports and banks, where anyone entering a restricted area may cause financial loss or endanger people. This is a basic design work, and more performance will be gained when the system is practically employed.
6.2 FUTURE WORK:
In the proposed system I introduced an efficient way to secure fencing using image-processing-based tracking algorithms. For future work my main concern is to design the proposed system practically. The main parts of the future work are:
• Design of the target system: the target system will be designed using the algorithms related to image processing and intruder tracking.
• Additional designs of the target system: new mechanisms related to the target system designs are proposed and will be implemented additionally.
• Improving the algorithm to be capable of handling shadows, light variations, background changes and other parameters.
REFERENCES
1. Rafael C. Gonzalez, Richard E. Woods (2008). Digital Image Processing.
2. Scott E. Umbaugh (2005). Computer Imaging: Digital Image Analysis and Processing.
3. Alan C. Bovik (2005). Handbook of Image and Video Processing.
4. Ioannis Pitas (2000). Digital Image Processing Algorithms and Applications.
5. Amos Gilat (2005). MATLAB: An Introduction with Applications.
6. Desmond J. Higham, Nicholas J. Higham (2005). MATLAB Guide.
7. Mark S. Nixon, Alberto S. Aguado (2008). Feature Extraction and Image Processing.
8. Gerard Blanchet, Maurice Charbit (2006). Digital Signal and Image Processing Using MATLAB.
9. Nasser Kehtarnavaz, Mark N. Gamadia (2006). Real-Time Image and Video Processing: From Research to Reality.
10. A. Hanjalic (2004). Content-Based Analysis of Digital Video.
11. Ajay Divakaran (2008). Multimedia Content Analysis: Theory and Applications.
12. Y. Kirani Singh, B. B. Chaudhuri (2007). MATLAB Programming.
13. Terrance J. Dishongh, Michael McGrath (2008). Wireless Sensor Networks for Healthcare Applications.
14. Charles P. Pfleeger, Shari Lawrence Pfleeger (2003). Security in Computing.
15. S. Jayaraman. Digital Image Processing.
16. Li Tan (2008). Digital Signal Processing: Fundamentals and Applications, p. 65.
17. Cleve B. Moler (2004). Numerical Computing with MATLAB.
18. Antonio Siciliano (2008). MATLAB: Data Analysis and Visualization.
19. Vijay Madisetti, Douglas B. Williams (1998). The Digital Signal Processing Handbook.
20. William J. Palm (2004). Introduction to MATLAB for Engineers.
21. Partha P. Banerjee, Ting-Chung Poon. Contemporary Optical Image Processing.
22. Simone Santini. Exploratory Image Databases: Content-Based Retrieval.
23. Bernd Jähne. Image Processing Algorithms.
24. Kevin Jeffay, HongJiang Zhang (2002). Readings in Multimedia Computing and Networking.
25. http://en.wikipedia.org/wiki/Thresholding_%28image_processing%29
26. http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm
27. http://www.wavemetrics.com/products/igorpro/imageprocessing/thresholding.htm
28. http://www.codersource.net/microsoft-net/c-image-processing/implementation-of-labeling-connected-components.aspx
29. http://viola.usc.edu/Research/shihhunl_Pages%20from%2001621451.pdf
30. http://www.mathworks.de/access/helpdesk/help/toolbox/images/im2bw.html
31. http://people.csail.mit.edu/rahimi/connected/
32. http://diwww.epfl.ch/w3lami/detec/perrigproj96/node9.html
33. http://www.irisys.co.uk/people-counting/introduction.aspx
34. http://en.wikipedia.org/wiki/Median_filter
35. http://medim.sth.kth.se/6l2872/F/F7-1.pdf
36. http://www-h.eng.cam.ac.uk/help/tpl/programs/matlab.html