Mechatronics 13 (2003) 1123–1147
Vision-in-the-loop for control in manufacturing
Tim King
School of Mechanical Engineering, The University of Leeds, Leeds LS2 9JT, UK
Abstract
Machine vision is an invaluable component of automated manufacturing in situations
where a high degree of randomness in the product, or the process, prevents the construction of
a suitably structured environment for traditional automation or robotics. Case studies are
presented of four mechatronic systems developed, or being researched, for quite different
processes in the textile industry: garment assembly, lace cutting (scalloping), lace inspection
and inkjet printing. They illustrate the potential for vision-in-the-loop in manufacturing.
Although divergent in their purposes, the systems share a common theme: they each use line-
scan machine vision, 'incremental' image processing rather than frame-based techniques and
an integrated mechatronic design approach.
© 2003 Elsevier Ltd. All rights reserved.
Keywords: Machine vision; Inspection; Deformable materials; Garment assembly; Fault detection; Fault
rectification; Inkjet printing; Digital printing
1. Introduction
The author and co-workers at several UK universities (Loughborough, Bir-
mingham and Leeds) have, over an extended period, researched and developed
several mechatronic systems using machine vision based on line-scan cameras. These
have been aimed at automating and controlling aspects of textile manufacturing that
had previously been difficult to achieve. This paper presents four case studies that
demonstrate the potential for including machine vision in mechatronic production
systems.
E-mail address: [email protected] (T. King).
0957-4158/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/S0957-4158(03)00046-1
2. A mechatronic knitted garment linking machine
The main body panels of fully-fashioned outerwear garments are knitted on one
type of machine, whilst the 'trims' (collars, facings, waist bands etc.) are usually
knitted on different machines. A superior method of joining these components
together is a process termed 'linking'; the knitted loops of one component are matched
one-for-one with those of the component to be joined to it and a chain stitch used to
sew them together. This provides a flexible seam of minimum bulk, but no loops
must be missed in the process or the garment can unravel. Linking is therefore used
for high quality knitted products and is performed by hand with the aid of simple
machines. These comprise a series of grooved points onto which the loops are loaded
and which serve to guide the needle of the machine's sewing head, as illustrated in
Fig. 1. The operators need skill, concentration and good eyesight.
A mechatronic system has been developed to help automate the linking process
[1–5]. Collars and other ribbed garment components are knitted in bulk; each joined
to the next by waste courses of knitting which enable them to be separated before
linking. The row of loops on each trim to be used for linking is termed the 'slack
course' since it is knitted with slightly larger loops than the other rows. The most
arduous part of the manual linking process is identifying these loops and loading
them onto the linking points. This part of the process was therefore selected for
automation through machine vision.
Since the fabric is deformable and, indeed, this very deformability must be ex-
ploited to gently stretch the components to bring their loops into register with the
pitch of the linking points, it was not considered useful to image an area of the fabric
and attempt to determine the positions of multiple loops in the slack course. Instead,
an approach based on progressive sensing and insertion of the linking points into the
loops was adopted. This was implemented by using a low-cost line-scan CCD sensor
to build up an image of the area of fabric under consideration at any one time,
allowing the fabric to be progressively distorted to bring a single loop into alignment
with the next sequential point, which is then inserted. This process is continued until
all loops in the slack course have been dealt with. In this way the amount of data to
be handled is kept to a minimum and the changes in the geometry of the fabric
caused by the insertion of one point can be taken into account when inserting the
next.
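The progressive strategy described above can be pictured as a simple simulation (a sketch of our own: the function name, the half-step gain and the tolerance are illustrative assumptions, not the original 8085 firmware, and a real system would re-measure the fabric after every move):

```python
# Illustrative simulation of progressive loop alignment: each slack-course
# loop is coaxed into register with the next linking point by small
# incremental roller moves, and the cumulative fabric distortion is
# carried forward to the next loop.

def align_and_insert(loop_positions, point_pitch, tol=0.05):
    """Align each loop with its linking point, one at a time.

    loop_positions: measured x-positions (mm) of slack-course loops.
    point_pitch: spacing (mm) of the linking points.
    Returns the cumulative corrective offset applied at each insertion.
    """
    corrections = []
    offset = 0.0  # cumulative fabric distortion from earlier insertions
    for i, x in enumerate(loop_positions):
        target = i * point_pitch          # position of the next point
        error = (x + offset) - target     # misalignment seen by the camera
        while abs(error) > tol:           # incremental advance/retard moves
            move = -error * 0.5           # move part-way, then re-measure
            offset += move
            error = (x + offset) - target
        corrections.append(offset)
        # point inserted here; fabric geometry now includes `offset`
    return corrections
```

The half-step gain mimics the cautious, re-measure-after-each-move behaviour the text describes; the point is that only one loop is ever considered at a time, so the data handled per cycle stays minimal.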
Fig. 1. Linking with a single chain stitch.
Fig. 2. Plain knitted fabric structure: + centres of loops to be selected; o 'inverted' loops which must not be
selected.
The vision system has to be able to identify which loops are to be used for linking.
Fig. 2 depicts the construction of plain knitting. The system uses a knowledge base of
different knitted constructions to assist it in finding the correct loops in plain and
different types of ribbed fabrics.
The camera and the fabric positioning mechanism are mounted on a carriage that
moves in front of the trims. Another carriage, behind the trims, carries the point
insertion mechanism and a fibre optic bundle for back-illuminating the fabric. The
front and rear carriages are moved together, their alignment being maintained by
two ball-screws, rotationally coupled by a timing belt at the end of the machine. As
the carriages traverse the width of a trim, the sensor scans the fabric, providing data
to identify the slack course, selects the loops to be loaded onto points and calculates
the movements required to position the loops correctly.
The fabric is held between a smooth face on the rear carriage and a pair of rubber
rollers, which are driven so that they normally roll on the fabric as the carriage
traverses. By �advancing� or �retarding� the rotation of these rollers with respect to the
carriage movement, they can exert a force to bring the slack course loop of current
interest into alignment with the points in the traverse direction. In the vertical
direction, alignment is achieved by driving the pair of rollers axially. Fig. 3 shows the
mechanism that achieves this. The camera is in the right foreground. The fine pitch
timing belt drive running across the top of the carriage advances and retards the
rollers (when the belt is stationary they just roll on the fabric as the carriage moves;
small stepping motor, not visible in the picture but mounted to the machine frame,
provides the advance and retard action). Vertical motion of the rollers is produced
by the small stepping motor on the front carriage, visible just to the rear of the
camera lens.
The very small incremental motions required of the stepping motors to 'coax' the
loops into alignment required special motor controllers since, typically, each motor
moves only a few steps in each direction, but is required to do so at high speed. This
was achieved by providing each of the axis motors with its own purpose built 8085
based microprocessor controller. These are co-ordinated by a supervisory computer,
which also processes the vision data.
Fig. 3. Automatic linking machine head.
Although based on low cost sensing and utilising relatively low performance
eight-bit microprocessors, this machine demonstrates that, given an appropriate
mechatronic approach, the deformability of fabrics is not an insurmountable
obstacle to automation in garment assembly.
3. A mechatronic lace scalloping machine
Lace is manufactured by a specialised knitting process in broad webs up to 3.3 m
wide. Each web contains many pattern repeats across its width, which must be
separated into individual lengths. This involves cutting along the edges of the pat-
tern, or sometimes along its centre, to separate the pieces from one another and from
the waste mesh as illustrated in Fig. 4.
These processes, scalloping and centre-cutting, are labour intensive and it is de-
sirable to automate them. Unfortunately, the path to be cut can bear a complex or
arbitrary relationship to the pattern features (at least at the detailed scale) making
implementation of a system based on simple line-following approaches difficult. The
task is made more challenging by the requirements of high-resolution imaging,
necessary to record the fine detail of the lace figuring.
To achieve good process economics a web speed of 1 m/s is required, which de-
mands very rapid processing of the image data to provide real-time control. Further
difficulties are caused by the deformable nature of the lace, whose dimensions vary
with tension and manufacturing conditions, and the changes in the pattern caused by
releasing tensions in the lace structure as it is cut.
Using a laser for scalloping offers several advantages. Cutting with the laser can
produce an advantageously finished edge in which the thermoplastic fibres in the lace
can locally be lightly fused together. This helps to prevent fraying. Other advantages
Fig. 4. Small section of lace web showing scalloped and centre-cut edges.
of laser cutting include the elimination of cutter sharpening, the very small diameter
of the focused beam, which allows intricate profiles to be cut, and the high cutting
speeds that can be achieved without stressing the material. This last point is espe-
cially important since it makes it possible to cut more than one edge at a time. With
knife cutting, the interaction between the cutting forces would render this virtually
impossible. The prototype system is illustrated in Fig. 5. It can be divided into the
following constituent elements:
• Supervisory system: to co-ordinate the operation of the following functional
blocks.
• Vision system: hardware and software to acquire the image data from the moving
web of lace.
• Tracking system: which computes from the image data the required instantaneous
position of the cutting point as the web moves past the laser.
• Cutting system: comprising laser, beam-delivery and beam deflection hardware to
deliver the laser energy to the point determined by the tracking system.
• Transport system: responsible for presenting the lace to the imaging and cutting
systems.
Overall, operation of the system proceeds as follows: The web of lace is trans-
ported continuously past the imaging system and the cutting position, which is
slightly downstream. The image data acquired is continually processed to derive a
control signal to drive a galvanometer that deflects the laser beam. The image data
acquisition and galvanometer control output are synchronised to the web movement
to allow for the separation between imaging and cutting positions. The separation
between these two positions, however, is kept as small as possible in order to
Fig. 5. Schematic construction of the lace scalloping machine.
minimise any errors due to gross positional change or localised pattern distortion in
the lace between imaging and subsequent cutting.
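The synchronisation between the imaging and cutting positions can be pictured as a short delay line keyed to web movement (a hypothetical sketch of our own; the class and names are not from the paper): the galvanometer command issued for the material now under the beam is the tracked position computed when that same material passed under the camera, a fixed number of scan lines earlier.

```python
from collections import deque

# Hypothetical sketch: a FIFO whose depth equals the number of scan
# lines separating the camera from the laser spot, so each tracked cut
# position is replayed when the corresponding fabric reaches the beam.

class CutDelayLine:
    def __init__(self, separation_lines):
        # pre-fill so output is defined before the first real line arrives
        self.fifo = deque([0.0] * separation_lines)

    def update(self, tracked_position):
        """Called once per (encoder-synchronised) line scan.
        Returns the galvanometer set-point for the line now being cut."""
        self.fifo.append(tracked_position)
        return self.fifo.popleft()
```

Keeping the separation, and hence the FIFO, short is exactly the point made in the text: the less lace between imaging and cutting, the less it can distort in between.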
Monitoring and overall control of the machine is performed by a Motorola 68020
microprocessor system, which provides user interface via a VDU and high-resolution
video output for setting-up and diagnostic purposes. Fig. 6 shows the general ar-
rangement of the supervisory and other computing hardware. The tasks of the vision
system are to acquire image data and extract the cutting path information in real-
time. Because the lace is a continuous moving web, a line-scan camera is more
appropriate than area-scan types. Processing the image on an incremental basis, as each
new line of information becomes available, is also essential to the quasi-continuous
real-time control strategy adopted to deal with distortion of the lace in transporta-
tion and cutting. The vision system therefore consists of a high-speed, high-resolu-
tion, line-scan camera coupled by a specially designed interface to a digital signal
processor (DSP) board with multiple DSPs. The DSP board is connected to the
supervisory 68020 microprocessor. A fuller description of the development of the
DSP system has been given elsewhere [6,7].
The lace is locally back-lit, where it passes under the line-scan camera, using a
fluorescent tube driven by a high frequency electronic ballast. The electronic ballast
is run from a dc supply that provides further mains frequency rejection. The resulting
illumination is extremely uniform and flicker free.
The camera is connected by a specially designed interface card to one of the DSPs.
The interface thresholds the video data at a level that can be optimised automati-
cally. After thresholding, the binary image data is placed in shared memory for
access by the DSPs performing the tracking process.
Fig. 6. Control system schematic.
The system tracks the cutting path on the lace by a reference map based tech-
nique. Reference maps for each pattern to be cut are created by scanning one pattern
repeat, on the machine, and defining the required cutting path on the visual display
using a mouse and pointer. The pattern and cutting data are then processed to match
the ends of the pattern repeat to one another and extract the information in a band
centred on the cutting line. This information then forms the reference map for
tracking. This procedure is required the first time any new pattern is to be scalloped,
but the information can be stored for subsequent re-use. The cutting path can be
arbitrarily defined with respect to the lace pattern so that its placement is a design
decision. This offers considerable flexibility.
During scalloping, the current position of the web is continually computed by
matching with the reference map. The web is deformable, and has quite large
tolerances on manufacturing dimensions. The matching process tolerates this, and
achieves high processing speed by using a specially developed incremental algorithm
[8]. The algorithm avoids re-calculating an area-based pattern match for each new
line of image data, but instead performs centre-weighted line matching using a cross-
correlation technique. The line matching result is then combined with previous line
matching information, using a filtering process, to give the stability of an area-based
matching approach but with much reduced computation. The centre-weighting of
the line match and the decaying impulse response of the filters give most weight to
image matching close to the current tracking point so that the algorithm is robust
Fig. 7. Lace laser scalloped on both edges simultaneously at 250 mm/s.
against distortion and scale errors of greater than ±10%, both along and across the
web.
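The incremental matching idea can be sketched as follows (a simplified reconstruction on our part, not the published algorithm [8]: the correlation measure, the centre-weighting function and the filter coefficient are all illustrative assumptions):

```python
# Sketch of one tracking update: cross-correlate the new binary line
# against the reference over a small shift range, weight candidate
# shifts towards the current tracking point, then smooth the per-line
# estimate with a first-order (decaying impulse response) filter.

def track_line(live, ref, prev_shift, max_shift=4, alpha=0.3):
    n = len(live)
    best_shift, best_score = 0, -1.0
    for s in range(-max_shift, max_shift + 1):
        # binary 'correlation': fraction of agreeing pixels at shift s
        match = sum(live[i] == ref[(i - s) % n] for i in range(n)) / n
        weight = 1.0 / (1.0 + abs(s - prev_shift))  # centre-weighting
        score = match * weight
        if score > best_score:
            best_shift, best_score = s, score
    # first-order smoothing: area-match stability at line-match cost
    return (1 - alpha) * prev_shift + alpha * best_shift
```

The filtered output moves only a fraction of the way towards each new line's best match, which is what gives the quasi-continuous control signal its stability against single-line noise.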
The CO2 laser used in the prototype machine is a 240 W continuous wave (CW)
unit. The laser beam is delivered through a focusing lens onto a lightweight front-
silvered mirror mounted on a galvanometer whose position is controlled by the
tracking process. A beam-splitter and a second set of optical components allow two
cuts to be made at once.
The prototype system described here produces well-scalloped edges. Fig. 7 shows a
sample of lace cut using the system. The DSP system alone is capable of tracking at a
maximum of around 300 mm/s but scalloping at the target web speed of 1 m/s has
been achieved by incorporation of purpose designed circuitry which implements the
first stages of the tracking algorithm in dedicated hardware.
Summarising this example, the mechatronic system described successfully
demonstrates pattern cutting in registration with the features of a
deformable material. The mechatronic approach has enabled a highly
productive machine to be constructed, which can finish both edges of the lace
simultaneously (a hitherto unrealisable target) and which can scallop complex curves
previously only attainable by the use of scissors.
4. Machine vision for lace inspection
Flexible materials in general, and lace in particular, are hard to inspect using
machine vision. Lace is especially difficult since it comprises fine and complex yarns
that must be verified. A major problem is that small distortions in the pattern are
characteristic of the product and unavoidable. This is mainly due to the elastic na-
ture of the lace fabric.
4.1. Background
Previous research has indicated that some form of direct comparison of the lace
being produced with a reference image is unavoidable. Other reported methods, such
as the use of two dimensional Fourier power spectra [9], are too computationally
intensive, and therefore too slow, to be used in real time inspection implementation.
The problem of geometric variation in the lace being produced, either in size or
orientation, is a major barrier for direct comparison of two images.
Subtraction of the image of the lace being inspected from the reference can be
achieved only when the two images have the same size and orientation. In this case,
both 'random defects' such as holes, missing threads, etc. and 'geometric defects' such
as stretch, skew, etc. will be detected.
For this purpose, a mechatronic approach has been designed. The main objective
of this system is to generate a 'comparable' image. This has been achieved using an
active vision system that uses the acquired image as 'feedback' and hence creates a
'vision in the loop' system.
Work on automating the inspection of lace using machine vision began as early
as 1988 when Norton-Wayne et al. [10,11] started working on a project supported by
the British Lace Federation. They came up with a solution using direct comparison
of the live image with a perfect prototype, and their idea has been developed by
Shelton Vision Ltd. However their system does not take the normal lace distortions
into account and is therefore only suitable for finding defects on the lace knitting
machine where the lace is under controlled tension and advances slowly (3 mm/s)
giving more processing time. Yazdi and King [12,13] later worked on automating the
final inspection stage of lace manufacturing, which demands higher processing speed
and process flexibility in terms of different distortions in the lace. They managed to
show a system detecting obvious faults, but the speed was low: an inspection rate of
only 50 mm/s was reported.
4.2. Our approach
Our current work concerns an incremental approach using a multi-DSP system to
achieve the required task at a reasonably high speed and with a minimum of false
alarms. The image is acquired by a monochrome line-scan CCD camera whilst the
lace is transported over a back-lit stage. The live image is compared with a reference
image of a single pattern repeat, stored in memory. A 2D cross-correlation technique
is first used to determine the starting position. Once the start point has been de-
termined and the live image is in synchronisation with the reference image, each line
in the live image is matched to the corresponding line in the reference image by a
more computationally efficient tracking technique. During this matching the system
determines if the live image lines are skewing, drifting apart (longitudinal scale
errors) or stretching (lateral scale errors). The system performs necessary corrections
to compensate for any such distortions continuously in real-time using a combina-
tion of hardware and software approaches, thus presenting a 'conditioned' live image
for the actual fault detection. The 'conditioned' live image lines are then subtracted
from their corresponding reference image lines. The resulting image contains the
random or isolated faults along with noise or a 'ghost image', which is inevitable due
to the fact that individual threads in lace are not always in the same place, as well as
because of any residual skew or stretch which might remain from the 'conditioning'
process. Hence, it is necessary to apply further processing to the subtracted image in
order to isolate the actual faults. Once a fault is detected, the system will record its
Fig. 8. (A) The reference image. (B) Actual image with no distortion, perfectly aligned with the reference
image. (C) The result of subtracting B from A, the faults are clearly visible. (D) Actual image stretched by
3% in both directions. (E) The result of subtracting D from A, the 'ghost image' or noise clearly seen along
with potential faults. (F) Application of morphological erosion with a 'disk' of 1 pixel radius. The noise
disappears leaving behind the actual locations of faults.
position and physically tag it if required. The sequence of synthesised images in Fig.
8 shows the principle involved.
4.2.1. Image acquisition
Image acquisition and illumination of the subject is one of the most important
elements in any machine vision system. A monochrome line-scan camera was se-
lected for this work because most of the lace produced is monochromatic and the
faults to be investigated are based on the structure of the lace rather than its colour.
The localised dyeing faults in garment lace are not as critical as for normal woven or
knitted fabrics, mainly because of the narrow width of the lace and its open struc-
ture, which make it difficult for the eye to detect these faults. Back lighting has been
selected with a view that the system will be looking at the lace structure and not its
'true surface'. A line-scan camera was the obvious choice considering its speed, high
resolution and other benefits described elsewhere [14].
4.2.2. Architecture of the experimental rig
As the aim is high inspection speeds in the region of 1 m/s, powerful signal
processing hardware is needed. If a line resolution of 5 lines/mm is assumed and a
line-scan camera with a 2048 pixel sensor is employed then a pixel clock rate in excess
of 10 MHz (5000 lines/s × 2048 pixels) would be needed. Clearly, processing at such a rate
is a difficult task and needs either a very high specification single processor or parallel
processing techniques. To start with, the project is being implemented in a parallel
processing environment.
Fig. 9 shows the architecture of the system developed. The lace is mounted on the
transparent drum and illuminated from below. An angular optical encoder is pro-
vided to keep track of lace movement. The lace drum can be rotated at different
speeds by means of a stepper motor. A DALSA CL-C3 2048 element line-scan
digital CCD camera is used to image the lace. Image capture and initial image
processing is accomplished using a Coreco Oculus-F/64 board incorporating a
TMS320C40 DSP. Further processing is then implemented on a Loughborough
Sound Images (LSI) processor board with four TMS320C44 DSPs. A PC communicates
with the DSPs through shared memory and provides the human interface
necessary to operate and program the system. A separate monitor is connected to the
Oculus F/64 to display the images acquired by the camera. Two multipurpose in-
terface boards are also provided.
The Coreco Oculus-F/64 board can capture up to 30,000 lines/s, which is more
than adequate for our application. The board incorporates a TMS34020 graphics
system processor (GSP) for line acquisition and image display functions and a
TMS320C40 DSP (C40 for short) for general data processing. Both processors can
share the same frame buffer. The C40 incorporates six communication ports for
Fig. 9. Lace inspection system architecture.
interfacing to the outside world and/or other C40 family processors. The multi-DSP
LSI board is a general purpose digital signal processing platform containing four
TMS320C44 processors (C44s) interconnected in ring topology. The C44 DSP is
very similar to the C40 except that it has only four communication ports ('comports'
for short). C40 and C44 devices are 32-bit floating-point DSPs.
Accurate image registering with a line-scan camera requires precise control of the
movement of the lace under the sensor. This is achieved by rotating the lace drum at
a desired speed controlled by a signal generator. The optical angular encoder gen-
erates a specific number of pulses for a certain drum movement. The encoder pulses
are used to control the clock pulse of the camera and also to generate external in-
terrupts to the C40 DSP to register each new image line. These external interrupts
allow the system to keep track of each line's memory location in the video RAM
of the Coreco Oculus F/64 board.
The Coreco Oculus F/64 board's GSP and C40 DSP play an important role, not
only in capturing and displaying the lace image but also in doing some initial image
processing and coordinating with both C44#2 and C44#3 processors for subsequent
processing.
The first step, towards inspection, is to create a reference image from a prototype
'perfect' lace sample presented under the camera. A reference image should be of
exact repeat length and, ideally, the start and finish of the reference image should
match perfectly to allow it to be repeated to form a 'seamless', virtually endless ref-
erence. This is not without difficulties because of the locally variable nature of lace
and so, at present, some manual adjustment of the reference image is required to
achieve a good match between start and finish. Once created, the reference image is
stored in memory for future use.
4.3. Working of the rig––key algorithms and their results
4.3.1. Finding the start position
The inspection process starts with finding the current position of the lace under the
camera with respect to the reference image. For simplicity a basic two dimensional
template matching approach using binarised images is used. In order to minimise the
time involved and to make the system work more efficiently the lace images are
processed in two halves, i.e. the right-hand side of the image is always processed by
C44#2 and the left-hand side by C44#3. The Find Start Position algorithm has a
single-pixel resolution and uses minimised search field and early termination tech-
niques to speed up the process. The algorithm provides the position of the best
matched starting line and also gives the best horizontal alignment of the search
window in the reference image. This value is used later by the Tracking routine as its
initial estimate of the lateral tracking positions.
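The early-termination idea can be sketched as follows (an illustrative reconstruction of our own, not the C44 implementation: the function name, data layout and cost measure are assumptions): a candidate start line is abandoned as soon as its accumulated mismatch count can no longer beat the best candidate found so far.

```python
# Sketch of Find Start Position on binarised images: slide the live
# window down the reference, counting mismatched pixels row by row,
# and abort a candidate early once it exceeds the current best cost.

def find_start(live_window, reference):
    """Return the reference row where the live window best matches.
    live_window, reference: lists of equal-width binary row lists."""
    h = len(live_window)
    best_line, best_cost = 0, None
    for r in range(len(reference) - h + 1):
        cost, aborted = 0, False
        for i in range(h):
            cost += sum(a != b for a, b in zip(live_window[i], reference[r + i]))
            if best_cost is not None and cost >= best_cost:
                aborted = True  # early termination: cannot beat best match
                break
        if not aborted:
            best_line, best_cost = r, cost
    return best_line
```

A minimised search field would simply restrict the range of `r`; the two techniques together keep the one-off start-up search short.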
4.3.2. Tracking algorithm
After the start position has been located and the processors know which line the
camera is about to scan with respect to the reference image, the tracking algorithm
�registers� each successive line-scanned with the corresponding line in the reference
image, continuously and in real-time. The tracking algorithm scans a portion of the
current 'live' line, and finds its best match from among three candidate 'corresponding
lines' from the reference image. The three candidate lines are (a) the line
found for the last match, (b) the next successive line in the reference image, (c) the
line after that. We term these three choices of line 'stay', 'step' and 'skip' respectively.
Normally the registration process would be expected to step line-by-line through the
reference image, but stay and skip cycles allow the length to be adjusted for local and
overall best match.
A decision rule provides stability by preventing successive stay or skip cycles. We
term this process the stay, step, skip (sss) routine.
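The sss decision can be sketched in a few lines (the scoring inputs and the exact form of the stability rule are simplified assumptions on our part):

```python
# Minimal sketch of the stay/step/skip routine: pick the best-matching
# of the three candidate reference lines, but enforce the decision rule
# that a stay or skip cycle is never immediately repeated.

def sss_step(match_stay, match_step, match_skip, last_action):
    """Choose how far to advance through the reference image for the
    next live line: 0 = stay, 1 = step, 2 = skip one reference line."""
    if last_action in (0, 2):
        return 1  # stability rule: a stay or skip must be followed by a step
    scores = {0: match_stay, 1: match_step, 2: match_skip}
    return max(scores, key=scores.get)
```

In normal operation step wins almost every cycle; occasional stay or skip cycles locally shorten or lengthen the registration to absorb length variation in the lace.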
The tracking algorithm also gives the best-matched lateral position. The best-
matched lateral positions of the right and left hand sides can provide information
about lateral stretch as well as lateral shift, which can be used to correct the live
image before actual comparison. To avoid rapid shifts in the tracking path the po-
sitions found by the line-by-line correlation based routine are damped by taking into
account the 'history' using computationally inexpensive IIR filters.
4.3.3. Image subtraction
Once the live image lines are in synchronisation with the reference image, the two
can be subtracted and the result displayed through the Oculus F/64's GSP and
monitor. Since we are dealing with binary images a simple XOR function is used to
generate a subtracted image. Fig. 10 shows the result of subtraction in real time.
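Because both images are binarised, the subtraction is literally a per-pixel XOR: identical pixels cancel and only differences survive. A minimal sketch (pure Python here; the original ran on the DSP/GSP hardware):

```python
def subtract_lines(live_line, ref_line):
    """XOR one binarised live line against its reference line:
    1s mark pixels that differ (candidate fault or noise pixels)."""
    return [a ^ b for a, b in zip(live_line, ref_line)]

def subtract_image(live_lines, ref_lines):
    """Apply the per-line XOR to a whole registered image."""
    return [subtract_lines(l, r) for l, r in zip(live_lines, ref_lines)]
```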
To achieve superior error detection capability, we are employing three ap-
proaches, in sequence:
• Global geometric correction using a feedback approach.
• A local matching operation before subtraction.
• Morphological filtering: erosion of the subtracted image.
4.3.4. Feedback approach
A 'feedback' approach is used to make use of the information collected from the
tracking routine to correct lateral scale and offset in the live image before subtrac-
tion. The lateral tracked positions from right and left sides of the image can be used
to estimate the change in the lateral scale and offset, as shown in Fig. 11. The scale
factor can now be defined as x/xref, while the offset can be estimated by comparing
the tracked path location and its corresponding reference track path position, e.g. if
the tracked path positions are x1 and x2, while the reference track path positions are x1ref
and x2ref respectively, with the centre being 0, then:

offset = [(x1 − x1ref) + (x2ref − x2)]/2

(where a negative sign indicates a shift to the right and vice versa).
The 'feedback approach' involves using the above-mentioned information for every
scanned line to correct the one succeeding it. The scale and offset estimates are fed
back to the C40 which 'rescales' (it actually resamples for computational speed) and
offsets the next line accordingly (using a proportional + integral control strategy).
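A sketch of this correction step, using the symbols from the text (the class, the interpretation of x1 and x2 as unsigned distances from the image centre to the left and right tracking paths, and the gain values are our assumptions):

```python
# Illustrative lateral corrector: estimate scale (x/xref) and offset
# from the tracked left/right path positions of one line, and feed
# them back through a simple proportional + integral law to correct
# the next line.

class LateralCorrector:
    def __init__(self, kp=0.5, ki=0.1):
        self.kp, self.ki = kp, ki
        self.offset_integral = 0.0

    def update(self, x1, x2, x1ref, x2ref):
        # x1, x2: distances from the centre to the left/right tracked
        # paths; x1ref, x2ref: the same distances in the reference.
        scale = (x1 + x2) / (x1ref + x2ref)           # x / xref
        offset = ((x1 - x1ref) + (x2ref - x2)) / 2.0  # negative = right shift
        self.offset_integral += offset
        correction = self.kp * offset + self.ki * self.offset_integral
        return scale, correction
```

Each scanned line thus corrects the one that follows it, which is what keeps the search field of the tracking correlation small.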
Fig. 10. Image subtraction in real time: Arrows indicate potential faults. The black arrows at the top
indicate the tracking path positions, plotted to these output images for information as a thin line of white
pixels. 'Ladders' at the edges of the image are diagnostic output indicating status of the sss routine.
Fig. 11. Lateral width and scale adjustment.
In this way the image is constantly adjusted for best global geometric conformance
but with low computational effort.
This closed-loop approach has significant benefits in terms of processing efficiency
because the field of search for the cross-correlation based tracking algorithm can
be reduced considerably (with big savings in computation time) whilst the lace
(and hence the tracked paths) can still move sideways through a large displacement
range.
4.3.5. Local matching
Even after the feedback based global matching process there is a considerable
amount of (nonfault) local difference between live image and reference. This is fur-
ther reduced by local matching. This is currently also being implemented by corre-
lation techniques, but it is a computational bottleneck and an incremental approach
based on simulated annealing or a single-pass filter type of algorithm is being sought.
4.3.6. Morphological filtering
The subtracted image in Fig. 10 is filled with noise, hindering the detection of
actual faults. Erosion is applied to remove noise and detect the actual defects. Fig. 12
shows the result of 3 pixel 'disk' erosion on Fig. 10. The faults are now clearly visible
(encircled).
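The effect of the disk erosion can be reproduced with a few lines of pure Python (an illustrative sketch; the production version ran on the DSPs, and libraries such as scipy.ndimage offer the same operation ready-made):

```python
# Binary erosion with a disk structuring element: an output pixel is 1
# only if every pixel under the disk is 1, so thin "ghost" residue from
# the XOR image vanishes while larger fault blobs survive.

def disk_offsets(radius):
    return [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius]

def erode(image, radius):
    """Erode a 2D list-of-lists binary image with a disk of given radius."""
    h, w = len(image), len(image[0])
    offsets = disk_offsets(radius)
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = 1 if all(
                0 <= i + dy < h and 0 <= j + dx < w and image[i + dy][j + dx]
                for dy, dx in offsets) else 0
    return out
```

Isolated single-pixel noise is removed by even a radius-1 disk; the 3-pixel radius used in Fig. 12 also suppresses the thicker ghost-image residue.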
4.4. Discussion
A multi-DSP approach has been presented for automatic inspection of deform-
able textile webs, such as lace, in real time using machine vision techniques. A direct
comparison of the live image of the lace with a perfect prototype reference image, on
an incremental, line-by-line, basis is adopted. The effect of inherent distortions in the
Fig. 12. Erosion of the image shown in Fig. 10 with a disk of 3 pixel radius. The faults are encircled for
clarity.
lace, which can cause false alarms and excessive noise in the subtracted image, is
minimised by: a longitudinal correction algorithm, which chooses the best match among three candidate positions; a closed-loop feedback algorithm, which corrects the scale and offset of the next line based on the tracking information from the current one; a localised matching algorithm; and, finally, morphological filtering
to remove the residual noise after image subtraction. The system is performing very
successfully and development work is under way to further increase inspection speed.
The multi-DSP approach, whilst the only way of obtaining sufficient processing
power at the commencement of the project, makes development very difficult. Load
balancing is also problematic. The next version of the system will be based on a
modern high-performance single processor. This system uses the visual information
in a (computational) feedback loop to adjust image scale and translation. Further developments, successfully demonstrated by Yazdi and King [12], will implement a mechatronic (hardware) feedback system to drive a camera rotation system. This will improve image matching by removing the effects of 'skew' in the lace.
5. On-line fault detection and rectification in inkjet printing
This case study describes an ongoing project to implement improvements to the
speed, reliability and quality of the digital printing process. The approach taken is based on the concept of checking the printed textile during the printing process and
using a flexible machine topology that allows errors to be recovered in real time
without wastage. Development of a low cost vision system for digitally printed
textiles will provide a means to check the product during printing and take corrective
measures when necessary. The potential for high-speed, high-reliability manufac-
turing is greatly increased with the inclusion of a vision system.
5.1. Vision and recovery for nozzle failures
The potential of inkjet technology for printing colour images on a wide range of
new surfaces has quickly been recognised. This is particularly apparent for wide
format designs. Speed and reliability are two important factors that can be developed to improve production printer results. Nozzle blocking can be a serious problem when using exotic inks and media. Imperfect prints mean wasted time, materials and energy. The problem has been particularly evident in the textile industry, where attempts to inkjet print textiles with specialist inks have proven problematic. Research
at Leeds addresses these problems with emphasis on the development of a vision and
control system that enables detection and rectification of faults. Using CCD arrays
at either side of each print head and an appropriately tuned illumination source, live
images can be processed to detect blocked nozzles. Results can be reported to a control system for on-line rectification. Colour line-scan technology is still expensive
in comparison to the technology used in desktop scanners. Work is being undertaken
to create hybrid-scanning devices that use low cost linear arrays that potentially
allow each head to have its own independent detection system.
5.2. Drop-on-demand (DOD) technology
A typical DOD head has between 48 and 256 nozzles [15] and prints drops with volumes, depending on the application, of between 30 and 5000 pl [16]. Ejected drops hit the surface of the textile at frequencies of between 6 and 15 kHz and create dots 50–300 µm in diameter. In the majority of contemporary printers, lines of dots are printed side by side on each pass at about 200–600 µm intervals. The number of lines printed
coincides with the number of nozzles on the head. Like other inkjet printers, current
textile printers utilise a multiplicity of ink colours (e.g. the Encad has 4 and the FabriJet XII 12 colours) to achieve a large gamut of colour shades. Virtually every digitally printed image includes several layers of colour inks (CMYK+), which are produced in several passes (2, 4, 8 or 12).
5.2.1. Inkjet image build-up
The inkjet printer works on an x, y architecture, building up the image by traversing the head across the substrate, forming a stream of dots across the width of the substrate (x), and then stepping the substrate forward ready for the next traverse of the head (y) [16]. These repeated traverses and steps form the 2D array of drops that
constitute the image. The location of the CMYK+ drops is calculated by the raster
image processing (RIP) software. When a digital image is being sent to the printer,
RIP software algorithms convert each pixel of the original image from an intermediate tone directly into a matrix of binary dots, based on a pixel-by-pixel comparison of the original image with an array of thresholds [17,18] (Fig. 13). By splitting the matrix up for the different passes of the head, a resultant train of pulses is sent directly to the nozzles.
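The pixel-by-pixel threshold comparison performed by the RIP can be illustrated with ordered dithering against a tiled threshold matrix. This is a sketch only: the 4 × 4 Bayer matrix is a standard textbook example [17], not necessarily the array of thresholds any particular RIP uses.

```python
# Classic 4x4 Bayer ordering, to be scaled to the 0-255 intensity range
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone(gray):
    """Convert an 8-bit greyscale image into binary dots: each pixel is
    compared with the threshold at the corresponding position of the
    tiled matrix, and a dot is fired only where the pixel exceeds it."""
    h, w = len(gray), len(gray[0])
    return [[1 if gray[y][x] > (BAYER4[y % 4][x % 4] * 255) // 16 else 0
             for x in range(w)]
            for y in range(h)]
```

A mid-grey region thus prints roughly half of its dot positions, reproducing the intermediate tone as a spatial density of binary drops.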
5.2.2. The BIP reference image
Each nozzle has a two-dimensional array of binary codes: the binary image pattern (BIP). This represents when, and when not, to print a drop for the whole of the image. By monitoring the signals sent to the print head, the BIP for every nozzle can be recorded as a reference pattern. This reference is then cross-referenced with what the vision system detects (see below).
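Tapping the drive signals to record the BIP might be sketched like this. The sketch is hypothetical: the class and method names are invented, and a real implementation would capture the pulse train in hardware rather than in Python.

```python
class BIPRecorder:
    """Accumulates, per nozzle, the binary pattern of fire/no-fire
    commands observed on the print-head drive lines."""

    def __init__(self, n_nozzles):
        self.bip = [[] for _ in range(n_nozzles)]

    def on_fire_cycle(self, pulses):
        """pulses: one 0/1 value per nozzle for a single ejection cycle."""
        for nozzle, fired in enumerate(pulses):
            self.bip[nozzle].append(fired)
```

After a print run, `bip[n]` holds the reference pattern against which the scanned result for nozzle `n` is later compared.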
Fig. 13. RIP software changes the digital image into binary signals.
5.3. Machine vision system
5.3.1. 'Before' and 'after' scanners
The system under development uses low cost linear CCD arrays to implement an
on-line vision system for detecting printing imperfections. The concept is based on
using 600 dpi monochrome arrays and a selected wavelength of illumination to identify the ink on the substrate. As illustrated in Figs. 14 and 15, the system utilises two arrays, one at each side of each print head, and illuminates the printed textile with
light of the complementary colour to that of the ink being printed. Printed images
can be checked continuously during the printing process. During carriage movement
the scanners capture the 'before' and 'after' printed images. The substrate is illuminated in a particular colour to detect specific colour inks. Changes in grey level values between 'before' and 'after' show where drops have been printed. Fig. 15 illustrates that the Spectra head is of an ideal length to match the 130 mm scanners and so monitor a full set of 256 nozzles.
5.3.2. Illumination
To detect a particular colour of ink it is useful to just look at a single colour
channel (i.e. that of the colour being printed and therefore that being detected). This
enables other coloured drops on the substrate to be suppressed which is crucial if a
monochrome CCD device is to be used. The volume of data to read is very high in
this application, and it is important to reduce the information at source to a minimum: scanning in a single channel (compared with, say, 3-channel RGB) is very advantageous in both speed and cost.
Fig. 14. Mono-colour print head with before and after scanners.
Fig. 15. Spectra print-head (top) and linear array.
Fig. 16. Effect of white and red illumination on colour palette.
Fig. 17. CMY printed substrate with white (left) and red (right) illumination.
It is known that a surface illuminated by a particular bandwidth of light will only
reflect that particular colour [19]. This is demonstrated in Fig. 16, where the effect of
illuminating RGBW and CMY samples with a red LED is shown. In the RGB palette the red and white reflect the red light; however, the green and blue (colours containing no red) do not. In the CMY sample the cyan (being the complement of red) does not reflect, while the yellow and magenta show some reflection, as they both contain some red.
To ensure that single-channel detection would work in practice, experimental work was carried out using the linear scanner and a calibrated light source (LED) to detect printed images in accordance with RGB colour representation and traditional colour theory [19]. This is illustrated in Fig. 17, where a primary colour light (red) is used to illuminate the colour dots on pre-treated cotton. There was no light reflected from the colour's complement (cyan). Similar results are found when using blue and green sources with their corresponding complements.
5.3.3. Digitally controlled illumination
To illuminate the sample in the required colour it is convenient to have a system
that can be digitally controlled. A high-speed digital illumination device has been
developed that can combine red, green and blue LED light sources. As illustrated in
Fig. 18, each colour is pulsed at high speed with differing on/off times in order to effectively mix differing ratios of the three colours (RGB) together. The resulting combination of digital light pulses will be integrated by the CCD sensor with the
Fig. 18. Digitally pulsing RGB sources allows the intensity to be varied.
resultant effect being of illuminating in a specifically controlled colour. This colour
can be tuned digitally to detect specific colour inks.
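Mixing a target colour by pulsing the three LEDs reduces to a duty-cycle calculation. The sketch below assumes, purely for illustration, a linear LED response and a fixed PWM period; the function name and the 100 µs period are inventions, not the system's actual parameters.

```python
def led_on_times(rgb, period_us=100):
    """Convert a target RGB ratio (0-255 per channel) into LED on-times
    within one fixed PWM period.  The CCD integrates the pulses over its
    exposure, so the perceived illumination colour is set by the ratio
    of the three on-times."""
    return tuple(channel * period_us // 255 for channel in rgb)
```

Tuning the detector to a different ink colour is then simply a matter of writing a new RGB triple, with no change to the optics.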
5.4. Test rig and control system
A number of test rigs have been developed. A computer controlled scanning table
can be used to move the arrays across the substrate at a pre-programmed speed. A
print head, situated between the arrays, can eject drops in response to the encoder.
Illumination is provided by high intensity LEDs.
A sophisticated control system is necessary to control the very large volumes of
data being transferred around the system. A high performance Pentium 4 with a
series of high-speed interface cards and frame grabbers is used. The control software
is being developed in Visual C++ and is responsible for synchronising the system, controlling the position of the print head and sending image information to it, as well as reading and processing the information from the heads.
5.5. Detection and image processing
The outputs of the two linear arrays are digitised, subtracted from one another,
and then compared with the BIP reference file. As discussed earlier, every nozzle
will have a BIP array dictating what is to be printed and where. This provides the data to cross-reference with the subtracted on-line scanner data. After processing the data into a suitable format, the condition of the nozzles is determined. Possible
errors due to any blockages can be estimated. If the number of blocked nozzles
implies an imperfect print, the head can be taken out of service, a cleaning cycle can
be initiated, or another print head takes over responsibility for finishing the
printing process.
5.5.1. Steps in the process
• For every nozzle, depending on the type of substrate and the resolution of printing, 3–5 sensors are needed to sense each drop. For example, a current high-specification linear array might have 3200 pixels and run with a clock frequency of 6 MHz, outputting data serially at twice this speed (12 MHz). This yields a line-scan rate of 3.75 kHz (12 MHz/3200 pixels), which is less than the drop ejection frequency, so that less than one sample per drop would be taken. Off-line tests suggest that 3–5 samples will be required for reliable detection of the dots, so that each drop is imaged as 3 × 3 or 5 × 5 pixels. However, with scanner technology becoming faster, it is anticipated that scanners of the required specification will be available in the near future. For the purposes of testing the vision system it is possible to run the head at a lower speed.
• The analogue outputs from the scanners are digitised using an eight-bit A/D con-
verter.
• Depending on the width of the print head and the width of scanners there will be a
delay between receiving 'before' and 'after' information. The digitised data for the 'before' scanner is saved in a buffer and used at the appropriate time.
• Each set of 3 or 5 lines that make up a column of 'after' dots is stored together as a new frame and subtracted from the appropriate frame in the 'before' image.
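The sampling-rate argument in the first step above can be checked with a few lines of arithmetic, using the figures quoted there (3200 pixels, 12 MHz data rate, a 6 kHz lower bound on ejection frequency, and at least 3 samples per drop):

```python
def line_rate_khz(pixels_per_line, data_rate_mhz):
    """Maximum line-scan rate in kHz: serial pixel data rate divided by
    the number of pixels clocked out per line."""
    return data_rate_mhz * 1000.0 / pixels_per_line

rate = line_rate_khz(3200, 12.0)   # the paper's example: 3.75 kHz
samples_per_drop = 3               # lower bound from the off-line tests
drop_freq_khz = 6.0                # lower end of the 6-15 kHz ejection range
required = drop_freq_khz * samples_per_drop   # 18 kHz would be needed
```

The required rate exceeds what the example array delivers by roughly a factor of five, which is why the head is run slower for testing pending faster scanners.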
5.5.2. Processing the subtracted image to binary
This is achieved by segmenting the subtracted image [20–22] to find the number of pixel values in the 5 × 5 matrix that are nonzero. If a sufficient number of cells have nonzero values, this indicates the existence of a dot. To reduce memory requirements, the 5 × 5 matrix of eight-bit pixel values is then converted to a binary image.
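The nonzero-count decision can be sketched as follows; the threshold of 5 nonzero pixels is an assumed value for illustration, as the paper does not state the count it uses.

```python
def detect_dot(window, min_nonzero=5):
    """Decide whether a 5x5 patch of the subtracted image contains a
    printed dot: count the nonzero pixels and threshold the count.
    Returns 1 (dot present) or 0, i.e. one bit of the binary image."""
    count = sum(1 for row in window for value in row if value)
    return 1 if count >= min_nonzero else 0
```

Each 25-byte patch thus collapses to a single bit, which is the memory reduction the text refers to.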
5.6. Results
The results shown here demonstrate the functionality of the scanner. Fig. 19
provides photographic images of a substrate printed first with cyan drops then with
yellow drops. The overlapping areas are of most importance, as these are the most difficult to detect due to the second colour being masked by the undercoat.
Fig. 20 shows how the images from the arrays compare with the photographic images. Magenta drops are pre-printed and cyan ones laid down between 'before' and 'after' scanning. Illumination is with green light. The cyan drops are only seen faintly, but the magenta is very noticeable on the 'after' scan. Subtraction leaves light areas where each of the magenta drops was printed. These become white binary values after the final stage of processing. Each binary value can be matched to a drop in the photographic image, showing that the system is working correctly.
Fig. 19. Cyan drops (left); cyan drops over-printed with yellow drops (right).
Fig. 20. The 'before' and 'after' arrays scan pre-printed cyan drops, but with a coat of magenta printed between the scans, under green illumination. The images are then subtracted and each drop is processed and thresholded to create a BIP file.
5.6.1. Discovering if drops are missing
As discussed earlier, the image is sent to the nozzles as a set of reference BIP signals (the array of what should have been printed). A second set of data is available from the BIPs generated from the scanned prints (what was actually printed). The final stage is a comparison of these two sets of BIP signals to check the performance of every individual nozzle and, hence, detect any blocked ones. After each series of checks, the history of each nozzle can be analysed statistically; where a nozzle is deemed to be failing too frequently, the head will either be taken out of service for a cleaning cycle or another print head on the same guide rail [23] takes over responsibility and the printing process continues, albeit at a reduced rate since there are now fewer active heads.
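The final BIP comparison and the statistical flagging of failing nozzles might be sketched as below. The miss-rate statistic and its 10% threshold are assumptions for illustration; the paper does not specify the decision rule used.

```python
def failing_nozzles(reference_bip, detected_bip, max_miss_rate=0.1):
    """Compare, per nozzle, the reference BIP (drops commanded) with the
    detected BIP (drops seen by the scanners) and flag any nozzle whose
    fraction of missed drops exceeds the threshold."""
    flagged = []
    for nozzle, (ref, det) in enumerate(zip(reference_bip, detected_bip)):
        commanded = sum(ref)
        if commanded == 0:
            continue  # nozzle never fired: nothing to judge
        missed = sum(1 for r, d in zip(ref, det) if r and not d)
        if missed / commanded > max_miss_rate:
            flagged.append(nozzle)
    return flagged
```

The flagged list is what the control system would act on: triggering a cleaning cycle or handing the work to another head.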
5.7. Discussion
Prototyping of the vision system has been successful so far with results for a
working vision system looking promising. Current restrictions on array density mean
that the system is less reliable at detecting drops below 150 microns in diameter.
These restrictions will be overcome with the anticipated improvements in array technology. There is also work to be done to increase processing and scanning speeds to competitive rates. The next-phase development rig, currently under construction,
encompasses improved physical tolerances, increased size and integration with a
recovery design.
6. General discussion and conclusions
As illustrated in these four case studies, the development of real-time 'incremental' algorithms for use with image data obtained from line-scan imaging devices, and their implementation using appropriate computing hardware, has
enabled novel fabric processing machines, developed using a mechatronic design
philosophy, to be successfully constructed. It is believed that the techniques de-
veloped may be applicable in other situations where deformable materials are
processed in web form. Indeed, application of an incremental imaging and image
processing strategy can be essential for such situations. Where the product may be
distorted or moved unpredictably during the manufacturing process in which it is being imaged, much of the information in an areal (2D) image would become inaccurate before it is processed. Webs are also continuous, so that the adoption of a
continuous image processing 'flow', rather than a 'frame-by-frame' approach,
avoids the computational inefficiencies caused by the need to register successive frames to one another, and to provide overlaps so that features close to the edges of adjacent frames are correctly interpreted.
During the period in which these projects were undertaken there have been significant developments in line-scan imaging and computational technologies. For the first case study, even the line-scan camera had to be constructed from component level. For the second, a proprietary analogue line-scan camera was available, but the DSP interface and electronic exposure control had to be specially developed. For the later case studies, digital line-scan cameras and 'frame-grabbers' capable of accepting line-scan camera inputs were readily available, simplifying hardware integration
considerably. But line-scan imaging integrated into industrial systems is still practised on a relatively small scale (certainly by comparison with mass-market line-scan applications such as office document scanners). Further uptake is inhibited by several factors. Equipment prices have not benefited from the economies of mass-production as in areal imaging. Proprietary interfaces for line-scan cameras are
up into a 2D frame of finite length before image processing. Whilst this approach can
even offer advantages over areal cameras for imaging some moving objects (very high
resolution, simplified lighting and depth-of-field requirements), it negates the advantages of incremental processing illustrated in the case studies presented here. The problem is exacerbated by the very limited availability of off-the-shelf image processing tools for incremental processing. Impressive libraries of frame-based routines
are readily commercially available, well integrated with frame-grabbing hardware
and easily built into sophisticated image processing systems. Almost none of this is
any help in implementing an incremental processing system, for which available off-
the-shelf software tends to be aimed at very basic functions such as strip width or
colour consistency monitoring. Unfortunately, as computational systems have become more sophisticated, their software has also become more proprietary, so that the difficulty, and cost, of developing novel approaches can be prohibitive. Industrial line-scan applications often also involve 'hard-real-time' system responses, whilst the readily available frame-based image processing libraries are primarily available for
Windows operating systems, for which a different concept of 'real-time', sometimes adequate for frame-rate processing, applies.
In conclusion, whilst the last few years have seen huge advances in some applications of digital imaging, progress in others has been more difficult. Vision-in-the-loop will be essential in enabling our mechatronic machines of the future to reliably
assist or replace humans in a multitude of tasks ranging from vehicle driving to
manufacturing. In some of these applications, incremental processing approaches
may provide a way in which high-performance real-time systems can be implemented
with manageable computational resources.
Acknowledgements
Support from the EPSRC for all four projects described in the case studies is gratefully acknowledged, as is the contribution made by industrial partners including
Guy Birkin, Shelton Vision Systems, Brook International, Marks and Spencer, Cha
Technologies, Samuel Bradley, Guilford Europe, Coats Viyella Home Furnishings,
Zephyr Flags and Banners, and Franklins Textiles. Thanks are also due to the nu-
merous co-workers and colleagues who developed the systems described including:
Professor Gordon Wray, Professor Ray Vitols, Eddie Baker, Brian Murphy, Dr.
Michael Jackson, Dr. Sen Yang, Dr. Liguo Tao, Dr. David Hodgson, Dr. Hasan
Ekerol, Peter Witty, Dr. Hamid Yazdi, Umer Farooq, Dr. Abbas Dehghani, Duncan Borman, and Farzad Jahanshah.
References
[1] King TG, Murphy BJM, Vitols R. Low cost, high speed sensing of knitted fabrics. Sensor Rev
1985;5(3):119–23.
[2] Vitols R, Wray GR, Murphy BJM, Baker JE, King TG. Computer controlled machinery for garment
manufacture. Proc Textile Inst Conf Comput World Textiles. Hong Kong: 1984. p. 284–96.
[3] Vitols R, Murphy BJM, Wray GR, Baker JE, King TG. The development of computer controlled
machinery for the making-up of garments. IEE Proc. 132, Pt. D, No. 4, 1985. p. 178–82.
[4] Preston ME, King TG, Wray GR, Vitols R, Murphy BJM. Mechatronics Applied to the Manufacture
of Knitted Garments. Proc Mechatronics 89––Mechatronics in Products and Manufacturing.
Lancaster: 1989. p. 7.
[5] Preston ME, King TG, Vitols R, Murphy BJM. A mechatronic system for knitted fabric handling.
Proc IMechE/IEE Conf Mechatronics: Designing Intelligent Machines, Cambridge, Mechanical
Engineering Publications; ISBN 0 85298 722 6, 1990. p. 17–22.
[6] King TG, Tao LG, Jackson MR, Preston ME, Yang S. Computer-vision controlled high-speed laser
cutting of lace. Proc ICCIM'93, vol. 2. Singapore: World Scientific Publishing; ISBN 981-02-1947-4,
1993. p. 929–36.
[7] King TG, Tao LG, Jackson MR, Preston ME. Real-time tracking of patterns on deformable
materials using DSP. IEE SERTA'93. Cirencester, UK: ISBN 0-85296-5931. p. 178–83.
[8] King TG, Tao L. An incremental real-time pattern tracking algorithm for line-scan camera
applications. Mechatronics 1994;4(5):503–16.
[9] Xu B. Identifying fabric structures with fast Fourier transform techniques. Textile Res J 1996;66(8):
496–506.
[10] Norton-Wayne L. Inspection of lace using machine vision. Comput Graphic Forum 1991;10:113–9.
[11] Sanby C, Norton-Wayne L, Harwood R. The automatic inspection of lace using machine vision.
Mechatronics 1995;5(2/3):215–31.
[12] Yazdi HR, King TG. Application of vision in the loop for inspection of lace fabrics. Real Time
Imaging 1998;4:317–32.
[13] Yazdi HR. Automatic visual inspection of lace. PhD Thesis, University of Birmingham, 1999.
[14] Finney B. Applying real time machine vision. Coreco Imaging, Inc., 55 Middlesex Turnpike, Bedford, MA 01730, USA, 2001.
[15] Baldwin H. High Performance Piezo Ink Jet for Printing Textiles, Downers Grove, USA, FESPA
Digital Textile, 1999.
[16] Pond S. Ink-jet technology and product development strategies. Torrey Pines Research, 2000.
[17] Lau DL, Arce GR. Modern digital halftoning. New York, Basel: Marcel Dekker, Inc.; 2001.
[18] Girma B. Latest development in RIP technology. California, USA: FESPA Digital Textile; 1999.
[19] Ames J. Color theory made easy. New York: Watson-Guptill Publications; 1996.
[20] Jain R, Kasturi R, Schunck BG. Machine vision. McGraw-Hill International Edition; 1995.
[21] Embree PM, Kimble B. C language algorithms for digital signal processing. Prentice Hall; 1991.
[22] Gonzalez RC, Wintz P. Digital image processing. Reading, MA: Addison-Wesley; Advanced Book Program, 1977.
[23] Borman DJ, Jahanshah F, King T, Dehghani AA, Dixon DA. Mechatronic system topology and
control for high-speed, high-reliability textile inkjet printing. Proc Mechatronics 2002. University of
Twente; ISBN 90 365 17664 (CD-ROM), June 2002. p. 310–19.