FABRIC YARN DETECTION BASED ON
IMPROVED FAST R-CNN MODEL
Haiyan Xu*
Business School, Yangzhou Polytechnic Institute, Yangzhou, Jiangsu, 225127, China
hyxu_ypi@163.com
Reception: 16/11/2022 Acceptance: 09/01/2023 Publication: 06/03/2023
Suggested citation:
Xu, Haiyan (2023). Fabric yarn detection based on improved Fast R-CNN model. 3C TIC. Cuadernos de desarrollo aplicados a las TIC, 12(1), 287-306.
https://doi.org/10.17993/3ctic.2023.121.287-306
ABSTRACT
With the rapid development of modern computer technology and its gradual integration with the textile industry, computer methods are being applied ever more widely in the textile field, pushing textile production toward automation. This paper proposes an automatic detection method for the density of simple weave fabrics based on computer image vision. Computer vision and digital image processing techniques are used to analyze and identify the warp and weft yarn information of simple weave fabrics and to calculate the fabric density. To avoid skew of the warp and weft yarns, a fabric skew correction method based on the Radon transform is proposed. The optimal wavelet decomposition orders of the four sample fabrics are k = 2, k = 2, k = 5, and k = 3, where k denotes the decomposition series. The relative error of both the warp and the weft density is found to be about 1.00%. Most of the optimal decomposition series determined by the correlation coefficient curve method are consistent with the results of the energy curve method. However, the relative error of the density test results for fabrics No. 3, No. 6, and No. 7 is higher than 10%, with fabric No. 3 reaching 66%, indicating serious errors in the warp and weft density of these three fabrics. Corresponding algorithms are applied to solve the problems encountered in simple weave fabric density detection, and good results are finally obtained, verifying the feasibility of the method. This is significant for realizing automatic measurement of fabric density in textile factories.
KEYWORDS
Fabric yarn; Fast R-CNN algorithm; Visual inspection; Image processing; Wavelet
decomposition.
PAPER INDEX
ABSTRACT
KEYWORDS
1. INTRODUCTION
2. IMPROVED FAST R-CNN ALGORITHM
2.1. Basic structure and characteristics of convolution neural network
2.2. R-CNN series algorithm
2.3. Yolo algorithm
3. SIMPLE WEAVE FABRIC DENSITY DETECTION BASED ON WAVELET
TRANSFORM
3.1. Wavelet transform
3.2. Wavelet transform of simple weave fabric
3.3. Determination of optimal wavelet decomposition series of fabric image
4. SIMPLE WEAVE FABRIC DENSITY TEST RESULTS AND ANALYSIS
4.1. Calculation of warp and weft density of simple weave fabric processed by
computer
4.2. Analysis of experimental results
5. CONCLUSION
DATA AVAILABILITY
CONFLICT OF INTEREST
REFERENCES
1. INTRODUCTION
With the rapid development of modern computer technology and its gradual integration with the textile industry, computer methods are being applied ever more widely in the textile field, pushing textile production toward automation. Structural parameters such as fabric weave, density, and colored-yarn arrangement are important for detecting and controlling textile quality. At present, most factories and enterprises in the textile industry still rely on manual sample analysis and detection of fabric weave with the aid of fabric magnifiers, which is subjective, time-consuming, labor-intensive, and error-prone. Therefore, using computer image processing technology to replace manual work, realize intelligent fabric density detection, improve industrial production efficiency, and achieve automated, intelligent production of textile products is of great significance.
In recent years, the rapid development of computers has brought computer vision technology into view, and it has received much attention. With continued research and exploration, computer vision keeps developing, and image processing technology has been widely applied. In textile testing in particular, computer vision technology is also used, making the textile industry more intelligent and efficient. Shukla et al. collected transmission and reflection images of fabric samples based on optical principles, calculated the autocorrelation value of each row and column of the image with the autocorrelation function after preprocessing, and processed and analyzed the transmission and reflection images separately to obtain the fabric texture parameters [1]. The fabric structure was then determined by scanning the length and weft of each row [2]. Raj et al. reduced the gray-level range of the image through histogram equalization, constructed the gray-level co-occurrence matrix according to the pixel spacing and angle changes, calculated its eigenvalues, and obtained the fabric density parameter through periodic calculation [3]. The autocorrelation function has also been used to determine the position, density, and weave-point position of fabric warp and weft yarns; the data of the weave-point area were then input into a neural network that was trained repeatedly, and the fabric structure was finally identified by the neural network [4]. Wu and Cao used the gray projection method to obtain the gray projection curves in the warp and weft directions of the fabric image [5,6]. The warp and weft yarns were separated according to the positions and numbers of peaks and valleys of the gray projection curves, and the fabric warp and weft density was calculated. Trafton et al. first calculated the weft density of twill and satin fabrics by the gray projection method and then calculated the warp density through the relationship between the density of twill and satin fabrics and the fabric weave. Later studies found that warp and weft yarn inclination occurs easily when image processing is used to detect the warp and weft density [7-9]. Monfared, Xz, et al. proposed using the Hough transform to obtain the fabric tilt angle, then making a gray projection on the fabric along the tilt direction, and finally judging
the yarn gap according to the wave crest of the projection curve to calculate the fabric
warp and weft density [10,11].
Qin uses the MATLAB language to carry out a series of preprocessing steps on woven fabric images, performs wavelet decomposition and reconstruction to separate and extract the warp and weft yarn information, and then applies binarization and smoothing to obtain the distribution images of the warp and weft yarns; finally, the warp and weft density of the woven fabric is obtained through program calculation [12-15]. Shi et al. carried out multi-layer wavelet decomposition of woven fabric images through the wavelet transform, reconstructed single-layer signals, calculated the average brightness values of the images in the warp and weft directions, and finally calculated the warp and weft density according to the periodic change of the brightness signal [16,17]. Other work combined image processing technology with time-frequency transform theory, transformed the woven fabric image from the time domain to the frequency domain through the Fourier transform, and selected characteristic regions to filter and separate single groups of warp and weft yarn images; finally, an adaptive threshold method was used to locate the yarns, count the number of warp and weft yarns, and calculate the warp and weft density of the woven fabric [18-20]. The frequency spectrum of the fabric has also been obtained through a two-dimensional fast Fourier transform, and the warp and weft density calculated through the correlation between the characteristic changes of the spectrum and the fabric warp and weft density [21]. Using the Halcon algorithm library and machine vision technology, Niu processed the fabric image by the Fourier transform, analyzed it by the Gabor transform, and finally calculated the fabric warp and weft density from the wavelet transform results [22]. Barreto and Shi use the Fourier transform and the wavelet transform to process fabric images, analyze the spectrum characteristics, suppress interference information, and then transform the result back to the spatial domain through the inverse transform; finally, the fabric warp and weft density is calculated either by spatial-domain detection or from the correlation between spectrum features and warp and weft yarn density [23,24]. Le obtains the power spectrum of the woven fabric image by the Fourier transform, applies thresholding, and calculates the fabric warp and weft density using the relationship between the spatial domain and the frequency domain; the wavelet transform is then used to separate the warp and weft sub-images of the woven fabric, from which the warp and weft yarn density information is obtained after processing, and a computer program finally calculates the fabric warp and weft density automatically [25]. Others use the wavelet transform to reconstruct the spatial-domain fabric image according to the spectrum characteristics of the fabric image to detect the warp and weft density [26,27]. Some also use deep learning methods, training many samples after Fourier or wavelet transform processing of the fabric, to obtain stable automatic detection of fabric density [28].
To sum up, there has been a great deal of research and achievement in the automatic detection of fabric warp and weft density, but some problems have not yet been solved satisfactorily, and the methods have not been well applied in actual industrial production. This paper proposes a warp and weft yarn detection method based on an improved Fast R-CNN algorithm, which is of great significance for realizing automated and intelligent production of textile products.
2. IMPROVED FAST R-CNN ALGORITHM
Current deep learning algorithms use convolutional neural networks to recognize yarn features. When the number of training samples is large enough, their recognition accuracy and robustness are better than those of traditional image processing techniques, so they have good application prospects.
2.1. BASIC STRUCTURE AND CHARACTERISTICS OF
CONVOLUTION NEURAL NETWORK
A convolutional neural network (CNN) is a feedforward neural network with a deep structure. It performs local sampling and weight sharing within the network. Since convolutional neural networks can collect the spatial and channel information of feature maps simultaneously, they are mostly used in the backbone network of an algorithm for feature extraction tasks [29-30]. A CNN is generally divided into the following layers: convolution, pooling, activation, and fully connected layers. The core of the algorithm is to update the convolution kernel parameters adaptively and iteratively through automatic learning, so the computation of the convolution layer is particularly important. The main purpose of the convolution layer is to extract features of different receptive fields with convolution kernels of various sizes.
A CNN differs from a common neural network in that neurons in the current layer are connected only to a local region of the previous layer. This connection structure greatly reduces the number of connections in the network. A convolution kernel covers a local region and is slid over the whole feature map.
Weight sharing means that a single parameter controls multiple connections
without considering the position relationship of input data. A convolution kernel with
fixed internal weight parameters is used to process the whole graph by a convolution
operation. The convolution kernel is equivalent to the weight of the traditional network.
Each neuron of the traditional network has different weights, but the same set of
convolution kernels is used in feature processing, so the parameters of the
convolution neural network are shared.
Pooling is an important downsampling operation. Its principle is to do some simple
operations on the neurons in the convolution layer through the local correlation and
take the results as the input values of the neurons in the pooling layer. This operation
not only reduces the amount of calculation but also retains valuable information.
Common pooling operations include maximum pooling and average pooling. We can
select different pooling techniques according to the actual situation to prevent model
overfitting and improve network robustness.
The fully connected layer (FC) is located at the end of the network. Its function is to take the feature maps output by the previous layer and map all the features distributed across the earlier layers to the output sample space, reducing the influence of the target's location on classification accuracy. In an actual network, the fully connected layer can be implemented by convolution, and all feature expressions can be mapped to an output value by a convolution operation.
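As a concrete illustration of this structure, the following is a minimal sketch of a CNN with convolution, ReLU activation, max pooling, and a fully connected layer, written with PyTorch (assumed available); the layer sizes and the two-class output are illustrative placeholders, not the network used in this paper.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN: convolution -> ReLU -> pooling -> fully connected layer."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Convolution layer: 16 kernels of size 3x3 slide over the whole image,
            # using the same weights at every position (weight sharing).
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Max pooling: downsample by 2, keeping the strongest local response.
            nn.MaxPool2d(kernel_size=2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )
        # Fully connected layer: maps all feature activations to the output space.
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)      # (N, 32, 56, 56) for a 224x224 grayscale input
        x = torch.flatten(x, 1)   # flatten all feature maps per sample
        return self.classifier(x)

# Example: one 224x224 grayscale fabric image patch.
logits = TinyCNN()(torch.randn(1, 1, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```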
2.2. R-CNN SERIES ALGORITHM
In the development of deep learning, R-CNN was the first industrial-grade target recognition and detection algorithm, and it has become an important research direction in the target recognition and detection field. Fast R-CNN, Faster R-CNN, and other algorithms are based on R-CNN and continuously optimize and extend it to address the shortcomings of the previous generation. Through training and computation, the R-CNN algorithm filters the targets in the candidate regions, removes invalid feature regions, and then completes the corresponding classification according to the task requirements. Compared with one-stage target recognition and detection algorithms, the recognition error rate and miss rate of this family of algorithms are relatively low, but the recognition speed is relatively slow. In practical applications, different algorithms can be chosen for different target recognition and detection requirements.
R-CNN overcomes the limitations of traditional machine learning methods. The biggest contribution of the AlexNet network, which R-CNN builds on for feature extraction, is the introduction of the rectified linear unit (ReLU) activation function, which not only helps prevent overfitting but also shortens the training period by reducing computation.
The target classification method of the R-CNN algorithm is to classify the extracted features with an SVM and then evaluate the candidate regions with the non-maximum suppression (NMS) algorithm. High-scoring regions are identified as target regions, and overlapping, redundant regions are removed to obtain the regions most likely to contain targets. An important factor affecting the performance of a target recognition and detection model is whether objects can be located accurately. Because an inaccurate candidate box causes overlap-area errors, the candidate box must be corrected by regression before the final prediction box is generated.
The Fast R-CNN algorithm is an efficient target recognition and detection algorithm that builds on the R-CNN algorithm and uses a deep neural network. Fast R-CNN makes corresponding improvements to address the shortcomings of the R-CNN algorithm; it uses the output of the intermediate convolution layers of the VGG-16 network.
Figure 1. Structure of the VGG-16 network model
The Fast R-CNN algorithm can complete feature extraction, bounding-box regression, and classification at the same time, and its efficiency is far higher than that of the other algorithms in the R-CNN series. It does not need staged training and testing. First, it proposes a series of anchors with preset sizes through the region proposal network (RPN), then adjusts the anchor sizes several times through the training network and outputs the final target detection regression boxes. Since this paper optimizes this algorithm, its principle is analyzed in detail from three aspects: the network architecture, the RPN structure, and the loss function of the algorithm.
$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^{*}) + \lambda \frac{1}{N_{reg}}\sum_i p_i^{*} L_{reg}(t_i, t_i^{*}) \quad (1)$$
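For reference only, a standard two-stage detector with an RPN of the kind analyzed here can be exercised through torchvision (assumed installed); this is not the improved network trained in this paper, and the score threshold below is an arbitrary example. Depending on the torchvision version, the pre-trained weights are selected with weights="DEFAULT" or pretrained=True.

```python
import torch
import torchvision

# Load a reference two-stage detector (backbone + RPN + ROI heads).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy 3-channel image with values in [0, 1]; in practice a fabric image.
image = torch.rand(3, 512, 512)

with torch.no_grad():
    output = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

# Keep only confident detections (threshold chosen arbitrarily for the example).
keep = output["scores"] > 0.5
print(output["boxes"][keep], output["labels"][keep])
```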
2.3. YOLO ALGORITHM
YOLO (You Only Look Once) is a single-stage target recognition and detection framework, different from the two-stage recognition and detection idea. The YOLO algorithm first divides the given image into S×S cells, and each cell is responsible for detecting the targets whose centers fall within it. Compared with two-stage detection algorithms, the S×S cells play the role of the target regions of interest, so there is no need to generate candidate regions through a network such as the RPN, and the detection task can be completed in one step. The confidence of each predicted box is defined as:
$$C = \Pr(\text{object}) \times \mathrm{IOU}^{\text{truth}}_{\text{pred}} \quad (2)$$
Next, the C conditional class probabilities are predicted for each grid cell, and redundant bounding boxes are removed by non-maximum suppression (NMS) to obtain the best result:
$$\Pr(\text{class}_i) = \Pr(\text{class}_i \mid \text{object}) \times \Pr(\text{object}) \quad (3)$$
$$\Pr(\text{class}_i \mid \text{object}) \times \Pr(\text{object}) \times \mathrm{IOU}^{\text{truth}}_{\text{pred}} = \Pr(\text{class}_i) \times \mathrm{IOU}^{\text{truth}}_{\text{pred}} \quad (4)$$
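The NMS step referred to above can be sketched as a plain NumPy routine; the IoU threshold of 0.5 is only an example value.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """Greedy non-maximum suppression. boxes: (N, 4) as [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]        # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top-scoring box with the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap the kept box too much; keep the rest.
        order = order[1:][iou <= iou_thresh]
    return keep
```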
YOLO is a simple and fast end-to-end algorithm, but its recognition and detection accuracy is not high, its object localization is not precise enough, and it does not perform well on small targets or densely packed targets.
Table 1. Comparison of advantages and disadvantages of target recognition algorithms

Algorithm | Advantages | Shortcomings
R-CNN | 1. A CNN is proposed to extract features. 2. mAP on Pascal VOC increased from 35.1% to 53.7%. | 1. The training process is divided into stages and recognition is slow. 2. Consumes disk space.
Fast R-CNN | 1. With whole-image convolution and ROI pooling, the feature map needs to be computed only once. 2. Image distortion and redundant computation are reduced. | 1. The method of extracting candidate regions is computationally expensive and repetitive. 2. End-to-end training is not implemented.
Faster R-CNN | 1. Proposes the RPN. 2. A real end-to-end detection model. 3. Recognition accuracy and speed are greatly improved. | 1. The ROI pooling operation results in precision loss.
YOLOv3 | 1. Faster speed. 2. End-to-end model. | 1. Deviation in the accuracy of object position recognition. 2. Low recall rate.
3. SIMPLE WEAVE FABRIC DENSITY DETECTION
BASED ON WAVELET TRANSFORM
The wavelet transform is the inheritance and development of the traditional Fourier transform. It has a well-adapted time-frequency window, unlike the Fourier transform, whose window size cannot change with frequency. Wavelets can be used for multi-scale analysis, feature extraction, and analysis of the high-frequency and low-frequency information of an object, making the wavelet transform a new image
processing method. This study uses the wavelet transform to detect the warp and weft density of simple weave fabric images.
3.1. WAVELET TRANSFORM
Compared with the Fourier transform, the wavelet transform has better time-frequency window characteristics, which has attracted many experts and scholars to study it. Over the past one to two decades, the wavelet transform has developed rapidly and is widely used in many scientific and technological fields. The wavelet transform decomposes a signal into a series of wavelets obtained by scaling and shifting a mother wavelet. Compared with the Fourier transform, it overcomes three shortcomings: first, Fourier coefficients cannot change with frequency, whereas wavelet coefficients can; second, the wavelet transform can well reflect how the frequency content of a signal changes over time; third, it solves the problem of a fixed window size in the Fourier transform. Wavelet transforms mainly include the continuous wavelet transform and the discrete wavelet transform.
The admissibility condition of the basic wavelet $\Psi(x)$ is

$$C_\Psi = \int_{-\infty}^{+\infty} \frac{|\hat{\Psi}(x)|^2}{|x|}\,dx < +\infty \quad (5)$$

$$W_f(k, t) = \langle f(x), \Psi_{k,t}(x) \rangle = \frac{1}{\sqrt{k}} \int_R f(x)\, \Psi^{*}\!\left(\frac{x - t}{k}\right) dx \quad (6)$$

where $k$ is the decomposition scale and $t$ is the displacement length. From the basic wavelet function $\Psi(x)$, the wavelet sequence function after displacement and scaling is:

$$\Psi_{k,t}(x) = \frac{1}{\sqrt{k}}\, \Psi\!\left(\frac{x - t}{k}\right), \quad k, t \in R;\ k > 0 \quad (7)$$

When the decomposition scale $k$ and the displacement length $t$ are continuous variables, the above process is called the continuous wavelet transform (CWT). There is another form, the discrete wavelet transform (DWT). In many cases, the decomposition scale $k$ and the displacement length $t$ are discretized by a power series, $k = k_0^m$, $t = n t_0 k_0^m$, which gives

$$W_f(m, n) = \langle f(x), \Psi_{m,n}(x) \rangle = \frac{1}{\sqrt{k_0^m}} \int_R f(x)\, \Psi^{*}\!\left(\frac{x - n t_0 k_0^m}{k_0^m}\right) dx \quad (8)$$

The discrete wavelet sequence function is as follows:

$$\Psi_{m,n}(x) = \frac{1}{\sqrt{k_0^m}}\, \Psi\!\left(\frac{x - n t_0 k_0^m}{k_0^m}\right) \quad (9)$$
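In practice, the discrete wavelet transform above can be computed with a library such as PyWavelets (assumed available); the toy signal and the 'db2' wavelet below are arbitrary illustrative choices.

```python
import numpy as np
import pywt

# A toy 1-D brightness signal standing in for one image row.
signal = np.cos(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)

# Single-level DWT: approximation (low-frequency) and detail (high-frequency) parts.
cA, cD = pywt.dwt(signal, "db2")

# Multi-level decomposition down to level k = 3.
coeffs = pywt.wavedec(signal, "db2", level=3)   # [cA3, cD3, cD2, cD1]
print([c.shape for c in coeffs])
```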
3.2. WAVELET TRANSFORM OF SIMPLE WEAVE FABRIC
By applying the one-dimensional wavelet transform to a two-dimensional image, the fabric image can be decomposed into approximate and detailed image information, that is, low-frequency and high-frequency parts. Each part can then be decomposed again into a further pair of low-frequency and high-frequency components, so that a two-dimensional image is finally decomposed into four parts by the wavelet transform. Compared with the one-dimensional wavelet transform, the two-dimensional wavelet transform decomposes the high-frequency information more finely along the horizontal and vertical directions, splitting it into a horizontal detail component, a vertical detail component, and a diagonal detail component. Therefore, four parts are obtained after the first-level decomposition: the approximation component, the horizontal detail component, the vertical detail component, and the diagonal detail component. In theory, the decomposition process can be continued to further levels.
After multi-scale decomposition of the fabric image, the wavelet transform can reconstruct and output the approximate and detailed image information as required. Wavelets have good decomposition and reconstruction ability, so clear and complete fabric structure and detail information can be obtained without losing important information, while interference information is eliminated.
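A minimal sketch of this decomposition and selective reconstruction is given below, assuming PyWavelets and a grayscale fabric image held as a NumPy array; the 'db2' wavelet and the level k = 2 are placeholders rather than the tuned values of this paper.

```python
import numpy as np
import pywt

def detail_components(image: np.ndarray, wavelet: str = "db2", level: int = 2):
    """Return horizontal and vertical detail images reconstructed at `level`."""
    # Multi-level 2-D decomposition: [cA_k, (cH_k, cV_k, cD_k), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(image, wavelet, level=level)

    def keep_only(which: str):
        # Zero every sub-band except the requested detail band at the deepest level.
        kept = [np.zeros_like(coeffs[0])]
        for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
            if i == 1:  # deepest decomposition level in wavedec2 ordering
                kept.append((cH if which == "H" else np.zeros_like(cH),
                             cV if which == "V" else np.zeros_like(cV),
                             np.zeros_like(cD)))
            else:
                kept.append((np.zeros_like(cH), np.zeros_like(cV), np.zeros_like(cD)))
        return pywt.waverec2(kept, wavelet)

    return keep_only("H"), keep_only("V")   # weft-related and warp-related details

# Example with a random "image"; a real fabric image would be used instead.
H_img, V_img = detail_components(np.random.rand(256, 256))
print(H_img.shape, V_img.shape)
```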
The principle of automatic detection of fabric warp and weft density is as follows: the horizontal and vertical high-frequency detail components that best match the weft and warp yarns are obtained by wavelet decomposition and reconstruction, the numbers of warp and weft threads are counted, and the fabric warp and weft density is then calculated. The horizontal and vertical high-frequency detail components obtained at different scales are completely different, and so is how well each detail component matches the yarn count. Therefore, the decomposition scale of the wavelet decomposition directly affects the horizontal and vertical detail information obtained and the accuracy of the fabric warp and weft density reflected by the image. To obtain the most complete warp and weft yarn information, that is, detail components whose yarn counts match the image, it is necessary to know at which scale the image should be processed so that the reconstructed detail image is highly matched with the details of the fabric image. We call this wavelet decomposition scale the optimal decomposition series. Therefore, research on determining the optimal decomposition series is of great significance for the automatic calculation and detection of warp and weft density.
3.3. DETERMINATION OF OPTIMAL WAVELET
DECOMPOSITION SERIES OF FABRIC IMAGE
The correlation coefficient is used to study the degree of linear correlation between
two variables. The curve of the correlation coefficient can reflect the correlation
between two variables, but it does not specify the exact degree of correlation, so it describes an uncertain relationship. Determining the optimal wavelet decomposition series by the correlation coefficient curve method means comparing the fabric image reconstructed after wavelet decomposition with the fabric image before decomposition and reconstruction, calculating the correlation coefficient of the two images, and taking the decomposition series corresponding to the maximum correlation coefficient as the optimal decomposition series. The fabric image is then decomposed at this series into high-frequency detail components from which the fabric density is obtained. Assuming two matrices of the same dimensions, A and B, the correlation coefficient of A and B is calculated as follows:
$$r = \frac{\sum_m \sum_n (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left(\sum_m \sum_n (A_{mn} - \bar{A})^2\right)\left(\sum_m \sum_n (B_{mn} - \bar{B})^2\right)}} \quad (10)$$
where m and n index the rows and columns of the matrices, respectively, and $\bar{A}$ and $\bar{B}$ are the mean values of A and B.
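This 2-D correlation coefficient (the analogue of MATLAB's corr2) and the selection of the decomposition series with the largest coefficient can be sketched as follows; the helper reconstruct(image, k), which returns the image rebuilt after k-level wavelet decomposition and reconstruction, is a hypothetical callable supplied by the caller.

```python
import numpy as np

def corr2(A: np.ndarray, B: np.ndarray) -> float:
    """2-D correlation coefficient of two equally sized matrices (Eq. 10)."""
    A = A - A.mean()
    B = B - B.mean()
    return float((A * B).sum() / np.sqrt((A ** 2).sum() * (B ** 2).sum()))

def best_level_by_correlation(image, reconstruct, levels=range(1, 6)) -> int:
    """Pick the decomposition series whose reconstruction best matches the original.

    `reconstruct` is a user-supplied (hypothetical) function: reconstruct(image, k).
    """
    scores = {k: corr2(image, reconstruct(image, k)) for k in levels}
    return max(scores, key=scores.get)
```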
Figure 2. Sample correlation coefficient curve
From the correlation coefficient curves in the figure above, it can be seen that the correlation coefficient curve of each sample fabric has a single maximum peak, which indicates that the image reconstructed by wavelet decomposition and reconstruction at the corresponding decomposition series is the most similar to the fabric image before processing. Therefore, the decomposition series at the maximum peak of the correlation coefficient curve is selected as the optimal series for wavelet decomposition and reconstruction. From the curves in the figure, the correlation coefficients of fabrics 1, 2, 3, and 4 peak at decomposition levels 2, 2, 5, and 3, respectively, which means that the optimal decomposition orders of the four fabrics obtained by the correlation coefficient curve method are k = 2, k = 2, k = 5, and k = 3. That is, when the decomposition scale of fabric 1 is 2, that of fabric 2 is 2, that of fabric 3 is 5, and that of fabric 4 is 3, the horizontal
and vertical high-frequency detail components matching the number of warp and weft
yarns can be obtained by wavelet decomposition and reconstruction respectively.
In the study of determining the optimal decomposition order by the correlation coefficient, it was found that the results obtained for some fabrics are not ideal. In a large number of experiments, the optimal decomposition order determined by the maximum of the correlation coefficient was sometimes inaccurate. For example, for fabric 3 above, the correlation coefficient reaches its maximum at decomposition level 5, but after processing at this level the detail components match the warp and weft yarns of the fabric poorly. A more stable and accurate method for determining the optimal decomposition order was therefore studied. Solving for the optimal decomposition series amounts to obtaining the optimal detail components after decomposition and reconstruction. Through in-depth study and analysis of all the component information obtained from many experiments, it was found that there is a definite relationship between the optimal wavelet decomposition series and the information of all the components obtained after decomposition and reconstruction.
In this paper, the concept of the energy curve is introduced to determine the optimal series for wavelet decomposition and reconstruction. The energies of the approximate, vertical, horizontal, and diagonal components obtained by wavelet decomposition and reconstruction are calculated with the energy calculation function wenergy2; the call has the form [Ea, Eh, Ev, Ed] = wenergy2(C, S), where Ea is the energy of the approximate low-frequency component of the decomposed and reconstructed image, Eh that of the horizontal high-frequency detail component, Ev that of the vertical high-frequency detail component, and Ed that of the diagonal high-frequency detail component. The energy curve is then used to determine the optimal wavelet decomposition series: the energy function squares the wavelet coefficients of each component obtained after decomposition and reconstruction, sums them to obtain the energy of each component, normalizes the sums to obtain the energy proportion of each component, computes the relative gradient change of the energy, and draws the energy curve so that its changes can be observed. The decomposition order at the lowest peak of the energy curve is taken as the optimal decomposition order of the wavelet, and the fabric image is processed by the wavelet transform at that order.
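The energy computation described above can be sketched in Python as an approximation of MATLAB's wenergy2, assuming PyWavelets; the wavelet and the range of candidate levels are illustrative.

```python
import numpy as np
import pywt

def component_energy_percent(image: np.ndarray, wavelet: str = "db2", level: int = 2):
    """Energy share (in %) of the approximation and of the H, V, D detail bands."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    Ea = np.sum(coeffs[0] ** 2)                          # squared approximation coefficients
    Eh = sum(np.sum(cH ** 2) for cH, _, _ in coeffs[1:])  # horizontal detail energy
    Ev = sum(np.sum(cV ** 2) for _, cV, _ in coeffs[1:])  # vertical detail energy
    Ed = sum(np.sum(cD ** 2) for _, _, cD in coeffs[1:])  # diagonal detail energy
    total = Ea + Eh + Ev + Ed
    return tuple(100.0 * e / total for e in (Ea, Eh, Ev, Ed))

# Energy curve: evaluate the detail-energy shares at each candidate level and
# inspect their relative gradient to choose the optimal decomposition series.
image = np.random.rand(256, 256)                          # stand-in for a fabric image
curve = [component_energy_percent(image, level=k)[1:] for k in range(1, 6)]
```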
Figure 3. Wavelet decomposition tree
The two-dimensional signal x (fabric image) is decomposed into four components:
A1, H1, V1, and D1, after the first-order wavelet decomposition. The second-order
decomposition further decomposes A1 into four components, namely A2, H2, V2, and
D2. The third-order decomposition decomposes A2 into four components: A3, H3, V3,
and D3. The subsequent decomposition is carried out according to this law, and the
whole wavelet decomposition process is like branching out. Therefore, the
decomposition process and the resulting components can be represented in a tree
view.
From the above wavelet decomposition tree, we know that all the components obtained from the two-dimensional image after k-level decomposition (k is a natural number greater than 0) include the approximation component Ak, the horizontal detail components H1, H2, ..., Hk, the vertical detail components V1, V2, ..., Vk, and the diagonal detail components D1, D2, ..., Dk. If each component output after decomposition and reconstruction is regarded as a matrix, then the information contained in the component is stored in the matrix data. Assuming that there are n data values in each matrix, each component can be written as a vector of the form:
$$A_k = [a_{k1}\ a_{k2}\ a_{k3}\ \cdots\ a_{kn}] \quad (11)$$
$$H_k = [h_{k1}\ h_{k2}\ h_{k3}\ \cdots\ h_{kn}] \quad (12)$$
$$V_k = [v_{k1}\ v_{k2}\ v_{k3}\ \cdots\ v_{kn}] \quad (13)$$
$$D_k = [d_{k1}\ d_{k2}\ d_{k3}\ \cdots\ d_{kn}] \quad (14)$$
4. SIMPLE WEAVE FABRIC DENSITY TEST RESULTS
AND ANALYSIS
Simple weave fabric is the original fabric, also called basic fabric; it includes plain weave, twill weave, and satin weave. No such fabric can be separated from warp and
weft. The fabric density detection method in this paper can detect all fabrics containing warp and weft yarns, although the relative error is high for the density detection of some individual fabrics. To obtain better fabric design results, the computer is used to calculate the fabric density automatically.
4.1. CALCULATION OF WARP AND WEFT DENSITY OF SIMPLE
WEAVE FABRIC PROCESSED BY COMPUTER
From the analysis in the previous section, it can be seen that the warp and weft yarns are clearly arranged in the reconstructed detail images: the black stripes represent yarns, and the white stripes represent the gaps between adjacent yarns. Therefore, to obtain the warp and weft density of the simple woven fabric, the numbers of warp and weft yarns can be obtained by counting the black stripes in the vertical and horizontal directions, after which the warp and weft yarn density is calculated. The vertical detail component of the fabric image represents the arrangement of the warp yarns, with warp yarns and blank spaces alternating; in the image this appears as an alternation of black and white pixels. The horizontal detail component of the fabric represents the weft arrangement of the fabric.
Similarly, the alternation of weft yarns and blank spaces means that black and white pixels alternate in the vertical direction of the image. Each alternation of black and white pixels therefore represents one yarn, and the number of yarns in a detail image can be obtained by counting the runs of consecutive black pixels in the image.
Taking the vertical detail component of the fabric as an example, the unit length of fabric is set to 10 cm, the unit of fabric density is the national standard unit "threads/10 cm", and the width of the fabric image is d (in pixels). By counting the runs of consecutive black pixels in a row of the vertical detail component map, the number of yarns S_j in the horizontal direction can be obtained. The yarn count S_j is divided by the width d of the fabric image to give M_j. From the parameters of the CCD industrial camera, the camera's resolution can be obtained, and from the resolution, the number of pixels per centimeter p (in pixels/cm). Finally, the warp density in standard units is obtained by multiplying M_j by p and by the unit length 10. The formula for the fabric warp density P_j is as follows:
$$M_j = S_j \div d \quad (15)$$
$$P_j = M_j \times p \times 10 \quad (16)$$
If the contour curve is denoted $f(\rho)$, all the peak positions of the curve are recorded as $\rho_i,\ i = 0, \ldots, M-1$, and $M$ is the total number of peaks, the weft density of the fabric is expressed as:
$$D_{weft} = \frac{(M-1) \times r}{\rho_{M-1} - \rho_0} \times 2.54 \quad (17)$$
In the formula, $r$ is the resolution of the image, and the unit of the weft density $D_{weft}$ is threads/cm.
After the same preprocessing, the correlation coefficient curve and the energy curve are each used to determine the optimal wavelet decomposition series, and the sample fabric image is decomposed and reconstructed at that series. After reconstruction, the optimal vertical and horizontal detail components are obtained, and the fabric warp and weft density detected by the two methods is calculated.
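A sketch of the counting step behind Eqs. (15)-(16) is given below, assuming the vertical detail component has already been binarized so that yarns appear as runs of black (0) pixels; the pixels-per-centimetre value p is a placeholder that would come from the camera calibration.

```python
import numpy as np

def warp_density(binary_vertical_detail: np.ndarray, p: float) -> float:
    """Warp density in threads/10 cm from a binarized vertical-detail image.

    binary_vertical_detail: 2-D array of 0 (yarn) / 1 (gap) values.
    p: camera calibration, pixels per centimetre (assumed known).
    """
    d = binary_vertical_detail.shape[1]                   # image width in pixels
    row = binary_vertical_detail[binary_vertical_detail.shape[0] // 2]  # one scan row
    # S_j: number of runs of consecutive black pixels (each run = one warp yarn).
    S_j = int(np.sum((row[:-1] == 1) & (row[1:] == 0)) + (row[0] == 0))
    M_j = S_j / d                                         # yarns per pixel   (Eq. 15)
    return M_j * p * 10                                   # threads per 10 cm (Eq. 16)

# Hypothetical usage: density = warp_density(binary_image, p=118.0)
```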
Table 2. Measurement results for the optimal decomposition series determined by the correlation coefficient curve

Warp density (threads/10 cm)
Fabric number | First time | Second time | Third time | Average value
1 | 351.20 | 355.58 | 350.12 | 352.30
2 | 769.48 | 771.72 | 775.19 | 772.13
3 | 104.11 | 107.93 | 103.14 | 105.06
4 | 308.56 | 311.69 | 310.56 | 310.27
5 | 215.24 | 210.70 | 213.09 | 213.01
6 | 241.82 | 236.17 | 240.12 | 239.37
7 | 527.67 | 535.18 | 538.30 | 537.05
8 | 407.66 | 401.97 | 407.68 | 405.77
9 | 682.00 | 681.43 | 675.13 | 679.52
10 | 637.11 | 640.15 | 634.85 | 637.11
No. 1 is a woven fabric, No. 2 and No. 3 are knitted fabrics, No. 4 is a non-woven fabric, No. 5 is a three-way fabric, No. 6 is a multi-directional fabric, No. 7 is a composite fabric, No. 8 is a twill fabric, No. 9 is a plain-weave fabric, and No. 10 is a checkered fabric. In manual testing, we use the direct method to measure the simple weave fabric's warp and weft density: the number of yarns within a 5 cm length is counted with a fabric analysis magnifier, and the warp (weft) density is obtained by multiplying the yarn count by 2. Each piece of fabric is measured three times, and the average of the three values is taken as the final measurement result of the warp and weft density.
4.2. ANALYSIS OF EXPERIMENTAL RESULTS
For comparison, the fabric warp and weft density is measured both by the direct manual method and by the computer method described above. The accuracy and reliability of the two methods are analyzed to verify the feasibility of the automatic computer detection method for the warp and weft density of simple woven fabrics.
Figure 4. Relative error results
The results show that the relative error of the warp and weft density of simple weave fabric is about 1.00% when the energy curve method is used to determine the optimal decomposition series; both the warp and the weft density errors are about 1.00%, which shows that this method is feasible. Most of the optimal decomposition series determined by the correlation coefficient curve method are consistent with the results of the energy curve method. However, the relative error of the density results for fabrics No. 3, No. 6, and No. 7 is higher than 10%, and that of fabric No. 3 is the highest, reaching 66%, which indicates serious errors in the warp and weft density of these three fabrics. In other words, the decomposition series determined by the correlation coefficient curves of these three fabrics is not their optimal decomposition series. Therefore, the warp and weft density calculated from the vertical and horizontal detail components obtained by decomposing and reconstructing the fabric at that series differs greatly from the real density of the fabric. This group of experimental results further shows that when wavelet decomposition and reconstruction are used to detect fabric warp and weft density, determining the optimal wavelet decomposition series is very important: the results obtained with different decomposition series may differ greatly from the real warp and weft density of the fabric, and such a large deviation would also greatly affect subsequent fabric production. Compared with the correlation coefficient curve method, the energy curve method is more accurate. However, the experiments also show that the processing error may be large for simple weave fabrics with a very high warp and weft density or with complex patterns and colors, which needs further research and improvement.
5. CONCLUSION
This paper aims to use computer vision and digital image processing technology to replace manual identification and to automate detection in simple weave fabric production. The research combines textile knowledge, computer technology, and image processing technology, achieving an integration of subject knowledge that is of great significance for producing innovative results. To solve the problems existing in previous research on the automatic detection of fabric warp and weft density, this paper puts forward corresponding algorithms. The main research contents and innovations are as follows:
1. The method in this paper uses the correlation coefficient curve method to determine the optimal decomposition order of the fabrics, which is k = 2, k = 2, k = 5, and k = 3, respectively; that is, the decomposition scale of fabric 1 is 2, that of fabric 2 is 2, that of fabric 3 is 5, and that of fabric 4 is 3. Therefore, this method can effectively avoid warp and weft skew.
2. Using only the fabric density detection method of this paper, the relative error of the detected warp and weft density of simple woven fabric is about 1%, consistent with the optimal decomposition series given by the energy curve, indicating that the relative error of the proposed fabric density detection method is the smallest and its accuracy the highest.
3. The fabric density detection method in this paper decomposes and reconstructs the horizontal and vertical high-frequency detail components and the approximation component of the fabric image. Most of the optimal decomposition series determined by the correlation coefficient curve method are consistent with the results obtained by the energy curve method, although the relative error in the density detection of individual fabrics remains high.
DATA AVAILABILITY
The data used to support the findings of this study are available from the
corresponding author upon request.
CONFLICT OF INTEREST
The authors declare that the research was conducted without any commercial or
financial relationships that could be construed as a potential conflict of interest.
REFERENCES
(1) Shukla, K., Ahmad, A., Ahluwalia, B. S., et al. (2022). Finite element simulation
of transmission and reflection of acoustic waves in the ultrasonic
transducer. Japanese Journal of Applied Physics.
(2) Elemmi, M. C., Anami, B. S., & Malvade, N. N. (2021). Defective and
nondefective classification of fabric images using shallow and deep
networks. International Journal of Intelligent Systems.
(3) Raj, A., Sundaram, M., & Jaya, T. (2020). Thermography based breast cancer
detection using self-adaptive gray level histogram equalization color
enhancement method. International Journal of Imaging Systems and
Technology.
(4) Ojo, J. F., & Olanrewaju, R. O. (2021). Review of Family of Autoregressive
Integrated Moving Average Models in the Comportment of Autocorrelation
Function for Non-Seasonal Time Series Data. International Journal of
Mathematical Sciences & Applications, 19(1), 79-89.
(5) Wu, D., & Tang, Y. (2020). An improved failure mode and effects analysis
method based on uncertainty measure in the evidence theory. Quality and
Reliability Engineering, (1).
(6) Cao, Y., You, J., Shi, Y., et al. (2021). Research on the Green
Competitiveness Index of Manufacturing Industry in Yangtze River Delta
Urban Agglomeration. Problemy Ekorozwoju, 16(1), 143-156.
(7) Trafton, K., & Giachetti, T. (2021). The morphology and texture of Plinian
pyroclasts reflect their lateral sourcing in the conduit. Earth and Planetary
Science Letters, 562.
(8) Le, B., Troendle, D., & Jang, B. (2021). Detecting fabric density and weft
distortion in woven fabrics using the discrete fourier transform.
(9) Kaplan, V. (2021). Detection of Remote Sensing Warp Tension during
Weaving on Plain Twill and Satin Fabric. Fibres and Textiles in Eastern
Europe, 29(1(145)), 35-39.
(10) Monfared, S. S., & Sedef, B. (2021). Road Lane detection through image and
video processing using edge detection and Hough transform for
autonomous driving purposes.
(11) Xz, A., Gla, B., Zya, B., et al. (2020). Parameter estimation based on Hough transform for airborne radar with conformal array. Digital Signal Processing, 107.
(12) Qin, T., Cao, P., Zhang, Y., et al. (2021). Underwater magnetic target signal
denoising based on modified wavelet decomposition and reconstruction
algorithm. Journal of Physics: Conference Series, 1738(1), 012019 (8pp).
(13) Yan, Y., Liu, Y., Yang, M., et al. (2020). Generic wavelet-based image decomposition and reconstruction framework for multi-modal data analysis in smart camera applications. IET Computer Vision, 14(7), 471-479.
(14) Tarhan, L. G., Droser, M. L., & Gehling, J. G. (2022). Picking out the warp and weft of the Ediacaran seafloor: Paleoenvironment and paleoecology of an Ediacara textured organic surface. Precambrian Research, 369, 106539.
(15) Wang, X., Wang, S., Guo, Y., et al. (2020). Research on improved sharpening
algorithm based on closed operation and binarization. Journal of Physics
Conference Series, 1629, 012019.
(16) Shi, J., Wang, Y., Zhang, X., et al. (2021). Extraction method of weak
underwater acoustic signal based on the combination of wavelet transform
and empirical mode decomposition. International Journal of Metrology and
Quality Engineering.
(17) Nugroho, P.C., Widadi, R., & Zulherman, D. (2021). Hand and Foot Movement
of Motor Imagery Classification Using Wavelet Packet Decomposition and
Multilayer Perceptron Backpropagation. In 2nd Borobudur International
Symposium on Science and Technology (BIS-STE 2020).
(18) Pavičić, I., Briševac, Z., et al. (2021). Geometric and fractal characterization of pore systems in the Upper Triassic dolomites based on image processing techniques (example from Žumberak Mts, NW Croatia). Sustainability, 13.
(19) Pang, K., Alam, M. Z., Zhou, Y., et al. (2021). Adiabatic Frequency Conversion
Using a Time-Varying Epsilon-Near-Zero Metasurface. Nano Letters.
(20) Tang, S.C., Wang, X.Y., Yang, L.L., et al. (2020). Design of Slotted Dielectric
Patch Antenna with Filtering Characteristic. In 2019 International Symposium
on Antennas and Propagation (ISAP). IEEE.
(21) Tutatchikov, V. (2020). Application of parallel version two-dimensional fast
Fourier transform calculating algorithm with an analogue of the Cooley-
Tukey algorithm. In 2020 International Conference on Information Technology
and Nanotechnology (ITNT).
(22) Niu, H., Wu, B., Wang, Q., et al. (2020). Research on steel barrel flattened
seam recognition based on machine vision. Journal of Physics Conference
Series, 1633, 012014.
(23) Barreto, M., Reis, J., Muraoka, T., et al. (2021). Diffuse reflectance infrared
Fourier transform spectroscopy for a qualitative evaluation of plant leaf
pigment extraction.
(24) Shi, J., Wang, Y., Zhang, X., et al. (2021). Extraction method of weak underwater acoustic signal based on the combination of wavelet transform and empirical mode decomposition. International Journal of Metrology and Quality Engineering.
(25) Le, B., Troendle, D., & Jang, B. (2021). Detecting fabric density and weft distortion in woven fabrics using the discrete Fourier transform.
(26) Tabunschik, V. A., Chekmareva, T. M., & Gorbunov, R. V. (2020). Spectral characteristics of some agricultural crops in different phenological phases of vegetation. Plant Biology and Horticulture: Theory, Innovation, 152, 56-70.
(27) Cooney, G. S., Barberio, M., Diana, M., et al. (2020). Comparison of spectral characteristics in human and pig biliary system with hyperspectral imaging (HSI).
(28) Bo, C., Polatkan, G., Sapiro, G., et al. (2020). The hierarchical beta process for convolutional factor analysis and deep learning. In ICML.
(29) Kaseng, F., Lezama, P., Inquilla, R., & Rodriguez, C. (2020). Evolution and
advance usage of Internet in Peru.
3C TIC. Cuadernos de desarrollo aplicados
a las TIC, 9(4), 113-127. https://doi.org/10.17993/3ctic.2020.94.113-127
(30) Liu Chunguang. (2021). Precision algorithms in second-order fractional
differential equations. Applied Mathematics and Nonlinear Sciences, 7(1),
155-164. https://doi.org/10.2478/AMNS.2021.2.00157