Abstract
A position detection approach based on machine vision is developed to address the fact that the traditional wind turbine hub lifting procedure relies on engineers for locating and cannot provide rapid feedback of position information. A camera and a ranging sensor are mounted on the suspended object to enable real-time detection of the fan hub's pose relative to the nacelle. For this purpose, a monocular camera positioning technique based on circular features is developed, and the relative pose coordinates of the hub are computed using the distance data provided by the ranging sensor. Based on the hoisting characteristics of the actual working conditions, a filter for the ranging sensor is designed and applied to its data, considerably reducing measurement error and enhancing the stability of the detection technique.
1 Introduction
The advancement of wind-turbine-related technology has raised the bar for crane efficiency, lifting precision, real-time detection, and precise positioning control in wind power construction projects as well as in the installation and maintenance of wind turbines. At present, the commonly used domestic installation method is to mount the nacelle on the tower first, assemble the blades and hub on the ground, and then lift the blade-hub assembly to high altitude with large cranes for installation. During the lifting process, multiple cranes must work together, and the position of the blade-hub assembly is adjusted by rope pulling and other means.
With the development of wind turbine technologies, turbine capacity has continuously increased, and installation heights now exceed 100 m; the installation of a wind turbine hub is shown in Fig. 1. Under this working condition, high-altitude work is difficult, many people are involved in installation, and the cost is high. When the hub is being aligned for installation, factors such as wind and the coordination of technical personnel make the installation period long, often greater than 70 h. With the rapid development of China's wind power industry, it is necessary to develop a pose detection method with less direct human participation, high safety, high detection efficiency and high detection accuracy.
Compared with other pose detection methods, visual positioning can detect the position of the measured object under non-contact conditions, and can adapt to various complex environments under sufficient illumination. Scholars at home and abroad have accumulated relevant research foundations in machine vision. Liu et al. [1] designed a robot intelligent unstacking system based on visual positioning. The corresponding world coordinates are obtained by using the coordinate transformation of the target pixel center. The position error is 1.1 mm and the angle error is 1.2°. Chen et al. [2] proposed a screw automatic fastening assembly system based on visual positioning. The visual positioning technology based on sub-pixel edge is introduced to accurately compensate the positioning deviation of screw assembly, and the visual positioning accuracy is better than 0.02 mm.
Current visual positioning research largely assumes a fixed camera observing a moving measured object. This study addresses a pose detection scenario in which both the measured object and the camera are in unsteady motion. To cope with the complex working environment, a fan hub pose detection method is proposed that combines a circle fitting algorithm with a laser ranging sensor and machine vision.
2 The Wheel Hub Pose Detection Scheme Under High Altitude Condition
Since the features of the nacelle itself are difficult to use directly as positioning features, feature color blocks must be added as positioning markers, as shown in Fig. 3. The selection of positioning features is key to realizing visual detection. To meet practical engineering needs, two square color blocks of a certain size are installed in different directions on the transmission shaft as the positioning features for secondary processing, and the bolt holes of the nacelle shaft are used as the features for primary processing.
The camera is used to collect and process the image to obtain the position information of the target point. Combined with the distance information of the ranging sensor, the pixel information on the image is converted into the actual distance information, and the pose state of the fan hub is obtained.
The positions of the camera and the ranging sensor are shown in Fig. 2; both are installed on the side of the hub facing the nacelle surface. After the camera is installed, the relative position matrix of the camera and the hub center should be obtained to initialize the pose matrix.
3 Visual Orientation Program
3.1 Image Segmentation Scheme of Secondary Processing
Since the nacelle itself has no obvious features that can be used for visual recognition and positioning, feature color blocks must be added for positioning [3]. The square color blocks are prominent and, when performing contour fitting, can effectively eliminate the influence of the external environment on visual recognition. Therefore, two square color blocks of a certain size are installed in different directions on the drive shaft as the positioning features for secondary processing, as shown in Fig. 3. The bolt holes of the nacelle shaft are used as the features for primary processing.
The quality of the images captured by the camera cannot be guaranteed owing to the complexity of the real operating conditions. Therefore, during image processing, secondary processing is used to segment the images [4]. The captured image is first coarsely processed to separate the main recognition region from the remainder, and this region is binarized. The binarized region is then re-segmented and screened by area, contour shape, and other parameters to produce the desired contour [5]. The flow chart of the secondary processing is shown in Fig. 4.
This image processing method increases the computational pressure of the host computer to a certain extent and increases the time of image processing, but it can effectively overcome the influence of the environment on image processing and improve the accuracy of visual positioning [6]. In addition, during the installation of the fan hub, the movement speed is slow, and the influence of the algorithm delay on the installation process can be ignored.
3.2 Circle Fitting Algorithm Based on Least Square Method
When the wheel hub and the engine room are far apart, the area of the positioning color block in the image collected by the camera is small, and it is difficult to extract the features. Aiming at the situation that the color block cannot be identified and located at a long distance, a circle fitting algorithm based on the least square method is introduced [7].
After the image is binarized, the edge pixels of the image are highlighted. These highlighted pixels can be regarded as points (\({{\text{x}}}_{i}\), \({{\text{y}}}_{i}\)) in the two-dimensional coordinate system.
First, the algebraic expression of the circle is:

$$\left(x-{x}_{0}\right)^{2}+\left(y-{y}_{0}\right)^{2}={R}^{2} \quad (1)$$
According to (1), another form of the equation of the circle can be obtained:

$${x}^{2}+{y}^{2}+ax+by+c=0 \quad (2)$$

where \(a=-2{x}_{0}\), \(b=-2{y}_{0}\), \(c={x}_{0}^{2}+{y}_{0}^{2}-{R}^{2}\).
That is to say, the parameters of the circle can be obtained by calculating a, b, c:

$${x}_{0}=-\frac{a}{2},\quad {y}_{0}=-\frac{b}{2},\quad R=\frac{1}{2}\sqrt{{a}^{2}+{b}^{2}-4c}$$
The distance from each point to the center of the circle is \({d}_{i}\):

$${d}_{i}=\sqrt{{\left({x}_{i}-{x}_{0}\right)}^{2}+{\left({y}_{i}-{y}_{0}\right)}^{2}}$$
The difference between the squared distance from each point to the center and the squared radius is:

$${\updelta }_{i}={d}_{i}^{2}-{R}^{2}={x}_{i}^{2}+{y}_{i}^{2}+a{x}_{i}+b{y}_{i}+c$$
Let Q(a, b, c) be the sum of squares of \({\updelta }_{i}\):

$$Q\left(a,b,c\right)=\sum_{i=1}^{n}{\updelta }_{i}^{2}=\sum_{i=1}^{n}{\left({x}_{i}^{2}+{y}_{i}^{2}+a{x}_{i}+b{y}_{i}+c\right)}^{2}$$
The parameters (a, b, c) that minimize Q are found by setting \(\partial Q/\partial a=\partial Q/\partial b=\partial Q/\partial c=0\), which yields a system of linear equations in a, b and c.
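Because Q is quadratic in (a, b, c), the minimization reduces to a linear least-squares problem that can be solved directly. A minimal NumPy sketch (the function name and interface are our own, not from the paper):

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit.

    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c) by
    minimising Q(a,b,c) = sum((xi^2 + yi^2 + a*xi + b*yi + c)^2),
    which is linear in the parameters.
    Returns centre (x0, y0) and radius r.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Rearranged model: a*x + b*y + c = -(x^2 + y^2)
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    x0, y0 = -a / 2.0, -b / 2.0
    r = np.sqrt(x0**2 + y0**2 - c)
    return x0, y0, r
```

The same routine applies whether the input points are edge pixels of a marker or the detected centers of the bolt holes described below.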
Figure 5 shows the binary image of the interface shape of the wind turbine. When the camera has a large angle deflection or the camera cannot collect a complete engine room image, as shown in Fig. 6, this circle fitting algorithm can realize the detection and feedback of the deflection.
When the camera only captures part of the cabin image, the simple color feature or circular feature algorithm cannot reflect the correct position relationship [8, 9]. However, the circle fitting algorithm based on the least square method can solve this kind of problem to a certain extent.
Firstly, the center coordinates of each bolt hole are obtained by image processing, as shown in Fig. 7.
Then, the circle fitting of the center coordinates of each bolt hole is carried out, and the pixel coordinates of the center of the engine room are obtained, as shown in Fig. 8.
According to the returned information, the position and posture of the hub are adjusted in time to prevent collision accidents and ensure the safety of installation work.
3.3 Ranging Sensor Filtering Algorithm Based on Mean Filtering
The measurement range of the laser ranging sensor used in the experiment is 0.01–8 m, the measurement blind zone is 10 mm, and the measurement error is 2%. When the laser ranging sensor performs distance detection, the reported distance is unstable and carries a certain error. Simulation software is used to model the random error distribution of the ranging sensor over its range, as shown in Fig. 9.
The ranging sensor data is collected at different distances, and several data are randomly collected at the same distance, as shown in Fig. 10, which is the measurement data at a distance of 1138 mm.
Based on the data collected several times, it is concluded that the measurement error of the ranging sensor increases with distance. According to the working principle of the ranging sensor and related statistical theory, filtering algorithms including the minimum filter, median filter, mean filter and Kalman filter [10] are used to perform filtering experiments on the ranging sensor data in static and dynamic situations respectively, as shown in Fig. 11.
A number of readings are taken at a fixed distance, and their mean value is taken as the reference value. The data obtained from many experiments show that these filtering methods can effectively reduce the fluctuation of the measurement data, and the mean filter performs best.
According to the experimental data under different conditions, when the motion speed is greater than 0.5 m/s, the median filter and Kalman filter perform better, and their output is closer to the reference value. When the motion speed is less than 0.2 m/s, the mean filter has the best filtering effect, and the data is stable and close to the reference value.
In addition, the size of the filter also affects the filtering performance. Taking mean filtering as an example, when the motion speed is slow, the larger the filter is, the better the filtering effect is. When the motion speed is fast, the smaller the filter, the better the filtering effect.
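A sliding-window mean filter of the kind discussed here can be sketched as follows; the class name and default window size are illustrative, and the window would be chosen per the speed-dependent trade-off above:

```python
from collections import deque

class MeanFilter:
    """Sliding-window mean filter for range-sensor readings.

    window: number of recent samples averaged. A larger window
    smooths more (suited to slow motion); a smaller window tracks
    fast motion with less lag.
    """
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)

    def update(self, reading_mm):
        """Push one raw reading and return the filtered value."""
        self.buf.append(float(reading_mm))
        return sum(self.buf) / len(self.buf)
```

Feeding each raw sensor reading through `update` yields the filtered distance used by the pose computation.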
3.4 Coordinate Conversion Based on Image DPI
In the traditional coordinate transformation method, the internal and external parameters of the camera are calculated by camera calibration, and then the image coordinate system is transformed into the world coordinate system through the camera internal and external parameter matrix and pixel coordinates [11]. When the camera is stationary, this coordinate conversion method can achieve high accuracy and good stability. However, during the installation of the fan hub, the hub and the nacelle are in an unstable state of motion. Therefore, the traditional coordinate transformation method does not fully meet the positioning requirements under this working condition.
To address the inapplicability of traditional coordinate transformation methods, a coordinate transformation method based on the relationship between distance and camera DPI is adopted. At a fixed distance, the number of pixels corresponding to one inch of actual distance is fixed. During movement, the camera DPI changes with distance. By randomly varying the distance between the camera and the measured object within a certain range, paired data of camera DPI and distance are obtained.
Then, curve fitting of camera DPI against distance is carried out by polynomial approximation. Finally, the relationship between camera DPI and distance is obtained, as shown in Fig. 12 (distance in mm).
Through multiple sets of data fitting, the quadratic polynomial relating the camera DPI to the distance is obtained:

$$D={k}_{2}{x}^{2}+{k}_{1}x+{k}_{0}$$

where \({k}_{2}\), \({k}_{1}\) and \({k}_{0}\) are the fitted coefficients.
x is the vertical distance between the center of the camera imaging plane and the plane of the nacelle, in mm. D, in mm/pixel, is the reciprocal of the camera's DPI and represents the actual distance covered by each pixel at a given stand-off distance.
For the fitted equation, the sum of squared errors is 0.02 and the R-square (coefficient of determination) is 0.998.
On this basis, the real distance L of the measured object can be determined from image processing:

$$L=D\cdot l$$
l: the radius measured in the image, in pixels.
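The pixel-to-millimetre conversion can be sketched as follows. The coefficient values in the example are placeholders for the condition-specific fit, not the paper's numbers:

```python
def pixels_to_mm(l_px, x_mm, coeffs):
    """Convert a measured image length in pixels to millimetres.

    D(x) = k2*x^2 + k1*x + k0 gives the mm-per-pixel scale at
    stand-off distance x (mm); coeffs = (k2, k1, k0) come from
    the fitted quadratic. The real length is L = D(x) * l,
    where l is the measured length in pixels.
    """
    k2, k1, k0 = coeffs
    d_mm_per_px = k2 * x_mm**2 + k1 * x_mm + k0
    return d_mm_per_px * l_px
```

At run time, `x_mm` comes from the filtered ranging-sensor output, so the scale updates as the hub approaches the nacelle.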
Taking the nacelle center as the origin of the world coordinate system, the actual distance between the nacelle center and the camera center is obtained by calculation. Taking the direction perpendicular to the nacelle section as the y-axis, the position of the camera relative to the nacelle can be obtained, that is, the position matrix T of the hub, as shown in Fig. 13.
Then the rotation matrix R of the hub is obtained by the gyroscope, and the pose matrix P of the hub can be obtained:

$$P=\left[\begin{array}{cc}R& T\\ 0& 1\end{array}\right]$$
R is the initialized rotation matrix. T is the initialized position matrix.
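Composing the gyroscope rotation and the visually derived position into a homogeneous pose matrix can be sketched in NumPy (the function name is ours):

```python
import numpy as np

def pose_matrix(R, T):
    """Compose a 4x4 homogeneous pose matrix from a 3x3 rotation
    matrix R and a 3-element position vector T."""
    P = np.eye(4)
    P[:3, :3] = np.asarray(R, dtype=float)
    P[:3, 3] = np.asarray(T, dtype=float).ravel()
    return P
```

The bottom row [0, 0, 0, 1] makes P directly composable with other homogeneous transforms, e.g. the initialization matrix measured at installation.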
4 Experimental Analysis
Firstly, the error of the visual algorithm itself is evaluated, and the evaluation results can provide a reference for subsequent accuracy improvement. The positioning mark used in the experiment is a blue circular mark with an actual radius of 85 mm, as shown in Fig. 14. The radius of the positioning mark is measured at a fixed distance; averaging multiple measurements gives a radius of 47.7 pixels. The visual algorithm then processes the positioning mark multiple times, and from the measured pixel radii the error of the visual algorithm is obtained, as shown in Fig. 15.
According to the data obtained from several experiments, under fixed conditions the error of the algorithm in detecting the radius is within 0.15 pixels. For the working conditions of hub installation, introducing a mean filter to optimize the algorithm keeps the error within 0.1 pixels.
On this basis, a ranging sensor is introduced for practical measurement experiments. The positioning markers are subjected to reciprocating motion and arbitrary angle deflection at random speeds within a range of 1 m in front of the camera to simulate the instability of the actual lifting process, as shown in Fig. 16.
The image processing algorithm, ranging sensor filtering algorithm and position detection algorithm described in this paper are used to measure the radius of the positioning markers multiple times, as shown in Fig. 17. The experimental results are compared with the actual radius of the positioning markers to compute the error, and the measured error fluctuation curve is obtained, as shown in Fig. 18.
According to the comprehensive results of multiple sets of experimental data, the position detection method in this paper can realize real-time radius detection of positioning markers in dynamic situations, and the detection error is within 5 mm. The results of multiple experiments show that the method has good repeatability. During fan hub installation, the distance between the hub and the nacelle is within 1 m, so the fan hub position detection method described in this paper has reference value for the actual installation process.
5 Conclusion
In this paper, a method for detecting the pose of the fan hub based on machine vision and a ranging sensor is proposed. The camera collects images, which undergo secondary processing, and a circle fitting algorithm based on the least squares method obtains the pixel coordinates of the positioning mark. The relative position between the hub and the nacelle is obtained from the ranging sensor together with the fitted relationship between camera DPI and distance. The collected data are processed and optimized by filtering algorithms, which improves the accuracy of visual positioning and meets the positioning requirements under actual working conditions. The feasibility and stability of the hub pose detection method are verified by experiments, meeting the accuracy requirements of engineering hoisting and demonstrating the feasibility of applying machine vision to construction machinery.
This visual positioning method can reduce the direct participation of personnel in the process of fan hub hoisting, and return the real-time position information to the crane console and commander. When there is a large angle deflection or abnormal position, the technical personnel can adjust the hoisting process in time to eliminate the risk. It provides a technical reference for the research and development of intelligent hoisting or intelligent crane.
References
Liu BL, Zou WC (2023) Intelligent depalletizing system for robots based on visual localization. Comput Syst Appl 32(07):138–144
Chen MX, Li YX, Ye M et al (2023) Automatic screw fastening assembly system based on visual localization. Guidance Fuzing 44(02):50–53+60
Gert N, Albertus AJE, Arnold HE (2023) Markerless monocular vision-based localisation for autonomous inspection drones. In: MATEC web of conferences, vol 370
Cao XY, Li Q, Zhang ZB et al (2023) Detection method of ultimate load bearing performance of jib crane based on machine vision. Instrum Technol Sens 483(04):113–117
Abdul-Rahman H, Chernov N (2013) Fast and numerically stable circle fit. J Math Imaging Vis 1–4
Chernov N, Ma H (2011) Least squares fitting of quadratic curves and surfaces. Nova Science Publishers, Inc., pp 287–302
Chernov N (2010) Circular and linear regression. Taylor and Francis; CRC Press
Wang JC (2022) Front-end machine vision system design based on Hess Hi3559 platform. Dalian University of Technology
Ma J (2020) Design and implementation of workpiece recognition and localization system based on machine vision. University of Chinese Academy of Sciences (Shenyang Institute of Computing Technology, Chinese Academy of Sciences)
Zhao X, Yang HM, Qiang J et al (2020) A high-precision coherent laser ranging method based on Kalman filtering. J Opt 40(14):115–123
Wang JZ (2020) Research on machine vision localization algorithm for stacking cartons. Huazhong University of Science and Technology
Acknowledgements
This work is supported by National Natural Science Foundation of China under Grant No. 52275088 and the Fundamental Research Funds for the Central Universities under Grant No. DUT22LAB507.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this paper
Cao, X., Hu, Y., Xu, G., Song, S., Tie, X. (2024). A Machine-Vision-Based Hub Location Detection Technique for Installing Wind Turbines. In: Halgamuge, S.K., Zhang, H., Zhao, D., Bian, Y. (eds) The 8th International Conference on Advances in Construction Machinery and Vehicle Engineering. ICACMVE 2023. Lecture Notes in Mechanical Engineering. Springer, Singapore. https://doi.org/10.1007/978-981-97-1876-4_30
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-1875-7
Online ISBN: 978-981-97-1876-4