Abstract
The regeneration of aircraft engine components requires a thorough assessment of their current condition, based on which a suitable repair strategy can be selected. To provide a measurement system for the inspection of worn components, various optical measurement methods were combined into a multi-sensor system. To completely reconstruct complex geometries, a 6-axis industrial robot and an additional rotational axis are applied. This robot-assisted multi-sensor system is used to digitise and characterise turbine blades of aircraft engines. The inspection process is non-destructive and different features are measured to acquire a holistic model. The sensors are used to reconstruct the 3-D geometry at different scale ranges and to characterise the surface based on its reflection properties. Afterwards, the data of each individual sensor are transferred into a uniform coordinate system. To ensure high sensitivity to wear and damage, a model-based system calibration and a data interface for subsequent diagnostics and simulations are essential to provide a reliable assessment of the performance and durability of the inspected components.
1 Introduction
The complexity of modern machines and their components is growing. While this trend improves the overall performance of higher-level systems, it introduces challenges in terms of maintenance. The increased complexity often results in higher part costs. Thus, the regeneration of these parts may be more economical and, with increasing material scarcity, also more ecological.
This work focuses on the regeneration of turbine blades used in aircraft engines. To assist the regeneration process, different optical sensors are combined to assess the current state of the component. This information can further be used to detect defects and estimate the reliability and performance. Based on the gathered information, a suitable regeneration strategy is chosen.
2 Motivation
The assessment of worn components is challenging due to the wide range of their possible conditions. Mechanical stress and thermal or chemical wear cause deviations from the original state and can impact the overall performance and reliability (Tabakoff et al. 1998; Kurz and Brun 2000; Laguna-Camacho et al. 2016). Detecting these changes and characterizing their appearance with non-destructive optical methods is the goal of this project.
However, deviations can occur in varying shapes, sizes and types. Thus, multiple geometric scales have to be taken into consideration when reconstructing the object.
While macroscopic defects (multiple centimetres) like larger cracks and dents may be detectable with one sensor, a different system is necessary to reconstruct defects which only have a size of a few micrometers.
In order to take into account various types of defects, a combination of multiple different sensors (S1,..., Sn) is necessary, since a single sensor is not capable of covering all of the required scale ranges. The basic concept of multiscale geometry inspection and fusion to a holistic dataset in a common coordinate system, which was developed in this work, is shown in Fig. 1.
Different scale ranges require different measurement techniques and global data registration necessitates the identification of spatial relationships of the individual sensors to each other to allow the fusion of data gathered from multiple sensors. Based on the available measurement methods, a suitable sensor can be chosen in order to meet the requirements to reconstruct a defect.
3 Multi-sensor Design
To provide a holistic model, a combination of geometric and non-geometric data has to be acquired. The developed sensor system consists of three sensors which provide geometric data. A fringe projection unit is used to cover the macro- and mesoscale, ranging from sub-millimetre to multiple centimetres. A low-coherence interferometer is used to reconstruct microscale features of the measurement object. In addition to these sensors, an illumination sensor is used to provide information about non-geometrical surface properties. The structure, functional principle and results of each individual sensor will be introduced in the next sections.
3.1 Illumination Sensor System
This sensor is mainly designed to assess the local reflective properties under different illumination scenarios. Therefore, the sensor is designed with 42 white light-emitting diodes (LEDs). Each one is collimated and placed on a hemisphere pointing towards the centre (see Fig. 2). To capture the illuminated scene, an industrial camera (AVT Manta G-1236B) is mounted in the centre of the hemisphere surface. Each light source is individually controllable, which leads to a large number of possible combinations and thus, lighting scenarios.
3.1.1 Data Acquisition
To perform a measurement, a set of pre-defined light configurations is executed and an image is recorded for each, resulting in an image stack with one image per configuration. While more complex scenarios are possible, this approach mainly focuses on images captured with only one LED enabled at a time. Depending on the enabled LED, the amount of reflected light changes the intensity of different image regions. This is caused by the combination of light direction and geometry which, based on macro- and microstructures on the surface, leads to shadowing or affects the overall reflected light. These changes can be observed in the example images given in Fig. 3.
3.1.2 Measurement Principle
In order to take advantage of the reflective properties of the object's surface, different algorithms are applied to extract information about the measurement object. They are divided into two approaches for a macroscopic and a microscopic characterization. Further, it is assumed that the position of each LED with respect to the camera is sufficiently well known from the computer-aided design (CAD) model. Thus, the light vectors can be calculated under the assumption of perfectly collimated light and a fixed working distance for the camera when placed in focus. These assumptions may deviate from the actual conditions due to assembly-related deviations, not perfectly collimated LEDs and varying distances to the measurement object, but experiments have indicated that these simplifications are feasible for this application.
The evaluation of the data is based on the following surface reflectance model utilising the surface normal n, the aforementioned incident light vectors ln with n ∈ [1, 42] for each LED and a view vector v. Figure 4 shows these vectors for a single surface point. Additionally, the angles θ and φ are defined for the incoming and outgoing rays, i.e. the light and view vectors. The parameter vector k is used to describe the surface properties; its length depends on the applied model. Utilizing these parameters, the shape of the measurement object can be approximated by applying a photometric stereo approach. This algorithm requires a set of images with varying light directions to determine the surface normal for each pixel. However, basic approaches assume that the surface is mainly diffuse with low specular features. Research has been carried out to overcome this limitation (Herbort and Wöhler 2011; Zheng et al. 2020). While highly specular objects are still challenging, the optimization process was adjusted to be more robust.
The objective is to find the surface normal by utilizing the set of pixel intensities for each light vector. A least-squares-based approach was published by Woodham (1979), which provides good results for worn turbine blades but is limited under strongly specular reflective conditions.
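Under the Lambertian assumption, this least-squares estimation can be sketched in a few lines. The function name, data layout and synthetic setup below are illustrative, not the system's actual implementation:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Least-squares photometric stereo in the sense of Woodham.

    intensities: (n_lights, H, W) image stack, one image per LED
    light_dirs:  (n_lights, 3) unit incident-light vectors
    Returns per-pixel unit surface normals, shape (H, W, 3).
    """
    n, h, w = intensities.shape
    I = intensities.reshape(n, -1)                      # (n, H*W)
    # Solve L g = I for g = albedo * normal per pixel (least squares)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    norm = np.linalg.norm(g, axis=0)
    normals = np.where(norm > 0, g / np.maximum(norm, 1e-12), 0.0)
    return normals.T.reshape(h, w, 3)
```

In practice, shadowed or saturated pixels would be excluded from the least-squares system before solving, which is what makes the approach more robust on specular blades.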
The resulting surface normals can further be used to create an unscaled 3-D model of the measurement object. Since there is no metric information, only qualitative statements can be made. This, however, is already sufficient to detect macroscale deviations of the overall geometry, e.g. missing parts of the nominal model. Experiments have shown that this method is not well suited for fine structures in the micrometre range, since the algorithm tends to smooth filigree features.
In addition, the estimated normals can be applied to characterize the object's surface properties. This is achieved by approximating a reflectance function for each pixel. For this, a bidirectional reflectance distribution function (BRDF) is utilised. The basic formula in Eq. 1,

$$f_r(\theta_i, \varphi_i, \theta_o, \varphi_o) = \frac{\mathrm{d}L_o(\theta_o, \varphi_o)}{\mathrm{d}E_i(\theta_i, \varphi_i)}, \tag{1}$$

describes the ratio of reflected to incident light with respect to the incident and azimuth angles, θ and φ, of the rays and the surface normal vector. These functions are mainly used in computer graphics to render photorealistic images of different materials. In general, a BRDF model requires the incident light beam l, the surface normal n, the viewing vector v and some model-specific parameters k (Guarnera et al. 2016), see Fig. 4. With the previously determined surface normals, all but the model parameters, which are used to describe the reflectance, are known. There has been extensive research on identifying model parameters from real-world data (Ward 1992; Westin et al. 1992; Lafortune et al. 1997). However, in this case the database is usually collected using gonioreflectometers, which provide a fine sampling along possible incident and viewing angles (Guarnera et al. 2016; Schröder and Sweldens 1995; Ngan et al. 2005; Bieron and Peers 2020).
The data provided by the illumination sensor is rather limited and thus may not be sufficient to find the 'actual' model parameters. However, the data basis is sufficient to approximate reflectance models. Figure 5 shows the pixel intensities of one image pixel for all 42 LEDs. While the fitted intensities do not perfectly overlap with the actual data, the trend is reproduced well. The approximated parameters from the model fit can then be divided into multiple classes, which can be used to perform a multi-class segmentation of the object surface. For this, state-of-the-art clustering algorithms like K-Means (Hartigan and Wong 1979) or DBSCAN (Ester et al. 1996) can be applied to determine cluster borders in parameter space. Depending on the algorithm used, an expected number of classes has to be given or is derived from the data. An example result of a K-Means clustering with three classes is shown in Fig. 6b, where the regions are marked with different colours. While the reflectance of rough surfaces, e.g. sand-blasted metal, is mainly diffuse, surfaces with lower roughness, e.g. polished metal, are shiny. Both effects are represented by parameters of the reflectance model, which are used for the classification. Thus, the classes correspond to areas with different roughnesses. This difference is shown in Fig. 7, where the marked spots from Fig. 6a have been measured with a confocal laser scanning microscope (Keyence VK-X200) as samples of the classified regions. The resulting roughness parameters for the measurements are listed in Table 1. These values suggest that the red region has a higher roughness than the green areas. As stated by Bons (2010), the roughness affects the performance of the gas turbine. Therefore, the resulting surface classification is useful to identify differing areas which can further be examined with punctual roughness measurements.
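The clustering step over per-pixel parameter vectors can be sketched with a minimal K-Means implementation. The function name and data layout are assumptions for illustration; in practice a library implementation would typically be used:

```python
import numpy as np

def kmeans(params, k, iters=100, seed=0):
    """Minimal K-Means over per-pixel reflectance parameter vectors.

    params: (n_pixels, d) fitted BRDF parameters (e.g. diffuse and
            specular coefficients); k: expected number of surface classes.
    Returns (labels, centroids).
    """
    rng = np.random.default_rng(seed)
    centroids = params[rng.choice(len(params), k, replace=False)]
    for _ in range(iters):
        # Assign each parameter vector to its nearest centroid
        d = np.linalg.norm(params[:, None, :] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move centroids to the mean of their assigned vectors
        new = np.array([params[labels == i].mean(axis=0)
                        if np.any(labels == i) else centroids[i]
                        for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Mapping the resulting labels back onto the image grid yields a segmentation like the one in Fig. 6b.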
A model-driven estimation of the local roughness parameters according to the identification of the reflection parameters is not reliably possible with the existing data base.
3.1.3 Conclusion Illumination Sensor
Since the reflectance of a surface depends on the surface roughness, the results can be used to make a qualitative distinction between regions of differing roughness. These regions can then be measured in future steps to assign quantitative values. Thus, this sensor is mainly used to gather qualitative information about the measurement object: a rough geometric assessment is possible, and different surface regions can be distinguished by examining their reflectance.
3.2 Fringe Projection Sensor
A regular fringe projection system (FPS) consists of a camera and a projector unit to project structured patterns onto the measurement object, see Fig. 8. In this case, two industrial monochrome cameras (AVT Manta G-419B) are coupled with a programmable projector module (Wintech PRO4500). The second camera is equipped with a different focal length to expand the scale range of the acquired data. Both systems are calibrated by identifying the optical parameters and the spatial relation between the components, applying state-of-the-art camera, projector and stereo calibration methods like Zhang (2000) or those proposed by Hartley and Zisserman (2003). To encode the projector pixels, an 8-step phase-shifting sequence following Peng (2007) is applied. Here, 8 phase-shifted sine patterns with increasing frequencies are projected onto the measurement object.
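The N-step phase-shifting principle can be sketched as follows. Pattern size, frequency and function names are illustrative only, and the unwrapping across the multi-frequency sequence is omitted:

```python
import numpy as np

def make_patterns(width, n_steps=8, freq=1):
    """N-step phase-shifted sine patterns (a single row suffices here).

    Returns the (n_steps, width) pattern stack and the ground-truth
    phase, which encodes the projector pixel position.
    """
    x = np.arange(width)
    phase = 2 * np.pi * freq * x / width
    deltas = 2 * np.pi * np.arange(n_steps) / n_steps
    patterns = 0.5 + 0.5 * np.cos(phase[None, :] + deltas[:, None])
    return patterns, phase

def decode_phase(images):
    """Recover the wrapped phase from an N-step phase-shift sequence.

    For I_k = A + B*cos(phi + delta_k), summing I_k*sin(delta_k) and
    I_k*cos(delta_k) isolates sin(phi) and cos(phi) respectively.
    """
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = -(images * np.sin(deltas)[:, None]).sum(axis=0)
    den = (images * np.cos(deltas)[:, None]).sum(axis=0)
    return np.arctan2(num, den)
```

The increasing pattern frequencies are then combined in a phase-unwrapping step to resolve the ambiguity of the wrapped phase.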
3.2.1 Data Acquisition and Registration
Triangulation determines a corresponding surface object point for each camera pixel by 3-D ray intersection, resulting in a high-density three-dimensional, metric point cloud. For the unambiguous spatial determination of the corresponding viewing beams of both camera and projector, the projection patterns are captured and evaluated via a phase unwrapping pipeline.
For a better assessment of the turbine blade's state, a 3-D model is necessary. Since the fringe projection sensor only provides surface measurements from a single viewing point, multiple measurements have to be combined. To register these, point correspondences have to be estimated. There are various algorithms which can be used to determine these correspondences. Depending on the algorithm itself, different kinds of data and prerequisites may be necessary to apply them. Below, these strategies are divided into two categories: 2-D based and 3-D based algorithms.
The first uses 2-D organised data, e.g. image data, to detect features and calculate descriptors, which then can be used to estimate correspondences. The data used for the feature detection is the colour information, which can be greyscale or RGB. Some of the most common algorithms used for image feature detection are ORB (Rublee et al. 2011), SURF (Bay et al. 2008) or SIFT (Lowe 2004).
The other category handles unorganized 3-D data, e.g. point clouds. Here each 3-D point and its neighbourhood is taken into consideration to calculate feature descriptors. Usually these require the surface normals of each point, which can easily be approximated (Mitra and Nguyen 2003). Algorithms of this kind are FPFH (Rusu et al. 2009), ‘spin images’ (Johnson and Hebert 1999) or SHOT (Salti et al. 2014). These feature descriptors can be used for object recognition (Gupta et al. 2019), while recently neural networks are frequently used for this task (Liang and Hu 2015).
However, the state of the measurement object can vary considerably and, with it, the texture and the number of geometrical features present. This means that a registration using only naturally occurring features is not robust to changing measurement objects. To overcome this issue, artificial textures are applied to the measurement object by projecting random patterns onto it (Betker et al. 2020). Hereby, more features can be detected with 2-D feature detectors. The advantage of this approach is that no manual steps are necessary to apply any markers next to or on the object. However, neither the position of the patterns with respect to the object nor their optical properties may change during the measurement process.
The setup for the random pattern projection is shown in Fig. 9a. Multiple projectors (Texas Instruments DLPDLCR2000EVM) are placed around the mounting system and aimed at the object, each projecting a different generated random pattern. To ensure a high density of features, different kinds of pattern designs and 2-D feature detectors have been examined. Overall, binary patterns performed better than greyscale ones, partly because sharp borders are more robust against effects introduced by defocus, noise and the mixture with the actual object texture. The best results were obtained with a combination of randomly placed overlapping black rectangles on a white background and a SIFT feature detector and descriptor. Since the 3-D reconstruction is calculated in the camera coordinate system, additional colour information can be added by acquiring an image with active random pattern projection. This way, each point holds not only the spatial coordinates but also an arbitrary number of, in this case, greyscale intensities. These greyscale images can be used to determine 2-D correspondences by applying the aforementioned SIFT feature detector. Since each reconstructed 3-D object point corresponds to a camera pixel, any 2-D assignment can subsequently be transferred to the registration of the point clouds. An example pair of images and the corresponding surface measurements is shown in Fig. 9b. To increase visibility, not all found correspondences are shown.
Due to the chosen pattern, the algorithm detects a large number of corresponding points. Although the majority of correspondences is plausible, a certain number of false connections is present. This problem is addressed by utilizing a random sample consensus (RANSAC) approach to estimate the rigid body transformation between both measurements. The combination of high-density features distributed on the surface and an outlier-aware transformation estimation results in a robust alignment of multiple 3-D measurements, which is independent of natural features. To further improve the transformation and reduce the remaining alignment error, an iterative closest point (ICP) algorithm is applied. In this work, a coloured ICP (Park et al. 2017) is used to further benefit from the projected patterns.
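The outlier-aware transformation estimate can be sketched as follows: a Kabsch-style least-squares rigid transform wrapped in a RANSAC loop over the putative correspondences. This is an illustrative minimal version, not the implementation used in the system:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: keep det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, thresh=0.01, seed=0):
    """RANSAC over putative 3-D correspondences lifted from 2-D features.

    src, dst: (n, 3) corresponding points; returns (R, t, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)  # minimal sample
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final refit on the consensus set
    return (*rigid_transform(src[best_inliers], dst[best_inliers]),
            best_inliers)
```

The resulting transform then serves as the initial guess for the coloured ICP refinement.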
However, the pairwise registration of measurements is error-prone, and even small alignment errors add up when forming the complete 3-D model. This can result in a loop-closing problem, where the first and last measurements of a sequence are not aligned as expected. This problem is addressed by applying a multiway registration as proposed by Choi et al. (2015), which performs a graph optimization between all segments. Furthermore, neighbouring measurements are combined into fragments to increase the overlap between segments.
Running the presented registration pipeline results in blade measurements as seen in Fig. 10. To reduce the number of points in the merged point cloud, close points are merged with a weighted voxel based filter. The respective weight is chosen by the reconstruction quality of each point, which is derived from the signal quality during measurement.
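The weighted voxel filter described above can be sketched as follows. The data layout and the origin of the per-point weights are assumptions for illustration:

```python
import numpy as np

def weighted_voxel_filter(points, weights, voxel_size):
    """Merge nearby points into one weighted mean per voxel.

    points:  (n, 3) merged point cloud
    weights: (n,) per-point quality weights (e.g. derived from the
             fringe signal quality during measurement)
    Returns one weighted mean point per occupied voxel.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by integer voxel index
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n_vox = inv.max() + 1
    wsum = np.bincount(inv, weights=weights, minlength=n_vox)
    out = np.stack([np.bincount(inv, weights=weights * points[:, i],
                                minlength=n_vox) for i in range(3)], axis=1)
    return out / wsum[:, None]
```

Because low-quality points contribute less to each voxel mean, the filter suppresses noisy reconstructions while thinning the cloud.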
3.2.2 Data Evaluation
In this section, some options to process the data provided by the fringe projection unit are discussed. The goal of the evaluation step is to detect defects or damaged regions on the turbine blade. Given the nominal geometry, the 3-D measurement can be aligned to it. This allows the estimation of deviations from the nominal structure of the turbine blade. An example deviation map is shown in Fig. 11. Since the actual nominal geometry from the manufacturer is unknown, another worn blade was chosen to represent the nominal geometry; thus, deviations were calculated between multiple worn blades to illustrate the process. Regions with missing material are coloured blue, added material is represented in red, and green regions have low deviations from the nominal structure.
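Such a deviation map boils down to signed point-to-nominal distances. The brute-force sketch below assumes already aligned point clouds and available nominal normals (a KD-tree would replace the exhaustive search in practice):

```python
import numpy as np

def deviation_map(actual, nominal, nominal_normals):
    """Signed deviations of a measurement from the nominal geometry.

    For each measured point, find the nearest nominal point and sign
    the offset by the nominal surface normal:
    positive = added material, negative = missing material.
    """
    # Exhaustive nearest-neighbour search (fine for small clouds)
    d = np.linalg.norm(actual[:, None, :] - nominal[None, :, :], axis=2)
    nn = d.argmin(axis=1)
    diff = actual - nominal[nn]
    return np.einsum('ij,ij->i', diff, nominal_normals[nn])
```

Colour-coding these signed values yields a map like Fig. 11.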
While this representation allows a quick assessment of damaged and undamaged regions, there are some considerations to be made. First, the nominal geometry has to be given. Furthermore, the initial alignment of nominal and actual geometry may be influenced by errors introduced by variations of the structure. To ensure a good alignment, reference points which are not influenced by the operational stress would be necessary. Nevertheless, this strategy can, depending on the requirements, be sufficient to draw conclusions about the blade's condition.
Because of the aforementioned drawbacks another strategy has been researched. With emerging research in the field of artificial intelligence, various algorithms and neural networks have been released to handle a multitude of tasks. To use these methods in the field of defect detection, convolutional neural networks (CNN) were chosen. To be more specific, image segmentation networks. Since single measurements of the fringe projection sensor are in a matrix structure this data can be used as input data for CNNs.
The goal is to detect defects in single measurements using this approach. Firstly, a 'defect' has to be defined in order to create corresponding labels. Because the worn turbine blades have numerous defects and a clear line between defective and intact regions is hard to define, the cooling holes of the blade are chosen instead. These are much more distinct and thus easier to label without expert knowledge. In some ways, the form of the cooling holes resembles the geometry of a defect: they interrupt an otherwise continuous surface by introducing high curvatures in a certain area.
With this definition, a dataset of multiple measurements has been labelled. A sample is shown in Fig. 12. The annotations are based on the greyscale image and can then be extended to the 3-D data. Different combinations of input data have been examined; a promising combination is the colour information with approximated normals. A prediction of the trained model and the difference between prediction and manual labels is shown in Fig. 13. It can be seen that more regions are marked than in the original labels. Subjectively, these predictions are plausible and usually represent regions with larger curvatures. However, the available training dataset is limited and the overall performance could be improved further with more data. Even with this rather small dataset, the network seems to learn the rules for a local deviation, which can also be interpreted as a defect. Therefore, it is expected that this type of neural network can be trained to segment more general deviating regions.
3.2.3 Conclusion Fringe Projection Sensor
The main task of the fringe projection system is the reconstruction of turbine blades in 3-D metric space. As shown in the previous section, it is possible to use this data to detect defects or deviations, either through neural networks or through regular deviation estimation based on point distances. Thus, this sensor is mainly used to gather geometrical data in the macro and meso scale ranges.
3.3 Low-Coherence Interferometer
The low-coherence interferometer (LCI) is an interferometer in a Michelson configuration. The basic setup is shown in Fig. 14. A regular industrial camera (Basler acA1920-48gm) in combination with a telecentric lens is used to capture the interferences which form on the surface. The objective has a larger working distance than similar systems. This simplifies the positioning of the sensor in relation to the complex geometries and reduces the risk of collisions.
The low-coherence light source has a wavelength of 665 nm. The light beam is collimated and sent into a 50/50 beam splitter to split it into reference and measurement beams. The reference beam is then sent to a deflection mirror, which is implemented as a 50/50 beam splitter, and gets reflected at the reference mirror, which is also realised with a beam splitter, but with a 90% transmission to 10% reflection ratio. As a result, the reference beam intensity is reduced, since the surface of the measurement objects is very rough and does not reflect much light. With these adjustments, the beam intensities of the measurement and reference rays are matched in order to improve the overall contrast of the occurring interferences.
3.3.1 Data Acquisition and Evaluation
The LCI is mounted onto a high precision linear axis (PI L-509) to perform the scanning process for each measurement. This way the optical path length of the measurement beam is changed, which allows sampling the depth of the surface. For each step of the scanning process an image is recorded and put into an image stack, which can further be evaluated. Li et al. (2015) presented a GPU-based evaluation strategy to calculate the corresponding depth maps. Paired with the magnification properties of the telecentric lens it is possible to estimate 3-D data.
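The principle of the depth evaluation can be illustrated with a simplified, CPU-based sketch: per pixel, the depth is taken where the interference modulation (the coherence envelope) is strongest. The GPU pipeline of Li et al. (2015) is considerably more elaborate; the names and the crude envelope estimate here are assumptions:

```python
import numpy as np

def depth_from_stack(stack, z_positions):
    """Coherence-peak depth per pixel from an LCI image stack.

    stack: (n_steps, H, W) intensities recorded over the axial scan
    z_positions: (n_steps,) corresponding stage positions
    """
    ac = stack - stack.mean(axis=0)          # remove background level
    power = ac ** 2
    # Crude envelope: moving average of the squared AC signal
    kernel = np.ones(5) / 5.0
    env = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode='same'), 0, power)
    return z_positions[env.argmax(axis=0)]
```

Sub-step depth resolution is then obtained by interpolating around the envelope maximum rather than taking the discrete argmax.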
Some example results are given in Fig. 15. The left measurement shows a section of the pressure side of the turbine blade; the right side shows an area of the leading edge, where smaller defects and transitions into cooling holes are visible. Depth resolutions finer than 200 nm are possible but strongly depend on the signal strength, which in turn depends on the surface properties. The lateral resolution is determined by the pixel size and lens magnification, which leads to 1.2 µm.
3.3.2 Conclusion Low-Coherence-Interferometer
With its rather small measurement area of around 3 mm² and a long measuring duration, the LCI is not suitable for digitizing complete geometries or surfaces, but rather for local inspections which require a high depth resolution. Thus, the LCI is used as a complementary sensor to gather data on the micro scale in particularly relevant regions of the turbine blade, such as damages, cooling holes or areas with fine structures which can influence the air flow.
3.4 Multi-sensor Measuring Head
In the previous sections, the different sensors were introduced. To combine them into one multi-sensor system, the individual requirements regarding working distance, orientation and field of view have to be taken into account. Given the interface of the robot, a measuring head has been developed; a CAD model is shown in Fig. 16b. All sensors are mounted on the end-effector of a 6-axis industrial robot (Stäubli TX90). This allows flexible and precise positioning of the sensor head with respect to the specimen. The robot kinematics is extended with a rotational axis, which rotates the measurement object and the random pattern projectors used for data alignment. The combination of robot and rotational axis makes it possible to fully reconstruct each individual specimen. With this design, each sensor can be selected by rotating the end-effector. The LCI and the fringe projection unit were placed such that their fields of view overlap to allow direct interactions, see Fig. 16a. The illumination sensor is separated to prevent collisions when handling the other sensors. Thus, with this setup it is possible to use the individual sensors within the multi-sensor setup without directly affecting the other sensors.
4 Data Fusion
Every sensor operates on its own scale range and provides measurement data. However, the measurements of each sensor are currently in the respective sensor coordinate system (see Fig. 16). This means the measurement data is scattered in space and does not refer to a unified coordinate system. Thus it is not possible to immediately combine the information gathered from different sensors. In order to achieve this goal, each sensor coordinate system has to be calibrated to refer to a unified coordinate system. The next sections will outline this calibration process for each sensor.
4.1 Sensor Hand-Eye Calibration
For camera-based sensors, state-of-the-art hand-eye calibration methods can be applied to calculate the position of the camera coordinate system with respect to the robot's base coordinate system (Tsai et al. 1989). For this, the robot is moved and, for each pose, a stationary calibration target is captured with the camera. In this work, a 2-D dot pattern with known dot distance is used as the calibration target. From the known properties of the target, the transformation between it and each camera pose can be estimated. The set of camera-to-target transformations and the forward-kinematics-based transformations of the robot facilitate the hand-eye calibration of the fringe projection and illumination sensors. Both cameras can be modelled with the pinhole camera model. The camera and lens of the LCI, however, require a different model. In addition, the depth of field of the telecentric lens is much smaller and the required number of different robot poses cannot be achieved, because the blurriness prevents the transformation estimation. Furthermore, the camera centre of a telecentric lens cannot be explicitly determined, but has to be set manually. Therefore, a different calibration procedure has to be used for the LCI.
To calibrate the interferometer in respect to the other sensors, a 3-D calibration strategy is applied. For this a pair of LCI and fringe projection measurements is used. Both perform a measurement of a 3-D calibration target with distinct features, which allows an unambiguous alignment of both measurements. The resulting transformation which is necessary to align LCI into fringe projection data can be used to determine the 3-D calibration of the LCI coordinate system. Thus the fringe projection can be used as reference coordinate system for the LCI, which closes the transformation chain to the robot base.
The calibration of each sensor makes it possible to transform data from one coordinate system to another. In order to transform fringe projection data (FPS) into the coordinate system of the interferometer (LCI), the homogeneous transformation matrix $^{LCI}T_{FPS}$ is applied by multiplying it with the fringe projection data p (cf. Eq. 2):

$$p_{FPS,LCI} = {}^{LCI}T_{FPS} \cdot p_{FPS} \tag{2}$$
The subscript of the data denotes its coordinate system. For better understanding, the data origin is kept as the first subscript for transformed data. Sub- and superscripts applied to transformation matrices denote the source and destination coordinate systems, respectively.
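Applying such a homogeneous transformation to a point cloud is straightforward; the helper below is an illustrative sketch:

```python
import numpy as np

def transform_points(T, points):
    """Apply a 4x4 homogeneous transform T (destination <- source)
    to an (n, 3) point cloud, e.g. LCI_T_FPS applied to FPS data."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]
```

Chaining such matrices along the calibrated kinematic chain moves data between any pair of sensor coordinate systems.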
4.2 Calibration of the Rotational Axis
While the hand-eye calibrations are sufficient to transform data when only moving the robot, the rotary axis is not yet considered. To include the additional rotational movement, the axis is integrated into the kinematic chain and calibrated. For this, the pose of the rotational axis with respect to the robot has to be identified. This is achieved by mounting the calibration target onto the axis and recording multiple images while rotating the axis in small steps. For each axis position, the target centre can be estimated utilizing the calibrated camera and the known target properties. The 3-D information of each estimated centre is used to calculate a three-dimensional circle fit. The normal of this circle describes the direction of the rotational axis; its sign is chosen with respect to the actual rotation of the axis following the right-hand rule. The missing degree of freedom along the axis is set manually.
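The axis identification can be sketched as a plane fit followed by an algebraic 2-D circle fit. This is an illustrative implementation; the sign convention along the rotation direction, mentioned above, is omitted:

```python
import numpy as np

def fit_rotation_axis(centers):
    """Fit a 3-D circle to target-centre positions recorded while the
    rotary stage turns; the circle normal gives the axis direction.

    centers: (n, 3) estimated calibration-target centres.
    Returns (point_on_axis, axis_direction).
    """
    c = centers.mean(axis=0)
    # Plane fit: normal is the singular vector of smallest singular value
    _, _, Vt = np.linalg.svd(centers - c)
    normal = Vt[2]
    # Express the centres in plane coordinates and fit a circle
    u, v = Vt[0], Vt[1]
    x, y = (centers - c) @ u, (centers - c) @ v
    # Algebraic circle fit: 2*a*x + 2*b*y + (r^2 - a^2 - b^2) = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    centre = c + sol[0] * u + sol[1] * v
    return centre, normal
```

The fitted centre and direction then define the rotary joint inserted into the kinematic chain.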
4.3 Combination of Measurement Data
With the calibrated systems it is now possible to transform all gathered data into a unified coordinate system. Figure 17 shows the registered measurement of LCI and FPS. Larger structures on the blade surface can be observed in both surface reconstructions. Roughness measurements can thus be used to complement existing data with higher depth resolution. The transformation of the data is performed by using Eq. 2:
In this case both measurements are transformed into the stage coordinate system (RS) to include rotations which may have been performed between both measurements. Since some geometrical features are available this coarse alignment can further be improved by applying e.g. a point-to-plane ICP.
The previous example demonstrates the combination of multiple scales of 3-D data.
The determined transformations additionally allow the merging of 2-D and 3-D measurements as present with the results of the illumination sensor. This, however, requires the addition of the intrinsic camera parameters to perform a proper projection of the data into the respective camera coordinate system. Melchert et al. (2020) demonstrated this projection step to utilize the surface normals from the FPS to improve the classification results and map them onto a 3-D measurement.
Figure 18 shows this process. The classification image, which is derived from the 2-D data shown on the left, is projected onto the 3-D points. This increases the amount of information attached to each point of the surface measurement and thus to each point of the 3-D model. Based on this enhanced model and the segmentation results of the CNN (see Fig. 13a), a more comprehensive defect detection can be carried out.
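The projection of labels onto the 3-D points amounts to a standard pinhole projection with the camera intrinsics, followed by a per-pixel lookup. A minimal sketch, assuming the points are already expressed in the camera frame (function name and the −1 "no label" convention are illustrative):

```python
import numpy as np

def project_labels(points_cam, K, label_image):
    """Project camera-frame 3-D points through the pinhole model
    u ~ K @ X and look up the classification label at each resulting
    pixel. Points behind the camera or outside the image get label -1."""
    h, w = label_image.shape
    labels = np.full(len(points_cam), -1, dtype=int)
    in_front = points_cam[:, 2] > 0          # only points in front of the camera
    uvw = points_cam[in_front] @ K.T         # homogeneous image coordinates
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = label_image[uv[valid, 1], uv[valid, 0]]
    return labels
```

A real implementation would additionally apply the lens distortion model and, as in Melchert et al. (2020), use the FPS surface normals to discard points not visible to the camera.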
However, since multiple calibrations depend on each other, the registration process is prone to error propagation. Because many of the calibration approaches are based on robot-mounted cameras, even small errors in the camera calibration influence the hand-eye calibration, the identification of the rotary stage and the stereo calibration results. In addition, the robot exhibits assembly-related deviations in segment lengths and axis alignment. To reduce this effect, the robot is factory-calibrated. Nevertheless, these aspects impact the overall registration performance and have to be taken into account.
5 Conclusions and Outlook
The application of optical measuring instruments of different scales and modalities in a common, global coordinate system opens up a wide range of novel inspection possibilities. In this way, the advantages of different measuring principles can complement each other, optimally balancing resolution, measuring field size, accuracy and measuring duration, and thus ensure a fast and meaningful diagnostic process. The detection and characterization of defects can then draw on multiple layers of information. Approaches for such a multilayered defect detection and the interpretation of combined measurement data are the subject of further research.
In addition, individual sensors can profit from the multi-sensor setup. A good example is the combination of the fringe projection unit and the illumination sensor. While the illumination sensor is capable of estimating the surface normals of the object on its own, the results are strongly dependent on the surface properties and favour low-frequency structures. The surface reconstruction of the fringe projection system, on the other hand, can provide normals with a much higher certainty.
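One common way such normals can be derived from the FPS reconstruction is a local plane fit per point (cf. Mitra and Nguyen 2003). The following brute-force sketch is illustrative only; in practice a k-d tree would replace the exhaustive neighbour search.

```python
import numpy as np

def estimate_normals(points, k=8):
    """Estimate per-point surface normals of a dense point cloud by local
    plane fits: for each point, take the eigenvector belonging to the
    smallest eigenvalue of the covariance of its k nearest neighbours."""
    P = np.asarray(points, dtype=float)
    normals = np.empty_like(P)
    for i, p in enumerate(P):
        d = np.linalg.norm(P - p, axis=1)
        nbrs = P[np.argsort(d)[:k]]          # k nearest neighbours (incl. p)
        w, v = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = v[:, 0]                 # eigenvector of smallest eigenvalue
    return normals
```

The sign of each normal is ambiguous and would be fixed, e.g., by orienting it towards the sensor position.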
The interaction of multiple sensors also opens up possibilities for measurement planning. While the FPS and the illumination sensor can be used to obtain a good overview of the measurement object, the LCI is employed for local detail measurements. The positions of these surface roughness measurements can be derived from the overview data, e.g. from damaged regions identified in the classification results. This on-demand sensor selection can greatly improve the measurement process, since only the necessary measurements are performed.
References
Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008). Speeded-up robust features (SURF). Computer Vision and Image Understanding, 110(3):346–359.
Betker, T., Quentin, L., Kästner, M., and Reithmeier, E. (2020). 3D registration of multiple surface measurements using projected random patterns. In Optics and Photonics for Advanced Dimensional Metrology, volume 11352, page 113520C. International Society for Optics and Photonics.
Bieron, J. and Peers, P. (2020). An adaptive BRDF fitting metric. In Computer Graphics Forum, volume 39, pages 59–74. Wiley Online Library.
Bons, J. P. (2010). A Review of Surface Roughness Effects in Gas Turbines. Journal of Turbomachinery, 132(2). 021004.
Choi, S., Zhou, Q.-Y., and Koltun, V. (2015). Robust reconstruction of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Ester, M., Kriegel, H.-P., Sander, J., Xu, X., et al. (1996). A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pages 226–231.
Guarnera, D., Guarnera, G., Ghosh, A., Denk, C., and Glencross, M. (2016). BRDF representation and acquisition. Computer Graphics Forum, 35(2):625–650.
Gupta, S., Kumar, M., and Garg, A. (2019). Improved object recognition results using SIFT and ORB feature detector. Multimedia Tools and Applications, 78(23):34157–34171.
Hartigan, J. A. and Wong, M. A. (1979). Algorithm AS 136: A k-means clustering algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):100–108.
Hartley, R. and Zisserman, A. (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.
Herbort, S. and Wöhler, C. (2011). An introduction to image-based 3D surface reconstruction and a survey of photometric stereo methods. 3D Research, 2(3).
Johnson, A. E. and Hebert, M. (1999). Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433–449.
Kurz, R. and Brun, K. (2000). Degradation in gas turbine systems. Journal of Engineering for Gas Turbines and Power, 123(1):70–77.
Lafortune, E. P., Foo, S.-C., Torrance, K. E., and Greenberg, D. P. (1997). Non-linear approximation of reflectance functions. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 117–126.
Laguna-Camacho, J., Villagrán-Villegas, L., Martínez-García, H., Juárez-Morales, G., Cruz-Orduña, M., Vite-Torres, M., Ríos-Velasco, L., and Hernández-Romero, I. (2016). A study of the wear damage on gas turbine blades. Engineering Failure Analysis, 61:88–99.
Li, Y., Kästner, M., and Reithmeier, E. (2015). Development of a compact low coherence interferometer based on GPGPU for fast microscopic surface measurement on turbine blades. In Optical Measurement Systems for Industrial Inspection IX, volume 9525, pages 164–170. SPIE.
Liang, M. and Hu, X. (2015). Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110.
Melchert, N., Kästner, M., and Reithmeier, E. (2020). Robot-assisted BRDF measurement and surface characterization of inhomogeneous freeform shapes. In de Groot, P. J., Leach, R. K., and Picart, P., editors, Optics and Photonics for Advanced Dimensional Metrology, volume 11352, pages 37–42. International Society for Optics and Photonics, SPIE.
Mitra, N. J. and Nguyen, A. (2003). Estimating surface normals in noisy point cloud data. In Proceedings of the Nineteenth Annual Symposium on Computational Geometry, SCG ’03, page 322–328, New York, NY, USA. Association for Computing Machinery.
Ngan, A., Durand, F., and Matusik, W. (2005). Experimental analysis of BRDF models. Rendering Techniques, 2005(16th):2.
Park, J., Zhou, Q.-Y., and Koltun, V. (2017). Colored point cloud registration revisited. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).
Peng, T. (2007). Algorithms and models for 3-D shape measurement using digital fringe projections. University of Maryland, College Park.
Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011). ORB: An efficient alternative to SIFT or SURF. In 2011 International Conference on Computer Vision, pages 2564–2571.
Rusu, R. B., Blodow, N., and Beetz, M. (2009). Fast point feature histograms (FPFH) for 3D registration. In 2009 IEEE International Conference on Robotics and Automation, pages 3212–3217. IEEE.
Salti, S., Tombari, F., and Di Stefano, L. (2014). SHOT: Unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding, 125:251–264.
Schröder, P. and Sweldens, W. (1995). Spherical wavelets: Efficiently representing functions on the sphere. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 161–172.
Tabakoff, W., Hamed, A., and Shanov, V. (1998). Blade deterioration in a gas turbine engine. International Journal of Rotating Machinery, 4(4):233–241.
Tsai, R. Y. and Lenz, R. K. (1989). A new technique for fully autonomous and efficient 3D robotics hand/eye calibration. IEEE Transactions on Robotics and Automation, 5(3):345–358.
Ward, G. J. (1992). Measuring and modeling anisotropic reflection. In Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pages 265–272.
Westin, S. H., Arvo, J. R., and Torrance, K. E. (1992). Predicting reflectance functions from complex surfaces. In Proceedings of the 19th annual conference on Computer graphics and interactive techniques, pages 255–264.
Woodham, R. J. (1979). Photometric stereo: A reflectance map technique for determining surface orientation from image intensity. In Nevatia, R., editor, SPIE Proceedings. SPIE.
Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334.
Zheng, Q., Shi, B., and Pan, G. (2020). Summary study of data-driven photometric stereo methods. Virtual Reality & Intelligent Hardware, 2(3):213–221. 3D Visual Processing and Reconstruction Special Issue.
Acknowledgements
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—SFB 871/3—119193472. In addition, the authors are grateful to all laboratory assistants and students who contributed to the realisation of this project.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2025 The Author(s)
About this chapter
Cite this chapter
Sliti, T., Kästner, M., Reithmeier, E. (2025). Multiscale Measurement of Blade Geometries with Robot-Supported, Laser-Positioned Multi-sensor-Techniques. In: Seume, J.R., Denkena, B., Gilge, P. (eds) Regeneration of Complex Capital Goods. Springer, Cham. https://doi.org/10.1007/978-3-031-51395-4_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-51394-7
Online ISBN: 978-3-031-51395-4