1 Introduction

X-ray computed tomography (XCT) machines first appeared in 1971 as a medical imaging tool. Today, XCT systems are not only used for medical imaging but also serve as coordinate measuring systems (CMSs) in industrial applications [1]. XCT offers numerous advantages over traditional tactile and optical CMSs, allowing for the nondestructive measurement of both internal and external features with resolutions as fine as a few micrometres, depending on the metrological structural resolution achievable with the system in use and the selected scanning parameters.

However, despite its potential, XCT for metrological applications still requires further development to achieve reliability and standardisation. The complex nature of the XCT process, influenced by various factors affecting measurement uncertainty, presents challenges to achieving consistent and accurate results. Several empirical approaches have been proposed in the literature [2]. Additionally, the modelling of X-ray attenuation through materials remains a complex problem, leading to a lack of international standards for uncertainty estimation.

The workflow for metrology in XCT involves physical measurement and subsequent data processing steps. Physical measurement involves conducting XCT scans to generate greyscale images for analysis. The data processing phase includes reconstruction, surface determination, point partitioning and fitting of the measurand [3].

During physical measurement, a source emits X-rays, which pass through the material of a workpiece, and images are captured using an X-ray detector, typically a flat-panel detector producing two-dimensional greyscale images. Multiple images are captured from different angles around the workpiece.

The reconstruction process involves using a mathematical algorithm, such as Feldkamp–Davis–Kress [4], to convert projected images into a three-dimensional (3D) voxel representation. Each voxel in this volume stores information on X-ray attenuation, represented by different grey levels.

After this comes one of the most critical and challenging steps of the process: surface determination. This step is primarily divided into two phases: segmentation and surface extraction.

Segmentation is designed to distinguish between materials and the background within an XCT volume. Its input is a reconstructed 3D voxel representation. Segmentation assigns each voxel to either the background or the material(s) following a specific criterion. Various criteria have been proposed in the literature and will be introduced later.

The ultimate goal of surface extraction is to more accurately define the skin model [5] of the part. It typically starts with the centroids of boundary voxels provided by segmentation and then refines these points using subvoxeling techniques. The final output of surface extraction is usually a point cloud, accompanied by its triangulation.

Finally, the last step is to partition the obtained points, assigning them to the different features of the object to be measured. This process is typically driven by the target measurands, which are subsequently fitted to their respective geometries for measurement. This allows for the analysis and comparison of results.

XCT is gaining relevance in several industrial fields, such as automotive, aerospace, medical and additive manufacturing. Therefore, exploring various tolerances using XCT becomes crucial. However, whereas dimensional measurements like distances and diameters are often studied in the literature, geometric deviation [6] measurements have received less attention.

From a measurement perspective, the primary distinction between dimensional and geometric measurements lies in their fitting methods: least squares fitting is typically employed for dimensional measurements [7], whereas Chebyshev fitting is used for geometric measurements. Although least squares is not the default specification operator considered in ISO 14405-1 [8], this method is often adopted in coordinate metrology for size measurements because it averages out local deviations from a nominal surface, effectively smoothing out measurement noise. In contrast, Chebyshev fitting aims to minimise the maximum deviation between the data points and the fitted curve or surface. This method is the default specification operator indicated in ISO 1101 [6], which focuses on geometric tolerances. Chebyshev fitting is critical for assessing shape errors such as roundness, straightness and flatness. By applying the minimax (Chebyshev) criterion, it identifies the minimum amplitude of the tolerance zone that still contains the real surface (ideally) or the measured skin model of the part. Consequently, geometric measurements using Chebyshev fitting are generally more sensitive to noise than dimensional measurements. In addition, the measurement uncertainty is limited by the system resolution when applying Chebyshev fitting. Therefore, effective subvoxeling is mandatory to ensure measurement accuracy.
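To make the distinction concrete, the following minimal sketch contrasts a least-squares plane fit with a minimum-zone (Chebyshev) flatness evaluation on a synthetic point cloud. The linear-programming formulation is only one common way to solve the minimum-zone problem under a small-slope assumption; it is an illustrative sketch, not the fitting routine used later in this work.

# Hedged sketch: least-squares plane fit vs. minimum-zone (Chebyshev) flatness.
# The synthetic data and the linearised LP formulation are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
z = 0.002 * x - 0.001 * y + rng.normal(0, 0.005, 200)   # nearly flat, noisy surface

# Least-squares plane z = a*x + b*y + c (averages out local deviations).
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
ls_residuals = z - A @ [a, b, c]

# Minimum-zone plane: minimise the maximum residual h (Chebyshev criterion).
# Variables: [a, b, c, h]; constraints |z_i - a*x_i - b*y_i - c| <= h.
# The vertical-distance linearisation is acceptable only for small slopes.
c_obj = np.array([0.0, 0.0, 0.0, 1.0])
A_ub = np.vstack([np.column_stack([A, -np.ones_like(z)]),
                  np.column_stack([-A, -np.ones_like(z)])])
b_ub = np.concatenate([z, -z])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)

print("LS flatness (max-min residual):", ls_residuals.max() - ls_residuals.min())
print("Minimum-zone flatness (2*h):   ", 2 * res.x[3])

Because the minimum-zone plane minimises the largest residual, a single noisy point can enlarge the reported flatness, which is exactly the noise sensitivity discussed above.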

To obtain accurate inspection data for metrology, surface determination techniques (SDTs) must provide high-precision surface information. To address this issue, we propose a novel surface determination algorithm based on a third-order Taylor expansion of the grey value function. We validate the proposed method by comparing it with relevant algorithms presented in the recent literature and with those used in the commercial software VGStudio MAX [9]. The conditions and parameters chosen for the acquisition of the XCT projections may also influence the scan results. To better understand the impact of these parameters, we conduct scans under various conditions, considering voltage, current, part orientation and exposure time. The results of this comparison indicate comparable performance concerning size errors while showing notable improvements in addressing geometric errors.

2 Literature Review

In this section, we analyse SDTs from two perspectives: segmentation and surface extraction. Although different types of segmentation algorithms can be found in the literature, most of them fall into three main categories: threshold-based, boundary-based and region-based methods [10]. Each method offers distinct strategies to define structures within XCT volumes.

Threshold-based methods aim to distinguish different objects or areas by their grey levels. They are commonly classified into two types: global thresholding and local thresholding. Global thresholding applies a single threshold value to the entire volume, assuming that the grey values of objects and the background can be separated by a single threshold. For example, Otsu [11] solved an optimisation problem to identify the optimal threshold value that minimised the intraclass variance. In contrast, local thresholding adjusts the threshold value for each voxel based on its local neighbourhood, considering spatial information and grey value variations within smaller regions. For instance, Phansalkar [12] calculated a dynamic threshold for each voxel based on the mean and standard deviation of a local neighbourhood.
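The sketch below illustrates both variants on a synthetic volume: a global Otsu threshold via scikit-image and a simple local mean-plus-k-times-standard-deviation rule in the spirit of Phansalkar. The volume, window size and constant k are assumptions made for illustration only, not the exact formulation of [12].

# Hedged sketch of global (Otsu) vs. local thresholding on synthetic data.
import numpy as np
from skimage.filters import threshold_otsu
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
vol = rng.normal(0.2, 0.05, (64, 64, 64))           # background grey values
vol[16:48, 16:48, 16:48] += 0.6                     # brighter material region

# Global thresholding: one value for the whole volume.
t = threshold_otsu(vol)                             # minimises the intraclass variance
material_global = vol > t
print(f"Otsu threshold: {t:.3f}, material fraction: {material_global.mean():.3f}")

# Local thresholding in the spirit of Phansalkar: mean + k*std in a neighbourhood.
mean = uniform_filter(vol, size=9)
sq_mean = uniform_filter(vol**2, size=9)
std = np.sqrt(np.clip(sq_mean - mean**2, 0, None))
material_local = vol > (mean + 0.2 * std)           # k = 0.2 chosen arbitrarily here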

Meanwhile, region-based methods in segmentation involve the initialisation of one or multiple regions, which are then expanded based on a homogeneity criterion. These methods aim to group voxels into coherent regions that share similar characteristics. An example of a region-based method is region growing [13], which starts with seed regions that are set manually or automatically. The algorithm then iteratively grows these seed regions by incorporating neighbouring voxels that satisfy certain growing criteria. Another example of a region-based technique is Chan-Vese [14], a level set method that evolves a level set function based on the signed distance to the contour. A different example of this class of methods is the watershed transform [15]. In this method, regions are constructed by simulating a flooding process: the volume is considered a topographic relief, and water is poured into the basins. As the water level rises, the basins start to merge, and the flooding process continues until a complete segmentation is obtained. However, the results of some of these methods are highly dependent on the operator’s choices. Additionally, a common challenge encountered with these algorithms is oversegmentation. To address these issues, Yang et al. [16] recently proposed an SDT based on a marker-controlled watershed (MCW). This method is characterised by its automation, with parameter selection reduced to the size and shape of the structuring element required for the morphological operations.
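A minimal marker-controlled watershed can be sketched as follows; the quantile-based marker selection is an illustrative simplification and does not reproduce the morphological marker construction of Yang et al. [16].

# Hedged sketch of a marker-controlled watershed on a synthetic volume.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

rng = np.random.default_rng(2)
vol = rng.normal(0.2, 0.05, (64, 64, 64))
vol[20:44, 20:44, 20:44] += 0.6                     # material block

elevation = ndi.generic_gradient_magnitude(vol, ndi.sobel)   # relief to be flooded
markers = np.zeros(vol.shape, dtype=np.int32)
markers[vol < np.quantile(vol, 0.30)] = 1           # sure-background seeds
markers[vol > np.quantile(vol, 0.97)] = 2           # sure-material seeds

labels = watershed(elevation, markers)              # flood from the markers
print("material voxels:", np.count_nonzero(labels == 2))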

Boundary-based methods focus on detecting boundaries between components based on the relative differences in the grey values of the voxels within a neighbourhood. Unlike threshold-based methods, which rely solely on grey value thresholds, boundary-based methods account for local variations in grey values to identify boundaries. A widely used method in this category is the Canny edge detector [17]. It involves smoothing the volumetric image, computing the gradient magnitudes and directions, applying nonmaximum suppression to obtain thin edges and, finally, applying hysteresis thresholding to detect and connect the resulting edges. Segmentation with Canny and similar edge detection methods provides a preliminary identification of edges, thus offering an excellent starting point for refining these points using subvoxeling methods.
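The following sketch applies the Canny detector to a single synthetic reconstructed slice using scikit-image, whose implementation is two-dimensional; a volumetric variant, as required for XCT data, would follow the same stages. The slice, smoothing width and hysteresis thresholds are assumptions for illustration only.

# Hedged sketch: Canny edge detection on one synthetic slice (2D only).
import numpy as np
from skimage.feature import canny

rng = np.random.default_rng(3)
slice_img = rng.normal(0.2, 0.05, (128, 128))
slice_img[32:96, 32:96] += 0.6                       # bright square = material cross-section

edges = canny(slice_img, sigma=2.0,                  # Gaussian smoothing
              low_threshold=0.02, high_threshold=0.05)  # hysteresis thresholds
rows, cols = np.nonzero(edges)                       # pixel-centre edge coordinates
print("edge pixels found:", rows.size)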

Methods for 3D surface extraction in metrology using industrial XCT are quite limited. The most widely used algorithm to date is the Marching Cubes algorithm, introduced by Lorensen and Cline [18]. This method evaluates eight neighbouring voxels around each point, forming a small cube, to determine whether they lie within or outside the surface based on a predefined grey value threshold. For this reason, it naturally pairs well with threshold-based techniques such as Otsu or ISO50 [19]. In ideal conditions, such global surface extraction methods would yield the best surfaces for single-material components. However, because of the various factors influencing how XCT data are acquired, inaccurate surface models are often produced, inadvertently including portions of the material not related to the surface. Therefore, more advanced and localised methods are needed for precise surface extraction [10].
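A minimal Marching Cubes extraction at a global threshold (here an ISO50-style midpoint between assumed background and material grey levels) can be sketched as follows; the synthetic volume and grey levels are assumptions for illustration.

# Hedged sketch: triangulated surface extraction with Marching Cubes.
import numpy as np
from skimage.measure import marching_cubes

rng = np.random.default_rng(4)
vol = rng.normal(0.2, 0.02, (64, 64, 64))
vol[16:48, 16:48, 16:48] += 0.6

iso50 = 0.5 * (0.2 + 0.8)                            # midpoint between the two grey-level peaks
verts, faces, normals, values = marching_cubes(vol, level=iso50)
print(f"{len(verts)} surface points, {len(faces)} triangles")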

The most recent and relevant contribution in this field is the Gravity Center algorithm proposed by Yagüe-Fabra et al. [20]. This method starts from the central points of the edge voxels identified by the Canny segmentation algorithm. It then refines these points by determining the gravity centre of the grey values within a one-dimensional window around the initial centres.
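A one-dimensional illustration of the gravity-centre idea is sketched below: the refined edge position is the centroid of a window around the Canny voxel. Weighting the centroid by the local grey-value variation and the 5-sample window are choices made here for illustration and may differ from the exact formulation of [20].

# Minimal 1D illustration of a gravity-centre (centroid) subvoxel refinement.
import numpy as np

profile = np.array([0.18, 0.20, 0.22, 0.35, 0.62, 0.78, 0.80, 0.81])  # grey values along g
i0 = 4                                      # index of the Canny edge voxel centre
window = slice(i0 - 2, i0 + 3)              # 1D window of 5 samples around it

grad = np.abs(np.gradient(profile))         # local grey-value variation as weight
idx = np.arange(len(profile))
subvoxel_pos = np.sum(idx[window] * grad[window]) / np.sum(grad[window])
print(f"refined edge position: {subvoxel_pos:.2f} (voxel index units)")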

Another noteworthy option is the Steinbeiss algorithm [21], which forms the foundational logic of the ‘Advanced’ mode of the surface determination module in the commercial software VGStudio MAX. The concept behind this algorithm involves starting with an initial surface model of the specimen and calculating gradients solely along the direction of the surface normal at each point on the surface model. Each point’s position is then refined to correspond with the location of the maximum gradient magnitude within the grey value profile.

3 Method

Subvoxeling techniques allow for a precise estimation of an object’s surface position with subvoxel accuracy, surpassing the limitations of discrete voxel grid coordinates. We propose an SDT that looks for the maximum gradient of the grey values. It starts from the central points of the voxels identified by Canny edge detection, a technique that segments a volume essentially by finding the maximum gradient using a multistage algorithm comprising Gaussian smoothing, gradient computation, nonmaximum suppression and edge tracking by hysteresis. Subsequently, it applies a subvoxeling technique based on the Taylor expansion, up to the third order, of the function describing the grey values of the volume. This choice was driven by Canny’s ability to directly identify the voxels belonging to the contour, i.e. those containing the maximum gradient. Moreover, because of the presence of noise, the Taylor approximation of the grey value function is more accurate the closer the starting point is to the location of the maximum gradient. For this reason, Canny is the best candidate segmentation method for applying the proposed subvoxeling technique.

Consider the function \(f\left( \textbf{p}\right) \) defining the grey value at the generic location \( \textbf{p}\) and the related gradient direction \(\textbf{g}\left( \textbf{p}\right) \). Define the function \(f_g\left( t\right) = f\left( \textbf{p}+ t\textbf{g} \right) \) representing the grey value at the location \(\textbf{p}+t\textbf{g}\), where \(t\) is a parameter determining a translation from the initial point \(\textbf{p}\) along the direction \(\textbf{g}\). We then want to determine the value \(t_i^*\) that translates the ith point identified by Canny, \(\textbf{p}_i\), to the location characterised by the maximum norm of the gradient along the direction \(\textbf{g}_i = \textbf{g}\left( \textbf{p}_i\right) \). The function \(f_g\left( t\right) \) can be approximated by its third-order Taylor expansion:

$$\begin{aligned} f_g\left( t\right) \approx f_g\left( 0\right) + \frac{\partial f_g\left( 0\right) }{\partial t} \cdot t +\frac{1}{2} \frac{\partial ^2 f_g\left( 0\right) }{\partial t^2} \cdot t^2 + \frac{1}{6} \frac{\partial ^3 f_g\left( 0\right) }{\partial t^3} \cdot t^3 \end{aligned}$$
(1)

The edge can be located where the norm of the gradient of f (i.e. the directional derivative in the gradient direction) is maximum. This position can be approximated by calculating the value of \(t\) for which the second derivative of \(f_g\left( t\right) \) (i.e. the first derivative of the norm of the gradient) is equal to zero. Differentiating \(f_g\left( t\right) \) twice, as approximated in Eq. (1), we obtain

$$\begin{aligned} \frac{\partial ^2 f_g\left( t\right) }{\partial t^2} \approx \frac{\partial ^2 f_g\left( 0\right) }{\partial t^2} + \frac{\partial ^3 f_g\left( 0\right) }{\partial t^3} \cdot t \end{aligned}$$
(2)

Then, setting Eq. (2) equal to 0 to identify the maximum yields

$$\begin{aligned} t^* = - \frac{\frac{\partial ^2 f_g\left( 0\right) }{\partial t^2}}{\frac{\partial ^3 f_g\left( 0\right) }{\partial t^3}} \end{aligned}$$
(3)

The physical meaning of \(t^*\) can be explained as the displacement of the original point in the gradient direction. Using Eq. (3), we can determine the value of \(t_i^*\) for each point \(\mathbf {p_\textit{i}}\).
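As an illustrative sanity check (not part of the original derivation), consider a blurred step edge whose grey profile along the gradient direction is \(f_g\left( t\right) = \Phi \left( \left( t-t_0\right) /\sigma \right) \), where \(\Phi \) and \(\varphi \) denote the standard normal cumulative distribution and density, \(\sigma \) the blur width and \(t_0\) the offset of the true edge from the Canny point. Then

$$\begin{aligned} \frac{\partial ^2 f_g\left( 0\right) }{\partial t^2} = \frac{t_0}{\sigma ^3}\,\varphi \!\left( \frac{t_0}{\sigma }\right) , \qquad \frac{\partial ^3 f_g\left( 0\right) }{\partial t^3} = \frac{1}{\sigma ^3}\left( \frac{t_0^2}{\sigma ^2}-1\right) \varphi \!\left( \frac{t_0}{\sigma }\right) , \qquad t^* = \frac{t_0}{1-t_0^2/\sigma ^2} \approx t_0 \end{aligned}$$

so that, for \(\left| t_0\right| \ll \sigma \), the displaced point \(\textbf{p}+t^*\textbf{g}\) lands approximately on the true edge.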

The second and third derivatives of \(f_g\left( t\right) \) can be computed as linear combinations of partial derivatives of \(f\left( \mathbf {p_\textit{i}} \right) \). In fact, remembering that \(f_g\left( t\right) = f\left( \mathbf {p_i} + t \cdot \mathbf {g_i} \right) \), it follows that the first, second and third derivatives of \(f_g\left( t\right) \) are the first, second and third directional derivatives of f in the direction \(\textbf{g}_i\).

Consider the gradient of \(f\) and the corresponding normalised gradient direction:

$$\begin{aligned} \nabla f&= \left[ \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right] \end{aligned}$$
(4)
$$\begin{aligned} \textbf{g}&= \frac{\nabla f}{\Vert \nabla f\Vert } \end{aligned}$$
(5)

The first-order derivative in the gradient direction is

$$\begin{aligned} \frac{\partial f_g\left( t\right) }{\partial t} = D_{\textbf{g}} f = \textbf{g} \cdot \nabla f = \Vert \nabla f\Vert \end{aligned}$$
(6)

Similarly, we can calculate the second derivative

$$\begin{aligned} \begin{aligned} \frac{\partial ^2 f_g\left( t\right) }{\partial t^2}&= \textbf{g} \cdot \nabla \left( \Vert \nabla f\Vert \right) = \frac{1}{\Vert \nabla f\Vert ^2} \\&\quad \cdot \left[ \left( \frac{\partial f}{\partial x}\right) ^2 \frac{\partial ^2 f}{\partial x^2} + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \frac{\partial ^2 f}{\partial x \partial y} + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \frac{\partial ^2 f}{\partial x \partial z} \right. \\&\quad \left. + \left( \frac{\partial f}{\partial y}\right) ^2 \frac{\partial ^2 f}{\partial y^2} + 2 \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \frac{\partial ^2 f}{\partial y \partial z} + \left( \frac{\partial f}{\partial z}\right) ^2 \frac{\partial ^2 f}{\partial z^2}\right] \end{aligned} \end{aligned}$$
(7)

and the third:

$$\begin{aligned} \begin{aligned} \frac{\partial ^3 f_g\left( t\right) }{\partial t^3}&= \textbf{g} \cdot \nabla \left( \textbf{g} \cdot \nabla \left( \Vert \nabla f\Vert \right) \right) \\&= \frac{1}{\Vert \nabla f\Vert ^3} \cdot \left[ \left( \frac{\partial f}{\partial x}\right) ^3 \frac{\partial ^3 f}{\partial x^3} + 3\left( \frac{\partial f}{\partial x} \right) ^2 \frac{\partial f}{\partial y} \frac{\partial ^3 f}{\partial x^2 \partial y} + 3\left( \frac{\partial f}{\partial x} \right) ^2 \frac{\partial f}{\partial z} \frac{\partial ^3 f}{\partial x^2 \partial z} \right. \\ &{}\quad + 3 \frac{\partial f}{\partial x} \left( \frac{\partial f}{\partial y}\right) ^2 \frac{\partial ^3 f}{\partial x \partial y^2} + 6 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \frac{\partial ^3 f}{\partial x \partial y \partial z} + 3 \frac{\partial f}{\partial x} \left( \frac{\partial f}{\partial z}\right) ^2 \frac{\partial ^3 f}{\partial x \partial z^2} \\ &{}\quad + \left( \frac{\partial f}{\partial y}\right) ^3 \frac{\partial ^3 f}{\partial y^3} + 3\left( \frac{\partial f}{\partial y}\right) ^2 \frac{\partial f}{\partial z} \frac{\partial ^3 f}{\partial y^2 \partial z} + 3 \frac{\partial f}{\partial y} \left( \frac{\partial f}{\partial z}\right) ^2 \frac{\partial ^3 f}{\partial y \partial z^2} \\ &{}\quad \left.+ \left( \frac{\partial f}{\partial z}\right) ^3 \frac{\partial ^3 f}{\partial z^3} \right] \end{aligned} \end{aligned}$$
(8)

All the partial derivatives can be numerically estimated from the XCT images at the required locations. Finally, the ith measured point \(\mathbf {p_\textit{i}}^*\) on the surface of the part is identified as

$$\begin{aligned} \mathbf {p_\textit{i}}^* = \mathbf {p_\textit{i}} + t_i^* \cdot \mathbf {g_\textit{i}} \end{aligned}$$
(9)

In this way, the points are corrected using the parameter \(t_i^*\) along the gradient direction. A flow diagram of the software implementation is presented in Fig. 1.

Fig. 1

Flow diagram of the implementation of the surface determination method

We started from the points identified by Canny. Subsequently, we performed Gaussian smoothing on the original volume, followed by computing gradient matrices for the partial derivatives along the three directions (x, y and z) using the Sobel operator. This preprocessing step for the Taylor approximation ensured consistency with the gradient matrices used by Canny to identify the initial points. In the same way, we computed the second and third derivatives for a total of 19 sets of partial derivatives and calculated the new coordinates with subvoxel accuracy for each direction. Notably, the sequential application of the Sobel operator involved a neighbourhood of the considered voxel in the computation. This generally ensured the stability of the fitting and the effectiveness of the approximation, at least within a few voxels of the considered one.
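A condensed sketch of this pipeline is given below. It follows Eqs. (3)–(9) but is not the MATLAB implementation described above: Gaussian-derivative filters and central differences replace the repeated Sobel operators, the nested directional derivatives only approximate the closed-form expansions of Eqs. (7) and (8), the clipping bound is an arbitrary safeguard, and the Canny edge-voxel indices (edge_idx) are assumed to be already available.

# Hedged sketch of the Taylor-expansion subvoxel refinement (cf. Eqs. (3)-(9)).
import numpy as np
from scipy import ndimage as ndi

def refine_edge_points(vol, edge_idx, sigma=1.5):
    """vol: 3D grey-value array; edge_idx: (N, 3) integer voxel indices."""
    eps = 1e-12
    # First derivatives of the Gaussian-smoothed volume (voxel-unit spacing).
    d1 = [ndi.gaussian_filter(vol, sigma, order=tuple(int(a == k) for k in range(3)))
          for a in range(3)]
    norm = np.sqrt(d1[0]**2 + d1[1]**2 + d1[2]**2) + eps
    g = [d / norm for d in d1]                       # unit gradient direction field

    def dir_deriv(field):
        # Directional derivative g . grad(field), using central differences.
        grads = np.gradient(field)
        return g[0]*grads[0] + g[1]*grads[1] + g[2]*grads[2]

    # Second and third derivatives of f_g(t) along the gradient direction.
    d2 = dir_deriv(norm)
    d3 = dir_deriv(d2)

    i, j, k = edge_idx.T
    t_star = -d2[i, j, k] / (d3[i, j, k] + eps)      # Eq. (3)
    t_star = np.clip(t_star, -2.0, 2.0)              # keep the correction local
    shift = t_star[:, None] * np.stack([g[0][i, j, k],
                                        g[1][i, j, k],
                                        g[2][i, j, k]], axis=1)
    return edge_idx.astype(float) + shift            # Eq. (9): refined surface points

The clip on \(t_i^*\) simply reflects the fact that the corrections are expected to remain within a fraction of a voxel, as discussed with reference to Fig. 2.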

Figure 2 provides a visual representation of the subvoxel refinement technique applied to the Canny segmentation method. The red points represent the centroids of the edge voxels identified by Canny, whereas the green points (connected by lines) represent the new points generated after displacement (blue line) through the subvoxel refinement technique. The red points exhibit a saw-tooth configuration, which is characteristic of the use of voxel centroids for edge detection. This configuration can introduce irregularities in the contour. In contrast, the green points, which have been moved along the gradient direction using the subvoxel refinement technique, form a more continuous contour. In addition, Fig. 2 shows that the computed \( t_i^*\) values are always small, ensuring that the Taylor approximation of the real grey values remains accurate at such small distances. Finally, Fig. 3 shows the grey value and directional derivative curves, confirming that the Taylor polynomial effectively approximates the grey values locally.

Fig. 2

Subvoxel refinement compared with the initially identified points by Canny

Fig. 3

Identification of the maximum. The blue cross marks the original point. The green circle marks the optimal point

Fig. 4

Drawing of the sample

4 Materials and Validation

The validation was conducted using a multistepped aluminium sample, as depicted in Fig. 4. The measurements focused on the top five cylinders of the sample, with each cylinder including one size tolerance (diameter) and two form tolerances (cylindricity and flatness). The sample underwent calibration using a ZEISS Prismo VAST HTG coordinate measuring machine (CMM). The diameter, cylindricity and flatness of each element of the sample were calibrated (Table 1).

Table 1 Calibrated results from CMM at a reference temperature of \(20^{\circ }\)C ± \(0.5^{\circ }\)C

Sixteen XCT scans were conducted using a Phoenix V|tome|x M300 XCT scanner (Waygate Technologies, owned by MADE s.c.a r.l.). During the experiment, a 1-mm-thick tin physical filter was used. Two X-ray tube voltage levels were employed: 150 and 250 kV. Additionally, variations in part orientation (PO) and exposure time (ET) were introduced. For orientation, two inclinations of the part axis, \(0^{\circ }\) and \(45^{\circ }\), were considered. The ET was varied between 50 and 100 ms. The current value was carefully chosen to optimise the image quality based on the selected voltage and ET combination. Each experiment was repeated twice. The total number of projections acquired was 2356, with a voxel size of 32.29 µm. The number of projections was chosen to fulfil Shannon’s sampling theorem [22]. An example of the scanned volume is shown in Fig. 5.
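For reference, the rule of thumb commonly derived from Shannon’s theorem requires the number of projections \(N_\mathrm {p}\) to satisfy \(N_\mathrm {p} \ge \frac{\pi }{2} N_\mathrm {d}\), where \(N_\mathrm {d}\) is the number of detector columns covered by the sample; assuming, purely for illustration, that the sample spans roughly 1500 detector columns, this gives \(N_\mathrm {p} \ge \frac{\pi }{2}\cdot 1500 \approx 2356\), consistent with the number of projections used.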

Fig. 5

Example of an XCT image of the aluminium sample

To verify the effectiveness of the proposed method, hereinafter referred to as Canny_tay, we compared it with seven relevant SDTs from the recent literature and those used in the widely adopted commercial software VGStudio MAX (Volume Graphics GmbH) [9], as detailed in Table 2. All the SDTs, except VGStudio MAX, were implemented in MATLAB 2022b (The MathWorks, Inc.). As shown in the table, Otsu [11], Phansalkar (PH) [12] and Chan-Vese (CV) [14] all utilised the Marching Cubes algorithm [18] for surface extraction. Canny_gc [20] and MCW_gc [16] utilised the segmentation and surface extraction methods described in the literature. Additionally, we included the boundary voxels provided by the original Canny method (Canny) [17], using their central points, to analyse the refinement performance of Canny_gc and Canny_tay. In the case of the VGStudio MAX software, XCT data segmentation was performed using its own SDTs. Specifically, the ‘Advanced (classic)’ method in ‘Expert mode’ was utilised: starting from the initial ISO50 contour, a Steinbeiss-like algorithm was applied. We selected the options ‘Use starting contour for gradient’ and ‘Iterative surface determination’. The initial threshold was automatically selected using the default setting of VGStudio MAX.

Table 2 SDTs involved for comparison

After extracting the surface points, the point cloud was aligned to the nominal geometry, partitioned according to the target measurands and fitted to produce the final measurement results. Least squares fitting was used to measure the diameter, and Chebyshev fitting was applied to determine the cylindricity and flatness. Ultimately, the performance of the algorithms was evaluated using the measurement error, defined as the difference between the measurement result and the calibrated value. Positive values indicated an overestimation of the measurand, whereas negative values indicated an underestimation. Overall, \(5\cdot 2\cdot 2\cdot 2\cdot 8\cdot 2=640\) experimental results (5 cylinders × 2 voltages × 2 orientations × 2 exposure times × 8 SDTs × 2 repetitions) were analysed for cylindricity, flatness and diameter.

5 Results

Analysis of variance (ANOVA) was applied to verify the statistical significance of the factors. The final results are presented in Figs. 6, 7 and 8, which show the main effects on the measurement errors of the diameter, cylindricity and flatness with respect to the calibrated values. As each plot indicates the average error from the calibrated value induced by a factor of the experiment, values closer to zero, whether positive or negative, denote averages in better agreement with the calibrated values and, in general, better performance.
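For illustration, a main-effects ANOVA of this kind can in principle be reproduced with any standard statistics package; the sketch below uses statsmodels on a hypothetical results table whose file name and column names are assumptions, not the actual analysis environment used in this work.

# Hedged sketch: main-effects ANOVA of the diameter error (hypothetical data layout).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("results.csv")       # assumed columns: diam_error_um, sdt, kv, orientation, et_ms
model = smf.ols("diam_error_um ~ C(sdt) + C(kv) + C(orientation) + C(et_ms)", data=df).fit()
print(anova_lm(model, typ=2))         # significance of each factor
print("adjusted R-squared:", model.rsquared_adj)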

Let us examine the factor of main interest (i.e. the SDTs) in more detail, starting with the diameter measurement error. Because PH_mc was significantly worse than the others, with an average error of \(-61.2\) µm, and its divergence distorted the comparison and grouping of the methods, we decided to analyse the diameter excluding this method. Based on an initial analysis of the main effects plot, all methods evidently showed similar results and consistently underestimated the diameter. Furthermore, the factors of voltage, orientation and ET did not seem to have a significant impact, although orientation appeared to be marginally more influential than the others.

The Tukey criterion was applied to verify the statistical significance of the differences among the various SDTs (Table 3). The criterion identified three primary method groups. Groups A and B had three out of four methods in common, including Canny_tay, the literature subvoxeling technique and the industrial standard VGStudio MAX. Meanwhile, Group C exhibited slight deviations from the first two groups, but overall, there was no clear distinction among the methods analysed based on the measurement of the diameter, as the measurement deviation ranged from −37.7 to −34.6 µm. This consistency of the results could be attributed to the large number of sampling points averaged in the definition of the least squares diameter.
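Similarly, the Tukey grouping can be illustrated with a standard honestly-significant-difference routine; again, the data layout below is hypothetical and given only to show the procedure.

# Hedged sketch: Tukey HSD comparison of the SDTs on the diameter error.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("results.csv")
df = df[df["sdt"] != "PH_mc"]                       # excluded as explained above
tukey = pairwise_tukeyhsd(endog=df["diam_error_um"],
                          groups=df["sdt"], alpha=0.05)   # 95% confidence, as in Table 3
print(tukey.summary())                              # pairwise differences and rejections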

Fig. 6

Main effects of the measurement error of diameter

Table 3 Grouping information of diameter using the Tukey method and 95% confidence

Regarding the measurement error in cylindricity deviation, the main effects plot revealed greater deviations among the methods compared with those of the diameter. The choice of method appeared to have a significant influence on this feature, with the proposed method, along with Otsu_mc and VGStudio MAX, outperforming the others. Meanwhile, none of the other factors appeared to be particularly influential in this case. The Tukey criterion identified six distinct groups (Table 4). Group F included only the Canny_tay method, with an average measurement error of 14.0 µm. The second group comprised the VGStudio MAX method and Otsu_mc, exhibiting average measurement errors of 20.2 and 21.0 µm, respectively. The remaining groups comprised the other methods (CV_mc, PH_mc, MCW_gc, Canny and Canny_gc), and each group demonstrated a significant difference from the others. In the cylindricity comparison, Canny_tay emerged as the most suitable method.

Fig. 7

Main effects of the measurement error of cylindricity

Table 4 Grouping information of cylindricity using the Tukey method and 95% confidence

The flatness results exhibited significant deviation from the calibrated measurements. We noticed that this behaviour was due to the \(0^{\circ }\) orientation scans. This phenomenon could be attributed to the fact that, in this position, the surfaces of the various planes were parallel to the optical axis during the scan, leading to a poorly defined transition from the background to the material. This outcome was difficult to interpret, and the goodness of fit of the ANOVA model (adjusted R squared) was low, at approximately 47.8%. For this reason, we opted to split the analysis and consider only the flatness deviations at the \(45^{\circ }\) orientation for the comparison.

Examining the main effects depicted in Fig. 8, the results for flatness closely resembled those obtained for cylindricity. Canny_tay, Otsu_mc and VGStudio MAX demonstrated better results, with Canny_tay outperforming the others by more than 6 µm. Furthermore, in this final case, the other factors appeared to have minimal significance. The Tukey criterion identified five groups (Table 5). The first, once again, included only Canny_tay, with an average measurement error of 16.7 µm. The second group included, as for the cylindricity, VGStudio MAX and Otsu_mc, with average measurement errors of 22.8 and 24.4 µm, respectively. The remaining methods were categorised into three different groups, where Canny and Canny_gc were found to be not statistically different from each other; similarly, PH_mc and CV_mc were not statistically different. Overall, the difference among the methods was clear, with none being shared across multiple groups. In the flatness comparison as well, Canny_tay emerged as the most suitable method.

Fig. 8

Main effects of the measurement error of flatness for a \(45^{\circ }\) orientation

Table 5 Grouping information of flatness using the Tukey method and 95% confidence

6 Conclusions

XCT is arguably the single inspection technology that can provide the most comprehensive description of parts. However, handling the resulting images coherently with conventional segmentation methods is challenging.

Although different measurement scales posed various challenges in obtaining reliable results [23], the proposed analysis focused solely on reconstructed images and demonstrated that the results were mostly independent of scan parameters. Therefore, the methods and considerations remained consistent across systems of different scales. In addition, this application may have the potential to benefit other 3D tomography techniques that are currently constrained by conventional segmentation algorithms, such as those using focused ion beams combined with scanning electron microscopy [24].

As XCT spreads across several fields, such as automotive, medical, aerospace and the emerging additive manufacturing sector, the assessment of diverse tolerances using XCT has become a pivotal area of study. In response to the need for precise metrological data, a Taylor expansion-based SDT has been proposed. This technique has been developed to allow for the precise quantification of both dimensional characteristics and geometric tolerances, ensuring high accuracy in the measurement results. The preliminary experimental results indicated that Canny_tay surpassed the seven other SDTs referenced in the literature or offered by the commercial software VGStudio MAX. Notably, it showed increased accuracy in assessing geometric tolerances, such as cylindricity and flatness, while consistently maintaining comparable accuracy in size measurements, such as diameters.

A limitation of this study is that it involved a relatively simple sample with basic geometry. Future research can expand upon this by analysing samples with diverse geometries and incorporating a broader range of dimensional and geometric measurands.