Abstract
Magnetorelaxometry imaging is a highly sensitive technique enabling noninvasive, quantitative detection of magnetic nanoparticles. Electromagnetic coils are sequentially energized, aligning the nanoparticles’ magnetic moments. Relaxation signals are recorded after turning off the coils. The forward model describing this measurement process is reformulated into a severely ill-posed inverse problem that is solved to estimate the particle distribution. Typically, many activation sequences employing different magnetic fields are required to obtain reasonable imaging quality. We seek to improve the imaging quality and accelerate the imaging process using fewer activation sequences by optimizing the applied magnetic fields. Minimizing the Frobenius condition number of the system matrix, we stabilize the inverse problem solution against model uncertainties and measurement noise. Furthermore, our sensitivity-weighted reconstruction algorithms improve imaging quality in lowly sensitive areas. The optimization approach is applied to real measurement data and yields improved reconstructions with fewer activation sequences compared to non-optimized measurements.
1 Introduction
Magnetic nanoparticles (MNPs) are a promising tool for a large number of biomedical applications [21]. One of the most prominent fields employing MNPs is cancer therapy. Specifically, they can be used as agents for magnetic drug targeting [1] or for hyperthermia therapy [26]. However, for most of these applications a quantitative knowledge of the MNP distribution inside the body is mandatory to guarantee their safety and efficacy. Magnetorelaxometry imaging (MRXI) is a highly sensitive imaging modality, which is able to provide qualitative as well as quantitative information about the spatial distribution of MNPs in large volumes noninvasively. A typical MRXI setup consists of several electromagnetic excitation coils surrounding the region of interest (ROI) containing the MNPs and a set of highly sensitive sensors measuring the magnetic flux densities of the MNPs, such as fluxgates [20], superconducting quantum interference devices (SQUIDs) [28] or, more recently, optically pumped magnetometers [2].
A single activation sequence of an MRXI measurement starts with the application of a magnetic excitation field, generated for a time period \(t_\mathrm {mag}\) by one or several energized coils of the electromagnetic coil arrangement, which aligns the particle moments inside the ROI with the local direction of the applied magnetic field. Then, the excitation field is turned off, and the reorientation of the magnetic moments of the MNPs due to thermal agitation is detected by the sensors as a decay of magnetization over time (i.e., the relaxation process). Every coil magnetizes proximal subregions of the ROI much more strongly than distant ones. In order to obtain a balanced sensitivity over the entire ROI, a complete MRXI measurement therefore typically consists of several activation sequences from different coil positions, employing varying excitation field configurations to enhance the reconstruction of the MNP distribution.
As the magnetic induction in every sensor is affected by contributions from the entire ROI, the imaging process requires finding a reasonable solution to an ill-posed inverse problem. An accurate forward model enabling the simulation of the measurement process is essential for a proper restoration of the underlying MNP distribution. This forward model is formulated as a system of linear equations linking the spatial MNP distribution to the measured data via the system matrix which embodies the geometrical, physical and electrical interrelationships of the MRXI system and the MNPs. The solution of the inverse problem is non-unique and thus requires a proper regularization in order to determine reasonable reconstructions of the underlying MNP distributions. There are a number of different regularization techniques available that are used to find robust solutions to these types of inverse problems [6, 12, 13].
Naturally, the quality and accuracy of these reconstructions depend heavily on the mathematical properties of the system matrix. It is possible to affect the system matrix structure to some degree by adapting the coil currents in each activation sequence. Several attempts to optimize those coil currents to influence matrix properties, like the singular value distribution [4], the theoretical information content [8] or the spatial sensitivity [3, 9], have already been performed. These approaches focus either on random excitation strategies [4], the determination of consecutive activation sequences based on (sub-)volume sensitization [3, 9] or on the optimization of sequential activation patterns where only a single coil per sequence is energized [8]. However, none of them optimize coil current patterns for a given number of activation sequences simultaneously while multiple or all coils are active.
It is well known that the spectral condition number \(\kappa \) of a system matrix \(\mathbf {A} \in \mathbb {R}^{m\times n}\) is a measure of how strongly perturbations in the measured data or in the model equations are amplified during matrix inversion. \(\kappa \) is calculated by
$$\kappa (\mathbf {A}) = \left\| \mathbf {A}\right\| _2 \left\| \mathbf {A}^{-1}\right\| _2 = \frac{\sigma _\mathrm {max}(\mathbf {A})}{\sigma _\mathrm {min}(\mathbf {A})} \quad (1)$$
where \(\left\| \cdot \right\| _2 = \sigma _\mathrm {max}(\cdot )\) is the spectral matrix norm, and \(\sigma _\mathrm {max}(\mathbf {A})\) and \(\sigma _\mathrm {min}(\mathbf {A})\) denote the maximum and minimum singular values of \(\mathbf {A}\). Some of the aforementioned works employed the spectral condition number (or the related distribution of singular values) as an indicator of how well their coil current patterns improved the system matrix. However, a coil current optimization with respect to the minimization of the matrix condition has not been performed. On the other hand, Van Durme et al. investigated the optimization of geometrical design parameters for an MNP tomographic imaging setup based on the minimization of the spectral condition number [27]. The spectral condition number itself is neither differentiable nor continuous; it is therefore not suitable for first-order optimization methods and difficult to optimize directly. In this paper, however, we propose an indirect approach for an optimization of the spectral condition number \(\kappa \) based on the minimization of the related, differentiable and continuous Frobenius condition number \(\kappa _\mathrm {F}\)
$$\kappa _\mathrm {F}(\mathbf {A}) = \left\| \mathbf {A}\right\| _\mathrm {F}\left\| \mathbf {A}^{-1}\right\| _\mathrm {F} \quad (2)$$
with \( \left\| \mathbf {A}\right\| _\mathrm {F}= \sqrt{\sum _{i=1}^{m} \sum _{j=1}^{n} a_{i,j}^2} \) denoting the Frobenius norm of \(\mathbf {A}\) where \(a_{i,j}\) are the individual matrix elements.
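As a quick numerical illustration of the two condition numbers, the following sketch computes \(\kappa \) and \(\kappa _\mathrm {F}\) for a random full-rank matrix with NumPy; the matrix size and random seed are illustrative choices, not values from the paper.

```python
import numpy as np

# Compare the spectral condition number (kappa) with the Frobenius condition
# number (kappa_F) for a random full-rank matrix (illustrative size and seed).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma.max() / sigma.min()                          # spectral condition number
kappa_F = (np.linalg.norm(A, 'fro')
           * np.linalg.norm(np.linalg.pinv(A), 'fro'))     # Frobenius condition number
```

For any full-rank matrix with \(n\) singular values, \(\kappa \le \kappa _\mathrm {F}\le n\,\kappa \) holds, which is the relation exploited for the surrogate optimization in Sect. 2.3.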
We aim to recover the most accurate reconstruction of the underlying MNP distribution with as few activation sequences as possible in order to reduce the measurement time. In this paper, we present an optimization algorithm that determines the coil current patterns for a given number of activation sequences, producing system matrices with minimized spectral condition numbers that yield more accurate reconstructions of MNP distributions from fewer activation sequences than non-optimized measurements. Furthermore, we employ sensitivity-weighted reconstruction algorithms to enhance contrast and reconstruction accuracy in regions with low spatial sensitivity. We perform and evaluate the coil current optimization on superimposed sensor data from real sequential MRXI activation sequences as measurements instead of simulated signals. The algorithm presented here is not constrained to MRXI, but is easily adaptable to other linear inverse problems to minimize their spectral condition numbers and stabilize their inversion. Specifically, the optimization approach may be useful for other (medical) imaging modalities as well, as many of them seek a stable solution to an underlying linear inverse problem.
2 Methods
2.1 MRXI Forward Model
In the present study, we aim to reconstruct the distribution of MNPs in a designated 3D ROI using MRXI. The ROI is tessellated into \(N_\mathrm {v}\) cubic volume elements (voxels), allowing a discrete formulation of the MNP relaxation signals throughout the entire ROI. The signal sources are modeled as point dipoles in the centers of the voxels. The magnetic flux density \(B_{s}(t)\) measured in sensor \({s}\) can be modeled as the superposition of the contributions of \(N_\mathrm {v}\) point dipoles, located at the voxel centers \(\mathbf {r}_{v}\in \mathbb {R}^3\) and each containing an amount of MNPs \(x_{v}\), at the respective sensor center \(\mathbf {r}_{s}\in \mathbb {R}^3\) such that
$$B_{s}(t) = \frac{\mu _0}{4\pi }\,\chi \,\xi (t) \sum _{{v}=1}^{N_\mathrm {v}} x_{v}\,\mathbf {n}_{s}^\mathrm{T} \left( \frac{3\,\mathbf {r}_{{s},{v}}\left( \mathbf {r}_{{s},{v}}^\mathrm{T}\,\mathbf {H}_{v}\right) }{\left\| \mathbf {r}_{{s},{v}}\right\| ^5} - \frac{\mathbf {H}_{v}}{\left\| \mathbf {r}_{{s},{v}}\right\| ^3} \right) \quad (3)$$
where \(\mu _0=4\pi \cdot 10^{-7}\ \mathrm {H}/\mathrm {m}\) is the permeability of free space, \(\mathbf {n}_{s}\in \mathbb {R}^3\) is the orientation of sensor \({s}\) in unit length, \(\mathbf {r}_{{s},{v}} = \mathbf {r}_{s}-\mathbf {r}_{v}\) is the distance vector from voxel \({v}\) to sensor \({s}\) and \(\left\| \cdot \right\| \) is the Euclidean vector norm. \(\chi \) denotes the magnetic susceptibility of the particles and depends on the magnetization time. The relaxation function \(\xi (t)\), which determines the decay of the net magnetization, is typically modeled either as a single exponential decay or as a sum of exponential decays and takes values between zero and one. The magnetic field \(\mathbf {H}_{{c},{v}} \in \mathbb {R}^3\) of coil \({c}\) in voxel \({v}\) is calculated numerically according to the Biot–Savart law, where the coil geometry is approximated with short linear filaments, by [15, 18]
$$\mathbf {H}_{{c},{v}} = \frac{I_{c}}{4\pi } \sum _{i} \frac{\left( \left\| \mathbf {f}_{1,i}\right\| + \left\| \mathbf {f}_{2,i}\right\| \right) \left( \mathbf {f}_{1,i}\times \mathbf {f}_{2,i}\right) }{\left\| \mathbf {f}_{1,i}\right\| \left\| \mathbf {f}_{2,i}\right\| \left( \left\| \mathbf {f}_{1,i}\right\| \left\| \mathbf {f}_{2,i}\right\| + \mathbf {f}_{1,i}^\mathrm{T}\mathbf {f}_{2,i}\right) } \quad (4)$$
with \(\mathbf {f}_{1,i}\) and \(\mathbf {f}_{2,i}\) denoting the distance vectors from the \({v}\)th voxel center \(\mathbf {r}_{v}\) to the beginning and end points of the \(i\)th line segment, respectively. \(I_{c}\) represents the current that is carried by coil \({c}\). The complex geometry of the spiral excitation coils employed here was subdivided into 1920 line filaments for the numerical simulations. The contributions of all \(N_\mathrm {c}\) coils are added to obtain the resulting magnetic field \(\mathbf {H}_{v}= \sum _{{c}=1}^{N_\mathrm {c}} \mathbf {H}_{{c},{v}}\) in voxel \({v}\).
The known geometrical, physical and electrical quantities in (3) can be separated from the unknown particle amount \(x_{v}\) and summarized in \(\mathscr {L}_{{s},{v}}(t)\) such that \(B_{s}(t) = \sum _{{v}=1}^{N_\mathrm {v}} \mathscr {L}_{{s},{v}}(t) x_{v}\). Note that \(\xi (t)\) is the only time-dependent parameter in \(B_{s}(t)\). Since the particle distribution defining the relaxation function \(\xi (t)\) is immobilized during a measurement (see Sect. 2.2), the qualitative relaxation behavior does not change over time. Thus, we can make a transition from the dynamic system to a stationary system of linear equations for fixed time instants. Additionally, the arbitrary constant offset that cannot be avoided in SQUID measurements can be compensated using the relaxation amplitude \(b_{s}= B_{s}(t_1) - B_{s}(t_2)\) between the two fixed time points \(t_1\) and \(t_2\) (\(t_1<t_2\)). Hence, the stationary linear relationship is expressed by \(b_{s}= \sum _{{v}=1}^{N_\mathrm {v}} L_{{s},{v}} x_{v}\) with \(L_{{s},{v}} = \mathscr {L}_{{s},{v}}(t_1)-\mathscr {L}_{{s},{v}}(t_2)\) [18]. By defining \(\mathbf {x} = \left[ \begin{array}{cccc} x_1&x_2&\ldots&x_{N_\mathrm {v}} \end{array}\right] ^\mathrm{T}\) as the data vector representing the unknown MNP amounts and \(\mathbf {b}_{a}= \left[ \begin{array}{cccc} b_1&b_2&\ldots&b_{N_\mathrm {s}} \end{array}\right] ^\mathrm{T}\) containing the measured relaxation amplitudes from all \(N_\mathrm {s}\) sensors of a single activation sequence \({a}\), the linear model can be expressed as:
$$\mathbf {b}_{a}= \mathbf {L}_{a}\,\mathbf {x} \quad (5)$$
where \(\mathbf {L}_{a}\in \mathbb {R}^{N_\mathrm {s}\times N_\mathrm {v}}\) is the system matrix which contains the values of \(L_{{s},{v}}\) for each sensor s and every voxel v. A typical MRXI measurement is comprised of \(N_\mathrm {a}\) activation sequences where the individual equations can be concatenated to formulate the final MRXI forward model
$$\mathbf {b} = \left[ \begin{array}{c} \mathbf {b}_{1} \\ \vdots \\ \mathbf {b}_{N_\mathrm {a}} \end{array}\right] = \left[ \begin{array}{c} \mathbf {L}_{1} \\ \vdots \\ \mathbf {L}_{N_\mathrm {a}} \end{array}\right] \mathbf {x} = \mathbf {L}\,\mathbf {x} \quad (6)$$
with \(\mathbf {L}\in \mathbb {R}^{N_\mathrm {s}N_\mathrm {a}\times N_\mathrm {v}}\) and \(\mathbf {b}\in \mathbb {R}^{N_\mathrm {s}N_\mathrm {a}}\).
To avoid recalculating the entire system matrix in every step of the coil current optimization, we precompute \(N_\mathrm {c}\) dictionary system matrices \(\mathbf {L}_{\mathrm {dict}}\), which are independent of any coil current. Subsequently, they are linearly scaled by the respective coil currents and summed to generate the actual MRXI forward model. The dictionary system matrix of the \({c}\)th coil \(\mathbf {L}_{\mathrm {dict},{c}}\) is calculated as the system matrix of an individual activation of coil \({c}\) with unit current \(I_{c}=1\,\mathrm {A}\). The forward model is then solely dependent on the \(N_\mathrm {c}\times N_\mathrm {a}\) coil currents represented in the current matrix \(\mathbf {I}\in \mathbb {R}^{N_\mathrm {c}\times N_\mathrm {a}}\) and can be formulated as:
$$\mathbf {b} = \mathbf {L}(\mathbf {I})\,\mathbf {x} \quad \text {with} \quad \mathbf {L}_{a}(\mathbf {I}) = \sum _{{c}=1}^{N_\mathrm {c}} I_{{c},{a}}\,\mathbf {L}_{\mathrm {dict},{c}} \quad (7)$$
where \(I_{{c},{a}}\) denotes the current of the \({a}\)th activation in the \({c}\)th coil.
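The dictionary-based assembly of the forward model can be sketched as follows; all dimensions and the random dictionary matrices are illustrative stand-ins for the precomputed \(\mathbf {L}_{\mathrm {dict},{c}}\).

```python
import numpy as np

# Sketch: each activation's system matrix block L_a is a current-weighted sum
# of per-coil unit-current dictionary matrices; the blocks are stacked row-wise.
rng = np.random.default_rng(1)
N_c, N_a, N_s, N_v = 5, 3, 8, 10                # coils, activations, sensors, voxels

L_dict = rng.standard_normal((N_c, N_s, N_v))   # unit-current dictionary matrices
I = rng.standard_normal((N_c, N_a))             # coil current matrix

def system_matrix(I, L_dict):
    """Stack the N_a blocks L_a = sum_c I[c, a] * L_dict[c]."""
    blocks = [np.tensordot(I[:, a], L_dict, axes=1) for a in range(I.shape[1])]
    return np.vstack(blocks)

L = system_matrix(I, L_dict)                    # shape (N_s * N_a, N_v)
```

Since each block is a linear combination of fixed matrices, changing the currents during the optimization only requires this cheap re-weighting rather than a full forward-model recomputation.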
2.2 MRXI Setup
The forward model employed in this study is based on a real setup of PTB Berlin, Germany [19] (see the photograph in Fig. 1 for the real setup and Fig. 2a for a schematic representation). The sensor system consists of 304 low-\(T_\mathrm {C}\) SQUIDs arranged in four horizontal xy-planes measuring the magnetic flux densities produced by the decays of net magnetic moments during the relaxation phase [25]. On each plane, the sensors are arranged on a hexagonal grid and oriented in five directions to measure different vector components of the magnetic flux density. The \(12\,\mathrm {cm}\times 12\,\mathrm {cm}\times 6\,\mathrm {cm}\) ROI is embedded inside a \(50\,\mathrm {cm}\times 40\,\mathrm {cm}\times 7\,\mathrm {cm}\) phantom body (see inset of Fig. 1) and is separated into five horizontal xy-layers. MNP-loaded gypsum cubes with a defined MNP quantity, inserted into the ROI during an experiment, serve to model more complex MNP distributions. The magnetic fields are generated by planar, spiral coils (\(d= 36\,\mathrm {mm}\)), with 15 coils above and 15 coils below the ROI applied for MNP magnetization. For a single experiment, 15 MNP-loaded cubes (\(3.7\,\mathrm {mg/cm^3}\)) were arranged in one of the horizontal xy-layers to form the letter “P” as ground-truth MNP distribution. Subsequently, each excitation coil was consecutively driven by \(I_{c}= 0.8\,\mathrm {A}\) to produce 30 different relaxation responses \(\mathbf {b}_\mathrm {real}\), which were recorded by the sensor system. One such activation sequence took \(3.5\,\mathrm {s}\) to complete, resulting in a total of \(105\,\mathrm {s}\) for the entire measurement. Five of these measurements were conducted with the P-shaped distribution in a different horizontal layer each time (see Fig. 2b). For a more detailed description of the experimental setup, the reader is referred to [19].
Since the optimization requires a broader spectrum of measurement data produced by arbitrary combinations of coil currents, superimposed data are generated from the real measurements using an approach analogous to (7). The sequential MRXI measurement data from a single phantom are used as dictionary measurements for each coil \({c}\) such that \(\mathbf {b}_{\mathrm {dict},{c}} = \mathbf {b}_{\mathrm {real},{c}}/0.8\) (the division by 0.8 compensates the \(I_{c}=0.8\,\mathrm {A}\) applied during the real measurements, so that \(\mathbf {b}_{\mathrm {dict},{c}}\) corresponds to unit current). The superimposed measurement data employed during optimization are ultimately computed by:
$$\mathbf {b}_{a}= \sum _{{c}=1}^{N_\mathrm {c}} I_{{c},{a}}\,\mathbf {b}_{\mathrm {dict},{c}} \quad (8)$$
Naturally, since each measured phantom exhibits different sensor data, the set of dictionary measurements \(\mathbf {b}_\mathrm {dict}\) differs from one phantom to another.
2.3 Objective Function
The spectral condition number \(\kappa \) of the system matrix is a measure for the stability of the inverse problem. A large spectral condition number indicates that small alterations in the measured data \(\mathbf {b}\) or the forward model \(\mathbf {L}\) lead to large deviations in the reconstructed MNP distribution \(\tilde{\mathbf {x}}\) [16]. The spectral condition number strongly depends on the structure of the system matrix \(\mathbf {L}\) and is given by (1). To reduce the perturbations caused by model uncertainties or measurement noise during matrix inversion, it is desirable to choose the coil currents such that \(\kappa \) is minimal. However, since the spectral condition number is neither differentiable nor continuous, a simple direct optimization is not possible. Therefore, we aim for an indirect optimization as described below.
We employ the matrix norm equivalence
$$\left\| \mathbf {L}\right\| _2 \le \left\| \mathbf {L}\right\| _\mathrm {F}\le \sqrt{n}\,\left\| \mathbf {L}\right\| _2 \quad (9)$$
where \(n=\min (N_\mathrm {s}N_\mathrm {a}, N_\mathrm {v})\) is the number of singular values of \(\mathbf {L}\) to formulate a surrogate optimization problem for \(\kappa \) by minimizing the Frobenius condition number \(\kappa _\mathrm {F}\) given by (2). From (9), it follows that \(\kappa \) is bounded by
$$\frac{1}{n}\,\kappa _\mathrm {F}\le \kappa \le \kappa _\mathrm {F} \quad (10)$$
which implies that a minimization of \(\kappa _F\) lowers the upper bound of \(\kappa \). Therefore, the surrogate minimization problem at hand with respect to the coil currents \(\mathbf {I}\) is
$$\tilde{\mathbf {I}} = \mathop {\mathrm {arg\,min}}\limits _{\mathbf {I}}\; \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I})) \quad (11)$$
where \(\tilde{\mathbf {I}}\in \mathbb {R}^{N_\mathrm {c}\times N_\mathrm {a}}\) denotes the optimized coil current matrix.
2.4 Coil Current Optimization
Since \(\mathbf {L} \in \mathbb {R}^{N_\mathrm {s}N_\mathrm {a} \times N_\mathrm {v}}\) is usually a rectangular matrix, the inverse of the system matrix \(\mathbf {L}^{-1}\) in (2) is replaced with the Moore–Penrose pseudoinverse \(\mathbf {L}^\dagger = \left( \mathbf {L}^\mathrm{T}\mathbf {L}\right) ^{-1}\mathbf {L}^\mathrm{T}\). Furthermore, the Frobenius norm of a matrix can also be expressed as \(\left\| \mathbf {L}\right\| _\mathrm {F}= \sqrt{\mathrm {Tr}\left( \mathbf {L}^\mathrm{T}\mathbf {L} \right) }\) with \(\mathrm {Tr(\cdot )}\) denoting the trace operator which sums up the main diagonal entries of a matrix. Thus, the objective function \(\kappa _\mathrm {F}\) can be reformulated into
$$\kappa _\mathrm {F}(\mathbf {L}(\mathbf {I})) = \sqrt{\mathrm {Tr}\left( \mathbf {L}^\mathrm{T}\mathbf {L}\right) }\,\sqrt{\mathrm {Tr}\left( \left( \mathbf {L}^\mathrm{T}\mathbf {L}\right) ^{-1}\right) } \quad (12)$$
where \(\mathbf {L}(\mathbf {I})\) is a concatenation of \(N_\mathrm {a}\) matrix sums as in (7).
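The equivalence of the trace reformulation and the direct norm product can be checked numerically; the full-column-rank test matrix below is an illustrative stand-in for the system matrix.

```python
import numpy as np

# Numerical check: ||L||_F * ||L^dagger||_F equals
# sqrt(Tr(L^T L) * Tr((L^T L)^{-1})) for a full-column-rank L.
rng = np.random.default_rng(2)
L = rng.standard_normal((12, 5))

def kappa_F_trace(L):
    M = L.T @ L                                   # Gram matrix, assumed invertible
    return np.sqrt(np.trace(M) * np.trace(np.linalg.inv(M)))

def kappa_F_direct(L):
    return np.linalg.norm(L, 'fro') * np.linalg.norm(np.linalg.pinv(L), 'fro')
```

The trace form avoids computing a full SVD or an explicit pseudoinverse, which matters when the objective must be evaluated repeatedly inside the optimization loop.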
We need the following three general equalities of matrix calculus [23] to compute the gradient of (12):
$$\frac{\partial }{\partial t}\,\mathrm {Tr}\left( \mathbf {X}\right) = \mathrm {Tr}\left( \frac{\partial \mathbf {X}}{\partial t}\right) \quad (13)$$
$$\frac{\partial }{\partial t}\left( \mathbf {X}\mathbf {Y}\right) = \frac{\partial \mathbf {X}}{\partial t}\,\mathbf {Y} + \mathbf {X}\,\frac{\partial \mathbf {Y}}{\partial t} \quad (14)$$
$$\frac{\partial }{\partial t}\,\mathbf {X}^{-1} = -\mathbf {X}^{-1}\,\frac{\partial \mathbf {X}}{\partial t}\,\mathbf {X}^{-1} \quad (15)$$
The derivative of the system matrix \(\mathbf {L}(\mathbf {I})\) with respect to a single coil current \(I_{{c},{a}}\) is simply the dictionary system matrix of the \({c}\)th coil in the \({a}\)th \(N_\mathrm {s}\times N_\mathrm {v}\) block of the derived system matrix such that
$$\frac{\partial \mathbf {L}(\mathbf {I})}{\partial I_{{c},{a}}} = \left[ \begin{array}{c} \mathbf {0} \\ \vdots \\ \mathbf {L}_{\mathrm {dict},{c}} \\ \vdots \\ \mathbf {0} \end{array}\right] \quad \text {(}\mathbf {L}_{\mathrm {dict},{c}}\text { in the }{a}\text {th block)} \quad (16)$$
where \(\mathbf {0} \in \mathbb {R}^{N_\mathrm {s}\times N_\mathrm {v}}\) are zero matrices. Using (13), (14) and (15), the gradient of \(\kappa _\mathrm {F}(\mathbf {L}(\mathbf {I}))\) can then be formulated as:
$$\nabla \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I})) = \left[ \begin{array}{ccc} \frac{\partial \kappa _\mathrm {F}}{\partial I_{1,1}} & \cdots & \frac{\partial \kappa _\mathrm {F}}{\partial I_{1,N_\mathrm {a}}} \\ \vdots & \ddots & \vdots \\ \frac{\partial \kappa _\mathrm {F}}{\partial I_{N_\mathrm {c},1}} & \cdots & \frac{\partial \kappa _\mathrm {F}}{\partial I_{N_\mathrm {c},N_\mathrm {a}}} \end{array}\right] \quad (17)$$
The individual entries of \(\nabla \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I})) \in \mathbb {R}^{N_\mathrm {c}\times N_\mathrm {a}}\) in (17) are then calculated using (16) for each coil current \(I_{{c},{a}}\) with
$$\frac{\partial \kappa _\mathrm {F}}{\partial I_{{c},{a}}} = \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I})) \left( \frac{\mathrm {Tr}\left( \mathbf {L}_{a}^\mathrm{T}\,\mathbf {L}_{\mathrm {dict},{c}}\right) }{\left\| \mathbf {L}\right\| _\mathrm {F}^2} - \frac{\mathrm {Tr}\left( \mathbf {L}^\dagger \mathbf {L}^{\dagger \mathrm{T}}\,\mathbf {L}_{a}^\dagger \,\mathbf {L}_{\mathrm {dict},{c}}\right) }{\left\| \mathbf {L}^\dagger \right\| _\mathrm {F}^2} \right) \quad (18)$$
where \(\mathbf {L}_{a}\in \mathbb {R}^{N_\mathrm {s}\times N_\mathrm {v}}\) (resp. \(\mathbf {L}_{a}^\dagger \in \mathbb {R}^{N_\mathrm {v}\times N_\mathrm {s}}\)) is the \({a}\)th block of the system matrix \(\mathbf {L}\) (resp. of its pseudoinverse \(\mathbf {L}^\dagger \)).
A gradient descent with the general update step
$$\mathbf {I}_{j+1} = \mathbf {I}_{j} - \beta \,\nabla \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I}_{j})) \quad (19)$$
is applied to find the local minima of \(\kappa _\mathrm {F}\) with respect to the coil currents. The coil current matrix and the gradient are normalized to unit Frobenius norm (\(\left\| \mathbf {I}\right\| _\mathrm {F}= \left\| \nabla \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I}))\right\| _\mathrm {F}= 1\)) during each minimization step to ensure a consistent step size control throughout various dimensions of \(\mathbf {I}\). This is admissible since a linear scaling of \(\mathbf {I}\) neither affects \(\kappa _\mathrm {F}\) nor \(\kappa \). The optimization algorithm terminates if a step toward the negative gradient \(- \beta \nabla \kappa _\mathrm {F}(\mathbf {L}(\mathbf {I}_j))\) with the step size \(\beta \) (here \(\beta = 2\cdot 10^{-3}\)) increases the objective function value \(\kappa _\mathrm {F}(\mathbf {L}(\mathbf {I}_{j+1}))\) compared to the previous iteration \(\kappa _\mathrm {F}(\mathbf {L}(\mathbf {I}_{j}))\).
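The normalized gradient descent with this termination rule can be sketched as follows. As a simplifying assumption for brevity, the gradient is approximated here by central finite differences instead of the analytic expression, and all problem dimensions are illustrative.

```python
import numpy as np

# Sketch of the normalized gradient descent on kappa_F: currents and gradient
# are rescaled to unit Frobenius norm each step, and the loop stops once a
# step would increase the objective (termination rule from the text).
rng = np.random.default_rng(3)
N_c, N_a, N_s, N_v = 4, 2, 6, 5
L_dict = rng.standard_normal((N_c, N_s, N_v))   # unit-current dictionary matrices

def system_matrix(I):
    return np.vstack([np.tensordot(I[:, a], L_dict, axes=1) for a in range(I.shape[1])])

def kappa_F(I):
    L = system_matrix(I)
    return np.linalg.norm(L, 'fro') * np.linalg.norm(np.linalg.pinv(L), 'fro')

def fd_gradient(I, h=1e-6):
    """Central finite-difference approximation of the gradient of kappa_F."""
    g = np.zeros_like(I)
    for idx in np.ndindex(*I.shape):
        E = np.zeros_like(I)
        E[idx] = h
        g[idx] = (kappa_F(I + E) - kappa_F(I - E)) / (2 * h)
    return g

beta = 2e-3                                     # step size from the text
I = rng.standard_normal((N_c, N_a))
I /= np.linalg.norm(I)                          # unit Frobenius norm
f_init = f_prev = kappa_F(I)
for _ in range(200):                            # iteration cap for this sketch
    g = fd_gradient(I)
    g /= np.linalg.norm(g)
    I_next = I - beta * g
    I_next /= np.linalg.norm(I_next)            # rescaling leaves kappa_F unchanged
    f_next = kappa_F(I_next)
    if f_next >= f_prev:                        # a step increased the objective: stop
        break
    I, f_prev = I_next, f_next
```

The rescaling of \(\mathbf {I}\) is admissible because \(\kappa _\mathrm {F}\) is invariant under a linear scaling of the currents; in practice, the analytic gradient is much cheaper than the finite-difference loop used in this sketch.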
Four different coil current patterns which are well established in sensing matrix design are used as initial currents for the optimization as well as a benchmark for its efficiency [14]. In the first approach, we employ a Gaussian distribution for the choice of the coil currents, where each entry of \(\mathbf {I}\) is chosen as a random value such that \(I_{{c},{a}}\sim \mathscr {N}(0,1)\). Secondly, \(\mathbf {I}\) is designed as a Bernoulli matrix containing values of \(\pm 1\,\mathrm {A}\) with equal probability: \(I_{{c},{a}}\sim \mathrm {sgn}(\mathscr {N}(0,1))\). The third coil current pattern is chosen as a random binary sequence such that the coils are either not activated or driven by unit current with equal probability, where \(I_{{c},{a}}\sim \max \left\{ \mathrm {sgn}(\mathscr {N}(0,1)),\ 0\right\} \). Finally, a sequential activation strategy is employed, where a single randomly chosen coil is energized during each activation sequence. No coil is chosen more than once during a measurement using sequential activations.
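The four initial coil current patterns can be generated as in the following sketch; the numbers of coils and activation sequences are illustrative.

```python
import numpy as np

# Sketch of the four initial coil current patterns (Gaussian, Bernoulli,
# binary, sequential) for an N_c x N_a current matrix.
rng = np.random.default_rng(4)
N_c, N_a = 30, 6

I_gauss = rng.standard_normal((N_c, N_a))                             # I ~ N(0, 1)
I_bernoulli = np.sign(rng.standard_normal((N_c, N_a)))                # +/- 1 A
I_binary = np.maximum(np.sign(rng.standard_normal((N_c, N_a))), 0.0)  # 0 or 1 A

# Sequential: one distinct coil per activation, unit current, no repeats.
I_seq = np.zeros((N_c, N_a))
coils = rng.choice(N_c, size=N_a, replace=False)
I_seq[coils, np.arange(N_a)] = 1.0
```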
2.5 Reconstruction
2.5.1 General
The choice of an appropriate reconstruction algorithm plays a crucial role in accurately reconstructing an estimate \(\tilde{\mathbf {x}}\) of the original MNP distribution from a given inverse problem. Two standard reconstruction approaches have been extended and adapted to account for the a priori information available in MRXI and are employed for the reconstruction from sensor data recorded with the initial as well as the optimized coil current patterns: (i) the well-known iterated Tikhonov method [13], an \(\ell _2\)-regularization technique, and (ii) the iterative shrinkage-thresholding algorithm (ISTA) [5, 10], an \(\ell _1\)-regularization technique commonly used for compressed sensing. Typically, efficient compressed sensing image recovery requires uncorrelated columns of the system matrix [7], which is not the case in MRXI. Although the properties of \(\mathbf {L}\) are thus not well suited to a reconstruction algorithm originally designed for compressed sensing, the sparsity of the phantoms is nonetheless exploited by the use of an \(\ell _1\)-regularization.
We utilize a priori information for the solution of the inverse problems to further improve the image quality of the result. As already shown in [17], in MRXI it is sensible to introduce a nonnegativity constraint to the data vector \(\mathbf {x} \succeq 0\) as a negative concentration of MNPs in a voxel is not possible. The second exploitable a priori information is the spatial sensitivity \(\mathbf {s} = \left[ \begin{array}{cccc} s_1&s_2&\ldots&s_{N_\mathrm {v}} \end{array}\right] ^\mathrm{T}\) [3, 9]. It encodes the theoretical overall impact of a voxel on all sensors throughout a measurement. The sensitivity \(s_{v}\) of voxel \({v}\) is expressed through the column-sum of absolute values of \(\mathbf {L}\):
$$s_{v}= \sum _{i=1}^{N_\mathrm {s}N_\mathrm {a}} \left| L_{i,{v}}\right| $$
where \(L_{i,{v}}\) is the entry in the \(i\)th row and the \({v}\)th column of \(\mathbf {L}\). Typically, an equal spatial sensitivity over the whole ROI results in a more stable solution with increased reconstruction quality [9]. Voxels with larger spatial sensitivity have a stronger impact on the measured data and are consequently favored during reconstruction. Thus, MNP ensembles in lowly sensitive areas of the ROI are often overshadowed by highly sensitive voxels that contain no or only small amounts of MNPs, because those voxels allow a better fit in the imaging minimization problem. Therefore, a spatial sensitivity weighting term is introduced into the reconstruction algorithms to suppress this bias.
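A minimal sketch of the sensitivity computation and the resulting diagonal weighting, using a random stand-in for the system matrix:

```python
import numpy as np

# Spatial sensitivity: column-sum of absolute values of the system matrix,
# plus the normalized diagonal weighting used by the reconstructions below.
rng = np.random.default_rng(5)
L = rng.standard_normal((24, 10))              # illustrative system matrix

s = np.abs(L).sum(axis=0)                      # sensitivity s_v per voxel
Gamma = np.diag(s / np.linalg.norm(s))         # normalized weighting matrix
```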
The Pearson correlation coefficient [11, 22]
$$\mathrm {CC} = \frac{\mathrm {Cov}\left( \mathbf {x}_\mathrm {truth},\,\mathbf {x}_\mathrm {recon}\right) }{\mathrm {STD}\left( \mathbf {x}_\mathrm {truth}\right) \,\mathrm {STD}\left( \mathbf {x}_\mathrm {recon}\right) }$$
is employed in order to measure the performance and the accuracy of the reconstruction results. The CC is an established figure of merit to evaluate the reconstruction quality and is calculated from the ratio of the covariance \(\mathrm {Cov\left( \cdot ,\;\cdot \right) }\) to the product of the standard deviations \(\mathrm {STD}\left( \cdot \right) \) of the ground-truth MNP distribution \(\mathbf x _\mathrm {truth}\) and the reconstruction \(\mathbf x _\mathrm {recon}\). It has already been used in several MRXI studies [3, 8, 9] and quantifies the similarity between the reconstructed and the ground-truth image, where a CC of 1 indicates a complete overlap of the two images and 0 indicates no correlation at all.
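A minimal implementation of the CC as defined above:

```python
import numpy as np

# Pearson correlation coefficient between ground truth and reconstruction:
# covariance divided by the product of the standard deviations.
def cc(x_truth, x_recon):
    x_truth = np.asarray(x_truth, dtype=float).ravel()
    x_recon = np.asarray(x_recon, dtype=float).ravel()
    cov = np.mean((x_truth - x_truth.mean()) * (x_recon - x_recon.mean()))
    return cov / (x_truth.std() * x_recon.std())
```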
2.5.2 Nonnegative, sensitivity-weighted Tikhonov
The classical Tikhonov regularization with nonnegativity constraint can be formulated as the minimization problem
$$\tilde{\mathbf {x}} = \mathop {\mathrm {arg\,min}}\limits _{\mathbf {x}\succeq 0}\; f(\mathbf {x}) \quad \text {with} \quad f(\mathbf {x}) = \left\| \mathbf {L}\mathbf {x}-\mathbf {b}\right\| _2^2 + \alpha \left\| \varvec{\Gamma }\mathbf {x}\right\| _2^2$$
where \(\alpha \in \mathbb {R}\) is the regularization parameter and \(\varvec{\Gamma } \in \mathbb {R}^{N_\mathrm {v}\times N_\mathrm {v}}\) a weighting matrix which is most commonly the identity matrix. We employ \(\varvec{\Gamma }\) as a diagonal matrix with the normalized sensitivity values as entries such that \(\varvec{\Gamma } = \mathrm {diag}\left( \frac{\mathbf {s}}{\left\| \mathbf {s}\right\| }\right) \) to account for the spatial sensitivity weighting. The gradient of the objective function \(f(\mathbf {x})\) with respect to \(\mathbf {x}\)
$$\nabla f(\mathbf {x}) = 2\,\mathbf {L}^\mathrm{T}\left( \mathbf {L}\mathbf {x}-\mathbf {b}\right) + 2\alpha \,\varvec{\Gamma }^\mathrm{T}\varvec{\Gamma }\,\mathbf {x}$$
has to be calculated for the iterative regularization approach. Subsequently, the nonnegativity constraint is introduced such that the final reconstruction scheme is then formulated as gradient descent with
$$\mathbf {x}_{j+1} = \Pi _{\mathbb {R}^+_0}\left( \mathbf {x}_{j} - \beta \,\nabla f(\mathbf {x}_{j})\right) $$
where \(\beta \) is the step size of the gradient descent and \(\Pi _{\mathbb {R}^+_0}\) is the projection onto the set of nonnegative, real numbers \(\mathbb {R}^+_0\). Specifically, \(\Pi _{\mathbb {R}^+_0}\left( \mathbf {x}\right) \) performs the operation
$$\left[ \Pi _{\mathbb {R}^+_0}\left( \mathbf {x}\right) \right] _{v}= \max \left\{ x_{v},\ 0\right\} , \quad {v}= 1,\ldots ,N_\mathrm {v}$$
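The complete projected gradient scheme can be sketched as follows; the problem sizes, the regularization parameter, the step size and the iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of the nonnegative, sensitivity-weighted Tikhonov reconstruction as
# a projected gradient descent on a synthetic, noise-free toy problem.
rng = np.random.default_rng(6)
L = rng.standard_normal((30, 12))
x_true = np.maximum(rng.standard_normal(12), 0.0)   # nonnegative ground truth
b = L @ x_true

s = np.abs(L).sum(axis=0)
Gamma = np.diag(s / np.linalg.norm(s))              # sensitivity weighting
alpha = 1e-3
beta = 0.9 / (2 * np.linalg.norm(L.T @ L, 2))       # step below 1/Lipschitz

x = np.zeros(12)
for _ in range(3000):
    grad = 2 * L.T @ (L @ x - b) + 2 * alpha * (Gamma.T @ Gamma) @ x
    x = np.maximum(x - beta * grad, 0.0)            # projection onto x >= 0
```

On this consistent toy problem the iteration essentially recovers the ground truth; with real, noisy data the choice of \(\alpha \) governs the bias–stability trade-off.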
2.5.3 Nonnegative, sensitivity-weighted ISTA (SWISTA)
The ISTA algorithm [5] (here with added nonnegativity constraint) is one of the most popular methods to solve the \(\ell _1\)-regularized problem
$$\tilde{\mathbf {x}} = \mathop {\mathrm {arg\,min}}\limits _{\mathbf {x}}\; \left\| \mathbf {L}\mathbf {x}-\mathbf {b}\right\| _2^2 + \lambda \left\| \mathbf {x}\right\| _1$$
that tends toward a sparse solution for \(\tilde{\mathbf {x}}\). The standard ISTA iteratively updates \(\mathbf {x}\) with the general step
$$\mathbf {x}_{j+1} = \mathrm {soft}_{\beta \lambda }\left( \mathbf {x}_{j} - 2\beta \,\mathbf {L}^\mathrm{T}\left( \mathbf {L}\mathbf {x}_{j}-\mathbf {b}\right) \right) $$
where the step size \(\beta \) is typically chosen as the reciprocal of the Lipschitz constant (here \(\beta = 0.99/\left\| \mathbf {L}^\mathrm{T}\mathbf {L}\right\| _2\)) and \(\mathrm {soft}_{\beta \lambda }\) denotes the soft-thresholding operator which performs
$$\left[ \mathrm {soft}_{\beta \lambda }\left( \mathbf {x}\right) \right] _{v}= \mathrm {sgn}\left( x_{v}\right) \max \left\{ \left| x_{v}\right| - \beta \lambda ,\ 0\right\} $$
with the regularization parameter \(\lambda \) for every voxel \({v}\). We introduce a sensitivity weighting by multiplying \(\gamma _{v}= \left( \frac{s_{v}}{\left\| \mathbf {s}\right\| }\right) ^2\) to the penalty term of (28) such that
$$\tilde{\mathbf {x}} = \mathop {\mathrm {arg\,min}}\limits _{\mathbf {x}\succeq 0}\; \left\| \mathbf {L}\mathbf {x}-\mathbf {b}\right\| _2^2 + \lambda \sum _{{v}=1}^{N_\mathrm {v}} \gamma _{v}\left| x_{v}\right| $$
to compensate the unequally distributed spatial sensitivity in MRXI. The general update step for the adapted, nonnegative and sensitivity-weighted ISTA algorithm (referred to as SWISTA), which we use for the reconstruction of the MNP distribution, is
$$\mathbf {x}_{j+1} = \Pi _{\mathbb {R}^+_0}\left( \mathrm {soft}_{\beta \lambda \gamma _{v}}\left( \mathbf {x}_{j} - 2\beta \,\mathbf {L}^\mathrm{T}\left( \mathbf {L}\mathbf {x}_{j}-\mathbf {b}\right) \right) \right) $$
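SWISTA can be sketched analogously: a gradient step on the data-fidelity term, soft-thresholding with per-voxel thresholds \(\beta \lambda \gamma _{v}\), and projection onto the nonnegative orthant. Sizes and parameters are again illustrative.

```python
import numpy as np

# Sketch of the nonnegative, sensitivity-weighted ISTA (SWISTA) on a sparse,
# synthetic toy problem; soft-threshold plus nonnegativity reduces to a
# one-sided max for each voxel.
rng = np.random.default_rng(7)
L = rng.standard_normal((30, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = 1.0                                # sparse, nonnegative ground truth
b = L @ x_true

s = np.abs(L).sum(axis=0)
gamma = (s / np.linalg.norm(s)) ** 2                # per-voxel sensitivity weights
lam = 1e-2
beta = 0.99 / np.linalg.norm(L.T @ L, 2)            # step size

x = np.zeros(12)
for _ in range(3000):
    z = x - 2 * beta * L.T @ (L @ x - b)            # gradient step on ||Lx - b||^2
    x = np.maximum(z - beta * lam * gamma, 0.0)     # soft-threshold + nonnegativity
```

The per-voxel threshold \(\beta \lambda \gamma _{v}\) penalizes highly sensitive voxels more strongly, which is exactly the bias compensation motivated above.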
2.5.4 Regularization parameter selection
Naturally, the quality of a reconstruction using one of the two regularization techniques presented in Sects. 2.5.2 and 2.5.3 is strongly influenced by the choice of the regularization parameter. Therefore, multiple reconstructions have been created for every inverse problem using a set of different regularization parameters to avoid an unfair comparison between the results. Initially, the lower and upper thresholds for the regularization parameters have been determined. This was done for all phantoms and several numbers of activation sequences by empirically lowering (resp. raising) the regularization parameter values until the quality of the reconstruction, both by visual inspection and quantitatively by calculating the CC to the ground-truth phantom, deteriorated with any further reduction (resp. increase). The system matrices employed here have been normalized to unit spectral norm in advance to ensure consistent results.
In the second step, the iterated Tikhonov approach explained in Sect. 2.5.2 has been applied to reconstruct the MNP phantoms. The reconstruction was repeated for regularization parameter values between \(\alpha =1\) and \(\alpha =10^{-5}\) in 11 logarithmically equidistant steps (\(\alpha = \left\{ 1,\ 0.316,\ 0.1,\ \ldots ,\ 10^{-5}\right\} \)). Subsequently, the result with the highest CC to the ground-truth phantom was stored for comparison between different coil activation strategies. An analogous procedure was applied for the SWISTA reconstruction in 7 logarithmically spaced steps between \(\lambda =10^6\) and \(\lambda =10^3\). Both iterative algorithms were terminated once the relative deviation to the previous iteration \(\left\| \mathbf {x}_{j+1}-\mathbf {x}_{j}\right\| /\left\| \mathbf {x}_{j}\right\| \) fell below \(2 \cdot 10^{-5}\). This value has been chosen as it empirically yielded a reasonable trade-off between algorithm convergence and computation time.
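The parameter sweep can be sketched as follows, using a plain nonnegative Tikhonov solver (identity weighting) as the inner reconstruction; all problem data are synthetic stand-ins, and the selection mirrors the highest-CC rule described above.

```python
import numpy as np

# Sweep a logarithmic grid of regularization parameters and keep the
# reconstruction with the highest CC to the ground truth.
rng = np.random.default_rng(8)
L = rng.standard_normal((30, 12))
L /= np.linalg.norm(L, 2)                        # unit spectral norm, as in the text
x_true = np.maximum(rng.standard_normal(12), 0.0)
b = L @ x_true

def cc(u, v):
    return np.corrcoef(u, v)[0, 1]

def tikhonov_nn(L, b, alpha, iters=1500):
    """Projected gradient descent on ||Lx - b||^2 + alpha * ||x||^2, x >= 0."""
    beta = 0.9 / (2.0 * (1.0 + alpha))           # step below 1/Lipschitz (||L||_2 = 1)
    x = np.zeros(L.shape[1])
    for _ in range(iters):
        x = np.maximum(x - beta * (2 * L.T @ (L @ x - b) + 2 * alpha * x), 0.0)
    return x

alphas = np.logspace(0, -5, 11)                  # 1, 0.316, 0.1, ..., 1e-5
results = [(cc(x_true, tikhonov_nn(L, b, a)), a) for a in alphas]
best_cc, best_alpha = max(results)
```

In the experiments of the paper the ground truth is known, so this oracle-style selection is admissible; for unknown distributions a heuristic such as the L-curve or discrepancy principle would be needed instead.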
3 Results
3.1 Condition Number Minimization
We investigate the impact on the spectral matrix condition \(\kappa \) by means of minimization of the corresponding Frobenius condition \(\kappa _\mathrm {F}\) with respect to the coil current matrix \(\mathbf {I}\). In practice, both graphical representations of the condition numbers qualitatively look very similar (see Fig. 3). The condition numbers are recorded over increasing numbers of activation sequences up to \(N_\mathrm {a}= 20\) using the four different patterns for the initial coil currents described at the end of Sect. 2.4. The coil current optimization is performed for all four activation patterns and is repeated over the course of 50 realizations with different random initial conditions, respectively. The averaged results for the spectral and the Frobenius condition numbers are displayed in Fig. 4. The condition numbers for the optimized coil currents are averaged over the entire 200 realizations from all four activation patterns. Additionally, the minimum and maximum condition numbers of the 200 optimized system matrices are recorded and illustrated as dashed lines in Fig. 4. The vertical axis in Fig. 4 is limited to \(\kappa = \kappa _\mathrm {F}= 10^9\) for illustrative purposes, as the condition numbers of the sequential activation patterns extend beyond \(10^{14}\).
The lowest values for \(\kappa \) and \(\kappa _\mathrm {F}\) are achieved at \(N_\mathrm {a}= 1\) for all coil current strategies except for the sequential pattern. The condition numbers of underdetermined systems are typically dominated by the matrix row dependencies. Considering that the condition numbers grow with matrix size [24], it is not surprising that a smaller number of rows yields lower values for \(\kappa \) and \(\kappa _\mathrm {F}\). Both condition numbers exhibit their largest values at \(N_\mathrm {a}=3\) in all cases. At 3 activation sequences, \(\mathbf {L}\) has similar numbers of rows and columns, so that both row and column dependencies act negatively on the condition numbers. \(\kappa \) and \(\kappa _\mathrm {F}\) decrease in value for \(N_\mathrm {a}> 3\) since the condition numbers of overdetermined systems are mainly defined by the matrix column dependencies. Concatenating further rows to these system matrices (i.e., performing more activation sequences) contributes to the linear independence of the columns and consequently lowers the condition numbers.
On average, Gaussian and Bernoulli coil current patterns yielded very similar results with respect to the condition numbers, deviating around the mean values depicted in Fig. 4 with STDs of \(8.3\%\) and \(5.7\%\), respectively. The mean spectral condition of the binary pattern varies with an STD of \(6.2\%\) and is slightly higher than for the other two approaches. Average condition numbers of sequential coil activations are by far the largest and strongly depend on the combination of employed coils; accordingly, the values vary heavily with an STD of \(154\%\). The system matrices designed from our optimized coil currents consistently show reduced condition numbers throughout various numbers of activation sequences and different initial current patterns. Depending on the initial conditions and the number of excitations, the optimization reliably reduces \(\kappa \) by a mean of approximately 75–\(80\%\) for Gaussian, Bernoulli and binary coil current patterns (\(>\,99\%\) for sequential activations). Moreover, the magnitudes of the condition numbers determined by the optimization are rather stable: independent of the initial coil currents, the final objective function value \(\kappa _\mathrm {F}\) deviates from the mean values with an STD of only \(2.7\%\) (at most \(6.2\%\)) on average.
3.2 Reconstruction Evaluation
3.2.1 Evaluation of Optimized Activation Strategies
In the first step of the reconstruction evaluation, we assess the effectiveness of our coil current optimization with respect to the reconstruction quality. For this purpose, we employ the classical, unmodified versions of \(\ell _1\)-regularization (ISTA [5]) and \(\ell _2\)-regularization (nonnegative Tikhonov regularization) described in Sect. 2.5 to recover the five MNP distributions \(\text {P}_1\)–\(\text {P}_5\) (see Fig. 2b) from the different excitation strategies. The CC is calculated for each reconstruction and every realization for all five phantoms. Figure 5 depicts the averaged results for both reconstruction algorithms and the five phantoms using different coil current patterns over various numbers of activation sequences \(N_\mathrm {a}\). Additionally, the maximum CC values achievable with sequential activation of all 30 coils as already performed in [19] are displayed as dash-dotted lines.
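The CC used as a quality metric here is a standard correlation coefficient between reconstruction and ground truth (commonly the Pearson coefficient). A minimal sketch of its computation for voxelized maps — function name and NumPy usage are our assumptions, not the paper's implementation:

```python
import numpy as np

def correlation_coefficient(recon, truth):
    """Pearson correlation between a reconstructed and a ground-truth map.

    Both inputs are flattened, so the metric is independent of the voxel
    grid layout; a value of 1 indicates a perfect (positively scaled) match.
    """
    return np.corrcoef(np.ravel(recon), np.ravel(truth))[0, 1]
```

Because the Pearson coefficient is invariant to positive scaling, it measures the spatial agreement of the distributions rather than the absolute reconstructed MNP amounts.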
The optimized excitation strategy clearly outperforms the initial patterns in the case of nonnegative Tikhonov regularization for the phantoms \(\text {P}_1\)–\(\text {P}_4\), whereas the sequential activations show on average the worst performance. The reconstruction accuracy for the topmost phantom \(\text {P}_5\) is almost identical for all coil current patterns except for the sequential excitation strategy which yields a slower convergence toward the maximum achievable CC for this phantom. The reconstructions using the optimized coil current pattern yield even higher CC values than the reconstructions from the full 30 sequential excitations in phantoms \(\text {P}_2\), \(\text {P}_3\) and \(\text {P}_4\) with only a fraction of the activations.
The reconstructions using the ISTA algorithm are comparable to the Tikhonov regularization for the phantoms \(\text {P}_1\), \(\text {P}_3\), \(\text {P}_4\) and \(\text {P}_5\). Except for \(\text {P}_5\), where almost all excitation strategies show similar behavior, the optimized coil current patterns recover equally or more accurate reconstructions compared to any other pattern. Interestingly, the sequential activations are the most efficient in recovering phantom \(\text {P}_2\) up to \(N_\mathrm {a}= 16\) when using ISTA, and the binary sequence also performs better than the optimized currents up to \(N_\mathrm {a}= 6\). The optimized currents recover more accurate reconstructions than the full sequential activations for the phantoms \(\text {P}_3\) and \(\text {P}_4\) with no more than \(N_\mathrm {a}= 10\).
3.2.2 Evaluation of Proposed Regularization Algorithms
In this section, the performance of the proposed, sensitivity-weighted regularization methods is compared to their respective unmodified versions. Sequential coil current patterns are chosen as activation sequences for this assessment, as they are most commonly used in practical measurements. Figure 6 depicts the averaged values for the CC over different numbers of activation sequences for \(\ell _2\)-regularized reconstructions in the top row (nonnegative Tikhonov vs. nonnegative, sensitivity-weighted Tikhonov) and for \(\ell _1\)-regularized reconstructions in the bottom row (ISTA vs. SWISTA).
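The paper's exact weighting scheme is defined in its Sect. 2.5; the following is therefore only a plausible sketch of a sensitivity-weighted ISTA in which the per-voxel soft thresholds are scaled by a column-norm sensitivity proxy, so that lowly sensitive voxels are penalized less. The function name, the weighting rule and all parameter values are our assumptions:

```python
import numpy as np

def swista(A, b, lam=0.1, n_iter=200):
    """Sketch of a sensitivity-weighted ISTA with a nonnegativity constraint.

    The threshold for voxel j is scaled by w_j = ||a_j|| / max_k ||a_k||,
    a hypothetical stand-in for the paper's sensitivity weighting.
    """
    s = np.linalg.norm(A, axis=0)            # column-wise sensitivity proxy
    w = s / s.max()                          # weaker voxels get smaller thresholds
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = squared spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))   # gradient step on the data term
        x = np.maximum(np.abs(z) - lam * step * w, 0.0) * np.sign(z)
        x = np.maximum(x, 0.0)               # nonnegativity of MNP amounts
    return x
```

For small regularization weights this reduces to a projected Landweber iteration; the weighting only changes how aggressively individual voxels are thresholded.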
3.2.3 Combined Application of Optimized Currents and Sensitivity-Weighted Regularization
A combination of both optimized coil current pattern and sensitivity-weighted reconstruction method is employed and compared to the other four excitation strategies (see Fig. 7). The optimized activations yield equal or higher CC values than the initial patterns using both reconstruction algorithms and throughout every phantom location for almost all numbers of activation sequences. Figure 8 shows an exemplary reconstruction of phantom \(\text {P}_3\), comparing a reconstruction from the full 30 sequential activations with standard nonnegative Tikhonov regularization against an image recovered from 10 optimized activation sequences using the SWISTA algorithm. Both images show the reconstructions with the highest CC values achieved by the respective regularization techniques. The full sequential activations still produce a very blurry, inexact image (\(CC = 0.69\)), while the proposed methods provide a far more accurate, high-contrast recovery (\(CC=0.97\)) with only a third of the number of activation sequences.
As already observed in Sect. 3.2.1, the binary and sequential excitation patterns outperform the optimized activations in the reconstruction of phantom \(\text {P}_2\) for low numbers of activation sequences. However, this effect is compensated to some degree by the application of the sensitivity-weighted reconstruction algorithms. All excitation strategies yield higher correlations to the ground-truth phantom \(\text {P}_2\) when compared to the unmodified versions of the algorithms.
Throughout all excitation strategies, phantom \(\text {P}_1\) yields the best reconstruction results with the largest CC values due to its close proximity to the lower coils, which makes the source of the relaxation signals easier to localize. Similarly, the topmost MNP distribution \(\text {P}_5\) shows good recoveries as well. However, this phantom lies in a highly sensitive area between the sensor arrangement and the rest of the ROI; this area overshadows other voxels, introduces more uncertainty into the inverse problem and prevents a reconstruction as accurate as for \(\text {P}_1\). On average, the other three MNP distributions \(\text {P}_2\)–\(\text {P}_4\) yield lower CC values in comparison due to their increased distances to both coils and sensors, with \(\text {P}_2\) still being the phantom most difficult to reconstruct accurately, independent of the choice of coil currents.
Naturally, the correlations to the ground-truth phantoms increase when raising the number of activation sequences. The increase is most prominent at lower numbers of activation sequences up to around \(N_\mathrm {a}=10\). For more activation sequences, the CC growth becomes much smaller, independent of the excitation strategy. In general, the SWISTA algorithm produced more accurate results than the sensitivity-weighted Tikhonov approach due to the sparsity of the phantoms. Similar to the condition numbers in Fig. 4, the reconstructions using Gaussian and Bernoulli coil current matrices show almost identical behavior regarding their correlation to any of the five phantoms. The binary excitation strategy yields a lower reconstruction quality in most cases, although the CC for a small number of activation sequences (Tikhonov: \(N_\mathrm {a}<3\), SWISTA: \(N_\mathrm {a}<14\)) in phantom \(\text {P}_2\) is slightly higher compared to the other coil current patterns. On average, the sequential coil current pattern performed the worst in almost all scenarios. Most often, the optimized currents reach the same CC as the other patterns with only a fraction of the activation sequences. For example, in the best-case scenario (\(\text {P}_3\), SWISTA), only four optimized activation sequences already outperform the maximum correlation achieved by the next best pattern (Gaussian) using 20 activation sequences, reducing the necessary measurement time by \(80\%\). Only for \(\text {P}_2\) with few activation sequences do some of the initial patterns yield slightly higher correlations than the optimized excitation strategy.
3.3 Spatial Sensitivity
We investigate the spatial sensitivity patterns of the different excitation strategies since an evenly distributed sensitivity also supports a stable reconstruction of phantoms equally in all regions of the ROI. The typical sensitivity distributions of a Gaussian coil current pattern and one from an optimized pattern are shown in Fig. 9. Bernoulli and binary activation sequences produce sensitivity patterns qualitatively almost identical to the Gaussian distribution and are therefore not visualized here. The sensitivity map of sequential coil current patterns is strongly dependent on the choice of energized coils and has a different appearance from one realization to another. We employ the coefficient of variation (CV), which is the ratio between \(\mathrm {STD}\) and mean \(\mu \) (\(\hbox {CV} = {\mathrm {STD}}/{\mu }\)), to quantify the variability of each sensitivity distribution where a low CV indicates an even distribution. The CV of optimized sensitivity patterns is on average \(42\pm 12\%\) lower than the sensitivities of the initial coil current patterns. This is mainly due to the fact that the initial excitation strategies produce currents that are rather similar in magnitude in all coils, resulting in an oversensitization of the top horizontal plane which is close to both coils and sensors. The optimization yields currents for the bottom coils that are on average \(4.4\pm 0.7\) times higher than the currents on top (see Fig. 10). This leads to a higher sensitization of the lower regions and to a more evenly distributed sensitivity in general.
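The CV used above is straightforward to compute; a minimal sketch with two hypothetical sensitivity maps (the function name and example values are our assumptions):

```python
import numpy as np

def coefficient_of_variation(sensitivity_map):
    """CV = STD / mean; a low CV indicates an evenly distributed sensitivity."""
    s = np.asarray(sensitivity_map, dtype=float).ravel()
    return s.std() / s.mean()

# Hypothetical maps: a nearly uniform sensitivity vs. one with a strongly
# oversensitized region (e.g., the plane close to coils and sensors).
even = 1.0 + 0.01 * np.random.default_rng(1).standard_normal(100)
peaked = np.concatenate([np.full(90, 0.1), np.full(10, 5.0)])
print(coefficient_of_variation(even), coefficient_of_variation(peaked))
```

The oversensitized map yields a CV more than two orders of magnitude larger than the uniform one, matching the intuition that the optimization's \(42\pm 12\%\) lower CV corresponds to a visibly more even sensitivity distribution.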
4 Discussion
In this study, we present an optimization strategy to determine coil currents for MRXI that minimize the Frobenius condition number and, as a consequence, also the spectral condition number of the system matrix which enables a more accurate and stable recovery of MNP ensembles. We demonstrate that the matrix condition is minimized reliably by using the derivative of the Frobenius condition number with respect to the coil currents in a gradient descent (see Figs. 3 and 4). Additionally, the optimization equalizes the spatial sensitivity distribution (see Fig. 9), which is also beneficial for a successful recovery [9]. The optimization approach (17) developed here can be readily adapted to any imaging modality with an underlying system of linear equations as well as other linear inverse problems to stabilize their results.
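The principle of minimizing \(\kappa _\mathrm {F}\) over the coil currents can be sketched numerically. The paper derives and uses the analytic gradient of the Frobenius condition number; the toy version below substitutes central finite differences and a small random dictionary of per-coil system matrices, so all sizes, names and step parameters are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_coils, n_sensors, n_voxels = 4, 6, 5
# Toy dictionary of per-coil system matrices L_k (purely illustrative).
dictionary = rng.standard_normal((n_coils, n_sensors, n_voxels))

def kappa_F(currents):
    """Frobenius condition number of the current-weighted superposition."""
    L = np.tensordot(currents, dictionary, axes=1)   # L = sum_k I_k L_k
    return np.linalg.norm(L) * np.linalg.norm(np.linalg.pinv(L))

def minimize_kappa_F(c0, lr=1e-2, eps=1e-6, n_iter=300):
    """Gradient descent with central finite differences (the paper instead
    uses the analytic gradient of kappa_F with respect to the currents)."""
    c, best_c, best = c0.copy(), c0.copy(), kappa_F(c0)
    for _ in range(n_iter):
        g = np.array([(kappa_F(c + eps * e) - kappa_F(c - eps * e)) / (2 * eps)
                      for e in np.eye(len(c))])
        c = c - lr * g / np.linalg.norm(g)           # normalized descent step
        if kappa_F(c) < best:                        # keep the best iterate
            best, best_c = kappa_F(c), c.copy()
    return best_c

c0 = rng.standard_normal(n_coils)
c_opt = minimize_kappa_F(c0)
print(kappa_F(c0), kappa_F(c_opt))
```

Because \(\kappa _\mathrm {F}\) is non-convex (see Sect. 4), such a descent only finds a local minimum, which is nevertheless stable in magnitude according to Fig. 4.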
Different excitation strategies are tested based on the superposition of real measurement data from the MRXI system at PTB Berlin. The reconstructions were performed with two well-established regularization techniques (nonnegative Tikhonov and ISTA [5]) as well as with the proposed, sensitivity-weighted variations thereof. The optimized excitations increase the reconstruction accuracy for almost all employed phantoms throughout various numbers of activation sequences compared to Gaussian, Bernoulli, random binary and sequential coil current matrices (see Fig. 7). In most cases, this leads to a substantial decrease in required activation sequences to achieve a similar or better recovery of the phantoms than a larger number of unoptimized sequences.
In particular, reconstructions from sequential coil current patterns suffer from an unstructured sequence of activated coils, as we can derive from their poor performance in Figs. 5 and 7. On the other hand, it is even possible to outperform full sequential activations of all 30 coils for some of the phantoms by employing optimized excitation patterns. In comparison with the state-of-the-art imaging procedure with full sequential activations and Tikhonov regularization (see the dash-dotted lines of the upper row in Fig. 5), the overall reconstruction accuracy is considerably increased with the application of optimized coil current patterns and the proposed SWISTA algorithm (see the green lines of the lower row in Fig. 7). Enhanced reconstruction results can be achieved in \(\text {P}_2\) and \(\text {P}_3\) already after \(N_\mathrm {a}= 10\), and the other three phantoms \(\text {P}_1\), \(\text {P}_4\) and \(\text {P}_5\) also yield imaging quality comparable to the full 30 sequential activations with a third of the sequences. This means that by employing optimized excitation strategies, the measurement time could be shortened from 105 to \(35\,\text {s}\) while providing comparable or better reconstructions.
The reconstruction of the total MNP amount using an optimized excitation strategy shows a very fast convergence toward the nominal (actually employed) particle amount of \(95.9\,\mathrm {mg}\) for the MNP distributions \(\text {P}_1\)–\(\text {P}_4\). The relative deviation between reconstructed and nominal amounts is \(<10\%\) after no more than \(N_\mathrm {a}=4\) for these four phantoms and both reconstruction algorithms. Only for \(\text {P}_5\) is the necessary number of activation sequences for a deviation \(<10\%\) around \(N_\mathrm {a}=8\). On average, Gaussian and Bernoulli coil currents converge more slowly and toward larger deviations. In contrast to its poor recovery with respect to CC values, the binary excitation strategy reconstructed the total nominal MNP amount as quickly and as accurately as the optimized strategy in many cases. The sequential coil current patterns yielded larger deviations from the nominal MNP amount in most cases.
The sensitivity weighting of the \(\ell _1\)-regularization yields an increase in reconstruction accuracy in areas with low spatial sensitivity (i.e., \(\text {P}_2\) and \(\text {P}_3\)). In particular, the reconstruction of phantom \(\text {P}_2\), which is difficult to recover with the unmodified version of the algorithm (see Fig. 5, ISTA), benefits from the proposed method. Similarly, the \(\ell _2\)-regularization reconstructs slightly more accurate MNP distributions of \(\text {P}_2\) by employing the sensitivity weighting. The remaining phantoms are reconstructed equally well, independent of the weighting factors.
The results from the minimization of the condition number using different amounts of activation sequences shown in Fig. 4 suggest that both \(\kappa \) and \(\kappa _\mathrm {F}\) experience their lowest values for \(N_\mathrm {a}= 1\). This could mistakenly lead to the assumption that the reconstruction should perform best for a single activation sequence. This is not the case since the condition numbers depend heavily on the sizes of the examined matrices [24]. Therefore, an evaluation based on the comparison of condition numbers as indicators for the information content of a matrix is only sensible among matrices with equal sizes.
The technical implementation of the computed optimized currents in an MRXI setup is feasible. Realizing the approximately four times larger currents in the bottom coils required by the optimization is easily viable. In addition, a sensitivity analysis of the optimized currents has been performed to estimate the impact of imprecise current values on the spectral condition number. For this, \(\mathbf {I}\) has been linearly scaled such that the maximum absolute currents equaled \(\max \left( \left| \mathbf {I}\right| \right) = 1\,\mathrm {A}\) and all values of \(\mathbf {I}\) have been rounded to a precision of \(10\,\mathrm {mA}\) (i.e., \(1\%\) of \(\max \left( \left| \mathbf {I}\right| \right) \)). This yielded a small average variation of \(0.5 \pm 1.0\%\) in \(\kappa \). Since achieving a coil current accuracy better than \(10\,\mathrm {mA}\) in a real experiment is unproblematic, it will be possible to construct the optimized system matrix leading to the desired spectral condition number.
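The scaling-and-rounding procedure of this sensitivity analysis can be sketched as follows, again with a random toy dictionary standing in for the real per-coil system matrices (all sizes and values are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
I = rng.standard_normal(30)                     # toy optimized coil currents
dictionary = rng.standard_normal((30, 40, 25))  # toy per-coil system matrices

I_scaled = I / np.abs(I).max()      # scale so that max|I| equals 1 A
I_rounded = np.round(I_scaled, 2)   # round to a precision of 10 mA (0.01 A)

# Effect of rounding on the spectral condition number of the superposition.
kappa = lambda c: np.linalg.cond(np.tensordot(c, dictionary, axes=1))
rel_change = abs(kappa(I_rounded) - kappa(I_scaled)) / kappa(I_scaled)
print(f"relative change in kappa: {rel_change:.2%}")
```

As in the paper's analysis, a perturbation of at most \(0.5\%\) of the maximum current leaves the condition number nearly unchanged for a well-conditioned toy system.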
The setup of excitation coils employed here consists of two horizontal planes with identical coil orientations. The coils are only placed on the top and on the bottom of the flat phantom body because of its large extent in x- and y-directions. Due to these large distances, coils on the sides of the body would only contribute marginally to an enhancement in reconstruction accuracy. This leads to two sets of coils whose dictionary system matrices are relatively similar within each set. Since the final system matrix is built from superpositions of dictionary system matrices scaled by the respective coil currents [see (7)], the optimization is constrained to these two sets. It is expected that changing coil positions, orientations and/or shapes provides more distinct dictionary elements, which could result in an even more efficient optimization.
One obstacle for the optimization employed in this study is the non-convexity of \(\kappa _\mathrm {F}\). The simple gradient descent approach applied here only finds the nearest local minimum of the Frobenius condition number. Solving this problem would require coupling a global minimization technique with a first-order optimization method using the gradient of \(\kappa _\mathrm {F}\). However, since the minima determined by the present optimization approach are rather stable regarding the magnitude of the objective function (see Fig. 4), such a substantial overhead is not considered expedient.
It is also important to be aware of the computational limits of the optimization. Because of the limited numerical precision and the resulting inaccuracies in the inversion of matrices with large condition numbers, the optimization became increasingly unstable for grid sizes beyond \(20\times 20\times 20\) voxels and \(\kappa _\mathrm {F}> 10^{13}\).
Apart from the optimization of coil currents, there is still room for considerable advancement of MRXI setups. As briefly mentioned in Sect. 1, there have already been significant improvements regarding spatial resolution by employing inhomogeneous magnetic field patterns based on different excitation strategies such as random activation sequences [4], sub-volume sensitization [3] and statistical parameter optimization [8]. These enhancements, as well as the present study, are based solely on the appropriate choice of coil currents. Hence, designing the whole system for optimal condition numbers regarding coil design, positions and orientations as well as sensor positions and orientations still holds tremendous potential for further improvement of the imaging quality and the measurement duration. In particular, the positioning of sensors has become more attractive very recently due to the emerging technology of optical magnetometers. Until now, the relaxation signals in magnetorelaxometry have mainly been picked up with SQUIDs [28], which are spatially constrained by a dewar containing liquid helium to cool the sensor system. The recent advance in developing optically pumped magnetometers, which do not require liquid helium cooling, has enabled an almost unrestrained positioning of sensors for a more flexible MRXI setup [2]. In conclusion, an optimization of geometrical and electrical properties of the system remains mandatory for the advancement and success of the imaging modality and will eventually pave the way for clinical application of MRXI.
References
Alexiou, C., Tietze, R., Schreiber, E., Jurgons, R., Richter, H., Trahms, L., Rahn, H., Odenbach, S., Lyer, S.: Cancer therapy with drug loaded magnetic nanoparticles-magnetic drug targeting. J. Magn. Magn. Mater. 323(10), 1404–1407 (2011)
Baffa, O., Matsuda, R., Arsalani, S., Prospero, A., Miranda, J., Wakai, R.: Development of an optical pumped gradiometric system to detect magnetic relaxation of magnetic nanoparticles. J. Magn. Magn. Mater. 475, 533–538 (2019)
Baumgarten, D., Braune, F., Supriyanto, E., Haueisen, J.: Plane-wise sensitivity based inhomogeneous excitation fields for magnetorelaxometry imaging of magnetic nanoparticles. J. Magn. Magn. Mater. 380, 255–260 (2015)
Baumgarten, D., Eichardt, R., Crevecoeur, G., Supriyanto, E., Haueisen, J.: Magnetic nanoparticle imaging by random and maximum length sequences of inhomogeneous activation fields. In: Conference Proceedings—IEEE Engineering in Medicine and Biology Society, pp. 3258–3260. IEEE (2013)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Bertero, M., Boccacci, P.: Introduction to Inverse Problems in Imaging. CRC Press, Boca Raton (1998)
Candes, E.J., Eldar, Y.C., Needell, D., Randall, P.: Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011)
Coene, A., Leliaert, J., Dupré, L., Crevecoeur, G.: Quantitative model selection for enhanced magnetic nanoparticle imaging in magnetorelaxometry. Med. Phys. 42(12), 6853–6862 (2015)
Crevecoeur, G., Baumgarten, D., Steinhoff, U., Haueisen, J., Trahms, L., Dupré, L.: Advancements in magnetic nanoparticle reconstruction using sequential activation of excitation coil arrays using magnetorelaxometry. IEEE Trans. Magn. 48(4), 1313–1316 (2012)
Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure. Appl. Math. 57(11), 1413–1457 (2004)
Dunn, O.J., Clark, V.A.: Applied Statistics: Analysis of Variance and Regression. Wiley, New York (1987)
Eldar, Y.C., Kutyniok, G.: Compressed Sensing: Theory and Applications. Cambridge University Press, Cambridge (2012)
Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems, vol. 375. Springer, New York (1996)
Haltmeier, M., Zangerl, G., Schier, P., Baumgarten, D.: Douglas-Rachford algorithm for magnetorelaxometry imaging using random and deterministic activations. Int. J. Appl. Electrom. 60(1), 63–78 (2019)
Hanson, J.D., Hirshman, S.P.: Compact expressions for the Biot-Savart fields of a filamentary segment. Phys. Plasmas 9(10), 4410–4412 (2002)
Kabanikhin, S.I.: Definitions and examples of inverse and ill-posed problems. J. Inverse Ill-Pose Probl. 16(4), 317–357 (2008)
Liebl, M., Steinhoff, U., Wiekhorst, F., Coene, A., Haueisen, J., Trahms, L.: Quantitative reconstruction of a magnetic nanoparticle distribution using a non-negativity constraint. Biomed. Eng. (2013). https://doi.org/10.1515/bmt-2013-4261
Liebl, M., Steinhoff, U., Wiekhorst, F., Haueisen, J., Trahms, L.: Quantitative imaging of magnetic nanoparticles by magnetorelaxometry with multiple excitation coils. Phys. Med. Biol. 59(21), 6607 (2014)
Liebl, M., Wiekhorst, F., Eberbeck, D., Radon, P., Gutkelch, D., Baumgarten, D., Steinhoff, U., Trahms, L.: Magnetorelaxometry procedures for quantitative imaging and characterization of magnetic nanoparticles in biomedical applications. Biomed. Eng. 60(5), 427–443 (2015)
Ludwig, F., Heim, E., Mäuselein, S., Eberbeck, D., Schilling, M.: Magnetorelaxometry of magnetic nanoparticles with fluxgate magnetometers for the analysis of biological targets. J. Magn. Magn. Mater. 293(1), 690–695 (2005)
Pankhurst, Q.A., Connolly, J., Jones, S., Dobson, J.: Applications of magnetic nanoparticles in biomedicine. J. Phys. D 36(13), R167 (2003)
Pearson, K.: VII. Mathematical contributions to the theory of evolution—III. Regression, heredity, and panmixia. Philos. Trans. R. Soc. A 187, 253–318 (1896)
Petersen, K.B., Pedersen, M.S., et al.: The matrix cookbook. Tech. Univ. Den. 7(15), 510 (2008)
Pyzara, A., Bylina, B., Bylina, J.: The influence of a matrix condition number on iterative methods’ convergence. In: Proceedings of Conference on FedCSIS , pp. 459–464. IEEE (2011)
Schnabel, A., Burghoff, M., Hartwig, S., Petsche, F., Steinhoff, U., Drung, D., Koch, H.: A sensor configuration for a 304 squid vector magnetometer. Neurol. Clin. Neurophysiol. 2004, 70–70 (2004)
Thiesen, B., Jordan, A.: Clinical applications of magnetic nanoparticles for hyperthermia. Int. J. Hyperth. 24(6), 467–474 (2008)
Van Durme, R., Coene, A., Crevecoeur, G., Dupré, L.: Model-based optimal design of a magnetic nanoparticle tomographic imaging setup. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 369–372. IEEE (2018)
Wiekhorst, F., Steinhoff, U., Eberbeck, D., Trahms, L.: Magnetorelaxometry assisting biomedical applications of magnetic nanoparticles. Pharm. Res. 29(5), 1189–1202 (2012)
Financial support by the German Science Foundation (DFG) within the priority program 1798 (Grant BA 4858/2-1), the priority program 1681 (Grant WI 4230/1-3), and the DFG core facility for the measurement of ultra-low magnetic fields (Grants TR 408/11-1 and KO 5321/3-1) is gratefully acknowledged.
Schier, P., Liebl, M., Steinhoff, U. et al. Optimizing Excitation Coil Currents for Advanced Magnetorelaxometry Imaging. J Math Imaging Vis 62, 238–252 (2020). https://doi.org/10.1007/s10851-019-00934-8