AI-Aristotle: A physics-informed framework for systems biology gray-box identification

Discovering mathematical equations that govern physical and biological systems from observed data is a fundamental challenge in scientific research. We present a new physics-informed framework for parameter estimation and missing physics identification (gray-box) in the field of Systems Biology. The proposed framework—named AI-Aristotle—combines the eXtreme Theory of Functional Connections (X-TFC) domain-decomposition and Physics-Informed Neural Networks (PINNs) with symbolic regression (SR) techniques for parameter discovery and gray-box identification. We test the accuracy, speed, flexibility, and robustness of AI-Aristotle based on two benchmark problems in Systems Biology: a pharmacokinetics drug absorption model and an ultradian endocrine model for glucose-insulin interactions. We compare the two machine learning methods (X-TFC and PINNs), and moreover, we employ two different symbolic regression techniques to cross-verify our results. To test the performance of AI-Aristotle, we use sparse synthetic data perturbed by uniformly distributed noise. More broadly, our work provides insights into the accuracy, cost, scalability, and robustness of integrating neural networks with symbolic regressors, offering a comprehensive guide for researchers tackling gray-box identification challenges in complex dynamical systems in biomedicine and beyond.


Introduction
One of the most coveted tasks in Machine Learning is the discovery of new physical laws from observed and experimental data. One of the first attempts to extract governing equations from observed data is presented in the well-known work by Brunton et al. [1], in which the authors propose a new school of thought for the dynamical system discovery problem from the perspective of sparse regression [2] and compressed sensing [3]. In particular, they take advantage of the fact that most physical systems are described by only a few relevant terms governing the dynamics, making the governing equations sparse in a high-dimensional nonlinear function space. This method, named SINDy (Sparse Identification of Nonlinear Dynamics), depends on the choice of the candidate nonlinear function library and on the availability and quality of the data. Thus, it is not a fully general method and works better when guided by available knowledge of the phenomena under study. For example, given the trend of the observed data, one can approximately determine whether it is trigonometric or polynomial and build the library accordingly. SINDy has shown its capability in identifying nonlinear dynamical systems from data without previous assumptions on the form of the differential equations governing the phenomena.
Another method to retrieve governing equations from data has been proposed by Udrescu et al. [4]. In this paper, the authors make use of symbolic regression (SR), which aims to find a symbolic expression that accurately represents an unknown function based on a given dataset. They developed a novel recursive multidimensional symbolic regression algorithm, named AI-Feynman, that combines neural network techniques with physics-inspired strategies. The efficiency of this method has been proved by discovering 100 equations from the Feynman Lectures on Physics, outperforming the accuracy of the state-of-the-art publicly available software. However, despite the groundbreaking capability of this work, there are some drawbacks and areas for improvement. The method currently focuses on equations involving elementary functions and does not handle equations involving derivatives and integrals, which are common in physics; integrating the capability to discover such equations would be valuable. Also, while AI-Feynman shows promise, it could further benefit from combining the strengths of genetic algorithms with its approach to generate a more robust and versatile equation discovery tool. Overall, the development and refinement of symbolic regression algorithms continue to evolve, offering exciting possibilities for future discoveries in the realm of physics and beyond.
In this research direction, a new framework named AI-Descartes has recently been published [5]. In this paper, the authors address the challenge of deriving meaningful mathematical models from both axiomatic knowledge and experimental data by combining logical reasoning with SR. The novelty of this method lies in the attempt to generate models that are consistent with general logical axioms. The authors showcase the effectiveness of their method by applying it to three classic scientific laws: Kepler's third law of planetary motion, Einstein's relativistic time-dilation law, and Langmuir's theory of adsorption. They demonstrate the capability to discover governing laws even with limited data points, emphasizing the importance of logical reasoning in distinguishing between candidate formulas with similar data-fit accuracy. However, this method relies on the correctness and completeness of background theories, which may not always hold, and further techniques such as abductive reasoning [6] would be needed to partially address incomplete theories. Scaling behavior remains a challenge, especially regarding the undecidability of certain logical types and variations in run-time performance.
Another recently developed SR package, named Feyn [7] and based on the symbolic regressor QLattice, shows strong performance and capabilities, especially for small datasets, where traditional machine learning techniques such as gradient boosting and random forests tend to overfit [8]. Christensen et al. [9] efficiently used Feyn on clinical omics datasets to generate high-performing models to predict disease outcomes and to reveal putative disease mechanisms.
A few attempts have been made to discover "gray-box" terms in Differential Equations (DEs) by leveraging both data and the available known physics. One of these used Physics-Informed Neural Networks (PINNs) [10] with the eXtended-PINN (X-PINN) framework [11] coupled with SR [12]. The authors proposed to augment the original X-PINN framework by enforcing flux continuity across the sub-domain interfaces, and moreover they employed SR to learn the explicit expression of the unknown gray-box function retrieved with the augmented X-PINN. The method proved effective on the Allen-Cahn equation, with results indicating excellent performance even in the presence of noise, as in real-world data scenarios.
Other approaches combine a particular type of neural network, called Random Projection Neural Networks (RPNNs) [13], with SR. In a recent work [14], RPNNs are used to model a representation for SR called Interaction Transformation [15], showing the capability of this framework to drastically reduce the computational effort. In another work [16], a single-layer NN is combined with SR. In this approach, the SR layer, incorporating mathematical operators and basis functions, is constructed randomly instead of using genetic programming, and the output weighting parameters are optimized through least-squares optimization. The use of least-squares optimization significantly reduces computational time, resulting in system models based on simple analytic expressions that accurately represent the input-output relationship of dynamic systems.
One of the earliest works addressing "gray-box" identification for nonlinear dynamical systems is that of Ref. [17]. The gray-box in this paper is composed of a known part, represented by a system of Ordinary Differential Equations (ODEs), and unknown parts, which are approximated using neural networks. The paper illustrates this approach by applying it to model a complex reacting system with nonlinear kinetics for parameter discovery. The authors also highlight the challenges of working with discrete-time models and the advantages of using continuous-time approximations for a more nuanced understanding of system behavior. Other gray-box identification and parameter estimation methodologies have been applied to a wide range of applications, such as phase-field systems, biotechnology, and optogenetics [18,19,20,21,22].
PINN frameworks are advancing the state of the art of methodologies for inverse problems of parameter discovery. Particularly challenging is the scenario in which we have a highly nonlinear dynamical system with many unknown parameters and very few available experimental data to leverage. This challenge has been addressed in a systems-biology-informed deep learning algorithm that incorporates the system of ODEs into the neural networks. In the works [23,24], the authors proved the efficiency of this new algorithm in inferring the dynamics of unobserved species using only a few scattered and noisy measurements, by testing it on benchmark problems in systems biology.
In this work, we propose a new framework named AI-Aristotle to perform parameter discovery and gray-box identification for problems in Systems Biology. We employ two neural-network-based methods for the approximation of the unknown terms, namely PINNs and X-TFC [25] with domain decomposition [26], and two symbolic regression algorithms for the mathematical explicitation of the gray-box model, namely PySR [27,28] and gplearn [29]. Our framework is tested on two problems. The first one is a three-compartment pharmacokinetics model, describing a single-dose drug absorption. The second, more challenging problem is an ultradian endocrine model, describing the glucose-insulin interaction. This paper is organized as follows. In Section 2, we present an introduction to the physics-based models used for our simulations. In Section 3, we report the two neural network methods used for solving the inverse problem with data and physics models, and the two SR algorithms used to explicitly identify the gray-boxes previously retrieved. In Section 4, we report the results obtained by the two NN methods and the two SR algorithms for different test cases, involving both parameter discovery and gray-box identification. Finally, we present a summary and discussion in Section 5.

Models
In this section, the mathematical models describing the phenomena of our simulations are introduced. These models are designed to capture the dynamic interactions within specific biological processes, such as drug absorption and glucose-insulin interaction, offering physics-based knowledge of the behavior and characteristics of the systems under study.

Pharmacokinetics model
The first model we use for our simulations is a single-dose compartmental Pharmacokinetics (PK) model [30], represented by the system of ODEs in Eq. (1). This model evaluates the variation of drug concentration in three compartments over a time range of [0, 10] hours. The drug is initially introduced in the GI-tract (first compartment, G), where it dissolves and diffuses into the bloodstream (second compartment, B). Finally, the drug is eliminated from the bloodstream through the liver, kidneys, and urinary tract (third compartment, U). The parameters $k_g = 0.72\,\mathrm{h}^{-1}$ and $k_b = 0.15\,\mathrm{h}^{-1}$ represent the rates at which the drug diffuses from the GI-tract into the bloodstream and is then eliminated from the bloodstream through the liver, kidneys, and urinary tract, respectively. The drug intake is considered to be 0.1 µg of the antibiotic tetracycline. In Section 4, we will show our simulations using this model for two test cases: 1) parameter discovery, and 2) gray-box identification. With "gray box", we indicate the missing terms of a model. For this PK model, the missing term considered is the right-hand side of the first ODE, which we approximate with an unknown function h(t), as in Eq. (2), to be obtained using the available data for B, G, and U.
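To make the setup above concrete, the following is a minimal sketch of how the three-compartment dynamics described in this section can be simulated to produce sparse, noisy synthetic observations. The right-hand sides are written from the verbal description (first-order transfer G → B → U) rather than copied from Eq. (1), and the noise level is purely illustrative.

```python
# Minimal sketch: simulate the PK compartments described above and sample
# sparse observations with uniform noise. The ODE right-hand sides below are
# inferred from the text (first-order transfer G -> B -> U), not quoted from
# the paper's Eq. (1).
import numpy as np
from scipy.integrate import solve_ivp

k_g, k_b = 0.72, 0.15            # h^-1: GI-tract -> blood, blood -> elimination
G0 = 0.1                         # micrograms of tetracycline initially in the GI tract

def pk_rhs(t, y):
    G, B, U = y
    dG = -k_g * G                # drug leaving the GI tract
    dB = k_g * G - k_b * B       # entering and leaving the bloodstream
    dU = k_b * B                 # accumulating in the elimination compartment
    return [dG, dB, dU]

t_obs = np.linspace(0.0, 10.0, 50)                       # sparse observation times
sol = solve_ivp(pk_rhs, (0.0, 10.0), [G0, 0.0, 0.0], t_eval=t_obs)
rng = np.random.default_rng(0)
noisy_data = sol.y * (1.0 + rng.uniform(-0.01, 0.01, sol.y.shape))   # illustrative 1% noise
```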

Ultradian Endocrine model
The second model used in our simulations is an ultradian model for the glucose-insulin interaction [31], which is described by 6 state variables and 30 parameters [23]. This model captures the existence of rhythmic oscillations in both glucose and insulin levels within the body that occur on a relatively short timescale, typically less than 24 hours. In particular, in our simulations we use a time range of [0, 1800] minutes. The model results in a system of six ODEs. The three main variables of this model are the plasma insulin concentration $I_p$, the interstitial insulin concentration $I_i$, and the glucose concentration G. The last three variables, $h_1$, $h_2$, and $h_3$, form a three-stage linear filter representing the delay process between insulin and glucose production [31]. The functions $f_1$, $f_2$, $f_3$, and $f_4$ represent the insulin secretion, the insulin-independent glucose utilization, the insulin-dependent glucose utilization, and the delayed insulin-dependent glucose utilization, respectively [32]. $I_G(t)$ is the exogenous (externally driven) glucose delivery rate; in our simulations, we define it over N = 3 nutrition events, at times $t_j$ with carbohydrate quantities $m_j$, where $(t_j, m_j) = [(300, 60), (650, 40), (1100)]$ (min, g). The parameters governing this system of ODEs are listed in Table 1. Also for this second model, we aim to pursue parameter discovery and gray-box identification.
For the latter case, the missing terms, which appear in the first two ODEs, are approximated with two unknown functions, f(t) and g(t), as in the system of ODEs (5).
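For illustration, a sketch of the nutrition-event forcing $I_G(t)$ described above is given below. The exponential-decay form and the decay constant are assumptions made only for this sketch; the carbohydrate quantity of the third event is not given in the text and is therefore omitted.

```python
# Sketch of the exogenous glucose delivery rate I_G(t) driven by nutrition
# events. The decay form and `k_decay` are assumptions for illustration only;
# the third event's carbohydrate quantity is not specified above and is omitted.
import numpy as np

meals = [(300.0, 60.0), (650.0, 40.0)]    # (t_j [min], m_j [g]); third event omitted
k_decay = 1.0 / 120.0                      # assumed decay rate [1/min]

def I_G(t):
    """Exogenous glucose delivery rate at time t (minutes)."""
    rate = 0.0
    for t_j, m_j in meals:
        if t >= t_j:
            rate += k_decay * m_j * np.exp(-k_decay * (t - t_j))
    return rate
```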

Methodology
As mentioned in the Introduction, the parameter discovery and the approximation of the unknown terms in the systems of ODEs are performed by two NN-based methods, while the symbolic regression is performed by two different algorithms in order to cross-verify the mathematical expressions obtained. In this section, we present the details of these methods, which are included in the AI-Aristotle framework.

X-TFC
The first NN-based method uses a single-layer random projection neural network. For the sake of simplicity, we show its implementation for the gray-box identification in the pharmacokinetics model only, since the implementation for the ultradian endocrine model is similar.
Different techniques are combined to build this algorithm for solving both forward and inverse problems involving differential equations. The first one is a functional interpolation technique named the Theory of Functional Connections (TFC) [33,34]. According to TFC [35], we can approximate the unknown solutions of our system of ODEs, taking into account the initial conditions, with the so-called constrained expressions (CEs), whose derivatives can be expressed analytically. The parameter c represents a mapping coefficient that maps the time domain t into the activation function domain. To these expressions, we add the NN approximation of the unknown term h(t). Here, σ is the free-chosen function of the CE. No matter which free-chosen function is selected, the CE will always satisfy the initial conditions exactly. According to the X-TFC framework [25], we select a single-layer NN as the free-chosen function, where L is the number of neurons, $w_j \in \mathbb{R}$ is the j-th input weight connecting the input node with the j-th neuron, $\beta_j \in \mathbb{R}$, with $j = 1, \dots, L$, is the j-th output weight connecting the j-th neuron with the output node, $b_j$ is the bias of the j-th neuron, and $\sigma_j(\cdot)$ is the NN's activation function, which is selected by the user (for all the simulations in this work, we select a tanh activation function). In the extreme learning machine algorithm [36], input weights and biases are randomly pre-selected (from a uniform random distribution), thus the only unknown parameters that need to be computed are the output weights $\boldsymbol{\beta} = [\beta_1, \dots, \beta_L]^T$.

Once the CEs are built, they are substituted into the system of ODEs of Eq. (2) to obtain the loss functions, where $\tilde{B}$, $\tilde{G}$, and $\tilde{U}$ are the available observed data of the three variables. As we can see, we have now reduced the problem to a system of linear equations of the type $A\boldsymbol{x} = \boldsymbol{b}$, where the unknown $\boldsymbol{x}$ is the vector of output weights $\boldsymbol{\beta}$. However, here we show the procedure to solve it as a system of non-linear equations (as will be the case for the Ultradian Endocrine model). When dealing with a system of non-linear ODEs, the next step is to build the Jacobian matrix, by differentiating the six previous losses with respect to $\boldsymbol{\beta}_B$, $\boldsymbol{\beta}_G$, $\boldsymbol{\beta}_U$, and $\boldsymbol{\beta}_f$; the Jacobian for the pharmacokinetics model is built accordingly. The unknown vector $\boldsymbol{\beta}$ is computed by iteratively solving the linear system $J\,\Delta\boldsymbol{\beta}_k = \mathcal{L}$, where each k-th iteration corresponds to an update of the output weights. Once all the output weights $\boldsymbol{\beta}$ are computed, they are substituted into the CEs of Eqs. (6a)-(6c) and Eq. (8) to find the sought solutions.

In this work, X-TFC is used in a domain-decomposition fashion [26,37], where the time domain is decomposed into several sub-domains with equispaced time steps, and the algorithm is applied to each sub-domain sequentially, such that the solution found at the interface becomes the new initial condition for the subsequent application of the algorithm in the next sub-domain. A schematic of the X-TFC algorithm to solve the gray-box inverse problem for the pharmacokinetics model is shown in Figure 1; a minimal computational sketch of the core building blocks is given below.
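The following is a minimal sketch of the core X-TFC building blocks on a single sub-domain: a fixed random (ELM) hidden layer, a constrained expression that satisfies the initial condition exactly, and a linear least-squares solve for the output weights. For brevity it only solves dG/dt = −k_g G; the full gray-box inversion stacks data and ODE residuals for all variables and, for nonlinear systems, iterates with the Jacobian as described above.

```python
# Minimal sketch of the X-TFC building blocks on one sub-domain: a fixed random
# (ELM) hidden layer, a constrained expression satisfying the initial condition
# exactly, and a linear least-squares solve for the output weights. For brevity
# only dG/dt = -k_g * G is solved; the full gray-box inversion stacks data and
# ODE residuals for all variables and iterates with the Jacobian when nonlinear.
import numpy as np

rng = np.random.default_rng(0)
L_neurons, N_points = 40, 50
t0, tf, k_g, G0 = 0.0, 10.0, 0.72, 0.1

t = np.linspace(t0, tf, N_points)[:, None]
c = 1.0 / (tf - t0)                               # mapping to the activation domain
w = rng.uniform(-1.0, 1.0, (1, L_neurons))        # fixed random input weights
b = rng.uniform(-1.0, 1.0, (1, L_neurons))        # fixed random biases

th = np.tanh(c * (t - t0) @ w + b)                # hidden-layer activations
dth = c * w * (1.0 - th ** 2)                     # their time derivatives
th0 = np.tanh(c * (t0 - t0) * w + b)              # activations at the constraint point

# Constrained expression G(t) = G0 + (th(t) - th(t0)) beta satisfies G(t0) = G0.
Phi, dPhi = th - th0, dth
# ODE residual dG/dt + k_g*G = 0  ->  (dPhi + k_g*Phi) beta = -k_g*G0, linear in beta.
A = dPhi + k_g * Phi
rhs = -k_g * G0 * np.ones(N_points)
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

G_xtfc = G0 + Phi @ beta
print(np.max(np.abs(G_xtfc - G0 * np.exp(-k_g * t.ravel()))))   # error vs. exact decay
```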

Physics-Informed Neural Networks (PINNs)
The second NN-based approach is known as Physics-Informed Neural Networks (PINNs). This method has the capability to address both forward and inverse problems associated with differential equations by using a deep, fully connected neural network.

PINNs for Pharmacokinetics model
Building upon the concept of PINNs as originally proposed in reference [10], we introduce a deep learning framework that incorporates the differential equations governing the single-dose compartmental Pharmacokinetics model. In this framework, a neural network characterized by parameters $\theta_1$ takes time t as input and generates an output vector representing the state variables, $\hat{u}(t; \theta_1) = (\hat{u}_B(t; \theta_1), \hat{u}_G(t; \theta_1), \hat{u}_U(t; \theta_1))$, which serves as an approximation of the ODE solution u(t). To solve the gray-box inverse problem, in addition to the unknown parameters we have an unknown component of the equation. Thus, we introduce another neural network with a different design to approximate the unknown term h(t): the parameters $\theta_2$ characterize this second neural network, which takes t as input and generates an output $h(t; \theta_2)$. Substituting these approximations yields the system of ODEs (12) for this model.

The next crucial step involves constraining the neural network to satisfy both the scattered observations of u(t) and the system of ODEs (12). This is achieved by constructing a loss function that accounts for terms corresponding to the observations and to the ODE system. To be more specific, let us assume that we have measurements $u_{\mathrm{data}} = \{u_1, u_2, \dots, u_M\}$ at time instances $t_1^{\mathrm{data}}, t_2^{\mathrm{data}}, \dots, t_M^{\mathrm{data}}$, and that we want the neural network to satisfy the ODE system at specific time points $t_1^{\mathrm{ode}}, t_2^{\mathrm{ode}}, \dots, t_N^{\mathrm{ode}}$. It is important to note that these two sets of time instants need not lie on a uniform grid and can be chosen arbitrarily. Here, N is the number of collocation points and M is the number of data points. For computing the total loss, we employ the self-adaptive loss balancing method [38,39]. The total loss is defined as a function of $\theta_1$, $\theta_2$, p, and $\boldsymbol{\lambda}_{\mathrm{ode}}$, where p represents the unknown parameters of the ODEs and $\boldsymbol{\lambda}_{\mathrm{ode}} = (\lambda_1, \lambda_2, \dots, \lambda_S)$ is a vector of individual loss weights for the state variables, with S the number of state variables. Note that $\lambda_{\mathrm{data}}$ and $\lambda_{\mathrm{IC}}$ are constant values, equal to 1 in this study, and are not trainable variables in our neural network. The total loss is computed as

$$\mathcal{L}(\theta_1, \theta_2, p, \boldsymbol{\lambda}_{\mathrm{ode}}) = \lambda_{\mathrm{data}}\,\mathcal{L}_{\mathrm{data}} + \lambda_{\mathrm{IC}}\,\mathcal{L}_{\mathrm{IC}} + \sum_{s=1}^{S} \lambda_s\,\mathcal{L}_{\mathrm{ode}}^{(s)}.$$

We emphasize that $\mathcal{L}_{\mathrm{data}}$ and $\mathcal{L}_{\mathrm{IC}}$ represent the discrepancies between the neural network predictions and the measured data, making them supervised losses. Conversely, $\mathcal{L}_{\mathrm{ode}}$ is derived from the ODE system and, therefore, qualifies as an unsupervised loss. In the final step, we simultaneously determine the parameters $\theta_1^*$, $\theta_2^*$ of both neural networks and the unknown ODE parameters $p^*$ by minimizing the loss function using gradient-based optimization methods, such as the Adam optimizer [40] and the L-BFGS optimizer [41]. Additionally, we determine the $\boldsymbol{\lambda}_{\mathrm{ode}}^*$ vector by updating the adaptive weights at each epoch, solving the min-max problem

$$(\theta_1^*, \theta_2^*, p^*) = \arg\min_{\theta_1, \theta_2, p}\; \max_{\boldsymbol{\lambda}_{\mathrm{ode}}}\; \mathcal{L}(\theta_1, \theta_2, p, \boldsymbol{\lambda}_{\mathrm{ode}}).$$

For the training process, where our goal is to predict the unknown term $h(t; \theta_2)$ and the values of the parameters simultaneously, we employ the Adam optimizer with default hyperparameters and a learning rate of $10^{-4}$.
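A schematic sketch of the composite loss with self-adaptive ODE weights is given below, in PyTorch. The network sizes, the residual helper `ode_residuals`, and the input tensors are placeholders rather than the exact configuration used in this work; the point is the min-max structure, with a descent step on the network/ODE parameters and an ascent step on the adaptive weights.

```python
# Schematic sketch of the composite PINN loss with self-adaptive ODE weights.
# Network sizes, the helper `ode_residuals`, and the tensors t_data, u_data,
# t_ode, u0 are placeholders, not the paper's exact configuration.
import torch
import torch.nn as nn

state_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 3))   # u_hat(t; theta1)
term_net  = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))   # h(t; theta2)
p = torch.nn.Parameter(torch.ones(2))              # unknown ODE parameters (k_g, k_b)
lam_ode = torch.nn.Parameter(torch.ones(3))        # one self-adaptive weight per state

opt_min = torch.optim.Adam(list(state_net.parameters()) +
                           list(term_net.parameters()) + [p], lr=1e-4)
opt_max = torch.optim.Adam([lam_ode], lr=1e-3, maximize=True)

def total_loss(t_data, u_data, t_ode, u0):
    loss_data = ((state_net(t_data) - u_data) ** 2).mean()                 # supervised
    loss_ic = ((state_net(torch.zeros(1, 1)) - u0) ** 2).mean()            # supervised
    # ode_residuals: placeholder user-defined function returning (N, 3) residuals
    res = ode_residuals(state_net, term_net, p, t_ode)
    loss_ode = (lam_ode * (res ** 2).mean(dim=0)).sum()                    # unsupervised, weighted
    return loss_data + loss_ic + loss_ode

def train_step(batch):
    opt_min.zero_grad(); opt_max.zero_grad()
    loss = total_loss(*batch)
    loss.backward()
    opt_min.step()      # descend on theta1, theta2, p
    opt_max.step()      # ascend on the self-adaptive weights
    return loss.item()
```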
Training is performed on the entire dataset. Since our total loss comprises two supervised losses and one unsupervised loss, we adopt a two-stage training strategy as follows:
1. Recognizing that supervised training typically yields faster convergence than unsupervised training, we initially train the network using only the two supervised losses, $\mathcal{L}_{\mathrm{data}}$ and $\mathcal{L}_{\mathrm{IC}}$, for a set number of iterations. This initial training phase enables the network to quickly align with the observed data points.
2. Subsequently, we continue the training process, incorporating all three losses.
Empirical observations demonstrate that this two-stage training approach expedites network convergence; a minimal sketch of the schedule is given below. The specific number of iterations for each stage and the parameters used in the implementation are detailed in Table 5. A schematic of the PINNs algorithm for solving the gray-box inverse problem in the pharmacokinetics model is shown in Figure 2.
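A minimal sketch of this two-stage schedule, reusing the networks, optimizers, and loss terms from the previous sketch, could look as follows; the iteration counts are placeholders (the values actually used are reported in Table 5).

```python
# Sketch of the two-stage schedule described above, reusing state_net, opt_min,
# and train_step from the previous sketch. Iteration counts are placeholders.
def two_stage_training(t_data, u_data, t_ode, u0, n_warmup=5000, n_full=25000):
    for _ in range(n_warmup):                 # stage 1: supervised losses only
        opt_min.zero_grad()
        loss = ((state_net(t_data) - u_data) ** 2).mean() + \
               ((state_net(torch.zeros(1, 1)) - u0) ** 2).mean()
        loss.backward()
        opt_min.step()
    for _ in range(n_full):                   # stage 2: data + IC + ODE losses
        train_step((t_data, u_data, t_ode, u0))
```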

PINNs for Ultradian Endocrine model
The system of ODEs for this model is treated analogously, with the parameters $\theta_2$ now characterizing a second neural network that takes t as input and generates two outputs, $f(t; \theta_2)$ and $g(t; \theta_2)$.
As for the pharmacokinetics model, this study adopts the self-adaptive loss balancing method and the two-stage training strategy. To expedite the training process of the neural network, extending the discussion from the previous section on fully connected neural networks, we introduce supplementary layers following the workflow presented in [23].
• Input Scaling Layer: In cases where the time domain exhibits significant variation, spanning multiple orders of magnitude, which can detrimentally affect neural network training, we employ a linear scaling of the time variable t by a value T in the time domain, $\tilde{t} = t/T$, so that the scaled values are of order $\mathcal{O}(1)$. In this study, for the time interval ranging from 0 to 1800, we have adopted a value of T = 100.
• Feature Layer: Frequently, solutions to ODEs exhibit specific patterns, such as periodicity or exponential decay. Instead of relying on the neural network to autonomously identify these features, we incorporate them within a dedicated feature layer. While the choice of features is problem-specific, the general framework remains consistent across different problems. We utilize a set of functions $e_1, e_2, \dots, e_L$ to construct L features $e_1(\tilde{t}), e_2(\tilde{t}), \dots, e_L(\tilde{t})$, as illustrated in Figure 3. If discerning a clear pattern proves challenging, it is advisable to omit the feature layer rather than introducing inaccurate information. This feature layer is a training aid and not a mandatory component for the success of PINNs on systems biology identification problems.
• Output Scaling Layer: The predicted outputs, denoted as $\tilde{u}_{I_p}, \tilde{u}_{I_i}, \dots, \tilde{u}_{h_3}$, may exhibit variations in magnitude. To address this, we normalize the network outputs by scaling each of them with a constant, where $k_{I_p}, k_{I_i}, \dots, k_{h_3}$ represent the magnitudes of the corresponding ODE solutions $u_{I_p}, u_{I_i}, \dots, u_{h_3}$. This normalization ensures that the predicted outputs are scaled consistently with the characteristics of the underlying ODE solutions. Furthermore, we introduce an additional component in this layer to facilitate alignment of the state variables with a linear trajectory connecting the initial and final data points. This linear transformation facilitates the interpretation and visualization of the model's outputs, ensuring their alignment with meaningful data trends. In summary, the Output Scaling Layer standardizes the predicted outputs while integrating a linear transformation component, enhancing the interpretability and relevance of the model's results and expediting the neural network's convergence towards an accurate solution.
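A compact sketch of how these three auxiliary layers can wrap a fully connected core is shown below. The specific feature functions, output magnitudes, and layer sizes are illustrative placeholders, not the configuration used in this work, and the linear-trajectory component of the output scaling is omitted for brevity.

```python
# Sketch of the input scaling, feature, and output scaling layers described
# above, wrapped around a plain fully connected core. Feature functions,
# output magnitudes k_out, and layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class UltradianPINN(nn.Module):
    def __init__(self, T=100.0, k_out=(100.0, 100.0, 10000.0, 100.0, 100.0, 100.0)):
        super().__init__()
        self.T = T                                        # input scaling constant
        self.core = nn.Sequential(nn.Linear(3, 128), nn.Tanh(),
                                  nn.Linear(128, 128), nn.Tanh(),
                                  nn.Linear(128, 6))
        self.register_buffer("k_out", torch.tensor(k_out))   # output magnitudes

    def features(self, t_scaled):
        # Example feature layer: identity plus slow sinusoids (problem-specific).
        return torch.cat([t_scaled,
                          torch.sin(t_scaled),
                          torch.cos(t_scaled)], dim=-1)

    def forward(self, t):
        t_scaled = t / self.T                              # input scaling layer
        u_tilde = self.core(self.features(t_scaled))       # feature layer + core net
        return self.k_out * u_tilde                        # output scaling layer
```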
The list of parameters of this model can be found in Table 9. A schematic of the PINNs algorithm for solving the gray-box identification problem in the Ultradian Endocrine model is shown in Figure 3.

Symbolic Regression
Symbolic regression is a powerful method in machine learning, designed to discover a mathematical expression or equation that provides the optimal fit for a given dataset. Unlike traditional regression methods (e.g., linear regression, polynomial regression), symbolic regression seeks to discover the underlying mathematical relationship between the input variables and the target variable without making assumptions about the form of the equation. Two popular symbolic regression algorithms commonly used in this context are PySR (Python Symbolic Regression) [27] and gplearn (Genetic Programming for Symbolic Regression) [29]. These algorithms differ in implementation details, but the overall processes they follow to discover symbolic expressions from data are very similar.
Both are SR libraries that combine genetic programming with machine learning techniques to discover mathematical expressions. The first step of their process is creating an initial population of candidate equations, represented by mathematical expressions composed of simple mathematical operations (+, −, ×, ÷), functions (e.g., sine, cosine, exponential), and variables. Subsequently, each candidate equation is evaluated against the given dataset, and its performance is assessed using a fitness function that measures how well the equation fits the data, typically by calculating the mean squared error (MSE) or a similar metric. A genetic algorithm is used to select the best-performing candidate equations for the next generation: equations that fit the data well are more likely to be selected, while less fit equations may be removed. Genetic operations like crossover (combining parts of two equations) and mutation (making small changes to an equation) are applied to the selected equations to create a new generation of candidate equations. This process iterates through multiple generations, continually improving the equations' fitness until a termination condition, such as a maximum number of generations or a threshold fitness level, is met.
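As an illustration of this workflow, the snippet below configures gplearn's SymbolicRegressor with the genetic operations just described; the input matrix `X` and target `y` are toy placeholders standing in for the sampled inputs and the NN-learned gray-box term, and all hyperparameter values are illustrative. PySR's `PySRRegressor` is configured analogously.

```python
# Sketch of the genetic-programming loop described above using gplearn.
# X and y are toy placeholder data; hyperparameter values are illustrative.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 2))                  # placeholder inputs
y = X[:, 0] - 0.5 * X[:, 1]               # placeholder target expression

sr = SymbolicRegressor(
    population_size=2000,                 # initial population of candidate equations
    generations=30,                       # number of evolution cycles
    function_set=("add", "sub", "mul"),   # the +, -, x operations used in this work
    metric="mse",                         # fitness: how well a candidate fits the data
    p_crossover=0.7,                      # recombine parts of two parent expressions
    p_subtree_mutation=0.1,               # small random changes to an expression
    parsimony_coefficient=0.001,          # penalize overly complex expressions
    random_state=0,
)
sr.fit(X, y)
print(sr._program)                        # best evolved closed-form expression
```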

Results
In this section, the results of our simulations are reported and discussed. The first two subsections, 4.1 and 4.2, show the performance of X-TFC and PINNs in parameter discovery and gray-box identification for both the Pharmacokinetics and the Ultradian Endocrine models. The outputs of the gray-box identification are used as input to the symbolic regression algorithms for the symbolic distillation of both NN-based methods, whose results and performance are shown in subsections 4.3.1 and 4.3.2.

Pharmacokinetics
In the parameter discovery test case, we aim to infer the values of the parameters $k_g = 0.72\,\mathrm{h}^{-1}$ and $k_b = 0.15\,\mathrm{h}^{-1}$ of the system of ODEs in Eqs. (1), given a certain number of available data points of B, G, and U. The results and performance for both X-TFC and PINNs are reported in Table 2, simulating the variation of drug concentration in the three compartments over a time domain of 50 hours. The number of data points used varies from 10 to 100, and both methods show great accuracy in retrieving the parameters governing the ODEs. The accuracy of the methods is evaluated as the absolute difference between the nominal value of the parameters and the inferred value. As expected, we see an increase in accuracy as the number of data points increases, but both methods give great precision even for a meager dataset (10 data points, i.e., one every five hours).
For the pharmacokinetics inverse problem, in PINNs we utilized Adam optimization with $N_c = 100$ and a learning rate of $1 \times 10^{-4}$, and we conducted training for 30,000 iterations. Notably, in this context, the application of self-adaptive loss balancing weights was deemed unnecessary, and the two-phase training method was not employed. We performed the computational experiments for PINNs on an NVIDIA GeForce RTX 3090 GPU, powered by NVIDIA's 2nd-generation RTX Ampere architecture; the GPU has 10,496 cores and 24 GB of GDDR6X memory.
Since X-TFC uses a domain decomposition technique, we report the number of iterations needed by the iterative least squares for each sub-domain, with an iteration tolerance set equal to $10^{-9}$. When a decomposition of the domain is required to increase the accuracy of the results (cases with 20, 50, and 100 data points), the inferred parameter is given by the mean of the parameters inferred in each sub-domain. The X-TFC results reported in Table 3 are obtained with a specific neural network hyperparameter setup. The tuned hyperparameters are N, the number of points per sub-domain; L, the number of neurons; and $t_{\mathrm{step}}$, the length of each sub-domain. These setups for each simulation are reported in Table 4 and were obtained on an Intel(R) Xeon(R) W-2255 CPU @ 3.70 GHz machine. GPUs, renowned for their inherently parallel architecture, excel in efficiently distributing specific computations across a multitude of cores. As the volume of data points grows, the potential for enhanced parallelization efficiency becomes evident, potentially resulting in reduced computation times. It is worth highlighting that, when employing GPUs, computational times may decrease as the number of data points increases, as illustrated in Table 2 for the results of the PINNs method. This phenomenon is particularly noticeable because we utilize GPUs for this method.

X-TFC
In the gray-box identification test case for the Pharmacokinetics model, we aim to obtain the right-hand-side unknown term h(t) of the first ODE of the system (2). X-TFC and PINNs results and performance for a simulation of 50 hours are shown in Table 3. Performance is evaluated via the Mean Absolute Error (MAE), the Root Mean Squared Error (RMSE), and the Relative Error (RE), computed between the exact solution $\hat{h}(t)$ and the learned solution h(t) (a sketch of the assumed definitions is given at the end of this subsection). Also for these test cases, both methods perform a good inversion of the unknown term h(t) given few data samples. Figure 4a shows the learned concentrations in time of the three state variables B, G, and U for the X-TFC and PINNs solutions vs. the exact solution (given by 50 data points), while the learned function h(t) is plotted in Figure 4b.

As presented in Tables 2 and 3, our comparative analysis reveals valuable insights into the performance of the X-TFC and PINNs methods when applied to the same problem with varying data sizes within the same time range. For smaller dataset sizes (e.g., 10 data points), the PINNs method can achieve better accuracy, especially for the gray-box test case, showing its inherent strength in handling sparse datasets for approximating complex functions, due to the high expressivity of the deep neural network. Conversely, as the dataset size increases, the accuracy of the X-TFC method improves substantially. Its computational speed, a distinct advantage, allows it to effectively capitalize on larger datasets. With more data points, the X-TFC method can produce increasingly accurate results, eventually surpassing the accuracy achieved by the PINNs method. Despite the initial accuracy advantage of PINNs, it reaches a point where further increasing the dataset size does not significantly improve accuracy with the same setup, while still maintaining good performance. This is probably due to the optimization error, and overcoming this limitation may involve architectural enhancements, such as increasing the neural network's depth, employing different optimization algorithms, or implementing alternative techniques. In contrast, the X-TFC method continues to benefit from additional data, showcasing its scalability and adaptability. In summary, for problems with small datasets the PINNs method excels in providing accurate solutions, while for larger datasets the X-TFC method becomes increasingly competitive, offering the potential for superior accuracy with adequate computational resources.
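For reference, the sketch below gives the definitions of the error metrics assumed in this work; MAE and RMSE are standard, while RE is taken here to be the relative L2 error, which is an assumption since the original display formulas are not reproduced above.

```python
# Assumed definitions of the error metrics reported in Tables 3 and 7, computed
# between the exact term h_exact and the learned term h_learned on the same
# grid. MAE and RMSE are standard; RE is taken as the relative L2 error, which
# is an assumption.
import numpy as np

def mae(h_exact, h_learned):
    return np.mean(np.abs(h_exact - h_learned))

def rmse(h_exact, h_learned):
    return np.sqrt(np.mean((h_exact - h_learned) ** 2))

def relative_error(h_exact, h_learned):
    return np.linalg.norm(h_exact - h_learned) / np.linalg.norm(h_exact)
```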

Ultradian Endocrine model
The results of the parameter discovery test case for the Ultradian Endocrine model are reported in Table 6 as the absolute difference between the nominal and inferred values of the parameters. Our simulations were conducted for the discovery of five parameters. However, the PINNs algorithm proved to be very effective in system identification, discovering up to 21 parameters of the ultradian endocrine model using only data for G and $I_p$. As presented in [24], using only 360 data points for G, the PINNs algorithm was able to discover 17 parameters accurately, which is challenging and not possible for the X-TFC algorithm with a small amount of data on only one state variable. With X-TFC, we are able to retrieve the parameters already in the first sub-domain; thus, further iterations of the algorithm would be redundant, allowing us to further speed up the computation. In the context of PINNs, the obtained results are contingent on the learning process. Notably, the neural network's capacity to learn effectively is closely tied to the temporal scope of the problem: the neural network may not yield accurate approximations within a smaller time range, which corresponds to a reduced dataset size.
Table 6: Ultradian Endocrine model: parameter discovery via the X-TFC and PINNs algorithms. The performance of the two methods is given by the absolute difference between nominal and inferred values. On the right, we also report computational times in seconds.

In the gray-box identification case, we aim to infer the two unknown terms f(t) and g(t) in the system of ODEs (5) from available data of the variables $I_p$ and G. In Table 7, the MAE, RMSE, RE, and computational times are reported for both the X-TFC and PINNs frameworks, for different amounts of data points, from 360 to 1800 (i.e., data available every 5, 4, 3, 2, and 1 minutes), in a simulation of 1800 minutes. For X-TFC, a domain decomposition into several sub-domains is needed; thus, the number of iterations reported in the table refers to the average number of iterations in one sub-domain. The hyperparameters for the X-TFC neural networks, as well as the configuration of parameters for the PINNs, employed to generate the results presented in Table 7, are documented in Tables 8 and 9, respectively. The first three state variables of the model learned by X-TFC and PINNs are plotted vs. the exact solution in Figure 5a, while the two learned functions f(t) and g(t) are plotted in Figure 5b. In both figures, the overlap of the solutions of the two frameworks is clear. As evidenced by the data presented in Tables 6 and 7, encompassing both gray-box and inverse problem scenarios, and spanning both this model and the pharmacokinetics model, a discernible pattern emerges concerning the impact of dataset size on method performance.
In the case of the X-TFC method, an increase in the number of data points leads to progressively more accurate results. However, it is noteworthy that when confronted with a relatively small dataset, the PINNs method exhibits superior performance, characterized by heightened accuracy and reduced absolute error. For instance, in Table 7 the PINNs method demonstrates better efficacy with merely 360 and 450 data points. Nevertheless, as the dataset size grows, the X-TFC method surpasses PINNs in both accuracy and computational efficiency.
In summary, the choice between the X-TFC and PINNs methods should be made judiciously, with careful consideration of dataset size and noise levels. While the X-TFC method excels with larger datasets, the PINNs method exhibits a unique strength in scenarios involving smaller datasets or noisy data, where it achieves greater accuracy.

Symbolic Distillation of gray-box models recovered from X-TFC and PINNs methods
After training the X-TFC and PINNs models, we obtain gray-box models for f(t), g(t), and h(t), parameterized by high-dimensional parameter vectors. Therefore, we perform symbolic regression and fit compact, closed-form analytical expressions to f(t), g(t), and h(t) independently, using PySR [28] and gplearn [29]. Both packages use a genetic algorithm to combine algebraic expressions stochastically. The employed method shares similarities with natural selection, as it assesses the "fitness" of each expression based on its simplicity and accuracy. In this study, the binary operations considered in the fitting process are +, −, and ×. In symbolic regression, the quality of the recovered expressions is assessed through complexity, score, and loss. Complexity measures the intricacy of the discovered equations in terms of the number of terms, the mathematical operations, and the overall structure of the equations. Managing complexity is an important aspect of symbolic regression because overly complex equations can be difficult to interpret and may not generalize well to new data, leading to overfitting. The score is typically used to discover the mathematical expressions that maximize or minimize the chosen scoring metric while considering different combinations of mathematical operations and constants. The loss typically refers to a mathematical function that quantifies the discrepancy between the values predicted by a symbolic expression and the actual observed values in the dataset.

X-TFC
We present the validation metrics for the models obtained from PySR as the variation of loss and score against the complexity of the symbolic expression. The loss function can be taken as the mean squared error (MSE) or the root mean squared error (RMSE) between the actual and predicted outputs, while the score is defined as the negative of the derivative of the log-loss with respect to the complexity. The complexity in PySR is defined as the number of nodes in an expression tree, irrespective of each node's content. In the PySR implementation, we choose the candidate model with the highest score among the expressions whose loss is no worse than 1.5 times that of the most accurate model, i.e., the one with the lowest loss. In gplearn, we observe the variation of the loss function against the length of the symbolic expression, and we choose the candidate model at the point where the length increases but the loss remains stagnant.
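A sketch of the corresponding PySR configuration and selection rule is given below; PySR's `model_selection="best"` option implements the criterion described above, and `X` and `y` are toy placeholders for the sampled inputs and the NN-learned term.

```python
# Sketch of the PySR setup with the candidate-selection rule described above:
# model_selection="best" returns the expression with the highest score among
# those whose loss is within a factor of 1.5 of the lowest loss. X and y are
# toy placeholders for the sampled inputs and the NN-learned gray-box term.
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 2))                  # placeholder input samples
y = X[:, 0] - 0.3 * X[:, 1]               # placeholder target

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],     # operators used in this work
    model_selection="best",               # highest score within 1.5x of the best loss
)
model.fit(X, y)
print(model.equations_[["complexity", "loss", "score", "equation"]])
```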

Symbolic distillation of Pharmacokinetics model
We perform symbolic regression for Eq. (12); in particular, we recover the expression $h_{\mathrm{sym}}$ in terms of G and B. Table 10 reports the closed-form symbolic models obtained from the packages PySR and gplearn for the black-box models recovered with the X-TFC and PINNs approaches. From Table 10, it is evident that the symbolic models are in very good agreement with the true models. Validation metrics for the models obtained from PySR and gplearn are shown in Figure 6. In Figure 6a we show the plots of loss and score against the complexity of the expressions for the symbolic models obtained from PySR. It is evident that, as the complexity increases, the score remains constant for both PINNs and X-TFC, which indicates convergence of the candidate model. Similarly, the loss for the PINNs approach converges very early, while the loss for the X-TFC method keeps decreasing while the complexity remains constant. Therefore, a candidate model with a complexity of 5 is appropriate and does not overfit. Figure 6b shows the validation metric of the symbolic models obtained from gplearn. Unlike PySR, gplearn provides the metric in terms of loss and length of expressions as the population evolves. In Figure 6b, we plot the loss against the length of the expression in the symbolic models. The candidate models for the PINNs and X-TFC methods, shown in Table 10, correspond to lengths of 7 and 19, respectively. In Figure 7, we show the evolved tree of binary operations, obtained from gplearn, in the symbolic model recovered for $h_{\mathrm{sym}}$ from PINNs; the number of nodes in the tree (9) represents the length of the expression in the symbolic model.

Figure 7: Pharmacokinetics model: gplearn-based evolved tree of binary operations in the symbolic model recovered for the black-box model hsym obtained from PINNs. Note that the number of nodes in the tree corresponds to the length of the expression, which is 9 for the PINNs method.

Symbolic Distillation of X-TFC and PINNs for Ultradian Endocrine model
The gray-box models for $I_p$ and $I_i$ are expressed in terms of two unknown functions, whose closed and compact forms, $f_{\mathrm{sym}}(I_p, I_i)$ and $g_{\mathrm{sym}}(I_p, I_i)$, we discover using symbolic regression. In Table 11, we present the closed and compact-form symbolic models for $f_{\mathrm{sym}}$ (PINNs and X-TFC) and $g_{\mathrm{sym}}$ (PINNs and X-TFC) recovered using PySR and gplearn. Table 11 shows a very good agreement between the symbolic models and the actual expressions in the system of ODEs. In Figure 8 and Figure 9, we present the plots showing the variation in score and loss against the complexity of the recovered expressions for the models learned from X-TFC and PINNs, for the PySR and gplearn packages, respectively. The interpretation of Figure 8 and Figure 9 is the same as that explained in Section 4.3.1. For example, in Figure 8 the convergence with PySR is achieved when the score remains constant while the complexity increases. In Figure 9a, convergence with the gplearn framework for $f_{\mathrm{sym}}$ is achieved at expression lengths of 18 and 25 for PINNs and X-TFC, respectively, while for $g_{\mathrm{sym}}$ convergence is achieved at expression lengths of 13 and 18 for PINNs and X-TFC, respectively. In Figure 10, we show the evolved tree of binary operations, obtained from gplearn, in the symbolic model recovered for $g_{\mathrm{sym}}$ from PINNs; the number of nodes in the tree (13) represents the length of the expression in the symbolic model.

Summary and Discussion
In this paper, we have presented a comprehensive framework named AI-Aristotle, which combines two neural network-based methods (X-TFC and PINNs) with two symbolic regression techniques to address the challenging tasks of parameter discovery and gray-box identification in Systems Biology problems.
Our framework was evaluated on two benchmark problems: the pharmacokinetics drug absorption model and the ultradian endocrine model describing glucose-insulin interactions. The results demonstrated the capability of both X-TFC and PINNs to accurately estimate parameters even with limited data, showcasing their potential for model calibration in real-world scenarios. In the gray-box identification simulations, our framework successfully discovered the missing terms in the differential equations governing the systems. The learned functions exhibited high accuracy even with a small number of data points. This ability to identify gray-box terms is essential for improving model fidelity and understanding complex systems where some underlying mechanisms are not fully known. We further distilled the learned neural network models using two symbolic regression algorithms, providing interpretable mathematical expressions. This process enhances the transparency and usability of the models, facilitating their integration into scientific research and decision-making processes.
Our study has unveiled a noticeable trend in how dataset size affects the performance of the different methods. For the X-TFC method, increasing the number of data points leads to progressively improved results. However, when dealing with relatively small datasets, the PINNs method outperforms it in accuracy. This superiority can be attributed to PINNs' efficiency in handling sparse datasets and approximating complex functions with fewer data points. As the dataset size expands, the X-TFC method overtakes PINNs in both accuracy and computational efficiency. In particular, the latter occurs because of the use of least-squares optimization as the solver instead of back-propagation. It appears that the optimization error dominates in PINNs, hence no further improvement can be achieved even with more data points. Thus, when choosing between the X-TFC and PINNs methods, careful consideration of dataset size and required computational time is paramount. We performed the distillation of the gray-box models obtained using the PINNs and X-TFC methods. Symbolic regression provided compact, closed-form expressions for the PINNs- and X-TFC-based surrogates. To show the robustness of the recovered symbolic expressions, we used the PySR and gplearn packages and recovered almost identical expressions for the Pharmacokinetics and Ultradian Endocrine models. At the implementation level, we find that PySR is a more robust and efficient framework than gplearn; for example, for the problems considered here, PySR takes 10 minutes on a CPU, while gplearn takes up to one hour. Also, PySR requires less effort in tuning the hyperparameters of the model to perform the symbolic regressions. The robustness of PySR is due to the implementation of simulated-annealing-based mutation of the tree of binary expressions [28], which is not present in the gplearn framework.

Figure 1 :
Figure 1: Pharmacokinetics model: schematic of the X-TFC algorithm. Input weights and biases are randomly selected. The last step iteratively solves a least-squares system, thus no back-propagation is involved in the training, allowing fast computational times.

Figure 2 :
Figure 2: Pharmacokinetics model: schematic of the PINNs algorithm for predicting the unknown term h(t; θ2) and the values of the parameters simultaneously. Here, û(t; θ1) is a vector that contains all three output states. Unlike the X-TFC network, PINNs requires back-propagation, which is the expensive computational component.

Figure 3 :
Figure 3: Ultradian Endocrine model: Schematic of the PINNs algorithm for solving a gray-box identification problem.

Figure 4 :
Figure 4: Pharmacokinetics model: comparison between the exact solution and the X-TFC and PINNs solutions, for (a) the variables B, G, and U, with 20 data points per variable, and (b) the unknown term h(t).
(a) Ultradian Endocrine model, state variables Ip, Ii, and G: 360 observed data points of Ip and G, exact solution, and learned solutions via the X-TFC and PINNs methods. (b) Ultradian Endocrine model, unknown terms f(t) and g(t): exact solution and learned solutions via the X-TFC and PINNs methods.

Figure 5 :
Figure 5: Glucose-insulin interaction model: comparison between the exact solution and the X-TFC and PINNs solutions for (a) the variables Ip, Ii, and G (top to bottom), and (b) the unknown terms f(t) and g(t) (top to bottom).
(a) Validation metrics for hsym using PySR (b) Validation metrics for hsym using gplearn

Figure 6 :
Figure 6: Pharmacokinetics model: validation metrics for the X-TFC- and PINNs-based gray-box models. (a) Variation in loss and score of the symbolic models obtained from PySR with respect to the complexity of the expressions. Once convergence is achieved, the score remains constant as the complexity of the recovered expression increases; this is the criterion for selecting the candidate symbolic expressions shown in Table 10. (b) Variation in loss of the symbolic models obtained from gplearn with respect to the length of the expression. We choose expression lengths of 9 and 19 for PINNs and X-TFC, respectively; these lengths correspond to the minimum loss for the regressed symbolic models, whose closed-form expressions are shown in Table 10.

(a) Validation metrics for fsym. (b) Validation metrics for gsym.

Figure 8 :
Figure 8: Ultradian Endocrine model: validation metrics for the PySR method. (a) fsym and (b) gsym are characterized by the score and loss metrics against the complexity of the expressions recovered using PySR. Note that in both plots, once convergence is achieved, the score remains unchanged as the complexity increases.

(a) Validation metrics for fsym. (b) Validation metrics for gsym.

Figure 9 :
Figure 9: Ultradian Endocrine model: validation metrics for the gplearn method. (a) fsym and (b) gsym are characterized by the MSE loss against the length of the expressions recovered using gplearn and presented in Table 11. For fsym, we choose expression lengths of 18 and 25 for PINNs and X-TFC, respectively; for gsym, we choose expression lengths of 13 and 25 for PINNs and X-TFC, respectively.

Table 1 :
Ultradian Endocrine model: list of parameters for the model. The search ranges are listed only for the five parameters used for the parameter discovery in our simulations.

Table 2 :
Pharmacokinetics model: performance of X-TFC and PINNs for parameter discovery over the time range [0, 50] hours. Refer to Table 1 for X-TFC hyperparameters.

Table 3 :
Pharmacokinetics model: unknown term discovery for the time range [0, 50] hours. Comparison between X-TFC and PINNs performance via MAE, RMSE, RE, and computational time for different numbers of data points. The initial number in the '# of Iter.' column for PINNs represents the iterations during the primary training stages using Adam optimization, while the second number corresponds to the training stage utilizing L-BFGS.

Table 4 :
Pharmacokinetics model: X-TFC hyperparameters setup for parameter discovery and unknown term discovery, for time range [0,50] hours.

Table 5 :
Pharmacokinetics model: PINNs parameters setup for the discovery of unknown terms over a time range of [0, 50] hours. The initial and second numbers in the 'Number of Iterations' row represent the iterations during the primary and secondary training stages using Adam optimization; the third number corresponds to the training stage utilizing L-BFGS. The first and second numbers in the 'Architecture of Neural Networks' row indicate the width and depth, respectively.

Table 7 :
Ultradian Endocrine model: unknown terms discovery for the time range [0, 1800] minutes. X-TFC and PINNs performance in terms of MAE, RMSE, RE, number of iterations, and computational time for different numbers of data points.

Table 8 :
Ultradian Endocrine model: X-TFC hyperparameters setup for parameter discovery and unknown terms discovery, for time range [0,1800] minutes.

Table 9 :
Ultradian Endocrine model: PINNs parameters setup for unknown terms discovery, for the time range [0, 1800] minutes. The first and second numbers in the 'Architecture of Neural Networks' row indicate the width and depth, respectively. The initial and second numbers in the 'Number of Iterations' row represent the iterations during the primary and secondary training stages.