Mathematical Modeling of Root Growth Trajectory
Introduction:
In the realm of plant biology, root growth trajectories are adaptive mechanisms that optimize resource utilization. In this exploration, we focus on the mathematical aspects of root growth, specifically addressing the minimization of soil resistance as a key factor in efficient penetration. Our subject is a corn root, initiating its growth in loamy soil at point P, and the goal is to delineate the trajectory that minimizes energy expenditure.
This investigation introduces a mathematical model that accounts for the resistance factors of loamy, clayey, sandy, and muddy soils. By defining variables and equations tied to the coordinates of the root’s trajectory, we seek the optimal path: one that lets the root traverse the various soil types with minimal energy expenditure.
Specific Problem
In the soil matrix, a maize root encounters a spatial challenge. Originating at a point P within loamy soil, the root must traverse a span of 8 meters in the x-direction. This path is complicated by a succession of four distinct soil types, each characterized by a unique resistance factor: loamy (0.0031), clayey (0.04), sandy (0.0023), and muddy (0.009).
The core objective is to optimize the root’s trajectory in the x-direction, dictated by the x-coordinates of three pivotal points (Q1, Q2, Q3). This optimization involves the minimization of the cumulative resistance experienced across the diverse soil types. The mathematical formulation entails the construction of a resistance function, R(x), embodying the Euclidean distances between successive points and the corresponding resistance factors.
Model
To formulate the minimization problem, we define the resistance function \( R(x) \), where \( x = [x_1, x_2, x_3]^T \) collects the x-coordinates of the endpoints of the first three line segments.
Let’s denote the points in the x-y plane as follows:
- \( P = (0, 0) \) is the starting point.
- \( Q_1 = (x_1, 1.6) \) is the point where the root transitions from loamy soil to clayey soil.
- \( Q_2 = (x_2, 2.7) \) is the point where the root transitions from clayey soil to sandy soil.
- \( Q_3 = (x_3, 4.0) \) is the point where the root transitions from sandy soil to muddy soil.
- \( Q_4 = (8.0, 5.0) \) is the point where the root reaches the most mechanically stable spot.
The resistance \( R(x) \) can be calculated as the sum of the resistances in each soil type:
\[ R(x) = R_{\text{loamy}}(P, Q_1) + R_{\text{clayey}}(Q_1, Q_2) + R_{\text{sandy}}(Q_2, Q_3) + R_{\text{muddy}}(Q_3, Q_4) \]
Now, let’s express each resistance term:
- Resistance in loamy soil, \(R_{\text{loamy}}\): \[ R_{\text{loamy}}(P, Q_1) = 0.0031 \cdot \sqrt{(x_1 - 0)^2 + (1.6 - 0)^2} \]
- Resistance in clayey soil, \(R_{\text{clayey}}\): \[ R_{\text{clayey}}(Q_1, Q_2) = 0.04 \cdot \sqrt{(x_2 - x_1)^2 + (2.7 - 1.6)^2} \]
- Resistance in sandy soil, \(R_{\text{sandy}}\): \[ R_{\text{sandy}}(Q_2, Q_3) = 0.0023 \cdot \sqrt{(x_3 - x_2)^2 + (4.0 - 2.7)^2} \]
- Resistance in muddy soil, \(R_{\text{muddy}}\): \[ R_{\text{muddy}}(Q_3, Q_4) = 0.009 \cdot \sqrt{(8.0 - x_3)^2 + (5.0 - 4.0)^2} \]
Finally, the total resistance function \( R(x) \) is the sum of these individual resistances.
To find the minimum resistance path, you would need to solve the minimization problem:
\[ \text{Minimize } R(x) \]
Subject to the constraint:
\[ 0 \leq x_1 \leq x_2 \leq x_3 \leq 8.0 \]
This constraint ensures that the root grows in the positive x-direction and doesn’t backtrack.
Solving this optimization problem yields the x-coordinates of the endpoints of the first three line segments that minimize the total resistance experienced by the root.
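For intuition about the scale of \(R\), evaluating it at an arbitrary trial point, say \(x = (2, 4, 6)\) (chosen purely for illustration, not the optimum), gives
\[ R(2, 4, 6) = 0.0031\sqrt{2^2 + 1.6^2} + 0.04\sqrt{2^2 + 1.1^2} + 0.0023\sqrt{2^2 + 1.3^2} + 0.009\sqrt{2^2 + 1^2} \approx 0.0079 + 0.0913 + 0.0055 + 0.0201 \approx 0.125 \]
The clayey term dominates because its resistance factor is an order of magnitude larger than the others, so the optimizer will mainly try to keep the clayey segment short.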
Gradient:
In mathematical optimization, the gradient plays a crucial role in finding the minimum of a function. The gradient of a function is a vector that points in the direction of the steepest increase of the function at a given point. In our case, the function of interest is the total resistance function, \(R(x)\), representing the cumulative resistance experienced by the corn root across different soil types.
The gradient of a multivariate function is a vector of its partial derivatives with respect to each variable. For our optimization problem, the gradient of \(R(x)\) with respect to \(x\) can be expressed as follows:
\[ \nabla R(x) = \begin{bmatrix} \frac{\partial R}{\partial x_1} \\ \frac{\partial R}{\partial x_2} \\ \frac{\partial R}{\partial x_3} \end{bmatrix} \]
To compute the partial derivatives, we differentiate the total resistance function with respect to each variable:
- Partial derivative with respect to \(x_1\): \[ \frac{\partial R}{\partial x_1} = \frac{\partial R_{\text{loamy}}}{\partial x_1} + \frac{\partial R_{\text{clayey}}}{\partial x_1} \]
- Partial derivative with respect to \(x_2\): \[ \frac{\partial R}{\partial x_2} = \frac{\partial R_{\text{clayey}}}{\partial x_2} + \frac{\partial R_{\text{sandy}}}{\partial x_2} \]
- Partial derivative with respect to \(x_3\): \[ \frac{\partial R}{\partial x_3} = \frac{\partial R_{\text{sandy}}}{\partial x_3} + \frac{\partial R_{\text{muddy}}}{\partial x_3} \]
Each of these terms follows from differentiating the corresponding distance term with the chain rule. For example,
\[ \frac{\partial R_{\text{loamy}}}{\partial x_1} = \frac{0.0031\, x_1}{\sqrt{x_1^2 + 1.6^2}}, \qquad \frac{\partial R_{\text{clayey}}}{\partial x_1} = \frac{-0.04\,(x_2 - x_1)}{\sqrt{(x_2 - x_1)^2 + 1.1^2}}, \]
and the remaining terms \( \frac{\partial R_{\text{clayey}}}{\partial x_2}, \frac{\partial R_{\text{sandy}}}{\partial x_2}, \frac{\partial R_{\text{sandy}}}{\partial x_3}, \text{ and } \frac{\partial R_{\text{muddy}}}{\partial x_3} \) have the same form for their respective segments.
The gradient vector provides information about the direction in which the resistance function increases most rapidly. Minimizing the total resistance involves moving in the opposite direction, and iterative optimization algorithms utilize the gradient information to converge towards the optimal solution.
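For reference, the plain gradient-descent update implemented in the Python section below is
\[ x^{(k+1)} = x^{(k)} - \alpha \, \nabla R\left(x^{(k)}\right), \]
where \(\alpha\) is the learning rate and iteration stops once successive iterates change by less than a set tolerance.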
Jacobian of the Gradient:
The Jacobian matrix is an extension of the gradient vector for vector-valued functions. In our case, the gradient vector (\(\nabla R(x)\)) is a vector-valued function. The Jacobian matrix, denoted by (\(J(\nabla R)\)), is a matrix of partial derivatives of each component of the gradient vector with respect to each variable.
\[ J(\nabla R) = \begin{bmatrix} \frac{\partial^2 R}{\partial x_1^2} & \frac{\partial^2 R}{\partial x_1 \partial x_2} & \frac{\partial^2 R}{\partial x_1 \partial x_3} \\ \frac{\partial^2 R}{\partial x_2 \partial x_1} & \frac{\partial^2 R}{\partial x_2^2} & \frac{\partial^2 R}{\partial x_2 \partial x_3} \\ \frac{\partial^2 R}{\partial x_3 \partial x_1} & \frac{\partial^2 R}{\partial x_3 \partial x_2} & \frac{\partial^2 R}{\partial x_3^2} \end{bmatrix} \]
The entries of this matrix are second-order partial derivatives of the total resistance function with respect to the variables \(x_1, x_2, x_3\). The Jacobian matrix provides information about how the gradient changes concerning changes in each variable, offering insights into the curvature and behavior of the resistance function.
Hessian:
The Hessian matrix is the next level of abstraction, representing the second-order partial derivatives of a scalar-valued function. For our optimization problem, the Hessian matrix (\(H(R)\)) is the matrix of second-order partial derivatives of the total resistance function with respect to \(x_1, x_2, x_3\).
\[ H(R) = \begin{bmatrix} \frac{\partial^2 R}{\partial x_1^2} & \frac{\partial^2 R}{\partial x_1 \partial x_2} & \frac{\partial^2 R}{\partial x_1 \partial x_3} \\ \frac{\partial^2 R}{\partial x_2 \partial x_1} & \frac{\partial^2 R}{\partial x_2^2} & \frac{\partial^2 R}{\partial x_2 \partial x_3} \\ \frac{\partial^2 R}{\partial x_3 \partial x_1} & \frac{\partial^2 R}{\partial x_3 \partial x_2} & \frac{\partial^2 R}{\partial x_3^2} \end{bmatrix} \]
The Hessian matrix gives information about the curvature of the total resistance function. In optimization, the Hessian is crucial for determining the nature of critical points, distinguishing between minima, maxima, and saddle points. For our problem, analyzing the Hessian matrix is essential for ensuring that the identified solution corresponds to the minimum-resistance path for the corn root’s trajectory. Note that because \(R\) is scalar-valued, the Jacobian of its gradient and the Hessian are the same symmetric matrix.
Gradient, Jacobian, and Hessian Insights using Python
Import Libraries and Define Symbolic Variables
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
x1, x2, x3 = sp.symbols('x1 x2 x3')
syms=[x1,x2,x3]
Define Total Resistance Function
R_loamy = 0.0031 * sp.sqrt(x1**2 + 1.6**2)
R_clayey = 0.04 * sp.sqrt((x2 - x1)**2 + (2.7 - 1.6)**2)
R_sandy = 0.0023 * sp.sqrt((x3 - x2)**2 + (4.0 - 2.7)**2)
R_muddy = 0.009 * sp.sqrt((8 - x3)**2 + (5.0 - 4.0)**2)
T = R_loamy + R_clayey + R_sandy + R_muddy
T
- Defining the total resistance function T based on the given resistance terms for loamy, clayey, sandy, and muddy soils.
Gradient
gradient = [sp.simplify(sp.diff(T, var)) for var in (x1, x2, x3)]
gradient = sp.Matrix(len(gradient), 1, gradient)
gradient
- Computing the gradient vector for the total resistance function.
from IPython.display import display, Math
for i, grad in enumerate(gradient, start=1):
    latex_expr = sp.latex(grad)
    display(Math(latex_expr))
- Displaying the LaTeX representation of each component of the gradient vector.
Jacobian Matrix
jacobian = gradient.jacobian(syms)
jacobian
- Computing the Jacobian matrix from the gradient vector.
for i, jac in enumerate(jacobian, start=1):
    latex_expr = sp.latex(jac)
    display(Math(latex_expr))
- Displaying the LaTeX representation of each component of the Jacobian matrix.
Hessian Matrix
hessian_matrix = sp.hessian(T, (x1, x2, x3))
hessian_matrix
- Computing the Hessian matrix for the total resistance function.
for i, hess in enumerate(hessian_matrix, start=1):
    latex_expr = sp.latex(hess)
    display(Math(latex_expr))
- Displaying the LaTeX representation of each component of the Hessian matrix.
Functions for Gradient and Total Resistance
gradient_func = sp.lambdify(syms, gradient)
total_resistance = sp.lambdify(syms, T)
- Creating lambda functions for the gradient and total resistance to be used in numerical computations.
Gradient Descent Function
def gradient_descent(initial_x, learning_rate, tolerance):
    x = initial_x.copy()  # copy so the caller's array is not modified in place
    prev_x = np.zeros_like(x)  # zeros so the first convergence check passes
    iteration = 0
    while np.linalg.norm(x - prev_x) > tolerance:
        prev_x = x.copy()
        x -= learning_rate * gradient_func(*x.flatten())  # step against the gradient
        iteration += 1
    print(f"Converged in {iteration} iterations.")
    return x
- Gradient descent optimization algorithm using the given initial guess, learning rate, and tolerance.
# Initial guess for x1, x2, x3
initial_guess = np.array([[0.0, 3.0, 5.0]]).T
learning_rate = 0.001
tolerance=.000001
# Run the gradient descent algorithm
optimal_x = gradient_descent(initial_guess, learning_rate, tolerance)
# Output the result
for x, sym in zip(optimal_x.flatten(), syms):
    print(sym.name, ':', x)
print("Minimum total resistance:", total_resistance(*optimal_x.flatten()))
- Running the gradient descent algorithm with an initial guess, learning rate, and tolerance.
Scipy Minimization for Comparison
import numpy as np
from scipy.optimize import minimize
# Define the objective function for minimization
def objective(x):
    return total_resistance(*x)
# Initial guess for x1, x2, x3
initial_guess = np.array([0.0, 3.0, 5.0])
# Use scipy.optimize.minimize to find the minimum
result = minimize(objective, initial_guess, method='BFGS')
# Output the result
optimal_x = result.x
for x, sym in zip(optimal_x.flatten(), syms):
    print(sym.name, ':', x)
print("Minimum total resistance:", total_resistance(*optimal_x.flatten()))
- Using SciPy’s minimize function for comparison with the gradient descent result.
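The BFGS run above is unconstrained, so the ordering constraint \( 0 \leq x_1 \leq x_2 \leq x_3 \leq 8 \) from the model is not enforced. If we want to impose it explicitly, one option (a sketch reusing the objective and initial guess defined above, not part of the original analysis) is SciPy’s SLSQP method with bounds and inequality constraints:
# Sketch: enforce 0 <= x1 <= x2 <= x3 <= 8 with SLSQP
constraints = [
    {'type': 'ineq', 'fun': lambda x: x[1] - x[0]},  # x2 >= x1
    {'type': 'ineq', 'fun': lambda x: x[2] - x[1]},  # x3 >= x2
]
bounds = [(0.0, 8.0)] * 3  # 0 <= xi <= 8 for each coordinate
constrained = minimize(objective, initial_guess, method='SLSQP',
                       bounds=bounds, constraints=constraints)
print("Constrained optimum:", constrained.x, "resistance:", constrained.fun)
- A hedged sketch: if the unconstrained optimum already satisfies the ordering constraint, both runs should agree.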
Newton’s Method Function
jacobian_func = sp.lambdify(syms, jacobian)
def newtons_method(initial_x, tolerance):
    x = initial_x.copy()  # copy so the caller's array is not modified in place
    prev_x = np.zeros_like(x)  # zeros so the first convergence check passes
    iteration = 0
    convergence_data = []
    while np.linalg.norm(x - prev_x) > tolerance:
        prev_x = x.copy()
        gradient = gradient_func(*x.flatten())
        jacobian_inv = np.linalg.pinv(jacobian_func(*x.flatten()))
        x = x - 0.0001 * jacobian_inv @ gradient  # heavily damped Newton step
        iteration += 1
        convergence_data.append(np.linalg.norm(x - prev_x))
    print(f"Converged in {iteration} iterations.")
    print(jacobian_inv @ gradient)
    print(np.linalg.norm(gradient))
    plt.plot(convergence_data)
    return x
- Defining Newton’s method for optimization using the given initial guess and convergence tolerance.
# Initial guess for x1, x2, x3
initial_guess = np.array([[0.0, 3.0, 5.0]]).T
# Convergence tolerance
tolerance = 1e-6
# Run Newton's method
optimal_x = newtons_method(initial_guess, tolerance)
for x, sym in zip(optimal_x.flatten(), syms):
    print(sym.name, ':', x)
# Output the result
print("Minimum total resistance:", total_resistance(*optimal_x.flatten()))
- Running Newton’s method with an initial guess and specified convergence tolerance.
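To connect back to the Hessian discussion, a quick second-order check confirms whether the point returned by the optimizer is indeed a local minimum: evaluate the Hessian numerically at the solution and verify that all eigenvalues are positive. This is a sketch; hessian_func is a new name introduced here, built from the symbolic hessian_matrix computed earlier.
# Sketch: numerical second-order optimality check at the computed optimum
hessian_func = sp.lambdify(syms, hessian_matrix)
H_opt = np.array(hessian_func(*optimal_x.flatten()), dtype=float)
eigenvalues = np.linalg.eigvalsh(H_opt)  # Hessian is symmetric, so eigvalsh is appropriate
print("Hessian eigenvalues at the optimum:", eigenvalues)
print("Positive definite (local minimum):", bool(np.all(eigenvalues > 0)))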
Visualization
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import CubicSpline
# Collect the x- and y-coordinates of the optimal trajectory points (depth plotted as negative y)
x_coordinates = np.array([0, optimal_x[0, 0], optimal_x[1, 0], optimal_x[2, 0], 8])
y_coordinates = -np.array([0, 1.6, 2.7, 4.0, 5.0])
# Create a spline interpolation
t = np.linspace(0, len(x_coordinates)-1, len(x_coordinates))
cs_x = CubicSpline(t, x_coordinates)
cs_y = CubicSpline(t, y_coordinates)
# Generate more points for a smoother curve
t_smooth = np.linspace(0, len(x_coordinates)-1, 100)
x_smooth = cs_x(t_smooth)
y_smooth = cs_y(t_smooth)
# Plot the 2D trajectory with spline interpolation
plt.figure(figsize=(8, 6))
plt.plot(x_coordinates, y_coordinates, 'ro-', label='Original Points')
plt.plot(x_smooth, y_smooth, 'b-', label='Spline Interpolation')
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.title('Root Trajectory Projection in 2D with Spline Interpolation')
plt.legend()
plt.grid(True)
plt.show()
- Plotting the 2D trajectory of the root through the soil layers using Matplotlib, with cubic-spline interpolation for a smoother curve.
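As an optional refinement (a sketch, not part of the original figure), the soil-layer boundaries at depths 1.6, 2.7, 4.0, and 5.0 meters can be overlaid as horizontal lines so the transition points Q1, Q2, Q3 are easier to read off the plot:
# Sketch: overlay the soil-layer boundaries on the 2D trajectory plot (depths plotted as negative y)
plt.figure(figsize=(8, 6))
plt.plot(x_smooth, y_smooth, 'b-', label='Spline Interpolation')
plt.plot(x_coordinates, y_coordinates, 'ro', label='Transition Points')
for depth, label in zip([1.6, 2.7, 4.0, 5.0],
                        ['loamy/clayey', 'clayey/sandy', 'sandy/muddy', 'target depth']):
    plt.axhline(-depth, color='gray', linestyle='--', linewidth=0.8)
    plt.text(0.1, -depth + 0.05, label, fontsize=8, color='gray')
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.title('Root Trajectory with Soil-Layer Boundaries')
plt.legend()
plt.grid(True)
plt.show()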