In almost all analog modeling algorithms, we need at some point to solve a (non-)linear system Ax = b, with A and b given. Depending on the size of the matrix and its characteristics, computing an inverse can be costly and may cause numerical problems. Let's tackle the cost question in this discussion.
The case for an inverse
When we have a small matrix, its inverse can be very efficient to compute. The simplest case is a plain division (the 1×1 case), and with some knowledge of the matrix structure, the inverse can often be simplified instead of relying on a general algorithm like the cofactor (adjugate) expansion divided by the determinant.
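As a sketch of that idea, here is a hypothetical 2×2 system where the closed-form inverse (a few multiplies and one division) replaces a general-purpose routine; the matrix and right-hand side are purely illustrative:

```python
import numpy as np

# Illustrative 2x2 system A @ x = b, the kind that shows up for a
# small circuit stage.
A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
b = np.array([5.0, 6.0])

# Closed-form inverse of a 2x2 matrix: (1/det) * [[d, -b], [-c, a]]
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
inv_A = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]]) / det

x = inv_A @ b

# Matches the general-purpose solver
assert np.allclose(x, np.linalg.solve(A, b))
```

For a 1×1 "matrix" the same pattern degenerates to `x = b / A`, which is why small fixed-size systems are where explicit inverses shine.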
When you have a linear system (this would occur with an analog model of a circuit containing only linear components like resistors, inductors and capacitors), the case is even stronger because the matrix is constant, so computing the inverse is a one-off cost. Even if the inverse is dense where the original matrix was sparse, the one-off cost is still worth it, until you get to high-dimensional matrices.
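A minimal sketch of that one-off cost, assuming a constant system matrix and a hypothetical per-sample processing function; the matrix here is a random well-conditioned stand-in, not a real circuit:

```python
import numpy as np

# For a linear circuit the system matrix A is constant, so the inverse
# can be computed once at setup and reused for every audio sample.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 8.0 * np.eye(8)  # well-conditioned toy matrix

inv_A = np.linalg.inv(A)  # one-off cost, paid once at setup time

def process_sample(b):
    # Per-sample cost is now just a matrix-vector product, O(n^2),
    # instead of a full O(n^3) solve.
    return inv_A @ b

b = rng.standard_normal(8)
assert np.allclose(process_sample(b), np.linalg.solve(A, b))
```

In practice a factorization (LU, Cholesky) stored at setup gives the same amortization with better numerical behavior, but the cost argument is identical.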
The case for solving the equation
Solving the equation means not inverting the matrix, and instead using another algorithm, such as a pivoting or factorization method (LU, QR via Householder reflections…), to get the answer. These can work very well with large sparse matrices, and you can tailor them to the specifics of the matrix structure, just as you can for the inverse.
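A sketch of the factorization route, using SciPy's Householder-based QR on an illustrative dense matrix: factor once, then solve by back-substitution, with no inverse ever formed.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Solve A @ x = b through a QR factorization (Householder reflections
# under the hood) instead of computing inv(A).
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)  # illustrative matrix
b = rng.standard_normal(6)

Q, R = qr(A)                       # A = Q @ R, Q orthogonal, R triangular
x = solve_triangular(R, Q.T @ b)   # triangular back-substitution

assert np.allclose(A @ x, b)
```

For genuinely large sparse systems, a sparse solver such as `scipy.sparse.linalg.spsolve` exploits the structure directly, which a dense inverse would destroy.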
When the equations are non-linear, the matrix changes at every iteration, so the cost of inverting it over and over quickly becomes higher than using a numerical solver.
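This is what that looks like in a Newton–Raphson loop: each iteration linearizes the system and solves J·Δx = −f(x), rather than rebuilding an inverse. The toy system below is illustrative, not taken from a real circuit:

```python
import numpy as np

# Toy non-linear system f(x) = 0 (illustrative, not a circuit model).
def f(x):
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

# Newton-Raphson: solve the linearized system at each step.
x = np.array([1.0, 1.0])
for _ in range(20):
    step = np.linalg.solve(jacobian(x), -f(x))  # solve, don't invert
    x = x + step
    if np.linalg.norm(step) < 1e-12:
        break

assert np.allclose(f(x), 0.0, atol=1e-10)
```

Since the Jacobian is different at every iteration, any inverse computed for one step is useless for the next, which is why solvers win here even for modest sizes.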
In the end, choosing between inverting a matrix and solving the equations depends on the linearity of the system and the size of the matrix. Each system has a crossover point where one approach becomes better than the other, and you need to measure to find it. But the rule of thumb is the following:
|Type|Small-size matrix|Medium-size matrix|Big-size matrix|
|---|---|---|---|
|Linear|Inverse|Inverse|Solve|
|Non-linear|Inverse|Solve|Solve|