## Ax=b with b dependent on x

Question

I understand how to solve Ax = b, but what if b is dependent on x? See in the pictures: E3 = function(E4). I guess this is solved iteratively... What is such a problem called, and what methods do I use to solve it?

I am trying to solve the following system:

Giving the following set of equations:

Leading to the following matrix:

**Update:**
As requested some more information:

```
import numpy as np

A = np.array([
    [-1, 0, 0, 0, 0],
    [1, -1, -1, 0, 0],
    [0, 1, 0, -1, 0],
    [0, 0, 1, 1, -1],
    [0, 0, 1, 0, 0],
])
b = [-10, 0, 1, 1, 3]
print(np.linalg.solve(A, b))
# -> [10.  7.  3.  6.  8.]
```

This works, but what if:

```
b = [-10, 0, 1, 1, some_function(x[3])]
```

So E3 = some_function(E4): E3 depends on E4 through 'some_function', which is non-linear.


## Answers (2)

Yeah, you can solve this with a nonlinear optimization. `scipy.optimize` has all the juicy details, but here's an example that solves your system assuming `some_function(x)` is `x ** 2`:

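A minimal sketch of such a solve with `scipy.optimize.minimize`, where `objectiveFunction` returns the squared error between `A . x` and `b(x)` (the squared-error form and the zero starting guess are my assumptions):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([
    [-1, 0, 0, 0, 0],
    [1, -1, -1, 0, 0],
    [0, 1, 0, -1, 0],
    [0, 0, 1, 1, -1],
    [0, 0, 1, 0, 0],
], dtype=float)

def some_function(x):
    return x ** 2

def b(x):
    # the right-hand side depends on the candidate solution via x[3]
    return np.array([-10, 0, 1, 1, some_function(x[3])])

def objectiveFunction(x):
    # squared error between A.x and b(x); it is ~0 exactly when A.x = b(x)
    return np.sum((A @ x - b(x)) ** 2)

solution = minimize(objectiveFunction, np.zeros(5))
print(solution)
```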
All optimization techniques are basically function minimizations: you give the optimizer a function, and the optimizer returns the input to that function that produces the smallest output. The `objectiveFunction` above is what gets minimized; it returns the error between `A . x` and `b`, where `x` is a candidate solution and `b` has the form that depends on `x`.

You can get caught in local minima, so using optimization methods is a bit of a black art. But this case seems pretty straightforward: the code above prints out the following:

which is a lot of info, but the important bits are the `x` and `fun` values: note how `x` is a vector and `fun` is a scalar. This means that `objectiveFunction(solution.x) == solution.fun`. This in turn means that the answer you're looking for (given my assumed `some_function`) is `solution.x`, and you can have confidence that it's right because `solution.fun` (the error between `A . x` and `b`) is close to zero.

I skipped over a bunch of explanation, but I can elaborate if you need.

If your `b(x)` is some nonlinear function of `x`, it doesn't really matter much that on the left-hand side you have `A*x`. The simplest way of expressing the equation is `A*x - b(x) = 0`, in other words `F(x) = 0`, the general nonlinear equation. Before even trying to solve this, be aware of some of the nasty consequences:

- In general, you can't say anything about the distribution of solutions. Is there one at all? Impossible to say without a more detailed analysis. Perhaps there are a few, or infinitely many? With linear equation systems (`A*x = b`), all the info is in the matrix, but not so with nonlinear equations.
- Since a nonlinear solver cannot make assumptions about the structure of the solution landscape, no solver is guaranteed to converge. In fact, all non-esoteric solvers are only local, i.e., you provide an initial guess that's "close" to a solution, and the solver will converge that guess towards it. To guarantee convergence, you already have to have a good idea of the solution before you start. In practice, many people just take random guesses and let the local solver do a number of steps, keeping their fingers crossed.
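The random-restart approach can be sketched as follows, assuming `some_function(x)` is `x ** 2`. Note that eliminating variables reduces this particular system to the quadratic `x[3]**2 + x[3] - 9 = 0`, so it has exactly two solutions, and restarting a local solver from random guesses can find both:

```python
import numpy as np
from scipy.optimize import root

A = np.array([
    [-1, 0, 0, 0, 0],
    [1, -1, -1, 0, 0],
    [0, 1, 0, -1, 0],
    [0, 0, 1, 1, -1],
    [0, 0, 1, 0, 0],
], dtype=float)

def F(x):
    # F(x) = A*x - b(x), with b's last entry = x[3]**2 (assumed some_function)
    return A @ x - np.array([-10, 0, 1, 1, x[3] ** 2])

rng = np.random.default_rng(0)
found = []
for _ in range(20):
    sol = root(F, rng.uniform(-10, 10, size=5))  # local solve from a random guess
    if sol.success and not any(np.allclose(sol.x, f) for f in found):
        found.append(sol.x)

print(found)  # the distinct roots reached from the 20 random starts
```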

Arguably, the most popular local solver is Newton's method; it's the only solver that achieves quadratic convergence (if you're already close to a solution). In each step, you need to solve a linear equation system with the Jacobian, namely `J*d = -F(x)`. That alone can be quite costly if you don't do it carefully.

Now that you know all that, you can play around with `scipy.optimize`.
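The Newton step above can be written out by hand for this system, again assuming `some_function(x)` is `x ** 2`; the Jacobian of `F(x) = A*x - b(x)` is `A` minus the derivative of `b`, and the initial guess is an assumption:

```python
import numpy as np

A = np.array([
    [-1, 0, 0, 0, 0],
    [1, -1, -1, 0, 0],
    [0, 1, 0, -1, 0],
    [0, 0, 1, 1, -1],
    [0, 0, 1, 0, 0],
], dtype=float)

def F(x):
    # residual A*x - b(x), with b's last entry = x[3]**2
    return A @ x - np.array([-10, 0, 1, 1, x[3] ** 2])

def J(x):
    # Jacobian of F: A minus db/dx (only b[4] depends on x, via x[3])
    Jx = A.copy()
    Jx[4, 3] -= 2 * x[3]
    return Jx

x = np.ones(5)  # initial guess (assumed)
for _ in range(50):
    d = np.linalg.solve(J(x), -F(x))  # Newton step: J*d = -F(x)
    x += d
    if np.linalg.norm(d) < 1e-12:
        break

print(x)
```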