So what can we say about the solution? David and Walter have both told you the matrix really is singular. A singular matrix will offer no (unique) solution, even if a solution does exist.
A = [0.2500000000 0.00000000959994 0 -1 0 ; 0 0.2500000000000 0.0000000095999400 0 -1 ; 0.2500000000 0.00000000000075 0 -1 0 ; 0 0.2500000000000 0.0000000000007500 0 -1 ; 0.2500000000 0.00000000087634 0 -1 0]
B = [-1.30038272588598 ; 14.8832035274747 ;-2.15072625510578 ; 18.9956035209202 ;-1.75244493678615]
So, first, is A singular? Yes, I am afraid it is. Frequently this means there is some fundamental problem in what the problem itself means. It usually indicates that insufficient information was provided, or that there are numerical issues with how the problem was formulated. I cannot get into that, since only you know the source of this system.
As has been said, A is singular. How singular is it? We can learn a lot by looking at the singular values of A.
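If you want to see those singular values yourself, here is a NumPy sketch of the same check (the code and variable names are mine, not part of the original MATLAB session; A is transcribed from the question above):

```python
import numpy as np

# A, transcribed from the question
A = np.array([
    [0.25, 9.59994e-9,  0,          -1,  0],
    [0,    0.25,        9.59994e-9,  0, -1],
    [0.25, 7.5e-13,     0,          -1,  0],
    [0,    0.25,        7.5e-13,     0, -1],
    [0.25, 8.7634e-10,  0,          -1,  0],
])

# Singular values in descending order: two of order 1, two around
# 1e-9 to 1e-8, and one down at rounding level (effectively zero).
s = np.linalg.svd(A, compute_uv=False)
print(s)
```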
And this clearly tells us that A is singular, at least numerically so. Counting the singular values that are larger than eps times the largest singular value is one way to determine the numerical rank. Since exactly one singular value falls below that threshold, we know your matrix has one row or column that is a linear combination of the rest, so the rank of A will be 4. However, even given that, A is still a matrix with some numerical issues, since I see there are two singular values as small as 1e-9.
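That counting rule can be sketched in NumPy like this (my code; the tolerance mimics the form MATLAB's rank uses, max(size(A))*eps*sigma_max):

```python
import numpy as np

A = np.array([
    [0.25, 9.59994e-9,  0,          -1,  0],
    [0,    0.25,        9.59994e-9,  0, -1],
    [0.25, 7.5e-13,     0,          -1,  0],
    [0,    0.25,        7.5e-13,     0, -1],
    [0.25, 8.7634e-10,  0,          -1,  0],
])

s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
numrank = int(np.sum(s > tol))  # singular values treated as nonzero
print(numrank)                  # 4: one exact dependency among the columns
```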
Even given that, however, a solution may still exist. We can learn whether an exact solution exists by looking at the rank of the augmented matrix [A,B].
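A sketch of that test in NumPy (my code; matrix_rank here uses its default tolerance):

```python
import numpy as np

A = np.array([
    [0.25, 9.59994e-9,  0,          -1,  0],
    [0,    0.25,        9.59994e-9,  0, -1],
    [0.25, 7.5e-13,     0,          -1,  0],
    [0,    0.25,        7.5e-13,     0, -1],
    [0.25, 8.7634e-10,  0,          -1,  0],
])
B = np.array([-1.30038272588598, 14.8832035274747, -2.15072625510578,
              18.9956035209202, -1.75244493678615])

# If B were a linear combination of the columns of A, appending it
# would not raise the rank.
rankA  = np.linalg.matrix_rank(A)
rankAB = np.linalg.matrix_rank(np.column_stack([A, B]))
print(rankA, rankAB)  # the rank jumps from 4 to 5
```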
Essentially, this tells us that NO exact solution exists, because there is no way to write the vector B as a linear combination of the columns of the matrix A. Thus the matrix [A,B] now has rank 5, instead of rank 4.
Are we done then? Well, we can still try to flog this dead horse yet a few more times. How closely can we approximate the vector B?
You had a problem in working with pinv, because there are two small singular values of A that are not truly zero. Relative to true zeros they are actually rather large, so they cause numerical problems.
You should see that pinv(A) has some relatively large numbers in it, as well as some quite small numbers. And this is why pinv(A)*B gives you crap. So let me see how well we can approximate B, using only TWO pieces of information, instead of trying to use all 4 of the technically linearly independent pieces of information in A.
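To see where those large numbers come from, here is a NumPy sketch (mine): pinv inverts the ~1e-9 singular values too, so both pinv(A) and pinv(A)*B blow up.

```python
import numpy as np

A = np.array([
    [0.25, 9.59994e-9,  0,          -1,  0],
    [0,    0.25,        9.59994e-9,  0, -1],
    [0.25, 7.5e-13,     0,          -1,  0],
    [0,    0.25,        7.5e-13,     0, -1],
    [0.25, 8.7634e-10,  0,          -1,  0],
])
B = np.array([-1.30038272588598, 14.8832035274747, -2.15072625510578,
              18.9956035209202, -1.75244493678615])

Aplus = np.linalg.pinv(A)  # the default cutoff keeps the ~1e-9 singular values
x = Aplus @ B              # least squares "solution", with huge entries
print(np.abs(Aplus).max(), np.abs(x).max())
```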
Now, if we recall that A had two relatively large singular values, we might try to approximate A by a low rank (rank 2) approximation using the singular value decomposition.
pinvapprox = V(:,k)*diag(1./diag(S(k,k)))*U(:,k)'; % with [U,S,V] = svd(A) and k = 1:2
So we might try this:
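Here is that rank-2 reconstruction as a NumPy sketch (my code; k, pinv2, Bapprox are my names):

```python
import numpy as np

A = np.array([
    [0.25, 9.59994e-9,  0,          -1,  0],
    [0,    0.25,        9.59994e-9,  0, -1],
    [0.25, 7.5e-13,     0,          -1,  0],
    [0,    0.25,        7.5e-13,     0, -1],
    [0.25, 8.7634e-10,  0,          -1,  0],
])
B = np.array([-1.30038272588598, 14.8832035274747, -2.15072625510578,
              18.9956035209202, -1.75244493678615])

U, s, Vt = np.linalg.svd(A)
k = 2                                               # keep the two large singular values
pinv2 = Vt[:k].T @ np.diag(1.0/s[:k]) @ U[:, :k].T  # rank-2 pseudo-inverse
Bapprox = A @ (pinv2 @ B)                           # best rank-2 reconstruction of B
print(np.column_stack([B, Bapprox]))                # compare, column by column
resid = np.linalg.norm(B - Bapprox)
print(resid)                                        # far from zero
```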
The first column is B, the second column is the best rank 2 approximation to B given the matrix A. They are very different, and that is a bad thing. It tells me that there will be no solution to your problem that does not involve quite large numbers. Another way of seeing this is to look at the projections of B onto the left singular vectors, the columns of U.
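Those projections are one line in NumPy (my code; note that for a right-hand side the relevant basis is the columns of U, since they span the output space of A):

```python
import numpy as np

A = np.array([
    [0.25, 9.59994e-9,  0,          -1,  0],
    [0,    0.25,        9.59994e-9,  0, -1],
    [0.25, 7.5e-13,     0,          -1,  0],
    [0,    0.25,        7.5e-13,     0, -1],
    [0.25, 8.7634e-10,  0,          -1,  0],
])
B = np.array([-1.30038272588598, 14.8832035274747, -2.15072625510578,
              18.9956035209202, -1.75244493678615])

U, s, Vt = np.linalg.svd(A)
c = U.T @ B   # coefficients of B in the left singular basis of A
print(c)      # the last element is the projection onto the (left) nullspace
```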
That the first two elements of that vector are relatively large is good. But we wanted the other three elements to be small, and they were not. In fact, B has a significantly large projection onto the nullspace vector. We see that in the last element of that product.
So I am sorry, but there exists NO solution to the problem A*x == B with small numbers in it that comes even remotely close to solving this linear system.