# Absolute values

### Constraints

Linear constraints are of the form:

```
a1 x1 + a2 x2 + a3 x3 + ... <= maximum
a1 x1 + a2 x2 + a3 x3 + ... >= minimum
```

Where minimum and maximum are constants.

lp_solve can only handle this kind of linear constraint. So what if absolute values must be formulated?

```
abs(a1 x1 + a2 x2 + a3 x3) = 0
abs(a1 x1 + a2 x2 + a3 x3) <= maximum
abs(a1 x1 + a2 x2 + a3 x3) >= minimum
```

#### = 0 (or <= 0)

This is the easiest case. If abs(X) must be equal to zero, then this can only be fulfilled if X is zero. So the condition can also be written as:

`a1 x1 + a2 x2 + a3 x3 = 0`

#### <= maximum

This is a bit more complicated, but still quite easy.

Let's first represent a1 x1 + a2 x2 + a3 x3 by X. So the condition becomes:

`abs(X) <= maximum`

What, in fact, is abs(X)?

It is X if X is positive or 0 and it is -X if X is negative.

This also implies that maximum must be greater than or equal to zero. Otherwise the constraint would be impossible to fulfil (mathematically impossible with real numbers).

The geometric representation of this is:

```
----+===============+----
-maximum    0    +maximum
```

The section between -maximum and +maximum fulfils the constraint.

So if X is positive, the restriction becomes:

` X <= maximum`

If X is negative, the restriction becomes:

`-X <= maximum`

And the fortunate thing is that whenever one restriction is needed, the other is always redundant. If X is positive, then -X is negative and thus always less than maximum (which is always positive, remember), so the second equation is redundant. If X is negative, then X is always less than maximum (again, always positive), so the first equation is redundant. This can also be seen easily from the graphical representation. So just add the following two equations:

```
 X <= maximum
-X <= maximum
```

And the abs(X) <= maximum condition is fulfilled.
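The equivalence between the absolute-value constraint and the pair of linear constraints can be checked numerically. A minimal plain-Python sketch (the grid and the value of maximum are chosen purely for illustration):

```python
# Verify: abs(X) <= maximum  <=>  X <= maximum AND -X <= maximum.
maximum = 5.0

def abs_form(x):
    # the original, non-linear condition
    return abs(x) <= maximum

def linear_form(x):
    # the two linear constraints that replace it
    return x <= maximum and -x <= maximum

# brute-force sweep over a grid of X values from -10 to 10
xs = [i * 0.25 for i in range(-40, 41)]
assert all(abs_form(x) == linear_form(x) for x in xs)
print("equivalent on all", len(xs), "sample points")
```

This is of course only a spot check on a grid, but the equivalence holds exactly: abs(X) is max(X, -X), so bounding both X and -X bounds abs(X).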

And what if the condition is

`abs(X) + Y <= maximum`

With Y any linear combination.

It is easy to see that the same reasoning leads to:

```
 X + Y <= maximum
-X + Y <= maximum
```

With the original definition of X this becomes:

```
 a1 x1 + a2 x2 + a3 x3 + Y <= maximum
-a1 x1 - a2 x2 - a3 x3 + Y <= maximum
```
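The same kind of numeric spot check works with the extra Y term. A plain-Python sketch (grid and maximum are illustrative):

```python
# Verify: abs(X) + Y <= maximum  <=>  X + Y <= maximum AND -X + Y <= maximum.
maximum = 4.0

def abs_form(x, y):
    return abs(x) + y <= maximum

def linear_form(x, y):
    return x + y <= maximum and -x + y <= maximum

# sweep X and Y over a grid from -8 to 8
grid = [i * 0.5 for i in range(-16, 17)]
assert all(abs_form(x, y) == linear_form(x, y) for x in grid for y in grid)
print("equivalent on", len(grid) ** 2, "sample points")
```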
##### Special case 1

```
abs(x1) + abs(x2) + ... <= maximum;
```
For each abs(xi) value, introduce a new variable xiabs and, following the reasoning above, add the constraints:

```
xiabs >= xi;
xiabs >= -xi;
```

This makes xiabs >= abs(xi). Greater than or equal, but as such not necessarily equal. However, in combination with other constraints, it can become equal. If the following constraint is added, and when it is active, then each xiabs will represent the absolute value of xi:
```
x1abs + x2abs + ... <= maximum;
```
So, for each abs(xi) in the constraint, add a new variable xiabs and the two extra constraints for it. Then replace abs(xi) by xiabs in the constraint and the condition is fulfilled.

Note that the objective may be a minimization or a maximization; it doesn't matter.

Note that the variables may have an extra coefficient, but it must not be negative! If the sign were negative, then xiabs would not tend to become as small as possible, but as large as possible, and the result would be that xiabs would not equal abs(xi). It could become larger.

Example:
```
max: x1 + 2x2 - 4x3 -3x4;
x1 + x2 <= 5;
2x1 - x2 >= 0;
-x1 + 3x2 >= 0;
x3 + x4 >= .5;
x3 >= 1.1;
x3 <= 10;

abs(x2) + abs(x4) <= 1.5; /* Note that this is not a valid expression. It will be converted. */

free x2, x4;
```
The converted model becomes:
```
max: x1 + 2x2 - 4x3 -3x4;
x1 + x2 <= 5;
2x1 - x2 >= 0;
-x1 + 3x2 >= 0;
x3 + x4 >= .5;
x3 >= 1.1;
x3 <= 10;

x2abs >= x2;
x2abs >= -x2;

x4abs >= x4;
x4abs >= -x4;

x2abs + x4abs <= 1.5;

free x2, x4;
```
The result is:
```
Value of objective function: 2.6

Actual values of the variables:
x1                           3.75
x2                           1.25
x3                            1.1
x4                          -0.25
x2abs                        1.25
x4abs                        0.25
```
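The reported optimum can be verified by substituting the values back into the objective and the constraints. A small Python check (values copied from the output above):

```python
# Check the reported solution of the converted model by substitution.
x1, x2, x3, x4 = 3.75, 1.25, 1.1, -0.25
x2abs, x4abs = 1.25, 0.25

objective = x1 + 2 * x2 - 4 * x3 - 3 * x4
assert abs(objective - 2.6) < 1e-9      # matches the reported value

# original constraints
assert x1 + x2 <= 5
assert 2 * x1 - x2 >= 0
assert -x1 + 3 * x2 >= 0
assert x3 + x4 >= 0.5 - 1e-9
assert 1.1 <= x3 <= 10

# the absolute-value construction: each xiabs equals abs(xi) at the optimum
assert x2abs == abs(x2) and x4abs == abs(x4)
assert x2abs + x4abs <= 1.5
print("objective:", objective)
```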

#### >= minimum

Let's first represent a1 x1 + a2 x2 + a3 x3 by X. So the condition becomes:

`abs(X) >= minimum`

What, in fact, is abs(X)?

It is X if X is positive or 0 and it is -X if X is negative.

This also implies that minimum should always be bigger than zero. Otherwise the constraint is always fulfilled and there is no point in having it.

The geometric representation of this is:

```
====+---------------+====
-minimum    0    +minimum
```

The section not between -minimum and +minimum fulfils the constraint.

So if X is positive, the restriction becomes:

` X >= minimum`

If X is negative, the restriction becomes:

`-X >= minimum`

Unfortunately, the trick used for a maximum cannot be applied here. If X is positive, then -X is not greater than minimum; on the contrary ...

It can also be seen from the graphical representation that this restriction is discontinuous. As a result, it is not possible to convert it into a set of linear equations.

A possible approach to overcome this is making use of integer variables. In particular by using a binary variable B:

```
 X + M * B >= minimum
-X + M * (1 - B) >= minimum
```

M is a large enough constant. See later.
The binary variable B takes care of the discontinuity. It can be either 0 or 1. With M large enough, this makes one or the other constraint obsolete.

If B is 0, then the equations can be written as:

```
 X >= minimum
-X + M >= minimum
```

So in this case, the restriction X >= minimum is active. X must be positive and larger than minimum. With M large enough, the second constraint is always fulfilled.

If B is 1, then the equations can be written as:

```
 X + M >= minimum
-X >= minimum
```

So in this case, the restriction -X >= minimum is active. X must be negative and -X must be larger than minimum. With M large enough, the first constraint is always fulfilled.

It is important to use a realistic value for M. Don't use for example 1e30 for it. This creates numerical instabilities, and even if it does not, tolerances will give problems. Because of tolerances, B may not be exactly zero, but for example 1e-20. Multiplied by 1e30 this gives not zero, but 1e10! The result is X + 1e10 >= minimum instead of X >= minimum. Not what was mathematically formulated!

So how big must M be?
Well, we can make a prediction.
Either -X + M >= minimum (when X >= minimum) or X + M >= minimum (when X <= -minimum) must always be TRUE.
That comes down to -abs(X) + M >= minimum.
Or M >= minimum + abs(X).

If we can predict how large X can become (absolutely), then we can predict a maximum value needed for M for this to work. If abs(X) cannot be larger than maximum, then M can be minimum+maximum.

In most cases, it is possible to determine a reasonable upper bound for X.

In lp-format, the needed equations are:

```
X + M * B >= minimum;
X + M * B <= M - minimum;

B <= 1;

int B;
```
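The big-M construction can be checked by brute force: for each X on a grid, the pair of constraints is feasible for some B in {0, 1} exactly when abs(X) >= minimum. A plain-Python sketch (minimum, bound, and the grid are illustrative):

```python
# Verify the big-M formulation of abs(X) >= minimum.
# Feasible for some B in {0, 1}  <=>  abs(X) >= minimum.
minimum = 2.0
bound = 10.0          # assumed upper bound on abs(X)
M = minimum + bound   # M >= minimum + abs(X), as derived above

def feasible(x):
    return any(x + M * b >= minimum and -x + M * (1 - b) >= minimum
               for b in (0, 1))

# grid of X values inside [-bound, bound]
xs = [i * 0.25 for i in range(-40, 41)]
assert all(feasible(x) == (abs(x) >= minimum) for x in xs)
print("big-M formulation correct on", len(xs), "points")
```

B = 0 selects the right branch (X >= minimum), B = 1 the left branch (X <= -minimum), just as in the case analysis above.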

And what if the condition is

`abs(X) + Y >= minimum`

With Y any linear combination.

It is easy to see that the same reasoning leads to:

```
 X + M * B + Y >= minimum
-X + M * (1 - B) + Y >= minimum
```

With M >= minimum - Y + abs(X)

In lp-format:

```
X + M * B + Y >= minimum;
X + M * B - Y <= M - minimum;

B <= 1;

int B;
```

### Objective function

The objective function is of the form:

`min or max: a1 x1 + a2 x2 + a3 x3 + ...`

What if there is an absolute value in the objective:

`abs(a1 x1 + a2 x2 + a3 x3) + a4 x4 + a5 x5`

Let's first represent a1 x1 + a2 x2 + a3 x3 by X and a4 x4 + a5 x5 by Y. Then the objective becomes:

`abs(X) + Y`

Depending on the sign before the abs and the objective direction, there is an easy and a harder way to solve this.

#### minimization and sign is positive or maximization and sign is negative.

```
min: abs(X) + Y
or
max: -abs(X) + Y
```

In these two situations, abs(X) will be as small as possible, ideally zero. We can use that fact. Add one variable X' and two constraints to the model:

```
 X <= X'
-X <= X'
```

And replace in the objective abs(X) with X':

```
min: X' + Y
or
max: -X' + Y
```

That is all. So how does this work? There are 3 cases to consider:

##### X > 0

In this case, -X is negative and the second constraint -X <= X' is always fulfilled because X' is implicitly >= 0. The first constraint X <= X' is different, however. Because X is positive, X' must be at least as large as X. But because X' appears in the objective in such a way that it tends to be as small as possible, X' will be equal to X. So X' is abs(X) in this case.

##### X < 0

In this case, X is negative and the first constraint X <= X' is always fulfilled because X' is implicitly >= 0. The second constraint -X <= X' is different, however. Because X is negative (-X positive), X' must be at least as large as -X. But because X' appears in the objective in such a way that it tends to be as small as possible, X' will be equal to -X. So X' is abs(X) in this case.

##### X = 0

In this case, both constraints are always fulfilled because X' is implicitly >= 0. Because X' appears in the objective in such a way that it tends to be as small as possible, X' will be equal to X, in this case 0. So X' is abs(X).

So in all cases, X' equals abs(X).

With the original definition of X and Y this becomes:

```
min: X' + a4 x4 + a5 x5
or
max: -X' + a4 x4 + a5 x5

 a1 x1 + a2 x2 + a3 x3 <= X'
-a1 x1 - a2 x2 - a3 x3 <= X'
```
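The claim that X' settles at abs(X) can be illustrated without a solver: for a fixed X, the smallest X' satisfying both constraints (and the implicit X' >= 0) is exactly abs(X), and minimization drives X' down to that value. A plain-Python sketch:

```python
# For the minimization case: the smallest feasible X' equals abs(X).
# Constraints: X <= X' and -X <= X' (plus X' >= 0, implicit in lp_solve).
def smallest_feasible_xprime(x):
    # the least X' satisfying X <= X', -X <= X', X' >= 0
    return max(x, -x, 0.0)

for x in [-3.5, -1.0, 0.0, 2.25, 7.0]:
    assert smallest_feasible_xprime(x) == abs(x)
print("X' = abs(X) in all cases")
```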

#### minimization and sign is negative or maximization and sign is positive.

```
min: -abs(X) + Y
or
max: abs(X) + Y
```

This is a different story. abs(X) now tends to be as large as possible. So the previous trick cannot be used now.

A possible approach to overcome this is making use of integer variables. In particular by using a binary variable B and adding a variable X'. Add following constraints to the model:

```
 X + M * B >= X'
-X + M * (1 - B) >= X'
 X <= X'
-X <= X'
```

And replace in the objective abs(X) with X':

```
min: -X' + Y
or
max: X' + Y
```

That is all. So how does this work? In fact this is a combination of a maximum and minimum constraint on an absolute expression. X' represents the absolute expression and is used in the objective.

M is a large enough constant. See later.
The binary variable B can be either 0 or 1. With M large enough, this makes one or the other constraint obsolete.

If B is 0, then the equations can be written as:

```
 X >= X'
-X + M >= X'
 X <= X'
-X <= X'
```

So in this case, the restriction X >= X' is active. X must be positive and at least as large as X'. With M large enough, the second constraint is always fulfilled. The third constraint says that X <= X'. The fourth constraint is always fulfilled. In fact, the first and third constraints together force X' to equal X, which is positive in this case.

If B is 1, then the equations can be written as:

```
 X + M >= X'
-X >= X'
 X <= X'
-X <= X'
```

So in this case, the restriction -X >= X' is active. X must be negative and -X at least as large as X'. With M large enough, the first constraint is always fulfilled. The third constraint is always fulfilled. The fourth constraint says that -X <= X'. In fact, the second and fourth constraints together force X' to equal -X, which is positive in this case.

It is important to use a realistic value for M. Don't use for example 1e30 for it. This creates numerical instabilities, and even if it does not, tolerances will give problems. Because of tolerances, B may not be exactly zero, but for example 1e-20. Multiplied by 1e30 this gives not zero, but 1e10! The result is X + 1e10 >= X' instead of X >= X'. Not what was mathematically formulated!

So how big must M be?
Well, we can make a prediction.
Either -X + M >= X' (when X >= X') or X + M >= X' (when X <= -X') must always be TRUE.
That comes down to -abs(X) + M >= X'.
Or, since X' equals abs(X), -abs(X) + M >= abs(X).
Or M >= 2 * abs(X).

If we can predict how large X can become (absolutely), then we can predict a maximum value needed for M for this to work. If abs(X) cannot be larger than maximum, then M can be 2 * maximum.

In most cases, it is possible to determine a reasonable upper bound for X.

In lp-format, the needed equations are:

```
max: X' + Y;

X + M * B - X' >= 0;
X + M * B + X' <= M;
X <= X';
-X <= X';

B <= 1;

int B;
```
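As with the >= minimum constraint, the combined construction can be checked exhaustively on a grid: with B in {0, 1} and M = 2 * bound, the only feasible value of X' is abs(X). A plain-Python sketch (bound and grid are illustrative):

```python
# Verify: with the four constraints and B in {0, 1},
# the feasible X' values collapse to exactly abs(X).
bound = 10.0       # assumed upper bound on abs(X)
M = 2 * bound      # M >= 2 * abs(X), as derived above

def feasible(x, xprime, b):
    return (x + M * b >= xprime and
            -x + M * (1 - b) >= xprime and
            x <= xprime and
            -x <= xprime)

# for each X on a grid, collect every feasible candidate X'
xs = [i * 0.5 for i in range(-20, 21)]
for x in xs:
    vals = [xp for xp in xs if any(feasible(x, xp, b) for b in (0, 1))]
    assert vals == [abs(x)]
print("X' is forced to abs(X) on all", len(xs), "points")
```

B = 0 pins X' to X (feasible only when X >= 0); B = 1 pins X' to -X (feasible only when X <= 0), so X' equals abs(X) either way.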