Let \(f(x)\) be some non-negative, continuous function over the interval \([a,b]\). The area under the graph of \(f(x)\) is given by the definite integral:

\[ \text{Area under f} = \int_a^b f(x) dx \]

Computing this area is *often* made easier with the Fundamental Theorem of Calculus, which states, in one form, that a definite integral can be computed from knowledge of an antiderivative. In particular, if \(F(x)\) is an antiderivative of the continuous function \(f(x)\), then

\[ \int_a^b f(x) dx = F(b) - F(a). \]

This is great as long as some antiderivative is known. However, it is a fact of life that not all nice functions have an antiderivative in a convenient form. In these cases, the above approach is of no help. Rather, to find the area, one can turn to approximations that get progressively better as more terms are taken.

One such approximation is given by the familiar Riemann sums, which we will look at here. However, the problem of trying to find the area of geometric figures did not start with Riemann some 150 years ago; indeed, it has a much longer history. In the time of Pythagoras the idea of calculating area was one of being able to construct a square of equal area to a figure. This was known as *quadrature*. Finding such answers for figures bounded by curves was difficult, though Archimedes effectively computed the area under \(f(x) = x^2\) about 2,000 years before Riemann sums, using triangles, not rectangles, to approximate the area. By medieval Europe, the term *quadrature* had evolved to mean the computation of an area by any means. For example, Galileo and Roberval found the area bounded by a cycloid arch. Along the way, other approximations were used. We will see those due to Simpson and Gauss, both predating Riemann.

In 1854 Riemann was the first to give a rigorous definition of the integral of a continuous function on a closed interval, the problem we wish to solve here, using the concept of a *Riemann sum*. A Riemann sum is one of the simplest approximations to the area under a curve to understand. The basic idea is that the interval \([a,b]\) is partitioned through points \(a = x_0 < x_1 < \cdots < x_n = b\), and the area under \(f(x)\) between \(x_i\) and \(x_{i+1}\) is approximated by a rectangle with base \(x_{i+1} - x_i\) and height \(f(x_i^*)\), where \(x_i^*\) is some point in the interval \([x_i, x_{i+1}]\). Typical choices are the left point or the right point of the interval, or the \(x\) value which minimizes or maximizes \(f\) over the interval. The figure shows these four choices for some sample function. For a *Riemann integrable* function, such as a continuous function on \([a,b]\), any of the choices will yield the same value as the partition's mesh shrinks to \(0\).
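
The four choices can be made concrete with a minimal, self-contained sketch (the names below are our own, not code used later in this text). For \(f(x) = x^2\) on \([0,1]\), an increasing function, the minimum and maximum over each subinterval occur at the left and right endpoints:

```julia
# four choices of x_i^* for f(x) = x^2 on [0,1] with n = 50 equal subintervals;
# as f is increasing here, the min/max over a subinterval sit at its endpoints
f(x) = x^2
a, b, n = 0, 1, 50
delta = (b - a)/n
xs = [a + i*delta for i in 0:n]      # the partition a = x_0 < x_1 < ... < x_n = b

left  = sum([f(xs[i]) * delta for i in 1:n])
right = sum([f(xs[i+1]) * delta for i in 1:n])
lower = sum([min(f(xs[i]), f(xs[i+1])) * delta for i in 1:n])
upper = sum([max(f(xs[i]), f(xs[i+1])) * delta for i in 1:n])

(left, right)    # both bracket the true area 1/3 and agree as n grows
```

All four values squeeze toward \(1/3\) as \(n\) grows, illustrating the claim that the choice of \(x_i^*\) does not matter in the limit.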

As with other limits, we can numerically approximate the limit by computing the Riemann sum for some partition. The steps for this include:

- creating a partition of \([a,b]\) (we will use evenly spaced points for convenience),
- selecting the \(x_i^*\) within the partition,
- computing the values \(f(x_i^*)(x_{i+1} - x_i)\) for each \(i\), and
- adding these values up.

If we partition \([a,b]\) into \(n\) equal-sized intervals, then each has length \(\delta = (b-a)/n\), and so consecutive points are separated by this amount. As such, we can choose our \(a = x_0 < x_1 < \dots < x_n = b\) with commands like:

```
a, b, n = 1, 3, 5;
delta = (b-a)/n;
x = a + (0:n) * delta # alternatively linspace(a, b, n+1)
## 1.0:0.4:3.0
```

To apply a function to a range of values, we may use a map, a comprehension, a `for` loop, or the "dot" notation. We will use `map` here. Recall the syntax:

```
f(x) = x^2
fx = map(f, x)
## [1.0,1.9599999999999997,3.24,4.840000000000001,6.760000000000001,9.0]
```

Now to add the numbers up. For this task, the `sum` function is available:

```
sum(fx)
## 26.8
```

Okay, just one subtlety: we really only want the points

```
[ a + (0:n-1) * delta ]'
## 1x5 Array{Float64,2}:
## 1.0 1.4 1.8 2.2 2.6
```

for the left Riemann sum and the points

```
[ a + (1:n) * delta ]
## [1.4,1.7999999999999998,2.2,2.6,3.0]
```

for the right.

Putting this together, here are commands to approximate the area under the curve \(f(x)=x^2\) over \([1,3]\) using \(n=5\) left Riemann rectangles:

```
f(x) = x^2
a, b, n = 1, 3, 5; ## note n=5
delta = (b - a)/n; ## nothing to change below here
xs = a + (0:n-1) * delta; ## left endpoints; the right sum uses a + (1:n) * delta
fx = map(f, xs);
sum(fx) * delta
## 7.120000000000001
```

We compare this value to the known value from the Fundamental Theorem of Calculus, as \(F(x) = x^3/3\) is an antiderivative:

```
F(x) = x^3/3
F(b) - F(a)
## 8.666666666666666
```

Boy, not too close. We need a better approximation of course, which simply means that we need `n` to be bigger.

Repeat with n=100

For the same problem, let \(n=100\). What do you get?

Repeat with n=1,000

For the same problem, let \(n=1000\). What do you get?

Repeat with n=10,000

For the same problem, let \(n=10,000\). Is the difference between the answer and the actual answer within \(0.001\)?

How big should n be? (Strang)

Let \(f(x) = (10 + \cos(2\pi x))^{-1}\). For the integral over \([0,1]\), the known answer is \(1/\sqrt{99}\). How big must \(n\) be so that the error in the Riemann sum is less than \(10^{-8}\)?

Here we write a function to do the integration. This needs the basic inputs of

- a function,
- the interval's start and end value, and
- the number of equal-sized subintervals.

In addition, we allow for the possibility of passing in a function to compute the approximate area for a given subinterval. We give a default value where the left-hand endpoint is chosen.

You can type or copy and paste these two function definitions in:

```
function integrate(f::Function, a::Real, b::Real, n::Integer, approx_area::Function)
    delta = (b-a)/n
    x = a + (0:n) * delta
    sum( [ approx_area(f, x[i], x[i+1]) for i in 1:n ] )
end
```

We will use the left endpoint for the default choice of point in each subinterval:

```
function integrate(f::Function, a::Real, b::Real, n::Integer)
    left_riemann(f::Function, a, b) = f(a)*(b-a)
    integrate(f, a, b, n, left_riemann)
end
```

The basic usage of the `integrate` function is straightforward. Here we approximate the integral of \(e^{-x^2}\) from \(0\) to \(3\) using \(10,000\) subintervals:

```
f(x) = exp(-x^2)
integrate(f, 0, 3, 10_000)
## 0.8863573297424971
```

How big should the number of intervals be? More intervals will give better answers, but unlike Newton's method we have no stopping criteria. For this problem, we look at various values based on `n`:

```
[integrate(f, 0, 3, n) for n in [100, 1000, 10000, 100000]] ## or use 10.^(2:5)
## {0.9012054416030275,0.8877071625894734,0.8863573297424971,0.8862223464083187}
```

We see a value around \(0.886\) as the answer.

For some integrals, you may need to make a minor adjustment for lack of continuity. The function \(f(x) = \sin(x)/x\) over the interval \([0, \pi]\) has to be defined to be \(1\) at \(0\) to be continuous. We do so here:

`f(x) = x > 0 ? sin(x)/x : 1`

Then `integrate` may be used as before, this time with \(50,000\) subintervals:

```
integrate(f, 0, pi, 50_000)
## 1.8519684678042823
```

Had we simply specified `f(x) = sin(x)/x`, then `julia` would have returned `NaN` for `x=0`, which would have led to the entire integral being computed as `NaN`:

```
f(x) = sin(x)/x
integrate(f, 0, pi, 50_000)
## NaN
```

If we define `right_riemann` as

`right_riemann(f, a, b) = f(b) * (b-a)`

then we can compare the right and left Riemann sums. Let's do so for the monotonic function \(e^x\) over the interval \([0,2]\).

```
f(x) = exp(x)
ns = 10.^(2:5);
ys = [integrate(f, 0, 2, n, right_riemann) - integrate(f, 0, 2, n) for n in ns];
[ns ys]
## 4x2 Array{Any,2}:
## 100 0.127781
## 1000 0.0127781
## 10000 0.00127781
## 100000 0.000127781
```

Since these are also the minimum and maximum Riemann sums for this monotone function, the above gives a bound on the error in the approximations. We can see the convergence is quite slow: quite a few computations are needed to get even a modest bound (\(n = 100{,}000\) for a bound of \(0.00013\)).
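
For a monotone increasing \(f\) on a uniform partition, this difference can be computed exactly, as the sum telescopes:

\[ \text{right} - \text{left} = \sum_{i=0}^{n-1} \left( f(x_{i+1}) - f(x_i) \right) \delta = \left( f(b) - f(a) \right) \frac{b-a}{n}. \]

For \(e^x\) on \([0,2]\) this is \((e^2 - 1) \cdot 2/n \approx 12.778/n\), matching the values in the table above.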

Define the midpoint area by

`midpoint(f, a, b) = f((a+b)/2) * (b-a)`

With \(n=10,000\), what does `integrate` return when \(f(x) = \sin^2(x)\) between \(0\) and \(\pi\)?

Repeat the above analysis comparing the right and left Riemann sums, but this time multiply by \(n\), as follows:

```
f(x) = exp(x)
[n * (integrate(f, 0, 2, n, right_riemann) - integrate(f, 0, 2, n)) for n in 10.^(2:5)]
## {12.778112197861269,12.778112197860736,12.77811219787317,12.778112197864289}
```

This shows what?

The basic left or right Riemann sum will converge, but the convergence is really slow. The value of using rectangles over a grid to approximate area is for theoretical computations; for numeric computations better approximations were known well before Riemann. We mention a few:

- The trapezoid rule and Simpson's rule approximate the area under the curve better: instead of a rectangle they use a trapezoid (a linear fit between two points) or a quadratic fit between the points.
- Gauss quadrature uses non-evenly selected points within the range and a weighting which is exact for polynomials of a given degree.
- Adaptive methods pick a non-uniform set of points to use based on where a function is less well behaved.

The trapezoid rule simply replaces the approximation of the area in a subinterval by that of a trapezoid, as opposed to a rectangle. Easy enough: digging up the formula from geometry for the area of a trapezoid, we can write our approximation function as:

`trapezoid(f, a, b) = (1/2) * (f(a) + f(b)) * (b-a)`

We can use this as follows. Let's approximate the area under the curve \(5x^4\) between \(0\) and \(1\) (with known answer \(1\)):

```
f(x) = 5x^4
integrate(f, 0, 1, 1000, trapezoid)
## 1.0000016666665
```

Pretty close to 1 with just 1,000 subintervals.

We now compare the error with the left Riemann sum for the same size \(n\):

```
ns = 10 .^ (2:5);
left_r = [integrate(f, 0, 1, n) for n in ns];
trapezoid_r = [integrate(f, 0, 1, n, trapezoid) for n in ns];
[ns 1-left_r 1-trapezoid_r]
## 4x3 Array{Any,2}:
## 100 0.0248333 -0.000166665
## 1000 0.00249833 -1.66667e-6
## 10000 0.000249983 -1.66667e-8
## 100000 2.49998e-5 -1.66667e-10
```

One can see that the errors are much smaller for the trapezoid method.

The trapezoid rule can be viewed as a simple linear approximation to the function \(f(x)\) over the subinterval \([a, b]\). That is, replace the function with the secant line between these two values and integrate the replacement. With this viewpoint, it is possible that other easy-to-integrate function approximations will lead to improved approximate integrals. Simpson's method can be viewed in just this way. It replaces \(f\) by the parabola going through \((a, f(a))\), \((c, f(c))\) and \((b, f(b))\), where \(c=(a+b)/2\) is the midpoint between \(a\) and \(b\). The resulting area after this approximation is:

`simpsons(f, a, b) = (f(a) + 4f((a+b)/2) + f(b))*(b-a)/6`

We compare how accurate we get with this rule for the same `f` as before:

```
simpsons_r = [integrate(f, 0, 1, n, simpsons) for n in ns];
[ns 1-left_r 1-trapezoid_r 1-simpsons_r]
## 4x4 Array{Any,2}:
## 100 0.0248333 -0.000166665 -4.16667e-10
## 1000 0.00249833 -1.66667e-6 -4.17444e-14
## 10000 0.000249983 -1.66667e-8 0.0
## 100000 2.49998e-5 -1.66667e-10 0.0
```

As can be seen, for this function approximating with a parabola converges much more quickly. That is, \(n\) can be smaller yet the same accuracy is maintained. (Of course, each subinterval involves more function evaluations, so the total number of operations may or may not be fewer; that would require some analysis.)

It can be shown that the error for Simpson's method is bounded by

\[ \frac{1}{90}\frac{1}{2^5} M (b-a)^5 \frac{1}{n^4}, \]

where \(M\) is a bound on the fourth derivative. As we increase \(n\), the error gets small at a quick rate. By contrast, the error for the trapezoid method will be like \(n^{-2}\) and the left Riemann sum like \(n^{-1}\).
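
These rates can be checked empirically with a small, self-contained sketch (the helpers `composite`, `left`, `trap`, and `simp` below are our own names, not functions defined earlier). Doubling \(n\) should cut the error by a factor of about \(2\) for the left Riemann sum, \(4\) for the trapezoid rule, and \(16\) for Simpson's rule:

```julia
# empirically check the convergence rates for f(x) = exp(x) on [0,1],
# where the exact integral is e - 1
f(x) = exp(x)
exact = exp(1) - 1

# apply a per-subinterval rule over n equal subintervals of [a, b]
function composite(rule, a, b, n)
    delta = (b - a)/n
    xs = [a + i*delta for i in 0:n]
    sum([rule(xs[i], xs[i+1]) for i in 1:n])
end

left(l, r) = f(l) * (r - l)                             # left Riemann rectangle
trap(l, r) = (f(l) + f(r)) * (r - l)/2                  # trapezoid
simp(l, r) = (f(l) + 4f((l + r)/2) + f(r)) * (r - l)/6  # Simpson's parabola

for (rule, p) in [(left, 1), (trap, 2), (simp, 4)]
    e1 = abs(composite(rule, 0, 1, 50) - exact)
    e2 = abs(composite(rule, 0, 1, 100) - exact)
    println("error ratio: ", e1/e2, " (expect about ", 2^p, ")")
end
```

The printed ratios come out near \(2\), \(4\), and \(16\), as the error exponents \(n^{-1}\), \(n^{-2}\), and \(n^{-4}\) predict.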

The trapezoid rule has no error for linear functions and Simpson's rule has no error for quadratic functions. Verify the latter by computing the following:

```
f(x) = x^2; F(x) = x^3/3
integrate(f, 0, 10, 100, simpsons) - (F(10) - F(0));
```

How accurate is the approximation?

Compare the difference between the trapezoid rule and Simpson's rule when integrating \(\cos(x)\) from \(0\) to \(\pi/6\) with \(n=1000\):

`integrate(cos, 0, pi/6, 1000, trapezoid) - integrate(cos, 0, pi/6, 1000, simpsons);`

How big is the difference?

Using Simpson's rule and `n=3800`, compute the integral of \(f(x) = 1/(1+x^2)\) between \(0\) and \(1\).

`Julia` provides the `quadgk` function to do adaptive Gauss-Kronrod quadrature, a modern, fast and accurate means to compute 1-dimensional integrals numerically. The use is straightforward, and similar to `integrate` above: you specify a function object and the limits of integration. You don't specify \(n\) -- as this is computed adaptively -- but you can optionally specify a tolerance which controls the accuracy, though we don't do so here. For example, a typical usage might be:

```
quadgk(sin, 0, pi) ## 2 is exact
## (2.0000000000000004,1.7896795156957523e-12)
```

Two values are returned: the answer and an estimate of the error. In the above, \(2\) is the exact answer to this integral; the estimated value is just a bit more than \(2\), but is guaranteed to be off by no more than the second value, \(1.78 \cdot 10^{-12}\).

The known answer here is \(1/3\), and `quadgk` gets it right for all the digits:

```
quadgk(x -> x^2, 0, 1)
## (0.3333333333333333,5.551115123125783e-17)
```

For other integration routines, the `Cubature` package is an interface to the Cubature library (http://ab-initio.mit.edu/wiki/index.php/Cubature), which provides several. *Cubature* is the term for higher-dimensional integrals; *quadrature* refers to finding area. In that package, the function `hquadrature` is similar to `quadgk`. (The two are written by the same author.)

Let \(f(x) = \exp(-4 \cdot |x-1/2|)\). Find the integral over \([0,1]\) using `quadgk`:

Let \(f(x) = \sin(100\pi x)/(\pi x)\). Find the integral over \([0,1]\) using `quadgk`:

Let \(f(x) = \sin(100\pi x)/(100\pi x)\). Using \(1,000\) points, find the Riemann sum with right-hand endpoints:

```
begin
    f(x) = sin(100*pi*x)/(100*pi*x)
    right_riemann(f::Function, a, b) = f(b) * (b-a)
    val = integrate(f, 0, 1, 1000, right_riemann);
end
## 0.0044899515576561425
```

What is it?

(The answer via Riemann sums isn't even correct to 4 decimal points, due to the highly oscillatory nature of the function.)

How far off is this?

```
begin
    f(x) = 1/(1 + x^4)
    quadgk(f, 0, 1)[1] - integrate(f, 0, 1, 100000);
end
## -2.499991666571333e-6
```

The `quadgk` function allows you to specify points where there are troubles. For example, we know that \(f(x) = \sin(x)/x\) has an issue at \(0\). Directly trying this integral with `quadgk(x -> sin(x)/x, -pi, pi)` will fail, but you can specify the issue at \(0\) as follows: `quadgk(x -> sin(x)/x, -pi, 0, pi)`. Do so. What is the value of the result?

Let \(f(x) = |x - 0.3|^{-1/4}\). We wish to find \(\int_0^1 f(x) dx\). The problem with this function is the singularity at \(x=0.3\). (That is, the function is not continuous, so there is no guarantee that an integral over a closed domain exists.) However, some such integrals do exist, and the `quadgk` function can integrate around such singularities if they are spelled out in the domain of integration. Just specify the trouble spots between the endpoints:

```
f(x) = abs(x - 0.3)^(-1/4)
val = quadgk(f, 0, 0.3, 1);
```

Following the above, what answer do you get?

There are many more applications of the integral beyond computing areas under the curve. Here we discuss two:

- finding the volume of a figure with rotational symmetry (a glass in our example) and
- finding the arc length of a curve.

In each case one integrates a function related to the one describing the problem. If you keep this straight, the applications are no different than above.

For a symmetrical drinking vessel, like most every glass you drink from, the volume can be computed from a formula if a function describing the radius is known. For a given glass, let \(r(h)\) give the radius as a function of height. Then the volume of the vessel as a function of height, \(b\), is given by an integral:

\[ V(b) = \int_0^b \pi (r(h))^2 dh \]

We wish to test our intuition relating the height of the fluid in the vessel to the percentage of the total volume. A basic question might be: if the vessel is filled halfway by height, is the volume half of the total, more, or less?

The answer, of course, depends on the shape of the glass -- that is, on the shape of the function \(r(h)\). Note, if \(r(h)\) is a constant -- the glass is a cylinder -- then the half-height mark is also the half-volume mark. Not so in general.
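
The cylinder claim is easy to check numerically. Here is a minimal sketch (the `volume` helper is our own left Riemann sum, not a function defined in this text):

```julia
r(h) = 3.0                  # constant radius: the glass is a cylinder

# left Riemann sum of pi * r(h)^2 over [0, b] with n subintervals
function volume(b, n)
    delta = b/n
    sum([pi * r(i*delta)^2 * delta for i in 0:(n-1)])
end

volume(5.0, 1000) / volume(10.0, 1000)   # half the height holds half the volume: 0.5
```

For the cylinder the ratio is exactly \(1/2\); the interesting cases below are glasses where it is not.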

In *Glass Shape Influences Consumption Rate for Alcoholic Beverages* the authors demonstrate that the shape of the glass can have an effect on the rate of consumption, presumably because people drink faster when they aren't sure how much they have left. In particular, they comment that people have difficulty judging the half-finished-by-volume mark.

We work with metric units, as there is a natural relation between volume in cm\(^3\) and liquid measure (1 liter = 1000 cm\(^3\), so a 16-oz pint glass is roughly \(450\) cm\(^3\).)

Let two glasses be given as follows. A typical pint glass with linearly increasing radius:

\[ r(h) = 3 + \frac{1}{5}h, \quad 0 \leq h \leq b; \]

and a curved edge one:

\[ s(h) = 2 + \log(1 + h), \quad 0 \leq h \leq b \]

One could also consider a fluted one, such as appears in the comparison noted in the article.

Which of these functions might describe a fluted glass where the radius changes faster as the height gets bigger, that is the radius is a *concave up* function?

For the two types of glasses in the figure, we create functions in `julia` as follows:

```
r(h) = 3 + h/5
s(h) = 2 + log(1 + h)
r_vol(b) = quadgk(x -> pi*r(x)^2, 0, b)[1]
s_vol(b) = quadgk(x -> pi*s(x)^2, 0, b)[1]
```

Then we can easily find the volume as a function of height. For example at 10cm we have:

```
(r_vol(10), s_vol(10))
## (513.1268000863329,427.26481657392833)
```

However, to find \(b\) that makes the glass \(450\) cm\(^3\) requires us to solve an equation involving an integral for \(b\):

\[ V(b) = \int_0^b \pi r(h)^2 dh = 450. \]

Not to worry, we can use `fzero` from the `Roots` package for that. To solve for when `r_vol(b) - 450 = 0` we have:

```
using Roots
r_b = fzero(x -> r_vol(x) - 450, 10)
## 9.168923214523723
```

So \(b\) is basically \(9.17\). Given this, how much volume is in the glass at height `b/2`?

```
r_vol(r_b/2)
## 173.27527610482952
```

That is about \(38.5\) percent of the total (\(\approx 173.28/450 \cdot 100\)). As this height is often mistaken for the half-way-by-volume mark, people tend to drink these pints faster than they think.

Now compare to the height to get half the volume (225 ml):

```
fzero(x -> r_vol(x) - 225, 5)
## 5.603662378152274
```

Or about \(5.6037\). That is, when you are at about \(61\) percent of the height (\(\approx 5.6037/9.169 \cdot 100\)) you have only half the volume remaining (and not at 50% of the height).

Compare the above for the curved glass, where \(s(h) = 2 + \log(1 + h)\).

What is the height of the glass, `b`, needed to make the volume 450?

Find the volume of the glass represented by \(s(h) = 2 + \log(1 + h), 0 \leq h \leq b\) when the glass is filled to half its height. Report the value as a percentage of the total volume.

Now, what height of filling will produce half the volume? Report your answer as a percentage of \(b\), the height of the glass.

The basic definite integral of a positive function gives the area under the curve over a given interval. However, the integral can be interpreted in many different ways. For example, one can use an integral to answer how long a curve is. Of course, one can estimate this answer. For example, consider this curve:

`plot(x -> x^2, 0, 1)`

This curve has length no more than \(2 = 1 + 1\) -- the distance along the \(x\) axis starting at \(0\) going to \(1\) and then going up. It is also longer than \(\sqrt{2} = \sqrt{1^2 + 1^2}\) -- the straight-line distance between the two endpoints. But how long is it?

In general, the *arc length* of the curve \(y=f(x)\) for \(a \leq x \leq b\) (that is, how long the curve is) is given through the formula

\[ l = \int_a^b \sqrt{1 + f'(x)^2} dx \]

The formula is from the length of the hypotenuse of a right triangle with lengths \(1\) and \(f'(x)\), though why is left for another day.

The arc length is easily computed using numeric integration, with the derivative operator `D` from the `Roots` package standing in for \(f'\) (the `'` notation is not differentiation here). For example, our answer for \(f(x) = x^2\) is given by

```
using Roots
f(x) = x^2
quadgk(x -> sqrt(1 + D(f)(x)^2), 0, 1)
## the first value returned is approximately 1.47894
```

Whereas, the length of \(f(x) = \sin(x)\) over \([0, \pi]\) would be:

```
f(x) = sin(x)
quadgk(x -> sqrt(1 + D(f)(x)^2), 0, pi)
## the first value returned is approximately 3.8202
```

Next we look at a more practical problem.

A *catenary* shape (http://en.wikipedia.org/wiki/Catenary) is the shape a hanging chain takes as it is suspended between two posts. Of course, power wires will also have this shape between towers. A formula for a catenary can be written in terms of the hyperbolic cosine, `cosh` in `julia`:

\[ y = a \cosh(x/a) = a \cdot \frac{e^{x/a} + e^{-x/a}}{2}. \]

Suppose we have the following wire hung between \(x=-1\) and \(x=1\) with \(a = 2\):

```
a = 2;
f(x) = a * cosh(x/a)
plot(f, -1, 1)
```

How long is the chain? Looking at the graph we can guess an answer is between \(2\) and \(2.5\), say, but it isn't much work to get much closer to the answer:

```
quadgk(x -> sqrt(1 + D(f)(x)^2), -1, 1)
## the first value returned is approximately 2.0844
```

The sag in the chain is adjusted through the parameter \(a\) -- chains with larger \(a\) have less sag.
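
As an aside, this particular arc length has a closed form to check against: since \(y' = \sinh(x/a)\) and \(1 + \sinh^2(x/a) = \cosh^2(x/a)\), the length over \([-1,1]\) is \(a\sinh(x/a)\big|_{-1}^{1} = 2a\sinh(1/a)\). This self-contained sketch verifies that with a composite Simpson's sum (our own stand-in for `quadgk`):

```julia
a = 2
integrand(x) = sqrt(1 + sinh(x/a)^2)   # sqrt(1 + y'(x)^2) for y = a*cosh(x/a)

# composite Simpson's rule over [-1, 1] with n subintervals
n = 100
delta = 2/n
xs = [-1 + i*delta for i in 0:n]
approx = sum([(integrand(xs[i]) + 4integrand((xs[i] + xs[i+1])/2) + integrand(xs[i+1])) * delta/6
              for i in 1:n])

closed_form = 2a * sinh(1/a)           # = 4*sinh(1/2), about 2.0844
```

The two values agree to many digits, consistent with the `quadgk` computation above.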

Suppose your chain has parameter `a = 2.58`; what is the length? (Use `quadgk`.)

Suspension bridges, like the Verrazano bridge, have different loading than a cable and hence a different shape. A parabola is the shape the cable takes under uniform loading (cf. page 19 of http://calteches.library.caltech.edu/4007/1/Calculus.pdf for a picture).

The Verrazano-Narrows bridge has a span of 1298m. Suppose the drop of the main cables is 147 meters over this span. Then the cable itself can be modeled as a parabola with

- \(x\)-intercepts \(a = 1298/2\) and \(-a\) and
- vertex \((0,b)\) with \(b=-147\).

The parabola that fits these three points is

\[ y = \frac{-b}{a^2}(x^2 - a^2) \]

Find the arc length of the cable in meters.

A boat sits at the point \((a, 0)\) and a man at the origin \((0,0)\) holds a rope taut attached to the boat. The man walks along the \(y\) axis. The position \(y\) then depends on the position \(x\) of the boat, and if the rope is kept taut, the position satisfies:

\[ y = a \ln\frac{a + \sqrt{a^2 - x^2}}{x} - \sqrt{a^2 - x^2} \]

This can be entered into `julia` as:

`g(x, a) = a * log((a + sqrt(a^2 - x^2))/x) - sqrt(a^2 - x^2)`

Let \(a = 16\) and \(f(x) = g(x, a)\). Compute the length the bow of the boat has traveled between \(x=1\) and \(x=a\) using `quadgk`.

(The most elementary description of this curve is in terms of the relationship \(dy/dx = -\sqrt{a^2-x^2}/x\), which could be used in place of `D(f)` in your work.)

More on the tractrix

Watch this video "bicycle tracks" to see an example of how the tractrix can be found in an everyday observation.

What components go into the `quadgk` function? This section covers some of the background.

The trapezoid rule can be rearranged to become:

\[ \frac{\delta}{2} f(x_0) + \delta f(x_1) + \delta f(x_2) + \cdots + \delta f(x_{n-1}) + \frac{\delta}{2} f(x_n) \]

where \(\delta = (b-a)/n\).

Whereas for even \(n\), Simpson's rule can be written with:

\[ \delta f(x_0) + 4\delta f(x_1) + 2 \delta f(x_2) + \cdots + 2 \delta f(x_{n-2}) + 4 \delta f(x_{n-1}) + \delta f(x_n) \]

with \(\delta = (b-a)/n \cdot (1/3)\).

These both have the general form of

\[ \sum_k w_k \cdot f(x_k) \]

where \(w_k\) are weights and the \(x_k\) some choice of points -- not necessarily evenly spaced, though that is so in the examples we've seen. So, an alternative way to do the trapezoid formula in `julia` for \(n=4\) might be:

```
a, b, n = 0, 1, 4
## (0,1,4)
delta = (b-a)/n
## 0.25
f(x) = x^2
w = delta/2 * [1, 2, 2, 2, 1] ## in general: delta/2 * [1, 2, ..., 2, 1]
## [0.125,0.25,0.25,0.25,0.125]
x = a + (0:n) * delta
## 0.0:0.25:1.0
sum(w .* [f(xi) for xi in x]) ## note .* has the leading dot
## 0.34375
```

The compact code to compute the approximate integral, `sum(w .* [f(xi) for xi in x])`, shows there are three important things in this form of the integral: the weights, the *nodes* or \(x\) values, and the function. We have used equally spaced nodes so far, but it need not be the case. In fact, Gauss showed he could get similar answers faster if it wasn't the case.

The Gauss nodes and weights are computable (http://en.wikipedia.org/wiki/Gaussian_quadrature). The main tools are the so-called Legendre polynomials, which can be defined recursively with Bonnet's formula:

\[ P_0(x) = 1; P_1(x) = x; \quad n P_{n}(x) = (2(n-1)+1) x P_{n-1}(x) -(n-1) P_{n-2}(x). \]

Using `julia`'s `Polynomial` package, this can be implemented almost verbatim:

```
using Polynomial
function lgp(n::Integer)
    if n == 0 return Poly([1]) end
    if n == 1 return Poly([1, 0]) end
    (2*(n-1) + 1) / n * lgp(1) * lgp(n-1) - (n-1)/n * lgp(n-2)
end
```

This is used as,

```
p4 = lgp(4)
## Poly(4.375x^4 + -3.75x^2 + 0.375)
```

The term *recursion* is applied to a function when it makes a reference to itself during a computation. With this function, don't try it with values much bigger than \(20\), as the recursion can take a long time.
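
The slowness comes from the double recursion: each call recomputes all the smaller cases. When only the *values* \(P_n(x)\) are needed, Bonnet's formula can be run forward iteratively instead. The helper below is our own (not part of the `Polynomial` package) and is fast for any \(n\):

```julia
# evaluate the Legendre polynomial P_n at x by running Bonnet's recursion
# forward, so each intermediate value is computed exactly once
function legendre(n::Integer, x::Real)
    n == 0 && return 1.0
    p0, p1 = 1.0, float(x)
    for k in 2:n
        p0, p1 = p1, ((2k - 1)*x*p1 - (k - 1)*p0)/k
    end
    p1
end

legendre(4, 0.5)   # agrees with evaluating p4 at 0.5: -0.2890625
```

The same idea (compute each case once) could also be applied to `lgp` itself by caching the `Poly` objects.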

The nodes are the roots of the corresponding polynomial. Here are the values for `p4`:

```
xk = roots(p4)
## [-0.3399810435848559,0.3399810435848554,-0.8611363115940524,0.8611363115940529]
```

(The Kronrod part of `quadgk` changes the nodes so they can be reused during the refinement.)

Finally, the weights involve the derivative of \(P_n\) through:

\[ w_i = \frac{2}{(1 - x_i^2) \cdot(P^{'}_n(x_i)/P_n(1))^2} \]

These can be done simply with:

```
P = p4 / polyval(p4, 1) ## normalize p4 so it is 1 at 1.
## Poly(4.375x^4 + -3.75x^2 + 0.375)
weights(x) = 2 / (1 - x^2) / (polyval(polydir(P), x))^2
wk = [weights(xi) for xi in xk]
## {0.6521451548625462,0.6521451548625466,0.34785484513745457,0.34785484513745296}
```

From here, a `gauss_quadrature` function could do the integration of `f` over the interval \([-1,1]\), though we can do it ourselves quickly enough. Here we compute the integral of \(\cos(\pi x/2)\) over \([-1,1]\) (you can check this is very close to the exact answer \(4/\pi\), even with just 4 nodes):

```
f(x) = cos(pi/2*x)
sum( wk .* [f(xi) for xi in xk])
## 1.2732295042595088
```

Next, we have a brief discussion about an alternative means to compute integrals. The following function `adapt` implements a basic *adaptive quadrature* method for integration. The basic idea is that for a subinterval \([a,b]\), if the area of the trapezoid is not close to the area of Simpson's parabolic estimate, then the subinterval is split into two pieces \([a,c]\) and \([c,b]\) and the same question is asked of each. If the areas are close, Simpson's parabolic estimate is used for the integral of \(f\) over that subinterval.

Again, we see *recursion* when programming this algorithm. To avoid infinite loops, we use a `limit` argument below to keep track of the depth.

In general, the value of adaptive methods like this is that the function calls concentrate on areas where \(f\) is not well approximated; where it is well approximated, the algorithm just moves on. This approach works well for poorly behaved functions, as the grid is more refined there.

```
function adapt(f, a, b, limit)
    close_enough(x, y) = abs(x - y) < sqrt(eps())
    ## println("adapt called with a=$a, b=$b, limit=$limit")
    h = b - a
    c = (a + b)/2
    a1 = (f(a) + f(b)) * h/2 ## trapezoid
    a2 = (f(a) + 4f(c) + f(b)) * h/6 ## Simpson's parabola
    if close_enough(a1, a2)
        return(a2)
    end
    if limit == 0
        println("limit reached for this interval [$a, $b]")
        return(a2)
    end
    adapt(f, a, c, limit - 1) + adapt(f, c, b, limit - 1)
end
```

Does it work? Let's see it for the area under \(f(x) = x^2(1-x)^{10}\), which is known to equal the beta function value \(B(2+1, 10+1) = 2! \, 10!/13! = 2/1716 \approx 0.0011655\):

```
out = adapt(x -> x^2 * (1 -x)^10, 0, 1, 10)
## 0.0011655014497573913
```

Not too far off (about `1e-10`) from the known answer, which is a beta function:

```
out - beta(2 + 1, 10 + 1)
## 2.842562207752003e-10
```