My pursuit of fake mathematician status is on hold for a bit as I find myself getting deeper into statistical learning and optimization. I’m taking an **online class** on statistical learning, taught by Trevor Hastie and Rob Tibshirani. I don’t think I’m really the target audience, but it does help me read their book, Elements of Statistical Learning by Hastie, Tibshirani, and Friedman [1]. I really like the material but sometimes find it very difficult, so their high-level, non-technical explanations help a lot. In one of their lectures they mentioned that a particular proof was fun, so of course I had to go look into it. It’s actually an exercise in their book, and I liked it enough that I figured I’d write it up. As for my rule about not posting solutions to exercises in books, I have fewer reservations in this case because a solution manual is available **here**. Before we look at the exercise, though, I should give some relevant background and definitions (for my benefit as much as anyone else’s).

We’re concerned with the problem of fitting a function to data. This is of course really vague and there are tons of approaches, but the approach we’re going to use is called polynomial splines, more specifically cubic splines. This basically involves splitting the feature space into a few different regions and fitting a polynomial (a cubic) to the data in each region. For convenience we’re going to work in a one-dimensional feature space, and the x-coordinates which define the regions we are splitting the feature space into will be called knots. We also have a few other requests: we want our function to be continuous, and we want its first two derivatives to be continuous. In general, when working with splines using polynomials of degree $d$ we require $d - 1$ continuous derivatives. Hastie and company note that cubic splines are the most common, and that making the second derivative continuous makes the transition at each knot imperceptible to the human eye. I have absolutely no idea why this is a good criterion for choosing a continuity requirement – honestly I suspect it’s not, since I see no further discussion. But moving on, here’s a more formal-ish definition of a cubic spline.

**Definition 1** *A cubic spline interpolant with knots $\xi_1 < \xi_2 < \cdots < \xi_K$ on feature space $[a, b]$ and target set $\mathbb{R}$ is a function $f : [a, b] \to \mathbb{R}$ defined as follows:*

$$f(x) = \begin{cases} f_0(x) & a \le x < \xi_1 \\ f_i(x) & \xi_i \le x < \xi_{i+1} \\ f_K(x) & \xi_K \le x \le b \end{cases}$$

*where each $f_i$ is a cubic function with $f_{i-1}(\xi_i) = f_i(\xi_i)$, $f_{i-1}'(\xi_i) = f_i'(\xi_i)$, and $f_{i-1}''(\xi_i) = f_i''(\xi_i)$ for $i = 1, \dots, K$. A natural cubic spline is a cubic spline such that $f_0$ and $f_K$ are linear.*
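To make the definition concrete, here’s a quick numerical check (a sketch assuming NumPy and SciPy are available, on made-up data): `scipy.interpolate.CubicSpline` with `bc_type="natural"` constructs exactly this kind of natural cubic spline, and we can verify the continuity and boundary conditions from Definition 1.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up data; the x-coordinates double as the knots.
x = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
y = np.array([1.0, -0.5, 2.0, 0.0, 1.5])

g = CubicSpline(x, y, bc_type="natural")

eps = 1e-6
for knot in x[1:-1]:
    for nu in (0, 1, 2):
        # f, f', and f'' should all match across each interior knot.
        assert abs(g(knot - eps, nu) - g(knot + eps, nu)) < 1e-3

# Natural boundary conditions: zero second derivative at both ends.
assert abs(g(x[0], 2)) < 1e-8 and abs(g(x[-1], 2)) < 1e-8
print("continuity and natural boundary conditions hold")
```

The third derivative, by contrast, does jump at the knots – that’s the discontinuity the definition permits.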

Presumably the $f_i$ are actually fit to data in some way, although I suppose in a strictly technical sense that’s not required. A natural cubic spline is sometimes preferred because polynomials are not really to be trusted when they go off to infinity. There’s still a problem here, though: how do we actually pick the knots $\xi_i$? I suppose in some scenarios there might be a definitive divide in the data, but in general it is not at all obvious. But like everything in statistical learning (at least in my experience so far), a simple idea comes to the rescue. Just make all of the data points knots! This is the maximal set of useful knots, since adding more cannot improve the fit. The result is called the *smoothing spline*. It’s not actually immediately clear why this is a great idea; while we will have minimal training error, why should we expect such an approach to produce a stable hypothesis function? That’s where the exercise posed by Professors Hastie and Tibshirani comes in.
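Incidentally, SciPy (1.10 and later, as far as I can tell) ships this construction directly as `scipy.interpolate.make_smoothing_spline`, which places a knot at every $x_i$. A minimal sketch on made-up noisy data:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

# One knot per data point; lam weights the second-derivative penalty.
spl = make_smoothing_spline(x, y, lam=1e-2)

rss = np.sum((y - spl(x)) ** 2)
print(f"residual sum of squares: {rss:.3f}")
```

Leaving `lam=None` lets SciPy pick the penalty weight by generalized cross-validation instead.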

**Exercise 1** *Given points $\{(x_i, y_i)\}_{i=1}^N$ with $N \ge 2$ and $x_1 < x_2 < \cdots < x_N$, consider the following optimization problem:*

$$\min_f \; \sum_{i=1}^N \big( y_i - f(x_i) \big)^2 + \lambda \int f''(t)^2 \, dt$$

*Show that the minimizer over all functions $f$ of class $C^2$ defined on $[x_1, x_N]$ is a natural cubic spline with knots at each $x_i$ ($C^2$ is the class of functions which have continuous first and second derivatives).*

This objective function has a simple interpretation: the first term is the residual sum of squares, and the second term is a regularization or penalty term with tuning parameter $\lambda$ which penalizes large second derivatives. With $\lambda = 0$, any function that interpolates the data will be a minimizer, and with $\lambda = \infty$ we will be forced to use a linear function, so the problem collapses to least squares, which is a sort of degenerate natural cubic spline. It is much more clear why the minimizer of this objective would be a good model than why just making all data points knots would produce a good model, but it turns out that they are actually one and the same.
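The two extremes are easy to see numerically. Here’s a sketch (assuming SciPy ≥ 1.10 and made-up data) comparing a tiny $\lambda$ against a huge one: the former nearly interpolates, the latter is nearly linear with almost no curvature.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

results = {}
for lam in (1e-6, 1e6):
    spl = make_smoothing_spline(x, y, lam=lam)
    rss = np.sum((y - spl(x)) ** 2)
    # Approximate the penalty term: integral of the squared second derivative.
    t = np.linspace(x[0], x[-1], 2000)
    penalty = np.sum(spl(t, 2) ** 2) * (t[1] - t[0])
    results[lam] = (rss, penalty)
    print(f"lambda={lam:g}: rss={rss:.4f}, penalty={penalty:.4f}")
```

As expected, small $\lambda$ trades curvature for fit and large $\lambda$ does the opposite.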

We begin with the ever-classic proof technique of comparing a candidate solution against an arbitrary competitor. Let $g$ be the natural cubic spline interpolant to the pairs $\{(x_i, y_i)\}$ and let $\tilde{g}$ be another function of class $C^2$ interpolating the $(x_i, y_i)$. We’re going to show that if $\tilde{g}$ is as good a solution as $g$, then it is equal to $g$ on $[x_1, x_N]$. Let $h(x) = \tilde{g}(x) - g(x)$. It’s not too hard to show that $g$ can perfectly interpolate the data (a natural cubic spline with $N$ knots is defined by a set of $N$ basis functions, giving exactly enough degrees of freedom), but we’ll just assume it here. Consider the following calculation, where we let $a = x_1$ and $b = x_N$ for convenience.

$$
\begin{aligned}
\int_a^b g''(t) h''(t) \, dt &= \Big[ g''(t) h'(t) \Big]_a^b - \int_a^b g'''(t) h'(t) \, dt && \text{integration by parts} \\
&= -\int_a^b g'''(t) h'(t) \, dt && \text{$g''(a) = g''(b) = 0$ since $g$ is linear outside the knots} \\
&= -\sum_{j=0}^{N} \int_{x_j}^{x_{j+1}} g'''(t) h'(t) \, dt && \text{splitting the integral, with $x_0 = a$ and $x_{N+1} = b$} \\
&= -\sum_{j=0}^{N} \left( \Big[ g'''(t) h(t) \Big]_{x_j}^{x_{j+1}} - \int_{x_j}^{x_{j+1}} g''''(t) h(t) \, dt \right) && \text{integration by parts again} \\
&= -\sum_{j=0}^{N} \Big[ g'''(t) h(t) \Big]_{x_j}^{x_{j+1}} && \text{$g'''' = 0$ since $g$ is a cubic} \\
&= -\sum_{j=1}^{N-1} g'''(x_j^+) \big( h(x_{j+1}) - h(x_j) \big) && \text{$g'''$ is zero outside the knots and constant between knots} \\
&= 0 && \text{$h(x_j) = 0$ since $g$ and $\tilde{g}$ both interpolate the data}
\end{aligned}
$$
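This identity is easy to check numerically. Here’s a sketch (assuming SciPy, with made-up data, and a quartic interpolant standing in for $\tilde{g}$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import CubicSpline

# Made-up data interpolated two ways.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.5])

g = CubicSpline(x, y, bc_type="natural")        # the natural cubic spline interpolant
p = np.polynomial.Polynomial.fit(x, y, deg=4)   # a competing C^2 interpolant (quartic)

g2 = g.derivative(2)
p2 = p.deriv(2)

def h2(t):
    # h'' where h = g_tilde - g
    return p2(t) - g2(t)

# Integrate g''(t) h''(t) interval by interval (the integrand is smooth between knots).
total = sum(quad(lambda t: g2(t) * h2(t), lo, hi)[0] for lo, hi in zip(x[:-1], x[1:]))
print(f"integral of g'' h'' over [x_1, x_N]: {total:.2e}")
```

The integral comes out to zero up to floating-point error, exactly as the calculation predicts.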

If we plug $h = \tilde{g} - g$ into this result, we see that it implies $\int_a^b g''(t) \tilde{g}''(t) \, dt = \int_a^b g''(t)^2 \, dt$, and we can now use Cauchy–Schwarz (the operation $\langle u, v \rangle = \int_a^b u(t) v(t) \, dt$ defines an inner product, assuming $u$ and $v$ are square-integrable):

$$\int_a^b g''(t)^2 \, dt = \int_a^b g''(t) \tilde{g}''(t) \, dt \le \left( \int_a^b g''(t)^2 \, dt \right)^{1/2} \left( \int_a^b \tilde{g}''(t)^2 \, dt \right)^{1/2} \implies \int_a^b g''(t)^2 \, dt \le \int_a^b \tilde{g}''(t)^2 \, dt$$

Equality holds if and only if $g'' = \tilde{g}''$ almost everywhere, i.e. $h''$ is identically zero on $[a, b]$. This implies that the difference $h = \tilde{g} - g$ is linear on $[a, b]$, but since $g$ and $\tilde{g}$ agree on the $N \ge 2$ points $x_i$, that linear function must actually be zero, so $h$ is identically zero on $[a, b]$.

Armed with this result, it’s apparent that the objective function will evaluate to something greater on any other $\tilde{g}$ that interpolates the data, so $g$ is the unique minimizer over all functions that perfectly interpolate the data. What about functions not perfectly interpolating the data? Could there exist a $\lambda$ where some function that’s not a natural cubic spline produces a slightly greater residual sum of squares but a significantly smaller penalty term? No; for any such function $\tilde{g}$, let $z_i = \tilde{g}(x_i)$ be the values it takes at the $x_i$. We can find a natural cubic spline $g$ that perfectly interpolates the points $(x_i, z_i)$ and run through the same argument with an objective function using the points $(x_i, z_i)$ to show that the penalty term will be smaller for $g$. Since the residual sum of squares in the original objective function is the same for $g$ and $\tilde{g}$, and the penalty term is the same across the two optimization problems, our cubic spline is optimal. We can safely conclude that a natural cubic spline with knots at the $x_i$ is the unique minimizer.
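The penalty inequality at the heart of this argument can be checked the same way as the earlier identity – a sketch with made-up data, where a quartic interpolant again plays the role of $\tilde{g}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 0.0, -1.0, 0.5])

g = CubicSpline(x, y, bc_type="natural")        # natural cubic spline interpolant
p = np.polynomial.Polynomial.fit(x, y, deg=4)   # competing C^2 interpolant

def penalty(second_deriv):
    # Integral of the squared second derivative over [x_1, x_N].
    return sum(quad(lambda t: second_deriv(t) ** 2, lo, hi)[0]
               for lo, hi in zip(x[:-1], x[1:]))

pen_g = penalty(g.derivative(2))
pen_p = penalty(p.deriv(2))
print(f"spline penalty: {pen_g:.4f}  quartic penalty: {pen_p:.4f}")
```

The spline’s penalty comes out strictly smaller, as the Cauchy–Schwarz argument guarantees for any $C^2$ interpolant that isn’t the natural cubic spline itself.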

There are actually a ton of other really cool ideas in this book. Hopefully I’ll find something to say about them instead of just doing one of the exercises, but honestly sometimes I just like to see a finished writeup. I will try to incorporate some of that next time.

Sources:

[1] Hastie, Tibshirani, and Friedman. Elements of Statistical Learning, second edition. Last updated January 2013. A copy is available for free at **http://statweb.stanford.edu/~tibs/ElemStatLearn/download.html**.