Linear Regression

https://github.com/gtraines/linear-regression

Linear regression is an approach to machine/statistical learning generally applied to value-prediction problems. It is a form of supervised learning, in which the training data supplies the "correct" answer for each example in addition to the data points generated by an unknown target function f. Although in this case we were provided a 2-dimensional data set, linear regression can be used on higher-dimensional data sets. The linear regression method assumes that the unknown function f can be approximated by a linear equation of d terms (one weight for each measured feature, plus a constant term for bias). Among machine learning algorithms it is fairly simple, and in his Caltech lectures Dr. Abu-Mostafa calls linear regression "one-step learning."
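The "one-step" character of linear regression comes from its closed-form least-squares solution, w = (XᵀX)⁻¹Xᵀy. The sketch below (an illustration, not code from the linked repository) fits a line to a small 2-dimensional data set using the pseudo-inverse, which handles the case where XᵀX is singular:

```python
import numpy as np

def fit_linear_regression(X, y):
    # Prepend a column of ones so the first weight acts as the bias term.
    Xb = np.column_stack([np.ones(len(X)), X])
    # One-step least-squares solution via the pseudo-inverse of Xb.
    return np.linalg.pinv(Xb) @ y

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return Xb @ w

# Noiseless 2-dimensional example: data generated by f(x) = 3 + 2x.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])
w = fit_linear_regression(X, y)  # recovers bias 3 and slope 2
```

The same code works unchanged on higher-dimensional inputs, since each extra feature simply adds a column to X and a corresponding weight to w.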

Computational Complexity of Learning

Before setting out to build a learning system, practitioners, students, and consumers of machine learning should be able to show rigorously that a learning method can be applied successfully to a given problem. Using statistics and probability, we can show how a supervised classification learner can achieve a level of success called "probably approximately correct" (PAC). Given the desired probability that the learner classifies correctly, we can determine how expensive it will be to approximate the target function computationally (generally in time and space) and how many training examples would be required to achieve that level of correctness.

The Hoeffding Inequality showed that learning is a theoretical possibility: with enough independent samples, a hypothesis's in-sample error is, with high probability, close to its out-of-sample error.
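As a concrete sketch of how the Hoeffding Inequality yields a sample-complexity estimate for a single hypothesis: the bound P(|E_in − E_out| > ε) ≤ 2·exp(−2ε²N) can be set equal to a tolerated failure probability δ and solved for N. The function name and parameter choices below are illustrative, not from the source:

```python
import math

def hoeffding_sample_size(eps, delta):
    # Smallest N satisfying 2 * exp(-2 * eps**2 * N) <= delta,
    # obtained by taking logs and solving for N.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# Example: require the in-sample error to be within 0.05 of the
# out-of-sample error with probability at least 95% (delta = 0.05).
n = hoeffding_sample_size(eps=0.05, delta=0.05)  # 738 examples
```

Tightening either the accuracy ε or the confidence δ increases the required N, which makes precise the trade-off described above between desired correctness and the cost of learning.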