https://github.com/gtraines/perceptron_classification

*Background*

One of the fundamental concepts in artificial intelligence and machine learning is the perceptron learning algorithm, which gives life to the abstract data structure known as the perceptron. The perceptron is a data structure created to resemble the functioning of a neuron in the brain. It has a set of inputs (variable values), each of which has an excitatory (positive) or inhibitory (negative) weight associated with it. During the training phase, the perceptron receives a set of values corresponding to its inputs along with an expected target outcome. If the sum of the weights multiplied by their corresponding input values is greater than a threshold value, the perceptron emits a positive response; if the sum is lower than the threshold, it emits a negative response.

Continue reading Introducing the Perceptron →
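The threshold rule and the training update described above can be sketched roughly as follows; the function names, weights, and learning rate here are illustrative, not taken from the linked repository:

```python
def perceptron_output(inputs, weights, threshold):
    """Return +1 if the weighted sum of inputs exceeds the threshold, else -1."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else -1

def train_step(inputs, weights, threshold, target, lr=0.1):
    """Classic perceptron update: nudge the weights toward the target on a mistake."""
    prediction = perceptron_output(inputs, weights, threshold)
    if prediction != target:
        weights = [w + lr * target * x for w, x in zip(weights, inputs)]
    return weights

# One excitatory (positive) and one inhibitory (negative) weight:
print(perceptron_output([1.0, 1.0], [0.6, -0.2], 0.3))  # 0.4 > 0.3 -> 1
print(perceptron_output([0.0, 1.0], [0.6, -0.2], 0.3))  # -0.2 < 0.3 -> -1
```

On a misclassified example, `train_step` shifts each weight in the direction of the target, which is the whole of the perceptron learning rule.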

https://github.com/gtraines/apriori-recommender-py

Machine learning methods are often applied to model complex systems where the function mapping inputs to outputs is unknown but a relationship is suspected or known to exist. Human behavior is one of these complex systems where machine learning can add insight to apparently random behavior. By looking at large samples of behavior, machine learning practitioners can highlight patterns. Retail stores and marketers have a vested interest in determining these patterns to support their decision-making processes and ultimately maximize profits.

Continue reading An offline shopping recommendation engine using the Apriori algorithm and association analysis →
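The core of association analysis is counting how often itemsets appear together in transactions and keeping only those above a minimum support, which is the pruning step Apriori is built around. A minimal sketch with made-up transaction data (not from the linked repository):

```python
from itertools import combinations

# Illustrative market-basket data: each transaction is a set of items purchased.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

# Keep only item pairs meeting a minimum support threshold of 0.5.
items = sorted({item for t in transactions for item in t})
frequent_pairs = [
    pair for pair in combinations(items, 2)
    if support(set(pair), transactions) >= 0.5
]
print(frequent_pairs)
```

A full Apriori implementation grows candidate itemsets level by level, using the fact that any superset of an infrequent itemset must itself be infrequent.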

https://github.com/gtraines/linear-regression

Linear regression is an approach to machine/statistical learning generally applied to value-prediction problems. It is a form of supervised learning, wherein the training data provides the “correct” answer in addition to the data points generated by an unknown function *f*. Although in this case we were provided a two-dimensional data set, linear regression can be used on higher-dimensional data sets. The linear regression method assumes that the unknown function *f* can be approximated using a linear equation of *d* terms (the number of features being measured plus a constant value for bias). Among machine learning algorithms, it is fairly simple, and in his Caltech lectures Dr. Abu-Mostafa calls linear regression “one-step learning.”

Continue reading Linear Regression →
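The “one-step learning” remark refers to the fact that the least-squares weights can be solved for directly, in closed form, rather than learned iteratively. A sketch with synthetic data (the data set and coefficients below are illustrative, not from the linked repository):

```python
import numpy as np

# Synthetic 2-D data: an unknown linear f (slope 2, intercept 1) plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)

# Design matrix with a constant column for the bias term,
# giving d terms = number of features plus one.
X = np.column_stack([np.ones_like(x), x])

# One-step learning: the least-squares solution to X w = y,
# equivalent to the normal equation w = (X^T X)^{-1} X^T y.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # intercept and slope should land near 1.0 and 2.0
```

The same code handles higher-dimensional data unchanged: stack additional feature columns into `X` and the solver returns one weight per column.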
