https://github.com/gtraines/perceptron_classification

*Background*

One of the fundamental concepts in artificial intelligence and machine learning is the perceptron learning algorithm, which gives life to the abstract data structure known as the perceptron. The perceptron is a data structure created to resemble the functioning of a neuron in the brain. It has a set of inputs (variable values), each of which has an excitatory (positive) or inhibitory (negative) weight associated with it. During the training phase, the perceptron receives a set of values corresponding to its inputs along with an expected target outcome. If the sum of the weights multiplied by their corresponding input values is greater than a threshold value, the perceptron emits a positive response; if the sum is lower than the threshold, it emits a negative response.
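The thresholding rule described above can be sketched in a few lines of Python (a minimal illustration of mine; the names are not taken from the linked repository):

```python
def perceptron_output(weights, inputs, threshold):
    """Emit +1 if the weighted sum of the inputs exceeds the threshold, else -1."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else -1

# Two inputs: one excitatory (+0.6) and one inhibitory (-0.4) weight.
perceptron_output([0.6, -0.4], [1.0, 1.0], 0.1)
```

During training, the learning algorithm nudges the weights whenever this output disagrees with the expected target outcome.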

Continue reading Introducing the Perceptron →

https://github.com/gtraines/apriori-recommender-py

Machine learning methods are often applied to model complex systems where the function mapping inputs to outputs is unknown but a relationship is suspected or known to exist. Human behavior is one of these complex systems where machine learning can add insight to apparently random behavior. By looking at large samples of behavior, machine learning practitioners can highlight patterns. Retail stores and marketers have a vested interest in determining these patterns to support their decision-making processes and ultimately maximize profits.

Continue reading An offline shopping recommendation engine using the Apriori algorithm and association analysis →

https://github.com/gtraines/linear-regression

Linear regression is an approach to machine/statistical learning generally applied to value-prediction problems. It is a form of supervised learning, wherein the training data provides the “correct” answer in addition to the data points generated by an unknown function, *f*. Although in this case we were provided a two-dimensional data set, linear regression can be used on higher-dimensional data sets. The method assumes that the unknown function *f* can be approximated by a linear equation of *d* terms (the number of features being measured plus a constant value for bias). Among machine learning algorithms it is fairly simple, and in his Caltech lectures Dr. Abu-Mostafa calls linear regression “one-step learning.”
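The “one-step” character of linear regression shows up clearly in the one-feature case, where the least-squares weights have a closed form (a sketch of mine, not code from the linked repository):

```python
def fit_line(xs, ys):
    """One-step least-squares fit of y ~ w0 + w1*x (closed form, no iteration)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    w1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    w0 = mean_y - w1 * mean_x  # constant bias term
    return w0, w1
```

In *d* dimensions the same idea is usually written with the pseudo-inverse, w = (XᵀX)⁻¹Xᵀy, still a single matrix computation rather than an iterative search.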

Continue reading Linear Regression →

Before setting out to build a learning system, practitioners, students, and consumers of machine learning should be able to rigorously show that learning methods can be applied successfully to a given learning problem. Using statistics and probability, we can show how a supervised classifying learning system can achieve a level of success called “probably approximately correct.” Given the desired probability that a learner will correctly classify a problem, we can determine how expensive it will be to approximate the target function in terms of computation (generally time and space) and how many training examples would be required to achieve that level of correctness.

The Hoeffding Inequality showed that learning is a theoretical possibility.
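As a concrete illustration (the bound is standard; the specific numbers below are my own, not from the post): for a finite hypothesis set of size M, the Hoeffding/union bound P[|E_in − E_out| > ε] ≤ 2M·exp(−2ε²N) can be solved for N to estimate how many training examples are needed to be “probably approximately correct”:

```python
import math

def hoeffding_sample_size(epsilon, delta, hypotheses):
    """Smallest N with 2*M*exp(-2*epsilon**2*N) <= delta, i.e. the number of
    examples needed so the in-sample error tracks the out-of-sample error
    within epsilon, with probability at least 1 - delta."""
    return math.ceil(math.log(2 * hypotheses / delta) / (2 * epsilon ** 2))

# e.g. 100 hypotheses, 95% confidence, tolerance 0.1
hoeffding_sample_size(0.1, 0.05, 100)
```

Tightening ε or δ, or growing the hypothesis set, drives the required sample size up, which is exactly the cost-of-correctness trade-off described above.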

Continue reading Computational Complexity of Learning →

https://github.com/gtraines/logistic-regression-classification

Logistic regression is a machine learning approach to classifying noisy data. Whereas linear classification requires data to be linearly separable in order to find the decision hyperplane, logistic regression allows for the expression of uncertainty by providing a probability that a given sample should be placed into one class or the other.

Logistic regression calculates this probability by running the weighted sum of the inputs through a logistic or “sigmoid” function.

Continue reading Logistic Regression with Gradient Descent – Some Thoughts and Lessons →
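The sigmoid step mentioned above can be sketched as follows (a minimal illustration of mine, not the repository's code):

```python
import math

def sigmoid(z):
    """Logistic function: squashes any real value into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(weights, inputs):
    """Probability that a sample belongs to the positive class."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(z)
```

A weighted sum of 0 maps to exactly 0.5, so the decision boundary sits where the model is maximally uncertain; gradient descent then adjusts the weights to push training samples toward confident probabilities.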

I’ve posted a project providing a visual step-through explanation of Dijkstra’s shortest-path algorithm implemented on a randomly-generated digraph, as well as a short report on the algorithm and my project, available for download on the “projects” page.
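For readers who want the algorithm itself rather than the visual step-through, here is a minimal textbook sketch of Dijkstra's shortest-path algorithm using a priority queue (my own illustration, not the downloadable project's code):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted digraph.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Each pop settles the closest unsettled node, which is why the algorithm requires non-negative edge weights.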

Multithreading and Parallel Breadth-First Search: An implementation in C++11

So, if you read the accompanying lab report, you'll see this was an attempt at using the C++11 multithreading library (essentially the same as the Boost multithreading library) to implement a parallel breadth-first-search algorithm. The results weren't spectacular, but I don't feel too bad: one of the two computer scientists who developed the more successful approach literally wrote the book on algorithms.

Anyway, if this is helpful to anyone, please feel free to add to, change, or take away from my work here. I’d really appreciate any feedback, though. Thanks!
