Machine learning uses a set of algorithms that we normally don't use in traditional software development. Even when such an algorithm is used there, it is usually a highly reduced version defined for a particular set of cases.
Machine learning engineering problems vary from problem to problem depending on the type of data: numbers, text, images, graphs, and so on. The number of parameters can also vary from one to a million or more.
Depending on the scenario, engineers have to figure out a good algorithm that is both a good fit and practical for the particular machine learning problem at hand.
Broadly, machine learning algorithms are categorized into:
 Supervised Learning: Here a model using a machine learning algorithm is trained with true data that gives the desired output. It's like a student who goes to school, studies a prescribed curriculum, attends classes, studies at home, writes a final examination, and gets a pass or fail.
 Unsupervised Learning: Here the model is trained with data, but the desired output is not given. It's like a student who does everything as in the supervised case but doesn't write the exam.
 Semi-supervised Learning: This is a combination of supervised and unsupervised learning, where the machine learning model is trained with the desired output for only some of the data. Here the student may write some exams, which can be a sure pass.
 Reinforcement Learning: Here the model learns from rewards earned over a sequence of actions. In this scenario, a student is given complete textbooks and learning materials and asked to figure out a way to pass the course, but he can write only one exam at a time and can write the next exam only if he passed the previous one.
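To make the supervised case concrete, here is a minimal sketch (not from the original text) using a toy 1-nearest-neighbor classifier: the labeled training pairs play the role of the "true data" described above, and the model predicts the desired output for unseen input. The data and function names are illustrative assumptions.

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbor classifier.
# The (features, label) training pairs are the "true data" with desired
# output; predict() answers for a new, unlabeled query point.

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(train_x, train_y, query):
    # The label of the closest training point wins.
    distances = [(euclidean(x, query), y) for x, y in zip(train_x, train_y)]
    return min(distances)[1]

train_x = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
train_y = ["small", "small", "large", "large"]

print(predict(train_x, train_y, (1.1, 0.9)))  # near the "small" cluster
print(predict(train_x, train_y, (8.5, 9.0)))  # near the "large" cluster
```

An unsupervised algorithm, by contrast, would receive only `train_x` and have to discover the two clusters on its own.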
The following is a list of algorithms that are widely used in machine learning applications.
Machine Learning Algorithms
 Regression: Linear Regression, Ordinary Least Squares Regression, Logistic Regression, Nadaraya-Watson Kernel Regression, Regression Trees
 Regularization: Least Absolute Shrinkage and Selection Operator (LASSO), Elastic Net, Least Angle Regression (LARS)
 Reinforcement Learning: Q-Learning, Deep Learning, Deep Reinforcement Learning
 Deep Learning: Deep Reinforcement Learning, Deep Boltzmann Machine, Deep Belief Networks, Deep Neural Networks
 Neural Networks: Deep Neural Networks (Convolutional Neural Networks, Recurrent Neural Networks), Perceptron, Backpropagation, Hopfield Network
 Bayesian: Naïve Bayes, Bayesian Belief Networks, Gaussian Naïve Bayes, Multinomial Naïve Bayes, Bayesian Network
 Decision Tree: Classification and Regression Tree (CART), Iterative Dichotomiser 3 (ID3), C4.5, C5
 Ensemble: Random Forest, Boosting (AdaBoost)
 Dimensionality Reduction: Principal Component Analysis (PCA), Linear Discriminant Analysis (Fisher Linear Discriminant)
 Instance Based: k-Nearest Neighbor (k-NN)
 Clustering: k-Means, Expectation Maximization, Gaussian Mixture Models (identity covariance), Single-Link Hierarchical Clustering, Spectral Clustering

There are thousands of machine learning algorithms, but almost all of them can be structured into three parts:
 Representation
 Evaluation
 Optimization
Representation is the process of creating structured data from the given data pool, such that a particular analysis can be applied to it. It can be as simple as sorting an array or as complex as building a tree.
Some of the commonly used data representations are:
 Decision trees
 Logic programs / set of rules
 Instances
 Graphical models (Bayes/Markov nets)
 Neural networks
 Support vector machines
 Model ensembles, etc.
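As a small illustrative sketch (my own, not from the original text), here is one of the representations listed above, a decision tree, expressed as a plain data structure. The tree, its thresholds, and labels are made-up assumptions; the point is only that the same data pool could equally be represented as rules, instances, or a network.

```python
# A decision tree represented as nested tuples:
# (feature_index, threshold, left_subtree, right_subtree) for internal
# nodes, and a plain string label for leaves.

tree = (0, 5.0,            # split on feature 0 at threshold 5.0
        "cold",            # feature 0 < 5.0  -> leaf "cold"
        (1, 50.0,          # otherwise split on feature 1 at 50.0
         "warm",           # feature 1 < 50.0 -> leaf "warm"
         "hot"))           # otherwise        -> leaf "hot"

def classify(node, sample):
    # Descend from the root until a leaf label is reached.
    while not isinstance(node, str):
        feature, threshold, left, right = node
        node = left if sample[feature] < threshold else right
    return node

print(classify(tree, [3.0, 10.0]))   # -> cold
print(classify(tree, [7.0, 80.0]))   # -> hot
```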
Evaluation is done over the represented data set. What the algorithm tries to achieve here is to identify data relationships that can give the desired output. For example, let's say our algorithm is analyzing a dataset of people, countries, and languages. The most commonly spoken language in countries like the US, UK, and Australia is English; for Spain it is Spanish, and for Germany it is German. If a person is from the US and his language is not identified, the algorithm can say that the most probable language is English.
There are various methods used to accomplish this; the following are a few.
 Accuracy
 Precision and recall
 Squared error
 Likelihood
 Posterior probability
 Cost / Utility
 Margin
 Entropy
 KL divergence, etc.
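Two of the measures listed above, accuracy and precision/recall, can be computed by hand. The following sketch (an illustration I am adding, with made-up toy labels) evaluates a binary classifier where 1 means positive and 0 means negative:

```python
# Accuracy: fraction of predictions that match the true labels.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Precision: of the items predicted positive, how many really are.
# Recall:    of the truly positive items, how many were found.
def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]

print(accuracy(y_true, y_pred))          # 4 of 6 correct
p, r = precision_recall(y_true, y_pred)
print(p, r)                              # both 2/3 on this toy data
```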
Optimization is the process by which the algorithm becomes complete. Once the data has been represented and evaluated during training, the algorithm doesn't have to go through the same routines again at prediction time. The model already knows which type of data goes where and what the best answer for missing data could be, which helps speed up the output of the program.
A few of the techniques used to accomplish optimization are the following:
 Combinatorial optimization
 Convex optimization
 Constrained optimization
Although designing a new algorithm falls into theoretical computer science and mathematics, a developer does need a good understanding of how an algorithm works under the hood in order to make design decisions suitable for the engineering problem at hand.