Try on AWS

Try the Accelerated Machine Learning Library now on Amazon AWS.



Accelerated ML Suite on AWS


Machine learning accelerators providing all the required APIs and libraries for seamless integration into scalable distributed systems (C/C++, Java, Python, Scala, R, Spark ML and Mahout)


Accelerated Machine Learning Suite


InAccel provides the accelerated machine learning suite for widely-used frameworks:
  • Single node integration for C/C++, Java, Python and Scala
  • Distributed system integration through Apache Spark ML

InAccel's Accelerated Machine Learning Suite (AML) is a fully integrated framework that includes both the software APIs/libraries and the FPGA files for accelerating your machine learning applications. It aims to maintain the practical, easy-to-use interface of other open-source frameworks while accelerating the training stage of machine learning models.

The accelerators can achieve up to 15x speedup compared to multi-threaded, high-performance processors.
InAccel provides all the required APIs in Python, Scala and Java for the seamless integration of the accelerators into your applications.
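
As a concrete illustration, the sketch below uses the standard Spark ML interface in Python (PySpark), which the suite aims to keep intact so that an accelerated backend can slot in without changes to application code. The dataset path is a placeholder.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("accelerated-ml-demo").getOrCreate()

# Load LibSVM-formatted training data (the path is illustrative).
train = spark.read.format("libsvm").load("data/mnist.scale")

# Standard Spark ML estimator; with an accelerated backend this call is
# assumed to stay the same, only the execution target changes.
lr = LogisticRegression(maxIter=100, regParam=0.01)
model = lr.fit(train)

# Score the training set and inspect a few predictions.
model.transform(train).select("label", "prediction").show(5)
```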

InAccel provides FPGA-based hardware accelerators for widely-used machine learning algorithms such as:

Classification and Regression
  • Logistic Regression
  • Naive Bayes

Clustering
  • K-Means

Recommendation engines
  • Alternating Least Squares (ALS)

Decision Trees and Tree Ensembles
  • Random Forests
  • Gradient-Boosted Trees (GBTs) - XGBoost


You can use Accelerated Machine Learning instantly on AWS, or test it on Nimbix with a 1-click demo.

Logistic regression is used for building predictive models for many complex pattern-matching and classification problems. It is used widely in such diverse areas as bioinformatics, finance and data analytics. It is also one of the most popular machine learning techniques. It belongs to the family of classifiers known as the exponential or log-linear classifiers and is widely used to predict a binary response.

The specific IP core implements the (Batch) Gradient Descent algorithm for Logistic Regression. For more information on the Logistic Regression cores, check the datasheet: LogisticRegression Datasheet
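
Purely for reference, here is a minimal NumPy sketch of batch gradient descent for binary logistic regression, the textbook algorithm the core implements in hardware; the learning rate and iteration count are illustrative, not the core's actual parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression_bgd(X, y, lr=0.1, iters=100):
    """Batch gradient descent: each iteration processes the full training
    set, which makes the algorithm amenable to wide FPGA parallelism."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ w)           # predictions for all n samples
        grad = X.T @ (p - y) / n     # log-loss gradient over the whole batch
        w -= lr * grad               # one dense weight update per pass
    return w

# Toy usage: two Gaussian clusters labeled 0 and 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print("learned weights:", logistic_regression_bgd(X, y))
```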

K-means is one of the simplest unsupervised learning algorithms that solve the well-known clustering problem, and it is applicable in a variety of disciplines such as computer vision, biology, and economics. It groups individuals in a population together by similarity, without being driven by a specific purpose. The procedure follows a simple and easy way to cluster the training data points into a predefined number of clusters (K). The main idea is to define K centroids, one for each cluster.
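
To make the K-centroid idea concrete, here is a minimal NumPy sketch of the classic Lloyd's iteration on which K-means is based; K and the iteration count are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Lloyd's iteration: assign every point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, shape (n, k).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy usage: 300 random 2-D points grouped into 3 clusters.
X = np.random.default_rng(1).normal(size=(300, 2))
centroids, labels = kmeans(X, k=3)
print(centroids)
```
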
Recommendation engines are widely used to predict the rating that a user would give to an item based on the user's past behavior. Modern recommendation engines are based on computationally intensive algorithms, like collaborative filtering, that need to process huge sparse matrices in order to produce useful results. The InAccel accelerator implements Alternating Least Squares (ALS)-based collaborative filtering for recommendation engines, which can significantly speed up processing time.
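
On the software side, the standard Spark ML interface for ALS looks like the sketch below (the column names, rank, and toy data are illustrative); an accelerated implementation is assumed to sit behind the same API.

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-demo").getOrCreate()

# Ratings as (user, item, rating) triples; a toy stand-in for the huge
# sparse matrices collaborative filtering normally has to process.
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 5.0), (2, 0, 1.0)],
    ["userId", "itemId", "rating"],
)

als = ALS(rank=10, maxIter=10, userCol="userId", itemCol="itemId",
          ratingCol="rating", coldStartStrategy="drop")
model = als.fit(ratings)

# Top-2 item recommendations for every user.
model.recommendForAllUsers(2).show(truncate=False)
```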

Logistic regression


15x faster execution of logistic regression on MNIST (24GB) compared to a 48-core processor of the same cost.

Accelerated Machine Learning using InAccel ML suite


InAccel's Accelerated ML suite can significantly speed up widely-used machine learning applications like logistic regression and K-means clustering.

The LR core can achieve up to 7.5x speedup on an AWS f1 instance compared to a typical processor with the same number of cores.

The K-means clustering core can achieve up to 6.2x speedup on an AWS f1 instance compared to a typical processor with the same number of cores.

In this benchmark we compared a cluster of two f1.4xlarge instances (32 vCPUs in total) plus 2 FPGAs against a cluster of two r5d.4xlarge instances (32 vCPUs). We used an enlarged MNIST dataset (24GB, 8 million images).

Reduction in operational expenses for machine learning


The speedup you achieve using InAccel's ML suite also comes with a significant reduction in operational expenses. While the accelerators cost more per hour than typical processors, once you account for the reduction in total execution time you can achieve up to a 2.6x reduction in operational expenses (TCO).
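
As a back-of-the-envelope illustration (the price ratio here is assumed, not a quoted AWS rate): if an FPGA instance costs roughly 2.9x more per hour than the CPU instance it replaces but finishes training 7.5x sooner, the total bill shrinks by about 7.5 / 2.9 ≈ 2.6x.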

Accelerated ML on Jupyter notebooks

Using the InAccel Accelerated ML suite you can speed up your ML applications instantly in Jupyter notebooks. Check the video on how you can speed up your ML applications with just the click of a button.

Accelerated Spark ML on AWS


Download the solution brief for the Spark ML suite on Amazon AWS.