InAccel’s Accelerated Machine Learning Studio (AML) is a fully integrated framework that lets you speed up your application from frameworks like C/C++, Python, Scala, Jupyter notebooks and Spark with zero code changes.
It aims to retain the practical, easy-to-use interface of other open-source frameworks while accelerating the training or the classification of machine learning models.
The accelerators can achieve up to 15x speedup compared to multi-threaded, high-performance processors. InAccel provides all the required APIs in Python, Scala and Java for the seamless integration of the accelerators into your applications.
Check the 1-minute video to see how InAccel can help you speed up your ML application with zero code changes.
Enjoy a 10x-20x speedup on your applications by utilizing the power of accelerators, stress-free.
Using the same familiar tools, like Jupyter, scikit-learn and Keras, InAccel offloads the most computationally intensive functions to hardware accelerators instantly.
Pay-as-you-go: pay only for the days that you use the InAccel studio in a secure cloud-based environment (using AWS F1 instances).
InAccel’s Accelerated ML suite can be used to significantly speed up widely-used machine learning applications like logistic regression and K-means clustering.
It also supports resource sharing among multiple users and scalable deployment across multiple FPGA cards.
The speedup you achieve using InAccel’s ML suite also comes with a significant reduction in operational expenses. While the accelerators cost more per hour than typical processors, once you take into account the reduction in total execution time, you can achieve up to a 2.6x reduction in operational expenses (TCO).
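To make the TCO claim concrete, here is a back-of-the-envelope calculation. The hourly rates below are illustrative placeholders (not actual cloud pricing), and the speedup is taken from the 10x-20x range quoted above:

```python
# Illustrative TCO comparison (all prices are assumed placeholders).
cpu_price_per_hour = 1.00    # assumed hourly rate of a CPU instance
fpga_price_per_hour = 5.00   # assumed hourly rate of an FPGA (F1-style) instance
speedup = 13.0               # within the 10x-20x range quoted above

cpu_hours = 10.0                   # baseline job duration on CPUs
fpga_hours = cpu_hours / speedup   # same job on accelerators

cpu_cost = cpu_price_per_hour * cpu_hours
fpga_cost = fpga_price_per_hour * fpga_hours
print(round(cpu_cost / fpga_cost, 1))  # cost-reduction factor -> 2.6
```

With these assumed numbers the accelerated run costs 2.6x less overall, even though the instance is 5x more expensive per hour, because the job finishes 13x sooner.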
Accelerated ML on Jupyter notebooks
Using the InAccel Accelerated ML suite, you can speed up your ML applications instantly in Jupyter notebooks.
Check the video on how you can speed up your ML applications with just the click of a button in Jupyter notebooks.
Use the same notebooks.
Just import the inaccel library and enjoy up to 15x faster execution for your application.
Speed up hyper-parameter tuning, training, or even classification using your familiar tools.
It can also be used for inference (e.g. ResNet50), achieving up to 3,000 fps per FPGA card.
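Under the zero-code-change model described above, a notebook cell keeps its standard scikit-learn workflow; the only change would be the import. The `inaccel` import path shown in the comment is an assumption based on this description, not a verified API, so the runnable part below is plain scikit-learn:

```python
# Standard scikit-learn workflow. Per the description above, InAccel's claim
# is that only the import changes to dispatch training to FPGA accelerators,
# e.g. (hypothetical, unverified module path):
#   from inaccel.sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 8))
y = (X[:, 0] > 0.5).astype(int)  # label depends only on feature 0

clf = LogisticRegression().fit(X, y)  # unchanged user code
print(clf.score(X, y))                # training accuracy, close to 1.0
```

The rest of the notebook (tuning loops, evaluation, plotting) stays exactly as it was; only the model import would point at the accelerated implementation.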
Select the application you need to speed up from ready-to-use examples.
Note: The online web platform is available for demonstration purposes to show the ease of deployment using FPGA-based accelerators. Multiple users may share the available resources, which may affect the performance of the applications. If you want exclusive access to speed up your applications, contact us at email@example.com.