
5 AI start-ups leading MLops

From data preparation and training to model deployment and beyond, these start-ups offer state-of-the-art platforms for managing the entire machine learning lifecycle.

Along with the huge and increasing demand for artificial intelligence (AI) applications, there’s a complementary hunger for infrastructure and supporting software that make AI applications possible.

From data preparation and training to deployment and beyond, a number of start-ups have arrived on the scene to guide you through the nascent world of MLops. Here’s a look at some of the more interesting ones that will make your AI initiatives more successful.

Weights & Biases

Weights & Biases is becoming a heavyweight presence in the machine learning space, especially among data scientists who want a comprehensive and well-designed experiment tracking service. Firstly, W&B has out-of-the-box integration with almost every popular machine learning library (plus it’s easy enough to add custom metrics).

Secondly, you can use as much of W&B as you need: as a turbo-charged version of TensorBoard, as a way to control and report on hyper-parameter tuning, or as a collaborative centre where everybody in your data science team can see results and reproduce experiments run by other team members.
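
As a rough illustration, here is what a minimal experiment-tracking loop looks like with the wandb Python client; the project name, config values, and metrics below are placeholders rather than anything from a real workload.

```python
# A minimal sketch of experiment tracking with the wandb client.
# Project name, config values, and metrics are illustrative placeholders.
import wandb

# Start a run; config values are stored alongside results for reproducibility.
run = wandb.init(project="demo-project", config={"learning_rate": 0.01, "epochs": 5})

for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training loop
    # Log any custom metric; W&B charts it in the web UI automatically.
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```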

For the enterprise, W&B can even be used as a governance and provenance platform, providing an audit trail of which inputs, transformations, and experiments were used to build a model as the model goes from development to production.

Your data scientists certainly already know about W&B, and if they’re not using it within the company, they almost certainly want to be. If OpenAI, GitHub, Salesforce, and Nvidia are using W&B, why aren’t you?

Seldon

Seldon is another company that takes an open core approach, layering additional enterprise features on top of an open source foundation.

The open source component is Seldon Core, a cloud-native way of deploying models with advanced features such as arbitrary chains of models for inference, canary deployments, A/B testing, and multi-armed bandits, plus out-of-the-box support for frameworks like TensorFlow, Scikit-learn, and XGBoost.
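
Once a model is deployed, Seldon Core exposes it over a standard prediction API. A hedged sketch of querying such an endpoint from Python might look like the following; the host, namespace, and deployment name are assumptions for illustration and will differ in your cluster.

```python
# Hypothetical query against a Seldon Core prediction endpoint.
# The host, namespace ("seldon"), and deployment name ("iris-model") are
# placeholders; your ingress address and deployment names will differ.
import requests

url = "http://localhost:8003/seldon/seldon/iris-model/api/v1.0/predictions"
payload = {"data": {"ndarray": [[5.1, 3.5, 1.4, 0.2]]}}

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # prediction returned in Seldon's JSON response format
```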

Seldon also offers the open source Alibi library for machine learning model inspection and explanation, containing a variety of methods for gaining insight into how model predictions are formed.
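
For instance, Alibi’s anchor explanations surface the feature conditions that “anchor” a particular prediction. A minimal sketch for a tabular model might look like this; the classifier and dataset are stand-ins, and only the Alibi calls are the point.

```python
# Illustrative use of Alibi's AnchorTabular explainer on a toy classifier.
# The dataset and model are stand-ins; only the Alibi calls matter here.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

# The explainer needs a prediction function and human-readable feature names.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(X)

explanation = explainer.explain(X[0])
print(explanation.anchor)     # feature conditions that "anchor" the prediction
print(explanation.precision)  # how reliably the anchor implies the prediction
```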

An interesting feature of Seldon Core is how flexibly it fits into your technology stack. You can use Seldon Core by itself or slot it into a Kubeflow deployment, and you can serve models created via MLflow or use Nvidia’s Triton Inference Server, giving you a number of different ways to leverage Seldon for maximum gain.

For the enterprise, there’s Seldon Deploy, which provides a comprehensive suite of tools for model governance, including dashboards, audited workflows, and performance monitoring. This offering is targeted at data scientists and SREs as well as managers and auditors.

You won’t be entirely surprised to discover that Seldon’s focus on auditing and explanation has made this UK-based start-up a hit with banks; Barclays and Capital One both use its services.

While there are numerous competitors in the model deployment space, Seldon provides a comprehensive set of features and an all-important focus on Kubernetes deployment in its core offering, along with useful enterprise additions for companies that desire a more end-to-end solution.

Pinecone / Zilliz

Vector search is red hot right now. Thanks to recent advances in machine learning across domains such as text, images, and audio, vector search can have a transformative effect on search applications.

For example, a search for “Kleenex” can return a retailer’s selection of tissues without the need for any custom rules or synonym replacements, because the language model used to generate a vector embedding will place the search query in the same area of the vector space as related products. And the same process can be used to locate similar sounds or perform facial recognition.

Although current search engine software is not often optimised for vector search, work continues in Elasticsearch and Apache Lucene, and a host of open source libraries (e.g. NMSLib, FAISS, and Annoy) offer vector search at high speed and scale.
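
To make the idea concrete, here is a small sketch of brute-force similarity search with FAISS over a set of pre-computed embeddings; the vectors are random stand-ins for real model output.

```python
# Minimal nearest-neighbour search with FAISS over toy embeddings.
# In practice, the vectors would come from a language or vision model.
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)

# 10,000 "document" embeddings and one "query" embedding, L2-normalised
# so that inner product behaves like cosine similarity.
docs = rng.standard_normal((10_000, dim)).astype("float32")
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
query = rng.standard_normal((1, dim)).astype("float32")
query /= np.linalg.norm(query)

index = faiss.IndexFlatIP(dim)  # exact inner-product search
index.add(docs)

scores, ids = index.search(query, 5)
print(ids[0], scores[0])  # the five closest documents and their similarities
```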

In addition, many start-ups have emerged to lift some of the burden of setting up and maintaining vector search engines from your poor ops department. Pinecone and Zilliz are two such start-ups providing vector search for the enterprise.

Pinecone is a pure SaaS offering: you upload the embeddings produced by your machine learning models to Pinecone’s servers and send queries via its API. All aspects of hosting, including security, scaling, speed, and other operational concerns, are handled by the Pinecone team, meaning you can be up and running with a similarity search engine within a matter of hours.
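
A hedged sketch of that workflow with the Pinecone Python client is shown below; the API key, environment, index name, and vectors are placeholders, and the client interface has changed across versions, so treat this as a flavour of the API rather than a recipe.

```python
# Illustrative upsert-and-query flow with the Pinecone client.
# API key, environment, index name, and vectors are placeholders, and the
# client interface differs between versions, so treat this as a sketch.
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")

# Create an index sized to match your embedding model's output dimension.
if "products" not in pinecone.list_indexes():
    pinecone.create_index("products", dimension=384)

index = pinecone.Index("products")

# Upload embeddings produced elsewhere by your model.
index.upsert(vectors=[
    ("sku-1", [0.1] * 384),
    ("sku-2", [0.2] * 384),
])

# Query with the embedding of a search phrase such as "Kleenex".
results = index.query(vector=[0.1] * 384, top_k=3)
print(results)
```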

Although Zilliz has a managed cloud solution coming soon, in the shape of Zilliz Cloud, the company takes the open core approach with an open source library called Milvus.

Milvus wraps commonly used libraries such as NMSLib and FAISS, providing a simple deployment of a vector search engine with an expressive and easy-to-use API that developers can use to build and maintain their own vector indexes.
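A rough sketch with the pymilvus client (Milvus 2.x-style API) gives a flavour of what that looks like; the host, collection name, schema, and vectors here are all placeholders, and the API differs between Milvus releases.

```python
# Illustrative Milvus workflow using the pymilvus client (2.x-style API).
# Host, collection name, schema, and vectors are placeholders.
from pymilvus import (
    connections, Collection, CollectionSchema, FieldSchema, DataType,
)

connections.connect(alias="default", host="localhost", port="19530")

fields = [
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=False),
    FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=384),
]
collection = Collection("products", CollectionSchema(fields))

# Insert ids and their embeddings (normally produced by your model).
collection.insert([[1, 2], [[0.1] * 384, [0.2] * 384]])
collection.create_index(
    "embedding",
    {"index_type": "IVF_FLAT", "metric_type": "IP", "params": {"nlist": 128}},
)
collection.load()

results = collection.search(
    data=[[0.1] * 384], anns_field="embedding",
    param={"metric_type": "IP", "params": {"nprobe": 10}}, limit=3,
)
print(results[0].ids)
```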

Grid.ai

Grid.ai is the brainchild of the people behind PyTorch Lightning, a popular high-level framework built on PyTorch that abstracts away much of the standard PyTorch boilerplate and makes it easy to train on one or 1000 GPUs with a couple of parameter switches.
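
For context, a stripped-down Lightning training script shows how small that switch is; the model and data below are toy placeholders, and the scaling arguments are noted in the comments because their names have changed across Lightning releases.

```python
# A minimal PyTorch Lightning module and trainer, to illustrate how the
# single-GPU/multi-GPU switch works. Model and data are toy placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyRegressor(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(16, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


data = TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
loader = DataLoader(data, batch_size=32)

# Scaling up is mostly a matter of changing these arguments, e.g.
# accelerator="gpu", devices=8 (older releases used gpus=8 instead).
trainer = pl.Trainer(max_epochs=1, accelerator="cpu", devices=1)
trainer.fit(TinyRegressor(), loader)
```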

Grid.ai takes the simplification that PyTorch Lightning brings and runs away with it, allowing data scientists to train their models using transient GPU resources as seamlessly as running code locally.

Do you want to run a hyper-parameter sweep across 200 GPUs all at once? Grid.ai will let you do that, managing all of the provisioning (and decommissioning) of infrastructure resources behind the scenes, making sure that your datasets are optimised for use at scale, and providing metrics reports, all bundled up with an easy-to-use web UI.

You can also use Grid.ai to spin up instances for interactive development, either at the console or attached to a Jupyter Notebook.

Grid.ai’s efforts to simplify model training at scale will be useful to companies that regularly need to spin up training runs that occupy 100 or more GPUs at a time, but it remains to be seen just how many of those customers are out there. Still, if you need a streamlined training pipeline for your data scientists that minimises cloud costs, you should definitely give Grid.ai a close examination.

DataRobot

DataRobot would like to own your enterprise AI lifecycle all the way from data preparation to production deployment, and the company makes a good pitch for it.

DataRobot’s data prep pipeline has all the web UI bells and whistles you’d expect to make data enrichment a breeze, plus it includes facilities to assist users (whether novices or experts) by automatically profiling, clustering, and cleaning data before it gets fed into a model.

DataRobot has an automated machine learning facility that will train a range of models against your targets, allowing you to select the best-performing generated model or one of your own uploaded to the platform. When it comes to deployment, the platform’s integrated MLops module tracks everything from uptime to data drift over time, so you can always see the performance of your models at a glance.

There’s also a feature called Humble AI that allows you to put extra guardrails on your models in case low probability events occur at prediction time, and of course these can be tracked via the MLops module as well.

In a slight difference from most of the other start-ups on this list, DataRobot will install on bare metal within your own data centres and Hadoop clusters as well as deploy in private and managed clouds. That shows it’s determined to compete in all arenas of the enterprise AI platform battles ahead, serving customers from the quick-moving start-up to the established Fortune 500 company.

MLops is one of the hottest areas of AI right now, and the need for accelerators, platforms, and management and monitoring tools will only increase as more companies enter the AI space. If you’re joining the AI gold rush, you can turn to these five start-ups to supply your picks and shovels!