At the Amazon Web Services re:Invent conference today, AWS CEO Andy Jassy introduced a fully managed, end-to-end machine learning service — SageMaker. The company says SageMaker will enable developers to quickly build machine learning models at scale.
Jassy said AWS has always striven to provide other companies with the same level of technology that Amazon itself enjoys. “Most companies don’t have expert machine learning practitioners,” he said. “Machine learning is still too complicated. If you want to enable most companies to be able to use machine learning, you have to solve the problem of making it accessible for everyday developers and scientists.”
According to a blog post by Randall Hunt, senior technical evangelist at AWS, there are three main components of SageMaker:
- Authoring: Zero-setup hosted Jupyter notebook integrated development environments for data exploration, cleaning, and preprocessing. These can be run on general instance types or GPU-powered instances.
- Model Training: A distributed model building, training, and validation service. Developers can use built-in common supervised and unsupervised learning algorithms and frameworks, or bring their own training algorithms packaged in Docker containers.
- Model Hosting: A model hosting service that exposes HTTPS endpoints for invoking models to get real-time inferences. Again, users can construct these endpoints using the built-in SDK or provide their own configurations with Docker images.
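To make the hosting component concrete, here is a minimal sketch of calling a deployed SageMaker endpoint for a real-time inference. The endpoint name and feature values are placeholders, not real resources; the sketch assumes the AWS `boto3` SDK and the CSV content type accepted by many of the built-in algorithms.

```python
def serialize_csv(features):
    """Turn a feature vector into the CSV body many built-in algorithms accept."""
    return ",".join(str(x) for x in features)

def invoke_endpoint(endpoint_name, features):
    """Call a deployed SageMaker HTTPS endpoint for a real-time inference.

    Requires AWS credentials and a live endpoint; shown here only to
    illustrate the request shape described in Hunt's post.
    """
    import boto3  # imported here so the sketch loads without AWS installed

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,      # placeholder endpoint name
        ContentType="text/csv",
        Body=serialize_csv(features),
    )
    return response["Body"].read()
```

The same endpoint could also be called directly over HTTPS with a signed request; the SDK client simply handles the request signing and serialization.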
“Each of these components can be used in isolation, which makes it really easy to adopt Amazon SageMaker to fill in the gaps in your existing pipelines,” wrote Hunt.
SageMaker includes 10 of the most common machine learning algorithms, pre-installed and optimized. SageMaker also comes pre-configured to run TensorFlow and Apache MXNet, two of the most popular open source frameworks. Users also have the option of using their own framework.