The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.
Based on the current functionality, you should consider using Kubeflow if:
This list is based ONLY on current capabilities. We are investing significant resources to expand the functionality and are actively soliciting help from companies and individuals interested in contributing (see Contributing).
This documentation assumes you have a Kubernetes cluster already available.
For more general information on setting up a Kubernetes cluster, please refer to Kubernetes Setup. If you want to use GPUs, be sure to follow the Kubernetes instructions for enabling GPUs.
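As a quick sanity check (not part of the official setup steps), you can confirm that kubectl can reach the cluster and, if you enabled GPUs, that GPU nodes advertise the nvidia.com/gpu resource:

# Verify the cluster is reachable and the nodes are Ready.
kubectl version --short
kubectl get nodes
# If GPUs are enabled, GPU nodes should list the nvidia.com/gpu resource.
kubectl describe nodes | grep -i "nvidia.com/gpu"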
Requirements:
Run the following script to create a ksonnet app for Kubeflow and deploy it.
export KUBEFLOW_VERSION=0.2.5
curl https://raw.githubusercontent.com/kubeflow/kubeflow/v${KUBEFLOW_VERSION}/scripts/deploy.sh | bash
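For reference, the script roughly automates a standard ksonnet workflow like the sketch below. This is illustrative only: the app name, namespace, and package/prototype names (my-kubeflow, kubeflow, kubeflow/core, kubeflow-core) are assumptions and may differ from what deploy.sh actually does in your release.

# Illustrative sketch of the kind of ksonnet workflow deploy.sh automates; names are placeholders.
ks init my-kubeflow && cd my-kubeflow
ks registry add kubeflow github.com/kubeflow/kubeflow/tree/v${KUBEFLOW_VERSION}/kubeflow
ks pkg install kubeflow/core@v${KUBEFLOW_VERSION}
ks generate kubeflow-core kubeflow-core
kubectl create namespace kubeflow
ks env set default --namespace kubeflow
ks apply default -c kubeflow-core

Once the deployment finishes, kubectl get pods --all-namespaces should show the Kubeflow components starting up.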
Important: The commands above enable collection of anonymous user data to help us improve Kubeflow; for more information, including instructions for explicitly disabling it, please refer to the Usage Reporting section of the user guide.
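For example, opting out is a short parameter change in the generated ksonnet app; the following is a minimal sketch that assumes the kubeflow-core component exposes a reportUsage parameter (check the Usage Reporting section for the exact parameter name in your release).

# Assumed parameter name; run inside the generated ksonnet app, then re-apply.
ks param set kubeflow-core reportUsage false
ks apply default -c kubeflow-core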
For detailed troubleshooting instructions, please refer to the Troubleshooting Guide.