best practices on production deployment? #76
Comments
Please take a look at the Kubeflow Pipelines project; it uses MLMD for tracking the lineage of artifacts and jobs.
@dushyanthsc for deployment guidelines from the KFP side.
What's the plan for multi-tenancy support? I know MLMD doesn't have this concept yet; what's the best practice we should follow?
@Jeffwan In the current release, each mlmd-server talks to a single db instance. If users are allowed to share the same db, then reusing a single server with the released image is fine. When clients need to store to different dbs, multiple server instances are needed at the moment. Could you please elaborate on the multi-tenancy use case in your deployment, so we can better understand the priorities?
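To illustrate the "one backend per server/store" point above, here is a minimal Python sketch using the `ml_metadata` client library. It assumes a MySQL backend; the host, credentials, and database names are placeholders for this example, not anything prescribed by MLMD or this thread.

```python
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2


def make_store(db_name: str) -> metadata_store.MetadataStore:
    """Each store (or gRPC server) is bound to exactly one database."""
    config = metadata_store_pb2.ConnectionConfig()
    config.mysql.host = "mysql.example.internal"  # placeholder host
    config.mysql.port = 3306
    config.mysql.database = db_name               # one database per instance
    config.mysql.user = "mlmd"                    # placeholder credentials
    config.mysql.password = "change-me"
    return metadata_store.MetadataStore(config)


# Tenants that cannot share a database need separate connection configs,
# and hence separate server instances in the gRPC deployment model.
store_team_a = make_store("mlmd_team_a")
store_team_b = make_store("mlmd_team_b")
```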
@hughmiao Thanks for the explanation. Sure, let me collect more requirements internally and come back with a concrete summary; then we can have a discussion.
I'm also looking for guidance here. How exactly do we deploy MLMD in production? Is there a Docker image that we can simply run without going through Bazel? Any tips would be helpful :)
I wonder what the recommended way of deploying `mlmd` in production is. The only piece of documentation about starting the gRPC server that I found is here:

Did I miss something?

What would be the best approach to deploying `mlmd` in Kubernetes? Some example manifests / documentation on this would be very helpful.
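For readers landing here with the same question: once an MLMD gRPC server is running somewhere (for example behind a Kubernetes Service), client code talks to it through the `ml_metadata` Python package. Below is a minimal sketch; the service DNS name `metadata-grpc-service` and port `8080` are placeholders I chose for the example, not values from any official manifest.

```python
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Point the client at the gRPC endpoint exposed by the MLMD server
# deployment; host and port below are assumed placeholders.
client_config = metadata_store_pb2.MetadataStoreClientConfig()
client_config.host = "metadata-grpc-service"
client_config.port = 8080

store = metadata_store.MetadataStore(client_config)

# Simple smoke test: register an artifact type and print its id.
artifact_type = metadata_store_pb2.ArtifactType()
artifact_type.name = "DataSet"
artifact_type.properties["split"] = metadata_store_pb2.STRING
type_id = store.put_artifact_type(artifact_type)
print("Registered ArtifactType with id:", type_id)
```

The same `MetadataStore` class accepts either a direct `ConnectionConfig` (talking to the database itself) or a `MetadataStoreClientConfig` (talking to a gRPC server), so client code stays the same regardless of which deployment mode you choose.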