Kubernetes

Disappointed to see the k8s dashboard deprecation
I just saw the email regarding the deprecation of the built-in Kubernetes dashboard. This is incredibly disappointing. I chose DO because of the simplicity and assistance that your web UI and service provide, such as built-in web-based tools that go above and beyond. When I use DO, I trust that your service is just going to work and be easy when I need to go in and make changes, add services, or debug. Dashboards for databases and even k8s are a huge part of that. Setting one up on my own is a pain.

With that in mind: it is incredibly sad to see the k8s dashboard go away. For many users, the convenience (even if rarely used) of a dashboard that already works is a big deal, especially for people who aren't k8s experts, or who find that even getting their own dashboard installed is a headache. I really hope DO reconsiders this, and understands that a built-in tool that makes k8s easier is exactly the type of thing that differentiates DO from other cloud providers (!).

Just because the dashboard is rarely used doesn't mean it isn't immensely valued when it is needed. Maybe you can do a better job advertising that it exists -- or maybe you can survey people and ask them: "Even if you didn't know it existed, are you happy that the k8s dashboard exists? If you had a production incident and were on the run with limited tools at your disposal to debug quickly from your browser, would the k8s dashboard be helpful?" In my case I've used it rarely. But when I did, it was a godsend. Please don't mistake rare usage for the value it provides WHEN it is needed.

Thank you for the consideration. I hope to see you reverse this decision.
Surge upgrades should wait for nodes to be ready
Currently, upgrading a Kubernetes cluster with surge upgrades enabled on DigitalOcean still results in downtime, even though the feature is intended to prevent exactly that. We are requesting improvements to the upgrade process to ensure zero-downtime upgrades as the cluster scales.

Current Behavior:

When performing a cluster upgrade:

1. The upgrade is initiated via the dashboard.
2. New nodes are created (this takes 1–2 minutes).
3. The new nodes appear in management tools (e.g., Lens) after 20–30 seconds.
4. The new nodes are tainted as not ready.
5. The old nodes are immediately marked as SchedulingDisabled, which kills all pods before the new nodes are ready.
6. The new nodes become ready 20–30 seconds later and begin running pods.
7. Full recovery takes several minutes, resulting in noticeable downtime.

Expected/Ideal Behavior:

The upgrade process should ensure that old nodes are only drained after the new nodes are fully ready, minimizing or eliminating downtime. A more robust upgrade flow would be:

1. Start the upgrade in the dashboard.
2. New nodes are created (ideally, this should also be faster).
3. The new nodes appear in management tools.
4. The new nodes are tainted as not ready.
5. Wait until the new nodes are fully ready.
6. Only then drain the old nodes, and wait for the drain to complete (or time out after 5 minutes).
7. Once drained, remove the old nodes from the cluster.
8. Mark the upgrade as complete.

Why this matters:

Zero-downtime upgrades are a critical expectation for Kubernetes users, especially as clusters scale and host production workloads. The current process undermines the value of surge upgrades and may force teams to consider alternative providers.

Request:

• Please adjust the upgrade workflow so that old nodes are only drained after the new surge nodes are fully ready.
• Consider optimizing node provisioning speed.
• Provide more transparency or documentation if there are best practices or settings that help achieve zero-downtime upgrades.

After a chat with support, they pointed us to Pod Disruption Budgets (see the sketch below). These help, but we still strongly encourage prioritizing this feature. Thank you!
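For anyone hitting this in the meantime: a PodDisruptionBudget can at least cap how many replicas a node drain evicts at once, because the eviction API used by drains refuses evictions that would violate the budget. A minimal sketch follows; the name, namespace, and `app: web` label are hypothetical placeholders for your own workload.

```yaml
# Hypothetical PodDisruptionBudget: during a node drain, the eviction API
# will not evict a pod if doing so would leave fewer than minAvailable
# ready pods matching the selector.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb        # placeholder name
  namespace: default   # placeholder namespace
spec:
  minAvailable: 1      # always keep at least one matching pod running
  selector:
    matchLabels:
      app: web         # must match the Deployment's pod template labels
```

Note that this only helps when the matched workload runs two or more replicas; with a single replica, minAvailable: 1 simply blocks the drain until it times out. A PDB also only throttles evictions: it does not make replacement nodes come up faster or change the ordering of the upgrade steps, which is why the workflow change requested above remains the real fix.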