{"id":351,"date":"2021-08-31T14:39:27","date_gmt":"2021-08-31T14:39:27","guid":{"rendered":"https:\/\/fde.cat\/?p=351"},"modified":"2021-08-31T14:39:27","modified_gmt":"2021-08-31T14:39:27","slug":"how-to-rename-a-helm-release","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/how-to-rename-a-helm-release\/","title":{"rendered":"How to Rename a Helm Release"},"content":{"rendered":"<h3>Problem<\/h3>\n<p>The process for migrating from Helm <a href=\"https:\/\/helm.sh\/docs\/topics\/v2_v3_migration\/\">v2 to v3<\/a>, the latest stable major release, was pretty straightforward. However, while performing the migration, we encountered an anomaly with how one of the application charts had been deployed, thus introducing additional challenges.<\/p>\n<p>One of our application\u2019s Helm v2 releases did not adhere to the standard naming convention when we installed it in our pre-production and later into the production environments. This had gone unnoticed <em>(we do make mistakes!!!) <\/em>and surfaced at the time we decided to migrate from Helm v2 to v3. We have some automation capabilities in our pipeline that rely on the naming convention of the Helm release to sign off the deployment. Naturally, this started to fail for the application in scope. So our first task before we could start migrating the charts to Helm v3 was to fix the release\u00a0name.<\/p>\n<p>A quick internet search showed us that many in the Kubernetes community had <a href=\"https:\/\/github.com\/helm\/helm\/issues\/1809\">faced similar problems<\/a> and there <a href=\"https:\/\/stackoverflow.com\/questions\/45545663\/in-helm-can-i-change-the-chart-name-of-a-chart-that-is-already-up\">isn\u2019t a simple solution yet<\/a>. The closest answer we got was to delete the release and install it again with the correct name. Unfortunately, this wouldn\u2019t work for us, as the service is customer-facing, serving in real-time, and we couldn\u2019t afford any downtime. 
Hence, we had to consider other ways to solve this\u00a0problem.<\/p>\n<h3>Options<\/h3>\n<p>There were two theoretically possible solutions that would allow us to rename an existing release without causing any service disruption.<\/p>\n<p>Let\u2019s take a closer look at the approaches:<\/p>\n<p><strong>1. Trick the datastore<\/strong><\/p>\n<p>The first option was to modify the datastore <em>(i.e., ConfigMaps in Helm v2 and Secrets in Helm v3)<\/em> that stores the release manifest by replacing the existing <em>(incorrect)<\/em> release name string with the desired <em>(correct)<\/em> value. Helm v3 stores the release manifest as a gzipped, double base64-encoded Secret in the namespace.<\/p>\n<p><strong>## GET RELEASE INFO<\/strong><br \/>$ kubectl get secret -n <strong>&lt;NAMESPACE&gt;<\/strong> sh.helm.release.v1.<strong>&lt;RELEASE-NAME&gt;<\/strong>.v1 -o json | jq -r '.data.release' | base64 -D | base64 -D | gzip -d &gt; release.json<br \/><strong>## REPLACE RELEASE NAME WITH DESIRED NAME &amp; ENCODE<\/strong><br \/>$ DATA=`cat release.json | gzip -c | base64 | base64`<br \/><strong>## PATCH THE RELEASE<\/strong><br \/>$ kubectl patch secret -n <strong>&lt;NAMESPACE&gt;<\/strong> sh.helm.release.v1.<strong>&lt;RELEASE-NAME&gt;<\/strong>.v1 --type='json' -p=&quot;[{\\&quot;op\\&quot;:\\&quot;replace\\&quot;,\\&quot;path\\&quot;:\\&quot;\/data\/release\\&quot;,\\&quot;value\\&quot;:\\&quot;$DATA\\&quot;}]&quot;<\/p>\n<p>On multiple attempts with this approach, we noticed that the decoding\/encoding was thrown off by escape characters, binary data, and the like, and we couldn\u2019t upgrade the release after changing the name; in another instance, we lost a release\u2019s info entirely and had to restore from backup. The unpredictable results did not inspire confidence, so we dropped this approach, keeping it only as a last resort.<\/p>\n<p><strong>2. 
Orphan &amp;\u00a0Adopt<\/strong><\/p>\n<p>The second approach that we experimented with was more deterministic and, in a way, simpler; it didn\u2019t require the complex process of modifying the datastore. Instead, we disconnect the Kubernetes resources <em>(orphan)<\/em> from the incorrectly named Helm release and later have the new Helm release with the correct name start managing these resources <em>(adopt)<\/em>. Sounds simple, right? Voila!<\/p>\n<p>Let\u2019s walk through the steps with an example. Assume that we have an incorrectly named release called <strong>\u201cworld-hello.\u201d<\/strong> We\u2019ll rename it to something more meaningful, such as <strong>\u201chello-world.\u201d<\/strong><\/p>\n<p>First things first: we use Helm release names in the labelSelectors to select which backend pods the Kubernetes service <em>(kube-proxy)<\/em> directs traffic to. Since we are renaming the release, the correctly named new release will be installed, and the Kubernetes service will immediately start proxying traffic to the new ReplicaSet pods while they are still booting.<br \/>The service would be unavailable to our customers during this time. The application pods typically take about 20\u201330s to boot, and we can\u2019t afford a disruption that long. To prevent this, we decided to remove the release name from the labelSelectors field in the service\u00a0spec.<\/p>\n<p><em>Fig 1. Remove the release label from the service\u2019s selector\u00a0field<\/em><br \/><strong>## REMOVE RELEASE LABEL<\/strong><br \/><strong>$ git diff templates\/service.yaml<\/strong><br \/>app: {{ .Values.app.name }}<br \/><strong>- release: {{ .Release.Name }}<\/strong><\/p>\n<p>Next, let us follow the <a href=\"https:\/\/helm.sh\/blog\/migrate-from-helm-v2-to-helm-v3\/\">official steps<\/a> to migrate the release from Helm v2 to Helm v3 without correcting the name. 
Once done, issue an upgrade using the new client to validate that the resources are now managed by Helm v3.<br \/>The upgrade step will also add the label app.kubernetes.io\/managed-by=Helm to the resources managed by the release. Without this label on the resources, the release renaming will\u00a0fail.<\/p>\n<p><strong>## MIGRATE RELEASE FROM HELM v2 TO HELM v3<\/strong><br \/><strong>$ helm3 2to3 convert world-hello --release-versions-max 1 -n dev<br \/><\/strong>2020\/11\/12 19:06:44 Release \u201cworld-hello\u201d will be converted from Helm v2 to Helm v3.<br \/>2020\/11\/12 19:06:44 [Helm 3] Release \u201cworld-hello\u201d will be created.<br \/>2020\/11\/12 19:06:46 [Helm 3] ReleaseVersion \u201cworld-hello.v1\u201d will be created.<br \/>2020\/11\/12 19:06:47 [Helm 3] ReleaseVersion \u201cworld-hello.v1\u201d created.<br \/>2020\/11\/12 19:06:47 [Helm 3] Release \u201cworld-hello\u201d created.<br \/>2020\/11\/12 19:06:47 Release \u201cworld-hello\u201d was converted successfully from Helm v2 to Helm v3.<br \/>2020\/11\/12 19:06:47 Note: The v2 release information still remains and should be removed to avoid conflicts with the migrated v3 release.<br \/>2020\/11\/12 19:06:47 v2 release information should only be removed using `helm 2to3 cleanup` and when all releases have been migrated over<br \/><strong>## LIST HELM v3 RELEASE<\/strong><br \/><strong>$ helm3 ls -n dev<br \/><\/strong>NAME             NAMESPACE          REVISION<br \/>world-hello      dev                1<br \/><strong>## UPGRADE HELM v3 RELEASE<\/strong><br \/><strong>$ helm3 upgrade --install world-hello <strong>&lt;CHART&gt;<\/strong> -n dev<br \/><\/strong>Release \u201cworld-hello\u201d has been upgraded. Happy Helming!<br \/>NAME: world-hello<br \/>LAST DEPLOYED: Thu Nov 12 20:06:02 2020<br \/>NAMESPACE: dev<br \/>STATUS: deployed<br \/>REVISION: 2<br \/>TEST SUITE: None<\/p>\n<p>Now that we\u2019ve validated that the resources can be managed by Helm v3, let\u2019s begin the process of adopting the existing resources. 
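Since the rename fails without the managed-by label, it can help to confirm it actually landed on each resource before going further. A minimal sketch, assuming the resource names (hello-world) and namespace (dev) from this example; it is a dry run by default and only prints the commands:

```shell
# Dry-run by default: commands are printed, not executed.
# Set RUN= (empty) to run them against a real cluster.
# Resource names (hello-world) and namespace (dev) follow the article's example.
RUN="${RUN:-echo}"
for i in deploy cm sa svc role rolebinding; do
  $RUN kubectl get -n dev "$i" hello-world \
    -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}'
done
```

Each resource should report `Helm`; an empty result means the label is missing and the adoption below would fail.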
We need to add two annotations and a label to all the resources that need to be adopted by the new (correctly named) Helm v3 release. These annotations indicate to Helm v3 that the new release should now start managing these resources.<\/p>\n<p><strong>NOTE: <\/strong>Up to this point, the Kubernetes resources have been managed by the incorrectly named Helm release that we migrated from v2 to\u00a0v3.<\/p>\n<p><strong>## LABEL TO BE ADDED<\/strong><br \/>app.kubernetes.io\/managed-by=Helm<br \/><strong>## ANNOTATIONS TO BE ADDED<\/strong><br \/>meta.helm.sh\/release-name=<strong>&lt;NEW-RELEASE-NAME&gt;<\/strong><br \/>meta.helm.sh\/release-namespace=<strong>&lt;NAMESPACE&gt;<\/strong><br \/><strong>## ADD RELEASE NAME ANNOTATION<\/strong><br \/><strong>$ for i in deploy cm sa svc role rolebinding; do kubectl annotate -n dev $i hello-world meta.helm.sh\/release-name=hello-world --overwrite; done<\/strong><br \/>deployment.extensions\/hello-world annotated<br \/>configmap\/hello-world annotated<br \/>serviceaccount\/hello-world annotated<br \/>service\/hello-world annotated<br \/>role.rbac.authorization.k8s.io\/hello-world annotated<br \/>rolebinding.rbac.authorization.k8s.io\/hello-world annotated<br \/><strong>## ADD RELEASE NAMESPACE ANNOTATION<\/strong><br \/><strong>$ for i in deploy cm sa svc role rolebinding; do kubectl annotate -n dev $i hello-world meta.helm.sh\/release-namespace=dev --overwrite; done<\/strong><br \/>deployment.extensions\/hello-world annotated<br \/>configmap\/hello-world annotated<br \/>serviceaccount\/hello-world annotated<br \/>service\/hello-world annotated<br \/>role.rbac.authorization.k8s.io\/hello-world annotated<br \/>rolebinding.rbac.authorization.k8s.io\/hello-world annotated<\/p>\n<p>Once the annotations and label are added to the Kubernetes resources, install the release with the correct name to sign off on the adoption process. 
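The annotations above are scripted, but the managed-by label has no corresponding command in the walkthrough. A hedged sketch of the equivalent loop, following the same resource names; dry-run by default, and `--overwrite` keeps it idempotent if the earlier upgrade already set the label:

```shell
# Dry-run by default: commands are printed, not executed; set RUN= (empty) to apply.
# --overwrite makes this safe to re-run if the label is already present.
RUN="${RUN:-echo}"
for i in deploy cm sa svc role rolebinding; do
  $RUN kubectl label -n dev "$i" hello-world app.kubernetes.io/managed-by=Helm --overwrite
done
```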
Once the release is installed, all the resources are actively managed by the correctly named release <strong>\u201chello-world.\u201d<\/strong> Because we have rolling deployments, the ReplicaSet managed by the incorrectly named release will be orphaned and will need to be cleaned up manually.<\/p>\n<p><strong>## INSTALL HELM v3 RELEASE WITH CORRECT NAME<\/strong><br \/><strong>$ helm3 install hello-world <strong>&lt;CHART&gt;<\/strong> -n dev<br \/><\/strong>Release \u201chello-world\u201d does not exist. Installing it now.<br \/>NAME: hello-world<br \/>LAST DEPLOYED: Thu Nov 12 20:06:02 2020<br \/>NAMESPACE: dev<br \/>STATUS: deployed<br \/>REVISION: 1<br \/>TEST SUITE: None<br \/><strong>## LIST HELM v3 RELEASES<\/strong><br \/><strong>$ helm3 ls -n dev<br \/><\/strong>NAME             NAMESPACE          REVISION<br \/>world-hello      dev                2<br \/>hello-world      dev                1<br \/><strong>## LIST REPLICASET MANAGED BY INCORRECTLY NAMED RELEASE<\/strong><br \/><strong>$ kubectl get rs -n dev -l release=world-hello<br \/><\/strong>NAME                    DESIRED     CURRENT     READY     AGE<br \/>hello-world-8c5959d67   2           2           2         30m<br \/><strong>## LIST REPLICASET MANAGED BY CORRECTLY NAMED RELEASE<\/strong><br \/><strong>$ kubectl get rs -n dev -l release=hello-world<br \/><\/strong>NAME                    DESIRED     CURRENT     READY     AGE<br \/>hello-world-7f88445494  2           2           2         2m<\/p>\n<p>Since we also removed the release label from the service\u2019s labelSelector, traffic is proxied to ReplicaSets <em>(pods)<\/em> managed by both the correctly named and incorrectly named releases, i.e. <strong>\u201chello-world\u201d<\/strong> and <strong>\u201cworld-hello.\u201d<\/strong> Now we can start cleaning up the orphaned resources and the datastore entries for the incorrectly named\u00a0release.<\/p>\n<h3>Cleanup<\/h3>\n<p>First, let\u2019s add the release label that we initially removed back to the service\u2019s labelSelectors field. 
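Before restoring the selector, one can confirm that pods from both releases are actually behind the Service. A dry-run sketch; the `app` and `release` label keys are assumptions taken from the chart's service template shown earlier:

```shell
# Dry-run by default: commands are printed, not executed; set RUN= (empty) to run.
# `app` and `release` label keys are assumed from the chart's templates.
RUN="${RUN:-echo}"
$RUN kubectl get endpoints -n dev hello-world
$RUN kubectl get pods -n dev -l app=hello-world -L release
```

The endpoints list should include pod IPs from both ReplicaSets, and the `-L release` column shows which release each pod belongs to.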
Once done, the Kubernetes service <em>(kube-proxy)<\/em> will start sending traffic only to pods managed by the new release <strong>\u201chello-world.\u201d<\/strong><\/p>\n<p>Next, delete the orphaned ReplicaSet and the incorrectly named Helm v2 and v3 releases.<\/p>\n<p><strong>## ADD RELEASE LABEL<\/strong><br \/><strong>$ git diff templates\/service.yaml<br \/><\/strong>app: {{ .Values.app.name }}<br \/><strong>+ release: {{ .Release.Name }}<br \/><\/strong><strong>## DELETE REPLICASET MANAGED BY INCORRECTLY NAMED RELEASE<\/strong><br \/><strong>$ kubectl get rs -n dev -l release=world-hello<br \/><\/strong>NAME                    DESIRED     CURRENT     READY     AGE<br \/>hello-world-8c5959d67   2           2           2         30m<br \/><strong>$ kubectl delete rs hello-world-8c5959d67 -n dev<br \/><\/strong><strong>## LIST INCORRECTLY NAMED RELEASE DATASTORE (Helm v3)<\/strong><br \/><strong>$ kubectl get secret -n dev | grep &quot;sh.helm.release.v1.world-hello&quot;<br \/><\/strong>sh.helm.release.v1.world-hello.v1<br \/>sh.helm.release.v1.world-hello.v2<br \/><strong>## DELETE INCORRECTLY NAMED RELEASE DATASTORE (Helm v3)<\/strong><br \/>$ kubectl delete secret sh.helm.release.v1.world-hello.v1 -n dev<br \/>$ kubectl delete secret sh.helm.release.v1.world-hello.v2 -n dev<br \/><strong>## DELETE INCORRECTLY NAMED RELEASE DATASTORE (Helm v2)<\/strong><br \/>$ helm3 2to3 cleanup --name world-hello<\/p>\n<p>Finally, redeploy the application chart one more time through your deployment pipeline and verify that the upgrade goes through smoothly.<\/p>\n<p><strong>NOTE:<\/strong> Throughout this exercise, we had a traffic generator making continuous requests to the service endpoint, and we didn\u2019t notice a single failure <em>(non-2XX response code)<\/em>, indicating a seamless and successful migration\/renaming of a Helm\u00a0release.<\/p>\n<p><strong>## UPGRADE HELM v3 RELEASE WITH CORRECT NAME<\/strong><br \/><strong>$ helm3 upgrade --install hello-world <strong>&lt;CHART&gt;<\/strong> -n dev<br \/><\/strong>Release 
\u201chello-world\u201d has been upgraded. Happy Helming!<br \/>NAME: hello-world<br \/>LAST DEPLOYED: Thu Nov 12 20:40:06 2020<br \/>NAMESPACE: dev<br \/>STATUS: deployed<br \/>REVISION: 2<br \/>TEST SUITE: None<\/p>\n<p>On a closing note, renaming a Helm release is not a simple task; it involves a ton of prep work and experimentation. But at the same time, we got to learn interesting details about how Helm functions internally <em>(around migrations, executions, etc.)<\/em> and, in the process, found yet another way to rename a release\u2026 with documentation!<\/p>\n<p>We hope these learnings are useful and help the community alleviate some of the problems we faced during our migration!<\/p>\n<p><em>If you\u2019re interested in solving problems like these, <\/em><a href=\"https:\/\/careers.mail.salesforce.com\/tpil-blogs\"><em>join our Talent Portal<\/em><\/a><em> to check out open roles and get periodic updates from our recruiting team!<\/em><\/p>\n<p><a href=\"https:\/\/engineering.salesforce.com\/how-to-rename-a-helm-release-6fdcd7526ac7\">How to Rename a Helm Release<\/a> was originally published in <a href=\"https:\/\/engineering.salesforce.com\/\">Salesforce Engineering<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>\n<p><a href=\"https:\/\/engineering.salesforce.com\/how-to-rename-a-helm-release-6fdcd7526ac7?source=rss----cfe1120185d3---4\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Problem The process for migrating from Helm v2 to v3, the latest stable major release, was pretty straightforward. However, while performing the migration, we encountered an anomaly with how one of the application charts had been deployed, thus introducing additional challenges. 
One of our application\u2019s Helm v2 releases did not adhere to the standard naming&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2021\/08\/31\/how-to-rename-a-helm-release\/\">Continue reading <span class=\"screen-reader-text\">How to Rename a Helm Release<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-351","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":289,"url":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/journey-to-spinnaker-deployment-orchestration\/","url_meta":{"origin":351,"position":0},"title":"Journey to Spinnaker Deployment Orchestration","date":"August 31, 2021","format":false,"excerpt":"IntroductionSpinnaker has been gaining popularity as a Continuous Deployment (CD) solution. It certainly offers many useful features supporting deployment pipelines including but not limited to access permission control, automatic and manual gated deployment configurations when moving from one phase to the next. Having said that, if you are operating a\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":890,"url":"https:\/\/fde.cat\/index.php\/2024\/07\/02\/hyperforces-template-for-enhancing-developer-workflow-inside-the-7-pillars-of-agile-development\/","url_meta":{"origin":351,"position":1},"title":"Hyperforce\u2019s Template for Enhancing Developer Workflow: Inside the 7 Pillars of Agile Development","date":"July 2, 2024","format":false,"excerpt":"Written by Armin Bahramshahry and Shan Appajodu. 
Hyperforce is a pivotal infrastructure platform for Salesforce, enhancing global service delivery through top public cloud platforms for increased safety, scalability, and agility. Hyperforce enabled rollout of new innovations like Data Cloud and boosted the global scalability of Salesforce\u2019s Core CRM. To help\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":570,"url":"https:\/\/fde.cat\/index.php\/2022\/05\/02\/how-the-cinder-jits-function-inliner-helps-us-optimize-instagram\/","url_meta":{"origin":351,"position":2},"title":"How the Cinder JIT\u2019s function inliner helps us optimize Instagram","date":"May 2, 2022","format":false,"excerpt":"Since Instagram runs one of the world\u2019s largest deployments of the Django web framework, we have natural interest in finding ways to optimize Python so we can speed up our production application. As part of this effort, we\u2019ve recently open-sourced Cinder, our Python runtime that is a fork of CPython.\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":625,"url":"https:\/\/fde.cat\/index.php\/2022\/08\/30\/hyperpacks-using-buildpacks-to-build-hyperforce\/","url_meta":{"origin":351,"position":3},"title":"Hyperpacks: Using Buildpacks to Build Hyperforce","date":"August 30, 2022","format":false,"excerpt":"At Salesforce we regularly use our products and services to scale our own business. One example is Buildpacks, which we created nearly a decade ago and is now a part of Hyperforce. 
Hyperpacks are an innovative new way of using Cloud Native Buildpacks (CNB) to manage our public cloud infrastructure.\u00a0\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":690,"url":"https:\/\/fde.cat\/index.php\/2023\/03\/14\/how-is-salesforce-improving-everyday-developer-experiences-and-innovating-scalable-solutions\/","url_meta":{"origin":351,"position":4},"title":"How is Salesforce Improving Everyday Developer Experiences and Innovating Scalable Solutions?","date":"March 14, 2023","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we examine the life experiences and career paths that have shaped Salesforce engineering leaders. Meet Prianna Ahsan, a software engineering architect for MuleSoft\u2019s production engineering team. Prianna and her team enhance developer experiences by supporting cutting-edge projects, including the migration of MuleSoft onto Salesforce\u2019s\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":306,"url":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/blazing-the-trail-one-year-with-openjdk-11\/","url_meta":{"origin":351,"position":5},"title":"Blazing the Trail: One Year with OpenJDK 11","date":"August 31, 2021","format":false,"excerpt":"Early Adoption of Java Runtime Innovations in Production at\u00a0ScaleCo-written by Donna\u00a0ThomasIntroductionSalesforce was one of the first major enterprises to adopt OpenJDK 11 at scale in production, starting our adoption journey shortly after its release in late 2018. Cutting edge? Sure. Safe? Absolutely. 
You might not know this, but Salesforce has\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/351","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=351"}],"version-history":[{"count":1,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/351\/revisions"}],"predecessor-version":[{"id":359,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/351\/revisions\/359"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=351"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=351"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=351"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}