{"id":848,"date":"2024-04-01T15:56:00","date_gmt":"2024-04-01T15:56:00","guid":{"rendered":"https:\/\/fde.cat\/index.php\/2024\/04\/01\/unveiling-the-cutting-edge-features-of-ml-console-for-ai-model-lifecycle-management\/"},"modified":"2024-04-01T15:56:00","modified_gmt":"2024-04-01T15:56:00","slug":"unveiling-the-cutting-edge-features-of-ml-console-for-ai-model-lifecycle-management","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2024\/04\/01\/unveiling-the-cutting-edge-features-of-ml-console-for-ai-model-lifecycle-management\/","title":{"rendered":"Unveiling the Cutting-Edge Features of ML Console for AI Model Lifecycle Management"},"content":{"rendered":"<p>In our \u201cEngineering Energizers\u201d Q&amp;A series, we explore the journeys of engineering leaders who have made remarkable contributions in their fields. Today, we meet Venkat Krishnamani, a Lead Member of the Technical Staff for Salesforce Engineering and the lead engineer for <a href=\"https:\/\/www.salesforce.com\/products\/einstein-ai-solutions\/\">Salesforce Einstein\u2019s<\/a><strong> Machine Learning<\/strong> (<strong>ML) Console<\/strong>. This vital tool enables internal AI and ML engineers at Salesforce to streamline AI model lifecycle management with an intuitive interface, boosting productivity and simplifying AI development.<\/p>\n<p>Discover how Venkat\u2019s team enhances developer efficiency, overcomes technical challenges, and incorporates user feedback to refine features.<\/p>\n<p><strong>How does ML Console enhance internal developer productivity and simplify AI model lifecycle management?<\/strong><\/p>\n<p>My team designed ML Console with a singular focus: To boost developer productivity in AI model management. ML Console does this by simplifying the AI model lifecycle across Salesforce Einstein. 
By tackling the complexities of operational processes, it allows developers to easily monitor, debug, and manage deployed models, tenants, applications, and pipelines \u2014 cutting down on time spent navigating intricate systems.<\/p>\n<p>Key to ML Console\u2019s success is its integration of an extensive suite of features within an intuitive interface. This not only streamlines workflow management but also provides developers with <strong>easy access to essential data<\/strong>, minimizing errors and enhancing efficiency. With tools like straightforward log access and actionable commands tailored for the dynamic requirements of modern AI projects, the ML Console is a critical asset for developers aiming to navigate the complexities of AI applications.<\/p>\n<p><em>Venkat explains what makes Salesforce Engineering\u2019s culture unique.<\/em><\/p>\n<p><strong>What were the main challenges in developing the UI for data observability in ML Console, and how were they resolved?<\/strong><\/p>\n<p>The process of obtaining and denormalizing live data, as well as surfacing it, proved to be a significant hurdle, as it involved multiple artifacts \u2014 models, applications, pipelines, and flows \u2014 and services.<\/p>\n<p>Another challenge we faced was determining how to bundle the data together and establish connections between different artifacts across various services. The sheer volume of data that needed to be surfaced for observability purposes also presented a major challenge. With potentially millions of records being generated daily, managing and interacting with such a large amount of data required careful planning.<\/p>\n<p>To address these challenges, we implemented a guardrails framework. This framework consists of several key components, including an API router, configuration history, and a request queue. By defining the APIs and connecting them together, we were able to establish connections across services. 
Configurations were utilized to run these APIs and prioritize requests, ensuring that users who required immediate access to data could obtain it through on-demand data refresh capabilities.<\/p>\n<p><em>A detailed look at the guardrails framework.<\/em><\/p>\n<p><strong>How does your ML Console team address the resource-intensive nature of searching and filtering across millions of records, given the continuous data growth?<\/strong><\/p>\n<p>One approach we employ is data splitting. By categorizing and segmenting the data based on user preferences, we ensure that only relevant data is indexed for easy searching. Additionally, we are planning to create a separate list of the most frequently searched entities and records. This will enable instant access for users when performing searches.<\/p>\n<p>In terms of data reliability, we focus on both retention and deletion. While important data is retained in the main index for daily interactions, older records that are no longer actively accessed are moved to a separate index. This ensures that the most crucial and frequently used data remains readily accessible while optimizing storage resources.<\/p>\n<p>Enhancing search capabilities is equally important. We make search more feedback driven by incorporating features like spinners, loading buttons, and estimated time of completion. This provides users with visibility into the progress and resource requirements of their searches.<\/p>\n<p><strong>How is real-time availability of data ensured despite the limitations and constraints of data storage?<\/strong><\/p>\n<p>Previously, every 24 hours, we had a scheduled job that collected and updated data from all services, making it available in ML Console. However, with the integration of Event Bus, we will enhance this process.<\/p>\n<p>Instead of daily data pulls, we\u2019ll leverage Event Bus\u2019 streaming capabilities to identify and refresh only the modified data. 
This approach significantly reduces the number of data calls, minimizes API overload, and optimizes the system\u2019s performance.<\/p>\n<p>By refreshing only a small subset of the data, we can ensure real-time availability of information while mitigating the limitations and constraints of data storage. This not only improves efficiency but also reduces the strain on the system, resulting in a more streamlined and reliable experience for users.<\/p>\n<p><em>Venkat shares why engineers should join Salesforce.<\/em><\/p>\n<p><strong>How does ML Console\u2019s UI support AI model accuracy and empower developers to make informed decisions on model refinement?<\/strong><\/p>\n<div class=\"wp-block-group is-layout-constrained wp-container-1 wp-block-group-is-layout-constrained\">\n<p>Our UI enables efficient exploration of large language model (LLM) responses to determine the suitability of introducing a model into our ecosystem. It also facilitates exposing the model for inferencing via a prediction service and subsequently surfacing it in Einstein Studio. This is achieved through seamless integration with multiple key features:<\/p>\n<p><strong>Performance assessment<\/strong>: The internal model hub is a dedicated page where users can view all deployed models, regardless of their organization. This centralized view allows developers to easily assess the performance of their models.<\/p>\n<p><strong>Comparing model responses<\/strong>: With our UI, users can send a query to deployed LLMs and observe their responses on a single page. This streamlined approach simplifies the debugging process and facilitates targeted fine-tuning efforts.<\/p>\n<p><strong>Model switching<\/strong>: Switching between LLMs is effortless through a dropdown menu in the model hub. This flexibility enables developers to interact with specific models and evaluate their performance.<\/p>\n<p><strong>Toxicity analysis<\/strong>: Our UI provides information on the toxicity of model responses. 
This data helps developers frame their fine-tuning strategies and ensure that model outputs align with desired standards. For this, ML Console leverages Salesforce\u2019s <a href=\"https:\/\/help.salesforce.com\/s\/articleView?id=sf.generative_ai_trust_layer.htm&amp;type=5\">AI Trust Layer<\/a>, along with an inferencing playground and model evaluation tools, to deliver accurate data insights.<\/p>\n<\/div>\n<p><em>A look at ML Console\u2019s user-friendly UI.<\/em><\/p>\n<p><strong>How do you collect feedback from developers and internal teams to continuously enhance ML Console?<\/strong><\/p>\n<div class=\"wp-block-group is-layout-constrained wp-container-2 wp-block-group-is-layout-constrained\">\n<p>To continuously enhance the UI and address emerging challenges, we employ a variety of channels to gather feedback from developers and internal teams. These channels include:<\/p>\n<p><strong>Slack<\/strong>: We have a dedicated channel where users can report issues, provide recommendations, and suggest improvements. This platform fosters open communication between developers and consumers of ML Console.<\/p>\n<p><strong>AI platform internal demos<\/strong>: Every two weeks, we conduct demos to showcase new features and gather valuable feedback from participants. This allows us to understand how the UI is being received and identify areas for improvement.<\/p>\n<p><strong>Workgroup meetings<\/strong>: We hold meetings with stakeholders and service teams to gather feedback and ensure alignment. These meetings address UI-related changes or upcoming developments, ensuring that our UI meets expectations and aligns with project goals.<\/p>\n<p><strong>Roadmap discussions<\/strong>: We collect feedback through roadmap discussions, gathering input across different products and considering the needs and preferences of our users. 
This helps us produce unique content that can be utilized across the platform.<\/p>\n<\/div>\n<p>By utilizing these various channels, we can gather comprehensive feedback and make informed decisions to enhance the UI of our platform.<\/p>\n<p><strong>What\u2019s the future of ML Console?<\/strong><\/p>\n<p>While ML Console started off providing functionality for internal developers, we see a lot of synergy and similar demands from external developers. We are actively collaborating with teams across Salesforce to combine our efforts and provide a unified experience for both external and internal developers.<\/p>\n<div class=\"wp-block-group is-layout-constrained wp-container-3 wp-block-group-is-layout-constrained\">\n<h4 class=\"wp-block-heading\"><strong>Learn More<\/strong><\/h4>\n<p>Stay connected \u2014 join our <a href=\"https:\/\/flows.beamery.com\/salesforce\/eng-social-2023\">Talent Community<\/a>!<\/p>\n<p>Check out our <a href=\"https:\/\/www.salesforce.com\/company\/careers\/teams\/tech-and-product\/?d=cta-tms-tp-2\">Technology and Product<\/a> teams to learn how you can get involved.<\/p>\n<\/div>\n<p>The post <a href=\"https:\/\/engineering.salesforce.com\/unveiling-the-cutting-edge-features-of-ml-console-for-ai-model-lifecycle-management\/\">Unveiling the Cutting-Edge Features of ML Console for AI Model Lifecycle Management<\/a> appeared first on <a href=\"https:\/\/engineering.salesforce.com\/\">Salesforce Engineering Blog<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>In our \u201cEngineering Energizers\u201d Q&amp;A series, we explore the journeys of engineering leaders who have made remarkable contributions in their fields. 
Today, we meet Venkat Krishnamani, a Lead Member of the Technical Staff for Salesforce Engineering and the lead engineer for Salesforce Einstein\u2019s Machine Learning (ML) Console. This vital tool for internal AI and ML&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2024\/04\/01\/unveiling-the-cutting-edge-features-of-ml-console-for-ai-model-lifecycle-management\/\">Continue reading <span class=\"screen-reader-text\">Unveiling the Cutting-Edge Features of ML Console for AI Model Lifecycle Management<\/span><\/a><\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-848","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":881,"url":"https:\/\/fde.cat\/index.php\/2024\/06\/14\/25-productivity-tools-that-power-salesforce-engineering-teams\/","url_meta":{"origin":848,"position":0},"title":"25 Productivity Tools that Power Salesforce Engineering Teams","date":"June 14, 2024","format":false,"excerpt":"In this special edition of \u201cEngineering Energizers,\u201d we\u2019re celebrating Salesforce\u2019s 25th anniversary by showcasing 25 key productivity tools favored by leading engineers at Salesforce across India, the U.S., Israel, and Argentina. Explore the essential tools these experts rely on to enhance their productivity, tackle complex problems, and elevate innovation. 
1.\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":229,"url":"https:\/\/fde.cat\/index.php\/2021\/02\/02\/ml-lake-building-salesforces-data-platform-for-machine-learning\/","url_meta":{"origin":848,"position":1},"title":"ML Lake: Building Salesforce\u2019s Data Platform for Machine Learning","date":"February 2, 2021","format":false,"excerpt":"Salesforce uses machine learning to improve every aspect of its product suite. With the help of Salesforce Einstein, companies are improving productivity and accelerating key decision-making. Data is a critical component of all machine learning applications and Salesforce is no exception. In this post I will share some unique challenges\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":563,"url":"https:\/\/fde.cat\/index.php\/2022\/04\/12\/onboarding-slos-for-salesforce-services\/","url_meta":{"origin":848,"position":2},"title":"Onboarding SLOs for Salesforce services","date":"April 12, 2022","format":false,"excerpt":"At Salesforce, we operate thousands of services of various sizes: monolith and micro-services, both customer-facing and internal, across multiple substrates, i.e. first party and public cloud infrastructure. 
In our earlier blog \u201cREADS: Service Health Metrics,\u201d we talked about the Service Level Objective (SLO) framework called READS that we developed at\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":751,"url":"https:\/\/fde.cat\/index.php\/2023\/08\/22\/how-is-einstein-gpt-shaping-the-future-of-salesforce-development-and-unleashing-developer-productivity\/","url_meta":{"origin":848,"position":3},"title":"How is Einstein GPT Shaping the Future of Salesforce Development and Unleashing Developer Productivity?","date":"August 22, 2023","format":false,"excerpt":"By Yingbo Zhou and Scott Nyberg In our \u201cEngineering Energizers\u201d Q&A series, we examine the professional life experiences that have shaped Salesforce Engineering leaders. Meet Yingbo Zhou, a Senior Director of Research for Salesforce AI Research, where he leads the team to develop the model for Einstein GPT for Developers\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":791,"url":"https:\/\/fde.cat\/index.php\/2023\/08\/22\/how-is-einstein-shaping-the-future-of-salesforce-development-and-unleashing-developer-productivity\/","url_meta":{"origin":848,"position":4},"title":"How is Einstein Shaping the Future of Salesforce Development and Unleashing Developer Productivity?","date":"August 22, 2023","format":false,"excerpt":"By Yingbo Zhou and Scott Nyberg In our \u201cEngineering Energizers\u201d Q&A series, we examine the professional life experiences that have shaped Salesforce Engineering leaders. 
Meet Yingbo Zhou, a Senior Director of Research for Salesforce AI Research, where he leads the team to develop the model for Einstein for Developers, a\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":550,"url":"https:\/\/fde.cat\/index.php\/2022\/03\/10\/einstein-evaluation-store-beyond-metrics-for-ml-ai-quality\/","url_meta":{"origin":848,"position":5},"title":"Einstein Evaluation Store\u200a\u2014\u200aBeyond Metrics for ML\/AI Quality","date":"March 10, 2022","format":false,"excerpt":"Einstein Evaluation Store\u200a\u2014\u200aBeyond Metrics for ML\/AI\u00a0Quality An important transition is underway in machine learning (ML) with companies gravitating from a research-driven approach towards a more engineering-led process for applying intelligence to their business problems. We see this in the growing field of ML operations, as well as in the shift\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/848","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=848"}],"version-history":[{"count":0,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/848\/revisions"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=848"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=848"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=848"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}