{"id":288,"date":"2021-08-31T14:40:23","date_gmt":"2021-08-31T14:40:23","guid":{"rendered":"https:\/\/fde.cat\/?p=288"},"modified":"2021-08-31T14:40:23","modified_gmt":"2021-08-31T14:40:23","slug":"building-a-successful-enterprise-ai-platform","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/building-a-successful-enterprise-ai-platform\/","title":{"rendered":"Building a Successful Enterprise AI Platform"},"content":{"rendered":"<h3>Introduction<\/h3>\n<p>In 2016, I started as a fresh grad software engineer at a small startup called MetaMind, which was acquired by Salesforce. Since then, it has been quite a journey to achieve a lot with a small team. I\u2019m part of Einstein Vision and Language Platform team. Our platform provides customers with the ability to upload and train datasets (images or text) to produce models that can be used for generating insights in real time. We serve internal Salesforce teams working on Service Cloud, Marketing Cloud, and Industries Cloud, as well as external customers and developers.<\/p>\n<p>If you\u2019ve ever interacted with a chatbot on a leading e-commerce apparel store, financial institute, healthcare organization, or even a government agency, then it\u2019s likely that your request was processed on our platform to understand the question and provide an answer. For example, <a href=\"https:\/\/www.salesforce.com\/blog\/use-chatbots-to-deal-surges-in-case-volume\/\">Sun Basket<\/a> customers are able to track orders or packages, report any issues with delays or damage, and get a credit or refund. 
<a href=\"https:\/\/www.salesforce.com\/blog\/customer-service-chatbot-adventhealth\/\">AdventHealth<\/a> is able to provide patients with an interactive CDC COVID-19 assessment and educate them about the\u00a0disease.<\/p>\n<p>I\u2019m frequently asked about my experience building an AI platform from the ground up and what it takes to run it successfully, which is what this blog post explores.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1000\/1*s7vdqwkNfaMyMN8RLsvDpg.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<h3>User Experience<\/h3>\n<h4>Understand Your Customer | End-to-End User Experience<\/h4>\n<p>User experience is the key to our success. Our target users are developers and product managers. We provide a public-facing API to interact with our platform. As a team of product managers and developers, it was easy to imagine a fellow developer or product manager using an API to upload a tar file with images. But, the challenge we faced was that not all of our users in these roles were familiar with deep learning or machine learning.<\/p>\n<p>We had to educate our users by answering questions like: What\u2019s a dataset? Why do you need a dataset? What are classes in a dataset? How many samples do we need in a dataset? What does training on a dataset mean? What is a model? How do I use a model? And that\u2019s not all; in some cases, we were even asked, \u201cWhat is machine learning, and why do I need\u00a0it?\u201d<\/p>\n<p>We spent a considerable amount of time understanding our customer. We ran several in-house A\/B tests and opened our platform to a small number of pilot customers. We also ran workshops and hands-on training sessions to educate customers about the model training process, challenges with quality and quantity of data, prediction accuracy, and so on. 
Ultimately, we realized that, based on the level of understanding and experience with machine learning we encountered, we needed multiple channels to onboard users to our platform. So, we introduced <a href=\"https:\/\/einstein.ai\/support\">Einstein Support<\/a>. This page points you to the relevant resources for working with Einstein Vision and Language Platform.<\/p>\n<p>For a novice user who may not be familiar with machine learning concepts\u200a\u2014\u200asuch as dataset, training, prediction\u200a\u2014\u200awe provide step-by-step self-service tutorials via <a href=\"https:\/\/trailblazer.me\/TrailblazerLogin?mode=signup&amp;startURL=%2Fsetup%2Fsecur%2FRemoteAccessAuthorizationPage.apexp%3Fsource%3DCAAAAXfOD_rNMDAwMDAwMDAwMDAwMDAwAAAA5rnsstiOUOhUNdZC4YtziOGQNzO2FziS6SW4Sqn3aWcAlvuToIpiebnVwUK7-vlvgEzdKq3NQU0Fx4619cTkL8_BupD2k1rlmprD-3-9ipgMcUEMeCLbXtHWLVa1MfnbUtnF7CpkmO9v0_cOzH0etX4-KcYB_caj_IQtZlKsISwx9EF2ZgvA_UTa71aCcbtP_FQ499uqwn55y1rF8HkVGhU0ZcJ2FuW357ybCZ-BzLbaRUEtF6ZoLJ6c9xpgIsyjtu40AwknHf_0o5dKy64FVZ7VK2ru5erTrrj_Q7h9f8WDJlUJWrnvcN-BgyQ9VXwZjZK3BLThqjK8rRukyb3Ur8D9mwGzZPlOgX1v75ZNDk6rE-wqUXPLGjcADUJCCc7aNKeZKYu7OTQAuq_7j9ohc87ZdCEFBs42ZtpOUSGTETfud1SkdeZkK7TN7Bkd1cZpME4OccfovmKytEqDGAWGP127ugxE_PSUCczMYsN562SC-O4DO1cTO6PgJf8wm1B4hTt24kn8w70XwOvieWrp9a840uI_8JhHN-U0MOJs5GzmaLI3BzLIHbjZ1l-NG6mgKrJjW-sN3xASr7MjS26wbN-p8Si-UvfRzIP-3cQulhDxHSw-HrpwbLujBQg1f2DE6L744B2QRcHBg-XUF_YbRf2_IXSTvVGgmnaTqACge4YJWKn_cTc3xIu0ZY0hcayKczK2-a4o-2l3cTNw_j3UlEHw3VM2Od7_TTpa4nPT4Ctqp2umbxGOmoKhkRyRSeejjw%253D%253D%26display%3Dpage\">Trailhead<\/a>. One of my favorite guides is this <a href=\"https:\/\/metamind.readme.io\/docs\/journey-to-deep-learning\">Journey to Deep Learning<\/a>.<\/p>\n<p>To make it easier to get up and running, we also created <a href=\"https:\/\/appexchange.salesforce.com\/appxListingDetail?listingId=a0N3A00000Ed1V8UAJ\">Einstein Vision and Language Model Builder<\/a>. 
This is an AppExchange package that provides a UI for the Einstein Vision and Language APIs, and it enables customers to quickly train models and start using\u00a0them.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1024\/1*a6ccP4s00B0uCHg6fCRLwg.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><figcaption><strong>Einstein Vision and Language Platform Offerings Guide<\/strong><\/figcaption><\/figure>\n<p>For a user who is a developer looking to dig into the code, we have code snippets that show API usage\u200a\u2014\u200asuch as uploading datasets\u200a\u2014\u200ain different <a href=\"https:\/\/github.com\/muenzpraeger\/salesforce-einstein-vision-java\">languages<\/a> and <a href=\"https:\/\/github.com\/muenzpraeger\/salesforce-einstein-vision-swift\">frameworks<\/a>. For a product manager, we have <a href=\"https:\/\/metamind.readme.io\/\">API documentation<\/a> with cURL command snippets to make API\u00a0calls.<\/p>\n<pre>curl -X POST \\<br>-H \"Authorization: Bearer &lt;TOKEN&gt;\" \\<br>-H \"Cache-Control: no-cache\" \\<br>-H \"Content-Type: multipart\/form-data\" \\<br>-F \"type=image\" \\<br>-F \"path=https:\/\/einstein.ai\/images\/mountainvsbeach.zip\" \\<br>https:\/\/api.einstein.ai\/v2\/vision\/datasets\/upload\/sync<\/pre>\n<p>In addition, we created a community of developers via <a href=\"https:\/\/developer.salesforce.com\/forums?communityId=09aF00000004HMGIA2#!\/feedtype=RECENT&amp;dc=Predictive_Services&amp;criteria=ALLQUESTIONS\">forums<\/a> and <a href=\"https:\/\/developer.salesforce.com\/blogs\/tag\/einstein\">blogs<\/a>. 
It\u2019s so rewarding to see a developer blog post like this\u00a0<a href=\"https:\/\/developer.salesforce.com\/blogs\/developer-relations\/2018\/01\/training-bears-lessons-learned-einstein-vision-classifier.html\">one<\/a>.<\/p>\n<p>To further improve the user experience and to provide a quick sense of our platform\u2019s capabilities, we introduced <a href=\"https:\/\/metamind.readme.io\/docs\/use-pre-built-models\">pre-built models<\/a>. These models are built by our team and are available to all customers. Users can <a href=\"https:\/\/api.einstein.ai\/signup\">sign up<\/a> and quickly make a prediction. Within minutes, the user gets a sense of the value that the platform can\u00a0provide.<\/p>\n<p>Lastly, we\u2019re obsessed with our API quality and follow strict processes for API changes and releases. Backwards compatibility is always a hotly debated topic across the\u00a0team.<\/p>\n<h3>Team<\/h3>\n<h4>Domain Expertise | Technical Expertise<\/h4>\n<p>Great products are built by great teams, and an AI platform is no exception. In my opinion, there are four team pillars on which an AI platform is successfully built: Product, Data Science, Engineering, and Infrastructure. Each of these teams has its own charter. But, it\u2019s important that each team has both domain expertise and technical expertise.<\/p>\n<p>Domain expertise is required to understand the target market, the business value, the use cases, and how a customer uses AI within their business processes. Technical expertise involves understanding the importance of data and data quality, how models are trained and served, and the infrastructure that\u2019s the foundation of the platform. 
Unless all the stakeholders understand these challenges deeply, it\u2019s difficult to drive conversations towards solutions for addressing customer use\u00a0cases.<\/p>\n<p>Data scientists are focused on conducting research and experiments, building models, and pushing the boundaries of model accuracy. Engineers are focused on building production-grade systems. But, shipping a model from research to production is a non-trivial task. The more each team knows about both the product and technical aspects, the more efficient the team is and the better solutions they\u00a0create.<\/p>\n<p>For example, in a real-time chatbot session, the user expects to get an instantaneous response. A model may be highly accurate in its predictions, but those predictions lose value if they can\u2019t be served within a certain timeframe to the end user. To put it into perspective, we have thousands of customers simultaneously training their own models and using them in real time. Our customers expect that, once their model is trained, it\u2019s available for use and serves predictions within a few hundred milliseconds.<\/p>\n<p>Building a platform that supports such requirements demands people who understand model architectures and their functional and compute requirements, know how to use those models to serve customer requests, and, ultimately, can build them to return responses on which customers can build business logic and realize value. This requires data scientists, software engineers, infrastructure engineers, and product managers to collaborate closely in order to provide an end-to-end user experience.<\/p>\n<h3>Process<\/h3>\n<h4>Communication | Collaboration<\/h4>\n<p>Given the cross-functional nature of our team organization, it took some iterating to land on a way for us to seamlessly work together. 
Active collaboration among all stakeholders\u200a\u2014\u200athe applied research and data science team, the platform and infrastructure engineering team, and the product team\u200a\u2014\u200ahas to start at the very first phase, identifying the requirements, and continue through delivering the solution to production.<\/p>\n<blockquote><p>An effective process of communication and collaboration reduces friction between cross-functional teams, creates accountability, improves productivity, promotes exchange of ideas, and drives innovation.<\/p><\/blockquote>\n<p>After iterating through different schedules of daily standup updates, weekly sync meetings, sprint demos, and retrospective meetings, we ultimately came up with an effective, end-to-end data science and engineering collaboration process, which you can see in the following timeline.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1000\/1*qLCGOEQuGFlBQX0eKXg7HQ.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><figcaption><strong>Cross-Functional Team Collaboration Timeline and\u00a0Process<\/strong><\/figcaption><\/figure>\n<p>This is a high-level view of our collaboration process. The idea is that every use case involves active collaboration among stakeholders from multiple teams starting right at the beginning.<\/p>\n<p>Data scientists\/researchers, engineers, and product managers work together to identify solutions and propose alternatives.<\/p>\n<p>For example, in the Spiking phase, there are multiple teams working in parallel. Data scientists figure out what neural network architecture best fits the use case. Platform engineers work to identify potential API changes and changes involved in data management, and to set up training and prediction systems. 
Infrastructure engineers work to identify resource requirements: GPU\/CPU clusters, CI\/CD pipeline changes, and so\u00a0on.<\/p>\n<p>As part of each of the collaboration steps, we created a uniform way to track communication among these cross-functional teams and to record design and architecture artifacts and decisions using Quip documents.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/948\/1*VaZwYqmrvdaemFBsfZ9-VQ.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><figcaption><strong>Quip Document Templates for Recording Project Artifacts<\/strong><\/figcaption><\/figure>\n<p>This graphic shows a sample project home document that contains a project summary and a list of representatives from across the teams, and maintains records of all project-related artifacts throughout the collaboration process.<\/p>\n<p>We created templates for all the artifacts so teams can simply fill in the details. Using templates ensures uniformity in recording information across teams and projects.<\/p>\n<p>Using this process, we were able to ship one of our first research-to-production models, Optical Character Recognition (OCR), in a matter of two\u00a0weeks.<\/p>\n<h3>Agility\u200a\u2014\u200aResearch to Production<\/h3>\n<h4>Cloud Agnostic | ML Framework Agnostic<\/h4>\n<p>The ability to iterate and ship new features is an important attribute of an AI platform team. When it comes to identifying the right solution to solve an AI use case, data scientists and researchers need to experiment with datasets and neural networks. This involves running model training sessions with several different parameters and various feature extraction and data augmentation techniques. All of these operations are expensive in terms of time and compute resources. 
Most experiments require expensive GPUs, and training can still take several hours (or sometimes days), depending on the complexity of the\u00a0problem.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1024\/1*reDkhCFFL24o96HgdtXsmg.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><figcaption><strong>High-level View of Training and Experimentation Workflow<\/strong><\/figcaption><\/figure>\n<p>So our teams needed an easy, consistent, flexible, and framework-agnostic approach to experimentation, as well as an established pipeline to launch features and models to production for customers.<\/p>\n<p>Our platform provides a <a href=\"https:\/\/engineering.salesforce.com\/training-experimentation-a-next-generation-generic-ml-training-and-data-science-platform-for-dcad8c4621b\">training and experimentation framework<\/a> for researchers and data scientists to quickly launch experiments. We have published a training SDK that abstracts all the service and infrastructure-level components. The SDK provides an interface to specify training parameters, pull datasets, push training metrics, and publish model artifacts. The framework also provides APIs to get the status of training jobs, organize experiments, and visualize experiment metadata.<\/p>\n<p>To ship a model to production, the data science and research team uses an established Continuous Integration (CI) pipeline to build and package their training and model serving code as Docker images. The platform team also provides base Docker images: Python, TensorFlow, PyTorch. 
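The training SDK itself isn't published in this post, so as a rough illustration only, here is a minimal Python sketch of what such an interface could look like; every name here (TrainingContext, log_metric, publish_model) is hypothetical, not the actual SDK surface:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingContext:
    # Hypothetical sketch: the platform would inject hyperparameters and
    # collect metrics/artifacts behind an interface like this, so user
    # training code never touches service or infrastructure components.
    params: dict
    metrics: dict = field(default_factory=dict)
    artifacts: list = field(default_factory=list)

    def log_metric(self, name, value):
        # Push a training metric to the platform's experiment tracker.
        self.metrics.setdefault(name, []).append(value)

    def publish_model(self, path):
        # Register a trained model artifact with the platform.
        self.artifacts.append(path)

def train(ctx):
    # User training code sees only the SDK surface.
    for epoch in range(ctx.params["epochs"]):
        ctx.log_metric("loss", 1.0 / (epoch + 1))
    ctx.publish_model("model-v1.bin")
```

The point of the abstraction is that the same `train` function runs unchanged whether the context is backed by a local run or the platform's GPU cluster.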
The CI process publishes these Docker images to a central repository.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1024\/1*CaNPnch2Xoxr5QlLdlNOdA.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><figcaption><strong>gRPC Based Abstraction for Serving Models in Production<\/strong><\/figcaption><\/figure>\n<p>Similar to the training SDK, our <a href=\"https:\/\/engineering.salesforce.com\/realtime-predictions-in-a-multitenant-environment-3b9018fdf63c\">prediction service<\/a> acts as a sidecar and interacts with the model serving\/inference container that the data science and research team publishes using a standard gRPC contract. This gRPC contract abstracts any pre\/post-processing and model prediction-specific operations and provides a uniform way to provide input to the model and receive a prediction as\u00a0output.<\/p>\n<p>The data science and research team can then use platform-provided APIs to launch the new version of these training and model serving Docker images in production. After the Docker images are launched, customers can then train new models and use them for generating insights.<\/p>\n<p>The key takeaway here is that our platform services are cloud-agnostic (we run on Kubernetes) and ML framework agnostic. As long as a new team member is familiar with Docker and API interfaces, they can be productive on day one. This drastically reduces the learning curve of using our platform, enables rapid prototyping, and improves time-to-production.<\/p>\n<h3>Extensibility and Scalability<\/h3>\n<h4>Standard Convention | Uniformity<\/h4>\n<p>One of the biggest learnings for us as a team over the past few years has been designing the platform for extensibility and scalability. Our platform has gone through several iterations of improvements. Initially, most of our services were designed for specific AI use cases. 
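The gRPC contract described above is not spelled out in this post; as a hedged Python sketch of the idea (the real contract is a gRPC service definition, and all names below are illustrative), every model-serving container implements one uniform predict interface that the sidecar calls:

```python
from abc import ABC, abstractmethod

class ModelServer(ABC):
    # Sketch of the uniform serving contract; in production this would be
    # a gRPC service definition, and these names are hypothetical.
    @abstractmethod
    def predict(self, payload: bytes) -> dict:
        """Run pre-processing, inference, and post-processing on raw input."""

class ToySentimentServer(ModelServer):
    # Toy stand-in for a real model-serving/inference container.
    def predict(self, payload: bytes) -> dict:
        text = payload.decode("utf-8").lower()
        label = "positive" if "great" in text else "neutral"
        return {"label": label}

def sidecar_predict(server: ModelServer, payload: bytes) -> dict:
    # The prediction sidecar depends only on the contract, never on any
    # model-specific pre/post-processing details.
    return server.predict(payload)
```

Because the sidecar only sees `ModelServer`, a new model container can be onboarded without changing the prediction service at all.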
So some parts appeared rigid as we onboarded new use\u00a0cases.<\/p>\n<p>Our data management service was initially built to handle image classification and text classification use cases. Naturally, all of our data upload, validation and verification, and update and fetch operations were tuned to those two data formats\/schema. As we started onboarding image detection, optical character recognition, and named entity recognition use cases, we realized that it\u2019s redundant and inefficient to implement dataset-specific pipelines to perform these operations. It was evident that we needed a dataset-type-agnostic system that provides a uniform way to ingest, maintain, and fetch datasets.<\/p>\n<p>We built a new <a href=\"https:\/\/engineering.salesforce.com\/deep-learning-dataset-management-system-at-scale-571532d0d200\">data management system<\/a>, which now allows our platform to support any type of dataset (images, text, audio) and provide a uniform way to access those datasets. The idea is to maintain a uniform \u201cvirtual\u201d dataset that\u2019s nothing but metadata of the dataset (number of examples, train set vs. test set, sample IDs) and to expose APIs that provide a simple and unified experience to access data regardless of the\u00a0type.<\/p>\n<p>As mentioned in the section above, our training and prediction services provide abstraction using the SDK and gRPC contracts, respectively. This allows the platform to be consistent and uniform regardless of the deep learning framework, tooling, or compute requirements. Our training service can scale horizontally to launch several experiments and training sessions without having to address framework or language-specific challenges. 
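The "virtual" dataset idea described above can be sketched in a few lines of Python; this is an illustration under my own assumptions, not the actual data management system's API, and all names (SampleRecord, VirtualDataset) are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class SampleRecord:
    # Hypothetical metadata entry: the raw image/text/audio stays in
    # type-specific storage, referenced only by `uri`.
    sample_id: str
    split: str   # "train" or "test"
    uri: str

class VirtualDataset:
    # Type-agnostic view: the access API is identical whether the
    # underlying samples are images, text, or audio.
    def __init__(self, dataset_type: str, records: List[SampleRecord]):
        self.dataset_type = dataset_type
        self._records = records

    def num_examples(self, split: Optional[str] = None) -> int:
        # Count examples overall, or within one split.
        if split is None:
            return len(self._records)
        return sum(1 for r in self._records if r.split == split)

    def sample_ids(self, split: str) -> List[str]:
        # Uniform fetch interface, regardless of dataset type.
        return [r.sample_id for r in self._records if r.split == split]
```

Since only metadata is uniform, adding a new dataset type (say, audio) means adding a new storage backend behind `uri`, while every consumer of the dataset API stays unchanged.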
And, similarly, the prediction service can easily onboard new model serving containers and acquire predictions from them over gRPC contracts, unaware of any model-specific pre\/post-processing and neural network-level operations.<\/p>\n<p>Establishing a standard convention and abstracting the low-level details to create uniform interfaces has allowed our platform to be extensible and scalable.<\/p>\n<h3>Trust<\/h3>\n<h4>Security-First Mindset | Ethical\u00a0AI<\/h4>\n<p>Our #1 value at Salesforce is trust. For us, trust means keeping our customers\u2019 data secure. Any product or feature we build follows a security-first mindset. The Einstein Vision and Language Platform is <a href=\"https:\/\/metamind.readme.io\/docs\/introduction-to-the-einstein-predictive-vision-service#section-einstein-vision-and-language-are-hipaa-compliant\">SOC-II &amp; HIPAA<\/a> compliant.<\/p>\n<p>Every piece of code we write and every third-party or open-source library we use goes through threat and vulnerability scans. We actively partner with the product and infrastructure security teams, as well as the legal team at Salesforce, to conduct reviews of features before we deliver them to production. We implement service-to-service mTLS, which means all the customer data that our systems process is encrypted in transit. And, to secure data at rest, we encrypt the storage\u00a0volumes.<\/p>\n<p>All customers are naturally sensitive about their data. The security processes we follow have been critical in gaining the trust of our customers. 
Being compliant also means we\u2019re able to venture into highly data-sensitive verticals like healthcare.<\/p>\n<p>Our research and data science teams follow <a href=\"https:\/\/einstein.ai\/ethics\">ethical AI<\/a> practices to ensure that the models we build are responsible, accountable, transparent, empowering, and inclusive.<\/p>\n<h3>Acknowledgement<\/h3>\n<p>Thanks to <a href=\"https:\/\/www.linkedin.com\/in\/shashankharinath\/\">Shashank Harinath<\/a> for key contributions in establishing the data science and engineering collaboration process and for keeping me honest about our systems and architecture, and to <a href=\"https:\/\/www.linkedin.com\/in\/dsiebold\/\">Dianne Siebold<\/a> and <a href=\"https:\/\/www.linkedin.com\/in\/lauralindeman\/\">Laura Lindeman<\/a> for their review and feedback on improving this blog post. It has been a wonderful learning experience working with the talented folks on the Einstein Vision and Language Platform and Infra\u00a0teams.<\/p>\n<p>Please reach out to <a href=\"https:\/\/www.linkedin.com\/in\/arpeetkale\/\">me<\/a> with any questions. I would love to hear your thoughts on the topic. 
If you\u2019re interested in solving challenging deep learning and AI platform problems, we\u2019re\u00a0<a href=\"https:\/\/einstein.ai\/careers\">hiring<\/a>.<\/p>\n<hr>\n<p><a href=\"https:\/\/engineering.salesforce.com\/building-a-successful-enterprise-ai-platform-197a3c4d8b60\">Building a Successful Enterprise AI Platform<\/a> was originally published in <a href=\"https:\/\/engineering.salesforce.com\/\">Salesforce Engineering<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction In 2016, I started as a fresh grad software engineer at a small startup called MetaMind, which was acquired by Salesforce. Since then, it has been quite a journey to achieve a lot with a small team. I\u2019m part of Einstein Vision and Language Platform team. 
Our platform provides customers with the ability to&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2021\/08\/31\/building-a-successful-enterprise-ai-platform\/\">Continue reading <span class=\"screen-reader-text\">Building a Successful Enterprise AI Platform<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-288","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":828,"url":"https:\/\/fde.cat\/index.php\/2024\/02\/20\/unlocking-hyperforce-migration-innovative-solutions-for-a-smooth-transition-to-the-cloud\/","url_meta":{"origin":288,"position":0},"title":"Unlocking Hyperforce Migration: Innovative Solutions for a Smooth Transition to the Cloud","date":"February 20, 2024","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we delve into the experiences and expertise of Salesforce Engineering leaders. Today, we\u2019re meeting Mahamadou Sylla, a Senior Member of the Technical Staff at Salesforce Engineering. 
Mahamadou is a key member of our Hyperforce\u2019s Bill of Materials (BOM) team, which assists internal teams in\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":688,"url":"https:\/\/fde.cat\/index.php\/2023\/03\/07\/automated-environment-build-salesforces-secret-sauce-for-rapid-cloud-expansion\/","url_meta":{"origin":288,"position":1},"title":"Automated Environment Build: Salesforce\u2019s Secret Sauce for Rapid Cloud Expansion","date":"March 7, 2023","format":false,"excerpt":"Around the world, companies must satisfy global compliance regulations or face pricey fines, where failure to comply results in 2.71 higher costs than the cost to comply. For example, Fortune 500 companies are projected to lose $8 billion per year as a result of GDPR non-compliance. In response, Salesforce created\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":694,"url":"https:\/\/fde.cat\/index.php\/2023\/03\/23\/big-data-processing-driving-data-migration-for-salesforce-data-cloud\/","url_meta":{"origin":288,"position":2},"title":"Big Data Processing: Driving Data Migration  for Salesforce Data Cloud","date":"March 23, 2023","format":false,"excerpt":"The tsunami of data \u2014 set to exceed 180 zettabytes by 2025 \u2014 places significant pressure on companies. Simply having access to customer information is not enough \u2014 companies must also analyze and refine the data to find actionable pieces that power new business. 
As businesses collect these volumes of\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":834,"url":"https:\/\/fde.cat\/index.php\/2024\/03\/06\/how-the-new-einstein-1-platform-manages-massive-data-and-ai-workloads-at-scale\/","url_meta":{"origin":288,"position":3},"title":"How the New Einstein 1 Platform Manages Massive Data and AI Workloads at Scale","date":"March 6, 2024","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we feature Leo Tran, Chief Architect of Platform Engineering at Salesforce. With over 15 years of engineering leadership experience, Leo is instrumental in developing the Einstein 1 Platform. This platform integrates generative AI, data management, CRM capabilities, and trusted systems to provide businesses with\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":896,"url":"https:\/\/fde.cat\/index.php\/2024\/07\/16\/the-unstructured-data-dilemma-how-data-cloud-handles-250-trillion-transactions-weekly\/","url_meta":{"origin":288,"position":4},"title":"The Unstructured Data Dilemma: How Data Cloud Handles 250 Trillion Transactions Weekly","date":"July 16, 2024","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we delve into the journeys of engineering leaders who have made notable strides in their areas of expertise. This edition features Adithya Vishwanath, Vice President of Software Engineering at Salesforce. 
He leads the Data Cloud team, a pivotal platform that integrates diverse data sources,\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":866,"url":"https:\/\/fde.cat\/index.php\/2024\/05\/15\/revealing-einsteins-blueprint-for-creating-the-new-unified-ai-platform-from-siloed-legacy-stacks\/","url_meta":{"origin":288,"position":5},"title":"Revealing Einstein\u2019s Blueprint for Creating the New, Unified AI Platform from Siloed Legacy Stacks","date":"May 15, 2024","format":false,"excerpt":"In our insightful \u201cEngineering Energizers\u201d Q&A series, we delve into the inspiring journeys of engineering leaders who have achieved remarkable success in their specific domains. Today, we meet Indira Iyer, Senior Vice President of Salesforce Engineering, leading Salesforce Einstein development. Her team\u2019s mission is to build Salesforce\u2019s next-gen AI Platform,\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/288","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=288"}],"version-history":[{"count":1,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/288\/revisions"}],"predecessor-version":[{"id":422,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/288\/revisions\/422"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=288"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=28
8"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=288"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}