{"id":224,"date":"2021-02-02T20:01:34","date_gmt":"2021-02-02T20:01:34","guid":{"rendered":"https:\/\/fde.cat\/?p=224"},"modified":"2021-02-02T20:01:35","modified_gmt":"2021-02-02T20:01:35","slug":"building-a-secured-data-intelligence-platform","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2021\/02\/02\/building-a-secured-data-intelligence-platform\/","title":{"rendered":"Building a Secured Data Intelligence Platform"},"content":{"rendered":"<p>The Salesforce Unified Intelligence Platform (UIP) team is building a shared, central, internal data intelligence platform. Designed to drive business insights, UIP helps improve user experience, product quality, and operations. At Salesforce, Trust is our number one company value, and building in robust security is a key component of our platform development. In this blog, I\u2019ll share our experience and learnings relating to security design, covering topics such as data classification, data encryption, network security, authentication, data access, multi-tenancy, data environments, and third-party software. If you also work on data platforms, I hope this blog will provide some\u00a0ideas!<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1000\/1*wXQD7VjloUTr43C9GYAKyg.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<h3>Background<\/h3>\n<p>UIP\u2019s predecessor was a huge Hadoop cluster that stored Salesforce\u2019s internal data and provided internal services (such as HDFS, Hive, and Spark) to all teams across the company. It ran in our first-party data centers and, as a result, faced several challenges around capacity, scalability, and feature agility. To overcome those challenges, with help from several infrastructure teams, we re-designed the system and built UIP from the ground up to run in public clouds. 
It is still a work in progress, but the diagram below shows the architecture we envision.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1024\/1*sXUB9K_9JEPe2DbNjzIxuw.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<p>When we look at our internal users, their usage scenarios typically follow a data lifecycle in logical order. Looking at the diagram, from bottom to\u00a0top:<\/p>\n<ol>\n<li>First, our <strong>app developers<\/strong> produce internal logs, which are transported into object stores. Some users also need a data warehouse built on top of the object\u00a0stores.<\/li>\n<li>With this massive amount of data, <strong>infrastructure engineers <\/strong>need ETL to transform and prepare data and metadata; <strong>data stewards <\/strong>need data cataloging to power discovery, and data governance such as access and retention control.<\/li>\n<li>Then users can compute on the data. <strong>Data analysts <\/strong>and<strong> product managers <\/strong>like to run interactive queries;<strong> data engineers <\/strong>like to run batch jobs and to orchestrate the jobs; <strong>data scientists<\/strong> like to run machine learning\u00a0jobs.<\/li>\n<li>Finally, all users can drive business insights from the compute results. They can visualize them, share them in team notebooks, or use IDEs, customized tools, and automations to further consume the\u00a0results.<\/li>\n<\/ol>\n<p>Overall, UIP enables all personas along the data lifecycle. 
We\u2019ve designed it so that users of the current system, as well as new users, can migrate to public clouds as soon (and in as frictionless a way) as possible.<\/p>\n<p>There is a whole lot more I could share about UIP, but in an effort to keep this blog short, I\u2019ll only talk about the security aspects\u00a0here.<\/p>\n<h3>Security-Driven Design<\/h3>\n<p>You might think that, being an internal-facing platform, UIP\u2019s design wouldn\u2019t be that concerned with security, but that\u2019s not the case. Trust is our #1 company value, from the inside out. UIP\u2019s design was guided by security considerations, from the beginning. Why didn\u2019t we do functionality design first and add in the security bits at the end? Because we didn\u2019t want to hit any last-minute surprises, or encounter security vulnerabilities requiring an architectural redesign, only to throw away all the previous design or implementation work. <em>Imagine you\u2019ve just built a new house, and then the city inspector comes and mandates that you change it from a split-level to a rambler, change the walls from wood to concrete, and replace the underground iron plumbing pipes with copper\u200a\u2014\u200athe disruption from undertaking a late redesign would be huge!<\/em> And this is particularly true for a data platform, which tends to be more vulnerable to security threats. Data-related security issues, such as data leaks or data tampering, can be detrimental to a company and its customers, with long-lasting effects. So, we not only started with security design, but also kept security the main theme throughout our development lifecycle. 
This practice aligns with the modern security development lifecycle (SDL)\u00a0method.<\/p>\n<p>Next, I\u2019ll select a few components of our security design and talk about each of\u00a0them.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1024\/1*3aTpPvux-QutOQCieOIj1Q.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<h3>Data Classification<\/h3>\n<p>One lesson learned is that data classification has a huge ripple effect on our architectural design, so it required consideration from early on. Different data sensitivity levels mean different compliance requirements and legal liabilities, which affected our technical choices for authentication and authorization mechanisms, data sharing restrictions, data recovery strategy, and even the way our infra engineers can access the platform and troubleshoot the hosts. To revisit our earlier building analogy,<em> the construction of a storage vault would be completely different depending on whether it\u2019s for a park, a bank, or an\u00a0armory.<\/em><\/p>\n<p>We did reviews with legal and security teams to classify the data to be stored in UIP. During data ingestion, we combine many techniques, such as schema checks, anonymization, compliance scans, incident handling, and more, in order to meet the requirements of the target data classes. In addition, UIP follows the design principle of \u201czero-trust infrastructure\u201d to protect against accidental data leaks by internal users and services as much as against external ones. More details on some of the data protection mechanisms are in the sections\u00a0below.<\/p>\n<h3>Data Encryption<\/h3>\n<p>Here I\u2019ll focus on at-rest encryption (more on in-transit encryption later). Encryption encodes information to obscure its meaning, encrypting data when it\u2019s stored and decrypting it when it\u2019s retrieved. 
Encryption methods use cryptography and a key management system (KMS) to add an extra layer of security on top of the identity and access management (IAM) system, so that even if intruders somehow gain access to the data, they will not be able to decode it. A real-world analogy would be safe deposit boxes at a bank; <em>a bank puts safe deposit boxes inside a vault, which is locked, and each individual safe deposit box also has its own\u00a0key.<\/em><\/p>\n<p>Since UIP resides in public clouds, data encryption is particularly important to us. There are multiple levels of encryption options to choose from: at the lowest level, you rely on cloud vendors to transparently manage encryption\/decryption for you, and to manage the cryptographic keys on your behalf. This is to say you trust the cloud vendor not to give the keys to anyone, including their own employees. At the middle level, you still rely on the cloud vendors to manage encryption\/decryption for you, but you supply and control the keys yourself rather than letting the cloud vendors manage them. The tradeoff is that you need to manage key rotation and lifecycle on your own. The strictest level is client-side (as opposed to server-side) encryption, which means you encrypt the data on-premises even before it enters the cloud, so there is no way the cloud vendors can decrypt your\u00a0data.<\/p>\n<p>There are more options but, in general, there are tradeoffs between the strictness level and engineering\/operational costs, as well as implications for query performance and query quota. 
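To make the three levels concrete, here is a minimal sketch of how the choice could map to the request options of an S3-style object store. This is an illustrative assumption, not UIP\u2019s actual code: the level names and the kms_key_arn parameter are hypothetical, though ServerSideEncryption and SSEKMSKeyId are the standard S3 put_object options.

```python
# Illustrative sketch only: mapping the three encryption levels to the
# extra arguments an S3-style client would send with each upload.
# Level names and the kms_key_arn parameter are hypothetical.
def encryption_args(level, kms_key_arn=None):
    if level == "vendor-managed":
        # Lowest level: the cloud vendor holds and rotates the keys (SSE-S3).
        return {"ServerSideEncryption": "AES256"}
    if level == "customer-managed":
        # Middle level: server-side encryption with a key we control (SSE-KMS).
        return {"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": kms_key_arn}
    if level == "client-side":
        # Strictest level: bytes are already ciphertext before upload,
        # so no server-side encryption arguments are needed.
        return {}
    raise ValueError(f"unknown encryption level: {level}")
```

With a client such as boto3, these arguments could be spread into a put_object call; the middle, customer-managed level is the balance discussed next.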
A best practice is to adopt the middle level by using server-side encryption with dedicated customer-managed keys, achieving a good balance between security requirements and engineering implications.<\/p>\n<h3>Network Security<\/h3>\n<p>There are common design principles affecting network security, and I\u2019ll pick a few that are most relevant to a data platform.<\/p>\n<ol>\n<li><strong>Disable outbound access to the internet by default<\/strong>, since we are an internal-facing platform and don\u2019t want data egress. But we allowlist certain ports provided by cloud vendors in order to use their services.<\/li>\n<li><strong>Apply network segmentation using subnets<\/strong>, so that even if intruders break into a low-level subnet, they still won\u2019t be able to access the more restricted subnets. UIP uses public subnets to place load balancers for receiving user requests, and uses private subnets to place big data computing resources.<\/li>\n<li><strong>Use security groups for finer-grained access control.<\/strong> For example, within a single big data computing subnet, we have different clusters for Spark, Presto, Airflow, Notebooks, etc.; each cluster may have different groups of nodes, such as controller nodes and worker nodes. We place the clusters and nodes into different security groups, so we can configure the minimally-allowed inbound rules for each group separately.<\/li>\n<li><strong>Implement in-transit encryption<\/strong> for data exchanged over the wire. This is used together with data encryption at rest in order to achieve end-to-end data encryption. In-transit encryption makes sure data is not sent in clear text, so it\u2019s difficult for intruders to intercept the communications and steal the data. <em>An analogy would be to use sealed envelopes instead of postcards to communicate. 
<\/em>UIP uses TLS with private certificates, so the hosts can authenticate and securely talk to each\u00a0other.<\/li>\n<\/ol>\n<h3>Kerberos Authentication<\/h3>\n<p>UIP uses the Salesforce single sign-on (SSO) infrastructure to authenticate users. When a user uses a service, it generally triggers more services under the hood to finish the job. This requires service-to-service authentication, so that services can trust each other and avoid sending data to forged identities from intruders. Some services provided by UIP are from the Hadoop ecosystem, such as Hive, Hive Metastore, and Spark. They use <a href=\"https:\/\/web.mit.edu\/kerberos\/\">Kerberos<\/a> as the standard authentication method. So, we store these services\u2019 principals in a Kerberos key distribution center (KDC) and <a href=\"https:\/\/web.ornl.gov\/~romeja\/HowToKerb.html\">kerberize<\/a> the corresponding clusters. One problem we encountered was that we wanted different users on a cluster to run on behalf of their own identities, not the shared service principal, to achieve role-based access control (more on that later) and user auditing. So we enabled user impersonation: services such as notebooks pass the Kerberos authentication when talking to Spark, but run queries in the name of the individual user who started the notebook. <em>An analogy is, you pass the airport boarding gate by swiping your boarding pass, which is a temporary ticket granted in the name of your\u00a0ID.<\/em><\/p>\n<h3>Data Access<\/h3>\n<p>Data is the single most important asset of any data platform, including UIP. Data access, or authorizing who can access what data, is a critical piece of security design. The challenge we faced is this: UIP has a massive amount of data, connects to many tools hosted in the cloud, on-premises, or by other vendors, and serves many internal users from different product lines and of different clearance levels. 
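As a sketch of the impersonation setup described above, a notebook service might assemble its spark-submit command like this. The --proxy-user flag is Spark\u2019s stock impersonation mechanism; the function, user, and file names are hypothetical:

```python
# Sketch of Kerberos impersonation: the notebook service authenticates to
# the cluster with its own Kerberos ticket (the shared service principal),
# and --proxy-user makes the job run as the individual notebook user, so
# access control and audit logs see the real identity. Names are hypothetical.
def build_spark_submit(notebook_user, app="query_job.py"):
    return [
        "spark-submit",
        "--master", "yarn",
        "--proxy-user", notebook_user,  # run on behalf of this user
        app,
    ]
```

On the Hadoop side, this also requires the service principal to be allowlisted as a proxy user (the hadoop.proxyuser.* settings in core-site.xml).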
We need a unified and coherent data permissions control strategy, so that no user and no tool can access a dataset they\u2019re not supposed to. <em>To take an analogy, it\u2019s like a hotel key card system that needs to decide who can access which rooms and facilities, whether guest, maid, or staff. <\/em>Therefore, we adopted several design principles:<\/p>\n<ol>\n<li><strong>Role-based access control<\/strong> (RBAC). We assign a user to one or more LDAP groups. Each group is typically (but not necessarily) a subset of members from a team, such as \u201cteamX_general,\u201d \u201cteamX_restricted,\u201d \u201cteamX_contractors,\u201d and so on. We map the groups to roles, and define policies controlling which datasets, tables, and bucket folders each role can access. We create these groups, roles, and policies according to user requests, and rely on group admins to self-service member management from then on. At this time, we\u2019re also considering finer-grained access controls via attributes (a.k.a. attribute-based access control, or ABAC), such as user attributes, data tags, environmental variables, and so on. Both open source communities and public cloud vendors provide related services; we have also developed our own customized solutions for more flexibility, and plan to open source them\u00a0later.<\/li>\n<li><strong>Least privilege.<\/strong> For example, through RBAC, users only get the permissions they explicitly need, and don\u2019t have access to datasets they\u2019re not authorized for. For another example, not every service needs to write to all the buckets and databases; we can explicitly reduce the scope of resources and actions for each service via policies.<\/li>\n<li><strong>Data agility.<\/strong> Managing least privilege is a never-ending job; sometimes it can require cumbersome restrictions and extra approval requests and become counterproductive. 
We don\u2019t want our users to feel frustrated that their hands are tied when they need to access a given dataset. Therefore, we need a balance between least privilege and data agility considerations, which sometimes requires creative design. For example, we provide a sandbox space for each team and grant them higher privileges within that sandbox. Users can test our platform and play with sanitized, temporary data in the sandbox, refining their jobs before promoting them to production spaces.<\/li>\n<\/ol>\n<h3>Multitenancy<\/h3>\n<p>As a data platform, we provide users with access to various big data clusters such as Presto, Spark, and Airflow. One decision we make for each type of cluster is whether it\u2019s single-tenant or multi-tenant. This applies to other types of resources as well, but decisions at the cluster level are the most important. Single tenancy means we launch many clusters, one for each team of users; multitenancy means we launch a cluster that serves multiple teams. The benefits of multi-tenant clusters\u00a0include:<\/p>\n<ol>\n<li>A shared cluster tends to cost less than multiple dedicated clusters,\u00a0and<\/li>\n<li>Fewer clusters mean less operational overhead to manage, deploy, patch, and support\u00a0them.<\/li>\n<\/ol>\n<p>The benefits of single-tenant clusters\u00a0include:<\/p>\n<ol>\n<li>Each team\u2019s data is securely isolated from\u00a0others,<\/li>\n<li>Teams won\u2019t compete for cluster resources,\u00a0and<\/li>\n<li>A team can have certain customizations in its\u00a0cluster.<\/li>\n<\/ol>\n<p>Each model\u2019s drawbacks are the inverse of the other\u2019s benefits.<em> To make an analogy, an apartment building can choose to install a washer\/dryer within each unit, or one big washer\/dryer in the lobby shared by all\u00a0units.<\/em><\/p>\n<p>For UIP, decisions are made independently for each type of cluster, and, so far, the majority of our clusters are multitenant. 
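For shared clusters, per-tenant resource quotas are one way to keep tenants from competing with each other. A minimal admission-check sketch follows; the tenant names and limits are made up, and a real deployment would use mechanisms such as Kubernetes namespace quotas or YARN resource queues rather than this helper:

```python
# Illustrative sketch: per-tenant resource quotas on a shared cluster,
# similar in spirit to Kubernetes namespace quotas or resource queues.
# Tenant names and limits are hypothetical.
QUOTAS = {
    "teamX": {"cpu": 64, "memory_gib": 256},
    "teamY": {"cpu": 32, "memory_gib": 128},
}

def admits(tenant, in_use, request):
    """Admit the request only if the tenant stays within its quota."""
    limits = QUOTAS.get(tenant)
    if limits is None:
        return False  # unknown tenants get nothing by default (least privilege)
    return all(in_use.get(r, 0) + request.get(r, 0) <= limit
               for r, limit in limits.items())
```

For example, a teamX job asking for 4 CPUs is admitted while 60 are in use, but rejected once usage would exceed the 64-CPU limit.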
The primary motivations are to optimize for cost and operations; as for the concerns regarding multi-tenancy:<\/p>\n<ol>\n<li>Data isolation is achieved via\u00a0RBAC,<\/li>\n<li>Cluster resources can be given a quota for each tenant based on Kubernetes namespaces or resource queues,\u00a0and<\/li>\n<li>We can always launch a dedicated, customized cluster for selected tenants if there\u2019s a strong business\u00a0case.<\/li>\n<\/ol>\n<h3>Data Environment<\/h3>\n<p>As is common practice, we have separate dev, staging, and production environments. As a data platform, we should store real data assets in the production environment only, not in dev or staging. This means using synthetic data to test our data pipelines and computing services before they are shipped to production. <em>An analogy is that a home seller would fill the house with staging furniture to show how the layout could work, but would not fill it with real treasures<\/em>. One challenge of using synthetic data is that we won\u2019t know the actual user-experienced performance prior to deployment. To mitigate that, we first did some lightweight smoke tests with synthetic data in the dev and staging environments to better understand the performance, such as query time breakdown and performance impacts with different encryption options. Next, we initially released to production only for a selected team of pilot users, so they could perform user acceptance tests by running real-world stress queries and concurrent queries, which greatly helped us identify performance bottlenecks and other issues, so we could address them before opening the platform to a broader set of\u00a0users.<\/p>\n<h3>Third-Party Licenses and\u00a0Security<\/h3>\n<p>In this open-source era, many big data services are based on open source. But having software under different licenses can result in different legal implications and restrictions regarding distribution, contribution, etc., so you can\u2019t just grab whatever is \u201cfree\u201d online. 
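One way to enforce this is to gate any install on an internally reviewed package index rather than the public internet. A minimal sketch of that gate follows; the index contents, package names, and versions here are hypothetical:

```python
# Illustrative sketch: gate dynamic package installs against an internal,
# security- and license-reviewed index instead of the public internet.
# The index contents are hypothetical.
REVIEWED_INDEX = {
    "pyspark": {"3.0.1", "3.1.1"},
    "pandas": {"1.1.5"},
}

def approved(package, version):
    """Allow an install only if this exact package/version was reviewed."""
    return version in REVIEWED_INDEX.get(package, set())
```

An unreviewed package or version is rejected, which would trigger a review request instead of a silent install from the public repo.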
Apache, MIT, and BSD licenses may have fewer restrictions than some other licenses, but, regardless, we still did thorough security scans and license reviews with our 3rd party review team before we included any software in our data platform. We also do periodic reviews of already-included software, in case any of it changes its license in the future. Even more subtle than services are dynamically installed libraries; for example, data scientists who use notebook services often need to dynamically try out new packages and decide whether to keep them, and it\u2019s not a good idea for them to freely fetch from the public internet for two\u00a0reasons:<\/p>\n<ol>\n<li>The license concerns mentioned earlier,\u00a0and<\/li>\n<li>Packages from the internet might be harmful. There have been many cases where the official Python package repo PyPI contained malicious modules, such as <a href=\"https:\/\/arstechnica.com\/information-technology\/2017\/09\/devs-unknowingly-use-malicious-modules-put-into-official-python-repository\/\">this\u00a0example<\/a>.<\/li>\n<\/ol>\n<p>So instead, we maintain our own internal repo of packages that are scanned and reviewed. For common libraries such as PySpark, we bake them into our Docker images. When users ask for a new package to be added to the repo, we do a review first.<em> An analogy is, before we bring any delivered package home, we would check whether it\u2019s legit or suspicious at the doorstep<\/em>.<\/p>\n<h3>Conclusion<\/h3>\n<p>In this blog, I have shared our experience and learnings about some security design aspects of building a data intelligence platform at Salesforce. I hope this gives you some ideas, and we welcome any comments or feedback! Lastly, it\u2019s a learning journey, as we are still in the process of building UIP, and there are many more topics that I\u2019ll have to share later in separate blogs. 
Stay\u00a0tuned!<\/p>\n<p><em>Thanks to Trish Fuzesy, Laura Lindeman, George Hill, Loren Taylor, Threat Intelligence Researchers, Kurt Brown et al from Salesforce for proofreading. Thanks to the UIP team and Salesforce infrastructure teams for their amazing contributions!<\/em><\/p>\n<hr>\n<p><a href=\"https:\/\/engineering.salesforce.com\/building-a-secured-data-intelligence-platform-ba85411a0c1b\">Building a Secured Data Intelligence Platform<\/a> was originally published in <a href=\"https:\/\/engineering.salesforce.com\/\">Salesforce Engineering<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Salesforce Unified Intelligence Platform (UIP) team is building a shared, central, internal data intelligence platform. Designed to drive business insights, UIP helps improve user experience, product quality, and operations. At Salesforce, Trust is our number one company value and building in robust security is a key component of our platform development. 
In this blog,&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2021\/02\/02\/building-a-secured-data-intelligence-platform\/\">Continue reading <span class=\"screen-reader-text\">Building a Secured Data Intelligence Platform<\/span><\/a><\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-224","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":229,"url":"https:\/\/fde.cat\/index.php\/2021\/02\/02\/ml-lake-building-salesforces-data-platform-for-machine-learning\/","url_meta":{"origin":224,"position":0},"title":"ML Lake: Building Salesforce\u2019s Data Platform for Machine Learning","date":"February 2, 2021","format":false,"excerpt":"Salesforce uses machine learning to improve every aspect of its product suite. With the help of Salesforce Einstein, companies are improving productivity and accelerating key decision-making. Data is a critical component of all machine learning applications and Salesforce is no exception. In this post I will share some unique challenges\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":834,"url":"https:\/\/fde.cat\/index.php\/2024\/03\/06\/how-the-new-einstein-1-platform-manages-massive-data-and-ai-workloads-at-scale\/","url_meta":{"origin":224,"position":1},"title":"How the New Einstein 1 Platform Manages Massive Data and AI Workloads at Scale","date":"March 6, 2024","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we feature Leo Tran, Chief Architect of Platform Engineering at Salesforce. With over 15 years of engineering leadership experience, Leo is instrumental in developing the Einstein 1 Platform. 
This platform integrates generative AI, data management, CRM capabilities, and trusted systems to provide businesses with\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":754,"url":"https:\/\/fde.cat\/index.php\/2023\/08\/29\/data-enrichment-and-automation-helping-salesforce-security-overcome-the-threat-identification-challenge\/","url_meta":{"origin":224,"position":2},"title":"Data Enrichment and Automation: Helping Salesforce Security Overcome the Threat Identification Challenge","date":"August 29, 2023","format":false,"excerpt":"By Matt Saunders and Scott Nyberg In our \u201cEngineering Energizers\u201d Q&A series, we examine the professional life experiences that have shaped Salesforce Engineering leaders. Meet Matt Saunders, a Principal Member of the Technical Staff at Salesforce, supporting the Detection and Response Machine Learning team. In his role, Matt focuses on\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":790,"url":"https:\/\/fde.cat\/index.php\/2023\/08\/29\/data-enrichment-and-automation-helping-salesforce-security-overcome-the-threat-identification-challenge-2\/","url_meta":{"origin":224,"position":3},"title":"Data Enrichment and Automation: Helping Salesforce Security Overcome the Threat Identification Challenge","date":"August 29, 2023","format":false,"excerpt":"By Matt Saunders and Scott Nyberg In our \u201cEngineering Energizers\u201d Q&A series, we examine the professional life experiences that have shaped Salesforce Engineering leaders. Meet Matt Saunders, a Principal Member of the Technical Staff at Salesforce, supporting the Detection and Response Machine Learning team. 
In his role, Matt focuses on\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":840,"url":"https:\/\/fde.cat\/index.php\/2024\/03\/20\/aiops-engineering-secrets-revealed-how-ai-and-automation-slash-thousands-of-manual-hours-annually\/","url_meta":{"origin":224,"position":4},"title":"AIOps Engineering Secrets Revealed: How AI and Automation Slash Thousands of Manual Hours Annually","date":"March 20, 2024","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we explore the remarkable journeys of engineering leaders who have made significant contributions in their respective fields. Today, we meet Sravanthi Konduru, a Lead Member of the Technical Staff for Salesforce Engineering, who helps drive the development of the Warden AIOps platform. Explore how\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":828,"url":"https:\/\/fde.cat\/index.php\/2024\/02\/20\/unlocking-hyperforce-migration-innovative-solutions-for-a-smooth-transition-to-the-cloud\/","url_meta":{"origin":224,"position":5},"title":"Unlocking Hyperforce Migration: Innovative Solutions for a Smooth Transition to the Cloud","date":"February 20, 2024","format":false,"excerpt":"In our \u201cEngineering Energizers\u201d Q&A series, we delve into the experiences and expertise of Salesforce Engineering leaders. Today, we\u2019re meeting Mahamadou Sylla, a Senior Member of the Technical Staff at Salesforce Engineering. 
Mahamadou is a key member of our Hyperforce\u2019s Bill of Materials (BOM) team, which assists internal teams in\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/224","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=224"}],"version-history":[{"count":1,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/224\/revisions"}],"predecessor-version":[{"id":240,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/224\/revisions\/240"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=224"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=224"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=224"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}