{"id":284,"date":"2021-08-31T14:40:23","date_gmt":"2021-08-31T14:40:23","guid":{"rendered":"https:\/\/fde.cat\/?p=284"},"modified":"2021-08-31T14:40:23","modified_gmt":"2021-08-31T14:40:23","slug":"hadoop-hbase-on-kubernetes-and-public-cloud-part-i","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/hadoop-hbase-on-kubernetes-and-public-cloud-part-i\/","title":{"rendered":"Hadoop\/HBase on Kubernetes and Public Cloud (Part I)"},"content":{"rendered":"<p><em>Authors: Dhiraj Hegde, Ashutosh Parekh, and Prashant\u00a0Murthy<\/em><\/p>\n<p>At Salesforce, we run a large number of HBase and HDFS clusters in our own data centers. More recently, we have started <a href=\"https:\/\/www.salesforce.com\/news\/press-releases\/2020\/12\/02\/introducing-salesforce-hyperforce\/\">deploying our clusters on Public Cloud infrastructure<\/a> to take advantage of the on-demand scalability available there. As part of this foray onto the public cloud, we wanted to fundamentally rethink how we deployed and managed our HBase clusters. This post outlines how we ended up using Kubernetes and the challenges we had to overcome in this transition.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/1024\/1*HfrPQrzdy1RA41pG3pPp2A.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<p>For many years Salesforce has been running HBase on a static set of bare metal hosts in our data centers. Operating system updates on these hosts were managed by Puppet, and HBase\/Hadoop deployments were managed using <a href=\"https:\/\/ambari.apache.org\/\">Ambari<\/a>. These tools were geared towards mutable infrastructure where you modified binaries and configuration \u201cin place\u201d on the hosts. Such upgrade processes can result in partial changes to hosts when failures occur, resulting in a more complicated recovery process. 
While robust idempotent deployment mechanisms can overcome such issues, the more common issue seen in such environments is a temptation for engineers to apply manual fixes for urgent issues. These fixes are often forgotten, resulting in lingering config\u00a0drifts.<\/p>\n<p>In public cloud, Virtual Machines (VMs) and Containers provided us with the opportunity to embrace a more immutable form of deployment where the set of binaries and configuration are part of the VM or container image itself. If one tried to modify the image with a local change, it would be reset to the original image the next time the Container or VM is restarted. This kind of immutable environment enforces good engineering discipline.<\/p>\n<p>In the virtualized environment of public cloud, we also found resource usage advantages like<\/p>\n<ol>\n<li>Software-driven infrastructure deployment that could be elastically adjusted based on\u00a0usage.<\/li>\n<li>Right-sizing of virtualized hosts for specific needs in terms of CPU, memory, and network bandwidth.<\/li>\n<\/ol>\n<h3>VMs vs Containers<\/h3>\n<p>We had to address the question of whether to deploy HBase\/HDFS directly on VMs or within containers. At first glance, there were several factors that seemed to favor\u00a0VMs:<\/p>\n<ol>\n<li>The dominant container management system Kubernetes started with stateless application management with subsequent enhancements to work with stateful applications like DBs being added as an afterthought. 
This did not inspire confidence.<\/li>\n<li>Containers brought with them their own approach to networking, which seemed to add unnecessary complexity.<\/li>\n<li>For an application that has been running primarily on bare metal hosts, containers seemed to suggest additional OS level indirection, whereas using a VM provided an environment more similar to the existing\u00a0one.<\/li>\n<li>On VMs, one could perhaps reuse existing (mutable) deployment tools for bare metal and gradually evolve to a truly immutable approach.<\/li>\n<\/ol>\n<p>However, as we dug deeper we found some of the pros and cons\u00a0changed.<\/p>\n<ol>\n<li>While Kubernetes added support for stateful applications much later, the additions were pretty well thought out. In addition, we found Kubernetes to be very extensible. Any limitations in the features could be overcome by making our own enhancements on top of Kubernetes APIs.<\/li>\n<li>Salesforce embraced an immutable approach for OS updates across all teams. Layering a mutating application deployment approach on top of this OS layer (with a plan of gradually making it immutable) made little sense. The binaries would have to be reinstalled every time the OS was\u00a0updated.<\/li>\n<li>Containers are very lightweight constructs (essentially a jailed process), and the OS level performance implications in using them turned out to be negligible.<\/li>\n<li>Container management systems might be prescriptive about how networking among containers works (as mentioned above in cons), but with Kubernetes, all major cloud providers had built plugins that made communication between containers and the outside world as seamless as that between VMs and the outside\u00a0world.<\/li>\n<li><strong><em>Kubernetes provides a powerful standard mechanism for application deployment and management across cloud providers<\/em><\/strong>.<\/li>\n<\/ol>\n<p>The last point is one of the more significant insights. 
Cloud infra deployment tools like Terraform supported management of disks and VMs across different cloud providers. However, each cloud provider resulted in very different manifests, as there was very little abstraction and reusability in the manifests. Kubernetes, with its opinionated approach to managing compute, storage, and networking of containers, provided a much more consistent deployment and management interface across cloud providers.<\/p>\n<h3>Kubernetes and Stateful applications<\/h3>\n<p>We will cover some Kubernetes concepts here that will help in understanding the rest of the\u00a0blog.<\/p>\n<h3>Pods as Containers<\/h3>\n<p>Kubernetes manages deployment of containers, monitors their health, and restarts them in case of application failure or relocates them to new hosts in case of host failures. Hosts in Kubernetes are called <strong>Nodes<\/strong>. In Kubernetes, a container is wrapped inside a construct called a <strong>Pod<\/strong>. A Pod allows multiple containers that need to be co-located on the same host to be deployed together. Typically, most applications have multiple supporting processes (log forwarders, cert\/key refreshers, etc.) so a Pod makes it convenient to wrap these containerized processes into a single deployable unit. Each Pod has a unique IP address associated with it, which basically allows it to behave like a pseudo host. All containers within a Pod share that IP address and can also use the loopback address (127.0.0.1) to communicate among themselves. This gives them the same environment that processes in a single host have. The main difference is that each container within the Pod has a distinct view of its file system and of the processes running inside it. 
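As a minimal sketch of such a Pod (the names and images below are hypothetical), an application container bundled with a log-forwarder sidecar might look like:

```yaml
# Hypothetical Pod bundling an application container with a sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: regionserver
spec:
  containers:
  - name: regionserver
    image: example.org/hbase-regionserver:1.0   # placeholder image
    ports:
    - containerPort: 16020
  - name: log-forwarder
    image: example.org/log-forwarder:1.0        # placeholder image
    # Shares the Pod's IP; can reach the app on 127.0.0.1:16020.
```

Both containers are scheduled onto the same node as one unit and share one Pod IP, while each keeps its own view of the file system and process table.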
The containers can still share storage within a Pod, but they would have to explicitly mount the same volume in each container of the Pod to do so.<\/p>\n<p>Since creating each Pod individually would be laborious for users, <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/\">higher level constructs called <strong>Workload Resources<\/strong> are provided<\/a>, which define a template for a Pod and the number of instances of the Pod needed. The rest is left to the controller of that workload resource, which automatically creates and manages the Pods in Kubernetes cluster.<\/p>\n<h3>Persistent Volumes (PVs) for\u00a0Storage<\/h3>\n<p>Kubernetes provides a simple abstraction called <strong><em>Persistent Volume (PV)<\/em><\/strong> to represent storage volumes. This storage can be local volumes on the nodes or network attached volumes like EBS volumes in AWS. For the purposes of this blog, we are only considering network attached volumes. When a Pod needs a volume, it specifies a <strong><em>PV Claim (PVC)<\/em><\/strong> in its Pod manifest that describes the type and size of storage desired. Kubernetes responds by creating the volume and represents it with a PV instance. The PV and PVC are mapped to each other in a 1-to-1 relationship called a binding. When the Pod is created on a host, the PVs bound to its PVCs are mounted on that node as a volume and hence made available to the containers. Containers within that Pod can then mount that volume into their file systems.<\/p>\n<p> A PV continues to be retained by Kubernetes as long as the PVC is present. The PV mounts wherever the PVC goes. In the diagram below, you can see that a Pod can be removed in one node and then later recreated in another node, and the PV will follow it as long as the Pod manifest refers to the same PVC. 
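A minimal sketch of this wiring (the names, image, and size below are hypothetical): a PVC describing the desired storage, and a Pod whose manifest refers to that claim.

```yaml
# Hypothetical PVC; Kubernetes binds it 1-to-1 to a provisioned PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datanode-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi                           # placeholder size
---
# A Pod that references the claim; the bound PV is mounted on
# whichever node this Pod lands on.
apiVersion: v1
kind: Pod
metadata:
  name: datanode
spec:
  containers:
  - name: datanode
    image: example.org/hdfs-datanode:1.0       # placeholder image
    volumeMounts:
    - name: data
      mountPath: /hadoop/dfs/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: datanode-data
```

If this Pod is deleted and recreated elsewhere with the same manifest, the same PV follows it, because the PVC (and hence the binding) survives.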
To recycle the PV, both the Pod and the PVC have to be\u00a0deleted.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/784\/1*OBIvEMmOskY77KnlE5lpnQ.jpeg?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<h3>Pod and\u00a0State<\/h3>\n<p>Pods in Kubernetes are inherently stateless; when they are updated or moved from one node to another, they are destroyed and recreated. Any persistent state should be kept in attached PVs. However, early workload resources in Kubernetes only created Pods with temporary and randomized names (like http-nmx8). So when a Pod is deleted and recreated, it gets a new name. This worked great for applications whose instances were totally stateless and could be placed behind a load balancer (called a Service in Kubernetes) as shown\u00a0below.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/604\/1*_S6zK7FiICmealYoRlZ-DA.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<p>The clients only need to know the virtual hostname (or VIP) of the load balancer; there is never a need to know the actual Pod\u2019s hostname. But when you look at HDFS and HBase, the above model does not apply. In these applications, each application instance (Pod) needs to be individually addressable by the clients directly. The client typically contacts the specific Pod expecting to find some specific data in its PV to read or modify. If the Pod changes its identity, then the client is forced to error out or refresh any cached metadata, which is disruptive to the system. In the HBase architecture diagram below you can see there are a number of components that are intertwined by this pattern of communication among them. 
Each arrow indicates a specific instance of an application talking to another instance of another application with a well-defined hostname.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/825\/1*WrFlWDhijAB0o-G19Vu6YA.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<p>For applications like HBase (and others like Cassandra, Redis, etc.), Kubernetes eventually introduced a workload resource called <a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/controllers\/statefulset\/\">StatefulSets<\/a>. This creates Pods with a well-defined name (and hostname). For example, if you define a StatefulSet named zookeeper with three instances, then the Pods get created with the names zookeeper-0, zookeeper-1 and zookeeper-2. The incrementing number in the names is called the ordinal number. If PVs are required, then PVCs with a similar ordinal-based naming convention are specified within each of these Pod definitions (for example, Pod zookeeper-0 can have a PVC named zk-data-0). Each PVC is bound to a different PV, and that PV will be mounted wherever the uniquely named Pod lands. So now we not only have state (a PV) but also state that is permanently associated with a Pod with a fixed hostname. The fixed hostname allows all the Pod\u2019s clients to cache information about it and locate the state associated with\u00a0it.<\/p>\n<p>As you might have guessed from the HBase diagram above, all key components were deployed as StatefulSets in our clusters. The components are tabulated below. They are classified as <strong><em>master<\/em><\/strong> components if they handle coordination, management, and metadata. 
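A minimal sketch of the zookeeper StatefulSet described above (the image, storage size, and headless Service name are assumptions; note that Kubernetes actually derives each generated PVC name from the template name plus the Pod name, e.g. zk-data-zookeeper-0):

```yaml
# Hypothetical StatefulSet yielding zookeeper-0, zookeeper-1, zookeeper-2.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper      # headless Service giving each Pod a stable DNS name
  replicas: 3                 # ordinals 0..2
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.6  # placeholder image
        volumeMounts:
        - name: zk-data
          mountPath: /data
  volumeClaimTemplates:       # one PVC per ordinal, bound to its own PV
  - metadata:
      name: zk-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi       # placeholder size
```

Because each ordinal keeps its name and its PVC across restarts and rescheduling, clients can safely cache a Pod's hostname and expect the same state behind it.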
The <strong><em>worker<\/em><\/strong> components are the ones carrying out the actual data processing.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/498\/1*jqsP4b4SEID89P_QLgvYog.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<h3>Availability Zones\u00a0(AZs)<\/h3>\n<p>In public cloud one can choose to run replicas of the components across fault domains to limit the impact of catastrophic failure in one domain. This usually translates to choosing to run your replicas across regions (geographically widely dispersed locations) or across availability zones (AZs), which are located within a region, but perhaps in separate buildings. Typically, latency between regions is too great for spreading replicas across them, so AZs with their low latency are the best choice for a database like HBase. We designed the component to run across three\u00a0AZs.<\/p>\n<h3>Spreading Across\u00a0AZs<\/h3>\n<p>Public cloud Kubernetes nodes have labels that identify the AZ and the region in which the node exists. This is leveraged while scheduling Pods on them. Two mechanisms were used in Kubernetes to guide the scheduling of\u00a0Pods:<\/p>\n<ul>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#affinity-and-anti-affinity\">Affinity and anti-affinity rules<\/a> defined as annotations in Pod manifest. One can define the rules as <em>preferred<\/em> or <em>required<\/em> depending on how strictly they need to be enforced during scheduling. The rules specify either to attract (affinity) a Pod to a given label or repel it (anti-affinity) from the label. 
The target label can be on nodes or other Pods, so Pod scheduling can get influenced by the labels that are on a node or labels that are on Pods already scheduled on that\u00a0node.<\/li>\n<li><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/scheduling-eviction\/assign-pod-node\/#nodeselector\">Node selector<\/a> allows a Pod to be scheduled only on a node or nodes that have a specific label. This is very much like the required affinity feature above, but uses much simpler\u00a0syntax.<\/li>\n<\/ul>\n<p>Using these mechanisms, the following rules were defined for the\u00a0Pods<\/p>\n<ol>\n<li>Require a Pod anti-affinity that prevents Pods of the same component being on the same node. This prevents failure of one node impacting multiple Pods of a component.<\/li>\n<li>Prefer distribution of Pods of a component across AZs using AZ label anti-affinity. This is a preference and not a requirement as there are only 3 AZs and most components had more Pods than\u00a0that.<\/li>\n<li>Use <em>nodeSelector<\/em> to run DataNode, RegionServer and Yarn NodeManager Pods on separate groups of nodes (more on this\u00a0below).<\/li>\n<\/ol>\n<p>In public cloud you can pick from nodes of different sizes (varying size of CPU and memory). You just have to define a node group with certain cpu\/memory sizing and all nodes in that group would share those characteristics. One could have gone with a single standard sized node for the whole cluster (a single node group) and let Kubernetes handle allocation of Pods to the nodes. However, there were a couple of factors to\u00a0consider<\/p>\n<ol>\n<li>We wanted worker components to have guaranteed network bandwidth for their tasks, and Kubernetes did not account for bandwidth needs while placing Pods on nodes, only CPU and memory needs. 
By scheduling worker Pods on dedicated nodes of a particular group you could provision bandwidth for them much more predictably.<\/li>\n<li>For Yarn NodeManagers, we wanted to be able to grow and shrink the number of nodes aggressively based on activity, but for DataNodes (and RegionServers, to a lesser extent) we wanted to be very cautious about shrinking node counts. Separate node groups allowed us to choose which components experienced more turbulence in node\u00a0counts.<\/li>\n<li>We wanted Pod replicas of a given component to be on separate nodes to reduce the impact of node failure and also to have predictable bandwidth on each node. But if this were combined with a standard node, then the number of standard nodes would increase as data in the cluster grows (even if it\u2019s cold data). This, however, would have resulted in wastage, as the DataNodes have relatively light CPU\/memory requirements. By putting the Pods in nodes that are right sized for that component, we ensure that a new node is created in a size that is needed by that component which is growing in usage and hence minimize\u00a0wastage.<\/li>\n<\/ol>\n<h3>Data replicas and\u00a0AZs<\/h3>\n<p>DataNodes hold multiple replicas of data (three replicas, typically) for high availability. It is important to make sure that these replicas are spread across fault domains (AZs in our case) so that failure in one AZ leaves the other replicas safe. It was also important for availability reasons to make sure that the software upgrade process does not upgrade more than one replica of the same data. HDFS has topology awareness which takes feedback from a script to understand where the DataNodes are located in terms of fault domains. This was typically used to ensure that replicas ended up in DataNodes on different racks in a data center. 
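A topology script receives DataNode addresses as arguments and prints one "rack" path per address. Treating each AZ as a rack, a minimal sketch might look like the following (the address-to-AZ mapping and AZ names are made up for illustration; a real script would consult cloud metadata rather than a hard-coded table):

```python
#!/usr/bin/env python3
"""Hypothetical HDFS topology script that maps DataNode addresses to
AZ-based "rack" paths, so HDFS spreads the three replicas across AZs."""
import sys

# Stand-in mapping; a real script would query the cloud provider's metadata.
NODE_TO_AZ = {
    "10.0.1.15": "us-west-2a",
    "10.0.2.23": "us-west-2b",
    "10.0.3.47": "us-west-2c",
}

def topology_path(address: str) -> str:
    # HDFS treats each returned path as a "rack"; unknown nodes get a default.
    az = NODE_TO_AZ.get(address, "default-az")
    return "/" + az

if __name__ == "__main__":
    # HDFS invokes the script with one or more addresses and
    # reads back one path per line, in order.
    for addr in sys.argv[1:]:
        print(topology_path(addr))
```

With every AZ presented as a distinct rack, HDFS's default block placement policy keeps the replicas in separate AZs.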
In public cloud, we implemented a script that provided the topology of DataNodes in terms of AZs and ensured that the three replicas were spread across three\u00a0AZs.<\/p>\n<p>We also defined three separate StatefulSets for DataNodes. Each StatefulSet was responsible for the Pods of a single AZ and used nodeSelector to ensure its Pods ran on nodes of that AZ. We did this so that we could be certain that, while doing software upgrades of the Pods of a particular StatefulSet, only one replica of the data is disrupted. The other two data replicas would be safely under the two other StatefulSets. The diagram below shows how all the components are spread out across\u00a0AZs.<\/p>\n<figure><img decoding=\"async\" alt=\"\" src=\"https:\/\/i0.wp.com\/cdn-images-1.medium.com\/max\/602\/1*6zybBIW5nmWk6oqccBMNwQ.png?w=750&#038;ssl=1\" data-recalc-dims=\"1\"><\/figure>\n<p>In this first part of the blog, we have covered an introduction to concepts in Kubernetes and Public Cloud that are relevant to stateful application management. We also covered how we leveraged features in Kubernetes and Hadoop\/HBase to build a highly available service. 
In the second part of the blog we will cover some of the shortcomings we ran into while using these technologies and how those were overcome.<\/p>\n<p><em>Thank you to Joel Swiatek, Aditya Auradkar, and Laura Lindeman for additional review of this\u00a0post!<\/em><\/p>\n<hr>\n<p><a href=\"https:\/\/engineering.salesforce.com\/hadoop-hbase-on-kubernetes-and-public-cloud-part-i-1a85a77c64ec\">Hadoop\/HBase on Kubernetes and Public Cloud (Part I)<\/a> was originally published in <a href=\"https:\/\/engineering.salesforce.com\/\">Salesforce Engineering<\/a> on Medium, where people are continuing the conversation by highlighting and responding to this story.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Authors: Dhiraj Hegde, Ashutosh Parekh, and Prashant\u00a0Murthy At Salesforce, we run a large number of HBase and HDFS clusters in our own data centers. More recently, we have started deploying our clusters on Public Cloud infrastructure to take advantage of the on-demand scalability available there. 
As part of this foray onto the public cloud, we&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2021\/08\/31\/hadoop-hbase-on-kubernetes-and-public-cloud-part-i\/\">Continue reading <span class=\"screen-reader-text\">Hadoop\/HBase on Kubernetes and Public Cloud (Part I)<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-284","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":285,"url":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/hadoop-hbase-on-kubernetes-and-public-cloud-part-ii\/","url_meta":{"origin":284,"position":0},"title":"Hadoop\/HBase on Kubernetes and Public Cloud (Part II)","date":"August 31, 2021","format":false,"excerpt":"The first part of this two part blog provided an introduction to concepts in Kubernetes and Public Cloud that are relevant to stateful application management. We also covered how Kubernetes and Hadoop features were leveraged to provide a highly available service. In this second part of the blog we will\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":810,"url":"https:\/\/fde.cat\/index.php\/2024\/01\/09\/implementing-salesforces-largest-database-upgrade-inside-the-migration-to-hbase-2\/","url_meta":{"origin":284,"position":1},"title":"Implementing Salesforce\u2019s Largest Database Upgrade: Inside the Migration to HBase 2","date":"January 9, 2024","format":false,"excerpt":"Written by Viraj Jasani and Andrew Purtell Data is the engine behind Salesforce operations, helping our customers make better decisions on a daily basis. 
The Big Data Storage (BDS) team, a key part of Salesforce\u2019s engineering organization, deploys arguably one of the largest distributed database production footprints. This infrastructure is\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":456,"url":"https:\/\/fde.cat\/index.php\/2021\/09\/01\/evolution-of-region-assignment-in-the-apache-hbase-architecture%e2%80%8a-%e2%80%8apart-1\/","url_meta":{"origin":284,"position":2},"title":"Evolution of Region Assignment in the Apache HBase Architecture\u200a\u2014\u200aPart 1","date":"September 1, 2021","format":false,"excerpt":"Evolution of Region Assignment in the Apache HBase Architecture\u200a\u2014\u200aPart\u00a01 Written by Viraj Jasani and Andrew\u00a0Purtell At Salesforce, we run a large number of Apache HBase clusters in our own data centers as well as in public cloud infrastructure. This post outlines some important design aspects of Apache HBase and how\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":457,"url":"https:\/\/fde.cat\/index.php\/2021\/09\/02\/evolution-of-region-assignment-in-the-apache-hbase-architecture%e2%80%8a-%e2%80%8apart-2\/","url_meta":{"origin":284,"position":3},"title":"Evolution of Region Assignment in the Apache HBase Architecture\u200a\u2014\u200aPart 2","date":"September 2, 2021","format":false,"excerpt":"Evolution of Region Assignment in the Apache HBase Architecture\u200a\u2014\u200aPart\u00a02 The first part of this two-part series of blog posts provided an introduction to some of the important design aspects of Apache HBase. We introduced the concept of the AssignmentManager and the importance of its role in the HBase architecture. 
In\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":485,"url":"https:\/\/fde.cat\/index.php\/2021\/10\/07\/evolution-of-region-assignment-in-the-apache-hbase-architecture%e2%80%8a-%e2%80%8apart-3\/","url_meta":{"origin":284,"position":4},"title":"Evolution of Region Assignment in the Apache HBase Architecture\u200a\u2014\u200aPart 3","date":"October 7, 2021","format":false,"excerpt":"Evolution of Region Assignment in the Apache HBase Architecture\u200a\u2014\u200aPart\u00a03 Authors: Viraj Jasani, Andrew Purtell, \u5f20\u94ce(Duo\u00a0Zhang) In the second part of this blog post series, we provided an overview of how the redesigned AssignmentManager in HBase 2 efficiently and reliably manages the process of region assignment. In this third entry in\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":294,"url":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/the-design-of-strongly-consistent-global-secondary-indexes-in-apache-phoenix%e2%80%8a-%e2%80%8apart-1\/","url_meta":{"origin":284,"position":5},"title":"The Design of Strongly Consistent Global Secondary Indexes in Apache Phoenix\u200a\u2014\u200aPart 1","date":"August 31, 2021","format":false,"excerpt":"The Design of Strongly Consistent Global Secondary Indexes in Apache Phoenix\u200a\u2014\u200aPart\u00a01Phoenix is a relational database with a SQL interface that uses HBase as its backing store. This combination allows it to leverage the flexibility and scalability of HBase, which is a distributed key-value store. 
Phoenix provides additional functionality on top\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/284","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=284"}],"version-history":[{"count":1,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/284\/revisions"}],"predecessor-version":[{"id":426,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/284\/revisions\/426"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=284"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=284"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=284"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}