{"id":295,"date":"2021-08-31T14:40:03","date_gmt":"2021-08-31T14:40:03","guid":{"rendered":"https:\/\/fde.cat\/?p=295"},"modified":"2021-08-31T14:40:03","modified_gmt":"2021-08-31T14:40:03","slug":"how-facebook-encodes-your-videos","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/how-facebook-encodes-your-videos\/","title":{"rendered":"How Facebook encodes your videos"},"content":{"rendered":"<p><span>People upload hundreds of millions of videos to Facebook every day. Making sure every video is delivered at the best quality \u2014 with the highest resolution and as little buffering as possible \u2014 means optimizing not only when and how our video codecs compress and decompress videos for viewing, but also which codecs are used for which videos. But the sheer volume of video content on Facebook also means finding ways to do this that are efficient and don\u2019t consume a ton of computing power and resources.<\/span><\/p>\n<p><span>To help with this, we employ a variety of codecs as well as adaptive bitrate streaming (<\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Adaptive_bitrate_streaming\"><span>ABR<\/span><\/a><span>), which improves the viewing experience and reduces buffering by choosing the best quality based on a viewer\u2019s network bandwidth. But while more advanced codecs like VP9 provide better compression performance over older codecs, like H264, they also consume more computing power. From a pure computing perspective, applying the most advanced codecs to every video uploaded to Facebook would be prohibitively inefficient. Which means there needs to be a way to prioritize which videos need to be encoded using more advanced codecs.<\/span><\/p>\n<p><span>Today, Facebook deals with its high demand for encoding high-quality video content by combining a benefit-cost model with a machine learning (ML) model that lets us prioritize advanced encoding for highly watched videos. 
By predicting which videos will be highly watched and encoding them first, we can reduce buffering, improve overall visual quality, and allow people on Facebook who may be limited by their data plans to watch more videos.<\/span><\/p>\n<p><span>But this task isn\u2019t as straightforward as allowing content from the most popular uploaders or those with the most friends or followers to jump to the front of the line. There are several factors that have to be taken into consideration so that we can provide the best video experience for people on Facebook while also ensuring that content creators still have their content encoded fairly on the platform.<\/span><\/p>\n<h2><span>How we used to encode video on Facebook<\/span><\/h2>\n<p><span>Traditionally, once a video is uploaded to Facebook, the process to enable ABR kicks in and the original video is quickly re-encoded into multiple resolutions (e.g., 360p, 480p, 720p, 1080p). Once the encodings are made, Facebook\u2019s video encoding system tries to further improve the viewing experience by using more advanced codecs, such as VP9, or more expensive \u201crecipes\u201d (a video industry term for fine-tuning transcoding parameters), such as the H264 veryslow preset, to compress the video file as much as possible. Different transcoding technologies (using different codec types or codec parameters) have different trade-offs between compression efficiency, visual quality, and how much computing power is needed.<\/span><\/p>\n<p><span>The question of how to order jobs in a way that maximizes the overall experience for everyone has long been top of mind. Facebook has a specialized encoding compute pool and dispatcher. It accepts encoding job requests that have a priority value attached to them and puts them into a priority queue where higher-priority encoding tasks are processed first. The video encoding system\u2019s job is then to assign the right priority to each task. 
It did so by following a list of simple, hard-coded rules. Encoding tasks could be assigned a priority based on a number of factors, including whether a video is a licensed music video, whether the video is for a product, and how many friends or followers the video\u2019s owner has.<\/span><\/p>\n<p><span>But there were disadvantages to this approach. As new video codecs became available, the number of rules that needed to be maintained and tweaked kept expanding. Since different codecs and recipes have different computing requirements, visual quality, and compression performance trade-offs, it is impossible to fully optimize the end user experience with a coarse-grained set of rules.<\/span><\/p>\n<p><span>And, perhaps most important, Facebook\u2019s video consumption pattern is extremely skewed: videos are uploaded by people and pages whose friend or follower counts span a wide spectrum. Compare the Facebook page of a big company like Disney with that of a vlogger who might have 200 followers. Both can upload a video at the same time, but Disney\u2019s video is likely to get more watch time. However, any video can go viral even if the uploader has a small following. The challenge is to support content creators of all sizes, not just those with the largest audiences, while also acknowledging the reality that a large audience likely means more views and longer watch times.<\/span><\/p>\n<h2><span>Enter the Benefit-Cost model<\/span><\/h2>\n<p><span>The new model still uses a set of quick initial H264 ABR encodings to ensure that all uploaded videos are encoded at good quality as soon as possible. What\u2019s changed, however, is how we calculate the priority of encoding jobs after a video is published.<\/span><\/p>\n<p><span>The Benefit-Cost model grew out of a few fundamental observations:<\/span><\/p>\n<ol>\n<li><span>A video consumes computing resources only the first time it is encoded. 
Once it has been encoded, the stored encoding can be delivered as many times as requested without requiring additional compute resources.<\/span><\/li>\n<li><span>A relatively small percentage (roughly one-third) of all videos on Facebook generates the majority of overall watch time.<\/span><\/li>\n<li><span>Facebook\u2019s data centers have limited amounts of energy to power compute resources.<\/span><\/li>\n<li><span>We get the most bang for our buck, so to speak, in terms of maximizing everyone\u2019s video experience within the available power constraints, by applying more compute-intensive \u201crecipes\u201d and advanced codecs to videos that are watched the most.<\/span><\/li>\n<\/ol>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-large wp-image-17428\" src=\"https:\/\/i0.wp.com\/engineering.fb.com\/wp-content\/uploads\/2021\/04\/Video-encoding-final-hero.jpg?resize=750%2C422&#038;ssl=1\" alt=\"\" width=\"750\" height=\"422\"  data-recalc-dims=\"1\"><\/p>\n<p><span>Based on these observations, we came up with the following definitions for benefit, cost, and priority:<\/span><\/p>\n<ol>\n<li><span><strong>Benefit<\/strong> = (relative compression efficiency of the encoding family at fixed quality) * (effective predicted watch time)<\/span><\/li>\n<li><span><strong>Cost<\/strong> = normalized compute cost of the missing encodings in the family<\/span><\/li>\n<li><span><strong>Priority<\/strong> = Benefit\/Cost<\/span><\/li>\n<\/ol>\n<p><b>Relative compression efficiency of the encoding family at fixed quality:<\/b><span> We measure benefit in terms of the encoding family\u2019s compression efficiency. \u201cEncoding family\u201d refers to the set of encoding files that can be delivered together. For example, H264 360p, 480p, 720p, and 1080p encoding lanes make up one family, and VP9 360p, 480p, 720p, and 1080p make up another family. 
One challenge here is comparing compression efficiency between different families at the same visual quality.<\/span><\/p>\n<p><span>To understand this, you first have to understand a metric we\u2019ve developed called Minutes of Video at High Quality per GB datapack (MVHQ). MVHQ links compression efficiency directly to a question people often ask about their internet allowance: Given 1 GB of data, how many <\/span><span>minutes of high-quality video can we stream?<\/span><\/p>\n<p><span>Mathematically, MVHQ can be understood as:<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-17389\" src=\"https:\/\/i0.wp.com\/engineering.fb.com\/wp-content\/uploads\/2021\/03\/MVHQ-Equation.png?resize=750%2C65&#038;ssl=1\" alt=\"\" width=\"750\" height=\"65\"  data-recalc-dims=\"1\"><\/p>\n<p><span>For example, let\u2019s say we have a video where the MVHQ is 153 minutes using the H264 fast preset, 170 minutes using the H264 slow preset, and 200 minutes using VP9. This means delivering the video using VP9 could extend watch time using 1 GB of data by 47 minutes (200-153) at a high visual quality threshold compared to the H264 fast preset. When calculating the benefit value of this particular video, we use H264 fast as the baseline. We assign 1.0 to H264 fast, 1.1 (170\/153) to H264 slow, and 1.3 (200\/153) to VP9.<\/span><\/p>\n<p><span>The actual MVHQ can be calculated only once an encoding is produced, but we need the value before encodings are available, so we use historical data to estimate the MVHQ for each of the encoding families of a given video.<\/span><\/p>\n<p><b>Effective predicted watch time:<\/b><span> As described further in the section below, we have a sophisticated ML model that predicts how long a video is going to be watched in the near future across all of its audience. Once we have the predicted watch time at the video level, we estimate how effectively an encoding family can be applied to a video. 
This is to account for the fact that not all people on Facebook have the latest devices, which can play newer codecs.<\/span><\/p>\n<p><span>For example, about 20 percent of video consumption happens on devices that cannot play videos encoded with VP9. So if the predicted watch time for a video is 100 hours, the effective predicted watch time using the widely adopted H264 codec is 100 hours, while the effective predicted watch time of the VP9 encodings is only 80 hours.<\/span><\/p>\n<p><b>Normalized compute cost of the missing encodings in the family:<\/b><span> This is the amount of logical computing cycles we need to make the encoding family deliverable. An encoding family requires a minimum set of resolutions to be made available before we can deliver a video. For example, for a particular video, the VP9 family may require at least four resolutions. But some encodings take longer than others, meaning not all of the resolutions for a video can be made available at the same time.<\/span><\/p>\n<p><span>As an example, let\u2019s say Video A is missing all four lanes in the VP9 family. We can sum up the estimated CPU usage of all four lanes and assign the same normalized cost to all four jobs.\u00a0<\/span><\/p>\n<p><span>If we are only missing two out of four lanes, as shown in Video B, the compute cost is the sum of the cost of producing the remaining two encodings. The same cost is applied to both jobs. Since the priority is benefit divided by cost, this has the effect of a task\u2019s priority becoming more urgent as more lanes become available. Encoding lanes do not provide any value until they are deliverable, so it is important to get to a complete lane as quickly as possible. 
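<\/span><\/p>
<p><span>As a rough sketch, the Benefit\/Cost priority described above can be computed as follows. The relative efficiency (1.3) and effective watch time (80 hours) reuse the VP9 figures from the text, while the per-lane costs and the function itself are illustrative assumptions, not Facebook\u2019s actual implementation:<\/span><\/p>

```python
# Illustrative sketch of the Benefit/Cost prioritization (hypothetical numbers).

def priority(relative_efficiency, effective_watch_time_hours, missing_lane_costs):
    # Benefit = relative compression efficiency of the encoding family
    #           at fixed quality * effective predicted watch time.
    benefit = relative_efficiency * effective_watch_time_hours
    # Cost = normalized compute cost of ALL encodings still missing from
    # the family; the family delivers no value until every lane exists.
    cost = sum(missing_lane_costs)
    return benefit / cost

# A VP9 family (efficiency 1.3, effective watch time 80 hours) with all
# four lanes missing, versus the same video after two lanes have finished:
before = priority(1.3, 80.0, [10.0, 20.0, 40.0, 80.0])
after = priority(1.3, 80.0, [40.0, 80.0])
# The remaining cost shrinks, so the remaining jobs become more urgent.
assert after > before
```

<p><span>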
For example, having one video with all of its VP9 lanes adds more value than 10 videos with incomplete (and therefore undeliverable) VP9 lanes.<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17424 size-full\" src=\"https:\/\/i0.wp.com\/engineering.fb.com\/wp-content\/uploads\/2021\/04\/Encoding-video_Video-B.jpg?resize=750%2C422&#038;ssl=1\" alt=\"Video encoding model encoding lanes\" width=\"750\" height=\"422\"  data-recalc-dims=\"1\"><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-large wp-image-17431\" src=\"https:\/\/i0.wp.com\/engineering.fb.com\/wp-content\/uploads\/2021\/04\/Encoding-Video_Graphic2_Last.jpg?resize=750%2C422&#038;ssl=1\" alt=\"\" width=\"750\" height=\"422\"  data-recalc-dims=\"1\"><\/p>\n<h2><span>Predicting watch time with ML<\/span><\/h2>\n<p><span>With a new benefit-cost model in place to tell us how certain videos should be encoded, the next piece of the puzzle is determining which videos should be prioritized for encoding. That\u2019s where we now use ML to predict which videos will be watched the most and thus should be prioritized for advanced encodings.<\/span><\/p>\n<p><span>Our model looks at a number of factors to predict how much watch time a video will get within the next hour. 
It does this by looking at the video uploader\u2019s friend or follower count and the average watch time of their previously uploaded videos, as well as metadata from the video itself including its duration, width, height, privacy status, post type (Live, Stories, Watch, etc.), how old it is, and its past popularity on the platform.<\/span><\/p>\n<p><span>But using all this data to make decisions comes with several built-in challenges:<\/span><\/p>\n<p><b>Watch time has high variance and has a very long-tail skewed nature.<\/b><span> Even when we focus on predicting the next hour of watch time, a video\u2019s watch time can range anywhere from zero to over 50,000 hours depending on its content, who uploaded it, and the video\u2019s privacy settings. The model must be able to tell not only whether the video will be popular, but also how popular.<\/span><\/p>\n<p><b>The best indicator of next-hour watch time is its previous watch time trajectory.<\/b><span> Video popularity is generally very volatile by nature. Different videos uploaded by the same content creator can sometimes have vastly different watch times depending on how the community reacts to the content. After experimenting with multiple features, we found that past watch time trajectory is the best predictor of future watch time. This poses two technical challenges in terms of designing the model architecture and balancing the training data:<\/span><\/p>\n<ul>\n<li><span>Newly uploaded videos don\u2019t have a watch time trajectory. The longer a video stays on Facebook, the more we can learn from its past watch time. This means that the most predictive features won\u2019t apply to new videos. We want our model to perform reasonably well with missing data because the earlier the system can identify videos that will become popular on the platform, the more opportunity there is to deliver higher-quality content.<\/span><\/li>\n<li><span>Popular videos have a tendency to dominate training data. 
The patterns of the most popular videos are not necessarily applicable to all videos.<\/span><\/li>\n<\/ul>\n<p><b>The nature of watch time varies by video type.<\/b><span> Stories videos are shorter and get a shorter watch time on average than other videos. <a href=\"https:\/\/engineering.fb.com\/2020\/10\/22\/video-engineering\/live-streaming\/\">Live streams<\/a> get most of their watch time during the stream or a few hours afterward. Meanwhile, videos on demand (VOD) can have a varied lifespan and can rack up watch time long after they\u2019re initially uploaded if people start sharing them later.<\/span><\/p>\n<p><b>Improvements in ML metrics do not necessarily correlate directly with product improvements.<\/b><span> Traditional regression loss functions, such as RMSE, MAPE, and Huber Loss, are great for optimizing offline models. But a reduction in modeling error does not always translate directly into product improvements, such as improved user experience, more watch time coverage, or better compute utilization.<\/span><\/p>\n<h2><span>Building the ML model for video encoding<\/span><\/h2>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-17421 size-full\" src=\"https:\/\/i0.wp.com\/engineering.fb.com\/wp-content\/uploads\/2021\/04\/EncodingVideo_149EngBlog_Graphic4_draft_v3.jpg?resize=750%2C422&#038;ssl=1\" alt=\"video encoding machine learning model\" width=\"750\" height=\"422\"  data-recalc-dims=\"1\"><\/p>\n<p><span>To solve these challenges, we decided to train our model by using watch time event data. 
Each row of our training\/evaluation data represents a decision point that the system has to make a prediction for.<\/span><\/p>\n<p><span>Since our watch time event data can be skewed or imbalanced in many ways, as mentioned above, we performed data cleaning, transformation, bucketing, and weighted sampling on the dimensions we care about.<\/span><\/p>\n<p><span>Also, since newly uploaded videos don\u2019t have a watch time trajectory to draw from, we decided to build two models: one for handling upload-time requests and the other for view-time requests. The view-time model uses the three sets of features mentioned above. The upload-time model looks at the performance of other videos a content creator has uploaded and substitutes this for past watch time trajectories. Once a video is on Facebook long enough to have some past trajectories available, we switch to the view-time model.<\/span><\/p>\n<p><span>During model development, we selected the best launch candidates by looking at both <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Root-mean-square_deviation\"><span>Root Mean Square Error<\/span><\/a><span> (RMSE) and <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Mean_absolute_percentage_error\"><span>Mean Absolute Percentage Error<\/span><\/a><span> (MAPE). We use both metrics because RMSE is sensitive to outliers while MAPE is sensitive to small values. Our watch time label has a high variance, so we use MAPE to evaluate the performance of videos that are popular or moderately popular and RMSE to evaluate less watched videos. We also care about the model\u2019s ability to generalize well across different video types, ages, and popularity levels. Therefore, our evaluation always includes per-category metrics as well.<\/span><\/p>\n<p><span>MAPE and RMSE are good summary metrics for model selection, but they don\u2019t necessarily reflect direct product improvements. 
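<\/span><\/p>
<p><span>To make the two metrics concrete, here is a small illustrative sketch (hypothetical numbers, not our evaluation code) of how RMSE is dominated by absolute errors on large watch time values while MAPE is dominated by relative errors on small ones:<\/span><\/p>

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error: penalizes large absolute errors.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    # Mean Absolute Percentage Error: penalizes large relative errors.
    # Undefined for zero actuals; a real pipeline would handle that case.
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Next-hour watch time labels in minutes: one popular video, three rarely watched ones.
actual  = [1000.0, 2.0, 3.0, 1.0]
model_a = [ 900.0, 4.0, 6.0, 2.0]  # close on the popular video, 100 percent off elsewhere
model_b = [ 500.0, 2.0, 3.0, 1.0]  # exact on the small videos, far off on the popular one

# RMSE favors Model A (its error on the popular video is smaller);
# MAPE favors Model B (its relative errors on the small videos are zero).
assert rmse(actual, model_a) < rmse(actual, model_b)
assert mape(actual, model_b) < mape(actual, model_a)
```

<p><span>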
Sometimes when two models have a similar RMSE and MAPE, we also translate the evaluation into a classification problem to understand the trade-off. For example, if a video receives 1,000 minutes of watch time but Model A predicts 10 minutes, Model A\u2019s MAPE is 99 percent. If Model B predicts 1,990 minutes of watch time, Model B\u2019s MAPE will be the same as Model A\u2019s (i.e., 99 percent), but Model B\u2019s prediction makes it far more likely that the video will receive high-quality encodings.<\/span><\/p>\n<p><span>We also evaluate the classifications that videos are given because we want to capture the trade-off between applying advanced encodings too often and missing opportunities to apply them when there would be a benefit. For example, at a threshold of 10 seconds, we count the number of videos where the actual watch time is less than 10 seconds but the prediction is above it, and vice versa, in order to calculate the model\u2019s false positive and false negative rates. We repeat the same calculation for multiple thresholds. This method of evaluation gives us insights into how the model performs on videos of different popularity levels and whether it tends to suggest more encoding jobs than necessary or miss some opportunities.<\/span><\/p>\n<h2><span>The impact of the new video encoding model<\/span><\/h2>\n<p><span>In addition to improving the viewer experience for newly uploaded videos, the new model can identify older videos on Facebook that should have been encoded with more advanced encodings and route more computing resources to them. Doing this has shifted a large portion of watch time to advanced encodings, resulting in less buffering without requiring additional computing resources. 
The improved compression has also allowed people on Facebook with <a href=\"https:\/\/engineering.fb.com\/2020\/12\/21\/video-engineering\/rsys\/\">limited data plans<\/a>, such as those in <a href=\"https:\/\/engineering.fb.com\/2020\/12\/03\/production-engineering\/supercell-reaching-new-heights-for-wider-connectivity\/\">emerging markets<\/a>, to watch more videos at better quality.<\/span><\/p>\n<p><span>What\u2019s more, as we introduce new encoding recipes, we no longer have to spend a lot of time evaluating where in the priority range to assign them. Instead, depending on a recipe\u2019s benefit and cost value, the model automatically assigns a priority that would maximize overall benefit throughput. For example, we could introduce a very compute-intensive recipe that only makes sense to be applied to extremely popular videos and the model can identify such videos. Overall, this makes it easier for us to continue to invest in newer and more advanced codecs to give people on Facebook the best-quality video experience.<\/span><\/p>\n<h2><span>Acknowledgements<\/span><\/h2>\n<p><i><span>This work is the collective result of the entire Video Infra team at Facebook. 
The authors would like to personally thank Shankar Regunathan, Atasay Gokkaya, Volodymyr Kondratenko, Jamie Chen, Cosmin Stejerean, Denise Noyes, Zach Wang, Oytun Eskiyenenturk, Mathieu Henaire, Pankaj Sethi, and David Ronca for all their contributions.<\/span><\/i><\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/engineering.fb.com\/2021\/04\/05\/video-engineering\/how-facebook-encodes-your-videos\/\">How Facebook encodes your videos<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/engineering.fb.com\/\">Facebook Engineering<\/a>.<\/p>\n<p><a href=\"https:\/\/engineering.fb.com\/2021\/04\/05\/video-engineering\/how-facebook-encodes-your-videos\/\" target=\"_blank\" rel=\"noopener\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>People upload hundreds of millions of videos to Facebook every day. Making sure every video is delivered at the best quality \u2014 with the highest resolution and as little buffering as possible \u2014 means optimizing not only when and how our video codecs compress and decompress videos for viewing, but also which codecs are used&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2021\/08\/31\/how-facebook-encodes-your-videos\/\">Continue reading <span class=\"screen-reader-text\">How Facebook encodes your videos<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-295","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":880,"url":"https:\/\/fde.cat\/index.php\/2024\/06\/13\/mlow-metas-low-bitrate-audio-codec\/","url_meta":{"origin":295,"position":0},"title":"MLow: Meta\u2019s low bitrate audio codec","date":"June 13, 2024","format":false,"excerpt":"At Meta, we support real-time 
communication (RTC) for billions of people through our apps, including WhatsApp, Instagram, and Messenger.\u00a0 We are working to make RTC accessible by providing a high-quality experience for everyone \u2013 even those who might not have the fastest connections or the latest phones. As more and\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":841,"url":"https:\/\/fde.cat\/index.php\/2024\/03\/20\/better-video-for-mobile-rtc-with-av1-and-hd\/","url_meta":{"origin":295,"position":1},"title":"Better video for mobile RTC with AV1 and HD","date":"March 20, 2024","format":false,"excerpt":"At Meta, we support real-time communication (RTC) for billions of people through our apps, including Messenger, Instagram, and WhatsApp. We\u2019ve seen significant benefits by adopting the AV1 codec for RTC. Here\u2019s how we are improving the RTC video quality for our apps with tools like the AV1 codec, the challenges\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":682,"url":"https:\/\/fde.cat\/index.php\/2023\/02\/21\/how-meta-brought-av1-to-reels\/","url_meta":{"origin":295,"position":2},"title":"How Meta brought AV1 to Reels","date":"February 21, 2023","format":false,"excerpt":"We\u2019re sharing how we\u2019re enabling production and delivery of AV1 for Facebook Reels and Instagram Reels. We believe AV1 is the most viable codec for Meta for the coming years. It offers higher quality at a much lower bit rate compared with previous generations of video codecs. 
Meta has worked\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":735,"url":"https:\/\/fde.cat\/index.php\/2023\/07\/17\/bringing-hdr-video-to-reels\/","url_meta":{"origin":295,"position":3},"title":"Bringing HDR video to Reels","date":"July 17, 2023","format":false,"excerpt":"Meta has made it possible for people to upload high dynamic range (HDR) videos from their phone\u2019s camera roll to Reels on Facebook and Instagram. To show standard dynamic range (SDR) UI elements and overlays legibly on top of HDR video, we render them at a brightness level comparable to\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":310,"url":"https:\/\/fde.cat\/index.php\/2021\/08\/31\/peering-automation-at-facebook\/","url_meta":{"origin":295,"position":4},"title":"Peering automation at Facebook","date":"August 31, 2021","format":false,"excerpt":"Traffic on the internet travels across many different kinds of links. A fast and reliable way to exchange traffic between different networks and service providers is through peering. Initially, we managed peering via a time-intensive manual process. Reliable peering is essential for Facebook and for everyone\u2019s internet use. But there\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":699,"url":"https:\/\/fde.cat\/index.php\/2023\/04\/11\/why-xhe-aac-is-being-embraced-at-meta\/","url_meta":{"origin":295,"position":5},"title":"Why xHE-AAC is being embraced at Meta","date":"April 11, 2023","format":false,"excerpt":"We\u2019re sharing how Meta delivers high-quality audio at scale with the xHE-AAC audio codec. 
xHE-AAC has already been deployed on Facebook and Instagram to provide enhanced audio for features like Reels and Stories.\u00a0 At Meta, we serve every media use case imaginable for billions of people across the world \u2014\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/295","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=295"}],"version-history":[{"count":1,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/295\/revisions"}],"predecessor-version":[{"id":416,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/295\/revisions\/416"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=295"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=295"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=295"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}