{"id":768,"date":"2023-10-05T16:00:32","date_gmt":"2023-10-05T16:00:32","guid":{"rendered":"https:\/\/fde.cat\/index.php\/2023\/10\/05\/meta-contributes-new-features-to-python-3-12\/"},"modified":"2023-10-05T16:00:32","modified_gmt":"2023-10-05T16:00:32","slug":"meta-contributes-new-features-to-python-3-12","status":"publish","type":"post","link":"https:\/\/fde.cat\/index.php\/2023\/10\/05\/meta-contributes-new-features-to-python-3-12\/","title":{"rendered":"Meta contributes new features to Python 3.12"},"content":{"rendered":"<p><span>Python 3.12 is out! It includes new features and performance improvements \u2013 some contributed by Meta \u2013 that we believe will benefit all Python users.<\/span><br \/>\n<span>We\u2019re sharing details about these new features that we worked closely with the Python community to develop.<\/span><\/p>\n<p><span>This week\u2019s release of <\/span><a href=\"https:\/\/discuss.python.org\/t\/python-3-12-0-final-is-here\/35186\" target=\"_blank\" rel=\"noopener\"><span>Python 3.12<\/span><\/a><span> marks a milestone in our efforts to make our work developing and scaling Python for Meta\u2019s use cases <\/span><a href=\"https:\/\/discuss.python.org\/t\/making-cinder-more-broadly-available\/14062\" target=\"_blank\" rel=\"noopener\"><span>more accessible to the broader Python community<\/span><\/a><span>. Open source at Meta is an important part of how we work and share our learnings with the community. <\/span><\/p>\n<p><span>For several years, we have been sharing our work on Python and CPython through our open source Python runtime,<\/span> <a href=\"https:\/\/github.com\/facebookincubator\/cinder\" target=\"_blank\" rel=\"noopener\"><span>Cinder<\/span><\/a><span>. 
We have also been working closely with the Python community to introduce new features and optimizations to improve Python\u2019s performance and to allow third parties to experiment with Python runtime optimization more easily.<\/span><\/p>\n<p><span>For the Python 3.12 release, we collaborated with the Python community on several categories of features:<\/span><\/p>\n<p><span>Immortal Objects<\/span><br \/>\n<span>Type system improvements<\/span><br \/>\n<span>Performance optimizations<\/span><br \/>\n<span>New benchmarks<\/span><br \/>\n<span>Cinder hooks\u00a0<\/span><\/p>\n<h2><span>Immortal Objects<\/span><\/h2>\n<p><a href=\"https:\/\/peps.python.org\/pep-0683\/\" target=\"_blank\" rel=\"noopener\"><span>Immortal Objects \u2013 PEP 683<\/span><\/a><span> makes it possible to create Python objects that don\u2019t participate in <\/span><a href=\"https:\/\/devguide.python.org\/internals\/garbage-collector\/\" target=\"_blank\" rel=\"noopener\"><span>reference counting<\/span><\/a><span>, and will live until Python interpreter shutdown. 
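Immortality is observable from pure Python: a small sketch (the exact pinned refcount value is a CPython implementation detail, so we only check that it no longer moves on 3.12+):

```python
import sys

# None is immortal in CPython 3.12: its reference count is pinned to a
# sentinel value and never updated, so taking new references no longer
# dirties its memory page (the write that triggered copy-on-write after fork()).
before = sys.getrefcount(None)
refs = [None] * 1000  # on 3.11 and earlier this bumps None's refcount
after = sys.getrefcount(None)

if sys.version_info >= (3, 12):
    print(before == after)  # True: the count never moved
```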
The<\/span> <a href=\"https:\/\/engineering.fb.com\/2023\/08\/15\/developer-tools\/immortal-objects-for-python-instagram-meta\/\" target=\"_blank\" rel=\"noopener\"><span>original motivation<\/span><\/a><span> for this feature was to reduce memory use in the forking Instagram web-server workload by reducing copy-on-writes triggered by reference-count updates.<\/span><\/p>\n<p><span>Immortal Objects are also an important step towards truly immutable Python objects that can be shared between Python interpreters with no need for locking (for example, via the global interpreter lock, or GIL). This can enable improved Python single-process parallelism, whether via<\/span><a href=\"https:\/\/peps.python.org\/pep-0684\/\" target=\"_blank\" rel=\"noopener\"> <span>multiple sub-interpreters<\/span><\/a><span> or<\/span> <a href=\"https:\/\/peps.python.org\/pep-0703\/\" target=\"_blank\" rel=\"noopener\"><span>GIL-free multi-threading<\/span><\/a><span>.<\/span><\/p>\n<h2><span>Type system improvements<\/span><\/h2>\n<p><span>The engineering team behind <\/span><a href=\"https:\/\/pyre-check.org\/\" target=\"_blank\" rel=\"noopener\"><span>Pyre<\/span><\/a><span>, an open source Python type-checker, authored and implemented <\/span><a href=\"https:\/\/peps.python.org\/pep-0698\/\" target=\"_blank\" rel=\"noopener\"><span>PEP 698<\/span><\/a><span> to add a <\/span><span>@typing.override<\/span><span> decorator, which helps avoid bugs when refactoring class inheritance hierarchies that use method overriding.\u00a0<\/span><\/p>\n<p><span>Python developers can apply this new decorator to a subclass method that overrides a method from a base class. As a result, static type checkers will be able to warn developers if the base class is modified such that the overridden method no longer exists. Developers can avoid accidentally turning a method override into dead code. 
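For illustration, a minimal use of the decorator (the Animal/Dog names are ours, and we fall back to a no-op stand-in so the sketch also runs on interpreters older than 3.12):

```python
try:
    from typing import override  # new in Python 3.12 (PEP 698)
except ImportError:
    def override(func):  # no-op stand-in for pre-3.12 interpreters
        return func

class Animal:
    def speak(self) -> str:
        return "..."

class Dog(Animal):
    @override  # if Animal.speak is later renamed or removed, a type
    def speak(self) -> str:  # checker flags this method as a stale override
        return "woof"

print(Dog().speak())  # woof
```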
This improves confidence in refactoring and helps keep the code more maintainable.\u00a0\u00a0<\/span><\/p>\n<h2><span>Performance optimizations<\/span><\/h2>\n<h3><span>Faster comprehensions<\/span><\/h3>\n<p><span>In previous Python versions, all comprehensions were compiled as nested functions, and every execution of a comprehension allocated and destroyed a single-use Python function object.<\/span><\/p>\n<p><span>In Python 3.12,<\/span> <a href=\"https:\/\/peps.python.org\/pep-0709\/\" target=\"_blank\" rel=\"noopener\"><span>PEP 709<\/span><\/a><span> inlines all list, dict, and set comprehensions for better performance (up to two times better in the best case).<\/span><\/p>\n<p><span>The implementation and debugging of PEP 709 also uncovered a pre-existing bytecode compiler bug that could result in silently wrong code execution in Python 3.11, which we <\/span><a href=\"https:\/\/github.com\/python\/cpython\/pull\/104620\"><span>fixed<\/span><\/a><span>.<\/span><\/p>\n<h3><span>Eager asyncio tasks<\/span><\/h3>\n<p><span>While Python\u2019s asynchronous programming support enables single-process concurrency, it also has noticeable runtime overhead. Every call to an async function creates an extra coroutine object, and the standard asyncio library will often bring additional overhead in the form of<\/span> <a href=\"https:\/\/docs.python.org\/3.12\/library\/asyncio-task.html#asyncio.Task\" target=\"_blank\" rel=\"noopener\"><span>Task<\/span><\/a><span> objects and event loop scheduling.<\/span><\/p>\n<p><span>We observed that, in practice, in a fully async codebase, many async functions are often able to return a result immediately, with no need to suspend. (This may be due to memoization, for example.) 
In these cases, if the result of the function is immediately awaited (e.g., by <\/span><span>await some_async_func()<\/span><span>, the most common way to call an async function), the coroutine\/Task objects and event loop scheduling can be unnecessary overhead.<\/span><\/p>\n<p><span>Cinder eliminates this overhead via eager async execution. If an async function call is awaited immediately, it is called with a flag set that allows it to return a result directly, if possible, without creating a coroutine object. If an<\/span> <a href=\"https:\/\/docs.python.org\/3.12\/library\/asyncio-task.html#asyncio.gather\" target=\"_blank\" rel=\"noopener\"><span>asyncio.gather()<\/span><\/a><span> is immediately awaited, and all the async functions it gathers are able to return immediately, there\u2019s no need to ever create a <\/span><span>Task<\/span><span> or schedule it to the event loop.\u00a0<\/span><\/p>\n<p><span>Fully eager async execution would be an invasive (and breaking) change to Python, and doesn\u2019t work as well with the new Python 3.11+<\/span> <a href=\"https:\/\/docs.python.org\/3.12\/library\/asyncio-task.html#asyncio.TaskGroup\" target=\"_blank\" rel=\"noopener\"><span>TaskGroup<\/span><\/a><span> API for managing concurrent tasks. So in Python 3.12 we added a simpler version of the feature:<\/span> <a href=\"https:\/\/docs.python.org\/3.12\/library\/asyncio-task.html#asyncio.eager_task_factory\" target=\"_blank\" rel=\"noopener\"><span>eager asyncio tasks<\/span><\/a><span>. 
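Opting in is a single call on the running loop; a minimal sketch, guarded with getattr so it also runs on interpreters older than 3.12:

```python
import asyncio

async def ready():
    return 42  # completes without ever suspending

async def main():
    loop = asyncio.get_running_loop()
    factory = getattr(asyncio, "eager_task_factory", None)  # Python 3.12+
    if factory is not None:
        loop.set_task_factory(factory)
        # create_task() now starts the coroutine eagerly; since ready()
        # finishes synchronously, no event-loop scheduling is needed.
    task = asyncio.create_task(ready())
    return await task

print(asyncio.run(main()))  # 42
```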
With eager tasks, coroutine and Task objects are still created even when a result is available immediately, but in that case we can avoid scheduling the task to the event loop and instead resolve it right away.<\/span><\/p>\n<p><span>This is more efficient, but it is a semantic change, so this feature is <\/span><a href=\"https:\/\/docs.python.org\/3.12\/library\/asyncio-task.html#asyncio.eager_task_factory\" target=\"_blank\" rel=\"noopener\"><span>opt-in via a custom task factory<\/span><\/a><span>.<\/span><\/p>\n<h3><span>Other asyncio improvements<\/span><\/h3>\n<p><span>We also landed a faster<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/100345\" target=\"_blank\" rel=\"noopener\"><span>C implementation of asyncio.current_task<\/span><\/a><span> and an <\/span><a href=\"https:\/\/github.com\/python\/cpython\/pull\/103767\" target=\"_blank\" rel=\"noopener\"><span>optimization to async task creation<\/span><\/a><span> that shows a <\/span><a href=\"https:\/\/github.com\/python\/cpython\/pull\/103767#issuecomment-1528900046\" target=\"_blank\" rel=\"noopener\"><span>win of up to 5 percent on asyncio benchmarks<\/span><\/a><span>.<\/span><span>\u00a0<\/span><\/p>\n<h3><span>Faster <\/span><span>super()<\/span><span> calls<\/span><\/h3>\n<p><span>The new<\/span> <a href=\"https:\/\/docs.python.org\/3.12\/library\/dis.html#opcode-LOAD_SUPER_ATTR\" target=\"_blank\" rel=\"noopener\"><span>LOAD_SUPER_ATTR opcode<\/span><\/a><span> optimizes code of the form <\/span><span>super().attr<\/span><span> and <\/span><span>super().method(\u2026)<\/span><span>. Such code previously had to allocate, and then throw away, a single-use \u201csuper\u201d object each time it ran. 
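The code in question is ordinary cooperative inheritance; a tiny sketch with made-up classes:

```python
class Greeter:
    def greet(self):
        return "hello"

class LoudGreeter(Greeter):
    def greet(self):
        # Zero-argument super() here compiles to LOAD_SUPER_ATTR on 3.12,
        # so no temporary super object is allocated on each call.
        return super().greet().upper() + "!"

print(LoudGreeter().greet())  # HELLO!
```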
Now it has little more overhead than an ordinary method call or attribute access.<\/span><\/p>\n<h3><span>Other performance optimizations<\/span><\/h3>\n<p><span>We also landed two<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/104063\" target=\"_blank\" rel=\"noopener\"><span>hasattr <\/span><\/a><a href=\"https:\/\/github.com\/python\/cpython\/pull\/104079\" target=\"_blank\" rel=\"noopener\"><span>optimizations<\/span><\/a><span> and a<\/span>\u00a0<a href=\"https:\/\/github.com\/python\/cpython\/pull\/100252\" target=\"_blank\" rel=\"noopener\"><span>3.8x performance improvement to unittest.mock.Mock<\/span><\/a><span>.<\/span><\/p>\n<h2><span>New benchmarks<\/span><\/h2>\n<p><span>When we optimize Python for internal use at Meta, we are usually able to test and validate our optimizations directly against our real-world workloads. Optimization work on open-source Python doesn\u2019t have such a production workload to test against and needs to be effective (and avoid regression) on a variety of different workloads.<\/span><\/p>\n<p><span>The<\/span> <a href=\"https:\/\/github.com\/python\/pyperformance\" target=\"_blank\" rel=\"noopener\"><span>Python Performance Benchmark suite<\/span><\/a><span> is the standard set of benchmarks used in open-source Python optimization work. 
During the 3.12 development cycle, we contributed several new benchmarks to it so that it more accurately represents workload characteristics we see at Meta.<\/span><\/p>\n<p><span>We added:<\/span><\/p>\n<p><span>A<\/span> <a href=\"https:\/\/github.com\/python\/pyperformance\/pull\/187\" target=\"_blank\" rel=\"noopener\"><span>set of async_tree benchmarks<\/span><\/a><span> that better model an asyncio-heavy workload.<\/span><br \/>\n<span>A pair of benchmarks that exercise<\/span> <a href=\"https:\/\/github.com\/python\/pyperformance\/pull\/265\" target=\"_blank\" rel=\"noopener\"><span>comprehensions<\/span><\/a><span> and<\/span> <a href=\"https:\/\/github.com\/python\/pyperformance\/pull\/271\" target=\"_blank\" rel=\"noopener\"><span>super()<\/span><\/a><span> more thoroughly, which were blind spots of the existing benchmark suite.<\/span><\/p>\n<h2><span>Cinder hooks<\/span><\/h2>\n<p><span>Some parts of Cinder (our<\/span> <a href=\"https:\/\/github.com\/facebookincubator\/cinder#the-cinder-jit\" target=\"_blank\" rel=\"noopener\"><span>JIT compiler<\/span><\/a><span> and<\/span> <a href=\"https:\/\/github.com\/facebookincubator\/cinder#static-python\" target=\"_blank\" rel=\"noopener\"><span>Static Python<\/span><\/a><span>) wouldn\u2019t make sense as part of upstream CPython (because of limited platform support, C versus C++, semantic changes, and just the size of the code), so our goal is to package these as an independent extension module, CinderX.<\/span><\/p>\n<p><span>This requires a number of new hooks in the core runtime. We landed many of these hooks in Python 3.12:<\/span><\/p>\n<p><span>An<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/92257\" target=\"_blank\" rel=\"noopener\"><span>API to set the vectorcall entrypoint for a Python function<\/span><\/a><span>. 
This gives the JIT an entry point to take over execution for a given function.<\/span><br \/>\n<span>We added<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/31787\" target=\"_blank\" rel=\"noopener\"><span>dictionary watchers<\/span><\/a><span>,<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/97875\" target=\"_blank\" rel=\"noopener\"><span>type watchers<\/span><\/a><span>,<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/98175\" target=\"_blank\" rel=\"noopener\"><span>function watchers<\/span><\/a><span>, and<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/99859\" target=\"_blank\" rel=\"noopener\"><span>code object watchers<\/span><\/a><span>. All of these allow the Cinder JIT to be notified of dynamic changes that might invalidate its assumptions, so its fast path can remain as fast as possible.<\/span><br \/>\n<span>We landed<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/102022\" target=\"_blank\" rel=\"noopener\"><span>extensibility in the code generator for CPython\u2019s core interpreter<\/span><\/a><span> that will allow Static Python to easily re-generate an interpreter with added Static Python opcodes, and a <\/span><a href=\"https:\/\/github.com\/python\/cpython\/pull\/102014\" target=\"_blank\" rel=\"noopener\"><span>C API to visit all GC-tracked objects<\/span><\/a><span>, which will allow the Cinder JIT to discover functions that were created before it was enabled.<\/span><br \/>\n<span>We also added a <\/span><a href=\"https:\/\/github.com\/python\/cpython\/pull\/103546\" target=\"_blank\" rel=\"noopener\"><span>thread-safe API for writing to perf-map files<\/span><\/a><span>. Perf-map files allow the Linux perf profiler to give a human-readable name to dynamically-generated sections of machine code, e.g. from a JIT compiler. 
This API will allow the Cinder JIT to safely write to perf map files without colliding with other JITs or with the new Python 3.12<\/span> <a href=\"https:\/\/github.com\/python\/cpython\/pull\/96123\" target=\"_blank\" rel=\"noopener\"><span>perf trampoline feature<\/span><\/a><span>.<\/span><\/p>\n<p><span>These improvements will be useful to anyone building a third party JIT compiler or runtime optimizer for CPython. There are also plans to use the watchers internally in core CPython.\u00a0<\/span><\/p>\n<h2><span>Beyond Python 3.12<\/span><\/h2>\n<p><span>Python plays a significant role at Meta. It\u2019s an important part of our infrastructure, including the<\/span> <a href=\"https:\/\/engineering.fb.com\/2023\/08\/15\/developer-tools\/immortal-objects-for-python-instagram-meta\/\" target=\"_blank\" rel=\"noopener\"><span>Instagram server stack<\/span><\/a><span>. And it\u2019s the lingua franca for<\/span> <a href=\"https:\/\/ai.facebook.com\/blog\/code-llama-large-language-model-coding\" target=\"_blank\" rel=\"noopener\"><span>our AI\/ML work<\/span><\/a><span>, highlighted by our development of<\/span> <a href=\"https:\/\/pytorch.org\/\" target=\"_blank\" rel=\"noopener\"><span>PyTorch<\/span><\/a><span>, a machine learning framework for a wide range of use cases including computer vision, natural language processing, and more.<\/span><\/p>\n<p><span>Our work with the Python community doesn\u2019t end with the 3.12 release. We are currently discussing a new proposal, <\/span><a href=\"https:\/\/peps.python.org\/pep-0703\/\" target=\"_blank\" rel=\"noopener\"><span>PEP 703<\/span><\/a><span>, with the Python Steering Council to remove the GIL and allow Python to run in multiple threads in parallel. This update could greatly help anyone using Python in a multi-threaded environment.\u00a0<\/span><\/p>\n<p><span>Meta\u2019s involvement with the Python community also goes beyond code. 
In 2023, we continued supporting the <\/span><a href=\"https:\/\/pyfound.blogspot.com\/2022\/03\/meta-deepens-its-investment-in-python.html\" target=\"_blank\" rel=\"noopener\"><span>Developer in Residence program for Python<\/span><\/a><span> and sponsored events like <\/span><a href=\"https:\/\/us.pycon.org\/2023\/#\" target=\"_blank\" rel=\"noopener\"><span>PyCon US<\/span><\/a><span>. We also shared our learnings in talks like \u201c<\/span><a href=\"https:\/\/us.pycon.org\/2023\/schedule\/presentation\/155\/\" target=\"_blank\" rel=\"noopener\"><span>Breaking Boundaries: Advancements in High-Performance AI\/ML through PyTorch\u2019s Python Compiler<\/span><\/a><span>\u201d and <\/span><a href=\"https:\/\/engineering.fb.com\/?s=python\" target=\"_blank\" rel=\"noopener\"><span>posts on the Meta Engineering blog<\/span><\/a><span>.\u00a0<\/span><\/p>\n<p><span>We are grateful to be a part of this open source community and look forward to working together to move the Python programming language forward.<\/span><\/p>\n<h2><span>Acknowledgements<\/span><\/h2>\n<p><span>The author would like to acknowledge the following people for their work in contributing to all of these new features: Eddie Elizondo, Vladimir Matveev, Itamar Oren, Steven Troxler, Joshua Xu, Shannon Zhu, Jacob Bower, Pranav Thulasiram Bhat, Ariel Lin, Andrew Frost, and Sam Gross.<\/span><\/p>\n<p>The post <a href=\"https:\/\/engineering.fb.com\/2023\/10\/05\/developer-tools\/python-312-meta-new-features\/\">Meta contributes new features to Python 3.12<\/a> appeared first on <a href=\"https:\/\/engineering.fb.com\/\">Engineering at Meta<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>Python 3.12 is out! It includes new features and performance improvements \u2013 some contributed by Meta \u2013 that we believe will benefit all Python users. We\u2019re sharing details about these new features that we worked closely with the Python community to develop. 
This week\u2019s release of Python 3.12 marks a milestone in our efforts to&hellip; <a class=\"more-link\" href=\"https:\/\/fde.cat\/index.php\/2023\/10\/05\/meta-contributes-new-features-to-python-3-12\/\">Continue reading <span class=\"screen-reader-text\">Meta contributes new features to Python 3.12<\/span><\/a><\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-768","post","type-post","status-publish","format-standard","hentry","category-technology","entry"],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":824,"url":"https:\/\/fde.cat\/index.php\/2024\/02\/12\/meta-loves-python\/","url_meta":{"origin":768,"position":0},"title":"Meta loves Python","date":"February 12, 2024","format":false,"excerpt":"By now you\u2019re already aware that Python 3.12 has been released. But did you know that several of its new features were developed by Meta? Meta engineer Pascal Hartig (@passy) is joined on the Meta Tech Podcast by Itamar Oren and Carl Meyer, two software engineers at Meta, to discuss\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":795,"url":"https:\/\/fde.cat\/index.php\/2023\/11\/21\/writing-and-linting-python-at-scale\/","url_meta":{"origin":768,"position":1},"title":"Writing and linting Python at scale","date":"November 21, 2023","format":false,"excerpt":"Python plays a big part at Meta. It powers Instagram\u2019s backend and plays an important role in our configuration systems, as well as much of our AI work. Meta even made contributions to Python 3.12, the latest version of Python. 
On this episode of the\u00a0Meta Tech Podcast, Meta engineer Pascal\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":748,"url":"https:\/\/fde.cat\/index.php\/2023\/08\/15\/introducing-immortal-objects-for-python\/","url_meta":{"origin":768,"position":2},"title":"Introducing Immortal Objects for Python","date":"August 15, 2023","format":false,"excerpt":"Instagram has introduced Immortal Objects \u2013 PEP-683 \u2013 to Python. Now, objects can bypass reference count checks and live throughout the entire execution of the runtime, unlocking exciting avenues for true parallelism. At Meta, we use Python (Django) for our frontend server within Instagram. To handle parallelism, we rely on\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":742,"url":"https:\/\/fde.cat\/index.php\/2023\/08\/07\/fixit-2-metas-next-generation-auto-fixing-linter\/","url_meta":{"origin":768,"position":3},"title":"Fixit 2: Meta\u2019s next-generation auto-fixing linter","date":"August 7, 2023","format":false,"excerpt":"Fixit is dead! Long live Fixit 2 \u2013 the latest version of our open-source auto-fixing linter. Fixit 2 allows developers to efficiently build custom lint rules and perform auto-fixes for their codebases. Fixit 2 is available today on PyPI. 
Python is one of the most popular languages in use at\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":176,"url":"https:\/\/fde.cat\/index.php\/2021\/01\/25\/smart-argument-suite-seamlessly-connecting-python-jobs\/","url_meta":{"origin":768,"position":4},"title":"Smart Argument Suite: Seamlessly connecting Python jobs","date":"January 25, 2021","format":false,"excerpt":"Co-authors: Jun Jia and Alice Wu Introduction It\u2019s a very common scenario that an AI solution involves composing different jobs, such as data processing and model training or evaluation, into workflows and then submitting them to an orchestration engine for execution. At large companies such as LinkedIn, there may be\u2026","rel":"","context":"In &quot;External&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":566,"url":"https:\/\/fde.cat\/index.php\/2022\/04\/26\/sql-notebooks-combining-the-power-of-jupyter-and-sql-editors-for-data-analytics\/","url_meta":{"origin":768,"position":5},"title":"SQL Notebooks: Combining the power of Jupyter and SQL editors for data analytics","date":"April 26, 2022","format":false,"excerpt":"At Meta, our internal data tools are the main channel from our data scientists to our production engineers. As such, it\u2019s important for us to empower our scientists and engineers not only to use data to make decisions, but also to do so in a secure and compliant way. 
We\u2019ve\u2026","rel":"","context":"In &quot;Technology&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/768","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/comments?post=768"}],"version-history":[{"count":0,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/posts\/768\/revisions"}],"wp:attachment":[{"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/media?parent=768"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/categories?post=768"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fde.cat\/index.php\/wp-json\/wp\/v2\/tags?post=768"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}