{"id":4576,"date":"2025-11-09T00:38:20","date_gmt":"2025-11-09T00:38:20","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4576"},"modified":"2025-11-09T00:38:20","modified_gmt":"2025-11-09T00:38:20","slug":"yun-ta-tsai","status":"publish","type":"page","link":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4576","title":{"rendered":"Yun-Ta Tsai"},"content":{"rendered":"\n<p>Senior Staff Software Engineer at Tesla<\/p>\n\n\n\n<p><a href=\"https:\/\/scholar.google.com\/citations?user=7fUcF9UAAAAJ&amp;hl=en\">Yun-Ta Tsai &#8211; Google Scholar<\/a><\/p>\n\n\n\n<p><strong>Specialty: Full Stack Engineering, ML, Janitor of All.<\/strong><\/p>\n\n\n\n<p><strong>2023<\/strong> Sr. Staff Engineer, <a href=\"https:\/\/x.com\/@Tesla_AI\" target=\"_blank\" rel=\"noreferrer noopener\">@Tesla_AI<\/a><\/p>\n\n\n\n<p><strong>2020<\/strong> Staff Engineer, <a href=\"https:\/\/x.com\/@Tesla_AI\" target=\"_blank\" rel=\"noreferrer noopener\">@Tesla_AI<\/a><\/p>\n\n\n\n<p><strong>2019<\/strong> Staff SWE, <a href=\"https:\/\/x.com\/@GoogleAI\" target=\"_blank\" rel=\"noreferrer noopener\">@GoogleAI<\/a><\/p>\n\n\n\n<p><strong>2017<\/strong> Senior SWE, <a href=\"https:\/\/x.com\/@GoogleAI\" target=\"_blank\" rel=\"noreferrer noopener\">@GoogleAI<\/a><\/p>\n\n\n\n<p><strong>2015<\/strong> SWE, <a href=\"https:\/\/x.com\/@GoogleAI\" target=\"_blank\" rel=\"noreferrer noopener\">@GoogleAI<\/a><\/p>\n\n\n\n<p><strong>2014<\/strong> SWE, Google X; Senior Research Scientist, <a href=\"https:\/\/x.com\/@nvidia\" target=\"_blank\" rel=\"noreferrer noopener\">@nvidia<\/a><\/p>\n\n\n\n<p><strong>2011<\/strong> Research Scientist, <a href=\"https:\/\/x.com\/@nvidia\" target=\"_blank\" rel=\"noreferrer noopener\">@nvidia<\/a><\/p>\n\n\n\n<p><strong>2010<\/strong> Senior Researcher, <a href=\"https:\/\/x.com\/@nokia\" target=\"_blank\" rel=\"noreferrer noopener\">@nokia<\/a><\/p>\n\n\n\n<p><strong>2009<\/strong> Researcher, <a 
href=\"https:\/\/x.com\/@nokia\" target=\"_blank\" rel=\"noreferrer noopener\">@nokia<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/x.com\/YunTaTsai1\">Yun-Ta Tsai<\/a><\/p>\n\n\n\n<p><a href=\"https:\/\/x.com\/YunTaTsai1\">@YunTaTsai1<\/a><\/p>\n\n\n\n<p>Designing an inference chip for robots is actually very difficult. In data centers, each chip is bathed in a jacuzzi and babysat by nannies; if one dies, it is hot-swapped by one of its clones. Even so, the fault rate of GPUs in data centers is quite high: the industry-average annual fault rate of the H100 is 9%. Ideal conditions can bring it down to 2%, but never below single digits. Fault recovery of a GPU node can also take a while, from minutes to hours; it is not instantaneous. In robots, the chips are out in the cold and need rapid self-recovery. The fault tolerance required is in a different league. It is not uncommon for robotics companies to struggle to keep a chip running for more than a few hours without rebooting. For chip companies, this is great, since they can tell robotics companies to buy more chips for hot swapping. 
For robotics companies, this is bad: it is obviously not a scalable solution, but they are stuck in endless back-and-forth JIRA tickets with vendors.<\/p>\n\n\n\n<p><a href=\"https:\/\/x.com\/YunTaTsai1\/status\/1987200430938456068\"><time datetime=\"2025-11-08T16:47:57.000Z\">5:47 PM \u00b7 Nov 8, 2025<\/time><\/a><\/p>\n\n\n\n<p><strong>4.3M<\/strong> Views<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:Zph67rFs4hoC\">Neural light transport<\/a>T Yun-Ta, X Zhang, JT Barron, S FANELLO, SUN Tiancheng, T XueUS Patent 12,094,054<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=5205053994814684881\">5<\/a><\/td><td>2024<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:3fE2CSJIrl8C\">Photo relighting using deep neural networks and confidence learning<\/a>SUN Tiancheng, T Yun-TaUS Patent 11,776,095<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=10388241334313770676\">8<\/a><\/td><td>2023<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:5nxA0vEk-isC\">Neural light transport for relighting and view synthesis<\/a>X Zhang, S Fanello, YT Tsai, T Sun, T Xue, R Pandey, S Orts-Escolano, &#8230;ACM Transactions on Graphics (TOG) 40 (1), 1-17<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=8251832931143197765\">134<\/a><\/td><td>2021<\/td><\/tr><tr><td><a 
href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:MXK_kJrjxJIC\">Cross-camera convolutional color constancy<\/a>M Afifi, JT Barron, C LeGendre, YT Tsai, F BleibelProceedings of the IEEE\/CVF International Conference on Computer Vision&nbsp;\u2026<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=1569423775114839676\">77<\/a><\/td><td>2021<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:8k81kl-MbHgC\">Light stage super-resolution: continuous high-frequency relighting<\/a>T Sun, Z Xu, X Zhang, S Fanello, C Rhemann, P Debevec, YT Tsai, &#8230;ACM Transactions on Graphics (TOG) 39 (6), 1-12<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=4537518894756346993\">57<\/a><\/td><td>2020<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:UebtZRa9Y70C\">Portrait shadow manipulation<\/a>X Zhang, JT Barron, YT Tsai, R Pandey, X Zhang, R Ng, DE JacobsACM Transactions on Graphics (TOG) 39 (4), 78: 1-78: 14<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=14524694223740638606\">113<\/a><\/td><td>2020<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:hqOjcs7Dif8C\">Sky optimization: Semantically aware image processing of skies in low-light photography<\/a>O Liba, L Cai, YT Tsai, E Eban, Y Movshovitz-Attias, Y Pritch, H Chen, &#8230;Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern&nbsp;\u2026<\/td><td><a 
href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=3716173147240519873\">20<\/a><\/td><td>2020<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:eQOLeE2rZwMC\">Handheld mobile photography in very low light.<\/a>O Liba, K Murthy, YT Tsai, T Brooks, T Xue, N Karnad, Q He, JT Barron, &#8230;ACM Trans. Graph. 38 (6), 164:1-164:16<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=17768193993343974750\">147<\/a><\/td><td>2019<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:zYLM7Y9cAGgC\">Single image portrait relighting.<\/a>T Sun, JT Barron, YT Tsai, Z Xu, X Yu, G Fyffe, C Rhemann, J Busch, &#8230;ACM Trans. Graph. 38 (4), 79:1-79:12<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=9262979268526607727\">315<\/a><\/td><td>2019<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:9yKSN-GCB0IC\">Fast fourier color constancy<\/a>JT Barron, YT TsaiProceedings of the IEEE conference on computer vision and pattern&nbsp;\u2026<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=5448854653320631324\">257<\/a><\/td><td>2017<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:_FxGoFyzp5QC\">Efficient approximate-nearest-neighbor (ANN) search for high-quality collaborative filtering<\/a>DS PAJAK, T Yun-Ta, M SteinbergerUS Patent 9,454,806<\/td><td><a 
href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=6165637755345942357\">14<\/a><\/td><td>2016<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:2osOgNQ5qMEC\">Fast ANN for High\u2010Quality Collaborative Filtering<\/a>YT Tsai, M Steinberger, D Paj\u0105k, K PulliComputer graphics forum 35 (1), 138-151<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=6425260792291883986\">6<\/a><\/td><td>2016<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:ufrVoPGSRksC\">Face beautification system and method of use thereof<\/a>E Albuz, C Tracey, N Garg, YT TSAI, D PajakUS Patent App. 14\/031,551<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=7916581953358164150\">6<\/a><\/td><td>2014<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:Y0pCki6q_DkC\">Flexisp: A flexible camera image processing framework<\/a>F Heide, M Steinberger, YT Tsai, M Rouf, D Paj\u0105k, D Reddy, O Gallo, J Liu, &#8230;ACM Transactions on Graphics (ToG) 33 (6), 1-13<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=11828260838519832084\">410<\/a><\/td><td>2014<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:LkGwnXOMwfcC\">Image pyramid processor and method of multi-resolution image processing<\/a>Q Zhu, N Garg, T Yun-Ta, K Pulli, A MeixnerUS Patent App. 
13\/764,416<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=1412054455896269607\">6<\/a><\/td><td>2014<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:ULOm3_A8WrAC\">Fast ANN for high-quality collaborative filtering<\/a>YT Tsai, M Steinberger, D Paj\u0105k, K PulliProceedings of High Performance Graphics, 61-70<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=9648103565797775478\">11<\/a><\/td><td>2014<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:IjCSPb-OGe4C\">An energy efficient time-sharing pyramid pipeline for multi-resolution computer vision<\/a>Q Zhu, N Garg, YT Tsai, K Pulli2013 IFIP\/IEEE 21st International Conference on Very Large Scale Integration&nbsp;\u2026<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=9158784895198834743\">6<\/a><\/td><td>2013<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:YsMSGLbcyi4C\">Mobile visual computing in C++ on Android<\/a>YT Tsai, O Gallo, D Pajak, K PulliACM SIGGRAPH 2013 Mobile, 1-1<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=14974115139179388757\">1<\/a><\/td><td>2013<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:UeHWp8X0CEIC\">Urban canvas: Unfreezing street-view imagery with semantically compressed LIDAR pointclouds<\/a>T Korah, YT Tsai2011 10th IEEE International Symposium on Mixed and Augmented Reality, 271-272<\/td><td><a 
href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=7886694024112304814\">4<\/a><\/td><td>2011<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:u5HHmVD_uO8C\">Indirect augmented reality<\/a>J Wither, YT Tsai, R AzumaComputers &amp; Graphics 35 (4), 810-822<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=2781400890826227628\">195<\/a><\/td><td>2011<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;cstart=20&amp;pagesize=80&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:roLk4NBRz8UC\">Electric Agents: combining television and mobile phones for an educational game<\/a>R Ballagas, G Revelle, K Buza, H Horii, K Mori, H Raffle, M Spasojevic, &#8230;Proceedings of the 10th International Conference on Interaction Design and&nbsp;\u2026<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=3566934232001514315\">5<\/a><\/td><td>2011<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;cstart=20&amp;pagesize=80&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:Tyk-4Ss8FVUC\">Mobile augmented reality at the hollywood walk of fame<\/a>T Korah, J Wither, YT Tsai, R Azuma2011 IEEE Virtual Reality Conference, 183-186<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=11017337400417401576\">19<\/a><\/td><td>2011<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;cstart=20&amp;pagesize=80&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:qjMakFHDy7sC\">The westwood experience: connecting story to locations via mixed reality<\/a>J Wither, R Allen, V Samanta, J Hemanus, YT Tsai, R Azuma, W Carter, 
&#8230;2010 IEEE International Symposium on Mixed and Augmented Reality-Arts, Media&nbsp;\u2026<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=7740920588846848890\">58<\/a><\/td><td>2010<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;cstart=20&amp;pagesize=80&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:u-x6o8ySG0sC\">CDIKP: A highly-compact local feature descriptor<\/a>YT Tsai, Q Wang, S You2008 19th International Conference on Pattern Recognition, 1-4<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=4109365307626166686\">18<\/a><\/td><td>2008<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;cstart=20&amp;pagesize=80&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:W7OEmFMy1HYC\">Practical Realtime Fracture Simulation for Games<\/a>YT Tsai, JS Chang, CY Lin<\/td><td><a href=\"https:\/\/scholar.google.com\/scholar?oi=bibs&amp;hl=en&amp;cites=13719256472714564811\">1<\/a><\/td><td>2005<\/td><\/tr><tr><td><a href=\"https:\/\/scholar.google.com\/citations?view_op=view_citation&amp;hl=en&amp;user=7fUcF9UAAAAJ&amp;cstart=20&amp;pagesize=80&amp;sortby=pubdate&amp;citation_for_view=7fUcF9UAAAAJ:kNdYIx-mwKoC\">Cross-Camera Convolutional Color Constancy Supplemental Material<\/a>M Afifi, JT Barron, C LeGendre, YT Tsai, F Bleibel<\/td><\/tr><\/tbody><\/table><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Senior Staff Software Engineer at Tesla \u202aYun-Ta Tsai\u202c &#8211; \u202aGoogle Scholar\u202c Speciality: Full Stack Engineering, ML, Janitor of All. 2023 Sr. 
Staff Engineer, @Tesla_AI 2020 Staff Engineer, @Tesla_AI 2019 Staff SWE, @GoogleAI 2017 Senior SWE, @GoogleAI 2015 SWE, @GoogleAI 2014 SWE, Google X Senior Research Scientist, @nvidia 2011 Research Scientist, @nvidia 2010 Senior Researcher, @nokia&hellip;&nbsp;<a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?page_id=4576\" rel=\"bookmark\"><span class=\"screen-reader-text\">Yun-Ta Tsai<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"googlesitekit_rrm_CAowgMPcCw:productID":"","neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"class_list":["post-4576","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/pages\/4576","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4576"}],"version-history":[{"count":1,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/pages\/4576\/revisions"}],"predecessor-version":[{"id":4577,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/pages\/4576\/revisions\/4577"}],"wp:attachment":[{"href":"https:\/\/172-234-1
97-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4576"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}