{"id":2997,"date":"2025-08-20T05:24:33","date_gmt":"2025-08-20T05:24:33","guid":{"rendered":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=2997"},"modified":"2025-08-20T05:24:35","modified_gmt":"2025-08-20T05:24:35","slug":"meta-open-source","status":"publish","type":"post","link":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=2997","title":{"rendered":"Meta Open Source\u00a0"},"content":{"rendered":"\n<p>All Projects&nbsp;<\/p>\n\n\n\n<p>Artificial Intelligence \/ Machine LearningBlockchainData InfrastructureDeveloper OperationsDevelopment ToolsFrontendMobileOtherSecurity and PrivacyVirtual Reality&nbsp;<\/p>\n\n\n\n<p>BY\u202fTAGS&nbsp;<\/p>\n\n\n\n<p>3d3d-reconstructionabstract-interpretationadstockingaiairflowandroidapp-frameworkappearance-invarianceartificial-intelligenceaudioaudio-processingautogradaws-batchbayesian-logistic-regressionbenchmarkbig-databilevel-optimizationbuckbuck2budget-allocationbuild-toolsbundlercc-langcachecache-enginecampaign-plannercaptioningcdpcliclusterscode-qualitycodegencommand-linecommand-line-toolcompilercomponentscompressioncomputational-geometrycomputer-visionconcurrencycontainerizationcontrolcontrol-flow-analysisconvolutional-neural-networkscost-response-curvecpluspluscppcpucpu-cachecpu-modelcpu-topologycpuidcross-platformcsscudadark-modedatadatabasedatasetdataset-generationdecision-makingdeclarativedeep-learningdeep-reinforcement-learningdetectordevtoolsdialogdifferentiable-optimizationdifferential-privacydiffingdigital-watermarkingdistributed-computingdistributed-trainingdockerdocumentationdocusauruse2eeconometricsembeddedembodied-aiend-to-enderlangevolutionary-algorithmfacebookfacebook-apifastmrifastmri-challengefastmri-datasetfeature-attributionfeature-extractionfeature-importancefind-and-replacefinetuningflake8flake8-pluginflashlightfmmforecastingformatterframeworkfrequency-analysisfront-endfrontendgauss-newtongeospatialgogolanggpugradient-based-optimisationgradientshackhacklanghacktoberfesthadoophashinghatef
ul-memesheaphermeshessianshhvmhivehyperparameter-optimizationi18nimageimage-hashingimage-processingimage-similarityimageryimagesimplicit-differentiationinstagraminstruction-setinternationalizationinterpretabilityinterpretable-aiinterpretable-mlinterpreterioiosjavajavascriptjaxjetsonjitjupyterhubkuberneteslakehouselangchainleaklevenberg-marquardtlibrarylibtorchlinterlinuxllamallama2llmmachine-learningmachine-translationmachine-translation-data-processingmap-buildingmapillarymarketing-apimarketing-mix-modelingmarketing-mix-modellingmarketing-sciencemarlmedical-imagingmemorymetricsmlmlopsmmmmobilemobile-developmentmodel-based-reinforcement-learningmoemrimri-reconstructionmtlmulti-agentmulti-agent-reinforcement-learningmulti-taskingmultimodalmultitask-learningncmecneural-compressionneural-networknlpnlunmtnodejsnonlinear-least-squaresnumpynvdnvidiaoauthoauthenticatorobjective-cocamlopen-sourceopencvoptical-flowopticsoptimizationperceptual-hashingperf-toolsperformancephppipelinesplanningpoint-trackingpplpreprocessingprestopretrained-modelsprivacy-preserving-machine-learningprobabilistic-programming-languagesprogram-analysispythonpytorchqueryrrayraycasterrcwareach-curvesreactreact-nativerebar3-pluginrecommendation-systemrecommender-systemreinforcement-learningrendererrendering-engineresearchresource-controllerresponsiveridge-regressionrlroboticsruntimerustsecurityservingsfmshardingsim2realsimulatorslurmsnapshotspatial-visualizationspeechspeech-recognitionsqlsqlitessdstarlarkstatic-analysisstatic-code-analysisstopnciistorage-enginestreet-imagerystreet-leveltaint-analysistensortensorrttextvqathreatexchangetorchtrack-anythingtranscodingtranslationtype-checktypecheckertypescriptuiuicollectionviewunix-toolsv8videovideo-hashingvideo-similarityviewervirtual-realityvisuzalizationvllmvoicevqavulnerability-managementwatermarkingwav2letterwebwebglwebsitewitwitaixhpzero-configuration&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>React<\/strong>&nbsp;<\/p>\n\n\n\n<p>A 
JavaScript library for building user interfaces.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>React Native<\/strong>&nbsp;<\/p>\n\n\n\n<p>A framework for building native applications using React&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Create React App<\/strong>&nbsp;<\/p>\n\n\n\n<p>Set up a modern web app by running one command.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>PyTorch<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tensors and Dynamic neural networks in Python with strong GPU acceleration&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Docusaurus<\/strong>&nbsp;<\/p>\n\n\n\n<p>Easy to maintain open source documentation websites.&nbsp;<\/p>\n\n\n\n<p>Meta LLaMA&nbsp;<\/p>\n\n\n\n<p><strong>llama<\/strong>&nbsp;<\/p>\n\n\n\n<p>Inference code for Llama models&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>segment-anything<\/strong>&nbsp;<\/p>\n\n\n\n<p>The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Fairseq<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook AI Research Sequence-to-Sequence Toolkit written in Python.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Detectron 2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Faiss<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library for efficient similarity search and clustering of dense vectors.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>rocksdb<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library that provides an embeddable, persistent key-value store for fast storage.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>folly<\/strong>&nbsp;<\/p>\n\n\n\n<p>An open-source C++ library developed and used at 
Facebook.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>zstd<\/strong>&nbsp;<\/p>\n\n\n\n<p>Zstandard &#8211; Fast real-time compression algorithm&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>flow<\/strong>&nbsp;<\/p>\n\n\n\n<p>Adds static typing to JavaScript to improve developer productivity and code quality.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>PyTorch Examples<\/strong>&nbsp;<\/p>\n\n\n\n<p>A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Recoil<\/strong>&nbsp;<\/p>\n\n\n\n<p>Recoil is an experimental state management library for React apps. It provides several capabilities that are difficult to achieve with React alone, while being compatible with the newest features of React.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>audiocraft<\/strong>&nbsp;<\/p>\n\n\n\n<p>Audiocraft is a library for audio processing and generation with deep learning. 
It features the state-of-the-art EnCodec audio compressor \/ tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Relay<\/strong>&nbsp;<\/p>\n\n\n\n<p>The GraphQL client that scales with you.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>hhvm<\/strong>&nbsp;<\/p>\n\n\n\n<p>A virtual machine for executing programs written in Hack.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>prophet<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tool for producing high quality forecasts for time series data that has multiple seasonality with linear or non-linear growth.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Fresco<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Android library for managing images and the memory they use.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Lexical<\/strong>&nbsp;<\/p>\n\n\n\n<p>Lexical is an extensible text editor framework that provides excellent reliability, accessibility and performance.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Yoga<\/strong>&nbsp;<\/p>\n\n\n\n<p>Yoga is an embeddable layout engine targeting web standards.&nbsp;<\/p>\n\n\n\n<p>Presto&nbsp;<\/p>\n\n\n\n<p><strong>presto<\/strong>&nbsp;<\/p>\n\n\n\n<p>The official home of the Presto distributed SQL query engine for big data&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TorchVision<\/strong>&nbsp;<\/p>\n\n\n\n<p>Datasets, Transforms and Models specific to Computer Vision&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>infer<\/strong>&nbsp;<\/p>\n\n\n\n<p>A static analyzer for Java, C, C++, and Objective-C&nbsp;<\/p>\n\n\n\n<p>Meta LLaMA&nbsp;<\/p>\n\n\n\n<p><strong>codellama<\/strong>&nbsp;<\/p>\n\n\n\n<p>Inference code for CodeLlama models&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Flipper<\/strong>&nbsp;<\/p>\n\n\n\n<p>A desktop debugging platform for mobile 
developers.&nbsp;<\/p>\n\n\n\n<p>Instagram&nbsp;<\/p>\n\n\n\n<p><strong>IGListKit<\/strong>&nbsp;<\/p>\n\n\n\n<p>A data-driven UICollectionView framework for building fast and flexible lists.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>watchman<\/strong>&nbsp;<\/p>\n\n\n\n<p>Watches files and records, or triggers actions, when they change.&nbsp;<\/p>\n\n\n\n<p>ReactJS&nbsp;<\/p>\n\n\n\n<p><strong>react.dev<\/strong>&nbsp;<\/p>\n\n\n\n<p>The React documentation website&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Animated Drawings<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code to accompany &#8220;A Method for Animating Children&#8217;s Drawings of the Human Figure&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>seamless_communication<\/strong>&nbsp;<\/p>\n\n\n\n<p>Foundational Models for State-of-the-Art Speech and Text Translation&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>SocketRocket<\/strong>&nbsp;<\/p>\n\n\n\n<p>A conforming Objective-C WebSocket client library.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pifuhd<\/strong>&nbsp;<\/p>\n\n\n\n<p>High-Resolution 3D Human Digitization from A Single Image.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Hermes<\/strong>&nbsp;<\/p>\n\n\n\n<p>A JavaScript engine optimized for running React Native.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>chisel<\/strong>&nbsp;<\/p>\n\n\n\n<p>Chisel is a collection of LLDB commands to assist debugging iOS apps.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>jscodeshift<\/strong>&nbsp;<\/p>\n\n\n\n<p>A JavaScript codemod toolkit.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pytorch3d<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch3D is FAIR&#8217;s library of reusable components for deep learning with 3D data&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>hydra<\/strong>&nbsp;<\/p>\n\n\n\n<p>Hydra is a framework for elegantly configuring complex 
applications&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>proxygen<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of C++ HTTP libraries including an easy to use HTTP server.&nbsp;<\/p>\n\n\n\n<p>Meta LLaMA&nbsp;<\/p>\n\n\n\n<p><strong>Llama Recipes<\/strong>&nbsp;<\/p>\n\n\n\n<p>Scripts for fine-tuning Llama2 with composable FSDP &amp; PEFT methods to cover single\/multi-node GPUs. Supports default &amp; custom datasets for applications such as summarization &amp; question answering. Supports a number of inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Includes demo apps showcasing Llama2 for WhatsApp &amp; Messenger.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>StyleX<\/strong>&nbsp;<\/p>\n\n\n\n<p>StyleX is the styling system for ambitious user interfaces.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ImageBind<\/strong>&nbsp;<\/p>\n\n\n\n<p>ImageBind: One Embedding Space to Bind Them All&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>nougat<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementation of Nougat: Neural Optical Understanding for Academic Documents&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Facebook SDK for iOS<\/strong>&nbsp;<\/p>\n\n\n\n<p>Used to integrate the Facebook Platform with your iOS &amp; tvOS apps.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>tutorials<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch tutorials.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>litho<\/strong>&nbsp;<\/p>\n\n\n\n<p>A declarative framework for building efficient UIs on Android.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dinov2<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch code and models for the DINOv2 self-supervised learning method.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Demucs<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper Hybrid Spectrogram and Waveform Source Separation&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>xFormers<\/strong>&nbsp;<\/p>\n\n\n\n<p>Hackable and optimized Transformers building blocks, supporting a composable construction.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Pyre<\/strong>&nbsp;<\/p>\n\n\n\n<p>Performant type-checking for Python.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mae<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch implementation of MAE: https:\/\/arxiv.org\/abs\/2111.06377&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>metaseq<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repo for external large-scale work&nbsp;<\/p>\n\n\n\n<p>Flashlight&nbsp;<\/p>\n\n\n\n<p><strong>wav2letter<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook AI Research&#8217;s Automatic Speech Recognition Toolkit&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SlowFast<\/strong>&nbsp;<\/p>\n\n\n\n<p>PySlowFast: video understanding codebase from FAIR for reproducing state-of-the-art video models.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-android-sdk<\/strong>&nbsp;<\/p>\n\n\n\n<p>Used to integrate Android apps with Facebook Platform.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Redex<\/strong>&nbsp;<\/p>\n\n\n\n<p>A bytecode optimizer for Android apps&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>componentkit<\/strong>&nbsp;<\/p>\n\n\n\n<p>A React-inspired view framework for iOS.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Sapling SCM<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Scalable, User-Friendly Source Control System.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dino<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO&nbsp;<\/p>\n\n\n\n<p>BoltsFramework&nbsp;<\/p>\n\n\n\n<p><strong>Bolts-ObjC<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bolts is a collection of low-level libraries designed to make developing mobile apps easier.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>MMF<\/strong>&nbsp;<\/p>\n\n\n\n<p>A modular framework for vision &amp; language multimodal research from Facebook AI Research (FAIR)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fishhook<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library that enables dynamically rebinding symbols in Mach-O binaries running on iOS.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>PathPicker<\/strong>&nbsp;<\/p>\n\n\n\n<p>PathPicker accepts a wide range of input &#8212; output from git commands, grep results, searches &#8212; pretty much anything. After parsing the input, PathPicker presents you with a nice UI to select which files you&#8217;re interested in. After that you can open them in your favorite editor or execute arbitrary commands.&nbsp;<\/p>\n\n\n\n<p>Flashlight&nbsp;<\/p>\n\n\n\n<p><strong>Flashlight<\/strong>&nbsp;<\/p>\n\n\n\n<p>A C++ standalone library for machine learning&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Metro<\/strong>&nbsp;<\/p>\n\n\n\n<p>\ud83d\ude87 The JavaScript bundler for React Native&nbsp;<\/p>\n\n\n\n<p>PyTorch Labs&nbsp;<\/p>\n\n\n\n<p><strong>gpt-fast<\/strong>&nbsp;<\/p>\n\n\n\n<p>Simple and efficient pytorch-native transformer text generation in &lt;1000 LOC of python.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AugLy<\/strong>&nbsp;<\/p>\n\n\n\n<p>A data augmentations library for audio, image, text, and video.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Kats<\/strong>&nbsp;<\/p>\n\n\n\n<p>Kats, a kit to analyze time series data, a lightweight, easy-to-use, generalizable, and extendable framework to perform time series analysis, from understanding the key statistics and characteristics, detecting change points and anomalies, to forecasting future trends.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DiT<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official PyTorch Implementation of &#8220;Scalable Diffusion Models with 
Transformers&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DrQA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reading Wikipedia to Answer Open-Domain Questions&nbsp;<\/p>\n\n\n\n<p>Instagram&nbsp;<\/p>\n\n\n\n<p><strong>MonkeyType<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python library that generates static type annotations by collecting runtime types&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>katran<\/strong>&nbsp;<\/p>\n\n\n\n<p>A high performance layer 4 load balancer&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>captum<\/strong>&nbsp;<\/p>\n\n\n\n<p>Model interpretability and understanding for PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>moco<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch implementation of MoCo:&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1911.05722\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/1911.05722<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>idb<\/strong>&nbsp;<\/p>\n\n\n\n<p>idb is a flexible command line interface for automating iOS simulators and devices&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>prop-types<\/strong>&nbsp;<\/p>\n\n\n\n<p>Runtime type checking for React props and similar objects&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>AITemplate<\/strong>&nbsp;<\/p>\n\n\n\n<p>AITemplate is a Python framework which renders neural networks into high-performance CUDA\/HIP C++ code. 
Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Haxl<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Haskell library that simplifies access to remote data, such as databases or web-based services.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>FBRetainCycleDetector<\/strong>&nbsp;<\/p>\n\n\n\n<p>An iOS library that helps detect retain cycles at runtime.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>memlab<\/strong>&nbsp;<\/p>\n\n\n\n<p>A framework for finding JavaScript memory leaks and analyzing heap snapshots&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>duckling<\/strong>&nbsp;<\/p>\n\n\n\n<p>Language, engine, and tooling for expressing, testing, and evaluating composable language rules on input strings.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>serve<\/strong>&nbsp;<\/p>\n\n\n\n<p>Serve, optimize and scale PyTorch models in production&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>regenerator<\/strong>&nbsp;<\/p>\n\n\n\n<p>Source transformer enabling ECMAScript 6 generator functions in JavaScript-of-today.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fbt<\/strong>&nbsp;<\/p>\n\n\n\n<p>A JavaScript Internationalization Framework&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>nevergrad<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python toolbox for performing gradient-free optimization&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dlrm<\/strong>&nbsp;<\/p>\n\n\n\n<p>An implementation of a deep learning recommendation model (DLRM)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ReAgent<\/strong>&nbsp;<\/p>\n\n\n\n<p>A platform for Reasoning systems (Reinforcement Learning, Contextual Bandits, etc.)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LASER<\/strong>&nbsp;<\/p>\n\n\n\n<p>Language-Agnostic SEntence Representations&nbsp;<\/p>\n\n\n\n<p>Meta 
Incubator&nbsp;<\/p>\n\n\n\n<p><strong>cinder<\/strong>&nbsp;<\/p>\n\n\n\n<p>Cinder is Meta&#8217;s internal performance-oriented production version of CPython.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Buck2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Build system, successor to Buck&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>mcrouter<\/strong>&nbsp;<\/p>\n\n\n\n<p>Mcrouter is a memcached protocol router for scaling memcached deployments.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>OpenSfM<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open source Structure-from-Motion pipeline&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pytorchvideo<\/strong>&nbsp;<\/p>\n\n\n\n<p>A deep learning library for video understanding research.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>glow<\/strong>&nbsp;<\/p>\n\n\n\n<p>Compiler for Neural Network hardware accelerators&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Velox<\/strong>&nbsp;<\/p>\n\n\n\n<p>A C++ vectorized database acceleration library aimed at optimizing query engines and data processing systems.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EnCodec<\/strong>&nbsp;<\/p>\n\n\n\n<p>State-of-the-art deep learning based audio codec supporting both mono 24 kHz audio and stereo 48 kHz audio.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>wangle<\/strong>&nbsp;<\/p>\n\n\n\n<p>Wangle is a framework providing a set of common client\/server abstractions for building services in a consistent, modular, and composable way.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>BoTorch<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bayesian optimization in PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>WDT<\/strong>&nbsp;<\/p>\n\n\n\n<p>Warp speed Data Transfer (WDT) is an embeddable library (and command line tool) aiming to transfer data between two systems as fast as possible over multiple TCP paths.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>fairscale<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch extensions for high performance and large scale training.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp Stickers<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains the iOS and Android sample apps and API for creating third party sticker packs for WhatsApp.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>esm<\/strong>&nbsp;<\/p>\n\n\n\n<p>Evolutionary Scale Modeling (esm): Pretrained language models for proteins&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>IGL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Intermediate Graphics Library (IGL) is a cross-platform library that commands the GPU. It provides a single low-level cross-platform interface on top of various graphics APIs (e.g. OpenGL, Metal and Vulkan).&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>React Strict DOM<\/strong>&nbsp;<\/p>\n\n\n\n<p>React Strict DOM (RSD) is a subset of React DOM, imperative DOM, and CSS that supports web and native targets&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ijepa<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture, first outlined in the CVPR paper &#8220;Self-supervised learning from images with a joint-embedding predictive architecture.&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fbthrift<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook&#8217;s branch of Apache Thrift, including a new C++ server.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>mysql-5.6<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook&#8217;s branch of the Oracle MySQL database. 
This includes MyRocks.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>torchaudio<\/strong>&nbsp;<\/p>\n\n\n\n<p>Data manipulation and transformation for audio signal processing, powered by PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Audio2Photoreal<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code and dataset for photorealistic Codec Avatars driven from audio&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Habitat Sim<\/strong>&nbsp;<\/p>\n\n\n\n<p>A flexible, high-performance 3D simulator for Embodied AI research.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TensorRT<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch\/TorchScript\/FX compiler for NVIDIA GPUs using TensorRT&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Pearl<\/strong>&nbsp;<\/p>\n\n\n\n<p>A production-ready Reinforcement Learning AI agent library, brought to you by the Applied Reinforcement Learning team at Meta.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CoTracker<\/strong>&nbsp;<\/p>\n\n\n\n<p>CoTracker is a model for tracking any point (pixel) in a video.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pyrobot<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyRobot: An Open Source Robotics Research Platform&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Ax<\/strong>&nbsp;<\/p>\n\n\n\n<p>Adaptive Experimentation Platform&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>JEPA<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch code and models for V-JEPA self-supervised learning from video.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pycls<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codebase for Image Classification Research, written in PyTorch.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Mask2Former<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Masked-attention Mask Transformer for Universal Image 
Segmentation&#8221;&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>node-wit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Node.js SDK for Wit.ai&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SentEval<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python tool for evaluating the quality of sentence embeddings.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SparseConvNet<\/strong>&nbsp;<\/p>\n\n\n\n<p>Submanifold sparse convolutional networks&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>FBX2glTF<\/strong>&nbsp;<\/p>\n\n\n\n<p>A command-line tool for converting 3D model assets from the FBX file format to the glTF file format.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>video-nonlocal-net<\/strong>&nbsp;<\/p>\n\n\n\n<p>Non-local Neural Networks for Video Classification&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>spectrum<\/strong>&nbsp;<\/p>\n\n\n\n<p>A client-side image transcoding library.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>jsx<\/strong>&nbsp;<\/p>\n\n\n\n<p>The JSX specification is an XML-like syntax extension to ECMAScript.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fbjs<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of utility libraries used by other Meta JS projects.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>React Native Website<\/strong>&nbsp;<\/p>\n\n\n\n<p>The React Native website and docs&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fvcore<\/strong>&nbsp;<\/p>\n\n\n\n<p>Collection of common code that&#8217;s shared among different research projects in the FAIR computer vision team.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Detic<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Detecting Twenty-thousand Classes using Image-level 
Supervision&#8221;.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TorchRL<\/strong>&nbsp;<\/p>\n\n\n\n<p>A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>screenshot-tests-for-android<\/strong>&nbsp;<\/p>\n\n\n\n<p>Generate fast deterministic screenshots during Android instrumentation tests&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Messenger Platform Samples<\/strong>&nbsp;<\/p>\n\n\n\n<p>Messenger Platform samples for sending and receiving messages. Walk through the Get Started guide with this code.&nbsp;<a href=\"https:\/\/developers.facebook.com\/docs\/messenger-platform\/quickstart\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/developers.facebook.com\/docs\/messenger-platform\/quickstart<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>oomd<\/strong>&nbsp;<\/p>\n\n\n\n<p>A userspace out-of-memory killer&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Habitat Lab<\/strong>&nbsp;<\/p>\n\n\n\n<p>A modular high-level library to train embodied AI agents across a variety of tasks and environments.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>TransCoder<\/strong>&nbsp;<\/p>\n\n\n\n<p>Public release of the TransCoder research project&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2006.03511.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/pdf\/2006.03511.pdf<\/a>&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>torchrec<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch domain library for recommendation systems&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>tnt<\/strong>&nbsp;<\/p>\n\n\n\n<p>A lightweight library for PyTorch training tools and utilities&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>poincare-embeddings<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch implementation of the NIPS-17 paper &#8220;Poincar\u00e9 Embeddings for Learning 
Hierarchical Representations&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>fastmod<\/strong>&nbsp;<\/p>\n\n\n\n<p>A fast partial replacement for the codemod tool&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ClassyVision<\/strong>&nbsp;<\/p>\n\n\n\n<p>An end-to-end PyTorch framework for image and video classification&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>opacus<\/strong>&nbsp;<\/p>\n\n\n\n<p>Training PyTorch models with differential privacy&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>consistent_depth<\/strong>&nbsp;<\/p>\n\n\n\n<p>We estimate dense, flicker-free, geometrically consistent depth from monocular video, for example hand-held cell phone video.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Theseus<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library for differentiable nonlinear optimization&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>TextLayoutBuilder<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Android library that allows you to build text layouts more easily.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Bowler<\/strong>&nbsp;<\/p>\n\n\n\n<p>Safe code refactoring for modern Python.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>xhp-lib<\/strong>&nbsp;<\/p>\n\n\n\n<p>Class libraries for XHP. 
XHP is a Hack feature that augments the syntax of the language such that XML document fragments become valid Hack expressions.&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>pywit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python library for Wit.ai&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>mvfst<\/strong>&nbsp;<\/p>\n\n\n\n<p>An implementation of the QUIC transport protocol.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>hub<\/strong>&nbsp;<\/p>\n\n\n\n<p>Submission to&nbsp;<a href=\"https:\/\/pytorch.org\/hub\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/pytorch.org\/hub\/<\/a>&nbsp;<\/p>\n\n\n\n<p>Instagram&nbsp;<\/p>\n\n\n\n<p><strong>LibCST<\/strong>&nbsp;<\/p>\n\n\n\n<p>A concrete syntax tree parser and serializer library for Python that preserves many aspects of Python&#8217;s abstract syntax tree&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CrypTen<\/strong>&nbsp;<\/p>\n\n\n\n<p>A framework for Privacy Preserving Machine Learning&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>TimeSformer<\/strong>&nbsp;<\/p>\n\n\n\n<p>The official PyTorch implementation of our paper &#8220;Is Space-Time Attention All You Need for Video Understanding?&#8221;&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>functorch<\/strong>&nbsp;<\/p>\n\n\n\n<p>functorch provides JAX-like composable function transforms for PyTorch.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>SoLoader<\/strong>&nbsp;<\/p>\n\n\n\n<p>Native code loader for Android&nbsp;<\/p>\n\n\n\n<p>BoltsFramework&nbsp;<\/p>\n\n\n\n<p><strong>Bolts-Swift<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bolts is a collection of low-level libraries designed to make developing mobile apps easier.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>inplace_abn<\/strong>&nbsp;<\/p>\n\n\n\n<p>In-Place Activated BatchNorm for Memory-Optimized Training of DNNs&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>LAMA<\/strong>&nbsp;<\/p>\n\n\n\n<p>LAnguage Model Analysis&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Unifex<\/strong>&nbsp;<\/p>\n\n\n\n<p>Unified Executors&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DomainBed<\/strong>&nbsp;<\/p>\n\n\n\n<p>DomainBed is a suite to test domain generalization algorithms&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Multimodal<\/strong>&nbsp;<\/p>\n\n\n\n<p>TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ConvNeXt-V2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for ConvNeXt V2 model&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-python-business-sdk<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python SDK for Meta Marketing APIs&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fastMRI<\/strong>&nbsp;<\/p>\n\n\n\n<p>A large-scale dataset of both raw MRI measurements and clinical MRI images.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp Proxy Host<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains the WhatsApp proxy implementation for users to host their own proxy infrastructure to connect to WhatsApp for chat (VoIP is not currently supported)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>ThreatExchange<\/strong>&nbsp;<\/p>\n\n\n\n<p>Trust &amp; Safety tools for working together to fight digital harms.&nbsp;<\/p>\n\n\n\n<p>Relay&nbsp;<\/p>\n\n\n\n<p><strong>relay-examples<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of sample Relay applications&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>gloo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Collective communications library with various primitives for multi-machine training.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>fizz<\/strong>&nbsp;<\/p>\n\n\n\n<p>C++14 implementation of the TLS-1.3 
standard&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>FBGEMM<\/strong>&nbsp;<\/p>\n\n\n\n<p>FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) &#8211;&nbsp;<a href=\"https:\/\/code.fb.com\/ml-applications\/fbgemm\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/code.fb.com\/ml-applications\/fbgemm\/<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>CacheLib<\/strong>&nbsp;<\/p>\n\n\n\n<p>Pluggable in-process caching engine to build and scale high-performance services&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>meshrcnn<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for Mesh R-CNN, ICCV 2019&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>submitit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python 3.8+ toolbox for submitting jobs to Slurm&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>svoice<\/strong>&nbsp;<\/p>\n\n\n\n<p>We provide a PyTorch implementation of the paper Voice Separation with an Unknown Number of Multiple Speakers, in which we present a new method for separating a mixed audio sequence where multiple voices speak simultaneously. The new method employs gated neural networks that are trained to separate the voices at multiple processing steps, while keeping the speaker in each output channel fixed. A different model is trained for every number of possible speakers, and the model with the largest number of speakers is employed to select the actual number of speakers in a given sample. 
Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>data<\/strong>&nbsp;<\/p>\n\n\n\n<p>A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>shumai<\/strong>&nbsp;<\/p>\n\n\n\n<p>Fast Differentiable Tensor Library in JavaScript and TypeScript with Bun + Flashlight&nbsp;<\/p>\n\n\n\n<p>PyTorch Labs&nbsp;<\/p>\n\n\n\n<p><strong>segment-anything-fast<\/strong>&nbsp;<\/p>\n\n\n\n<p>A batched offline inference oriented version of segment-anything&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fatal<\/strong>&nbsp;<\/p>\n\n\n\n<p>Fatal is a library for fast prototyping software in modern C++. It provides facilities to enhance the expressive power of C++. The library is heavily based on template meta-programming, while keeping the complexity under-the-hood.&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>flow-for-vscode<\/strong>&nbsp;<\/p>\n\n\n\n<p>Flow for Visual Studio Code&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>transform360<\/strong>&nbsp;<\/p>\n\n\n\n<p>Transform360 is an equirectangular to cubemap transform for 360 video.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>dhcplb<\/strong>&nbsp;<\/p>\n\n\n\n<p>dhcplb is Facebook&#8217;s implementation of a load balancer for DHCP.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>OnlineSchemaChange<\/strong>&nbsp;<\/p>\n\n\n\n<p>A tool for performing online schema changes on MySQL.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VMZ<\/strong>&nbsp;<\/p>\n\n\n\n<p>VMZ: Model Zoo for Video Modeling&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FixRes<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository reproduces the results of the paper: &#8220;Fixing the train-test resolution 
discrepancy&#8221;&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1906.06423\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/1906.06423<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>below<\/strong>&nbsp;<\/p>\n\n\n\n<p>A time-traveling resource monitor for modern Linux systems&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Robyn<\/strong>&nbsp;<\/p>\n\n\n\n<p>Robyn is an experimental, AI\/ML-powered and open-source Marketing Mix Modeling (MMM) package from Meta Marketing Science. Our mission is to democratise modeling knowledge, inspire the industry through innovation, reduce human bias in the modeling process, and build a strong open-source marketing science community.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>mariana-trench<\/strong>&nbsp;<\/p>\n\n\n\n<p>A security-focused static analysis tool for Android and Java applications.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TorchDynamo<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python-level JIT compiler designed to make unmodified PyTorch programs faster.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MetaCLIP<\/strong>&nbsp;<\/p>\n\n\n\n<p>ICLR2024 Spotlight: curation\/training code, metadata, distribution and pre-trained models for MetaCLIP.&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>Wit.ai<\/strong>&nbsp;<\/p>\n\n\n\n<p>Natural Language Interface for apps and devices&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>cpuinfo<\/strong>&nbsp;<\/p>\n\n\n\n<p>CPU INFOrmation library (x86\/x86-64\/ARM\/ARM64, Linux\/Windows\/Android\/macOS\/iOS)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>openr<\/strong>&nbsp;<\/p>\n\n\n\n<p>Distributed platform for building autonomic network functions.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>MIRAI<\/strong>&nbsp;<\/p>\n\n\n\n<p>Rust mid-level IR Abstract Interpreter&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>mobile-vision<\/strong>&nbsp;<\/p>\n\n\n\n<p>Mobile vision models and code&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Replica-Dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Replica Dataset v1 as published in&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/1906.05797\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/1906.05797<\/a>&nbsp;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SpanBERT<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for using and evaluating SpanBERT.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>nle<\/strong>&nbsp;<\/p>\n\n\n\n<p>The NetHack Learning Environment&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mbrl-lib<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library for Model Based RL&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Glean<\/strong>&nbsp;<\/p>\n\n\n\n<p>System for collecting, deriving and working with facts about source code.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CompilerGym<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reinforcement learning environments for compiler and program optimization tasks&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Common Objects In 3D<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tooling for the Common Objects In 3D dataset.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ToMe<\/strong>&nbsp;<\/p>\n\n\n\n<p>A method to increase the speed and lower the memory footprint of existing vision transformers.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CutLER<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Cut and Learn for Unsupervised Object Detection and Instance Segmentation&#8221; and &#8220;VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Local Radiance Fields<\/strong>&nbsp;<\/p>\n\n\n\n<p>An algorithm for reconstructing the radiance 
field of a large-scale scene from a single casually captured video.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-php-business-sdk<\/strong>&nbsp;<\/p>\n\n\n\n<p>PHP SDK for Meta Marketing API&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fboss<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook Open Switching System Software for controlling network switches.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TorchBench<\/strong>&nbsp;<\/p>\n\n\n\n<p>TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>NSVF<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open source code for the paper of Neural Sparse Voxel Fields.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>ktfmt<\/strong>&nbsp;<\/p>\n\n\n\n<p>A program that reformats Kotlin source code to comply with the common community standard for Kotlin code conventions.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fairo<\/strong>&nbsp;<\/p>\n\n\n\n<p>A modular embodied agent architecture and platform for building embodied agents&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>madgrad<\/strong>&nbsp;<\/p>\n\n\n\n<p>MADGRAD Optimization Method&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>d2go<\/strong>&nbsp;<\/p>\n\n\n\n<p>D2Go is a toolkit for efficient deep learning&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AV-HuBERT<\/strong>&nbsp;<\/p>\n\n\n\n<p>A self-supervised learning framework for audio-visual speech&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Battery-Metrics<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library that helps in instrumenting battery related system metrics.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>GENRE<\/strong>&nbsp;<\/p>\n\n\n\n<p>Autoregressive Entity Retrieval&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Winterfell<\/strong>&nbsp;<\/p>\n\n\n\n<p>A STARK prover and 
verifier for arbitrary computations&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CodeGen<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reference implementation of code generation projects from Facebook AI Research. General toolkit to apply machine learning to code, from dataset creation to model training and evaluation. Comes with pretrained models.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>multiface<\/strong>&nbsp;<\/p>\n\n\n\n<p>Hosts the Multiface dataset, which is a multi-view dataset of multiple identities performing a sequence of facial expressions.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>omni3d<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>hermit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Hermit launches Linux x86_64 programs in a special, hermetically isolated sandbox to control their execution. Hermit translates normal, nondeterministic behavior into deterministic, repeatable behavior. 
This can be used for various applications, including replay-debugging, reproducible artifacts, chaos-mode concurrency testing, and bug analysis.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>home-robot<\/strong>&nbsp;<\/p>\n\n\n\n<p>Mobile manipulation research tools for roboticists&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>balance<\/strong>&nbsp;<\/p>\n\n\n\n<p>The balance Python package offers a simple workflow and methods for dealing with biased data samples when seeking to draw inferences from them about a target population of interest.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>hiera<\/strong>&nbsp;<\/p>\n\n\n\n<p>Hiera: A fast, powerful, and simple hierarchical vision transformer.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>pyre2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python wrapper for RE2&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>IT-CPE<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta&#8217;s Client Platform Engineering tools. 
Some of the tools we have written to help manage our fleet of client systems.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>openbmc<\/strong>&nbsp;<\/p>\n\n\n\n<p>OpenBMC is an open software framework to build a complete Linux image for a Board Management Controller (BMC).&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>chef-cookbooks<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open source chef cookbooks.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>SPARTA<\/strong>&nbsp;<\/p>\n\n\n\n<p>SPARTA is a library of software components specially designed for building high-performance static analyzers based on the theory of Abstract Interpretation.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>open_lth<\/strong>&nbsp;<\/p>\n\n\n\n<p>A repository in preparation for open-sourcing lottery ticket hypothesis code.&nbsp;<\/p>\n\n\n\n<p>Instagram&nbsp;<\/p>\n\n\n\n<p><strong>Fixit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Advanced Python linting framework with auto-fixes and hierarchical configuration that makes it easy to write custom in-repo lint rules.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>synsin<\/strong>&nbsp;<\/p>\n\n\n\n<p>View synthesis for the public.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>time<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta&#8217;s Time libraries&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>rebel<\/strong>&nbsp;<\/p>\n\n\n\n<p>An algorithm that generalizes the paradigm of self-play reinforcement learning and search to imperfect-information games.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>Kineto<\/strong>&nbsp;<\/p>\n\n\n\n<p>A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>InterHand2.6M<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official PyTorch implementation of &#8220;InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB 
Image&#8221;, ECCV 2020&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>starlark-rust<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Rust implementation of the Starlark language&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>3DETR &#8211; End-to-end transformer model for 3D object detection<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code &amp; Models for 3DETR &#8211; an End-to-end transformer model for 3D object detection&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TorchArrow<\/strong>&nbsp;<\/p>\n\n\n\n<p>High-performance model preprocessing library on PyTorch&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>ExecuTorch<\/strong>&nbsp;<\/p>\n\n\n\n<p>End-to-end solution for enabling on-device AI across mobile and edge devices for PyTorch models&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TensorDict<\/strong>&nbsp;<\/p>\n\n\n\n<p>TensorDict is a dedicated tensor container for PyTorch.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ov-seg<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fairseq2<\/strong>&nbsp;<\/p>\n\n\n\n<p>FAIR Sequence Modeling Toolkit 2&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PoseDiffusion<\/strong>&nbsp;<\/p>\n\n\n\n<p>[ICCV 2023] PoseDiffusion: Solving Pose Estimation via Diffusion-aided Bundle Adjustment&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>silk<\/strong>&nbsp;<\/p>\n\n\n\n<p>SiLK (Simple Learned Keypoint) is a self-supervised deep learning keypoint model.&nbsp;<\/p>\n\n\n\n<p>Meta LLaMA&nbsp;<\/p>\n\n\n\n<p><strong>PurpleLlama<\/strong>&nbsp;<\/p>\n\n\n\n<p>Set of tools to assess and improve LLM security.&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>flow-bin<\/strong>&nbsp;<\/p>\n\n\n\n<p>Binary wrapper for Flow &#8211; A static type checker for 
JavaScript&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-sdk-for-unity<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Facebook SDK for Unity.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-nodejs-business-sdk<\/strong>&nbsp;<\/p>\n\n\n\n<p>Node.js SDK for Meta Marketing APIs&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>retrie<\/strong>&nbsp;<\/p>\n\n\n\n<p>Retrie is a powerful, easy-to-use codemodding tool for Haskell.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FiD<\/strong>&nbsp;<\/p>\n\n\n\n<p>Fusion-in-Decoder&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Instance-Conditioned GAN<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official repository for the paper &#8220;Instance-Conditioned GAN&#8221; by Arantxa Casanova, Marlene Careil, Jakob Verbeek, Micha\u0142 Dro\u017cd\u017cal, Adriana Romero-Soriano.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>NeuralCompression<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of tools for neural compression enthusiasts.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ppuda<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for Parameter Prediction for Unseen Deep Architectures (NeurIPS 2021)&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Reverie<\/strong>&nbsp;<\/p>\n\n\n\n<p>An ergonomic and safe syscall interception framework for Linux.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>waraft<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Erlang implementation of RAFT from WhatsApp&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>banmo<\/strong>&nbsp;<\/p>\n\n\n\n<p>BANMo: Building Animatable 3D Neural Models from Many Casual Videos&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>PiPPy<\/strong>&nbsp;<\/p>\n\n\n\n<p>Pipeline Parallelism for PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>omnivore<\/strong>&nbsp;<\/p>\n\n\n\n<p>Omnivore: A Single Model for Many Visual 
Modalities&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>textlesslib<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library for Textless Spoken Language Processing&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>vicreg<\/strong>&nbsp;<\/p>\n\n\n\n<p>VICReg official code base&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>eqWAlizer<\/strong>&nbsp;<\/p>\n\n\n\n<p>A type-checker for Erlang&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AudioMAE<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repo hosts the code and models of &#8220;Masked Autoencoders that Listen&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dadaptation<\/strong>&nbsp;<\/p>\n\n\n\n<p>D-Adaptation for SGD, Adam and AdaGrad&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>atlas<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repository for supporting the paper &#8220;Atlas: Few-shot Learning with Retrieval Augmented Language Models&#8221; (https:\/\/arxiv.org\/abs\/2208.03299)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>hyperreel<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Lexical iOS<\/strong>&nbsp;<\/p>\n\n\n\n<p>Lexical iOS is an extensible text editor framework that integrates the APIs and philosophies from Lexical Web with a Swift API built on top of TextKit.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>DotSlash<\/strong>&nbsp;<\/p>\n\n\n\n<p>Simplified executable deployment&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>mapillary-js<\/strong>&nbsp;<\/p>\n\n\n\n<p>Interactive, extendable street imagery map experiences in the browser, powered by WebGL&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-java-business-sdk<\/strong>&nbsp;<\/p>\n\n\n\n<p>Java SDK for Meta Marketing 
APIs&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>FAI-PEP<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook AI Performance Evaluation Platform&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>nvdtools<\/strong>&nbsp;<\/p>\n\n\n\n<p>A set of tools to work with the feeds (vulnerabilities, CPE dictionary etc.) distributed by National Vulnerability Database (NVD)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Rapid Editor<\/strong>&nbsp;<\/p>\n\n\n\n<p>The OpenStreetMap editor driven by open data, AI, and supercharged features&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp-Business-API-Setup-Scripts<\/strong>&nbsp;<\/p>\n\n\n\n<p>The scripts related to setting up WhatsApp business API&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>infima<\/strong>&nbsp;<\/p>\n\n\n\n<p>A UI framework that provides websites with the minimal CSS and JS needed to get started with building a modern responsive beautiful website&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MiniHack<\/strong>&nbsp;<\/p>\n\n\n\n<p>MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>vizseq<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Analysis Toolkit for Natural Language Generation (Translation, Captioning, Summarization, etc.)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>libri-light<\/strong>&nbsp;<\/p>\n\n\n\n<p>dataset for lightly supervised training using the librivox audio book recordings.&nbsp;<a href=\"https:\/\/librivox.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/librivox.org\/<\/a>.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>erlfmt<\/strong>&nbsp;<\/p>\n\n\n\n<p>An automated code formatter for Erlang&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>eft<\/strong>&nbsp;<\/p>\n\n\n\n<p>visualization code for 3D human body annotation by EFT (Exemplar 
Fine-tuning)&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>CG-SQL<\/strong>&nbsp;<\/p>\n\n\n\n<p>CG\/SQL is a compiler that converts a SQL stored-procedure-like language into C for SQLite. SQLite has no stored procedures of its own. CG\/SQL can also generate other useful artifacts for testing and schema maintenance.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Cupcake<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Rust library for lattice-based additive homomorphic encryption.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>speech-resynthesis<\/strong>&nbsp;<\/p>\n\n\n\n<p>An official reimplementation of the method described in the INTERSPEECH 2021 paper &#8211; Speech Resynthesis from Discrete Disentangled Self-Supervised Representations.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>superconsole<\/strong>&nbsp;<\/p>\n\n\n\n<p>The superconsole crate provides a handler and building blocks for powerful, yet minimally intrusive TUIs. It is cross-platform, supporting Windows 7+, Linux, and macOS. 
Rustaceans who want to create non-interactive TUIs can use the component composition building block system to quickly deploy their code.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LaViLa<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Learning Video Representations from Large Language Models&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>OrienterNet<\/strong>&nbsp;<\/p>\n\n\n\n<p>Source code for the paper &#8220;OrienterNet: Visual Localization in 2D Public Maps with Neural Matching&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>eai-vc<\/strong>&nbsp;<\/p>\n\n\n\n<p>The repository for the largest and most comprehensive empirical study of visual foundation models for Embodied AI (EAI).&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>torchtune<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Native-PyTorch Library for LLM Fine-tuning&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hack-codegen<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library to programmatically generate Hack code and write it to signed files&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>audience-network<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open source projects to demonstrate SDK and sample code usages and integration, and to collaborate and support peers in this community.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>builder<\/strong>&nbsp;<\/p>\n\n\n\n<p>Continuous builder and binary build scripts for PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>sound-spaces<\/strong>&nbsp;<\/p>\n\n\n\n<p>A first-of-its-kind acoustic simulation platform for audio-visual embodied AI research. 
It supports training and evaluating multiple tasks and applications.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>ort<\/strong>&nbsp;<\/p>\n\n\n\n<p>Accelerate PyTorch models with ONNX Runtime&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>FlowTorch<\/strong>&nbsp;<\/p>\n\n\n\n<p>This library is intended to be a permanent home for reusable components for deep probabilistic programming. It aims to build and harness a community of users and contributors by focusing initially on complete infrastructure and documentation for how to use and create components.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mvit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code Release for MViTv2 on Image Recognition.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>torchdim<\/strong>&nbsp;<\/p>\n\n\n\n<p>Named tensors with first-class dimensions for PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>muavic<\/strong>&nbsp;<\/p>\n\n\n\n<p>MuAViC: A Multilingual Audio-Visual Corpus for Robust Speech Recognition and Robust Speech-to-Text Translation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dropout<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Dropout Reduces Underfitting&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AudioDec<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Open-source Streaming High-fidelity Neural Audio Codec&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VLPart<\/strong>&nbsp;<\/p>\n\n\n\n<p>[ICCV2023] VLPart: Going Denser with Open-Vocabulary Part Segmentation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DABA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official implementation of &#8220;Decentralization and Acceleration Enables Large-Scale Bundle Adjustment&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>TimelineBuilder<\/strong>&nbsp;<\/p>\n\n\n\n<p>A public release of TimelineBuilder for building personal digital data 
timelines.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>belebele<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repo for the Belebele dataset, a massively multilingual reading comprehension dataset.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>chef-utils<\/strong>&nbsp;<\/p>\n\n\n\n<p>Utilities related to Chef&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EmbodiedQA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Train embodied agents that can answer questions in environments&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>seamseg<\/strong>&nbsp;<\/p>\n\n\n\n<p>Seamless Scene Segmentation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Mephisto<\/strong>&nbsp;<\/p>\n\n\n\n<p>A suite of tools for managing crowdsourcing tasks from inception through data packaging for research use.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>nbref<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codebase for the paper &#8220;N-Bref: A High-fidelity Decompiler Exploiting Programming Structures&#8221;&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>TorchX<\/strong>&nbsp;<\/p>\n\n\n\n<p>TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training\/research and support for E2E production ML pipelines when you&#8217;re ready.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-TheWorldBeyond<\/strong>&nbsp;<\/p>\n\n\n\n<p>Presence Platform showcase demonstrating usage of Scene, Passthrough, Interaction, Voice, and Spatializer. The Oculus SDK and other supporting material are subject to the Oculus proprietary license. 
Multiple licenses may apply.&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>wit-ruby<\/strong>&nbsp;<\/p>\n\n\n\n<p>Ruby library for Wit.ai&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EGG<\/strong>&nbsp;<\/p>\n\n\n\n<p>EGG: Emergence of lanGuage in Games&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dietgpu<\/strong>&nbsp;<\/p>\n\n\n\n<p>GPU implementation of a fast generalized ANS (asymmetric numeral system) entropy encoder and decoder, with extensions for lossless compression of numerical and other data types in HPC\/ML applications.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mega<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sequence modeling with Mega.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>TTPForge<\/strong>&nbsp;<\/p>\n\n\n\n<p>The TTPForge is a Cybersecurity Framework for developing, automating, and executing attacker Tactics, Techniques, and Procedures (TTPs).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>stable_signature<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official implementation of the paper &#8220;The Stable Signature: Rooting Watermarks in Latent Diffusion Models&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>opaque-ke<\/strong>&nbsp;<\/p>\n\n\n\n<p>An implementation of the OPAQUE password-authenticated key exchange protocol&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>IMGUR5K-Handwriting-Dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>IMGUR5K handwriting set. It is a handwritten in-the-wild dataset, which contains challenging real-world handwritten samples from different writers. The dataset is shared as a set of image URLs with annotations. 
This code downloads the images and verifies the hash of each image to avoid data contamination.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VRS<\/strong>&nbsp;<\/p>\n\n\n\n<p>VRS is a file format optimized to record &amp; playback streams of sensor data, such as images, audio samples, and any other discrete sensors (IMU, temperature, etc), stored in per-device streams of timestamped records.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>torchrecipes<\/strong>&nbsp;<\/p>\n\n\n\n<p>Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research techniques without significant engineering overhead. Specifically, recipes aims to provide: consistent access to pre-trained SOTA models ready for production; reference implementations for SOTA research reproducibility; and infrastructure to guarantee correctness, efficiency, and interoperability.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Ego4d<\/strong>&nbsp;<\/p>\n\n\n\n<p>Ego4d dataset repository. Download the dataset, visualize, extract features &amp; example usage of the dataset&nbsp;<\/p>\n\n\n\n<p>Meta Quest&nbsp;<\/p>\n\n\n\n<p><strong>immersive-web-emulator<\/strong>&nbsp;<\/p>\n\n\n\n<p>Browser extension that emulates Meta Quest devices for WebXR development. 
Lead: Felix Zhang (fe1ix@meta.com)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>luckmatters<\/strong>&nbsp;<\/p>\n\n\n\n<p>Understanding Training Dynamics of Deep ReLU Networks&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mae_st<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official Open Source code for &#8220;Masked Autoencoders As Spatiotemporal Learners&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>dns<\/strong>&nbsp;<\/p>\n\n\n\n<p>Collection of Meta&#8217;s DNS Libraries&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Nerf-Det<\/strong>&nbsp;<\/p>\n\n\n\n<p>[ICCV 2023] Code for NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Project Aria Tools<\/strong>&nbsp;<\/p>\n\n\n\n<p>projectaria_tools is a C++\/Python open-source toolkit to interact with Project Aria data&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SONAR<\/strong>&nbsp;<\/p>\n\n\n\n<p>SONAR, a new multilingual and multimodal fixed-size sentence embedding space, with a full suite of speech and text encoders and decoders.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>mapillary_tools<\/strong>&nbsp;<\/p>\n\n\n\n<p>Command line tools for processing and uploading Mapillary imagery&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>kbc<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tools for state-of-the-art Knowledge Base Completion.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>fbjni<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library designed to simplify the usage of the Java Native Interface&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>paco<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts, and attributes prediction models, query evaluation scripts, and visualization 
notebooks.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook360_dep<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook360 Depth Estimation Pipeline &#8211;&nbsp;<a href=\"https:\/\/facebook.github.io\/facebook360_dep\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/facebook.github.io\/facebook360_dep<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dpr-scale<\/strong>&nbsp;<\/p>\n\n\n\n<p>Scalable training for dense retrieval models.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>nocturne<\/strong>&nbsp;<\/p>\n\n\n\n<p>A data-driven, fast driving simulator for multi-agent coordination under partial observability.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DiffQ<\/strong>&nbsp;<\/p>\n\n\n\n<p>DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight or group of weights, in order to achieve a given trade-off between model size and accuracy.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Dora the Explorer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dora is an experiment management framework. It expresses grid searches as pure python files as part of your repo. It identifies experiments with a unique hash signature. Scale up to hundreds of experiments without losing your sanity.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FLSim<\/strong>&nbsp;<\/p>\n\n\n\n<p>Federated Learning Simulator (FLSim) is a flexible, standalone core library that simulates FL settings with a minimal, easy-to-use API. FLSim is domain-agnostic and accommodates many use cases such as vision and text.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>stopes<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library for preparing data for machine translation research (monolingual preprocessing, bitext mining, etc.) 
built by the FAIR NLLB team.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-FirstHand<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus Interaction SDK showcase demonstrating the use of Interaction SDK in Unity with hand tracking. This project contains the interactions used in the &#8220;First Hand&#8221; demo available on App Lab. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>graph-api-webhooks-samples<\/strong>&nbsp;<\/p>\n\n\n\n<p>These are sample clients for Facebook&#8217;s Graph API Webhooks and Instagram&#8217;s Real-time Photo Updates API.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PyTouch<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTouch is a machine learning library for tactile touch sensing.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-Movement<\/strong>&nbsp;<\/p>\n\n\n\n<p>Body, Eye and Face Tracking code sample.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AGRoL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model&#8221;, CVPR 2023&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PUG<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the repository for the Photorealistic Unreal Graphics (PUG) datasets for representation learning.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>tac_plus<\/strong>&nbsp;<\/p>\n\n\n\n<p>A TACACS+ daemon, tested on Linux (CentOS), that runs AAA via the TACACS+ protocol over IPv4 and IPv6.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>pytorch.github.io<\/strong>&nbsp;<\/p>\n\n\n\n<p>The website for PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>senpai<\/strong>&nbsp;<\/p>\n\n\n\n<p>Senpai is an automated memory sizing tool for container applications.&nbsp;<\/p>\n\n\n\n<p>Meta 
Incubator&nbsp;<\/p>\n\n\n\n<p><strong>gazebo<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Rust library containing a collection of small, well-tested primitives.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>akd<\/strong>&nbsp;<\/p>\n\n\n\n<p>An implementation of an auditable key directory&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DistDepth<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository for &#8220;Toward Practical Monocular Indoor Depth Estimation&#8221; (CVPR 2022)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-ruby-business-sdk<\/strong>&nbsp;<\/p>\n\n\n\n<p>Ruby SDK for Meta Marketing API&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>hydra-torch<\/strong>&nbsp;<\/p>\n\n\n\n<p>Configuration classes enabling type-safe PyTorch configuration for Hydra apps&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SEAL_OGB<\/strong>&nbsp;<\/p>\n\n\n\n<p>An open-source implementation of SEAL for link prediction in Open Graph Benchmark (OGB) datasets.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VisualVoice<\/strong>&nbsp;<\/p>\n\n\n\n<p>Audio-Visual Speech Separation with Cross-Modal Consistency&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-SharedSpaces<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus multiplayer showcase demonstrating basic multiplayer functionality in Unity, including Oculus Social APIs, Oculus Platform authentication, Photon Realtime, and Photon Voice with Oculus Spatializer. 
The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>optimizers<\/strong>&nbsp;<\/p>\n\n\n\n<p>For optimization algorithm research and development.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>online-dt<\/strong>&nbsp;<\/p>\n\n\n\n<p>Online Decision Transformer&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Holistic Trace Analysis<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library to analyze PyTorch traces.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Shepherd<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the repo for the paper Shepherd &#8212; A Critic for Language Model Generation&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>cppdocs<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch C++ API Documentation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Ad-Library-API-Script-Repository<\/strong>&nbsp;<\/p>\n\n\n\n<p>GitHub repository of commonly used Python scripts that allow everyone to pull data via the Ad Library API&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Private-ID<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of algorithms that can perform a join between two parties while preserving the privacy of the keys on which the join happens&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>vocoder-benchmark<\/strong>&nbsp;<\/p>\n\n\n\n<p>A repository for benchmarking neural vocoders by their quality and speed.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>isc2021<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the Image Similarity Challenge.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>robust-dynrf<\/strong>&nbsp;<\/p>\n\n\n\n<p>An algorithm for reconstructing the radiance field of a dynamic scene from a casually-captured video.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LLM-QAT<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repo for the paper 
&#8220;LLM-QAT: Data-Free Quantization Aware Training for Large Language Models&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ViewDiff<\/strong>&nbsp;<\/p>\n\n\n\n<p>ViewDiff generates high-quality, multi-view consistent images of a real-world 3D object in authentic surroundings (CVPR 2024).&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Workplace Platform Samples<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample code to enable Workplace customers to make the most of the features of the Workplace Custom Integrations platform.&nbsp;<\/p>\n\n\n\n<p>Meta Sites&nbsp;<\/p>\n\n\n\n<p><strong>Open-Mapping-At-Facebook<\/strong>&nbsp;<\/p>\n\n\n\n<p>Documentation for Open Mapping At Facebook&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>original-coast-clothing<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample Messenger App &#8211; Original Coast Clothing&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>side<\/strong>&nbsp;<\/p>\n\n\n\n<p>The AI Knowledge Editor&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>torcheval<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to facilitate metric computation in distributed training, and tools for PyTorch model evaluations.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FFCV-SSL<\/strong>&nbsp;<\/p>\n\n\n\n<p>FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CT2Hair<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the official implementation of CT2Hair: High-fidelity 3D Hair Modeling Using Computed Tomography.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-Discover<\/strong>&nbsp;<\/p>\n\n\n\n<p>Discover is a showcase of the Meta Quest Mixed Reality APIs. 
This project demonstrates how to use Passthrough, Spatial Anchors, Scene API, Colocation and Shared Anchors. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>audioseal<\/strong>&nbsp;<\/p>\n\n\n\n<p>Localized watermarking for AI-generated speech audio, with SOTA robustness and a very fast detector&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>\u00b5sort<\/strong>&nbsp;<\/p>\n\n\n\n<p>Safe, minimal import sorting for Python projects.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>workshops<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is a repository for all workshop-related materials.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CPA<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Compositional Perturbation Autoencoder (CPA) is a deep generative framework to learn effects of perturbations at the single-cell level. CPA performs OOD predictions of unseen combinations of drugs, learns interpretable embeddings, estimates dose-response curves, and provides uncertainty estimates.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>multipy<\/strong>&nbsp;<\/p>\n\n\n\n<p>torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters in a single C++ process.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp API Examples<\/strong>&nbsp;<\/p>\n\n\n\n<p>Examples of how to use the WhatsApp Cloud API on the WhatsApp Business Platform&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>ExtendedAndroidTools<\/strong>&nbsp;<\/p>\n\n\n\n<p>Extended Android Tools is a place to host and maintain a build environment and makefiles for cross-compiling Linux tools we all love for Android.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>reindeer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reindeer is a tool to transform Rust Cargo dependencies into generated 
Buck build rules&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LabGraph<\/strong>&nbsp;<\/p>\n\n\n\n<p>LabGraph is a Python framework for rapidly prototyping experimental systems for real-time streaming applications. It is particularly well-suited to real-time neuroscience, physiology and psychology experiments.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-HandGameplay<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus showcase of hand-tracking-based interactions in Unreal.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SWAG<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official repository for &#8220;Revisiting Weakly Supervised Pre-Training of Visual Perception Models&#8221;.&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2201.08371\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2201.08371<\/a>.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>dynolog<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dynolog is a telemetry daemon for performance monitoring and tracing. It exports metrics from different components in the system like the Linux kernel, CPU, disks, Intel PT, GPUs, etc. Dynolog also integrates with PyTorch and can trigger traces for distributed training applications.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-Phanto<\/strong>&nbsp;<\/p>\n\n\n\n<p>Phanto is a showcase of the Meta Quest Mixed Reality APIs. This project demonstrates how to use Meshes. 
The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>grocery-delivery<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Grocery Delivery utility for managing cookbook uploads to distributed Chef backends.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>FCR<\/strong>&nbsp;<\/p>\n\n\n\n<p>FBNet-Command-Runner: A Thrift service to run commands on heterogeneous network devices with configurable parameters.&nbsp;<\/p>\n\n\n\n<p>Relay&nbsp;<\/p>\n\n\n\n<p><strong>relay-devtools<\/strong>&nbsp;<\/p>\n\n\n\n<p>Relay Development Tools&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>oculus-linux-kernel<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Linux kernel code for Oculus devices&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>wit-go<\/strong>&nbsp;<\/p>\n\n\n\n<p>Go client for the wit.ai HTTP API&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ScaDiver<\/strong>&nbsp;<\/p>\n\n\n\n<p>Project for the paper &#8220;A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>GeoLift<\/strong>&nbsp;<\/p>\n\n\n\n<p>GeoLift is an end-to-end geo-experimental methodology based on Synthetic Control Methods used to measure the true incremental effect (Lift) of an ad campaign.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Kotlin AST Tools<\/strong>&nbsp;<\/p>\n\n\n\n<p>Utilities and examples used in Meta to simplify migration from Java to Kotlin and maintenance of Kotlin code.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ResponsibleNLP<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository for research in the field of Responsible NLP at Meta.&nbsp;<\/p>\n\n\n\n<p>Meta Quest&nbsp;<\/p>\n\n\n\n<p><strong>ProjectFlowerbed<\/strong>&nbsp;<\/p>\n\n\n\n<p>WebXR immersive gardening experience.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>DynamicStereo<\/strong>&nbsp;<\/p>\n\n\n\n<p>[CVPR 2023] DynamicStereo: Consistent Dynamic Depth from Stereo Videos.&nbsp;<\/p>\n\n\n\n<p>PyTorch Labs&nbsp;<\/p>\n\n\n\n<p><strong>float8_experimental<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains the experimental PyTorch native float8 training UX&nbsp;<\/p>\n\n\n\n<p>PyTorch Labs&nbsp;<\/p>\n\n\n\n<p><strong>ao<\/strong>&nbsp;<\/p>\n\n\n\n<p>torchao: PyTorch Architecture Optimization (AO). A repository to host AO techniques and performant kernels that work with PyTorch.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>taste-tester<\/strong>&nbsp;<\/p>\n\n\n\n<p>Software to manage a chef-zero instance and use it to test changes on production servers.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>TestSlide<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python test framework&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>FioSynth<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tool which enables the creation of synthetic storage workloads and automates the execution and results collection of synthetic storage benchmarks.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>dispenso<\/strong>&nbsp;<\/p>\n\n\n\n<p>The project provides high-performance concurrency, enabling highly parallel computation.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>resctl-demo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Demonstrate and benchmark various features of Linux resource control in a self-contained package.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>hsthrift<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Haskell Thrift Compiler. This is an implementation of the Thrift spec that generates code in Haskell. 
It depends on the fbthrift project for the implementation of the underlying transport.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fbpcf<\/strong>&nbsp;<\/p>\n\n\n\n<p>The private computation framework library allows developers to perform randomized controlled trials without leaking information about who participated or what action an individual took. It uses secure multiparty computation to guarantee this privacy. It is suitable for conducting A\/B testing, or for measuring advertising lift and learning aggregate statistics, without sharing information on the individual level.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Private Computation Solutions<\/strong>&nbsp;<\/p>\n\n\n\n<p>FBPCS (Facebook Private Computation Solutions) leverages secure multi-party computation (MPC) to output aggregated data without making unencrypted, readable data available to the other party or any third parties. Facebook provides impression &amp; opportunity data, and the advertiser provides conversion \/ outcome data.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>tart<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code and model release for the paper &#8220;Task-aware Retrieval with Instructions&#8221; by Asai et al.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Object Introspection<\/strong>&nbsp;<\/p>\n\n\n\n<p>Object Introspection (OI) enables on-demand, hierarchical profiling of objects in arbitrary C\/C++ programs with no recompilation.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>dataclassgenerate<\/strong>&nbsp;<\/p>\n\n\n\n<p>DataClassGenerate (or simply DCG) is a Kotlin compiler plugin that addresses an Android APK size overhead from Kotlin data classes.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>InterWild<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official PyTorch implementation of &#8220;Bringing Inputs to Shared Domains for 3D Interacting Hands Recovery in the Wild&#8221;, CVPR 
2023&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mtm<\/strong>&nbsp;<\/p>\n\n\n\n<p>MTM: Masked Trajectory Models for Prediction, Representation, and Control.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>BenchMARL<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of MARL benchmarks based on TorchRL&nbsp;<\/p>\n\n\n\n<p>BoltsFramework&nbsp;<\/p>\n\n\n\n<p><strong>Bolts-Java<\/strong>&nbsp;<\/p>\n\n\n\n<p>[Archive] Bolts is a collection of low-level libraries designed to make developing mobile apps easier.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>user-documentation<\/strong>&nbsp;<\/p>\n\n\n\n<p>Documentation for those who use HHVM and write Hack code.&nbsp;<\/p>\n\n\n\n<p>CrowdTangle&nbsp;<\/p>\n\n\n\n<p><strong>CrowdTangle Public API Docs<\/strong>&nbsp;<\/p>\n\n\n\n<p>API Documentation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>digit-design<\/strong>&nbsp;<\/p>\n\n\n\n<p>Design files for the DIGIT tactile sensor&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>simmc<\/strong>&nbsp;<\/p>\n\n\n\n<p>With the aim of building next-generation virtual assistants that can handle multimodal inputs and perform multimodal actions, we introduce two new datasets (both in the virtual shopping domain), the annotation schema, the core technical tasks, and the baseline models. 
The code for the baselines and the datasets will be open-sourced.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>iopath<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python library that provides a common I\/O interface across different storage backends.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>sapp<\/strong>&nbsp;<\/p>\n\n\n\n<p>Post Processor for Facebook Static Analysis Tools.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>shaDow_GNN<\/strong>&nbsp;<\/p>\n\n\n\n<p>NeurIPS 2021: Improve the GNN expressivity and scalability by decoupling the depth and receptive field of state-of-the-art GNN architectures&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Code Verify<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code Verify is an open source web browser extension that confirms that your Facebook, Messenger, Instagram, and WhatsApp Web code hasn\u2019t been tampered with or altered, and that the Web experience you\u2019re getting is the same as everyone else\u2019s.&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>torchsnapshot<\/strong>&nbsp;<\/p>\n\n\n\n<p>A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>denoised_mdp<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open source code for the paper &#8220;Denoised MDPs: Learning World Models Better Than the World Itself&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>holo_diffusion<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repository for the CVPR 2023 publication &#8220;HoloDiffusion: Training a 3D diffusion model using 2D Images&#8221;&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>Erlang Language Platform<\/strong>&nbsp;<\/p>\n\n\n\n<p>Erlang Language Platform. 
LSP server and CLI.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-DepthAPI<\/strong>&nbsp;<\/p>\n\n\n\n<p>Examples of using Depth API for real-time, dynamic occlusions&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>minimax<\/strong>&nbsp;<\/p>\n\n\n\n<p>Efficient baselines for autocurricula in JAX.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-CrypticCabinet<\/strong>&nbsp;<\/p>\n\n\n\n<p>Cryptic Cabinet is a short Mixed Reality (MR) experience for Meta Quest headsets. It demonstrates the possibilities of MR through gameplay, narrative, and aesthetics. The app adapts to your room (big or small) to create a unique experience for everyone.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>squangle<\/strong>&nbsp;<\/p>\n\n\n\n<p>SQuangLe is a C++ API for accessing MySQL servers&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>homebrew-fb<\/strong>&nbsp;<\/p>\n\n\n\n<p>OS X Homebrew formulas to install Meta open source software&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>StringPacks<\/strong>&nbsp;<\/p>\n\n\n\n<p>Extracts localized strings from an Android app and stores them in a much more efficient format.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Neural-Code-Search-Evaluation-Dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>An evaluation dataset consisting of natural language query and code snippet pairs&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>jacobian_regularizer<\/strong>&nbsp;<\/p>\n\n\n\n<p>A PyTorch implementation of our Jacobian regularizer to encourage learning representations more robust to input perturbations.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Hanabi_SPARTA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Research code implementing the search AI agent for Hanabi, as well as a web server so people can play against it&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>impact-driven-exploration<\/strong>&nbsp;<\/p>\n\n\n\n<p>impact-driven-exploration&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>UnsupervisedDecomposition<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch original implementation of &#8220;Unsupervised Question Decomposition for Question Answering&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dcem<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Differentiable Cross-Entropy Method&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-SharedSpaces<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus multiplayer showcase demonstrating basic multiplayer functionality in Unreal, including Oculus Platform Social APIs, Photon as the transport layer and UE replication. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-AssetStreaming<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus asset streaming showcase demonstrating how to use asset streaming when navigating an open-world project while using different levels of detail. This sample also demonstrates how to use the Unity Addressables system. 
The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Threat Research<\/strong>&nbsp;<\/p>\n\n\n\n<p>Welcome to the Meta Threat Research Indicator Repository, a dedicated resource for sharing Indicators of Compromise (IOCs) and other threat indicators with the external research community.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VIP: Value-Implicit Pre-Training<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official repository for &#8220;VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>allocative<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library and proc macro to analyze memory usage of data structures in Rust.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>ocamlrep<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sets of libraries and tools to write applications and libraries mixing OCaml and Rust. 
These libraries will help keep your types and data structures synchronized and enable seamless exchange between OCaml and Rust.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>erlfuzz<\/strong>&nbsp;<\/p>\n\n\n\n<p>erlfuzz is a fuzzer for the Erlang ecosystem&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>OmnimatteRF<\/strong>&nbsp;<\/p>\n\n\n\n<p>A matting method that combines dynamic 2D foreground layers and a 3D background model.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Eyeful Tower dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official release of the Eyeful Tower dataset, a high-fidelity multi-view capture of 11 real-world scenes, from the paper \u201cVR-NeRF: High-Fidelity Virtualized Walkable Spaces\u201d (Xu et al., SIGGRAPH Asia 2023).&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>mysql-8.0<\/strong>&nbsp;<\/p>\n\n\n\n<p>MySQL Server, the world&#8217;s most popular open source database, and MySQL Cluster, a real-time, open source transactional database.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>HVVR<\/strong>&nbsp;<\/p>\n\n\n\n<p>Hierarchical Visibility for Virtual Reality, which implements a hybrid CPU\/GPU ray-caster, suited for real-time rendering of effects such as lens distortion.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hsl<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Hack Standard Library&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>rfcs<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch RFCs (experimental)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Kuduraft<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Raft Library in C++ based on the Raft implementation in Apache Kudu&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>RidgeSfM<\/strong>&nbsp;<\/p>\n\n\n\n<p>RidgeSfM: Structure from Motion via robust pairwise matching under depth uncertainty&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>torchdistX<\/strong>&nbsp;<\/p>\n\n\n\n<p>Torch Distributed 
Experimental&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Generic-Grouping<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open-source code for Generic Grouping Network (GGN, CVPR 2022)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Aria data tools<\/strong>&nbsp;<\/p>\n\n\n\n<p>Aria data tools provide the open-source toolkit in C++ and Python to interact with data from Project Aria&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dcd<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementations of robust Dual Curriculum Design (DCD) algorithms for unsupervised environment design.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>diht<\/strong>&nbsp;<\/p>\n\n\n\n<p>Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DCI<\/strong>&nbsp;<\/p>\n\n\n\n<p>Densely Captioned Images (DCI) dataset repository.&nbsp;<\/p>\n\n\n\n<p>Relay&nbsp;<\/p>\n\n\n\n<p><strong>eslint-plugin-relay<\/strong>&nbsp;<\/p>\n\n\n\n<p>A plugin for the code linter ESLint to lint specific details about Relay.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>2.5D-Visual-Sound<\/strong>&nbsp;<\/p>\n\n\n\n<p>2.5D visual sound&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>access<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code to reproduce the experiments from the paper.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>rust-shed<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository containing Rust crates common between other Facebook open source projects (like Mononoke or Eden).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pplbench<\/strong>&nbsp;<\/p>\n\n\n\n<p>Evaluation Framework for Probabilistic Programming Languages&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>csprng<\/strong>&nbsp;<\/p>\n\n\n\n<p>Cryptographically secure pseudorandom number generators for PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>param<\/strong>&nbsp;<\/p>\n\n\n\n<p>PArametrized Recommendation and AI Model benchmark is a repository for the development of numerous uBenchmarks as well as end-to-end nets for evaluation of training and inference platforms.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fbpcp<\/strong>&nbsp;<\/p>\n\n\n\n<p>FBPCP (Facebook Private Computation Platform) is a secure, privacy-safe and scalable architecture to deploy MPC (Multi Party Computation) applications in a distributed way on virtual private clouds. FBPCF (Facebook Private Computation Framework) is for scaling MPC computation up via threading, while FBPCP is for scaling MPC computation out via Private Scaling architecture.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ssl-relation-prediction<\/strong>&nbsp;<\/p>\n\n\n\n<p>Simple yet SoTA Knowledge Graph Embeddings.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>asym-siam<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch implementation of Asymmetric Siamese (<a href=\"https:\/\/arxiv.org\/abs\/2204.00613\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2204.00613<\/a>)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PERFECT<\/strong>&nbsp;<\/p>\n\n\n\n<p>PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dva<\/strong>&nbsp;<\/p>\n\n\n\n<p>Drivable Volumetric Avatars&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>bpfilter<\/strong>&nbsp;<\/p>\n\n\n\n<p>BPF-based packet filtering framework&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>r-mae<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch implementation of R-MAE (https:\/\/arxiv.org\/abs\/2306.05411)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>motif<\/strong>&nbsp;<\/p>\n\n\n\n<p>Intrinsic Motivation from Artificial Intelligence Feedback&nbsp;<\/p>\n\n\n\n<p>Meta 
Incubator&nbsp;<\/p>\n\n\n\n<p><strong>go-qfext<\/strong>&nbsp;<\/p>\n\n\n\n<p>A fast counting quotient filter implementation in Go&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>pytorch_sphinx_theme<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch Sphinx Theme&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>clutrr<\/strong>&nbsp;<\/p>\n\n\n\n<p>Diagnostic benchmark suite to explicitly test logical relational reasoning on natural language&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>rela<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reinforcement Learning Assembly&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>GraphLog<\/strong>&nbsp;<\/p>\n\n\n\n<p>API for accessing the GraphLog dataset&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>mapillary_sls<\/strong>&nbsp;<\/p>\n\n\n\n<p>Mapillary Street-level Sequences Dataset&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>unnas<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for &#8220;Are labels necessary for neural architecture search?&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DeepHandMesh<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official PyTorch implementation of &#8220;DeepHandMesh: A Weakly-Supervised Deep Encoder-Decoder Framework for High-Fidelity Hand Mesh Modeling,&#8221; ECCV 2020&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EasyCom Dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Easy Communications (EasyCom) dataset is a world-first dataset designed to help mitigate the *cocktail party effect* from an augmented-reality (AR)-motivated multi-sensor egocentric world view.&nbsp;<\/p>\n\n\n\n<p>Lofelt&nbsp;<\/p>\n\n\n\n<p><strong>NiceVibrations<\/strong>&nbsp;<\/p>\n\n\n\n<p>\ud83c\udfae \ud83d\ude80 Nice Vibrations and Lofelt Studio SDK source code repository&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>meta-ot<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta Optimal Transport&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>BiT: Robust Binary Multi-distilled Transformer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repo for the paper &#8220;BiT: Robustly Binarized Multi-distilled Transformer&#8221;&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-SharedSpatialAnchors<\/strong>&nbsp;<\/p>\n\n\n\n<p>Unity-SharedSpatialAnchors was built to demonstrate how to use the Shared Spatial Anchors API, available in the Oculus Integration SDK for the Unity game engine. The sample app showcases the creation, saving, loading, and sharing of Spatial Anchors.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-UltimateGloveBall<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta Quest ESport Showcase demonstrating multiplayer functionalities in Unity, including Oculus Social APIs, Avatars, Oculus Platform authentication, Oculus Multiplayer APIs, Photon Realtime, Photon Voice with Oculus Spatializer, and in-app purchases. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>flashy<\/strong>&nbsp;<\/p>\n\n\n\n<p>Framework for writing deep learning training loops. Lightweight, retaining full freedom to design as you see fit. 
It handles checkpointing, logging, distributed training, compatibility with Dora, and more!&nbsp;<\/p>\n\n\n\n<p>Meta Quest&nbsp;<\/p>\n\n\n\n<p><strong>reality-accelerator-toolkit<\/strong>&nbsp;<\/p>\n\n\n\n<p>RATK (Reality Accelerator Toolkit) simplifies the integration of Mixed Reality experiences in WebXR, making it easier for developers to bring their MR ideas to life.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DVSR<\/strong>&nbsp;<\/p>\n\n\n\n<p>DVSR (&#8220;Consistent Direct Time-of-Flight Video Depth Super-Resolution&#8221;), CVPR 2023&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>riemannian-fm<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for &#8220;Riemannian Flow Matching on General Geometries&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>facebook-business-sdk-codegen<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codegen project for our business SDKs&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>power_shell<\/strong>&nbsp;<\/p>\n\n\n\n<p>Erlang shell with advanced features: evaluating non-exported functions and shortcuts for frequently used functions.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dachshund<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dachshund is a graph mining library written in Rust. It provides high-performance data structures for multiple kinds of graphs, from simple undirected graphs to typed hypergraphs. Dachshund also provides algorithms for common graph mining and analysis tasks, ranging from shortest paths to graph spectral analysis.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SimulEval<\/strong>&nbsp;<\/p>\n\n\n\n<p>SimulEval: A General Evaluation Toolkit for Simultaneous Translation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>baspacho<\/strong>&nbsp;<\/p>\n\n\n\n<p>Direct solver for sparse SPD matrices for nonlinear optimization. 
Implements a supernodal Cholesky decomposition algorithm and supports GPUs (CUDA).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>RCDM<\/strong>&nbsp;<\/p>\n\n\n\n<p>Visualizing representations with a diffusion-based conditional generative model.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>e3b<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official repo for the E3B algorithm described in the paper &#8220;Exploration via Elliptical Episodic Bonuses&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Tacquito<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tacquito is an open-source TACACS+ server written in Go that implements RFC 8907&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>modem<\/strong>&nbsp;<\/p>\n\n\n\n<p>MoDem: Accelerating Visual Model-Based Reinforcement Learning with Demonstrations&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CiT<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper titled &#8220;CiT: Curation in Training for Effective Vision-Language Data&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>go-media-webtransport-server<\/strong>&nbsp;<\/p>\n\n\n\n<p>WebTransport media server that enables ultra-low-latency live streaming over QUIC (also VOD and rewind)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>galactic<\/strong>&nbsp;<\/p>\n\n\n\n<p>Galactic: Scaling End-to-End Reinforcement Learning for Rearrangement at 100k Steps-Per-Second&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SoundingBodies<\/strong>&nbsp;<\/p>\n\n\n\n<p>We present a model that can generate accurate 3D sound fields of human bodies from headset microphones and body pose as inputs.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>cruxeval<\/strong>&nbsp;<\/p>\n\n\n\n<p>CRUXEval: Code Reasoning, Understanding, and Execution Evaluation&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>fb303<\/strong>&nbsp;<\/p>\n\n\n\n<p>fb303 is a core set of 
thrift functions that provide a common mechanism for querying stats and other information from a service.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>mapillary_vistas<\/strong>&nbsp;<\/p>\n\n\n\n<p>MVD Evaluation Scripts&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hhast<\/strong>&nbsp;<\/p>\n\n\n\n<p>Mutable AST library for Hack with linting and code migrations&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>dcrpm<\/strong>&nbsp;<\/p>\n\n\n\n<p>A tool to detect and correct common issues around RPM database corruption.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>voxelcnn<\/strong>&nbsp;<\/p>\n\n\n\n<p>VoxelCNN: Order-Aware Generative Modeling Using the 3D-Craft Dataset&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Radlibrary<\/strong>&nbsp;<\/p>\n\n\n\n<p>An R package for accessing the Facebook Ad Library API&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>Wit Unity SDK<\/strong>&nbsp;<\/p>\n\n\n\n<p>Wit-Unity is a Unity C# wrapper around the Wit.ai REST APIs and is a core component of Voice SDK.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Noresqa<\/strong>&nbsp;<\/p>\n\n\n\n<p>This GitHub repo is for the NeurIPS 2021 and Interspeech 2022 papers on Non-Matching Reference based estimation of speech quality assessment.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>data2vec_vision<\/strong>&nbsp;<\/p>\n\n\n\n<p>Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>holotorch<\/strong>&nbsp;<\/p>\n\n\n\n<p>Holotorch is an optimization framework for differentiable wave-propagation written in PyTorch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>vsc2022<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the Video Similarity Challenge.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>Erlang Tree-sitter 
Grammar<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tree-sitter Grammar for Erlang&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>NGDF<\/strong>&nbsp;<\/p>\n\n\n\n<p>Neural Grasp Distance Fields for Robot Manipulation&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>ForgeArmory<\/strong>&nbsp;<\/p>\n\n\n\n<p>ForgeArmory provides TTPs that can be used with the TTPForge (<a href=\"https:\/\/github.com\/facebookincubator\/ttpforge\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/github.com\/facebookincubator\/ttpforge<\/a>).&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>webcodecs-capture-play<\/strong>&nbsp;<\/p>\n\n\n\n<p>Low-latency live streaming experimentation platform in the browser (using WebCodecs)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>IMU2CLIP<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repository for IMU2CLIP (https:\/\/arxiv.org\/pdf\/2210.14395.pdf)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>assemblyhands-toolkit<\/strong>&nbsp;<\/p>\n\n\n\n<p>AssemblyHands Toolkit is a Python package that provides data loading, visualization, and evaluation tools for the AssemblyHands dataset (CVPR 2023).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FMMAX<\/strong>&nbsp;<\/p>\n\n\n\n<p>Fourier modal method with Jax&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Temporally Consistent Online Depth Estimation Using Point-Based Fusion<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for our CVPR 2023 paper on online, temporally consistent depth estimation.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EgoObjects<\/strong>&nbsp;<\/p>\n\n\n\n<p>[ICCV2023] EgoObjects: A Large-Scale Egocentric Dataset for Fine-Grained Object Understanding&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-MoveFast<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus Interaction SDK showcase demonstrating the use of Interaction SDK in Unity with hand tracking for a 
fitness-style app. This project contains the code and assets used in the &#8220;Move Fast&#8221; demo available on App Lab. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-StarterSamples<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository brings multiple samples that can help you explore features and bring them into your project.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>between-meals<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library to provide calculations between Chef diffs.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>xhp-js<\/strong>&nbsp;<\/p>\n\n\n\n<p>Easily create JS controllers for XHP elements, and XHP wrappers for React elements&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>react-native-deprecated-modules<\/strong>&nbsp;<\/p>\n\n\n\n<p>Deprecated modules that were formerly part of React Native.&nbsp;<\/p>\n\n\n\n<p>Flashlight&nbsp;<\/p>\n\n\n\n<p><strong>Flashlight Text<\/strong>&nbsp;<\/p>\n\n\n\n<p>Text utilities, including beam search decoding, tokenizing, and more, built for use in Flashlight.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Secure Key Storage<\/strong>&nbsp;<\/p>\n\n\n\n<p>Secure Key Storage (SKS) is a library for Go that abstracts Security Hardware on laptops.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>antlir<\/strong>&nbsp;<\/p>\n\n\n\n<p>ANother Linux Image buildeR&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>InjKit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Injection Kit. 
It is a Java bytecode processing library for bytecode injection and transformation.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>adversarially-motivated-intrinsic-goals<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains code for the method and experiments of the paper &#8220;Learning with AMIGo: Adversarially Motivated Intrinsic Goals&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>ConversionsAPI-Tag-for-GoogleTagManager<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository will contain the artifacts needed for setting up a Conversions API implementation on Google Tag Manager&#8217;s server side. Please follow the instructions at&nbsp;<a href=\"https:\/\/www.facebook.com\/business\/help\/702509907046774\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.facebook.com\/business\/help\/702509907046774<\/a>&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>test-infra<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository hosts code that supports the testing infrastructure for the main PyTorch repo. For example, this repo hosts the logic to track disabled tests and slow tests, as well as our continuous integration jobs HUD\/dashboard.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ContactOpt<\/strong>&nbsp;<\/p>\n\n\n\n<p>Physical contact plays a critical role in hand and object grasping. 
By estimating desirable contact and then optimizing the hand pose to achieve it, ContactOpt improves the accuracy and realism of estimated hand and object poses.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Anonymous_Credential_Service<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta\u2019s Anonymous Credential Service (ACS) is designed to authenticate users in a \u201cde-identified manner,\u201d permitting access to services without gathering any data that could be used to identify the user.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LAWT<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the papers Linear Algebra with Transformers (TMLR) and What is my Math Transformer Doing? (AI for Maths Workshop, NeurIPS 2022)&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>errpy<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Error-Recovering Parser for Python&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>cop3d<\/strong>&nbsp;<\/p>\n\n\n\n<p>Common Pets in 3D&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>UmeTrack<\/strong>&nbsp;<\/p>\n\n\n\n<p>UmeTrack: Unified multi-view end-to-end hand tracking for VR&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VLaMP<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for \u201cPretrained Language Models as Visual Planners for Human Assistance\u201d&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SemDeDup<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for &#8220;SemDeDup&#8221;, a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically similar, but not exactly identical).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PartDistillation<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for the CVPR&#8217;23 paper titled &#8220;PartDistillation: Learning Parts from Instance Segmentation&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EgoBlur<\/strong>&nbsp;<\/p>\n\n\n\n<p>This 
repository contains a command-line interface (CLI) that can detect and blur out faces and license plates (PII) from images and videos. The CLI takes an image or video file as input, runs an anonymization algorithm on it, and writes the blurred output to a specified path.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EgoVLPv2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone&#8221; [ICCV, 2023]&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>maws<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code and models for the paper &#8220;The effectiveness of MAE pre-pretraining for billion-scale pretraining&#8221;&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2303.13496\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2303.13496<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>text-simplification-evaluation<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reference-less Quality Estimation of Text Simplification Systems&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>obs-plugins<\/strong>&nbsp;<\/p>\n\n\n\n<p>OBS Plugins&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>glTFVariantMeld<\/strong>&nbsp;<\/p>\n\n\n\n<p>An application that accepts files in the glTF format, interprets them as variants of an over-arching whole, and melds them together.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>xR-EgoPose<\/strong>&nbsp;<\/p>\n\n\n\n<p>New egocentric synthetic dataset for egocentric 3D human pose estimation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>digit-interface<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python interface for the DIGIT tactile sensor&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>jps<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for &#8220;Joint Policy Search for Collaborative Multi-agent Incomplete Information Games&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>AEPsych<\/strong>&nbsp;<\/p>\n\n\n\n<p>AEPsych is a tool for adaptive experimentation in psychophysics and perception research, built on top of gpytorch and botorch.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>svg<\/strong>&nbsp;<\/p>\n\n\n\n<p>On the model-based stochastic value gradient for continuous reinforcement learning&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Carbon Explorer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Carbon Explorer helps evaluate solutions for making datacenters operate on renewable energy.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>TCDM<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Learning Dexterous Manipulation from Exemplar Object Trajectories and Pre-Grasps&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bayesian Optimization over Discrete and Mixed Spaces via Probabilistic Reparameterization&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>voicesdk-samples-whisperer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus Voice SDK showcase demonstrating the use of Voice SDK in Unity. This project contains the source code for the &#8220;Whisperer&#8221; demo available on App Lab. 
The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>miniF2F<\/strong>&nbsp;<\/p>\n\n\n\n<p>An updated version of miniF2F with lots of fixes and informal statements \/ solutions.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SyncMatch<\/strong>&nbsp;<\/p>\n\n\n\n<p>Self-supervised Correspondence Estimation via Multiview Registration&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MODeL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Memory Optimizations for Deep Learning (ICML 2023)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Controllable Agent<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Controllable Agent project trains RL agents able to optimize any reward function specified in real time, without any further learning or fine-tuning. Training is reward-free and based on the Forward-Backward representation.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>active_indexing<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official implementation of &#8220;Active Image Indexing&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>genecis<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code and Models for &#8220;GeneCIS: A Benchmark for General Conditional Image Similarity&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>how-to-autorl<\/strong>&nbsp;<\/p>\n\n\n\n<p>Plug-and-play Hydra sweepers for the EA-based multifidelity method DEHB and several population-based training variations, all proven to efficiently tune RL hyperparameters.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-Decommissioned<\/strong>&nbsp;<\/p>\n\n\n\n<p>Unity project for &#8220;Decommissioned: A VR Social Deduction Showcase&#8221; on Meta Quest&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>spot-sim2real<\/strong>&nbsp;<\/p>\n\n\n\n<p>Spot Sim2Real Infrastructure&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>ProcedureVRL<\/strong>&nbsp;<\/p>\n\n\n\n<p>[CVPR 2023] Official code for &#8220;Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>privacy_adversarial_framework<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspired by MITRE ATT&amp;CK\u00ae.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>RLCD<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reproduction of &#8220;RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>doc-storygen-v2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codebase for LLM story generation; updated version of https:\/\/github.com\/yangkevin2\/doc-story-generation&nbsp;<\/p>\n\n\n\n<p>PyTorch Labs&nbsp;<\/p>\n\n\n\n<p><strong>TorchFix<\/strong>&nbsp;<\/p>\n\n\n\n<p>TorchFix &#8211; a linter for PyTorch-using code with autofix support&nbsp;<\/p>\n\n\n\n<p>BoltsFramework&nbsp;<\/p>\n\n\n\n<p><strong>boltsframework.github.io<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>later<\/strong>&nbsp;<\/p>\n\n\n\n<p>A framework for Python asyncio with batteries included for people writing services in Python asyncio&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>openbmc-linux<\/strong>&nbsp;<\/p>\n\n\n\n<p>Linux kernel consumed by OpenBMC&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>go2chef<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Golang tool to bootstrap a system from zero so that it&#8217;s able to run Chef and be managed&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CausalSkillLearning<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codebase for a project about unsupervised skill learning via variational inference and causality.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>BinauralSDM<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains a set of tools to render Binaural Room Impulse Responses (BRIR) using the Spatial Decomposition Method (SDM). The implementation features a series of improvements presented in Amengual et al. 2020, such as quantization of the direction of arrival (DOA) estimates to improve the spectral properties of the rendered BRIRs, or RTMod and RTMod+AP equalization for the late reverberation. The repository also contains the necessary files to 3D print an array holder of optimized topology for the estimation of DOA information.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>ConversionsAPI-Client-for-GoogleTagManager<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository will contain the artifacts needed for setting up a Conversions API implementation on Google Tag Manager&#8217;s server side. Primarily we will be hosting: &#8211; ConversionsAPI (Facebook) Client &#8211; listens for events fired to the GTM Server and maps them to the common GTM schema. 
&#8211; ConversionsAPI (Facebook) Tag &#8211; a server tag that fires events to CAPI. For more details on the design, see https:\/\/fburl.com\/uae68vlr&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>OpenBIC<\/strong>&nbsp;<\/p>\n\n\n\n<p>BICs (Bridge ICs) are standalone devices deployed within a Data Center that enable monitoring a multi-host system using a single BMC device.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>UNLU<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;UnNatural Language Inference&#8221; to appear at ACL 2021 (Long Paper)&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>Mapillary Python SDK<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Python 3 library built on the Mapillary API v4 to facilitate retrieving and working with Mapillary data.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>netconsd<\/strong>&nbsp;<\/p>\n\n\n\n<p>Receive and process logs from the Linux kernel.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>BalancingGroups<\/strong>&nbsp;<\/p>\n\n\n\n<p>Simple data balancing baselines for worst-group-accuracy benchmarks.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Robust Multi-Objective Bayesian Optimization Under Input Noise<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for &#8220;Robust Multi-Objective Bayesian Optimization Under Input Noise&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>concurrentqa<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repo contains data and code for the paper &#8220;Reasoning over Public and Private Data in Retrieval-Based Systems.&#8221;&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-AppSpaceWarp<\/strong>&nbsp;<\/p>\n\n\n\n<p>Application SpaceWarp showcase demonstrating how developers can generate only every other frame for their application, effectively allowing them to render at half framerate. 
This gives developers more time to generate better graphics and simulations in their application.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>buck2-prelude<\/strong>&nbsp;<\/p>\n\n\n\n<p>Prelude for the Buck2 project&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MidasTouch<\/strong>&nbsp;<\/p>\n\n\n\n<p>MidasTouch: Monte-Carlo inference over distributions across sliding touch&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>w2ot<\/strong>&nbsp;<\/p>\n\n\n\n<p>Euclidean Wasserstein-2 optimal transportation&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Leveraging Demonstrations with Latent Space Priors<\/strong>&nbsp;<\/p>\n\n\n\n<p>Source code release for &#8220;Leveraging Demonstrations with Latent Space Priors&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Whac-A-Mole<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AutoCAT<\/strong>&nbsp;<\/p>\n\n\n\n<p>AutoCAT: Reinforcement Learning for Automated Exploration of Cache-Timing Attacks&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>HierVL<\/strong>&nbsp;<\/p>\n\n\n\n<p>[CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DejaVu<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository for the paper Do SSL Models Have D\u00e9j\u00e0 Vu? A Case of Unintended Memorization in Self-supervised Learning&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Commuting Zones<\/strong>&nbsp;<\/p>\n\n\n\n<p>Commuting zones are geographic areas where people live and work and are useful for understanding local economies, as well as how they differ from traditional boundaries. These zones are a set of boundary shapes built using aggregated estimates of home and work locations. 
Data used to build commuting zones is aggregated and de-identified.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Ternary_Binary_Transformer<\/strong>&nbsp;<\/p>\n\n\n\n<p>ACL 2023&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AdaTT<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch open-source library for the paper &#8220;AdaTT: Adaptive Task-to-Task Fusion Network for Multitask Learning in Recommendations&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>clip-rocket<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Improved baselines for vision-language pre-training&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>novel-view-acoustic-synthesis<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the Novel View Acoustic Synthesis paper&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Accelerating Hair Rendering by Learning High-Order Scattered Radiance<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the official implementation of our EGSR 2023 paper, Accelerating Hair Rendering by Learning High-Order Scattered Radiance.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PlatoNeRF<\/strong>&nbsp;<\/p>\n\n\n\n<p>PlatoNeRF: 3D Reconstruction in Plato&#8217;s Cave via Single-View Two-Bounce Lidar&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>crystal-llm<\/strong>&nbsp;<\/p>\n\n\n\n<p>Large language models to generate stable crystals.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hhvm-third-party<\/strong>&nbsp;<\/p>\n\n\n\n<p>All of the dependencies that hhvm needs which don&#8217;t have nice packages&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hhvm.com<\/strong>&nbsp;<\/p>\n\n\n\n<p>The landing page for HHVM and the blog of Hack\/HHVM&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hacktest<\/strong>&nbsp;<\/p>\n\n\n\n<p>A unit testing framework for Hack&nbsp;<\/p>\n\n\n\n<p>Meta 
Incubator&nbsp;<\/p>\n\n\n\n<p><strong>wordpress-messenger-customer-chat-plugin<\/strong>&nbsp;<\/p>\n\n\n\n<p>Messenger Customer Chat Plugin for WordPress&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Facebook-Pixel-for-Wordpress<\/strong>&nbsp;<\/p>\n\n\n\n<p>A plugin that enables advertisers who use WordPress to easily set up the Facebook pixel.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>fb-vscode<\/strong>&nbsp;<\/p>\n\n\n\n<p>Visual Studio Code&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>ristretto255-js<\/strong>&nbsp;<\/p>\n\n\n\n<p>JavaScript implementation of the Ristretto255 group operations, built on top of the popular TweetNaCl.js crypto library&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>online_dialog_eval<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;Learning an Unreferenced Metric for Online Dialogue Evaluation&#8221;, ACL 2020&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>android-voice-demo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Example of how to build a voice-enabled Android app with Wit.ai&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>interaction-exploration<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for &#8220;Learning Affordance Landscapes for Interaction Exploration in 3D Environments&#8221; (NeurIPS 20)&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>webxr-voice-demo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code sample that demonstrates how to integrate a voice user interface with WebXR using Wit.ai, Web-Speech-API, and A-Frame.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Hyperion<\/strong>&nbsp;<\/p>\n\n\n\n<p>This project enables intercepting and virtualizing the browser API&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FAMBench<\/strong>&nbsp;<\/p>\n\n\n\n<p>Benchmarks to capture important workloads.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>BELA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bi-encoder 
entity linking architecture&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>voprf<\/strong>&nbsp;<\/p>\n\n\n\n<p>An implementation of a verifiable oblivious pseudorandom function (RFC 9497)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Entity Factored RL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Source code for the paper &#8220;Policy Architectures for Compositional Generalization in Control&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Semantic Image Translation<\/strong>&nbsp;<\/p>\n\n\n\n<p>Evaluation benchmark for the task of Semantic Image Translation. Contains code to run FlexIT (CVPR 2022)&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>siMMMulator<\/strong>&nbsp;<\/p>\n\n\n\n<p>siMMMulator is an open-source R package that helps users generate simulated data to plug into Marketing Mix Models (MMMs). The package features a variety of functions to help users build a data set from scratch.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Reels Publishing APIs<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains sample apps for developers who are interested in integrating with Reels APIs.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Reliable VQA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementation for the paper &#8220;Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly&#8221; (ECCV 2022: https:\/\/arxiv.org\/abs\/2204.13631).&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp Business Platform On-Premise API Deployment Templates<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repo hosts the cloud templates which enable one-click deployment of the WhatsApp Business Platform On-Premise API on different cloud platforms with stable high messaging throughput.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>E2EVE<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official repository for the paper &#8220;End-to-End Visual Editing with a Generatively 
Pre-Trained Artist&#8221;, which was accepted at ECCV 2022. Here, we consider the targeted image editing problem: blending a region in a source image with a driver image that specifies the desired change. Differently from prior works, we solve this problem by learning a conditional probability distribution of the edits, end-to-end.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ede<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;Uncertainty-Driven Exploration for Generalization in Reinforcement Learning&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MAViL: Masked Audio-Video Learners<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repo hosts the code and model of MAViL.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>AgentHive<\/strong>&nbsp;<\/p>\n\n\n\n<p>AgentHive provides the primitives and helpers for seamless usage of robohive within TorchRL.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>NasRec<\/strong>&nbsp;<\/p>\n\n\n\n<p>NASRec: Weight Sharing Neural Architecture Search for Recommender Systems&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>6DoF-Auraliser<\/strong>&nbsp;<\/p>\n\n\n\n<p>An auralisation system that takes head-worn microphone array recordings as input and renders the audio for binaural playback, taking into account both the listener&#8217;s head orientation and relative position from the recording point.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Memory Snapshot Analyzer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Analysis tooling for memory snapshots of managed code runtimes, specifically Unity Memory Snapshots.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EgoT2<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for the paper &#8220;Egocentric Video Task Translation&#8221; (CVPR 2023 Highlight)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PostText<\/strong>&nbsp;<\/p>\n\n\n\n<p>PostText is a QA system for 
querying your text data. When appropriate structured views are in place, PostText is good at answering queries that require computing aggregates over your data.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ssorl<\/strong>&nbsp;<\/p>\n\n\n\n<p>Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ViP-MAE<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is a PyTorch implementation of the paper &#8220;ViP: A Differentially Private Foundation Model for Computer Vision&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>simple diffusion studio<\/strong>&nbsp;<\/p>\n\n\n\n<p>sdstudio project for image generation and modification&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>replay_dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>Download scripts and tools for the Replay dataset.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CL-LNS<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repo for the ICML&#8217;23 paper &#8220;Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LANCER<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repo for the paper &#8220;Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>three_bricks<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official Implementation of the paper &#8220;Three Bricks to Consolidate Watermarks for LLMs&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Conversation Platform 4 Marketing (CP4M)<\/strong>&nbsp;<\/p>\n\n\n\n<p>CP4M is a conversational marketing platform which enables advertisers to integrate their customer-facing chatbots with FB Messenger\/WhatsApp, in order to meet customers where they are and drive native conversations on the advertiser&#8217;s owned infra.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>lss_eval<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found here&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>High Resolution Canopy Height Maps<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository provides inference code to compute canopy height maps from aerial images, as described in the paper &#8220;Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on Aerial Lidar&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ca_body<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codec Avatar Body&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-Phanto<\/strong>&nbsp;<\/p>\n\n\n\n<p>Phanto is a showcase of the Meta Quest Mixed Reality APIs. This project demonstrates how to use Meshes. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hacklang.org<\/strong>&nbsp;<\/p>\n\n\n\n<p>The content for hacklang.org&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>openbmc-uboot<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tracking DENX&#8217;s Das U-Boot with various trusted computing add-ons.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dynabench<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dynamic Adversarial Benchmarking platform&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Imppres<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository houses the IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of &gt;25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types. 
IMPPRES is an NLI dataset following the format of SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018) and XNLI (Conneau et al., 2018), which was created to determine how well trained NLI models do on recognizing several kinds of presuppositions and scalar implicatures.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp Runtime System (WARTS)<\/strong>&nbsp;<\/p>\n\n\n\n<p>Erlang\/OTP&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>MY_ENUM<\/strong>&nbsp;<\/p>\n\n\n\n<p>Small C++ macro library to add compile-time introspection to C++ enum classes.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Opacus-lab<\/strong>&nbsp;<\/p>\n\n\n\n<p>Research and experimental code related to Opacus, an open-source library for training PyTorch models with Differential Privacy&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>deepmeg-recurrent-encoder<\/strong>&nbsp;<\/p>\n\n\n\n<p>deepmeg recurrent encoder&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>RidgeSketch<\/strong>&nbsp;<\/p>\n\n\n\n<p>A fast sketching-based solver for large-scale ridge regression&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>torch_ucc<\/strong>&nbsp;<\/p>\n\n\n\n<p>PyTorch process group third-party plugin for UCC&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>Mapillary Metropolis SDK<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of code examples to help users get started with the Mapillary Metropolis dataset&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>pyre-action<\/strong>&nbsp;<\/p>\n\n\n\n<p>GitHub Action for Pyre&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>grounding-inductive-biases<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reproduces experiments from &#8220;Grounding inductive biases in natural images: invariance stems from variations in data&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Pysa Action<\/strong>&nbsp;<\/p>\n\n\n\n<p>GitHub Action for 
Pysa&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Spark AR Core Libs<\/strong>&nbsp;<\/p>\n\n\n\n<p>Core libraries that can be used in Spark AR. You can import each library depending on your requirements.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Data Acquisition for ML Benchmark<\/strong>&nbsp;<\/p>\n\n\n\n<p>DAM: Data Acquisition for ML Benchmark, part of the DataPerf benchmark suite:&nbsp;<a href=\"https:\/\/dataperf.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/dataperf.org\/<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PatternedClothing<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dataset of dynamic clothing from pattern registration&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>t2motion<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official implementation of &#8220;Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions&#8221; (ICCV 2023)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>UNIREX<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the official PyTorch repo for &#8220;UNIREX: A Unified Learning Framework for Language Model Rationale Extraction&#8221; (ICML 2022).&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-GraphicsShowcase<\/strong>&nbsp;<\/p>\n\n\n\n<p>Oculus showcase demonstrating how to use Vulkan subpasses to implement performant tonemapping for color grading LUTs, day-night cycle, fade in \/ fade out, and vignette effects.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>A Data Source for Reasoning Embodied Agents<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Data Source for Reasoning Embodied Agents&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>conversational-voice-capture<\/strong>&nbsp;<\/p>\n\n\n\n<p>A general-purpose web app for connecting participants to engage in realtime conversations based on generated prompts.&nbsp;<\/p>\n\n\n\n<p>Meta 
Incubator&nbsp;<\/p>\n\n\n\n<p><strong>rush<\/strong>&nbsp;<\/p>\n\n\n\n<p>RUSH (Reliable &#8211; unreliable &#8211; Streaming Protocol)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>go-fresh<\/strong>&nbsp;<\/p>\n\n\n\n<p>Original code for the paper &#8220;Learning Goal-Conditioned Policies Offline with Self-Supervised Reward Shaping&#8221; by Mezghani et al.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>protoquant<\/strong>&nbsp;<\/p>\n\n\n\n<p>Prototype routines for GPU quantization written using PyTorch.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>gtmf<\/strong>&nbsp;<\/p>\n\n\n\n<p>The implementation of GTMF (Ground Truth Maturity Framework)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>gismo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Companion code for &#8220;Learning to Substitute Ingredients in Recipes&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SparseBO<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code associated with the paper &#8220;Sparse Bayesian Optimization&#8221;&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp-OTP-Sample-App<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample app that integrates with WhatsApp OTP (One-Time Password) copy code and &#8220;one-tap&#8221; autofill features. 
This project shows how to send and receive OTP codes from WhatsApp and best practices around integration.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>qEUBO<\/strong>&nbsp;<\/p>\n\n\n\n<p>Reproducible code for the paper &#8220;qEUBO: A Decision-Theoretic Acquisition Function for Preferential Bayesian Optimization&#8221; from AISTATS 2023&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>Conversion-Leads-Salesforce-APEX<\/strong>&nbsp;<\/p>\n\n\n\n<p>Set up Conversion Leads API integration using Salesforce APEX triggers on Lead objects&nbsp;<\/p>\n\n\n\n<p>Meta Quest&nbsp;<\/p>\n\n\n\n<p><strong>spatial-web-template<\/strong>&nbsp;<\/p>\n\n\n\n<p>WebXR sample \/ template app&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EgoTV<\/strong>&nbsp;<\/p>\n\n\n\n<p>EgoTV: Egocentric Task Verification from Natural Language Task Descriptions&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>selective-vqa_ood<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementation for the CVPR 2023 paper &#8220;Improving Selective Visual Question Answering by Learning from Your Peers&#8221; (<a href=\"https:\/\/arxiv.org\/abs\/2306.08751\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/arxiv.org\/abs\/2306.08751<\/a>)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>bc-irl<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementation of BC-IRL and other IRL baselines&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>UmeTrack_data<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dataset for the paper &#8220;UmeTrack: Unified multi-view end-to-end hand tracking for VR&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>TimelineQA<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the repository for TimelineQA, a benchmark for querying lifelogs.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>coocmap<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;Accessing higher dimensions for unsupervised word translation&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>iclmlp<\/strong>&nbsp;<\/p>\n\n\n\n<p>Experiments for &#8220;A Closer Look at In-Context Learning under Distribution Shifts&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>distributed_traces<\/strong>&nbsp;<\/p>\n\n\n\n<p>Distributed tracing data from Meta&#8217;s microservices architecture.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SIE<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;Self-Supervised Learning of Split Invariant Equivariant Representations&#8221;&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>erldist_filter<\/strong>&nbsp;<\/p>\n\n\n\n<p>erldist_filter NIF for filtering and logging Erlang Dist Protocol messages&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>adaptive_scheduling<\/strong>&nbsp;<\/p>\n\n\n\n<p>Experimental scripts for researching data adaptive learning rate scheduling.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>gen_dgrl<\/strong>&nbsp;<\/p>\n\n\n\n<p>DGRL Official Code&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>buck2-change-detector<\/strong>&nbsp;<\/p>\n\n\n\n<p>Given a Buck2-built project and a set of changes (e.g. from source control), compute the targets that may have changed. 
Sometimes known as a target determinator, this is useful for optimizing a CI system.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MoCA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Motion-conditional image animation for video editing&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>rlfh-gen-div<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is code for most of the experiments in the paper &#8220;Understanding the Effects of RLHF on LLM Generalisation and Diversity&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ExPLORe<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is code to accompany the paper &#8220;Accelerating Exploration with Unlabeled Prior Data&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SOC-matching<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for &#8220;Stochastic Optimal Control Matching&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>moq-encoder-player<\/strong>&nbsp;<\/p>\n\n\n\n<p>This project provides a minimal implementation (inside the browser) of a live video and audio encoder and a video \/ audio player creating and consuming an IETF MOQ stream. 
The goal is to provide minimal live platform components that help with testing IETF MOQ interop&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Qinco<\/strong>&nbsp;<\/p>\n\n\n\n<p>Residual Quantization with Implicit Neural Codebooks&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MMCSG<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains the baseline system for the CHiME-8 MMCSG challenge, focusing on transcribing both sides of a conversation where one participant is wearing smart glasses equipped with a microphone array and camera.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>fbrell<\/strong>&nbsp;<\/p>\n\n\n\n<p>An interactive environment to explore the Facebook JavaScript SDK.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>xhp-js-example<\/strong>&nbsp;<\/p>\n\n\n\n<p>Example project for XHP-JS&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>ocaml-dtoa<\/strong>&nbsp;<\/p>\n\n\n\n<p>Double-to-ASCII OCaml implementation&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>ocaml-wtf8<\/strong>&nbsp;<\/p>\n\n\n\n<p>An OCaml library that implements a WTF-8 encoder and decoder.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>Augmentor<\/strong>&nbsp;<\/p>\n\n\n\n<p>Image augmentation library in Python for machine learning.&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>ocaml-vlq<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library to encode\/decode numbers in OCaml.&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hack-http-request-response-interfaces<\/strong>&nbsp;<\/p>\n\n\n\n<p>Defines common cross-framework interfaces to represent HTTP requests and responses&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>ocaml-ppx_gen_rec<\/strong>&nbsp;<\/p>\n\n\n\n<p>OCaml preprocessor that generates a recursive module&nbsp;<\/p>\n\n\n\n<p>HHVM&nbsp;<\/p>\n\n\n\n<p><strong>hack-mode<\/strong>&nbsp;<\/p>\n\n\n\n<p>An Emacs major mode for editing Hack code&nbsp;<\/p>\n\n\n\n<p>Meta 
Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Portal-Kernel<\/strong>&nbsp;<\/p>\n\n\n\n<p>Kernel Code for Portal.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fbooja<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implements the bootstrap and jackknife methods of&nbsp;<a href=\"http:\/\/tygert.com\/jdssv.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">http:\/\/tygert.com\/jdssv.pdf<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>parcus<\/strong>&nbsp;<\/p>\n\n\n\n<p>Soft Pattern Matching for Interpretable Low-Resource Classification&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>jupyterhub_fb_authenticator<\/strong>&nbsp;<\/p>\n\n\n\n<p>JupyterHub Facebook Authenticator is a Facebook OAuth authenticator built on top of OAuthenticator.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>meta-fbvuln<\/strong>&nbsp;<\/p>\n\n\n\n<p>OpenEmbedded meta-layer that allows producing a vulnerability manifest alongside a Yocto build. The produced manifest is suitable for ongoing vulnerability scanning of fielded software.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>svinfer<\/strong>&nbsp;<\/p>\n\n\n\n<p>The FORT team has released differentially private Condor data to external researchers in H1 2020. It is known that analyzing DP data via classic statistical models will lead to biased conclusions. 
We are releasing at-scale statistical models which provide valid inference from DP data.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>clara<\/strong>&nbsp;<\/p>\n\n\n\n<p>CLARA: Confidence of Labels and Raters&nbsp;<\/p>\n\n\n\n<p>PyTorch&nbsp;<\/p>\n\n\n\n<p><strong>pytorch-integration-testing<\/strong>&nbsp;<\/p>\n\n\n\n<p>Testing downstream libraries using PyTorch release candidates&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>FIND<\/strong>&nbsp;<\/p>\n\n\n\n<p>FIND: search For Inductive biases IN Deep seq2seq&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>mc<\/strong>&nbsp;<\/p>\n\n\n\n<p>I have implemented, in both Python and R, the methods of two papers for estimating subgroup means under misclassification, which are useful for data analyses. T. K. MAK, W. K. LI, A new method for estimating subgroup means under misclassification, Biometrika, Volume 75, Issue 1, March 1988, Pages 105\u2013111, https:\/\/doi.org\/10.1093\/biomet\/75.1.105. Sel\u00e9n, Jan. \u201cAdjusting for Errors in Classification and Measurement in the Analysis of Partly and Purely Categorical Data.\u201d Journal of the American Statistical Association, vol. 81, no. 393, 1986, pp. 75\u201381. JSTOR,&nbsp;<a href=\"http:\/\/www.jstor.org\/stable\/2287969\" target=\"_blank\" rel=\"noreferrer noopener\">www.jstor.org\/stable\/2287969<\/a>. Accessed 10 Aug. 2020.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>cp_reference<\/strong>&nbsp;<\/p>\n\n\n\n<p>We are building a third-party commerce platform partner reference implementation.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Eigen-FBPlugins<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is a collection of plugins extending Eigen arrays\/matrices with a main focus on using them for computer vision. 
In particular, this project should provide support for multichannel arrays (missing in vanilla Eigen) and seamless integration between Eigen types and OpenCV functions.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>dnf-plugin-cow<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code to enable Copy on Write features being upstreamed in rpm and librepo&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fbcdgraph<\/strong>&nbsp;<\/p>\n\n\n\n<p>The code reproduces the figures and statistics in the paper, &#8220;Cumulative deviation of a subpopulation from the full population,&#8221; by Mark Tygert. The repo also provides the LaTeX and BibTeX sources required for replicating the paper.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Planning Excellence Toolkit<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample code that allows fetching different Reach and Frequency curves from the Facebook Marketing API.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ucc<\/strong>&nbsp;<\/p>\n\n\n\n<p>Unified Communication Collectives Library&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Lead Ads Webhook Sample at AWS<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample code to accelerate clients&#8217; adoption of Lead Ads and Conversion Leads products, by integrating with our advertising platform. 
Developers can also take this as a reference when building integrations, without having to start from scratch.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>edencommon<\/strong>&nbsp;<\/p>\n\n\n\n<p>Shared library for the Watchman and Eden projects.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>automerge<\/strong>&nbsp;<\/p>\n\n\n\n<p>A JSON-like data structure (a CRDT) that can be modified concurrently by different users, and merged again automatically.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>kperf<\/strong>&nbsp;<\/p>\n\n\n\n<p>TCP and TLS performance testing tool.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>image-goal-nav-dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>Dataset for Image-Goal Navigation in Habitat&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>preference-exploration<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for replicating experiments from the paper &#8220;Preference Exploration for Efficient Bayesian Optimization with Multiple Outcomes&#8221;, published in AISTATS 2022.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dp_compression<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repo for the paper &#8220;Privacy-aware Compression for Federated Data Analysis&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>python-typing-tutorial<\/strong>&nbsp;<\/p>\n\n\n\n<p>A sample Python project to demonstrate basic type checking concepts and best practices.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>bounding_data_reconstruction<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repo for the paper &#8220;Bounding Training Data Reconstruction in Private (Deep) Learning&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Share to Reels | Android Sample App<\/strong>&nbsp;<\/p>\n\n\n\n<p>Android sample app with Share to Reels&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Share to Reels | iOS Sample 
App<\/strong>&nbsp;<\/p>\n\n\n\n<p>iOS sample app with Share to Reels&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ecevecce<\/strong>&nbsp;<\/p>\n\n\n\n<p>Metrics of calibration for probabilistic predictions&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VeriPy<\/strong>&nbsp;<\/p>\n\n\n\n<p>VeriPy is a Python-based Verilog\/SystemVerilog automation tool. It automates ports\/wire\/reg\/logic declarations, sub-module instantiation, embedded Python, IO spec flow, memory wrapper generation, various code generation plugins, and configurable code generation. The VeriPy wiki has more detailed documentation.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>PyTorch Quantization Workshop<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for a workshop hosted at the MLOps World Summit &#8217;22&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Implicit-HRTF<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains the dataset used to train the neural network model described in the paper &#8220;Implicit HRTF Modeling Using Temporal Convolutional Networks&#8221;, ICASSP 2021.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Computer Vision Bias Amplification<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bias amplification and overconfidence in computer vision.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>nccl<\/strong>&nbsp;<\/p>\n\n\n\n<p>Optimized primitives for collective multi-GPU communication&nbsp;<\/p>\n\n\n\n<p>Flashlight&nbsp;<\/p>\n\n\n\n<p><strong>Flashlight Sequence<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sequence algorithms for use in Flashlight.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>lr-with-bins<\/strong>&nbsp;<\/p>\n\n\n\n<p>An experimental first-stage model used for quick and efficient inference on part of the data.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>flowframe<\/strong>&nbsp;<\/p>\n\n\n\n<p>FlowFrame enforces information flow control policies in Scala Spark 
applications.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>spotlight_hardware_designs<\/strong>&nbsp;<\/p>\n\n\n\n<p>Partial set of hardware designs for a Meta-developed brain computer interface (BCI) research prototype system (Spotlight).&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>caldera-security-tests<\/strong>&nbsp;<\/p>\n\n\n\n<p>This project was created to provide examples of a TTP Runner and Security Regression Pipeline using vulnerabilities discovered in MITRE CALDERA by Jayson Grace from Meta&#8217;s Purple Team.&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>VoiceSDK Unreal Engine<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Voice SDK for the Unreal engine.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dmae_st<\/strong>&nbsp;<\/p>\n\n\n\n<p>Directed masked autoencoders&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>Insights Dashboard<\/strong>&nbsp;<\/p>\n\n\n\n<p>Insights Dashboard is a sample app that integrates with Meta&#8217;s Insights APIs&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>EgocentricUserAdaptation<\/strong>&nbsp;<\/p>\n\n\n\n<p>In this codebase we establish a benchmark for egocentric user adaptation based on Ego4d. First, we start from a population model which has data from many users to learn user-agnostic representations. As the user gains more experience over its lifetime, we aim to tailor the general model into user-specific expert models.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PSSL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Experiments with research ideas in self-supervised learning.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>griphin<\/strong>&nbsp;<\/p>\n\n\n\n<p>High-performance PyTorch engine for distributed graph storage, processing, and analytics&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>go-belt<\/strong>&nbsp;<\/p>\n\n\n\n<p>It is an implementation-agnostic Go(lang) package to generalize 
observability tooling (logger, metrics, tracer and so on) and provide the ability to use any of these tools with a standard context. Essentially it is an attempt to standardize the observability API in Go.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SplitAO3D<\/strong>&nbsp;<\/p>\n\n\n\n<p>Project code for our HPG 2023 paper &#8220;PSAO: Point-Based Split Rendering for Ambient Occlusion&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>mit-dl-workshop<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Sustainable Concrete<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository to track versions of concrete strength data, models, and active learning proposals.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>synlm<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the paper &#8220;Privately generating tabular data using language models&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>latticer<\/strong>&nbsp;<\/p>\n\n\n\n<p>The code reproduces the figures in the paper, &#8220;An efficient algorithm for integer lattice reduction.&#8221; The repo also provides the LaTeX and BibTeX sources required for replicating the paper.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pyvrs<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python interface for https:\/\/github.com\/facebookresearch\/vrs.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DST-EGQA<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for the work &#8220;Continual Dialogue State Tracking via Example-Guided Question Answering&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>MapWithAI Tasking Manager<\/strong>&nbsp;<\/p>\n\n\n\n<p>A fork of the HOTOSM Tasking Manager (tasks.hotosm.org) to deploy and test experimental integrations and features.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>SafeC<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library containing safer alternatives\/wrappers for insecure C APIs.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>color difference metric for binocular rivalry, delta E bino<\/strong>&nbsp;<\/p>\n\n\n\n<p>Calculates a color difference metric for binocular rivalry, delta E bino&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>LINKExtension<\/strong>&nbsp;<\/p>\n\n\n\n<p>Replication code for &#8220;Node Attribute Prediction on Multilayer Networks with Weighted and Directed Edges&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>scrut<\/strong>&nbsp;<\/p>\n\n\n\n<p>Scrut is a testing toolkit for CLI applications. A tool to scrutinize terminal programs without fuss.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>video_rep_learning<\/strong>&nbsp;<\/p>\n\n\n\n<p>SSL Video Representation Learning project&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>react-router-3<\/strong>&nbsp;<\/p>\n\n\n\n<p>Fork of https:\/\/www.npmjs.com\/package\/react-router v3.0.5&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CompGenRep_MLRC2022<\/strong>&nbsp;<\/p>\n\n\n\n<p>The repository for the project A Replication Study of Compositional Generalization Works on Semantic Parsing, accepted into MLRC 2022.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Ego4D Goal-Step<\/strong>&nbsp;<\/p>\n\n\n\n<p>Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>tce<\/strong>&nbsp;<\/p>\n\n\n\n<p>Library for the Test-based Calibration Error (TCE) metric to quantify the degree of classifier calibration.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>OpenSFEDS<\/strong>&nbsp;<\/p>\n\n\n\n<p>OpenSFEDS, a near-eye gaze estimation dataset containing approximately 2M synthetic camera-photosensor image pairs sampled at 500 Hz under varied appearance and camera position.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>hashtag-generation<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repo is the official implementation of the paper titled &#8220;Generating Hashtags for Short-form Videos with Guided Signals&#8221; (ACL 2023).&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>wireguard_py<\/strong>&nbsp;<\/p>\n\n\n\n<p>Cython library for WireGuard&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-MetaXRAudioSDK<\/strong>&nbsp;<\/p>\n\n\n\n<p>This project contains Unity samples for Meta&#8217;s Presence Platform Audio SDK. The Oculus SDK and other supporting material is subject to the Oculus SDK License. Multiple licenses may apply.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dual-system-for-visual-language-reasoning<\/strong>&nbsp;<\/p>\n\n\n\n<p>GitHub repo for Peifeng&#8217;s internship project&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-HandSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>This Unreal sample illustrates how hand tracking is implemented. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>RLCompOpt<\/strong>&nbsp;<\/p>\n\n\n\n<p>Learning Compiler Pass Orders using Coreset and Normalized Value Prediction. 
(ICML 2023)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SurCo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repo for ICML&#8217;23 paper SurCo: Learning Linear Surrogates for Combinatorial Nonlinear Optimization Problems&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>NORD<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code and pre-trained model release for the ICASSP 2023 Paper &#8220;NORD: NON-MATCHING REFERENCE BASED RELATIVE DEPTH ESTIMATION FROM BINAURAL AUDIO&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>sado<\/strong>&nbsp;<\/p>\n\n\n\n<p>A macOS signed-app shim for running daemons with reliable capabilities.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-HandPoseShowcase<\/strong>&nbsp;<\/p>\n\n\n\n<p>This Unreal sample demonstrates how to implement hand gesture recognition. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-Locomotion<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample demonstrates a range of different locomotion and interaction types for VR. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-OcclusionSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample demonstrates object occlusion functionality. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-PassthroughSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>This Unreal sample demonstrates the Passthrough functionality provided by Meta Quest headsets. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-Scene<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample demonstrates the usage of Scene in Unreal Engine. 
Though its main feature is related to Scene, it also uses a quad-layer to render an in-game menu with higher quality, works with audio, and provides a Passthrough stereo layer. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-SpatialAnchorsSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>Spatial anchors are world-locked frames of reference you can use as origin points to position content that can persist across sessions. This sample demonstrates spatial anchor functionality in Unreal Engine. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-SharedAnchorsSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Shared Spatial Anchors sample project demonstrates the capabilities of the Spatial Anchors system. This sample project also provides example code for handling and maintaining spatial anchors, which may be reused in users&#8217; own projects. The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p><strong>ocaml-scripts<\/strong>&nbsp;<\/p>\n\n\n\n<p>Set of scripts to help build OCaml projects using buck2. 
&#8211;&nbsp;WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp-Android-OTP-SDK<\/strong>&nbsp;<\/p>\n\n\n\n<p>WhatsApp Android OTP SDK helps you integrate with the one-time password solution provided by WhatsApp. It provides handy functions that simplify the integration work.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>pal<\/strong>&nbsp;<\/p>\n\n\n\n<p>Active self-supervised learning with pairwise comparison &#8211;&nbsp;Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>rn-chrome-devtools-frontend<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Chrome DevTools UI, customized for React Native (experimental) &#8211;&nbsp;Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>infoshap<\/strong>&nbsp;<\/p>\n\n\n\n<p>Codebase for information theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper: Watson, D., O&#8217;Hara, J., Tax, N., Mudd, R., &amp; Guy, I. (2023). Explaining predictive uncertainty with information theoretic Shapley values. NeurIPS 2023. 
&#8211;&nbsp;WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp-Flows-Tools<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tools and examples to help you create WhatsApp Flows&nbsp;<a href=\"https:\/\/developers.facebook.com\/docs\/whatsapp\/flows\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/developers.facebook.com\/docs\/whatsapp\/flows<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>language-model-plasticity<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official code for the paper Improving Language Plasticity via Pretraining with Active Forgetting, NeurIPS 2023&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>verde<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code accompanying NeurIPS &#8217;23 accepted paper &#8220;SALSA VERDE: a machine learning attack on Learning with Errors with sparse small secrets&#8221;&nbsp;<\/p>\n\n\n\n<p><strong>erlang-argo<\/strong>&nbsp;<\/p>\n\n\n\n<p>Erlang implementation of Argo for GraphQL&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>worldsense<\/strong>&nbsp;<\/p>\n\n\n\n<p>WorldSense benchmark for grounded reasoning in language models&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>hgap<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code release for H-GAP: Humanoid Control with a Generalist Planner&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>chat2map-official<\/strong>&nbsp;<\/p>\n\n\n\n<p>[CVPR 2023] Code and datasets for &#8216;Chat2Map: Efficient Scene Mapping from Multi-Ego Conversations&#8217;&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-LocalMultiplayerMR<\/strong>&nbsp;<\/p>\n\n\n\n<p>Mixed Reality samples that demonstrate how to enable multiplayer&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>moq-go-server<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is an experimental relay (optimized for low latency media transfers) that implements the IETF MOQ protocol&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>taskmet<\/strong>&nbsp;<\/p>\n\n\n\n<p>TaskMet: Task-driven Metric Learning for Model Learning&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>htstep<\/strong>&nbsp;<\/p>\n\n\n\n<p>HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-Movement<\/strong>&nbsp;<\/p>\n\n\n\n<p>Body, Eye and Face Tracking code sample.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>emphassess<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository presents an evaluation framework for speech-to-speech (S2S) models, following the methodology described in the EmphAssess paper (de Seyssel et al., 2023).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SIEVE<\/strong>&nbsp;<\/p>\n\n\n\n<p>SIEVE: Multimodal Dataset Pruning using Image-Captioning Models&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SceNeRFlow<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the official code release for &#8220;SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes&#8221; (3DV 2024), a NeRF-based method to reconstruct a general, non-rigid scene in a time-consistent manner, including large motion.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>GuidedDistillation<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official implementation of the paper &#8220;Guided Distillation for Semi-Supervised Instance Segmentation&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ego-env<\/strong>&nbsp;<\/p>\n\n\n\n<p>Human-centric environment representations from egocentric video&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>llama-hd-dataset<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is a balanced dataset for English homograph disambiguation (HD), generated with Meta&#8217;s Llama 2-Chat 70B model.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>UnifiedUncertaintyCalibration<\/strong>&nbsp;<\/p>\n\n\n\n<p>UnifiedUncertaintyCalibration&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ICRM<\/strong>&nbsp;<\/p>\n\n\n\n<p>Context is Environment&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>strobelight<\/strong>&nbsp;<\/p>\n\n\n\n<p>Meta&#8217;s fleetwide profiler framework&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>TMPI<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for ICCV 2023 paper on tiled multiplane images for single-view 3D photography.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>taser-tgnn<\/strong>&nbsp;<\/p>\n\n\n\n<p>Adaptive neighbor sampling for temporal GNN&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SSLForPDEs<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code for Lie Symmetries SSL paper&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MultiModalExplorer<\/strong>&nbsp;<\/p>\n\n\n\n<p>Visualize multimodal embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. 
Then be able to scroll, zoom, and search (via any modality: text, image, audio, etc.).&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>VidOSC<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code and data release for the paper &#8220;Learning Object State Changes in Videos: An Open-World Perspective&#8221; (CVPR 2024)&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Bounded Gaussian Mechanism<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repo for the project &#8220;Privacy Amplification for the Gaussian Mechanism via Bounded Support&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>ToolVerifier<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains the ToolSelect dataset which was used to fine-tune Llama-2 70B for tool selection.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>real-acoustic-fields<\/strong>&nbsp;<\/p>\n\n\n\n<p>Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>GCD<\/strong>&nbsp;<\/p>\n\n\n\n<p>Computing the greatest common divisor with transformers, source code for the paper https:\/\/arxiv.org\/abs\/2308.15594&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>language-capirca<\/strong>&nbsp;<\/p>\n\n\n\n<p>Adds syntax highlighting for Capirca filetypes in Atom. Capirca is an open source standard for writing vendor-neutral firewall policies as originally released by Google:&nbsp;<a href=\"https:\/\/github.com\/google\/capirca\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/github.com\/google\/capirca<\/a>&nbsp;<\/p>\n\n\n\n<p>Flow&nbsp;<\/p>\n\n\n\n<p><strong>ocaml-sourcemaps<\/strong>&nbsp;<\/p>\n\n\n\n<p>An OCaml implementation of JavaScript sourcemaps&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>django-s3direct<\/strong>&nbsp;<\/p>\n\n\n\n<p>Add direct uploads to S3 with a progress bar to file input fields. 
Perfect for Heroku.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>r8<\/strong>&nbsp;<\/p>\n\n\n\n<p>Customized version of the D8 dexer and R8 shrinker&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>elasticsearch-py<\/strong>&nbsp;<\/p>\n\n\n\n<p>Official Python low-level client for Elasticsearch.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>opengv<\/strong>&nbsp;<\/p>\n\n\n\n<p>OpenGV is a collection of computer vision methods for solving geometric vision problems.&nbsp;<\/p>\n\n\n\n<p>Mapillary&nbsp;<\/p>\n\n\n\n<p><strong>s2geometry<\/strong>&nbsp;<\/p>\n\n\n\n<p>Computational geometry and spatial indexing on the sphere&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>fbc_owrt_feed<\/strong>&nbsp;<\/p>\n\n\n\n<p>Facebook Connectivity OpenWrt Feed. Package feed for OpenWrt router OS by Facebook Connectivity programme.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>transparent-automated-ads-demo-app<\/strong>&nbsp;<\/p>\n\n\n\n<p>A demo web app to simulate the marketplace integration for Transparent Automated Ads&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Clockwork<\/strong>&nbsp;<\/p>\n\n\n\n<p>Recurring batch data pipelines are a staple of the modern enterprise-scale data warehouse. As a data warehouse scales to support more products and services, a growing number of interdependent pipelines running at various cadences can give rise to periodic resource bottlenecks for a cluster. This resource contention results in pipelines starting at unpredictable times each day and consequently variable landing times for the data artifacts they produce. The variability gets compounded by the dependency structure of the workload, and the resulting unpredictability can disrupt the project workstreams which consume this data. 
We present Clockwork, a delay-based global scheduling framework for data pipelines which improves landing time stability by spreading out tasks throughout the day. Whereas most scheduling algorithms optimize for makespan or average job completion times, Clockwork\u2019s execution plan optimizes for stability in task completion times while also targeting predefined pipeline landing times. Online experiments comparing this novel scheduling algorithm and a previously proposed greedy procrastinating heuristic show that tasks complete almost an hour earlier on average, while exhibiting lower landing time variance and producing significantly less competition for resources in a target cluster.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>fbcddisgraph<\/strong>&nbsp;<\/p>\n\n\n\n<p>The codes reproduce the figures and statistics in the paper, &#8220;A graphical method of cumulative differences between two subpopulations,&#8221; by Mark Tygert. The repo also provides the LaTeX and BibTeX sources required for replicating the paper.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>electron<\/strong>&nbsp;<\/p>\n\n\n\n<p>:electron: Build cross-platform desktop apps with JavaScript, HTML, and CSS&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>SubMix<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code repo for the paper: &#8220;SubMix: Practical Private Prediction for Large-scale Language Models&#8221;&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Gaussian Sparse Histogram Mechanism<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository contains code needed to estimate delta using the Gaussian Sparse Histogram Mechanism and to reproduce the figures contained in the paper, &#8216;Exact Privacy Analysis of the Gaussian Sparse Histogram Mechanism&#8217;&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>fnob<\/strong>&nbsp;<\/p>\n\n\n\n<p>Open source Fnob (Command-line Dynamic Random Generator) package.&nbsp;<\/p>\n\n\n\n<p>Meta 
Samples&nbsp;<\/p>\n\n\n\n<p><strong>Portal SDK Samples<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample implementations demonstrating how to integrate various Portal SDK feature modules into an Android App for Portal&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>essim<\/strong>&nbsp;<\/p>\n\n\n\n<p>eSSIM is an evolution of SSIM which improves correlation with subjective scores and reduces complexity by employing box filters, striding, and Minkowski pooling&nbsp;<\/p>\n\n\n\n<p>Meta Sites&nbsp;<\/p>\n\n\n\n<p><strong>DRC Infra Survey 2020-21<\/strong>&nbsp;<\/p>\n\n\n\n<p>Democratic Republic of the Congo Infrastructure Survey conducted in 2020-21&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>Voice SDK Toolkit &#8211; Unity<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of assets, prefabs, and scripts to help build better Voice SDK experiences on Unity.&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>Wit Unreal<\/strong>&nbsp;<\/p>\n\n\n\n<p>The Wit.ai SDK for Unreal.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>Rmarkdown kernel for jupyter<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is a very simple fork of https:\/\/github.com\/IRkernel\/IRkernel to provide an rmarkdown (rather than R) jupyter kernel.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>xr-ed-bv0-sample-code<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sample code for Bundle V0&nbsp;<\/p>\n\n\n\n<p>Wit.ai&nbsp;<\/p>\n\n\n\n<p><strong>voicesdk-unreal-samples<\/strong>&nbsp;<\/p>\n\n\n\n<p>Voice SDK Unreal Samples&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>glTF<\/strong>&nbsp;<\/p>\n\n\n\n<p>glTF \u2013 Runtime 3D Asset Delivery&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>cutlass-fork<\/strong>&nbsp;<\/p>\n\n\n\n<p>A Meta fork of the NV CUTLASS repo.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>Data-driven Infer<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository includes the implementation for the paper 
&#8220;Learning to Boost Disjunctive Static Bug-Finders.&#8221; It improves the efficiency of the disjunctive static analyzer, Infer, by a machine-learning technique.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>kotlin-compile-testing<\/strong>&nbsp;<\/p>\n\n\n\n<p>A library for testing Kotlin and Java annotation processors, compiler plugins and code generation&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>IoT Session Manager<\/strong>&nbsp;<\/p>\n\n\n\n<p>The IoT Session Manager is an application that provides a reliable and scalable device to device messaging network with simple setup. The system is deployable locally on a machine or on a cloud server and provides extensible methods for device authentication and control automation.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>dragonclaw_library<\/strong>&nbsp;<\/p>\n\n\n\n<p>Code that goes with the Dragonclaw paper&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>metapaired<\/strong>&nbsp;<\/p>\n\n\n\n<p>The codes reproduce the figures and statistics in the paper, &#8220;Cumulative differences between paired samples.&#8221; The repo also provides the LaTeX and BibTeX sources required for replicating the paper.&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>csproj_benchmark<\/strong>&nbsp;<\/p>\n\n\n\n<p>A tool that generates Visual Studio C# projects and measures IDE startup performance&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>CARL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Classical Action Recognition Library&nbsp;<\/p>\n\n\n\n<p>Meta&nbsp;<\/p>\n\n\n\n<p><strong>SPIRV-Headers<\/strong>&nbsp;<\/p>\n\n\n\n<p>SPIRV-Headers&nbsp;<\/p>\n\n\n\n<p>Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-LayerSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample provides blueprints illustrating the use of VR compositor layers to display UMG UI widgets. 
The Oculus SDK and other supporting material is subject to the Oculus proprietary license.&nbsp;<\/p>\n\n\n\n<p><strong>SPIRV-Registry<\/strong>&nbsp;<\/p>\n\n\n\n<p>SPIR-V specs &#8211;&nbsp;Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-CoLocationHS<\/strong>&nbsp;<\/p>\n\n\n\n<p>This Unreal sample illustrates how to implement colocation. The Oculus SDK and other supporting material is subject to the Oculus proprietary license. &#8211;&nbsp;Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-HandsTrainSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>This Unreal sample demonstrates how to implement hand tracking to enable users to interact with objects in the physics system. In this sample, users can use their hands to interact with near or distant objects and perform actions that affect the scene. The Oculus SDK and other supporting material is subject to the Oculus proprietary license. &#8211;&nbsp;Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-OculusInputTest<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample demonstrates how to manage Oculus device input. The Oculus SDK and other supporting material is subject to the Oculus proprietary license. &#8211;&nbsp;Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-RenderingTechniques<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample demonstrates a range of Oculus rendering techniques in an Unreal game. The Oculus SDK and other supporting material is subject to the Oculus proprietary license. &#8211;&nbsp;Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unreal-SharedSceneSample<\/strong>&nbsp;<\/p>\n\n\n\n<p>This sample app demonstrates a shared scene experience based on Shared Spatial Anchors, Scene, and Passthrough in Unreal. The Oculus SDK and other supporting material is subject to the Oculus proprietary license. 
&#8211;&nbsp;Oculus Samples&nbsp;<\/p>\n\n\n\n<p><strong>Unity-PerformanceSettings<\/strong>&nbsp;<\/p>\n\n\n\n<p>Performance settings testing ground, showing how to use game-engine-agnostic performance controls for Meta Quest devices such as dual-core boost and dynamic resolution.&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>Kernel Patches Daemon<\/strong>&nbsp;<\/p>\n\n\n\n<p>Sync Patchwork series with GitHub pull requests&nbsp;<\/p>\n\n\n\n<p><strong>bookworm<\/strong>&nbsp;<\/p>\n\n\n\n<p>Bookworm is a program that gleans context from a Chef\/Ruby codebase leveraging RuboCop AST pattern matching&nbsp;<\/p>\n\n\n\n<p><strong>gfxreconstruct<\/strong>&nbsp;<\/p>\n\n\n\n<p>Graphics API Capture and Replay Tools for Reconstructing Graphics Application Behavior&nbsp;<\/p>\n\n\n\n<p><strong>FairNotification<\/strong>&nbsp;<\/p>\n\n\n\n<p>Implementation of algorithms for auction-based notifications appearing in the paper &#8220;Fair Notification Optimization: An Auction Approach&#8221;.&nbsp;<\/p>\n\n\n\n<p><strong>ndctl<\/strong>&nbsp;<\/p>\n\n\n\n<p>A &#8220;device memory&#8221; enabling project encompassing tools and libraries for CXL, NVDIMMs, DAX, memory tiering and other platform memory device topics. 
Forked from&nbsp;<a href=\"https:\/\/github.com\/pmem\/ndctl\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/github.com\/pmem\/ndctl<\/a>&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>WMG<\/strong>&nbsp;<\/p>\n\n\n\n<p>Serverless Application Model (SAM) project to deploy a CloudFormation infrastructure capable of measuring signals in WhatsApp conversations.&nbsp;<\/p>\n\n\n\n<p>Meta Samples&nbsp;<\/p>\n\n\n\n<p><strong>WhatsApp Marketing Messages &#8211; ROI Measurement Playbook<\/strong>&nbsp;<\/p>\n\n\n\n<p>This project provides best practices on how to measure WhatsApp marketing messages effectively, understand how many incremental conversions businesses can get, as well as how to compare the effectiveness of marketing messages against other external platforms such as email\/SMS.&nbsp;<\/p>\n\n\n\n<p><strong>glTF-Sample-Assets<\/strong>&nbsp;<\/p>\n\n\n\n<p>To store all models and other assets related to glTF&nbsp;<\/p>\n\n\n\n<p><strong>glXF<\/strong>&nbsp;<\/p>\n\n\n\n<p>glTF Experience Format (glXF)&nbsp;<\/p>\n\n\n\n<p>Meta Experimental&nbsp;<\/p>\n\n\n\n<p><strong>cxx<\/strong>&nbsp;<\/p>\n\n\n\n<p>Safe interop between Rust and C++&nbsp;<\/p>\n\n\n\n<p><strong>React Native Chrome DevTools Protocol implementation tracker<\/strong>&nbsp;<\/p>\n\n\n\n<p>Tracks the status of the CDP implementation in Hermes and React Native projects.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>CompositionalityValidity<\/strong>&nbsp;<\/p>\n\n\n\n<p>This is the repository for the project The Validity of Evaluation Results: Assessing Concurrence Across Compositionality Benchmarks, published at CoNLL 2023.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>PerSE<\/strong>&nbsp;<\/p>\n\n\n\n<p>Personalized Story Evaluation Model&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>rates<\/strong>&nbsp;<\/p>\n\n\n\n<p>Statistical convergence rates&nbsp;<\/p>\n\n\n\n<p>Oculus 
Samples&nbsp;<\/p>\n\n\n\n<p><strong>haptics-studio-examples<\/strong>&nbsp;<\/p>\n\n\n\n<p>Similar to https:\/\/github.com\/oculus-samples\/Unity-FirstHand&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>DIG-In<\/strong>&nbsp;<\/p>\n\n\n\n<p>This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions.&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>GST_colorbook<\/strong>&nbsp;<\/p>\n\n\n\n<p>Geometry style transfer colorbook&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>multicontrievers-analysis<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository for paper&nbsp;<a href=\"https:\/\/openreview.net\/forum?id=JWHf7lg8zM&amp;noteId=tEU5I2TzCc\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/openreview.net\/forum?id=JWHf7lg8zM&amp;noteId=tEU5I2TzCc<\/a>&nbsp;<\/p>\n\n\n\n<p><strong>dotslash-publish-release<\/strong>&nbsp;<\/p>\n\n\n\n<p>Create DotSlash files for GitHub releases&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>glemos<\/strong>&nbsp;<\/p>\n\n\n\n<p>This repository provides code and data for the paper &#8220;GLEMOS: Benchmark for Instantaneous Graph Learning Model Selection&#8221; (NeurIPS 2023).&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>HWbits lib<\/strong>&nbsp;<\/p>\n\n\n\n<p>Abstraction of hardware register-level protocols with Pythonic semantic names.&nbsp;<\/p>\n\n\n\n<p><strong>Pysa<\/strong>&nbsp;<\/p>\n\n\n\n<p>Python Static Analyzer&nbsp;<\/p>\n\n\n\n<p><strong>install-dotslash<\/strong>&nbsp;<\/p>\n\n\n\n<p>A simple GitHub Action to install a precompiled dotslash binary&nbsp;<\/p>\n\n\n\n<p>Meta Research&nbsp;<\/p>\n\n\n\n<p><strong>MD-CRL<\/strong>&nbsp;<\/p>\n\n\n\n<p>Experiments to reproduce the results in &#8220;Multi-Domain Causal Representation Learning via Weak Distributional Invariances&#8221;.&nbsp;<\/p>\n\n\n\n<p>Meta 
Research&nbsp;<\/p>\n\n\n\n<p><strong>quantized_identifiability<\/strong>&nbsp;<\/p>\n\n\n\n<p>Repository for the code associated with the paper &#8220;On the Identifiability of Quantized Factors&#8221; by Vit\u00f3ria Barin-Pacela, Kartik Ahuja, Simon Lacoste-Julien, Pascal Vincent, Conference on Causal Learning Reasoning (CLeaR), 2024.&nbsp;<\/p>\n\n\n\n<p>WhatsApp&nbsp;<\/p>\n\n\n\n<p><strong>erlang_taint_analysis<\/strong>&nbsp;<\/p>\n\n\n\n<p>A dynamic taint analysis for Erlang&nbsp;<\/p>\n\n\n\n<p>Meta Incubator&nbsp;<\/p>\n\n\n\n<p><strong>haberdashery<\/strong>&nbsp;<\/p>\n\n\n\n<p>A collection of high-performance crypto implementations.&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code><\/code><\/pre>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>All Projects&nbsp; Artificial Intelligence \/ Machine LearningBlockchainData InfrastructureDeveloper OperationsDevelopment ToolsFrontendMobileOtherSecurity and PrivacyVirtual Reality&nbsp; BY\u202fTAGS&nbsp; 3d3d-reconstructionabstract-interpretationadstockingaiairflowandroidapp-frameworkappearance-invarianceartificial-intelligenceaudioaudio-processingautogradaws-batchbayesian-logistic-regressionbenchmarkbig-databilevel-optimizationbuckbuck2budget-allocationbuild-toolsbundlercc-langcachecache-enginecampaign-plannercaptioningcdpcliclusterscode-qualitycodegencommand-linecommand-line-toolcompilercomponentscompressioncomputational-geometrycomputer-visionconcurrencycontainerizationcontrolcontrol-flow-analysisconvolutional-neural-networkscost-response-curvecpluspluscppcpucpu-cachecpu-modelcpu-topologycpuidcross-platformcsscudadark-modedatadatabasedatasetdataset-generationdecision-makingdeclarativedeep-learningdeep-reinforcement-learningdetectordevtoolsdialogdifferentiable-optimizationdifferential-privacydiffingdigital-watermarkingdistributed-computingdistributed-trainingdockerdocumentationdocusauruse2eeconometricsembeddedembodied-aiend-to-enderlangevolutionary-algorithmfacebookfacebook-apifastmrifast
mri-challengefastmri-datasetfeature-attributionfeature-extractionfeature-importancefind-and-replacefinetuningflake8flake8-pluginflashlightfmmforecastingformatterframeworkfrequency-analysisfront-endfrontendgauss-newtongeospatialgogolanggpugradient-based-optimisationgradientshackhacklanghacktoberfesthadoophashinghateful-memesheaphermeshessianshhvmhivehyperparameter-optimizationi18nimageimage-hashingimage-processingimage-similarityimageryimagesimplicit-differentiationinstagraminstruction-setinternationalizationinterpretabilityinterpretable-aiinterpretable-mlinterpreterioiosjavajavascriptjaxjetsonjitjupyterhubkuberneteslakehouselangchainleaklevenberg-marquardtlibrarylibtorchlinterlinuxllamallama2llmmachine-learningmachine-translationmachine-translation-data-processingmap-buildingmapillarymarketing-apimarketing-mix-modelingmarketing-mix-modellingmarketing-sciencemarlmedical-imagingmemorymetricsmlmlopsmmmmobilemobile-developmentmodel-based-reinforcement-learningmoemrimri-reconstructionmtlmulti-agentmulti-agent-reinforcement-learningmulti-taskingmultimodalmultitask-learningncmecneural-compressionneural-networknlpnlunmtnodejsnonlinear-least-squaresnumpynvdnvidiaoauthoauthenticatorobjective-cocamlopen-sourceopencvoptical-flowopticsoptimizationperceptual-hashingperf-toolsperformancephppipelinesplanningpoint-trackingpplpreprocessingprestopretrained-modelsprivacy-preserving-machine-learningprobabilistic-programming-languagesprogram-analysispythonpytorchqueryrrayraycasterrcwareach-curvesreactreact-nativerebar3-pluginrecommendation-systemrecommender-systemreinforcement-learningrendererrendering-engineresearchresource-controllerresponsiveridge-regressionrlroboticsruntimerustsecurityservingsfmshardingsim2realsimulatorslurmsnapshotspatial-visualizationspeechspeech-recognitionsqlsqlitessdstarlarkstatic-analysisstatic-code-analysisstopnciistorage-enginestreet-imagerystreet-leveltaint-analysistensortensorrttextvqathreatexchangetorchtrack-anythingtranscodingtranslationtype-checktypechec
kertypescriptuiuicollectionviewunix-toolsv8videovideo-hashingvideo-similarityviewervirtual-realityvisuzalizationvllmvoicevqavulnerability-managementwatermarkingwav2letterwebwebglwebsitewitwitaixhpzero-configuration&nbsp; Meta&nbsp; React&nbsp; A JavaScript library for building user interfaces.&nbsp; Meta&nbsp; React Native&nbsp; A framework for building native applications using React&nbsp; Meta&nbsp; Create React App&nbsp; Set up a modern web app by running one command.&nbsp; PyTorch&nbsp; PyTorch&nbsp; Tensors and Dynamic neural&hellip;&nbsp;<a href=\"https:\/\/172-234-197-23.ip.linodeusercontent.com\/?p=2997\" rel=\"bookmark\"><span class=\"screen-reader-text\">Meta Open Source\u00a0<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"categories":[7],"tags":[],"class_list":["post-2997","post","type-post","status-publish","format-standard","hentry","category-the-truben-show"],"_links":{"self":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/2997","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2997"}],"version-history":[{
"count":1,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/2997\/revisions"}],"predecessor-version":[{"id":2998,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=\/wp\/v2\/posts\/2997\/revisions\/2998"}],"wp:attachment":[{"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2997"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2997"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/172-234-197-23.ip.linodeusercontent.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2997"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}