Reality Labs Research

Location: Redmond, WA
Time Frame: Sept 23, 2021 - Sept 23, 2023

C#, C++, Python, Rust

Git, GitHub, Mercurial, Perforce Helix

Rider, PyCharm, CLion, Visual Studio Code, Vim

Unity, Unreal Engine, Figma

The majority of my work for RL-R is currently under NDA. As more publications, patents, and hardware are released, I'll add them to this portfolio. For the time being, the following is all I can share.

Text-Input-Based Research

The majority of the work I contributed to at RL-R involved devices briefly discussed and showcased in this article. While I didn’t work on the hardware itself, I used the data it produced to build a text-input system within the Unity3D engine. The system was originally designed as a one-off tool for a single research study; however, after three interns all needed similar functionality, it quickly evolved into an SDK.
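
Since the SDK itself can't be shared, here is a minimal, purely illustrative sketch of the kind of surface such a system might expose in Unity; every name in it (`TextInputSession`, `OnDeviceSample`, `TextCommitted`) is invented for this example and is not from the actual SDK:

```csharp
using System;
using UnityEngine;

// Hypothetical sketch (all names invented): one component owns the
// decoding pipeline and exposes committed text as a C# event, so
// study scenes never touch the hardware data stream directly.
public class TextInputSession : MonoBehaviour
{
    // Study UIs and loggers subscribe here to receive committed text.
    public event Action<string> TextCommitted;

    // An adapter wrapping the device's data stream calls this once
    // per sample; the decoder (omitted) turns samples into text.
    public void OnDeviceSample(Vector2 cursor, bool released)
    {
        string committed = DecodeSample(cursor, released);
        if (!string.IsNullOrEmpty(committed))
            TextCommitted?.Invoke(committed);
    }

    private string DecodeSample(Vector2 cursor, bool released)
    {
        // Placeholder for the actual decoding logic.
        return released ? "example" : string.Empty;
    }
}
```

A design along these lines makes it straightforward for several studies to share one input pipeline, since each study only subscribes to the event rather than re-implementing the decoding.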

The first paper published using this text-input SDK can be found here. The SDK supported several text-input methods, including swipe-typing, single-character input, and predictive dictionary suggestions. Decoding relied on a variety of methods and language libraries, each of which was also exposed as an option within the SDK.
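
Again purely as an illustrative sketch rather than the real API: a common way to make each decoding method a selectable option is a strategy-style interface with one implementation per input method. All names below are hypothetical:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical decoder contract: each text-input method implements
// the same interface, so a study's configuration can pick a method
// without changing any other code.
public interface ITextInputDecoder
{
    // Consume one input sample; return committed text, if any.
    string Decode(Vector2 sample, bool committed);
}

// Single-character input: commit one key per press/release cycle.
public class SingleCharacterDecoder : ITextInputDecoder
{
    public string Decode(Vector2 sample, bool committed)
    {
        return committed ? KeyAt(sample).ToString() : string.Empty;
    }

    private char KeyAt(Vector2 sample)
    {
        // Placeholder keyboard hit test.
        return 'a';
    }
}

// Swipe-typing: accumulate a path, then match it against a lexicon.
public class SwipeDecoder : ITextInputDecoder
{
    private readonly List<Vector2> path = new List<Vector2>();

    public string Decode(Vector2 sample, bool committed)
    {
        path.Add(sample);
        if (!committed)
            return string.Empty;

        // On release, score the path against candidate words
        // (shape matching plus a language-model prior).
        string word = MatchPath(path);
        path.Clear();
        return word;
    }

    private string MatchPath(List<Vector2> path)
    {
        // Placeholder lexicon lookup.
        return "hello";
    }
}
```

Predictive dictionary suggestions layer naturally on top of either decoder, since both simply produce candidate strings.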

Hackathon: BrainVR

While working at Meta, I participated in an internal hackathon, partnering with two researchers who came from medical backgrounds. Together we built an Oculus Pro AR application that could load MRI data and let the user visualize and manipulate it in a number of ways. The project was built largely on top of an open-source repository, one that many other startups also quickly integrated into their own health-tech systems.

Link to open-source repository.
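
The hackathon code itself is internal, but a standard Unity technique for this kind of MRI visualization is to pack the slice stack into a `Texture3D` and raymarch it in a shader. Below is a minimal sketch of just the texture-assembly step, assuming the slices have already been decoded (e.g., from DICOM) into grayscale `Texture2D`s; `MriVolumeBuilder` is a name invented for this example:

```csharp
using UnityEngine;

// Hypothetical sketch: pack a stack of MRI slices into a single
// Texture3D that a raymarching shader can sample as a volume.
public static class MriVolumeBuilder
{
    public static Texture3D Build(Texture2D[] slices)
    {
        int w = slices[0].width, h = slices[0].height, d = slices.Length;

        // RGBA32 keeps the sketch simple; a single-channel format
        // would cut memory use for grayscale data.
        var volume = new Texture3D(w, h, d, TextureFormat.RGBA32, false);

        var voxels = new Color32[w * h * d];
        for (int z = 0; z < d; z++)
        {
            // Copy each slice into the z-th plane of the volume.
            Color32[] slice = slices[z].GetPixels32();
            slice.CopyTo(voxels, z * w * h);
        }

        volume.SetPixels32(voxels);
        volume.Apply();
        return volume;
    }
}
```

The resulting texture is typically assigned to a material whose fragment shader steps a ray through the volume, which is what enables slicing and manipulating the scan in AR.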

Unfortunately, all hackathon work resides within Meta's internal systems and is proprietary Meta software.