Joseph Chow

Frontend Developer in Bay Area, CA

About

👋 Front-end developer here. I primarily work with the web but I like to explore outside of it as well.

Over the course of my career I have been fortunate to have opportunities to work on projects for prominent companies such as Google and Apple. I have also had the chance to put my skills to use for smaller businesses.

I have extensive experience with the typical front-end web development technologies, though I have primarily focused on building unique solutions from scratch, usually without the help of more modern frameworks. Since the technology landscape is constantly evolving, especially on the web, I've made it a point to stay flexible, which is why I haven't committed to any one stack; that said, I've tried to keep up as best I can through self-study. Lately I've been a big fan of Svelte and Solid for the front-end.

In addition, my professional experience has let me work with some back-end technologies and frameworks such as PHP and WordPress; I even had some exposure to Ruby on Rails at one point. Lately I've started picking up a little more back-end knowledge and have looked at things like Go and Elixir.

While I don't consider myself an expert in anything, I am very confident in my ability to learn new things which my code repositories hopefully reflect.

Although my professional focus has been web development, I have actively sought out opportunities to expand my knowledge beyond this domain, exploring technologies such as ARKit, WebGL, and Vulkan along the way.

Please do feel free to reach out if you would like to work together on something!

(NOTE - the content mentioned here is just a sample of what I am able to show. There is unfortunately quite a bit that has to stay under wraps. For a more complete work history, please see LinkedIn, linked at the bottom. The other links at the bottom may be of interest as well.)

Projects

2023

Back around August(ish) of 2023, I was asked to help finish up a project for Snap that involved building a retro-style game around the concept of working at an advertising agency.

I was responsible for building two of the mini games within the experience. I also helped flesh out some of the initial back-end components necessary to make the experience work.

The project is built using WebGL via the Phaser engine, with SvelteKit as the underlying front-end/back-end stack. The back-end portion was just a very simple Firebase setup.
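
For flavor, here's a minimal sketch of what a Phaser 3 mini-game scene looks like; the assets and names below are placeholders rather than anything from the actual project:

```ts
import Phaser from "phaser";

// A minimal Phaser 3 scene of the kind described; asset keys, URLs, and
// text are placeholders, not from the Snap project.
class OfficeScene extends Phaser.Scene {
  preload() {
    this.load.image("desk", "assets/desk.png");
  }
  create() {
    this.add.image(400, 300, "desk");
    this.add.text(16, 16, "Ad Agency: Day 1", { fontFamily: "monospace" });
  }
}

new Phaser.Game({
  type: Phaser.WEBGL, // the project rendered via WebGL
  width: 800,
  height: 600,
  pixelArt: true, // crisp nearest-neighbor scaling for the retro look
  scene: OfficeScene,
});
```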

This project won an FWA on 10.18.2023 and an Awwwards Site of the Day on 10.27.2023, and was also written about in Communication Arts!

NOTE: As of 10.7.2024 - unfortunately it looks like the campaign has ended and the site has been taken down; the link will now direct to the Studio Mega case study of the project.

2021

I was asked to help prototype a possible experience for Nike that could go into one of their stores. Essentially it is an apparel visualizer that layers user selections onto an image of a model. I built a tool that could take preset selections of clothing as well as pull in clothing from the Nike website.

This was done with web technologies, namely React and Three.js.
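
As a rough illustration of the layering approach (assuming each selection is a transparent PNG composited over the base model photo; file names are made up):

```ts
import * as THREE from "three";

// Rough sketch of the layering idea: the base model photo plus transparent
// apparel PNGs stacked as planes. File names are placeholders.
const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const loader = new THREE.TextureLoader();
const selections = ["model.png", "jacket.png", "shoes.png"];

selections.forEach((url, i) => {
  const plane = new THREE.Mesh(
    new THREE.PlaneGeometry(2, 2),
    new THREE.MeshBasicMaterial({ map: loader.load(url), transparent: true })
  );
  plane.position.z = -5 + i * 0.1; // later selections render on top
  scene.add(plane);
});

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```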

I recently went back and remade it, taking some time to make some changes and improvements. You can see it by clicking on the link in the title. The source can be found at

gitlab.com/xoio/clothing-…

2021

Around late summer of 2021, I was offered the opportunity to help build an in-person installation version of one of the exercises from the Teachable Machine website.

The idea was basically to take one of the experiences from the site and bring it to life in one of the Google offices so that visitors could experience machine learning in action and in person.

The setup is somewhat unconventional compared to how most installations are built, in that everything was still built with web technologies (vanilla JS, CSS, and HTML in this case) rather than something like C++.
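
The installation code itself isn't public, but the browser side of running a Teachable Machine image model looks roughly like this, following the library's documented usage (the model URL is a placeholder):

```ts
// Follows the documented usage of Google's @teachablemachine/image library
// (which also needs @tensorflow/tfjs). The model URL is a placeholder for a
// model exported from the Teachable Machine site.
import * as tmImage from "@teachablemachine/image";

const MODEL_URL = "https://example.com/my-model/";

async function run() {
  const model = await tmImage.load(
    MODEL_URL + "model.json",
    MODEL_URL + "metadata.json"
  );

  const webcam = new tmImage.Webcam(400, 400, true); // width, height, flip
  await webcam.setup();
  await webcam.play();
  document.body.appendChild(webcam.canvas);

  const loop = async () => {
    webcam.update(); // grab a fresh frame
    const predictions = await model.predict(webcam.canvas);
    console.log(predictions[0].className, predictions[0].probability);
    requestAnimationFrame(loop);
  };
  loop();
}

run();
```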

While it's long since been taken down, you can still visit the Teachable Machine website and try the Image Project to get a sense of what the experience was like.

2019
Spark AR Filter work at Meta / TEKSystems

I worked as a "creative coder" for Facebook through a third-party staffing agency (TEKSystems).

For the most part, my role was to facilitate the building of various effects for the Facebook/Meta Camera. These usually consisted of 2D photo manipulation effects as, at the time, it was very difficult to do more without either more functionality being built into the Spark AR Studio program or having other employees on the team with the necessary skill sets (e.g. a 3D modeler). These effects were usually built with Spark AR Studio, which allows node-based building and also provides a TypeScript-based interface for writing code.
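
For a sense of what that scripting interface looks like, here's a minimal sketch of a Spark AR script; the scene object name is a placeholder:

```ts
// Spark AR scripts run inside Spark AR Studio, which provides these modules
// at runtime (they aren't npm packages). The object name is a placeholder.
const Scene = require("Scene");
const Diagnostics = require("Diagnostics");
const TouchGestures = require("TouchGestures");

(async function () {
  // Look up a plane placed in the Studio scene graph.
  const plane = await Scene.root.findFirst("photoPlane");

  // Nudge the plane along x whenever the user taps the screen.
  TouchGestures.onTap().subscribe(() => {
    plane.transform.x = plane.transform.x.add(0.01);
  });

  Diagnostics.log("Effect script loaded");
})();
```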

One major project where I was the primary driver involved converting existing Facebook effects to be compatible with a newer, updated rendering engine that was being built at the time.

2017

I spent some time at the agency Level Studios doing work for Apple. My role was to help maintain apple.com. In my time there I did work on the Apple Music and Final Cut portions of the site.

Though I believe it's now changed, at the time Apple was using an in-house custom framework that constructed pages out of Markdown and merged the output with stand-alone JS and SCSS files, which used in-house libraries to build each part of the site.

Side Projects

2024

Since I have a lot of free time on my hands, I thought it would be good to take some time to understand how LLMs work, so I built a system that can query a YouTube account's video transcripts.

It builds a database of transcript information using Chroma. I then use LlamaIndex as an interface to query the information. It works pretty well for the most part; I just need to rework the scraping method to fetch multiple pages of content.
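
The actual queries go through LlamaIndex, but as a rough sketch of the underlying store/query shape, here's what the equivalent steps look like against Chroma's JS client (the collection name and transcript chunks are made up):

```ts
import { ChromaClient } from "chromadb";

// Illustrative only: the real project uses LlamaIndex as the query layer.
// This shows the underlying store/query steps against Chroma directly, and
// assumes a local Chroma server plus a default embedding function.
const client = new ChromaClient();

async function main() {
  const collection = await client.getOrCreateCollection({
    name: "yt-transcripts",
  });

  // Hypothetical transcript chunks scraped from a channel.
  await collection.add({
    ids: ["vid1-chunk0", "vid1-chunk1"],
    documents: [
      "Today we're looking at WebGPU compute shaders...",
      "...and here's how bind groups are laid out.",
    ],
    metadatas: [{ video: "vid1" }, { video: "vid1" }],
  });

  const results = await collection.query({
    queryTexts: ["how do bind groups work?"],
    nResults: 2,
  });
  console.log(results.documents);
}

main();
```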


Not much of a UI at the moment, but I'll get around to it eventually. Also, the link points to a template and not an actual application, but it's basically the same structure.

2024

I wanted to get a better understanding of how hardware-based raytracing works, so I put together a quick little experiment.

The scene contains hardware-raytraced geometry and lighting using Vulkan. The normals are a bit of a mess though, ha.

This is based on work by Jun Kiyoshi but altered a bit to run things on the GPU instead of the CPU. The geometry was originally made in Houdini, then reconstructed using compute shaders before being put through the raytracing process to add color and lighting.

2024
StreamDiffusion Experimenting

I recently took some time to explore StreamDiffusion, a pipeline that allows diffusion models to run substantially faster than they normally would.

While you still need a pretty substantial setup for it to truly get close to real-time, this gave me significantly better performance than my previous explorations, even on older and less capable hardware.

The experiment utilizes WebGPU as well as ClojureScript.

2024

I learned about the Odin programming language not too long ago and did a quick(ish) experiment building something in it and compiling to WebAssembly.

It's a nice language! Language design is something I admittedly don't think too much about; my mindset has always been "does this let me do what I need it to do?" before moving on, so I don't feel I'm really qualified to speak to any specifics about the language.

What I can say is that it was really easy to adapt to and figure things out just by looking at the overview and the demo file available on the website (but this is coming from someone who's used C++, Rust, and Zig before, so your mileage may vary).

Fun fact - this is also the language used to make EmberGen, and the language's creator works at JangaFX.

Language website - <odin-lang.org/>
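
For reference, the browser side of embedding such a module is just the standard WebAssembly loader. Odin's js_wasm32 target ships its own JS glue with the compiler that supplies the import object, so treat this as the general shape rather than anything Odin-specific:

```ts
// Generic browser-side loading of a WASM module; the module URL and the
// exported entry point name are hypothetical.
async function loadWasm(url: string, imports: WebAssembly.Imports = {}) {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch(url),
    imports
  );
  return instance.exports;
}

// Hypothetical usage: call an exported entry point from the module.
loadWasm("experiment.wasm").then((exports: any) => {
  exports.main?.();
});
```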

2023

This was an experiment to see if it's possible to pipe WebGL content into a diffusion pipeline.

This is largely based on the work of Radamés Ajna, who did a similar experiment but using webcam / screen content instead.

The idea is pretty much the same, with the exception that I'm streaming content from a Canvas element instead of taking in the webcam / screen capture stream.
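
The canvas side of that swap is small; here's a minimal sketch (signaling and the server-side diffusion step are omitted, and names are illustrative):

```ts
// Grab frames from the WebGL canvas as a MediaStream and attach them to a
// WebRTC peer connection, exactly as a webcam stream would be.
async function streamCanvas(canvas: HTMLCanvasElement) {
  const stream = canvas.captureStream(30); // capture at ~30 fps

  const pc = new RTCPeerConnection();
  for (const track of stream.getVideoTracks()) {
    pc.addTrack(track, stream);
  }

  // From here it's a standard WebRTC offer/answer exchange with the server,
  // which is assumed to decode frames and feed them to the diffusion pipeline.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return pc;
}
```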

I did rebuild parts of it - his original code was built around the idea of multiple concurrent connections and being deployable. Given that I unfortunately can't afford to rent a GPU at the moment, and since I really just wanted to see if what I had in mind was even possible, I stripped out and simplified some parts of the code. It was also a good opportunity to brush up on my Python knowledge, as I don't really get a chance to use Python all that often.

I also took the opportunity to rebuild the client around Solid.js, much for the same reasons as the server portion.

2023

This was a sample project I built as part of an interview process.

It uses React and Three.js through the react-three-fiber framework. It's also basically my first scroll-driven site; there are a few hiccups still if you scroll fast enough, but I think things worked out for the most part in the end.

Though I didn't make it in the end, and this is a little rough around the edges (I only had a few days to finish), they liked it enough to give me a second interview, so I figured I'd share since I have had very few opportunities to touch React in the first place.

If you'd like to see it live, you can do so by clicking the link in the title.

You can find the code here

<gitlab.com/xoio/example-p…>

2023

This is a simple exercise using SvelteKit to simplify the use of an infamously complicated piece of software called FFmpeg, which is commonly used for media encoding and conversion.

One potentially interesting thing about this is that it does not require a local copy of FFmpeg; rather, it uses the WASM version of FFmpeg, allowing everything to run entirely in the browser.

To simplify commands, OpenAI's completion API is utilized to turn a user's instructions into a command that can be passed to FFmpeg.
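
Roughly, the flow looks like the sketch below; the model, prompt, and file names are illustrative rather than the project's actual values:

```ts
import { FFmpeg } from "@ffmpeg/ffmpeg";
import { fetchFile } from "@ffmpeg/util";
import OpenAI from "openai";

// Illustrative sketch using the newer @ffmpeg/ffmpeg API.
const openai = new OpenAI({ apiKey: "...", dangerouslyAllowBrowser: true });

async function instructionToArgs(instruction: string): Promise<string[]> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Turn the user's request into ffmpeg CLI arguments for input.mp4. " +
          "Reply with the arguments only, space separated.",
      },
      { role: "user", content: instruction },
    ],
  });
  return (res.choices[0].message.content ?? "").trim().split(/\s+/);
}

async function run(file: File, instruction: string) {
  const ffmpeg = new FFmpeg();
  await ffmpeg.load(); // fetches the WASM build; no local FFmpeg needed
  await ffmpeg.writeFile("input.mp4", await fetchFile(file));
  await ffmpeg.exec(await instructionToArgs(instruction));
  return ffmpeg.readFile("output.gif"); // output name is illustrative
}
```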

Source can be found here
gitlab.com/xoio/example-p…

2023

I have normally used bit.ly to shorten links in the past.

Unfortunately, with bit.ly's free tier, you can't edit the destination of links that you set up; you have to delete and then re-add the link.

I understand needing to make money and thus guiding customers toward one of the paid plans, but at the same time it feels like a basic feature that should be there no matter what.

That being the case, I thought I should just build my own.

The domain name selection was the hardest part; in the end I settled on lnnks.

This project has also given me an excuse to explore several languages and frameworks, namely:

  • Svelte(Kit)

  • Qwik

  • and now I've shifted to the back-end, running a simple Go server to manage everything (see the sketch after this list).

  • I'm also exploring Elixir as a possible option and have most of the same functionality built out using the Phoenix framework (yes, it is a little overkill for this), but I'll probably save Elixir for something else.
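
As promised above, here's the core idea in sketch form. The real service is the small Go server mentioned in the list, so this TypeScript version is purely illustrative:

```ts
import http from "node:http";

// Core of any shortener: a slug lookup plus an HTTP redirect. Slugs and
// destinations here are placeholders; the real service is written in Go.
const links = new Map<string, string>([
  ["cv", "https://example.com/cv"],
]);

http
  .createServer((req, res) => {
    const slug = (req.url ?? "/").slice(1);
    const target = links.get(slug);
    if (target) {
      // 302 keeps clients from caching the destination, so edits take
      // effect immediately - the exact feature bit.ly's free tier gates.
      res.writeHead(302, { Location: target });
    } else {
      res.writeHead(404);
    }
    res.end();
  })
  .listen(8080);
```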

Apologies for the lack of images; since it's a personal tool I didn't really bother styling it. See the link above in the title for the source.

Most of the links below are now set up using the system; do feel free to try them out.

2023

This was just a quick experiment building a custom COP (compositing operator) node in SideFX's Houdini software via Hugging Face's Diffusers library.

The way it works is by first turning your scene inside Houdini into a 2D image. It then passes that image to Stable Diffusion and outputs the result.

The one disadvantage of this method is that you kind of lose out on the benefits of using something like Houdini, as you have to recreate different aspects of the scene yourself that might normally come for free with one of the built-in renderers (though you could probably get around this by running Stable Diffusion in a post-render step).

Overall this was an interesting experiment.

2022

I got back into photography last year, specifically focusing on street photography. Picked up a used camera and have been shooting as much as I can since.

I usually post to Instagram under @sortofsleepy, but I also post on Flickr since Instagram seems to do "something" to the uploads that occasionally creates noticeable artifacts.

2021
Yoi

This is a personal project started, initially, in order to get familiar with Vulkan. I started in C++ but eventually thought this might be a good opportunity to get familiar with Rust as well (which in hindsight might not have been the best idea, haha).

It's evolved into something reasonably feature-complete, at least for my use cases, and I'm able to easily accomplish the same kind of work I might have normally used Cinder or openFrameworks for.

2020

This is a custom WebGPU wrapper I started back in 2020, when the WebGPU spec was just starting to get into a workable state.

Originally it started out as a web-based library, then shifted to Rust. Recently, however, I revamped it a bit, splitting the web and Rust versions into separate repositories and cleaning up both. The web version is linked in the title, and you can find the Rust version here

gitlab.com/sortofsleepy/w…

(currently doing work in the "cleanup" branch)
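
For context, here's the kind of setup boilerplate such a wrapper exists to hide; a minimal sketch using the standard WebGPU API (types come from @webgpu/types):

```ts
// The first few calls any WebGPU wrapper ends up covering: adapter, device,
// and a configured canvas context.
async function initWebGPU(canvas: HTMLCanvasElement) {
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) throw new Error("WebGPU is not available");
  const device = await adapter.requestDevice();

  const context = canvas.getContext("webgpu") as GPUCanvasContext;
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
  });

  return { device, context };
}
```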

2018

This is a personal library I started in order to learn more about raw WebGL. Up until this point I had been using Three.js, but I wanted a better understanding of what was happening under the hood, so I decided to undertake the immense (at the time) task of writing my own library from scratch.
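
As a taste of that "under the hood" work, here's the sort of boilerplate Three.js normally hides and that a from-scratch library has to handle early on:

```ts
// Manually compiling and linking a shader program in raw WebGL.
function createProgram(
  gl: WebGLRenderingContext,
  vertSrc: string,
  fragSrc: string
): WebGLProgram {
  const compile = (type: number, src: string): WebGLShader => {
    const shader = gl.createShader(type)!;
    gl.shaderSource(shader, src);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
    }
    return shader;
  };

  const program = gl.createProgram()!;
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertSrc));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragSrc));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program) ?? "program link failed");
  }
  return program;
}
```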

I've started a rewrite recently which I'm almost done with; that said, I'm not sure if it's worth pursuing extensively given that WebGPU is now publicly available in Chrome and set to be available in other browsers in the not-too-distant future.

2017

This is an openFrameworks addon that provides some basic helpers with using ARKit in conjunction with openFrameworks.

One of the unique things about this is that there is a translation layer that processes the incoming camera image with Metal, then passes it over to the OpenGL side of things (which is what openFrameworks uses under the hood).

I began developing this addon during my time at Level/Apple. Due to the nature of my work at Apple Marcom, my workload was mostly concentrated ahead of major annual events such as WWDC. As a result, I had some spare time on my hands and decided to work on this project during that period.

Eventually other people contributed to the work to make it into the addon that it is today, supporting things like face tracking as well as... AirPods tracking.

I've been a bit slow at keeping it updated as I don't normally use Macs anymore, and the one iPhone I have is quite old and doesn't support the newer features. Also, with iOS deprecating OpenGL, I've been waiting to see what the openFrameworks team has planned.

As to what has been made with the addon: while I generally lack the creativity to come up with interesting ideas myself, and I have unfortunately been doing unrelated things on the professional front, others have utilized it to produce some incredible results.

engadget.com/2017-09-11-the…

wired.com/story/an-artis…

Exhibitions

2023
Tokyo

Not related to what I do professionally, but I had a photo featured in this exhibition, which took place in May of 2023.

If you're interested in seeing my photography:

flickr.com/photos/sortofs…
portraitmode.io/profile/sortof…
instagram.com/sortofsleepy

(note that they all largely show the same photos, just a few different options for you to choose from to follow if you want!)

Work Experience

2011 — Now

I have worked off and on as a self-employed developer over the years, helping to build a wide variety of projects. These largely focus on front-end web development but from time to time, other kinds of projects have cropped up as well.

2022 — 2022
B-Reel

I primarily helped kick off a new project with Google, which I cannot talk about at the moment or show pictures of. This was done within the Android ecosystem.

2019 — 2021
Freelance Creative Coder at TEKSystems / Meta

I worked as a "creative coder" for Facebook through a third-party staffing agency (TEKSystems).

For the most part, my role was to facilitate the building of various effects for the Facebook Camera. These usually consisted of 2D photo manipulation effects as, at the time, it was very difficult to do more without either more functionality being built into the Spark AR Studio program or having other employees on the team with the necessary skill sets (e.g. a 3D modeler).

One project where I was the primary driver involved converting existing Facebook effects to be compatible with a newer, updated rendering engine.

2018 — 2018
Freelance Dev / Apprentice at Rare Volume

I worked at Rare Volume for a little while as a Freelance Developer / Apprentice in the latter half of 2018.

I originally came to Rare Volume to help with a particular project that would involve web-based technologies; unfortunately, that project ended up getting cancelled.

They were very kind and kept me on for a little while longer in spite of that, and I helped with various needs of the agency. Eventually I had the opportunity to play somewhat of a key role on a project for SK-II where, under the supervision of a senior developer, I helped write a new auto-focus algorithm for some cameras that were going to be used outdoors. It was an ongoing project with several iterations; the particular iteration I was helping with would live outdoors, so a revised version of the existing software was required.

In addition to that, I helped prototype some new ideas within the Cinder C++ framework, including bringing it to the web via Emscripten. Some of the initial research had been done before I started, but I took it across the finish line on my own.

2018 — 2018
Potion Design

I spent some time at Potion Design helping them with various prototyping needs. One such need was a library to use with a possible replacement for the Kinect, the Orbbec Astra.

Another thing I worked on was prototyping something for a project proposal using IR-reflective material. The idea was to use the reflective material as an ID marker of sorts in order to identify physical objects in a space. The software I wrote was able to distinguish a different "id" based on the shape the material was cut into, which worked great, albeit a bit finicky at times, as this was done purely with OpenCV and nothing else.

The big thing I helped with was an installation for the NYU Langone Children's Hospital. They wanted an interactive display that listed the donors to the hospital as well as told stories about the happenings around the hospital. It was essentially an HTML5 website that utilized touch interaction. Hospital staff are able to update the display with new information taken in from a WordPress installation on a separate PC nearby.
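
One plausible shape for that update loop, assuming the standard WordPress REST API (the host and refresh interval are placeholders):

```ts
// The display polls the WordPress install on the nearby PC for fresh content.
async function fetchStories(): Promise<string[]> {
  const res = await fetch(
    "http://wordpress.local/wp-json/wp/v2/posts?per_page=10"
  );
  const posts: Array<{ title: { rendered: string } }> = await res.json();
  return posts.map((p) => p.title.rendered);
}

// Poll for new posts and re-render.
setInterval(async () => {
  const titles = await fetchStories();
  console.log("latest stories:", titles);
}, 60_000); // refresh every minute
```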

2017 — 2018
Level Studios / Apple

I spent some time at the agency Level Studios doing work for Apple. My role was to help maintain apple.com. In my time there I did work on the Apple Music and Final Cut portions of the site.

Though I believe it's now changed, at the time Apple was using an in-house custom framework that constructed pages out of Markdown and merged the output with stand-alone JS and SCSS files, which used in-house libraries to build each part of the site.

Thanks to the Wayback Machine, it's possible to still interact with that work. You can find the relevant sections here

web.archive.org/web/2017110200…

web.archive.org/web/2022112218…

Contact

Website
CV
GitLab
Youtube
Vimeo
Instagram
Email
Flickr
PortraitMode