Tive


Tive app screens

How I built a social media app for the d/Deaf community as a solo engineer

What was the original goal?

The original goal of Tive was to connect d/Deaf users to ASL interpreters on demand. Think Uber, but instead of quickly getting a vehicle, you get an ASL interpreter.

What did I do first?

Based on the initial goal, I set out to see which off-the-shelf video chat technologies I could leverage to demo the application to the company.

Step 1 (Video Chat)

I initially went with Agora, a real-time voice and video platform, to power the demo. The example applications needed just a few lines of code to connect to a room.

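As a rough sketch, joining a room with Agora's Web SDK looked something like this (the app itself used the mobile SDK; the App ID, channel name, and element ID below are placeholders):

```typescript
import AgoraRTC from "agora-rtc-sdk-ng";

// Placeholder credentials -- real values would come from your Agora project.
const APP_ID = "your-agora-app-id";
const CHANNEL = "demo-room";

async function joinRoom() {
  const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

  // Join the channel (a null token works on test-mode projects).
  await client.join(APP_ID, CHANNEL, null, null);

  // Publish the local mic and camera so the other side can see and hear us.
  const [micTrack, camTrack] = await AgoraRTC.createMicrophoneAndCameraTracks();
  await client.publish([micTrack, camTrack]);

  // Subscribe to remote users as they publish their tracks.
  client.on("user-published", async (user, mediaType) => {
    await client.subscribe(user, mediaType);
    if (mediaType === "video") user.videoTrack?.play("remote-video");
    if (mediaType === "audio") user.audioTrack?.play();
  });
}
```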

The demos were working well, and it was time to turn them into a real version of the app. Creating a mobile application with the Agora SDK integration was quite straightforward. However, as a company, we didn't have interpreters ready to go, and getting interpreters onto our platform would be a long process. In the meantime, the video chat interpreter feature was pivoted to be part of a chat system.

As the video chat integration with the chat room became more and more customized, we started to run up against the limits of what Agora could do for us. We wanted users to see custom messages in the chat room when a video chat started or ended, and we wanted to trigger server-side events when actions happened in the chat. At the time, it didn't seem that Agora could handle webhook events.

We needed something very customizable, but we didn't want to manage our own WebRTC signaling servers and scaling. Luckily, I found Twilio's Video API, which provides a nice high-level wrapper over all the parts of WebRTC while still allowing a high degree of customizability.
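As an illustration, joining a Twilio Video room from the client is a single call, and room lifecycle events can be received server-side via Twilio's status callbacks. In this sketch, the room name, endpoint path, and logging are all placeholders:

```typescript
import { connect } from "twilio-video";
import express from "express";

// Client side: join a room using an access token minted by our backend.
async function joinVideoChat(accessToken: string) {
  const room = await connect(accessToken, {
    name: "chat-room-123",
    audio: true,
    video: true,
  });
  room.on("participantConnected", (p) => console.log(`${p.identity} joined`));
  room.on("disconnected", () => console.log("Video chat ended"));
  return room;
}

// Server side: Twilio POSTs room lifecycle events (room-created, room-ended,
// participant-connected, ...) to the status callback URL configured on the room.
const app = express();
app.use(express.urlencoded({ extended: false })); // Twilio sends form-encoded bodies

app.post("/twilio/video-events", (req, res) => {
  const { StatusCallbackEvent, RoomName } = req.body;
  // e.g. drop a "video chat started/ended" system message into the chat room
  console.log(`${StatusCallbackEvent} in ${RoomName}`);
  res.sendStatus(200);
});

app.listen(3000);
```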

Step 2 (Interpreters)

SIP video call demo

It turns out that hiring and supporting ASL interpreters is very complicated and requires working closely with experts in ASL interpretation services. We built an initial system to support ASL interpreters ourselves, but eventually we migrated to another company's ASL video chat software. That system uses SIP (Session Initiation Protocol) over WebRTC, which meant another multi-month process of business agreements and developer integration work.

There were a lot of new technologies for me to learn along the way: SIP, WebRTC, STUN servers, signaling servers, and setting up the third party's Asterisk server to host the interpreter video calling software. There was a lot of back-and-forth communication about how to properly call the third party's server over SIP. Since there was no documentation, I ended up doing a lot of debugging of their demo web application to discover all the configuration options a SIP call needs to connect to the server 🤦.
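For a sense of what the client side involved, here is a minimal sketch of placing a SIP video call from the browser using the JsSIP library. Every URI, credential, and server address below is illustrative, since the real values were exactly the kind of thing I had to dig out of the vendor's demo app:

```typescript
import * as JsSIP from "jssip";

// Illustrative endpoints and credentials -- the real ones were discovered by
// debugging the vendor's demo web application.
const socket = new JsSIP.WebSocketInterface("wss://sip.example.com:8089/ws");

const ua = new JsSIP.UA({
  sockets: [socket],
  uri: "sip:tive-user@sip.example.com",
  password: "secret",
});

ua.on("connected", () => {
  // Place a video call to the interpreter service over SIP.
  ua.call("sip:interpreters@sip.example.com", {
    mediaConstraints: { audio: true, video: true },
    // STUN lets each peer discover its public address for the media path.
    pcConfig: { iceServers: [{ urls: "stun:stun.l.google.com:19302" }] },
  });
});

ua.start();
```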
