It’s been so much fun building out Asklet over the past few weeks that I thought I’d share a little more detail on how it works and what’s powering it.
Asklet has been built with the same tech we use across the rest of our Surveys platform – Elixir, Phoenix LiveView and Postgres. Why this stack? We’ve found it incredibly efficient to work with as a small team of four, and it’s built for robust realtime experiences, which is exactly the feel we wanted people to have.
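To give a flavour of why LiveView suits a conversational UI like this, here’s a minimal, hypothetical sketch (the module name and question logic are invented for illustration; this isn’t Asklet’s actual code). Each answer travels over the page’s existing WebSocket connection, and the server pushes back just the changed HTML, so the back-and-forth feels instant without a client-side framework:

```elixir
defmodule AskletWeb.ConversationLive do
  # Hypothetical module: a sketch of the LiveView pattern, not our real code.
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    {:ok, assign(socket, question: "What brought you here today?", answers: [])}
  end

  # Each answer arrives over the existing WebSocket; LiveView diffs the
  # rendered HTML and pushes only the change - no page reload, no REST calls.
  def handle_event("answer", %{"text" => text}, socket) do
    {:noreply,
     socket
     |> update(:answers, &[text | &1])
     |> assign(:question, next_question(text))}
  end

  # Placeholder for illustration; the real next question comes from the model.
  defp next_question(_answer), do: "Could you tell me a bit more about that?"

  def render(assigns) do
    ~H"""
    <p><%= @question %></p>
    <form phx-submit="answer">
      <input type="text" name="text" autocomplete="off" />
    </form>
    """
  end
end
```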
For infrastructure we’re on Amazon’s Elastic Container Service (ECS), which has many of the key elements of full Kubernetes but with less maintenance overhead. It’s a good balance between a fully fledged PaaS and an entirely DIY approach: all the benefits of multi-region scaling, with much less YAML to wrangle! This is the right fit for us now, but because the application is containerised with few dependencies, we can easily move to something else if it makes sense in the future.
The most complex piece was creating a solid experience for users who opted to respond by voice. We felt it was important to make this as slick as possible, and to allow movement between voice and text modalities for accessibility. After playing around with a few options we settled on a WebRTC connection to OpenAI’s Realtime API – it’s primarily designed for telephony-like products, so we spent most of our time tweaking the integration to get it just right for what we needed.
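For anyone curious about the shape of that integration, here’s a hedged sketch of one common pattern for browser-based WebRTC with the Realtime API: the Phoenix server mints a short-lived session token, and the browser uses it to negotiate the peer connection directly with OpenAI. The module name, HTTP client, model, and voice below are all assumptions for illustration, not our production code:

```elixir
defmodule AskletWeb.RealtimeTokenController do
  # Hypothetical controller: the server exchanges the real API key for a
  # short-lived client secret, so the key itself never reaches the browser.
  use Phoenix.Controller, formats: [:json]

  @openai_url "https://api.openai.com/v1/realtime/sessions"

  def create(conn, _params) do
    # Req is an assumption here - any HTTP client would do.
    {:ok, %{status: 200, body: body}} =
      Req.post(@openai_url,
        auth: {:bearer, System.fetch_env!("OPENAI_API_KEY")},
        json: %{model: "gpt-4o-realtime-preview", voice: "verse"}
      )

    # The browser uses this ephemeral secret as its bearer token when it
    # sends its WebRTC SDP offer to OpenAI.
    json(conn, %{client_secret: body["client_secret"]["value"]})
  end
end
```

Keeping the token exchange server-side is the main design point: the browser only ever holds a credential that expires quickly, while the audio itself flows peer-to-peer over WebRTC rather than through our servers.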
Some of the other challenges we dedicated time to were:
- Finding the balance between digging deeper for the best possible feedback and not creating a tedious or frustrating experience for respondents (there’s a rough sketch of that trade-off after this list).
- Supporting a preview experience for those building an Asklet, as well as a standalone and embeddable version for respondents.
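On that first point, here’s a deliberately simplified, entirely hypothetical sketch of what a follow-up budget might look like. The real balance involves prompt design and the model’s own judgement, but a hard cap is an easy backstop against dragging the conversation out:

```elixir
defmodule Asklet.FollowUps do
  # Hypothetical sketch of the trade-off above: probe for richer feedback,
  # but never more than a couple of times per question, and stop early once
  # an answer is already substantive.
  @max_follow_ups 2
  @substantive_words 25

  def follow_up?(answer, follow_ups_asked) do
    follow_ups_asked < @max_follow_ups and not substantive?(answer)
  end

  # A crude word-count proxy for "did we get enough detail?"; real logic
  # would lean on the model itself, but the cap keeps things bounded.
  defp substantive?(answer) do
    answer |> String.split() |> length() >= @substantive_words
  end
end
```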
At some point we’ll put up a more detailed explanation of how it all works, but in the meantime feel free to drop us any questions. We’re really happy to share!
