Hudl Bits Blog

Insights from the Hudl Product Team

Caching Hudl’s news feed with ElastiCache for Redis

Every coach and athlete who logs into Hudl immediately lands on their news feed. Each feed is tailored to the individual user and consists of content from the teams they’re on as well as accounts they choose to follow. This page is our users’ first impression, so performance is critical. Our solution: ElastiCache for Redis.
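
The full post has the details; as a rough, hypothetical sketch, a per-user feed maps naturally onto a cache-aside pattern against Redis. The endpoint, key format, TTL, and build_feed_from_db helper below are illustrative assumptions, not Hudl’s actual implementation (ElastiCache exposes a standard Redis endpoint, so plain redis-py works against it).

```python
import json

import redis

# Hypothetical ElastiCache endpoint; any Redis endpoint works the same way.
cache = redis.Redis(host="feed-cache.example.use1.cache.amazonaws.com", port=6379)

FEED_TTL_SECONDS = 300  # illustrative TTL, not Hudl's real expiry


def get_feed(user_id, build_feed_from_db):
    """Cache-aside read: serve the feed from Redis, fall back to the database."""
    key = f"feed:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    feed = build_feed_from_db(user_id)  # content from teams and followed accounts
    cache.setex(key, FEED_TTL_SECONDS, json.dumps(feed))
    return feed
```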

Benefits of Exposing Data Through GraphQL

GraphQL is rapidly growing in popularity at Hudl. Although we have only been using it for a few months, we are already reaping the benefits.

How We Stay Sane with a Large AWS Infrastructure

We’ve been running hudl.com in AWS since 2009 and have grown to running hundreds, at times even thousands, of servers. As our business grew, we developed a few standards that help us make sense of our large AWS infrastructure.

Measuring Availability: Instead of Nines, Let’s Count Minutes

It’s hard to find detailed explanations of how companies compute and track their availability, particularly for complex SaaS websites. Here’s how we do it for our primary web application, hudl.com.
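
To make the “minutes, not nines” framing concrete, the snippet below converts an availability percentage into minutes of downtime over a month. The numbers are generic arithmetic, not Hudl’s actual reporting code.

```python
def downtime_minutes(availability_pct, period_days=30):
    """Minutes of downtime implied by an availability percentage over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)


print(downtime_minutes(99.9))   # "three nines" ~= 43.2 minutes per 30-day month
print(downtime_minutes(99.99))  # "four nines"  ~= 4.3 minutes per 30-day month
```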

The Low-Hanging Fruit of Redshift Performance

This post is a case study about how we fixed some Redshift performance issues we started running into as more and more people used it. A lot of the information I present is documented in Redshift’s best practices, but some of it isn’t easy to find or wasn’t obvious to us when we were just getting started. I hope this post saves you some time if you’re just getting started with Redshift and want to avoid some of the pitfalls we ran into.

Populating Fulla with SQL Data and Application Logs

This is the second in a series of posts on Fulla, Hudl’s data warehouse. It covers how we update Fulla daily with data from our production SQL databases and our application logs.

Migrating Millions of Users in Broad Daylight

In August we migrated our core user data (around 5.5MM user records) from SQL Server to MongoDB. We moved the data during the daytime while taking full production traffic, maintaining nearly 100% availability for reads and writes throughout the migration. Our CPO fittingly described it as akin to “swapping out a couple of the plane’s engines while it’s flying at 10,000 feet.” I’d like to share our approach to the migration and some of the code we used to do it.
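
The migration code itself is in the full post. As a generic sketch of the kind of pattern that keeps reads and writes available during such a move, a dual-write store with a read fallback is one common approach while a background backfill copies old records; the class and adapter names below are hypothetical, not Hudl’s actual code.

```python
class MigratingUserStore:
    """Dual-write sketch: keep SQL Server and MongoDB in step during a live migration.

    `sql_store` and `mongo_store` are hypothetical adapters exposing the same
    get/save interface over the two databases.
    """

    def __init__(self, sql_store, mongo_store):
        self.sql_store = sql_store
        self.mongo_store = mongo_store

    def save(self, user):
        # Write to both stores so neither falls behind while traffic continues.
        self.sql_store.save(user)
        self.mongo_store.save(user)

    def get(self, user_id):
        # Prefer the new store, but fall back to SQL Server for records the
        # background backfill hasn't copied yet.
        user = self.mongo_store.get(user_id)
        return user if user is not None else self.sql_store.get(user_id)
```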

Hello Fulla

Over the last year, the Data Engineering squad has been building a data warehouse called Fulla. Recently, the squad rethought our entire data warehouse stack. We’ve now released Fulla v2, and Hudlies are querying data like never before, giving us a better understanding of our customers and our product.

Data Science on Firesquads: Classifying Emails with Naive Bayes

At Hudl, each squad on the product team takes two weeks a year to help out the coach relations team in an ongoing rotation known as Firesquads. For this year’s rotation, the data science squad built a Naive Bayes classifier to automate the task of categorizing emails.
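
The post covers the squad’s actual model; as a minimal sketch of the technique, a bag-of-words Naive Bayes classifier takes only a few lines with scikit-learn. The library choice, toy emails, and category names here are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data; the real model was trained on historical support emails.
emails = [
    "How do I upload game film from my camera?",
    "My invoice shows the wrong amount for this season.",
    "The highlight I made won't play on my profile.",
]
labels = ["uploading", "billing", "highlights"]

# TF-IDF features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["My invoice shows the wrong amount again."]))  # ['billing']
```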

Faster and Cheaper: How Hudl Saved 50% and Doubled Performance

We took the time to optimize our EC2 instance types. By moving to a newer instance family and finding the maximum load a single server could handle, we were able to run a quarter as many app servers, and our hourly spend dropped by 50%. Despite the huge cost savings, we also saw a 2x improvement in response times!
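
The exact instance types and prices are in the full post; the arithmetic below just illustrates, with made-up numbers, how a pricier instance family can still halve the hourly bill when each server handles several times the load.

```python
# Illustrative numbers only, not Hudl's actual prices or fleet size.
old_count, old_hourly_price = 100, 0.20  # hypothetical older-family fleet
new_count, new_hourly_price = 25, 0.40   # hypothetical newer family, 1/4 the servers

old_spend = old_count * old_hourly_price  # $20.00 / hour
new_spend = new_count * new_hourly_price  # $10.00 / hour

print(f"savings: {1 - new_spend / old_spend:.0%}")  # savings: 50%
```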