
Snowflake - coming soon

4 min read

ACTUALLY… by the time you read this… IT’S HERE!

WE’RE RUNNING ON SNOWFLAKE!


How it was

So we’ve been doing this for a while. Decades, actually. Early on I was running denormalized dimensional models on row-based databases and squeezing every ounce of performance out of them that I could: pre-aggregating years of transactional data and preprocessing as much as possible to get that fast end-user experience. Of course this wasn’t efficient or cost effective, and we couldn’t easily or dynamically drill back down to the fine-grained data.

Back then we pored over every detail down to the actual hardware, I/O, memory, and so on (it mattered which disk controller we had and how our RAID array was configured), and we kept processing as close to the data as possible… basically because we had to, to get any kind of real performance. Then a world of change happened: distributed processing and columnar stores. Eventually columnar storage became the de facto standard for analytics, and that makes a lot of sense. It’s better aligned with how data is read for analytics, it reduces I/O through higher compression rates, and the model lends itself better to distributed processing.

Then came big data, and with it columnar file formats from the Hadoop ecosystem, like Parquet and ORC. The cloud became a bigger thing and data lakes were the way to go. But they weren’t prepackaged for you like the databases of yore. You had to build them almost from scratch, and it wasn’t easy. Your query engine was separate from your index store, which was separate from your data storage. You had to handle writes with integrity on failure yourself. Understanding Hadoop was a big undertaking, and the tools felt like Lego bricks you needed to click together in just the right way.


A better way

With the arrival of Snowflake, things changed for the better again. It handled so much for you, providing that ‘database engine feel’ on big data infrastructure. Beautiful! Because of this, Snowflake became popular in no time; it grew like crazy and became a sought-after technology because it made modern approaches accessible.

What made big data expensive was resourcing, both human and hardware. Snowflake helped cut those costs by abstracting all the complexity of the big data ecosystem away from developers and letting you just write plain SQL. It handled the hardware side of the lake: instead of you spinning up and managing a farm of machines to process everything, Snowflake just took care of it all. The separation of storage and compute let you minimize your data footprint and maximize your processing… elasticity! Running only the resources a particular process needed, for a limited period of time, lowered your cost (and the headache of dealing with node failures was gone too). Your developers could focus on implementing business needs rather than constantly maintaining and tuning a big data cluster.
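As a rough illustration of that elasticity, here’s a minimal sketch in Snowflake SQL (the warehouse name, size, and table are just placeholders, not anything from our setup): a right-sized virtual warehouse that suspends itself when idle, so compute costs accrue only while a workload is actually running.

-- Minimal sketch: an XSMALL warehouse that suspends after 60 seconds of
-- inactivity and resumes on demand.
CREATE WAREHOUSE IF NOT EXISTS reporting_wh   -- hypothetical name
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE
  INITIALLY_SUSPENDED = TRUE;

-- Run the workload on it; the warehouse wakes up, does the work, then suspends.
USE WAREHOUSE reporting_wh;
SELECT COUNT(*) FROM my_schema.orders;        -- placeholder table

Compare that with keeping a fixed cluster of machines running around the clock just in case someone wants to query.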


Our Value Add

Enter the next step in capability and simplification: TrillaBit. We created a smart, low-code analytics layer that runs dynamically on top of highly performant, efficient analytic processing platforms like Snowflake. We help you get more out of your Snowflake investment by letting users drill into and explore data at their whim, without knowing SQL or the underlying tech.

TrillaBit can now simply point at one or more Snowflake instances, locally or globally, to provide self-service analytics to users in an efficient and cost-effective way.

Thanks,

Keith


“We are the music makers and we are the dreamers of dreams” ~ Willy Wonka

Contact us!