I love the fact that Snowflake is getting involved in the betterment of our planet!
This is the first in a series of three posts from one of my colleagues examining the concept of net zero data and how advances in technology can help the world’s largest organisations—especially those which are particularly emissions-intensive, like oil and gas—reduce the carbon footprint of their data.
Read the whole post here – Snowflake and Net Zero: The Case for Data Decarbonisation (Part One)
The Data Warrior & Chief Technical Evangelist at Snowflake
I am seeing a HUGE uptick in interest in Data Vault around the globe. Part of the interest is the need for agility in building a modern data platform. One of the benefits of the Data Vault 2.0 method is its repeatable patterns, which lend themselves to automation. I am pleased to pass on this great new post with details on how to automate building your Data Vault 2.0 architecture on Snowflake using erwin! Thanks to my buddy John Carter at erwin for taking this project on.
The Data Vault methodology can be applied to almost any data store and populated by almost any ETL or ELT data integration tool. As Snowflake Chief Technical Evangelist Kent Graziano mentions in one of his many blog posts, “DV (Data Vault) was developed specifically to address agility, flexibility, and scalability issues found in the other mainstream data modeling approaches used in the data warehousing space.” In other words, it enables you to build a scalable data warehouse that can incorporate disparate data sources over time. Traditional data warehousing typically requires refactoring to integrate new sources, but when implemented correctly, Data Vault 2.0 requires no refactoring.
Successfully implementing a Data Vault solution requires skilled resources and traditionally entails a lot of manual effort to define the Data Vault pipeline and create ETL (or ELT) code from scratch. The entire process can take months or even years, and it is often riddled with errors, slowing down the data pipeline. Automating design changes and the code to process data movement ensures organizations can accelerate development and deployment in a timely and cost-effective manner, speeding the time to value of the data.
Snowflake’s Data Cloud contains all the necessary components for building, populating, and managing Data Vault 2.0 solutions. erwin’s toolset models, maps, and automates the creation, population, and maintenance of Data Vault solutions on Snowflake. The combination of Snowflake and erwin provides an end-to-end solution for a governed Data Vault with powerful performance.
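To see why those repeatable patterns lend themselves to automation, here is a minimal illustrative sketch (my own example, not erwin's actual output or API): every Data Vault 2.0 hub follows the same shape—a hash key, a business key, a load timestamp, and a record source—so its Snowflake DDL can be generated from a small metadata description rather than hand-written. The function and column names below are hypothetical.

```python
def hub_ddl(entity: str, business_key: str, key_type: str = "VARCHAR(50)") -> str:
    """Generate Snowflake CREATE TABLE DDL for a Data Vault 2.0 hub.

    Because every hub has the same standard columns, only the entity name
    and business key vary -- the rest is a fixed, repeatable pattern.
    """
    return (
        f"CREATE TABLE IF NOT EXISTS hub_{entity} (\n"
        f"    hub_{entity}_hk  BINARY(20)    NOT NULL,  -- hash of the business key\n"
        f"    {business_key}   {key_type}    NOT NULL,  -- business key from the source\n"
        f"    load_dts         TIMESTAMP_NTZ NOT NULL,  -- load timestamp\n"
        f"    rec_src          VARCHAR(100)  NOT NULL,  -- record source\n"
        f"    CONSTRAINT pk_hub_{entity} PRIMARY KEY (hub_{entity}_hk)\n"
        f");"
    )

# One metadata entry per source entity is enough to stamp out every hub.
print(hub_ddl("customer", "customer_id"))
```

Links and satellites follow equally rigid templates, which is exactly what makes a model-driven tool able to create, populate, and maintain the whole architecture without hand-coded DDL or ELT.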
Get the rest of the details here: Data Vault Automation with erwin and Snowflake
Vault away my friends!
The Data Warrior
On the heels of a very successful #DataCloud Summit, I am pleased to let you all know that Snowflake CEO Frank Slootman is publishing a book that really illuminates the Data Cloud and how we got here.
According to Gartner, the public cloud services market continues to grow, largely due to the data demands of modern applications and workloads. In recent years, organizations have struggled to process big data: data sets large enough to overwhelm commercially available computing systems.
For a long time, the only real solution was data warehousing services. These services relied on specialized computer hardware to increase the scale of data processing. But these systems had major drawbacks in terms of their extremely high cost and performance constraints. Increasing scale this way wasn’t feasible for many or even most companies. With demand continuing to explode, the world desperately needed a more democratic solution for big data delivery.
A new book, The Rise of the Data Cloud, looks at how that problem was solved in just a few short years. As the founders of Snowflake came together to design a better big data solution, they built an entirely new class of cloud computing in the process.
Get all the details here – The Rise of the Data Cloud
The Data Warrior
P.S. If for some reason you missed the Snowflake Data Cloud Summit, you can still view the content by signing into the Summit site here.
This will be an EPIC event that you will not want to miss.
Join me and the rest of the Snowflake team at Data Cloud Summit 2020! This year we are going virtual with eight business and technology summit tracks, filled with never-before-seen demos, customer presentations, fun interviews, fireside executive chats and, of course, technical deep dive sessions.
This is a totally FREE event that will deliver:
All focused on using the Snowflake Data Cloud to #MobilizeYourData!
Mark this date on your calendar – November 17, 8:30 AM – 2:30 PM PT.
And if you miss any part of the event, sessions will be available for replay immediately after.
Register now: Snowflake Data Cloud Summit 2020
And if you CAN’T make the North American event, we have additional dedicated events for all my global friends too:
The EMEA event will be on November 18th. Register here: Data Cloud Summit – EMEA
For my friends down under in APAC, the event will be November 19: Data Cloud Summit – APAC
And for all the folks in JAPAN, we have an event just for you on November 25th: Data Cloud Summit – Japan
So there’s no reason to miss this great event.
Check out the agenda today and plan your viewing!
See you online.
The Data Warrior and Chief Technical Evangelist, Snowflake