The Data Warrior

Changing the world, one data model at a time. How can I help you?

Archive for the tag “#BetterDataModeling”

Are You Certifiable? 1st #DataVault 2.0 Bootcamp of the Year

A quick note for all the folks out there who have been contemplating diving deep into Dan Linstedt's Data Vault 2.0 System of Business Intelligence.

Dan will be teaching a Data Vault 2.0 Bootcamp in February! You can sign up here.

You’ve read the articles, read the blog posts (mine included), attended the talks at the conferences, maybe even read the Super Charge book…

Are you done trying to figure it out on your own?

Ready to not only learn how to do it right, but get certified as a Data Vault 2.0 Practitioner?

Well, let's get 2016 off to a great start and attend the 1st Data Vault 2.0 Bootcamp of 2016 in beautiful St. Albans, Vermont, taught by none other than the inventor of Data Vault, my good friend Dan Linstedt.

You could of course just buy the new book, and try it out on your own…

But if you are like me, you do much better when you can interact face-to-face with a qualified instructor, ask the hard questions, and get the insights that will make you truly successful.

So why not invest in yourself and your future success? Go sign up now.

As an added incentive, Dan has added some brand new material.

NEW TOPICS

Dan will be discussing DV2 on Hive/Hadoop: the pros and cons, plus some suggestions on how to build it and leverage it properly. He will talk about Satellites on HDFS and Hubs & Links on Hive, discuss the data modeling implications, and cover using SERDe definitions at query time. This is the first time ever that this information will be presented in the DV2 class!
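To give you a flavor of the idea, here is a minimal sketch of my own (not the class material, and all the names are made up): a Satellite kept on HDFS might be declared as a Hive external table over raw JSON files, with the schema applied at read time by a SerDe:

CREATE EXTERNAL TABLE sat_customer_json (
  hub_customer_hk  STRING,  -- hash key of the parent Hub
  load_dts         STRING,  -- load date timestamp
  record_source    STRING,
  customer_name    STRING,
  customer_address STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/data/raw_vault/sat_customer_json';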

Make the commitment to a great 2016 now and go sign up before the class fills up. If you sign up before February 1st, you can save over $400!

To your success!

Kent

The Data Warrior

Data Vault Master and CDVP2

P.S. For you skiers, St. Albans is a short drive to both Stowe and Smuggler's Notch, two great East Coast ski areas, and with the snow they just got, the skiing will be epic. Go take the class, then reward yourself with a little weekend ski trip.

Better Data Modeling: Customizing Oracle SQL Developer Data Modeler (#SQLDevModeler) to Support Custom Data Types

On a recent customer call (for Snowflake), the data architects asked whether Snowflake provides a data model diagramming tool to design and generate data warehouse tables or to view a data model of an existing Snowflake data warehouse, or whether we knew of any that would work with Snowflake.

Well, we do not provide one of our own – our service is the Snowflake Elastic Data Warehouse (#ElasticDW).

The good news is that there are data modeling tools in the broader ecosystem that you can of course use (since we are ANSI SQL compliant).

If you have read my previous posts on using JSON within Snowflake, you also know that we have a new data type called VARIANT for storing semi-structured data like JSON, Avro, and XML.

In this post I will bring those two threads together and show you the steps to customize SDDM so you can model and generate table DDL containing columns that use the VARIANT data type.
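To set expectations, this is the sort of DDL we are after (a minimal sketch; the table and column names are just for illustration):

CREATE TABLE json_demo (
  id        INTEGER,
  load_dts  TIMESTAMP_NTZ,
  v         VARIANT  -- semi-structured payload: JSON, Avro, or XML
);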

Read the details of how I did it here on my Snowflake blog:

Snowflake SQL: Customizing Oracle SQL Developer Data Modeler (SDDM) to Support Snowflake VARIANT – Snowflake

Enjoy!

Kent

The Data Warrior

P.S. If you are in Austin, Texas this weekend, I will be speaking at Data Day Texas (#DDTX16). Snowflake will have a booth there too, so come on by and say howdy!

Better Data Modeling: Discovering Foreign Keys (FK) in #SQLDevModeler (SDDM)

A while back I had an interesting situation when I was attempting to reverse engineer and document a database for a client.

I had a 3rd-party database that had PKs defined on every table, but no FKs anywhere in the database. The question I posed (on the Data Modeler Forum) was:

How do I get the FK Discovery utility to find FK columns with this type of pattern:

Parent PK column = TABCUSTNUM

Child FK column = ABCCUSTNUM

So the root column name (CUSTNUM) is standard, but in every table the column name has a different 3-character "prefix" that is effectively the table short name. Is there a way to get the utility to ignore the first three characters of the column names?
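In other words, the schema looked something like this (a contrived sketch with hypothetical names), with the relationship implied by the column names but never declared:

CREATE TABLE tab_customers (
  TABCUSTNUM  NUMBER(10) NOT NULL,
  TABNAME     VARCHAR2(100),
  CONSTRAINT tab_customers_pk PRIMARY KEY (TABCUSTNUM)
);

CREATE TABLE abc_orders (
  ABCORDNUM   NUMBER(10) NOT NULL,
  ABCCUSTNUM  NUMBER(10),  -- matches TABCUSTNUM, but no FK is declared
  CONSTRAINT abc_orders_pk PRIMARY KEY (ABCORDNUM)
);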

This was in SDDM 4.1.873.

No easy answer.

Well, the ever-helpful Philip was very kind and wrote me a slick custom Transformation Script that did the trick! (Check the post if you want to see the code.)

But wait, there's more!

In his response he mentioned a feature coming in 4.1.888: the ability to include a table prefix as part of an FK column naming template (just like this app had done).

Cool, I thought, but how does that help?

Well, with the template in place, it turns out that you can have the FK Discovery utility search based on the Naming Template model rather than just looking for exactly matching column names.

Using the Custom Naming Template

So recently (today in fact) I was trying to add FKs to the Snowflake DB model I reverse engineered a few weeks back (Jeff pointed out they were missing). I noticed the model had that pattern of a prefix on both the FK and PK column names.

In the CUSTOMER table the PK is C_CUSTKEY. In the ORDER table it is O_CUSTKEY. Nice simple pattern (see the diagram below for more). That reminded me of the previous issue and Philip’s script.

Snowflake Schema

Off to OTN to find that discussion and refresh my memory.

In the post, Philip had posted an example of a template that fit my previous problem:

{table abbr}SUBSTR(4,30,FRONT,{ref column})

He noted that {table abbr} would be the equivalent of what I called the table prefix. So first I went to the table properties and put in the prefixes using the Abbreviation property:

Add Table Abbrev

Then all I had to do was modify his example to account for the underscore and the fact that the main column text would start at character #3 instead of #4:

{table abbr}_SUBSTR(3,30,FRONT,{ref column})
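To see how the template resolves: with a table abbreviation of O and a referenced PK column of C_CUSTKEY, SUBSTR(3,30,FRONT,{ref column}) keeps the text from character #3 onward (CUSTKEY), so the template produces O_CUSTKEY, exactly the pattern already in the model.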

I entered the modified template by going to Properties -> Settings -> Naming Standards -> Templates and then editing the default template:

Set up FK template

Discover FKs!

Now it was just a matter of running the utility. You find that with a right mouse click on the Relational Design node:

Discover FK tool

Next you get the list of candidate FKs:

Create FKs

Note that the utility also suggested some FKs based on the unique constraints (UKs). I did not want those, so I unchecked them before I hit "OK".

The result: all the FKs I wanted were added to my model! Voila!

Snowflake with RI

So I can happily report that Philip's little enhancement works just fine in 4.1.3. WooHoo! I can see this being very useful in lots of cases in the future.
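For reference, a discovered relationship like CUSTOMER to ORDER corresponds to constraint DDL along these lines (my sketch; what SDDM actually generates is the subject of the next post):

ALTER TABLE "ORDER" ADD CONSTRAINT order_customer_fk
  FOREIGN KEY (O_CUSTKEY) REFERENCES CUSTOMER (C_CUSTKEY);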

In a future post (early next year), I will continue by showing how we implemented Referential Integrity constraints in Snowflake DB and whether I can generate the DDL from #SQLDevModeler.

Happy New Year, Y'all!

Kent

The Data Warrior & Snowflake Technical Evangelist

Snowflake SQL: Making Schema-on-Read a Reality (Part 1) 

This is my 1st official post on the Snowflake blog in my new role as their Technical Evangelist. It discusses getting results from semi-structured JSON data using our extensions to ANSI SQL.

Schema? I don’t need no stinking schema!

Over the last several years, I have heard the phrase schema-on-read used to explain the benefit of loading semi-structured data into a Big Data platform like Hadoop. The idea is that you can delay data modeling and schema design until long after the data is loaded (so as not to slow down getting your data while waiting for those darn data modelers).

Every time I heard it, I thought (and sometimes said), "But that implies there is a knowable schema." So really you are just delaying the inevitable need to understand the structure in order to derive some business value from that data. Pay me now or pay me later.

Why delay the pain?
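For a taste of what the post covers, here is a minimal sketch (table and attribute names are hypothetical): once the JSON is loaded into a VARIANT column, you can query its attributes directly with Snowflake's SQL extensions:

SELECT v:customer.name::string  AS customer_name,
       v:customer.city::string  AS city
FROM   json_demo
WHERE  v:customer.state::string = 'TX';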

Check out the rest of the post here:

Snowflake SQL: Making Schema-on-Read a Reality (Part 1) – Snowflake

Enjoy!

Kent

The Data Warrior

Better Data Modeling: What is #DataVault 2.0 and Why do I care?

Have you heard?

Dan Linstedt's new book on Data Vault 2.0 has just been published. It is called Building a Scalable Data Warehouse with Data Vault 2.0. If you are at all into data warehousing and agile design, you need to get this book now. So click here and be done.

For those of you not sure what this DV 2.0 stuff is all about and why you might want to learn about it, I recently did a series of guest posts for Vertabelo to introduce folks to the concepts. In the series I walk you through some of the history of Data Vault and why we need a new way to do data warehousing. Then I get into the basics of modeling the Data Vault, the Business Vault, and finally building Information Marts from a Data Vault.
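To make the core idea concrete, here is a minimal sketch of my own (not an excerpt from the book or the posts): at the heart of a Data Vault model are structures like a Hub for each business key:

CREATE TABLE hub_customer (
  hub_customer_hk  CHAR(32)     NOT NULL,  -- hash of the business key (a DV2 convention)
  customer_bk      VARCHAR(20)  NOT NULL,  -- the business key itself
  load_dts         TIMESTAMP    NOT NULL,
  record_source    VARCHAR(50)  NOT NULL,
  CONSTRAINT hub_customer_pk PRIMARY KEY (hub_customer_hk)
);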

So you can find the posts here:

#1 – Agile Modeling: Not an Option Anymore

#2 – Data Vault 2.0 Modeling Basics

#3 – The Business Data Vault

#4 – Building an Information Mart with Your Data Vault

Once you have read these, I am sure you will want to go buy the new Data Vault 2.0 book and maybe sign up for some online training on LearnDataVault.com.

Model on!

Kent

The Data Warrior

P.S. If you want to catch up, you can still purchase the original Data Vault (1.0) modeling book, Super Charge Your Data Warehouse. It is a great reference book to have on hand (you can get it on Kindle too). Might as well have the whole set.

P.P.S. I turned this series into a Kindle ebook for easier reference. You can find it on my Author Profile, or just click on the book cover in the right sidebar above.
