The Data Warrior

Changing the world, one data model at a time. How can I help you?

Archive for the tag “data model design”

#SQLDevModeler Tip: From Domain to Database… A Comment Conundrum

Great tip on creating a custom transformation script in SQL Developer Data Modeler (SDDM) from the awesome David Schleis:

Recently on the Data Modeler Forum, I came across this question:

Is it possible to mirror domain comments from Domain Administration into the attribute “Comments in RDBMS”? I would like to mirror these to the DDL so they can then be available in column comments in the database.

I knew that one of the example transformation scripts provided with Data Modeler copies the column “Comments” property to “Comments in RDBMS”, so I thought I would point this out to the questioner, and that would be that.  But….

See the rest of the story… From Domain to Database… A Comment Conundrum
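For context, the end goal of that transformation is for the domain comment text to land in the generated DDL as an ordinary column comment. A minimal sketch of what the generated output might look like (the table, column, and comment text here are hypothetical):

```sql
-- Hypothetical result of the transformation: the domain's
-- "Comments in RDBMS" text becomes a column comment in the database,
-- visible to anyone querying the data dictionary.
COMMENT ON COLUMN customer.customer_status IS
  'Valid values: ACTIVE, INACTIVE, PENDING (from CUSTOMER_STATUS domain)';
```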

Model on!

Kent

The Data Warrior

Maintaining disabled FK’s, wisdom or farce?

A while back, I wrote a post about having FKs (foreign keys) in your data warehouse.

Well, a similar question came up recently on an Oracle forum with the above title. It is a fair question and it does surface fairly regularly in a variety of contexts (not just data warehousing).

Of course, as The Data Warrior, I felt it was my duty to respond.

The Question

Is there any reason to maintain a permanently disabled FK in the data model?  I’m not envisioning a reason to do it.  If it is not going to be enabled, then from my perspective, it would not make any sense to have it defined.  If anything, provide the definition of the relationship in the comment of the child column.

My Answer

Yes, by all means keep the FK please!

I see three good reasons for doing so:

  1. It is valuable metadata (& documentation). If somebody reverse engineers the database (say with ERWin or Oracle Data Modeler), the FK shows up in the diagram (way better than having to read a column comment to find out).
    Data Vault 2.0 Example
    Data Vault 2.0 Example

    A picture is worth a thousand words!


  2. BI Metadata – If you want to use any sort of reporting or BI tool against the database, most tools will import the FK definition along with the tables and build the proper join conditions. That is way better than having someone guess what the join should be and then manually add it to the metadata layer in the reporting tool. Examples that can read the Oracle data dictionary include OBIEE, Business Objects, COGNOS, Looker, and many others. (Note that since the FK is not enforced in the database, you might want to make sure these are treated as outer joins, lest you lose some transactions in the reports.)
  3. The Oracle optimizer will use disabled constraints to improve query performance of joins. Again, this is metadata in the data dictionary which the optimizer can read. This is documented in the Oracle Data Warehouse guide and I have validated it on multiple occasions with Oracle product management.
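As an illustration of what "keeping the FK" means in practice, here is a minimal sketch of a disabled but documented FK in Oracle DDL; the table and column names are hypothetical:

```sql
-- Hypothetical warehouse tables. The FK is declared but never enforced:
-- DISABLE turns off enforcement, NOVALIDATE skips checking existing rows,
-- and RELY tells the Oracle optimizer it may still trust the relationship
-- when optimizing joins (e.g., for join elimination).
ALTER TABLE sales_fact
  ADD CONSTRAINT fk_sales_customer
  FOREIGN KEY (customer_key)
  REFERENCES customer_dim (customer_key)
  RELY DISABLE NOVALIDATE;
```

With this in place, the relationship still shows up in the data dictionary for modeling tools, BI tools, and the optimizer, without any enforcement cost at load time.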

While #3 applies specifically to Oracle, for other databases like MS SQL Server and Snowflake, #1 and #2 still apply.

Even if only one of the above is true for a given database, that, in my opinion, still justifies keeping the disabled constraint around.

Final Answer = Wisdom

What do you think? Feel free to comment below.

And please share on your favorite social media platform!

Model on!

Kent

The Data Warrior

 

It WAS the #Best #DataVault Event Ever!

Last week I had the pleasure of spending a few days in lovely Stowe, Vermont at the Stoweflake Mountain Resort and Spa attending the 3rd Annual World Wide Data Vault Consortium (#WWDVC). Not only was the location picturesque, the weather was near perfect, the beer was tasty, and the learning and networking were outstanding.

We had 75 attendees coming from all over the world – Germany, Switzerland, Canada, England, Australia, New Zealand, The Netherlands, USA, Finland, and India. Quite a turnout!


Day 1- Data Vault Brainstorming

This year I arrived early enough to participate in what is arguably the best part of the event – a full day, open forum discussion with certified Data Vault modelers and practitioners, led by the inventor of Data Vault, Dan Linstedt.

The brain power in the room was stunning. There were about 30 people in all, and we each got to introduce ourselves and talk a bit about what we had been doing with Data Vault. It was great to hear the many and varied ways in which Data Vault is being used across multiple industries (including a US intelligence agency – but that is a secret). Everything from traditional data warehousing and BI, to real-time streaming IoT data, to virtual Data Vaults and virtualized information marts, to using Data Vault to help with Master Data Management (MDM). It was eye-opening and exciting to hear all these applications and opportunities.

If you are not yet certified, get certified! Then you can attend this session at WWDVC 2017 (spoiler – at Stoweflake again!). And you are in luck as Dan just announced three classes later this year in St Albans, Vermont. Plus there are multiple classes coming up in Europe as well.

Day 2- Hands-on Workshops

Another unprecedented day at WWDVC. The three platinum sponsors, AnalytixDS, Talend, and Varigence, all ran 3-hour hands-on workshops. These were a fantastic opportunity to see how these vendors have really stepped up to the plate to support quickly building Data Vault solutions with their tools.

These were great sessions, led by highly qualified folks. They showcased some great solutions and answered a lot of questions.

All three sessions were standing room only – with over 35 attendees. (We had to drag in chairs from other rooms!)

Be sure to make time to attend these next year as I am sure they will be on the agenda again.

Day 3 – The Main Event Begins

Yes, all the way to Day 3 before the official kickoff with keynote and speakers.

Dan, of course, got us started with a welcome, thanks to all the sponsors, and housekeeping. Nicely, this event has only one room and one track, so no one has to pick between sessions!


Keynote

The keynote this year was Swimming in the Data Lake by none other than the Father of Data Warehousing, Bill Inmon. I greatly enjoyed his somewhat irreverent look at our industry and his discussion on Big Data and the Data Lake concepts. It was quite a humorous talk (“I don’t mean to offend anybody, but….”). I would say it is one of the best talks I have ever heard Bill give over my 20+ years of knowing him (so I have heard a few).

WWDVC_BillInmon_DataLake

And being a prolific author, Bill of course has a new book out on Data Lakes (available now on Amazon here).

WWDVC_BillInmon_DataLakeBook

Being good geeks, several of us did manage to get our picture taken with Mr. Inmon as “social evidence” that we know him (well, I actually did co-author a book with him back in the day).

Kent with Bill 2016

Lots of Talks

Yes, it was a full day with tons of stuff to fill our heads with ideas: new, useful, and occasionally controversial. (Stay tuned for videos on all of these!)

Dan’s business partner, Sanjay Pande, came all the way from India to talk about Data Vault 2.0 on Hadoop. Roelant Vos came again from Australia to give us a business-based view of a Data Vault project on customer-centric analytics at his company (Allianz). Mary Mink and Sam Bendayan of Ultimate Software came for the 2nd year to talk about how their SaaS company is using Data Vault to provide customer value. This time they talked about their efforts to move to virtual information marts (very cool).

I did my presentation on Building a Virtualized ODS. This was a real-life example from my consulting last year of running an agile data warehouse project based on Data Vault architectural principles. It was a fun talk with lots of interaction. I love challenging the norm, then proving it works!

KentWWDVC16_VODS

Of course I did have to do a little intro promo about my employer, Snowflake Computing. I am happy to say there was quite a bit of interest in our cloud-native, elastic data warehouse offering.

KentWWDVC16_Snowflake

After my talk I did a drawing for a GoPro camera (courtesy of Snowflake). I am happy to say it went to Russell Searle from Australia! This man loves Data Vault so much he has paid his own way to Vermont twice now to attend WWDVC. Now that is dedication!

KentWWDVC16_GoPro

Days 4 & 5

Sadly, I had other commitments back in Texas and could not stay for these days (but did follow along a bit on Twitter). If you want to see everything that happened, search Twitter for #WWDVC.

One fun thing on Day 4 was a few people got to go up in a tethered hot air balloon. Hopefully I can try that next year.

WWDVC_StoweflakeBalloon

Other Fun Stuff

Of course not everything happens in the sessions. Lots of good networking and information exchange happens informally at these events. I did several impromptu demonstrations of Snowflake. The German and Australian contingents were quite interested and can’t wait until Snowflake is available in their regions.


Thanks to my friends Paul and Raphael at WhereScape for loaning me their big monitor!

I introduced a very international crowd to the best northern-style, southern BBQ at the Sunset Grille. We had good Data Vault, and non-DV, conversations along with finger-licking ribs, brisket, and pulled pork (and beer, of course).


Takeaways

Every year, as he closes out the event, Dan tries to summarize key learnings for everyone to take home. Here they are for WWDVC 2016:

WWDVC_conclusions

Well, that is it for this time around. With such a great event it is impossible to adequately cover everything, but I hope this is enough to get you to put WWDVC 2017 on your event calendar. Ask for the time off now!

Safe travels to all the attendees. See you again soon.

Kent

The Data Warrior

Better Data Modeling: Customizing Oracle Sql Developer Data Modeler (#SQLDevModeler) to Support Custom Data Types

On a recent customer call (for Snowflake), the data architects were asking if Snowflake provided a data model diagramming tool to design and generate data warehouse tables or to view a data model of an existing Snowflake data warehouse. Or if we knew of any that would work with Snowflake.

Well, we do not provide one of our own – our service is the Snowflake Elastic Data Warehouse (#ElasticDW).

The good news is that there are data modeling tools in the broader ecosystem that you can of course use (since we are ANSI SQL compliant).

If you have read my previous posts on using JSON within Snowflake, you also know that we have a new data type called VARIANT for storing semi-structured data like JSON, AVRO, and XML.

In this post I will bring it all together and show you the steps to customize SDDM so you can model and generate table DDL containing columns that use the VARIANT data type.
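To give a flavor of the end goal, here is a sketch of the kind of DDL and query the VARIANT type enables in Snowflake; the table name and JSON path are hypothetical:

```sql
-- A hypothetical staging table using Snowflake's VARIANT type to hold
-- raw semi-structured documents (JSON, AVRO, XML) in a single column.
CREATE TABLE json_stage (
  record_id INTEGER,
  payload   VARIANT
);

-- Snowflake lets you traverse the stored JSON with path notation and
-- cast the result, so the document can be queried like ordinary columns.
SELECT payload:customer.name::STRING AS customer_name
FROM json_stage;
```

The customization in SDDM is about getting that VARIANT keyword recognized by the modeling tool so it survives round-trips between model and generated DDL.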

Read the details of how I did it here on my Snowflake blog:

Snowflake SQL: Customizing Oracle Sql Developer Data Modeler (SDDM) to Support Snowflake VARIANT – Snowflake

Enjoy!

Kent

The Data Warrior

P.S. If you are in Austin, Texas this weekend, I will be speaking at Data Day Texas (#DDTX16). Snowflake will have a booth there too, so come on by and say howdy!

Better Data Modeling: Discovering Foreign Keys (FK) in #SQLDevModeler (SDDM)

A while back I had an interesting situation when I was attempting to reverse engineer and document a database for a client.

I had a 3rd party database that had PKs defined on every table but no FKs in the database. The question I posed (on the Data Modeler Forum) was:

How do I get the FK Discover utility to find FK columns with this type of pattern:

Parent PK column = TABCUSTNUM

Child FK column = ABCCUSTNUM

So the root column name (CUSTNUM) is standard, but in every table the column name has a different 3-character “prefix” that is effectively the table short name. Is there a way to get the utility to ignore the first three characters of the column names?

This was in SDDM 4.1.873.

No easy answer.

Well, the ever-helpful Philip was very kind and wrote me a slick custom Transformation Script that did the trick! (Check the post if you want to see the code.)

But wait, there’s more!

In his response he mentioned a feature coming in 4.1.888 – the ability to include a table prefix as part of a FK column naming template (just like this app had done).

Cool, I thought, but how does that help?

Well with the template in place it turns out that you can have the FK Discovery utility search based on the Naming Template model rather than just look for exact matching column names.

Using the Custom Naming Template

So recently (today in fact) I was trying to add FKs to the Snowflake DB model I reverse engineered a few weeks back (Jeff pointed out they were missing). I noticed the model had that pattern of a prefix on both the FK and PK column names.

In the CUSTOMER table the PK is C_CUSTKEY. In the ORDER table it is O_CUSTKEY. Nice simple pattern (see the diagram below for more). That reminded me of the previous issue and Philip’s script.

Snowflake Schema

Off to OTN to find that discussion and refresh my memory.

In the post, Philip has posted an example of a template that fit my previous problem:

{table abbr}SUBSTR(4,30,FRONT,{ref column})

With the note that {table abbr} would be the equivalent of what I called the table prefix. So first I went to the table properties and put in the prefixes using the Abbreviation property:

Add Table Abbrev

Then all I had to do was modify his example to account for the underscore and the fact that the main column text would start at character #3 instead of #4:

{table abbr}_SUBSTR(3,30,FRONT,{ref column})

For example, with the abbreviation O on the ORDER table, this template turns the referenced PK column C_CUSTKEY into the expected FK column name O_CUSTKEY.

I input that by going to Properties -> Settings -> Naming Standards -> Templates and then editing the default template:

Set up FK template

Discover FKs!

Now it was just a matter of running the utility. You find that with a right mouse click on the Relational Design node:

Discover FK tool

Next you get the list of candidate FKs:

Create FKs

Note that the utility also suggested some FKs based on the unique constraints (UKs) as well. I did not want those, so I unchecked them before I hit “OK”.

The result was getting all the FKs I wanted added into my model! Voila!

Snowflake with RI

So I can happily report that Philip’s little enhancement works just fine in 4.1.3. WooHoo! I can see this being very useful for lots of cases in the future.

In a future post (early next year), I will continue with showing how we implemented Referential Integrity constraints in Snowflake DB and if I can generate the DDL from #SQLDevModeler.

Happy New Year Y’all

Kent

The Data Warrior & Snowflake Technical Evangelist
