The Data Warrior

Changing the world, one data model at a time. How can I help you?

Archive for the category “Data Vault”

Data Vault vs. The World (3 of 3)

So far in this series I have compared and contrasted the Data Vault approach with using a replicated ODS or a Kimball-style dimensional structure for an enterprise data warehouse repository. Let me thank everyone who has replied or commented for taking these posts as they were intended, keeping the conversation professional and not starting any flame wars. 🙂

So, to close out the thread, I must of course discuss the classic Inmon-style CIF (Corporate Information Factory) approach. Many of you may also think of this as a 3NF EDW approach.

To start, it should be noted that I began my data warehouse career building 3NF data warehouses and applying Bill Inmon’s principles. Very shortly after I started learning about data warehousing, I had the incredible privilege of not only meeting Mr. Inmon and learning from him, but also co-authoring a book (my first) with him. That book was The Data Model Resource Book. I got to work with Bill on the data warehouse chapters and, because of that experience, became well versed in his theories and principles. For many years I even gave talks at user group events about how to convert an enterprise data model to an enterprise data warehouse model.

So how does this approach work? Basically, you do some moderate denormalization of the source system model (where it is not already denormalized) and add a snapshot date to all the primary keys (to track changes over time). This is of course an oversimplification – there are a number of denormalization techniques that could be used to build a data warehouse following Bill’s original thesis.
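
To make that concrete, here is a rough sketch (the table and column names are just made up for illustration) of what a customer table might look like in this style of EDW, with the snapshot date folded into the primary key:

```sql
-- Hypothetical sketch of a CIF-style EDW table: the source customer
-- table, lightly denormalized, with SNAPSHOT_DATE added to the PK so
-- every load preserves a point-in-time version of each row.
CREATE TABLE edw_customer (
  customer_id    NUMBER        NOT NULL,
  snapshot_date  DATE          NOT NULL,
  customer_name  VARCHAR2(100),
  region_name    VARCHAR2(50),   -- denormalized in from a lookup table
  status_code    VARCHAR2(10),
  CONSTRAINT edw_customer_pk PRIMARY KEY (customer_id, snapshot_date)
);
```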

Additionally, this approach (CIF or even DW 2.0) then calls for building dimensional data marts on top of that EDW (along with other reporting structures as needed). The risks here are similar to those mentioned in the previous posts.

The primary one is that the resulting EDW data structure is usually tightly coupled to the OLTP model, so the risk of having to rework and reload data is very high as the OLTP structure changes over time. Such changes, of course, have downstream impacts on the dimensional models, reports, and dependent extracts.

The addition of snapshot dates to all the PKs in this style of data warehouse model also adds quite a bit of complexity to the load and query logic, as the dates cascade down through chains of parent-child relationships. Getting data out ends up requiring lots of nested MAX(date)-style sub-queries. Miss a sub-query, or get one wrong, and you get the wrong data. Overall, it is a fairly fragile architecture in the long run.
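
To illustrate (building on the hypothetical tables above, plus an equally hypothetical edw_order child table), just answering “what does the data look like right now?” takes a correlated MAX(snapshot_date) sub-query for every table in the join:

```sql
-- Current-state query against a snapshot-dated EDW. Every table in
-- the join needs its own MAX(snapshot_date) sub-query; omit one and
-- the join quietly returns multiple historical versions of a row.
SELECT c.customer_name,
       o.order_total
FROM   edw_customer c
JOIN   edw_order o
       ON o.customer_id = c.customer_id
WHERE  c.snapshot_date = (SELECT MAX(c2.snapshot_date)
                          FROM   edw_customer c2
                          WHERE  c2.customer_id = c.customer_id)
AND    o.snapshot_date = (SELECT MAX(o2.snapshot_date)
                          FROM   edw_order o2
                          WHERE  o2.order_id = o.order_id);
```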

Also, as with the dimensional approach, I have encountered few teams that have been successful implementing this style of data warehouse in an incremental or agile fashion. My bad luck? Maybe…

The loosely coupled Data Vault data model mitigates these risks and also allows for agile deployment.

As discussed in the previous posts, the data model for a Data Vault based data warehouse is based on business keys and processes rather than the model of any one source system. The approach was specifically developed to mitigate the risks and struggles that were evident in the traditional approaches to data warehousing, including what we all considered the Inmon approach.

As I mentioned earlier I got to interact with Bill Inmon while we worked on a book. The interaction did not stop there. I have had many discussions over the years with Bill on many topics related to data warehousing, which of course includes talking about Data Vault. Both Dan and I talked with Bill about the ideas in the Data Vault approach. I spent a number of lunches telling him about my real-world experience with the approach and how it compared to his original approach (since I had done both). There were both overlaps and differences. Initially, Bill simply agreed it sounded like a reasonable approach (which was a relief to me!).

Over a period of time, through many conversations with many people, and after further study and research, we actually won Bill Inmon over and got his endorsement of Data Vault. In June of 2007, Bill Inmon stated for the record:

The Data Vault is the optimal choice for modeling the EDW in the DW 2.0 framework.

So if Bill Inmon agrees that Data Vault is a better approach for modeling an enterprise data warehouse, why would anyone keep using his old methods and not at least consider learning more about Data Vault?

Something to think about, eh?

I hope you enjoyed this little series about Data Vault and will keep it in mind as you get into your new data warehouse projects for 2013.

Kent

P.S. If you are ready to learn more about Data Vault, check out the introductory paper on my White Papers page, or just go for broke and buy the Super Charge book.

Data Vault vs. The World (2 of 3)

In the first post of this series, I discussed advantages of using a Data Vault over a replicated ODS. In this post I am going to discuss Data Vault and the more classical approach of a Kimball-style dimensional (star schema) design.

To be clear upfront, I will be comparing the use of Data Vault against the use of a dimensional model for the enterprise data warehouse repository. Using star schemas for reporting data marts is not at issue (we use those in the Data Vault framework regularly).

I also want to recognize that there are some very expert architects out there who have had great success building Kimball-style EDWs and have mitigated all the risks I am going to mention. Unfortunately, there are not enough of them to go around. So for the rest of us, we might need an alternative approach…

When considering a Kimball-style approach, organizations often design facts and dimensions to hold the operational data from one of their source systems. One downside to this approach is that designing optimal structures requires a very solid understanding of all the types of questions the business needs to answer. Otherwise we risk not having the right fact tables in place to support the ever-changing needs of the business. This tends to make the approach difficult to implement in an agile, iterative manner and can lead to the need for re-engineering. This is especially a risk with conformed dimensions: while adding new fact tables in an iterative fashion is not a problem, having to redesign or rebuild a large conformed dimension could be a big effort.

Additionally, there may be requirements to extract data for various other analyses. Dimensional models do not lend themselves well to all types of extracts. Not every query can be supported by cube-type designs – especially if the data is spread across multiple fact tables with varying levels of granularity.

With a Data Vault solution we can store all the data at an atomic level in a form that can easily be projected into dimensional views, 3NF ODS-style structures, or just flat, denormalized, spreadsheet-style extracts. This allows us to be more agile in addressing changing report and extract requirements than if we had to design new facts and dimensions and then write and test the ETL code to build them.
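
As a quick illustration (all names here are hypothetical, not a prescribed standard), a current-state dimension can be projected as a simple view over a Hub and its Satellite – no new tables, no new ETL:

```sql
-- Hypothetical sketch: projecting a current-state customer dimension
-- as a view over a Hub (the business key) and its Satellite (the
-- descriptive attributes), picking each key's most recent Sat row.
CREATE OR REPLACE VIEW dim_customer AS
SELECT h.hub_customer_key,
       h.customer_num,          -- the business key
       s.customer_name,
       s.status_code
FROM   hub_customer h
JOIN   sat_customer s
       ON s.hub_customer_key = h.hub_customer_key
WHERE  s.load_date = (SELECT MAX(s2.load_date)
                      FROM   sat_customer s2
                      WHERE  s2.hub_customer_key = s.hub_customer_key);
```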

Plus, the instantiated dimensional model (which is highly denormalized) will simply take more space than a Data Vault model, which is more normalized.

Using the dimensional approach as the basis for the foundation layer of an enterprise data warehouse, there is also the risk of having to redesign, drop, and reload both facts and dimensions as the OLTP model evolves. That effort can be very expensive and take a lot of time.

With the Data Vault it is much easier to evolve the warehouse model: we simply add new structures (Hubs, Links, Satellites) to absorb the changes.
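
For example, if a source system starts sending a batch of new customer attributes, a sketch of the response (hypothetical names again) is simply a new Satellite hung off the existing Hub – nothing gets altered, dropped, or reloaded:

```sql
-- Hypothetical sketch: absorbing new source attributes by adding a
-- new Satellite on the existing customer Hub.
CREATE TABLE sat_customer_loyalty (
  hub_customer_key  NUMBER       NOT NULL,  -- points at the existing Hub
  load_date         DATE         NOT NULL,
  record_source     VARCHAR2(50) NOT NULL,
  loyalty_tier      VARCHAR2(20),
  points_balance    NUMBER,
  CONSTRAINT sat_customer_loyalty_pk
    PRIMARY KEY (hub_customer_key, load_date),
  CONSTRAINT sat_customer_loyalty_fk
    FOREIGN KEY (hub_customer_key)
    REFERENCES hub_customer (hub_customer_key)
);
```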

Integrating other source systems can also be a challenge with dimensional models, especially when populating conformed dimensions, as you have to account for all semantic and structural differences between the various systems before you load the data. This could mean a lot of cleansing and transforming of the source data (which could jeopardize your ability to clearly trace data back to the source).

Data Vault avoids this issue by integrating around business keys (via Hubs) and allowing apparently disparate data sets to be associated through Same-As-Link tables, which support dynamic equivalence mappings between the data sets from different sources. In agile terms, these structures can be built in a future sprint, after the data is loaded and profiled. (As a side note, Same-As-Links are also very helpful for using your vault to do master-data-type work.)
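
A minimal sketch of a Same-As-Link (again, the names are illustrative): each row simply asserts that two Hub keys refer to the same real-world customer, so the mapping can be added, corrected, or extended without touching the loaded data:

```sql
-- Hypothetical sketch of a Same-As-Link table: maps a "master" Hub
-- key to an equivalent duplicate key discovered in another source.
CREATE TABLE sal_customer (
  sal_customer_key      NUMBER       NOT NULL,
  hub_customer_key      NUMBER       NOT NULL,  -- the master key
  dup_hub_customer_key  NUMBER       NOT NULL,  -- the equivalent key
  load_date             DATE         NOT NULL,
  record_source         VARCHAR2(50) NOT NULL,
  CONSTRAINT sal_customer_pk PRIMARY KEY (sal_customer_key),
  CONSTRAINT sal_customer_fk1 FOREIGN KEY (hub_customer_key)
    REFERENCES hub_customer (hub_customer_key),
  CONSTRAINT sal_customer_fk2 FOREIGN KEY (dup_hub_customer_key)
    REFERENCES hub_customer (hub_customer_key)
);
```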

Well, those are the primary issues I have run across. I am sure there are more, but this should be enough for now.

Can’t wait to hear your comments! Be sure to tweet this to your friends.

Later,

Kent

P.S. Stay tuned for #3, where I will discuss the traditional Inmon-style 3NF data warehouse.

Data Vault vs. The World (1 of 3)

Okay, maybe not “the world” but it does sometimes seem like it.

Even though the Data Vault has been around for well over 10 years now and has multiple books, videos, and tons of success stories, I am constantly asked to compare and contrast Data Vault with approaches generally accepted in the industry.

What’s up with that?

When was the last time you got asked to justify using a star schema for your data warehouse project?

Or when was that expensive consulting firm even asked “so what data modeling technique do you recommend for our data warehouse?”

Oh…like never.

Such is the life of the “new guy.” (If you are new to Data Vault, read this first.)

So, over the next few posts, I am going to lay out some of the explanations and justifications I use when comparing Data Vault to other approaches to data warehousing.

The first contestant: Poor man’s ODS vs. Data Vault

This approach entails simply replicating the operational (OLTP) tables to another server for read-only reporting. It can serve as a partial data warehouse solution, using something like Oracle’s GoldenGate to support near-real-time operational reporting while minimizing the impact on the operational system.

This solution, however, does not adequately support needs for dimensional analysis, nor does it allow for tracking changes to the data historically (beyond any temporal tracking inherent in the OLTP data model).

A big risk of this approach is that as the OLTP structures continue to morph and change over time, reports and other extracts that access the changed structures would of course break as soon as the change was replicated to the ODS.

How does Data Vault handle this?

Data Vault avoids these problems by using structures that are not tightly coupled to any one source system. So as the source systems change, we simply add Satellite and Link structures as needed. In the Data Vault methodology we do not drop any existing structures, so reports will continue to work until we can properly rewrite them to take advantage of the new structures. If totally new data is added to a source, we would probably end up adding new Hubs as well.

An additional advantage is that because Data Vault uses this loosely coupled approach, we can load data from multiple sources. If we replicate specific OLTP structures, we would not be able to easily integrate other source system feeds – we would have to build another repository to do the integration (which would likely entail duplicating quite a bit of the data).
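
To sketch what that loose coupling looks like (hypothetical names once more): a Hub carries only the business key plus audit columns, so keys from any number of source systems land in the same table, with the record source preserved for traceability:

```sql
-- Hypothetical sketch of a Hub: one row per distinct business key,
-- no matter how many source systems supply it. RECORD_SOURCE records
-- the first system that sent the key.
CREATE TABLE hub_customer (
  hub_customer_key  NUMBER       NOT NULL,  -- surrogate key
  customer_num      VARCHAR2(30) NOT NULL,  -- the business key
  load_date         DATE         NOT NULL,
  record_source     VARCHAR2(50) NOT NULL,  -- e.g. 'CRM' or 'ERP'
  CONSTRAINT hub_customer_pk PRIMARY KEY (hub_customer_key),
  CONSTRAINT hub_customer_uk UNIQUE (customer_num)
);
```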

Don’t get me wrong, there is nothing wrong with using replication tools to build real time operational data stores.

In fact it is an excellent solution to getting your operational reporting offloaded from the main production server.

It is a tried and true solution – for a specific problem.

It is, however, not the right solution if you are building an enterprise data warehouse and need to integrate multiple sources or need to report on changes to your data over time.

So let’s use the right tool for the right job.

Data Vault is the newer, better tool.

In the next two posts I will compare Data Vault to the Kimball-style dimensional approach (part 2 of 3) and then to Inmon-style 3NF (part 3 of 3).

Stay tuned.

Kent

P.S. Be sure to sign up to follow my blog so you don’t miss the next round of Data Vault vs. The World.

 

Happy 2013! What will you do this year?

Happy New Year! Welcome to year #2 of the Oracle Data Warrior.

I hope everyone is looking forward to a bright, happy, and successful year (however you measure it).

For me it will be a year of figuring out my long term business model (maybe?), writing a few more short ebooks (stay tuned), doing my Oracle ACE Director thing, continuing to work as a Data Vault and Data Warehouse advisor and consultant,  presenting at RMOUG, KScope13, and hopefully a few other choice events, and of course writing on this blog (and practicing my martial arts).

That ought to do it, don’t you think?

But you never know what life may throw your way, so I am not tied to any of that really, but that is where my wave seems to be heading today.

One thing I have already done is take advantage of Vizify to build a visual story about myself. I really like the look and feel of the app and the way it presents my information. Check out the animation on the location page and then the timeline on the career page (which is not quite complete yet). Very cool.

How about you? What is on your horizon for 2013?

Cowabunga!

Kent

P.S. See this cool 2012 report WordPress generated automatically. It covers the stats I put in my last post, but with a much nicer presentation. 😉

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

4,329 films were submitted to the 2012 Cannes Film Festival. This blog had 23,000 views in 2012. If each view were a film, this blog would power 5 Film Festivals

Click here to see the complete report.

2012: Year in the Life of an Oracle Data Warrior

Hard to believe it is nearly the end of the year. But…it is here.

I will be taking time off until the end of the year, so I am doing my “year-end” post now.

It was a significant year for me with many new things, events, conferences, and clients. Here is a list, by month, of a few of them:

January

I launched this blog – Oracle Data Warrior! At the stroke of midnight on January 1, I hit publish for this posting. So far I have had over 22,000 views on the site with the best/biggest day drawing 294 views on September 24th. People came to check out a free promotion for my new Kindle book.

So far 78 of you have subscribed to this blog and hence get notification whenever I post something new.

Thanks for your support! (For the rest of you: subscribe now so you don’t miss anything in 2013.)

In January I also launched the Year of the Data Vault by going to Dan Linstedt’s Data Vault certification class in Montreal. It was a great class. Check the January archive for my posts about the class.

February

I posted what has turned out to be THE most popular article so far: The best FREE data modeling tool ever. So far it has had 8,213 views! Wow! (Of course, since a bunch of you just clicked the link, that number has gone up again.)

Also big in February (every year) is the RMOUG Training Days in Denver, Colorado. This year I did the first-ever remote presentation via Skype as part of their pre-conference seminar on data warehousing. My presentation was, of course, on Data Vault. There were a few technical issues, but with the help of my good friend Jerry Ireland we got through it fine.

(Note: For RMOUG 2013, I will actually be presenting in person).

March

Two really big things this month:

  1. I filed with the state of Texas and formed Data Warrior LLC, signed my very first 1099 (independent) contract and became an official business.
  2. The Data Vault Training Portal was launched. You can read my post about that here.

April

Business wise, I started the 1099 contract work at MD Anderson Cancer Center and got to work building a data vault for one of their internal projects.

On the blog, I made some modifications to the layout and added a War Chest page with links to some resources that cost a little money (as opposed to my White Papers page, which has free stuff).

May

After one month of being an independent contractor, I bought my first smartphone – an LG Nitro. I am not really a huge gadget guy, so I had put this off for some time, but finally gave in so I could tweet at the upcoming ODTUG conference in San Antonio.

Of course this means I signed up for Twitter. You can find me there at https://twitter.com/KentGraziano.

June

June was a HUGE month.

  1. The Data Vault modeling book hit #1 on Kindle.
  2. I got “promoted” to Oracle ACE Director (and found out via a Facebook post!).
  3. And of course there was KScope12 in San Antonio, Texas. I taught Chi Gung every morning at 7 AM and blogged about the event every night (at about midnight). Just check my June archives for all the posts and plenty of pictures.

July

Slowed down a bit here. Recovered from KScope12 (started planning for KScope13). Wrote a bit about work/life balance and posted this cool InfoGraphic.

August

Another first for me in August: I published my first eBook on Kindle, about data model design reviews.

Then we had an excellent family vacation with my father back east. We drove through the Adirondack Mountains in New York State and then to the Green Mountains of Vermont, where we stayed at the Trapp Family Lodge. It gets my highest recommendation as a family-friendly, environmentally aware, upscale, outdoor vacation resort. Pay the money and go – you only live once!

While on the trip, my nine-year-old son came up with a great idea for a blog post: How to make data modeling fun. When we got back, I wrote and posted it here. (Soon it will be a presentation at a conference near you.)

September

This was another big and fun month – all about Oracle Open World 2012 and getting to attend my first Oracle ACE Director meeting at Oracle HQ. Like at KScope, I blogged every night in the wee hours to capture what I saw and learned that day. The smartphone got a lot of use taking pictures in sessions and around San Francisco. It is all logged in the September archives.

October

Actually OOW 2012 bled over into October so there are even more posts and pictures in the October Archive folder.

The other biggie in October was that I finished out my contract at MD Anderson Cancer Center and started a new gig at McKesson Specialty Health (US Oncology). This has turned out to be a great project with a good team (like I had at MD Anderson), but with the added benefit of being only 9 miles from my house. This is the shortest commute I have had since college! It saves me 2.5 hours a day of driving.

Needless to say, that is a very nice aspect of the job.

November

This month was less about data (and my normal work) and more about fitness, a new habit, and being a warrior. (Though I did get accepted to present at the RMOUG Training Days in Denver.)

The highlight of the month was attending the 20th Anniversary celebration for the International Combat Hapkido Federation. I have been attending their workshops and seminars for over 15 of those years and have had the privilege to train with several of their masters as well as their founder and grandmaster, John Pellegrini. Combat Hapkido is a very practical martial art for self-defense and a lot of fun to learn and practice.

It was a great event with back-to-back workshops (i.e., workouts!) with many masters and grandmasters. We got training in Tai Chi, stretching, conditioning, kicking, Filipino Escrima, ground survival, and pressure points. There were actual martial arts celebs in attendance, including Bill “Superfoot” Wallace, Cynthia Rothrock, and Stephen Hayes.

Since my main art is Tae Kwon Do, I was very privileged to meet and train with Grandmaster Bill Wallace (who actually has signed my last two black belt certificates along with GM Pellegrini). GM Wallace’s session was challenging and fun. He is quite entertaining.

Me (right) with GM Superfoot Wallace (center) and Master Ramon Voils

At 67 years old, GM Wallace can kick faster and higher than pretty much everyone I have ever trained with. I can only hope to be doing so well when I reach that age.

This is why he is called “Superfoot”

For more pictures from the event, you can subscribe to my newsfeed on Facebook or like my page. You might even find a picture of me in a suit!

December

And now we are up to this final month of 2012. I have been very busy with my work at McKesson, so I have only gotten one post out, about the newest release of SQL Developer Data Modeler (which I use nearly every day!).

I did, however, recently get notification that several of my papers were accepted for presentation at the ODTUG KScope13 conference in New Orleans next June. Be sure to register for that event too!

Yes it was quite the busy year…

Stay tuned for 2013 and see what happens.

Merry Christmas and Happy New Year!

Kent

The Oracle Data Warrior
