The Data Warrior

Changing the world, one data model at a time. How can I help you?

Data Vault vs. The World (3 of 3)

So far in this series I have compared and contrasted the Data Vault approach with using a replicated ODS and with a dimensional Kimball-style structure for an enterprise data warehouse repository. Let me thank everyone who has replied or commented for taking these posts as they were intended, keeping the conversation professional and not starting any flame wars. 🙂

So to close out the thread I must, of course, discuss the classic Inmon-style CIF (Corporate Information Factory) approach. Many of you may also think of this as a 3NF EDW approach.

To start, it should be noted that I began my data warehouse career building 3NF data warehouses and applying Bill Inmon’s principles. Very shortly after I began learning about data warehousing, I had the incredible privilege of not only meeting Mr. Inmon and learning from him, but also co-authoring a book (my first) with him. That book was The Data Model Resource Book. I worked with Bill on the data warehouse chapters and, because of that experience, became well versed in his theories and principles. For many years I even gave talks at user group events about how to convert an enterprise data model to an enterprise data warehouse model.

So how does this approach work? Basically, you do some moderate denormalization of the source system model (where it is not already denormalized) and add a snapshot date to all the primary keys (to track changes over time). This of course is an oversimplification – there are a number of denormalization techniques that could be used to build a data warehouse following Bill’s original thesis.
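
To make this a bit more concrete, here is a minimal sketch of that pattern. All of the table and column names are made up for illustration; this is not from any real system (or from Bill’s books).

```sql
-- Hypothetical OLTP source table
CREATE TABLE customer (
  customer_id    NUMBER        NOT NULL,
  customer_name  VARCHAR2(100),
  region_code    VARCHAR2(10),
  CONSTRAINT pk_customer PRIMARY KEY (customer_id)
);

-- Corresponding 3NF-style EDW table: same basic shape, lightly denormalized,
-- with a snapshot date added to the primary key to track changes over time
CREATE TABLE dw_customer (
  customer_id    NUMBER        NOT NULL,
  snapshot_date  DATE          NOT NULL,
  customer_name  VARCHAR2(100),
  region_code    VARCHAR2(10),
  region_name    VARCHAR2(50),          -- denormalized from a region lookup table
  CONSTRAINT pk_dw_customer PRIMARY KEY (customer_id, snapshot_date)
);
```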

Additionally, this approach (CIF or even DW 2.0) then calls for building dimensional data marts on top of that EDW (along with other reporting structures as needed). The risks here are similar to those mentioned in the previous posts.

The primary one is that the resulting EDW data structure is usually pretty tightly coupled to the OLTP model, so the risk of having to rework and reload data is very high as the OLTP structure changes over time. That, of course, has downstream impacts on the dimensional models, reports, and dependent extracts.

The addition of snapshot dates to all the PKs in this style of data warehouse model also adds quite a bit of complexity to the load and query logic, as the dates cascade down through chains of parent-child relationships. Getting data out ends up requiring lots of nested MAX(date) sub-queries. Miss a sub-query, or get one wrong, and you get the wrong data. Overall, it is a fairly fragile architecture in the long run.
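
For example, just getting the current picture of customers and their orders out of such a model tends to look something like the query below. The tables and columns are hypothetical (a dw_order table shaped like the dw_customer table sketched above), but the nested MAX(snapshot_date) pattern is the point:

```sql
-- Current picture of customers and their orders: every level in the
-- parent-child chain needs its own correlated MAX(snapshot_date) sub-query
SELECT c.customer_id,
       c.customer_name,
       o.order_id,
       o.order_total
  FROM dw_customer c
  JOIN dw_order    o
    ON o.customer_id = c.customer_id
 WHERE c.snapshot_date = (SELECT MAX(c2.snapshot_date)
                            FROM dw_customer c2
                           WHERE c2.customer_id = c.customer_id)
   AND o.snapshot_date = (SELECT MAX(o2.snapshot_date)
                            FROM dw_order o2
                           WHERE o2.order_id = o.order_id);
-- Miss one of those sub-queries, or correlate it incorrectly,
-- and the result set is quietly wrong.
```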

Also, as with the dimensional approach, I have encountered few teams that have been successful implementing this style of data warehouse in an incremental or agile fashion. My bad luck? Maybe…

The loosely coupled Data Vault data model mitigates these risks and also allows for agile deployment.

As discussed in the previous posts, the data model for a Data Vault based data warehouse is based on business keys and processes rather than the model of any one source system. The approach was specifically developed to mitigate the risks and struggles that were evident in the traditional approaches to data warehousing, including what we all considered the Inmon approach.

As I mentioned earlier, I got to interact with Bill Inmon while we worked on a book. The interaction did not stop there. I have had many discussions with Bill over the years on many topics related to data warehousing, which of course includes talking about Data Vault. Both Dan and I talked with Bill about the ideas in the Data Vault approach. I spent a number of lunches telling him about my real-world experience with the approach and how it compared to his original approach (since I had done both). There were both overlaps and differences. Initially, Bill simply agreed it sounded like a reasonable approach (which was a relief to me!).

Over a period of time, through many conversations with many people, and with further study and research, we actually won Bill Inmon over and got his endorsement of Data Vault. In June of 2007, Bill Inmon stated for the record:

The Data Vault is the optimal choice for modeling the EDW in the DW 2.0 framework.

So if Bill Inmon agrees that Data Vault is a better approach for modeling an enterprise data warehouse, why would anyone keep using his old methods and not at least consider learning more about Data Vault?

Something to think about, eh?

I hope you enjoyed this little series about Data Vault and will keep it in mind as you get into your new data warehouse projects for 2013.

Kent

P.S. – If you are ready to learn more about Data Vault, check out the introductory paper on my White Papers page, or just go for broke and buy the Super Charge book.

Data Vault vs. The World (2 of 3)

In the first post of this series, I discussed advantages of using a Data Vault over a replicated ODS. In this post I am going to discuss Data Vault and the more classical approach of a Kimball-style dimensional (star schema) design.

To be clear upfront, I will be comparing the use of Data Vault versus a dimensional model for the enterprise data warehouse repository. Using star schemas for reporting data marts is not at issue (we use those in the Data Vault framework regularly).

I also want to recognize that there are some very expert architects out there who have had great success building Kimball-style EDWs and mitigating all the risks I am going to mention. Unfortunately, there are not enough of them to go around. So for the rest of us, we might need an alternative approach…

When considering a Kimball-style approach, organizations often design facts and dimensions to hold the operational data from one of their source systems. One downside to this approach is that, to design optimal structures, we have to have a very solid understanding of all the types of questions the business needs to answer. Otherwise we risk not having the right fact tables designed to support the ever-changing needs of the business. This tends to make it difficult to implement in an agile, iterative manner and can lead to the need for re-engineering. This is especially a risk with conformed dimensions: while adding new fact tables in an iterative fashion is not a problem, having to redesign or rebuild a large conformed dimension can be a big effort.

Additionally, there may be requirements to extract data for various other analyses. Dimensional models do not lend themselves well to all types of extracts. Not every query can be supported by cube-type designs – especially if the data is spread across multiple fact tables with varying levels of granularity.

With a Data Vault solution we can store all the data at an atomic level in a form that can easily be projected into dimensional views, 3NF ODS-style structures, or flat, denormalized, spreadsheet-style extracts. This allows us to be much more agile in addressing changing report and extract requirements than if we had to design new facts and dimensions, then write and test the ETL code to build them.
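
As a rough illustration, a dimension can often be projected straight off a Hub and its Satellite as a view, with no new physical structures or ETL. The Hub and Satellite names below (hub_customer, sat_customer_detail) are hypothetical, and I am assuming a simple load_date on the Satellite:

```sql
-- Project a customer dimension directly from a Hub and its Satellite
-- (hub_customer and sat_customer_detail are illustrative names)
CREATE OR REPLACE VIEW dim_customer_v AS
SELECT h.customer_key,                  -- the business key stored on the Hub
       s.customer_name,
       s.region_code,
       s.load_date
  FROM hub_customer h
  JOIN sat_customer_detail s
    ON s.hub_customer_id = h.hub_customer_id
 WHERE s.load_date = (SELECT MAX(s2.load_date)   -- current Satellite row
                        FROM sat_customer_detail s2
                       WHERE s2.hub_customer_id = s.hub_customer_id);
```

If the reporting requirements change, we change or add views; the underlying vault tables and their history stay untouched.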

Plus, the instantiated dimensional model (which is highly denormalized) will simply take more space than a Data Vault model, which is more normalized.

Using the dimensional approach as the basis for the foundation layer of an enterprise data warehouse also carries the risk of having to redesign, drop, and reload both facts and dimensions as the OLTP model evolves. That effort can be very expensive and take a lot of time.

With the Data Vault it is much easier to evolve the warehouse model by simply adding new structures (Hubs, Links, Sats) to absorb the changes.

Integrating other source systems can also be a challenge with dimensional models, especially when populating conformed dimensions, as you have to account for all the semantic and structural differences between the various systems before you load the data. This could mean a lot of cleansing and transforming of the source data (which could jeopardize your ability to clearly trace data back to its source).

Data Vault avoids this issue by integrating around business keys (via Hubs) and allowing apparently disparate data sets to be associated through Same-As-Link tables, which support dynamic equivalence mappings between the data sets from different sources. In agile terms, these structures can be built in a future sprint, after the data is loaded and profiled. (As a side note, Same-As-Links are also very helpful for using your vault to do master data type work.)
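
Here is a minimal sketch of a Same-As-Link, simplified from the full Data Vault standards; the names are illustrative and I am assuming a hub_customer Hub already exists. It simply records that two Hub keys, loaded from different sources, refer to the same real-world thing, so the equivalence can be added or corrected later without touching the data already loaded:

```sql
-- Same-As-Link: maps two hub_customer rows (e.g. the same customer keyed
-- differently by two source systems) as equivalent
CREATE TABLE sal_customer (
  hub_customer_id_master  NUMBER       NOT NULL,  -- the "surviving" key
  hub_customer_id_dup     NUMBER       NOT NULL,  -- equivalent key from another source
  load_date               DATE         NOT NULL,
  record_source           VARCHAR2(50),
  CONSTRAINT pk_sal_customer
    PRIMARY KEY (hub_customer_id_master, hub_customer_id_dup, load_date)
);
```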

Well, those are the primary issues I have run across. I am sure there are more, but this should be enough for now.

Can’t wait to hear your comments! Be sure to tweet this to your friends.

Later,

Kent

P.S. Stay tuned for #3, where I will discuss the traditional Inmon 3NF-style data warehouse.

Data Vault vs. The World (1 of 3)

Okay, maybe not “the world,” but it does sometimes seem like it.

Even though Data Vault has been around for well over 10 years now, and has multiple books, videos, and tons of success stories behind it, I am constantly asked to compare and contrast Data Vault with approaches generally accepted in the industry.

What’s up with that?

When was the last time you got asked to justify using a star schema for your data warehouse project?

Or when was that expensive consulting firm even asked “so what data modeling technique do you recommend for our data warehouse?”

Oh…like never.

Such is the life of the “new guy.” (If you are new to Data Vault, read this first.)

So, over the next few posts, I am going to lay out some of the explanations and justifications I use when comparing Data Vault to other approaches to data warehousing.

The first contestant: Poor man’s ODS vs. Data Vault

This approach entails simply replicating the operational (OLTP) tables to another server for read-only reporting. It can serve as a partial data warehouse solution, using something like Oracle’s GoldenGate to support near real-time operational reporting while minimizing the impact on the operational system.

This solution, however, does not adequately support the need for dimensional analysis, nor does it allow for tracking changes to the data over time (beyond any temporal tracking inherent in the OLTP data model).

A big risk of this approach is that as the OLTP structures continue to morph and change over time, reports and other extracts that access the changed structures would of course break as soon as the change was replicated to the ODS.

How does Data Vault handle this?

Data Vault avoids these problems by using structures that are not tightly coupled to any one source system. So as the source systems change we simply add Satellite and Link structures as needed.  In the Data Vault methodology we do not drop any existing structures so reports will continue to work until we can properly rewrite them to take advantage of the new structure.  If there is totally new data added to a source, we would probably end up adding new Hubs as well.
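
To give a feel for why that works, here is a minimal sketch of a Hub and Satellite pair; all names are hypothetical and the columns are trimmed down for illustration. The Hub carries only the business key, and each Satellite hangs a set of descriptive attributes off of it, so new attributes or a new source typically become a new Satellite rather than a change to anything already loaded:

```sql
-- Hub: one row per business key, independent of any single source system
CREATE TABLE hub_customer (
  hub_customer_id  NUMBER        NOT NULL,
  customer_key     VARCHAR2(50)  NOT NULL,   -- the business key
  load_date        DATE          NOT NULL,
  record_source    VARCHAR2(50),
  CONSTRAINT pk_hub_customer PRIMARY KEY (hub_customer_id),
  CONSTRAINT uq_hub_customer UNIQUE (customer_key)
);

-- Satellite: descriptive attributes for that key, tracked over time.
-- A new source or a new set of attributes becomes another Satellite table,
-- leaving the Hub, the existing Satellites, and the reports on them alone.
CREATE TABLE sat_customer_detail (
  hub_customer_id  NUMBER        NOT NULL,
  load_date        DATE          NOT NULL,
  customer_name    VARCHAR2(100),
  region_code      VARCHAR2(10),
  record_source    VARCHAR2(50),
  CONSTRAINT pk_sat_customer_detail PRIMARY KEY (hub_customer_id, load_date),
  CONSTRAINT fk_sat_customer FOREIGN KEY (hub_customer_id)
    REFERENCES hub_customer (hub_customer_id)
);
```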

An additional advantage is that because Data Vault uses this loosely coupled approach we can load data from multiple sources. If we replicate specific OLTP structures, we would not be able to easily integrate other source system feeds – we would have to build another repository to do the integration (which would likely entail duplicating quite a bit of the data).

Don’t get me wrong, there is nothing wrong with using replication tools to build real time operational data stores.

In fact it is an excellent solution to getting your operational reporting offloaded from the main production server.

It is a tried and true solution – for a specific problem.

It is, however, not the right solution if you are building an enterprise data warehouse and need to integrate multiple sources or report on changes to your data over time.

So let’s use the right tool for the right job.

Data Vault is the newer, better tool.

In the next two posts I will compare Data Vault to the Kimball-style dimensional approach (part 2 of 3) and then to Inmon-style 3NF (part 3 of 3).

Stay tuned.

Kent

P.S. Be sure to sign up to follow my blog so you don’t miss the next round of Data Vault vs. The World.

 

Tech Tip: Connect to SQL Server Using Oracle SQL Developer (updated)

I spend a lot of time reverse engineering client databases to see what kind of design they are working with or to simply create a data model diagram for them (so they know what they have).

Along the way I often need to actually look at the data as well to do some analysis and profiling.

Often this means looking at data and models in SQL Server as well as Oracle.

What’s an Oracle Data Warrior to do?

Hook up my FREE handy dandy Oracle SQL Developer to the SQL Server database.

How do you do that?

First you need to get the right driver. You can find it here: http://sourceforge.net/projects/jtds/files/jtds/1.2.5/jtds-1.2.5-dist.zip/download

NOTE: For SQL Developer 4.0EA3 and SQL Developer Data Modeler 4.0 (production) you now need jtds-1.3.1. Get it here: http://sourceforge.net/projects/jtds/files/

Then follow these steps:

  1. Download and unzip the file into the main SQL Developer directory (or the directory of your choice).
  2. In SQL Developer go to Tools -> Preferences -> Database -> Third party JDBC Drivers
  3. Click the “add entry” button
  4. Navigate to the jtds-1.2.5.jar file (or the 1.3.1 jar for 4.x installs).
  5. Save and exit preferences.
  6. Close and restart SQL Developer
  7. Open “Add Connection” – there should now be a SQL Server tab.
SQL Developer Preferences

With this in place, you can now connect to SQL Server without having to load any other software.

Pretty useful.

Happy Querying!

Kent

P.S. You can connect to other non-Oracle dbs as well. Check out this post by Jeff Smith for even more details.

Additional Notes on SSO errors:

Lots of folks, including me, have had issues getting the native Windows SSO connection to SQL Server to work. Based on answers on the OTN Forum and this post (http://www.oracle-base.com/blog/2013/10/01/sql-developer-4-ea2-connecting-to-sql-server/) I finally got my new 4.x versions to work.

For SQL Developer 4.0EA3, I did as suggested in that article and put the ntlmauth.dll where my JDK 1.7 was installed: C:\Program Files\Java\jdk1.7.0_40\jre\bin

For Data Modeler 4.0.13 (production), based on a suggestion from Jeff Smith, I put the dll file here: C:\SQLDeveloper\SQLDeveloper4.0.13\sqldeveloper\sqldeveloper\bin

If I were better at setting Windows paths, I am sure there would be a better way to do this.

Happy 2013! What will you do this year?

Happy New Year! Welcome to year #2 of the Oracle Data Warrior.

I hope everyone is looking forward to a bright, happy, and successful year (however you measure it).

For me it will be a year of figuring out my long term business model (maybe?), writing a few more short ebooks (stay tuned), doing my Oracle ACE Director thing, continuing to work as a Data Vault and Data Warehouse advisor and consultant, presenting at RMOUG, KScope13, and hopefully a few other choice events, and of course writing on this blog (and practicing my martial arts).

That ought to do it, don’t you think?

But you never know what life may throw your way, so I am not really tied to any of that; it is just where my wave seems to be heading today.

One thing I have already done is take advantage of Vizify to build a visual story about myself. I really like the look and feel of the app and the way it presents my information. Check out the animation on the location page and then the timeline on the career page (which is not quite complete yet). Very cool.

How about you? What is on your horizon for 2013?

Cowabunga!

Kent

P.S. See this cool 2012 report that WordPress generated automatically. It covers the stats from my last post, but with a much nicer presentation. 😉

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

4,329 films were submitted to the 2012 Cannes Film Festival. This blog had 23,000 views in 2012. If each view were a film, this blog would power 5 Film Festivals

Click here to see the complete report.
