The Data Warrior

Changing the world, one data model at a time. How can I help you?

After TROUG BI&DW 2014

Seems it is a busy spring for the Oracle User Group World. Meetings happening everywhere! I hope you all can find at least one event to attend in the first half of 2014. It is the best way to keep up with our constantly changing world and continue to bring value to your employers and customers.

Gurcan Orhan's ODI and DWH Blog

It was a nice full-day meeting at TROUG BI&DW 2014. Thanks to all the attendees, speakers, and everyone who paid attention to this event.
Here are some blog posts and articles with impressions of the event.

This just in: Win Dinner with Monty at #KScope14

Amazing but true – you can now enter a contest to win dinner with ODTUG President Monty Latiolais at ODTUG’s annual conference KScope14.

This year KScope will be held in beautiful Seattle, Washington from June 22nd – 26th.

Who knows what amazing dinner adventure will be in store for the winner!

Get the details here:

Win Dinner with Monty!

See you in Seattle!

Kent

The Oracle Data Warrior

P.S. I will be presenting again this year and running my now-annual Morning Chi Gung class (more on that later).

Better Data Modeling: My Top 3 Reasons why you should put Foreign Keys in your Data Warehouse

This question came up at the recent World Wide Data Vault Consortium. Seems there are still many folks who build a data warehouse (or data mart) but do not include FKs in the database.

The usual reason is that it “slows down” load performance.

No surprise there. Been hearing that for years.

And I say one of two things:

1. So what! I need my data to be correct and to come out fast too!

or

2. Show me! How slow is it really?

Keep in mind that while getting the data in quickly is important, so is getting the data out.

Who would you rather have complain – the ETL programmer or the business user trying to run a report?

Yes, it has to be a balance, but you should not immediately dismiss including FKs in your warehouse without considering the options and benefits of those options.

So here are my three main reasons why you should include FK constraints in your Oracle data warehouse database:

  1. The Oracle optimizer uses the constraints to make better decisions on join paths.
  2. Your data modeling and BI tools can read the FKs from the data dictionary to create correct joins in the tool's metadata (SDDM, Erwin, OBIEE, Cognos, and Business Objects can all do this).
  3. It is a good QA check on your ETL. (Yeah, I know… the ETL code is perfect and checks all that stuff, blah, blah, blah.)
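As a quick illustration of point 2 (my sketch, not from any particular tool), the joins a BI or modeling tool picks up come straight from the Oracle data dictionary. A query like this shows every FK, its parent table, and whether it is enabled or validated:

```sql
-- List FK constraints and the parent (unique/PK) constraints they reference.
-- Works against the current schema via the USER_CONSTRAINTS dictionary view.
SELECT c.constraint_name,
       c.table_name  AS child_table,
       p.table_name  AS parent_table,
       c.status,     -- ENABLED or DISABLED
       c.validated   -- VALIDATED or NOT VALIDATED
FROM   user_constraints c
       JOIN user_constraints p
         ON p.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'   -- 'R' = referential (foreign key)
ORDER  BY c.table_name;
```

Note that the FK shows up here even when it is disabled, which is exactly why the DISABLE NOVALIDATE compromise below still works for the tools.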

Now of course there are compromise options. The three main ones I know are:

  1. Drop the constraints at the start of the load then add them back in after the load completes. If any fail to build, that tells you immediately where you may have some data quality problems or your model is wrong (or something else changed).
  2. Build all the constraints as DISABLE NOVALIDATE. This puts them in the database for the BI tools and data modeling tools to see and capture but, since they are not enforced, they put minimal overhead on the load process. And, so I am told by those that know, even a disabled constraint helps the optimizer make a smarter choice on the join path.
  3. (really 2a) Best of both – disable the constraints, load your data, then re-enable the constraints. You get optimization and quality checks.
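A minimal Oracle SQL sketch of those three options (the table, column, and constraint names are hypothetical examples, not from the original post):

```sql
-- Option 1: drop the FK before the load, re-add it after.
-- If the ADD fails, you have found a data quality (or model) problem.
ALTER TABLE fact_sales DROP CONSTRAINT fk_sales_customer;
-- ... run the load ...
ALTER TABLE fact_sales ADD CONSTRAINT fk_sales_customer
  FOREIGN KEY (customer_id) REFERENCES dim_customer (customer_id);

-- Option 2: declare the FK but never enforce it.
-- Visible to BI/modeling tools and usable by the optimizer, minimal load overhead.
ALTER TABLE fact_sales ADD CONSTRAINT fk_sales_customer
  FOREIGN KEY (customer_id) REFERENCES dim_customer (customer_id)
  DISABLE NOVALIDATE;

-- Option 3 (really 2a): disable around the load, then re-enable.
-- ENABLE VALIDATE rechecks all rows, so it doubles as a QA gate on the load.
ALTER TABLE fact_sales MODIFY CONSTRAINT fk_sales_customer DISABLE;
-- ... run the load ...
ALTER TABLE fact_sales MODIFY CONSTRAINT fk_sales_customer ENABLE VALIDATE;
```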

So NOW what is your reason for not using FKs in your data warehouse?

Happy Modeling!

Kent

Live from the 1st Annual World Wide Data Vault Consortium: Day 2

Wow, has it been an amazing event, with many amazing people from around the globe. I am VERY glad that I took the time and came to St Albans for this inaugural event.

As with Day 1, there is just too much great information for me to adequately cover it in a blog post (or 10!), so I will give you some highlights and lots of pictures.

The good news is that sometime in the next few months you will be able to purchase access to a video of the entire event on LearnDataVault (then you will be really bummed about not having come in person). Dan and Sanjay figure, with their schedules, it will take a few months to produce a top-quality video. Fortunately, we had professional videographers for all three days, and they filmed every keynote and every talk.

I will let you know when the video is ready.

Dan Does a Deep Dive on Data Vault 2.0

Due to popular demand, Dan actually changed the agenda and made the first session of Day 2 a detailed look at some of the more important aspects of Data Vault 2.0. This is material that he has not really written much about and is only available otherwise in his DV 2.0 bootcamp class.

Here are some of the highlights (again to see even more details check #WWDVC on twitter).

Opposing goals between DW storage and Mart presentation have led to many failed DW/BI projects.

BTW – Dan would like to see us stop calling the reporting side “Data Marts” and start calling them “Information Marts”. To be agile we have to stop mixing the raw data storage (EDW) with the reporting that has applied business rules.

Dan went on a bit of a rant about the need to measure things.

DV 2.0 Methodology helps us be more precise to be more successful

Then we had a discussion about DV 2.0 architecture and methodology and how it supports and fits into the knowledge pyramid.

Dan introduces us to DV 2.0 and the Knowledge Pyramid

The DV 2.0 Architecture

Then there was some discussion about DV 2.0 agility, backed up by a real user case study.

A DV success at Qsuper

It sure was great to see real numbers on a DV 2.0 success story!

So how did they do it? Dan then showed us his recommended approach to requirements gathering that helps the process become more agile.

DV 2.0 approach to getting better requirements faster

Then finally another rant about how to get better performance from our systems.

Dan’s Rules of Performance

Roelant Vos (Analytics8) talks about DV Automation

Characteristics of ETL for Data Vault

Roelant Vos discusses generating Data Vault ETL from metadata

Roelant uses Lego as a great analogy for the patterns in Data Vault and why it is possible to auto-create ETL code.

Model driven ETL generation is possible

What is needed to support generation of ETL

With all this in mind, Roelant has built a nice little kit to actually generate DV ETL code for his clients. Nice job! You can follow Roelant on twitter here.

Doug Needham (ClearMeasure) blows our minds with Data Vault Math.

There were a ton of tweets about this session. Doug has done some incredibly innovative work on using mathematics to determine the quality/completeness of a data vault model and some slick ways to validate that it has been loaded correctly. Lots of talk about graph theory and calculating edges ensued.

Doug blows our minds with math

Research on how to determine the driving key for a Link

More calculations from Doug – vector math!

This is the first time I have met Doug, a former Marine Corps DBA. We have found we have a lot of experiences and thoughts in common, including a common client! He even grew up in Texas in the town where I now live!

In this session Doug famously said “If you do not know what a hypothesis, control group or A/B test is, you are NOT a data scientist.”

One of the great things about these events is the people you meet. I am glad to have met Doug.

Using Oracle SQL Developer Data Modeler for DV

After a much needed brain break from Doug’s talk and some networking time, I got to take the floor again to show my favorite FREE data modeling tool. I did my usual top 10 type talk but with a slant towards how I leverage all those features to support a data vault modeling project.

Data Vault Diversity: The former Marine mathematician and the long-haired environmentalist

There were lots of good questions and interaction and lots of interest in how I have built virtual data marts on top of data vault warehouses.

Starting my SDDM intro talk

And that was it for Day 2. We all took a break then had a happy hour and dinner (with demo) sponsored by AnalytixDS. Good food and good fun!

Data Vault geeks from around the world in snowy St Albans, Vermont

More to come on Day 3!

Kent

2nd webinar of @LAOUC

Anyone interested in Oracle Data Integrator should tune into Gurcan’s webinar this week.

Gurcan Orhan's ODI and DWH Blog

Save the date for the second webinar of LAOUC. I will be talking about How to handle DEV & TEST & PROD for ODI (click for registration) for LAOUC (Latin America Oracle User Community) this Thursday, March 6th (5 PM BRT – 8 PM GMT – 10 PM GMT+2).

We will start with a small ODI project and enlarge it to enterprise level step by step.
