The Data Warrior

Changing the world, one data model at a time. How can I help you?

Countdown to KScope: Oracle Education, Fitness and More

It’s almost here: the best education event for Oracle developers – the Oracle Development Tools User Group KScope12 conference.

It starts Saturday June 23rd with the annual community service day helping out the Boys and Girls Club of San Antonio.

Then things really get rolling on Sunday June 24th with the famed all-day, in-depth symposiums.

On Monday, the main sessions kick things into high gear.

This year I am lucky enough to give two presentations on my favorite topics.

On Monday June 25 from 10:00 am – 11:00 am, I will present Introduction to Data Vault Modeling for an Oracle BI Environment. 

Then on Thursday June 28, from 10:30 am – 11:30 am, you can close out the conference by attending my presentation on SQL Developer Data Modeler: Reverse Engineer (and Re-engineer) an Existing Database Using Oracle SQL Developer Data Modeler.

If you are using the KScope app, don’t forget to add these sessions to your schedule and then check in during the sessions to be eligible for some special ODTUG prizes.

Of course there will be networking events, a vendor hall, food and fun.

With all the good food and sitting in intense sessions all day, it is important that we all do our best to stay fit. To help with that, I will again be leading a 30-minute Chi Gung session every morning at 7 AM on the main lawn.

Really – anyone can do it. Read my article here for details.

Come out and join me to get your day off to a calming start so you can focus on getting the most out of your day at KScope12.

And don’t forget to follow me on twitter @KentGraziano. I will be tweeting live and posting pictures all week from the conference.

See you in San Antonio. Giddy Up!

-Kent

Flexible Data Warehouse Architecture

My friend and colleague, Sanjay, just put up a new video discussing aspects of using the Data Vault as your data warehouse architecture.

It is really short – just barely 10 minutes.

You can jump over to see the video here.

It covers a nice little discussion comparing dimensional (star schema) models to Data Vault as a centralized data warehouse repository (as opposed to an end-user-facing data mart).

As a bonus, Sanjay also covers the basics of data vault modeling using an actual chalk board (okay so he is “old school”).

So why listen to Sanjay?

He is the co-founder with Dan Linstedt of LearnDataVault.com and has done tons of writing and consulting on Data Vault for quite a few years. He was behind the scenes helping us get the book out on the web, Amazon, and on Kindle.

He knows whereof he speaks.

Have a listen.

Be careful – you might learn something. 🙂

Later.

– Kent

P.S. Don’t forget to follow me on twitter @KentGraziano or subscribe to this blog (button at top of the right column) so you don’t miss out on my updates.

P.P.S. ODTUG KScope12 is just around the corner. I will be doing two talks and a bunch of other stuff out there in San Antonio in a few weeks. Stay tuned!

Quick Tip: Adding a Custom Design Rule to Oracle Data Modeler

As most of my readers know, I use Oracle’s SQL Developer Data Modeler to do all my data modeling.

It has a lot of great features that are documented online in various places. One of those is Design Rules.

Design Rules (Tools -> Design Rules -> Design Rules) include a host of predefined quality checks on many of the objects created in a data model. This includes entities, attributes, relationships, tables, columns, constraints, indexes, etc.

You select the rules, or group of rules, and hit “apply”. The rules then check your model and tell you, object by object, if there are any issues.

Some issues are warnings. Others show up as errors. An error generally means that if you try to create DDL, that DDL will fail when you try to execute it in an Oracle database.

One nice feature is that you can double click on a highlighted issue and go directly to the object with the issue so you can fix it.

An example of a design rule check is the length of the table name. Oracle still has a limit of 30 characters (Why????) on object names, so there are design rules to check for that.

Pretty useful really.

For the Data Vault model I am currently building, we are trying to generate lots of stuff based on the table name (e.g., a surrogate key sequence and some PL/SQL load procedures, but that is a much longer story). As a result, we discovered we need to limit table names to 26 characters, because we use the table name as a root that gets prefixes and suffixes added in certain cases.
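To make the arithmetic concrete, here is a tiny standalone JavaScript sketch (the "_SEQ" suffix and the table names below are made-up examples of the kind of decoration I mean, not our actual naming standards): a 26-character root plus a 4-character suffix lands exactly on Oracle's 30-character ceiling, while anything longer blows past it.

// Illustration only: assume generated objects add a hypothetical 4-character suffix like "_SEQ"
var SUFFIX = "_SEQ";
var ORACLE_NAME_LIMIT = 30;

function fitsAfterDecoration(rootName) {
    // Does the decorated name still fit within Oracle's identifier limit?
    return (rootName + SUFFIX).length <= ORACLE_NAME_LIMIT;
}

console.log(fitsAfterDecoration("HUB_CUSTOMER_ACCOUNT_MASTR"));    // 26-character root -> true
console.log(fitsAfterDecoration("HUB_CUSTOMER_ACCOUNT_MASTER_X")); // 29-character root -> false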

Too bad the built-in design rule is set to 30.

And there is no way to modify that built-in rule (verified on the OTN Forum).

So the solution is to create a Custom Rule (Tools -> Design Rules -> Design Rules). The intrepid Philip from the Oracle development team kindly provided me with the base code to create the rule I needed. I was able to take his code, and use the one custom rule that comes delivered as a template, to make a new rule that did the check I wanted.

Here is the code:

var ruleMessage;
var errType;
var table;

// Define the check: flag any table whose name is longer than 26 characters
function checkName(table) {
    result = true;
    ruleMessage = "";
    if (table.getName().length() > 26) {
        ruleMessage = "Table name over 26 characters";
        errType = "Error";
        return false;
    } else {
        return true;
    }
}

// Invoke it on the current table
checkName(table);

I won’t explain the code (you can figure that out if you like), but it does work as I wanted, so I am a happy camper. 🙂

Now after I add new objects to the model (e.g., hubs, links, satellites), I just run this rule to find any that are too long. Then I fix the table name and reapply my constraint naming standards (another very useful and simple utility in the tool). After that I can generate the DDL and build the objects in the db, then re-run our code generator.

If you have not looked at the features of SDDM, time to look.

Happy Modeling!

– Kent

P.S. To see more articles about SDDM, check out Jeff Smith’s blog (in my blog roll).

P.P.S. Don’t forget to follow me on twitter @KentGraziano. I retweet a lot of Jeff’s articles there. 😉

Data Vault Book is #1

This is really cool!

Well for a few of us anyway 🙂

This week, the Data Vault modeling book, Super Charge Your Data Warehouse, hit #1 on the Amazon Kindle list for books on data warehousing.

Data Vault Book on Kindle #1

Here are some recent rankings:

Guess the book is filling a need for a bunch of people.

Like I said, very cool.

Later.

Kent (Editor, Amazon #1 Best Selling book)

P.S. You can now follow me on twitter @KentGraziano

Is Data Vault Agile?

You bet it is!

Years ago I wrote an article about Agile Data Warehousing and proposed using Data Vault Data Modeling as a way to get there. Dan Linstedt recently published an article with more details on how it fits. Here are the good parts:

1. Individuals and Interactions over processes and tools

The Data Vault is technology agnostic AND focuses VERY heavily on customer interaction. In fact it’s really the only methodology where I’ve seen a very heavy emphasis on the fact – The business owns the data.

Also, you have to start with the Hub entities, and they require identification of the business keys, as specified step-by-step on page 54 of Super Charge Your Data Warehouse.

2. Working Software over Comprehensive Documentation

With the pattern based architecture in a Data Vault model and with the business rules downstream of the Data Warehouse, you can start building extremely fast and even use code-generation tools or scripts to get the first cut of your model.

I’ve in fact used code-generation for Data Warehouses that have been in production for quite a few years. They’re even running today.

The Data Vault Model & Methodology in my opinion is the fastest way to get something delivered to a Data Warehouse and it dramatically reduces project timelines and risk.

3. Customer Collaboration over Contract Negotiation

The Data Vault Methodology emphasizes the ownership of the project and data by the business and makes them a partner on any Business Intelligence project.

And the fact that it’s easy to address change makes them happy, which, interestingly enough, is the next one:

4. Responding to Change over Following a Plan

This makes some people cringe. But it’s a reality of most projects. The first time out, neither you nor the business REALLY knows what they want. It’s only after they see something that they realize the value of the information and their brains start churning.

In the traditional forms of Data Warehousing, it takes scoping, project budgeting, planning, resource allocation and other fun stuff before you can even get creative and give them what they think they want. The problem is, most business users don’t REALLY know. The DW team ends up thinking and even assuming for them, often incorrectly. You can end up with something that is really fancy and beautiful and still … useless!

To add to the complication, if it’s in fact a bad idea, it will be money ill spent, which can be as big a deal as a great idea where the time to build makes them lose out on the competitive edge they’re looking for.

With the Data Vault, the model is built for change from the ground up. Since the core data NEVER ever changes, creating business level user-interface layers on top is just so easy – and many architects and modelers think it’s ideal.
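Since Dan mentions using code-generation tools or scripts to get a first cut of the model, here is a minimal standalone JavaScript sketch of the idea (the hub column names and data types below are generic Data Vault conventions picked for illustration, not Dan's or my actual generator):

// Sketch only: build first-cut DDL for a Data Vault hub from a root name and a business key column.
// Column names (SEQ, LOAD_DTS, RECORD_SRC) are illustrative conventions, not a fixed standard.
function generateHubDDL(rootName, businessKeyColumn) {
    var tableName = "HUB_" + rootName;
    return "CREATE TABLE " + tableName + " (\n" +
           "  " + rootName + "_SEQ NUMBER NOT NULL,\n" +            // surrogate key
           "  " + businessKeyColumn + " VARCHAR2(50) NOT NULL,\n" + // business key
           "  LOAD_DTS DATE NOT NULL,\n" +                          // load date
           "  RECORD_SRC VARCHAR2(30) NOT NULL,\n" +                // record source
           "  CONSTRAINT " + tableName + "_PK PRIMARY KEY (" + rootName + "_SEQ)\n" +
           ");";
}

// Example: a first cut of a customer hub
console.log(generateHubDDL("CUSTOMER", "CUSTOMER_NBR"));

Because the pattern is the same for every hub (and similar patterns hold for links and satellites), a script like this can crank out the first cut of dozens of tables in minutes, which is exactly the kind of speed Dan is talking about.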

Check out the full post – Agile Data Warehousing

(and don’t forget to buy the book).

BTW – if you are going to ODTUG KScope12 this June in San Antonio, be sure to stop by for a chat. I will be giving two talks, one on Data Vault and one on using SQL Developer Data Modeler.

See ya.

Kent

P.S. I am now on twitter! You can follow me there @KentGraziano.
