The Data Warrior

Changing the world, one data model at a time. How can I help you?


ODTUG KScope12: Day 3 Recap. More Fun in the San Antonio Sun

Well it was another HOT day in San Antonio, Texas at the 2012 ODTUG KScope conference.

Really… it was.

It was something like 104 degrees outside with a Heat Index of 107.

Yikes.

But it was more like 65 degrees in the session rooms.

They do like to keep it cold inside here in Texas.

But the topics and speakers were hot anyway.

After an energizing session of Chi Gung this morning, my first session to attend was Mark Rittman talking about Exalytics and the TimesTen in-memory database. Based on the number of people in the room at 8:30 AM, I would call this a hot topic for sure.

Inquiring minds want to know if this Exalytics stuff is all it is cracked up to be (and worth the $$).


Mark did his best to give us the low-down, candid truth. Mostly it was good news.

With the Summary Advisor, it is pretty easy to create an In Memory Adaptive Data Mart which will hold commonly used aggregates. It leverages the existing Aggregate Assistance Wizard.

So what, you ask? Well, that technology tracks all the queries run on your OBIEE server and figures out which summaries would help speed up your performance.

Now you won’t get your entire data warehouse up in memory, but you will get the most-used data sets set up to return faster.

The biggest gotcha is that it does not handle automatic incremental refreshes, so you have to use ODI or some scripting to refresh the TimesTen database automatically.

Anyway, the future does look bright for Exalytics.

Next up was Ashley Chen, Oracle Product Manager, talking about the new features in the 3.1 release of SQL Developer and SQL Developer Data Modeler.

Notably in SQL Developer there is now some APEX Integration and TimesTen integration, along with improved DB Copy and DB Diff utilities. Plus they have re-done the Oracle web site for SQL Dev to segment the tool into more logical groupings of functionality.

On the Data Modeler side, new features include Easy Sync to DB, better versioning support, a better, modifiable metadata reporting engine, and the new Query Builder for developing and testing the code for an Oracle view (I wrote about that here).

Then it was a bit of a break while I interviewed JP Dijcks in the ODTUG Social Media Lounge and then got my set of ODTUG tattoos.

Next it was lunch and the Oracle ACE and ACE Directors Lunch and Learn sessions where we divided the rooms by topic area and had the various Oracle ACEs answer questions and lead a discussion about topics in their area. Here are a few of the BI ACEs plotting their strategy for the panel.

They did end up asking me to join the panel, so I got to field a few questions about data modeling, big data, and whether to build a metrics model in the OBI RPD or in the database. It depends….

After lunch I attended Ron Crisco’s talk about Agile and Data Design. A favorite topic of mine!

Often a contentious topic, Ron challenged us with some very good questions:

  • Is Agile the enemy of good design?
  • What is data design?
  • Who does it?
  • How do you keep it in sync with ongoing changes and implementation?

He kept this all in context of the principles in the Agile Manifesto and the goal of delivering useful software to the business.

Best quote: “Agile is an Attitude”

I completely agree!

I finished the day hanging out with Ashley Chen and Jeff Smith in the Data Modeler lab session as folks tried out the new features on a pre-configured Oracle VM.

Ashley and Jeff kept busy helping folks while I tried to get the new VM running on my laptop. No luck. Maybe tomorrow.

I did get to help a bit and answer a few questions for some of the participants.

No official KScope events tonight, so I got to spend a little time relaxing by the pool and in the lazy river with my friend JP and his family. Saw several other friends and colleagues as well, with their spouses and kids playing in the pool. Then we all got to watch Despicable Me projected on a sheet on the far side of the pool.

Pretty neat. Nice way to end the day.

Tomorrow should be another exciting day of sessions and then we have the BIG EVENT: we all saddle up and head out to the Knibbe Ranch for BBQ and a real Rodeo.

Yee haw!

See ‘ya at the round-up tomorrow, y’all.

Kent

ODTUG KScope12: Day 1 Symposium Sunday

Wow. What a day!

Started off by leading a Chi Gung class at 7 AM for about 18 attendees. Great start to the day.

Then it was off to the races with the kickoff of the BI Symposium, chaired by Kevin McGinley. I got to be “interviewed” about my Data Vault Modeling session on Monday (I will report on that tomorrow), along with several other presenters. That was followed by a lively talk show-style discussion led by Kevin and Stewart Bryson. Below see the room and audience in attendance at 9:00 AM on a Sunday. (Pretty good turnout – way better than last year!)


The panel discussion was followed by a series of talks from Oracle BI product management. There was lots of talk about mobile BI, Oracle’s acquisition of Endeca, and of course BI in the Cloud.

(At this point I switched tracks to the DB development symposium chaired by Chet Justice, aka @Oraclenerd.)

The next talk I attended was by Kris Rice (@krisrice) who gave an intro to Oracle SQL Developer Data Modeler. (Nicely he plugged my Data Modeler talk on Thursday)

Some review (for me) and some new stuff too. I learned his trick for showing the joins between views: use the view-to-table utility to convert the views to tables, add PKs, then use the Discover Foreign Keys feature. This creates FKs based on column names and known PKs.

Cool trick. Just gotta remember to set “generate DDL” to “No”.
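To picture what that utility ends up with, here is a tiny example of my own (the table and column names are hypothetical, not from Kris’s demo). Once the views are converted to tables and PKs are added, Discover Foreign Keys matches column names against the known PKs:

-- Hypothetical model objects: two views converted to tables,
-- with a PK added so the utility has something to match against.
CREATE TABLE DEPT_V (
    DEPT_ID   NUMBER PRIMARY KEY,
    DEPT_NAME VARCHAR2(50)
);

CREATE TABLE EMP_V (
    EMP_ID  NUMBER PRIMARY KEY,
    DEPT_ID NUMBER
);

-- The relationship the feature infers from the matching DEPT_ID
-- column name and the known PK. It exists only to draw the join
-- line in the diagram; with "generate DDL" set to "No" it never
-- reaches the database, where these objects are really views.
ALTER TABLE EMP_V ADD CONSTRAINT EMP_DEPT_FK
    FOREIGN KEY (DEPT_ID) REFERENCES DEPT_V (DEPT_ID);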

Quick switch back to the BI Symposium to see some screen shots of a new look and feel for OBIEE with modern mobile themes.

More coolness…especially if you are an iPad sort of geek.

Back to DB dev land (is it lunch yet?) to hear Oracle product manager Jeff Smith (@thatjeffsmith) talk about full lifecycle development using SQL Developer.

Lots of great tips from Jeff about generating table APIs, using version control, doing schema diffs, and unit testing.

SQL Developer definitely has lots of features I did not know about. Being able to define unit tests inside the tool seems like a valuable option. I will be getting folks at my client site to try it out next week!

Oh yeah – he also mentioned DB Doc for creating HTML documentation on your code, because code is never really self-documenting. Gotta check into that more too…

<Lunch break – yummy Italian selection of salads and food>

Post-lunch back to BI and Mike Donohue from Oracle talking about reporting on data from “beyond the data warehouse.”

Heaven forbid! (well I guess we gotta deal with it now)

So, Mike talked a bit about how Endeca Information Discovery can be used to gain understanding and build analytics on big and unstructured data. He mentioned the “faceted data model” and generating a key-value store. Sounds cool. Have to look into that too.

Mike also discussed using BI Publisher to allow users access to local data (in Excel, XML, OLAP, etc.) so they can build their own reports. Scary thought, but in some businesses it will make sense because, in reality, not all data is in an ERP system or a well-built RDBMS.

Whatcha gonna do?

<Back to DB Dev>

Now to hear the world-famous Tom Kyte (of Ask Tom fame) talk about his approach to tuning. It was, as expected, a full house.

Tom’s main point was not necessarily to tune the specific problem query but, more holistically, to look at the overall algorithm (or approach) that was taken to solve the problem in the first place.

In his experience, many queries can’t be tuned all that much because what was written was not even the best way to solve the problem. He gave quite a few eye-opening examples where there was simply a much better way to accomplish a task than the SQL that was originally written. It seems many situations really require re-engineering the solution.

A nice takeaway (the kind that makes you go “duh”):

More code = More bugs

Less code = Less bugs

Moral of the story: find the simplest solution. If the code is really complex, you are probably wrong (or at least overcomplicating it). Try again.
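As an illustration (my own made-up example, not one of Tom’s), this is the flavor of re-engineering he means. No amount of query tuning makes the row-by-row version competitive with simply asking the set-based question:

-- Row-by-row approach: processes one order at a time.
BEGIN
    FOR r IN (SELECT order_id, amount FROM orders) LOOP
        UPDATE order_totals
           SET total = total + r.amount
         WHERE order_id = r.order_id;
    END LOOP;
END;
/

-- Re-engineered as one set-based statement: less code, fewer bugs,
-- and the optimizer can actually do its job.
MERGE INTO order_totals t
USING (SELECT order_id, SUM(amount) AS amt
         FROM orders
        GROUP BY order_id) o
   ON (t.order_id = o.order_id)
 WHEN MATCHED THEN
      UPDATE SET t.total = t.total + o.amt;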

Last symposium session for the day (for me) was Maria Colgan (Oracle) talking about tips to get the most out of the Oracle Cost Based Optimizer.

Maria is the queen of the optimizer. She explained what the optimizer will do in several situations and why, and, if it gets it wrong, what you need to change to get it right.

Okay – already on brain overload (and it is just day 1!).

Need sleep.

Have my own presentation tomorrow.

And Chi Gung at 7AM.

C ‘ya

Kent

P.S. There were lots of tweets all day with more pictures of the event. To see them look for #kscope and @ODTUG on Twitter (or follow me @kentgraziano).

Countdown to KScope: Oracle Education, Fitness and More

It’s almost here: the best education event for Oracle developers – the Oracle Development Tools User Group KScope12 conference.

It starts Saturday, June 23rd, with the annual community service day helping out the Boys and Girls Club of San Antonio.

Then things really get rolling on Sunday June 24th with the famed all-day, in-depth symposiums.

On Monday, the main sessions kick things into high gear.

This year I am lucky enough to give two presentations on my favorite topics.

On Monday June 25 from 10:00 am – 11:00 am, I will present Introduction to Data Vault Modeling for an Oracle BI Environment. 

Then on Thursday June 28, from 10:30 am – 11:30 am, you can close out the conference by attending my presentation on SQL Developer Data Modeler: Reverse Engineering (and Re-engineering) an Existing Database Using Oracle SQL Developer Data Modeler.

If you are using the KScope app, don’t forget to add these sessions to your schedule and then check in during the sessions to be eligible for some special ODTUG prizes.

Of course there will be networking events, a vendor hall, food and fun.

With all the good food and sitting in intense sessions all day, it is important that we all do our best to stay fit. To help with that, I will again be leading a 30 minute Chi Gung session every morning at 7 AM on the main lawn.

Really – anyone can do it. Read my article here for details.

Come out and join me to get your day off to a calming start so you can focus on getting the most out of your day at Kscope12.

And don’t forget to follow me on Twitter @KentGraziano. I will be tweeting live and posting pictures all week from the conference.

See you in San Antonio. Giddy Up!

-Kent

Quick Tip: Adding a Custom Design Rule to Oracle Data Modeler

As most of my readers know, I use Oracle’s SQL Developer Data Modeler to do all my data modeling.

It has a lot of great features that are documented online in various places. One of those is Design Rules.

Design Rules (Tools -> Design Rules -> Design Rules) include a host of predefined quality checks on many of the objects created in a data model. This includes entities, attributes, relationships, tables, columns, constraints, indexes, etc.

You select the rules, or group of rules, and hit “apply”. The rules then check your model and tell you, object by object, if there are any issues.

Some issues are warnings. Others show up as errors. An error generally means that if you try to create DDL, that DDL will fail when you try to execute it in an Oracle database.

One nice feature is that you can double click on a highlighted issue and go directly to the object with the issue so you can fix it.

An example of a design rule check is the length of the table name. Oracle still has a limit of 30 characters (Why????) on object names, so there are design rules to check for that.
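For instance (my own example), this is exactly the execute-time failure that rule saves you from:

-- 31 characters: fine in the model, fatal in the database.
CREATE TABLE THIS_TABLE_NAME_IS_THIRTY_ONE_X (id NUMBER);
-- ORA-00972: identifier is too long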

Pretty useful really.

For the Data Vault model I am currently building, we are trying to generate lots of stuff based on the table name (e.g., a surrogate key sequence and some PL/SQL load procedures, but that is a much longer story). As a result, we discovered we needed to limit table names to 26 characters, because we use the table name as a root that has prefixes and suffixes added in certain cases.
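To make the arithmetic concrete (with a hypothetical name of my own): a 26-character root is the longest that still leaves room for a 4-character suffix like _SEQ under the 30-character limit.

-- A 26-character root table name...
CREATE TABLE HUB_CUSTOMER_MASTER_RECORD (
    CUSTOMER_MASTER_SQN NUMBER NOT NULL
);

-- ...leaves just enough room for the generated sequence name:
-- 26 + 4 = 30 characters, right at the limit.
CREATE SEQUENCE HUB_CUSTOMER_MASTER_RECORD_SEQ;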

Too bad the built-in design rule is set to 30.

And there is no way to modify that built-in rule (verified on the OTN Forum).

So the solution is to create a Custom Rule (Tools -> Design Rules -> Design Rules). The intrepid Philip from the Oracle development team kindly provided me with the base code to create the rule I needed. I was able to take his code, and use the one custom rule that comes delivered as a template, to make a new rule that did the check I wanted.

Here is the code:

var ruleMessage;
var errType;
var table;

// Define the check: flag any table whose name is longer than 26 characters.
function checkName(table) {
    ruleMessage = "";
    if (table.getName().length() > 26) {
        ruleMessage = "Table name over 26 characters";
        errType = "Error";
        return false; // rule fails; Data Modeler reports the message above
    } else {
        return true; // name is short enough, nothing to report
    }
}

// Invoke the check on the table being validated.
checkName(table);

I won’t explain the code (you can figure that out if you like), but it does work as I wanted, so I am a happy camper. 🙂

Now after I add new objects to the model (e.g., hubs, links, satellites), I just run this rule to find any that are too long. Then I fix the table name and reapply my constraint naming standards (another very useful and simple utility in the tool). After that I can generate the DDL and build the objects in the db, then re-run our code generator.

If you have not looked at the features of SDDM, it is time to look.

Happy Modeling!

– Kent

P.S. To see more articles about SDDM, check out Jeff Smith’s blog (in my blog roll).

P.P.S. Don’t forget to follow me on Twitter @KentGraziano. I retweet a lot of Jeff’s articles there. 😉

Is Data Vault Agile?

You bet it is!

Years ago I wrote an article about Agile Data Warehousing and proposed using Data Vault Data Modeling as a way to get there. Dan Linstedt recently published an article with more details on how it fits. Here are the good parts:

1. Individuals and Interactions over processes and tools

The Data Vault is technology agnostic AND focuses VERY heavily on customer interaction. In fact, it’s really the only methodology where I’ve seen a very heavy emphasis on the fact that the business owns the data.

Also, you have to start with the Hub entities, and they require identification of the business keys, as specified step-by-step on page 54 of Super Charge Your Data Warehouse.

2. Working Software over Comprehensive Documentation

With the pattern based architecture in a Data Vault model and with the business rules downstream of the Data Warehouse, you can start building extremely fast and even use code-generation tools or scripts to get the first cut of your model.

I’ve in fact used code-generation for Data Warehouses that have been in production for quite a few years. They’re even running today.

The Data Vault Model & Methodology in my opinion is the fastest way to get something delivered to a Data Warehouse and it dramatically reduces project timelines and risk.

3. Customer Collaboration over Contract Negotiation

The Data Vault Methodology emphasizes the ownership of the project and data by the business and makes them a partner on any Business Intelligence project.

And the fact that it’s easy to address change makes them happy, which, interestingly enough, leads right into the next one:

4. Responding to Change over Following a Plan

This makes some people cringe. But it’s a reality of most projects. The first time out, neither you nor the business REALLY knows what they want. It’s only after they see something that they realize the value of the information and their brains start churning.

In the traditional forms of Data Warehousing, it takes scoping, project budgeting, planning, resource allocation, and other fun stuff before you can even get creative and give them what they think they want. The problem is, most business users don’t REALLY know. The DW team ends up thinking and even assuming for them, often incorrectly. You can end up with something that is really fancy and beautiful and still … useless!

To add to the complication: if it’s in fact a bad idea, it will be money ill spent; and if it’s a great idea, the time it takes to build can make them lose out on the competitive edge they’re looking for.

With the Data Vault, the model is built for change from the ground up. Since the core data NEVER ever changes, creating business-level user-interface layers on top is just so easy – and many architects and modelers think it’s ideal.
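To ground that in something concrete, here is my own sketch (not part of Dan’s article, and the names are hypothetical) of the Hub pattern he describes. It is small and repeatable, which is exactly why code generators can stamp these out:

-- A minimal Data Vault Hub: the business key, a surrogate key,
-- and metadata recording when and from where the key first arrived.
CREATE TABLE HUB_CUSTOMER (
    CUSTOMER_SQN NUMBER       NOT NULL,  -- surrogate key
    CUSTOMER_NUM VARCHAR2(20) NOT NULL,  -- the business key
    LOAD_DTS     DATE         NOT NULL,  -- load date/time stamp
    REC_SRC      VARCHAR2(30) NOT NULL,  -- record source
    CONSTRAINT HUB_CUSTOMER_PK PRIMARY KEY (CUSTOMER_SQN),
    CONSTRAINT HUB_CUSTOMER_UK UNIQUE (CUSTOMER_NUM)
);

Links and Satellites follow the same kind of repeatable shape, which is what makes the first cut of the model so quick to generate.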

Check out the full post – Agile Data Warehousing

(and don’t forget to buy the book).

BTW – if you are going to ODTUG KScope12 this June in San Antonio, be sure to stop by for a chat. I will be giving two talks, one on Data Vault and one on using SQL Developer Data Modeler.

See ya.

Kent

P.S. I am now on Twitter! You can follow me there @KentGraziano.
