The Data Warrior

Changing the world, one data model at a time. How can I help you?


ODTUG KScope: Day 5 – Happy Trails

Well, the final day of KScope12 arrived, and it was another hot one, with the last sessions inside and the Texas heat outside. Another bright red sunrise greeted us, as it has all week.


Today I managed to get a picture of the group that showed up for Chi Gung every day at 7 AM. We even had some new people today, officially the last day. They all enjoyed the sessions and (hopefully) learned enough to practice a bit once they return home.

I am grateful to all the participants for showing up early each morning with enthusiasm and a willingness to try something new. It made my job of leading them much easier. (There will be a YouTube video sometime next week for people to review, so stay tuned.)

The first order of business for the day (after Chi Gung) was the official KScope closing session. Even though there were still two sessions to go afterward, we had the closing at 9:45 AM. We were entertained, yet again, with some photo and video footage taken throughout the week, including one interview with me! We also learned who got the presenter awards for each track and for the entire event.

Then we all got beads to remind us to go to KScope13 in New Orleans.

Next was my final session for the event: Reverse Engineering (and Re-Engineering) an Existing Database with Oracle SQL Developer Data Modeler.

I had a surprising number of people for the last day after the closing session. I think there were about 70 people wanting to learn more about SDDM. Apparently most people are unaware of the features of the tool (which I have written about in several posts).

So, that was nice.

Finally, I went to JP Dijcks' talk about Big Data and Predicting the Future.

His basic premise is that, these days, we should never throw away any data, since it can all be used to extend the depth of our analytics. We can react to events in real time and proactively change the outcomes of those events.

The diagram he showed illustrated the basics of one way that data moves through the world and into the Hadoop file systems. I am oversimplifying, but it is a cool diagram.

Part of the challenge is uncovering un-modeled data. I guess that is where Oracle's recent acquisition, Endeca, comes in with their Data Discovery tool (again, oversimplifying).

And that was pretty much it for the show. It was a great week with lots of learning and networking (and tweeting). We all had a good time and learned enough to make our heads explode.

I look forward to meeting folks again next year at KScope13 in New Orleans.

Kent

ODTUG KScope12: Day 3 Recap. More Fun in the San Antonio Sun

Well it was another HOT day in San Antonio, Texas at the 2012 ODTUG KScope conference.

Really… it was.

It was something like 104 degrees outside, with a heat index of 107.

Yikes.

But it was more like 65 degrees in the session rooms.

They do like to keep it cold inside here in Texas.

But the topics and speakers were hot anyway.

After an energizing session of Chi Gung this morning, my first session to attend was Mark Rittman talking about Exalytics and the TimesTen in-memory database. Based on the number of people in the room at 8:30 AM, I would call this a hot topic for sure.

Inquiring minds want to know if this Exalytics stuff is all it is cracked up to be (and worth the $$).


Mark did his best to give us the low down, candid truth. Mostly it was good news.

With the Summary Advisor, it is pretty easy to create an in-memory adaptive data mart that will hold commonly used aggregates. It leverages the existing Aggregate Persistence Wizard.

So what, you ask? Well, that technology tracks all the queries run in your OBIEE server and figures out which summaries would help speed up your performance.

Now, you won’t get your entire data warehouse up in memory, but you will get the most-used data sets returning faster.

The biggest gotcha is that it does not handle automatic incremental refreshes, so you have to use ODI or some scripting to refresh the TimesTen database yourself.
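Just to make that concrete, here is a minimal sketch of the kind of scripted refresh you would schedule (my own illustration, not Mark's; the table names are hypothetical, and I am glossing over how the summary rows get staged where TimesTen can load them):

    -- Hypothetical full refresh of an aggregate table in TimesTen.
    -- A scheduled job would re-extract the summary from the warehouse
    -- into a staging table, then reload the aggregate from it.
    TRUNCATE TABLE sales_day_agg;

    INSERT INTO sales_day_agg (sale_date, product_id, total_amount)
    SELECT sale_date, product_id, SUM(amount)
    FROM   sales_fact_stage
    GROUP  BY sale_date, product_id;

    COMMIT;

Not rocket science, but it is one more moving part you have to build and monitor yourself.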

Anyway, the future does look bright for Exalytics.

Next up was Ashley Chen, Oracle Product Manager, talking about the new features in the 3.1 release of SQL Developer and SQL Developer Data Modeler.

Notably, in SQL Developer there is now APEX integration and TimesTen integration, along with improved DB Copy and DB Diff utilities. Plus, they have redone the Oracle website for SQL Dev to segment the tool into more logical groupings of functionality.

On the Data Modeler side, new features include easy sync to the DB, better versioning support, a better, modifiable metadata reporting engine, and the new Query Builder for developing and testing the code for an Oracle view (I wrote about that here).

Then it was a bit of a break while I interviewed JP Dijcks in the ODTUG Social Media Lounge and then got my set of ODTUG tattoos.

Next it was lunch and the Oracle ACE and ACE Directors Lunch and Learn sessions where we divided the rooms by topic area and had the various Oracle ACEs answer questions and lead a discussion about topics in their area. Here are a few of the BI ACEs plotting their strategy for the panel.

They did end up asking me to join the panel, so I got to field a few questions about data modeling, big data, and where to build a metrics model: in the OBI RPD or in the database? It depends…

After lunch I attended Ron Crisco’s talk about Agile and Data Design. A favorite topic of mine!

Often a contentious topic, Ron challenged us with some very good questions:

  • Is Agile the enemy of good design?
  • What is data design?
  • Who does it?
  • How do you keep it in sync with ongoing changes and implementation?

He kept this all in context of the principles in the Agile Manifesto and the goal of delivering useful software to the business.

Best quote: “Agile is an Attitude”

I completely agree!

I finished the day hanging out with Ashley Chen and Jeff Smith in the Data Modeler lab session as folks tried out the new features on a pre-configured Oracle VM.

Ashley and Jeff kept busy helping folks while I tried to get the new VM running on my laptop. No luck. Maybe tomorrow.

I did get to help a bit and answer a few questions for some of the participants.

No official KScope events tonight, so I got to spend a little time relaxing by the pool and in the lazy river with my friend JP and his family. I saw several other friends and colleagues as well, with their spouses and kids playing in the pool. Then we all got to watch Despicable Me projected on a sheet on the far side of the pool.

Pretty neat. Nice way to end the day.

Tomorrow should be another exciting day of sessions, and then we have the BIG EVENT: we all saddle up and head out to the Knibbe Ranch for BBQ and a real rodeo.

Yee haw!

See ya at the round-up tomorrow, y’all.

Kent

ODTUG KScope12: Day 1 Symposium Sunday

Wow. What a day!

Started off by leading a Chi Gung class at 7 AM for about 18 attendees. Great start to the day.

Then it was off to the races with the kickoff of the BI Symposium, chaired by Kevin McGinley. I got to be “interviewed” about my Data Vault Modeling session on Monday (I will report on that tomorrow), along with several other presenters. That was followed by a lively talk show-style discussion led by Kevin and Stewart Bryson. Below, see the room and audience in attendance at 9:00 AM on a Sunday. (Pretty good turnout, way better than last year!)

[Photo: the symposium room and audience]

The panel discussion was followed by a series of talks from Oracle BI product management. There was lots of talk about mobile BI, Oracle’s acquisition of Endeca and of course BI in the Cloud.

(At this point I switched tracks to the DB Development Symposium, chaired by Chet Justice, aka @Oraclenerd.)

The next talk I attended was by Kris Rice (@krisrice), who gave an intro to Oracle SQL Developer Data Modeler. (Nicely, he plugged my Data Modeler talk on Thursday.)

Some review (for me) and some new stuff too. I learned his trick for showing the joins between views: use the view-to-table utility to convert the views to tables, add PKs, then use the Discover Foreign Keys feature. This creates FKs based on column names and known PKs.

Cool trick. Just gotta remember to set “generate DDL” to “No”.
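Conceptually, that discovery step is just matching column names against known primary keys. If I were doing it by hand against the Oracle data dictionary (this is my own sketch of the idea, not what the tool actually runs), it would look something like this:

    -- Candidate FKs: columns in other tables whose names match a
    -- known primary key column in the current schema.
    SELECT c.table_name, c.column_name, pk.table_name AS referenced_table
    FROM   all_tab_columns c
    JOIN  (SELECT cc.table_name, cc.column_name
           FROM   all_constraints ac
           JOIN   all_cons_columns cc
                  ON  cc.owner = ac.owner
                  AND cc.constraint_name = ac.constraint_name
           WHERE  ac.constraint_type = 'P'
           AND    ac.owner = USER) pk
           ON  c.column_name = pk.column_name
           AND c.table_name <> pk.table_name
    WHERE  c.owner = USER
    ORDER  BY c.table_name, c.column_name;

Same idea as the feature: the quality of the results depends entirely on how consistent your column naming is.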

Quick switch back to the BI Symposium to see some screen shots of a new look and feel for OBIEE with modern mobile themes.

More coolness…especially if you are an iPad sort of geek.

Back to DB dev land (is it lunch yet?) to hear Oracle product manager Jeff Smith (@thatjeffsmith) talk about full lifecycle development using SQL Developer.

Lots of great tips from Jeff about generating table APIs, using version control, doing schema diffs, and unit testing.

SQL Developer definitely has lots of features I did not know about. Being able to define unit tests inside the tool seems like a valuable option. I will be getting folks at my client site to try it out next week!

Oh yeah, he also mentioned DB Doc for creating HTML documentation on your code, because code is never really self-documenting. Gotta check into that more too…

<Lunch break: a yummy Italian selection of salads and more>

Post-lunch back to BI and Mike Donohue from Oracle talking about reporting on data from “beyond the data warehouse.”

Heaven forbid! (well I guess we gotta deal with it now)

So, Mike talked a bit about how Endeca Information Discovery can be used to gain understanding and build analytics on big and unstructured data. He mentioned a “faceted data model” and generating a key-value store. Sounds cool. Have to look into that too.

Mike also discussed using BI Publisher to give users access to local data (in Excel, XML, OLAP, etc.) so they can build their own reports. Scary thought, but in some businesses it will make sense because, in reality, not all data is in an ERP system or a well-built RDBMS.

Whatta ya gonna do?

<Back to DB Dev>

Now to hear the world-famous Tom Kyte (of Ask Tom fame) talk about his approach to tuning. It was, as expected, a full house.

Tom’s main point was not necessarily to tune the specific problem query, but to look more holistically at the overall algorithm (or approach) that was taken to solve the problem in the first place.

In his experience, many queries can’t be tuned all that much when what was written was not even the best way to solve the problem. He gave quite a few eye-opening examples where there was simply a much better way to accomplish a task than the SQL that was originally written. It seems many situations really require re-engineering the solution.
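A made-up example in the spirit of his talk (mine, not one of Tom's): say you want each customer's most recent order. A row-at-a-time mindset produces a correlated subquery that re-probes the orders table for every row; rethinking it as a single pass over the data is both simpler and usually much faster:

    -- Approach 1: correlated subquery, re-probing orders per row.
    SELECT o.customer_id, o.order_id, o.order_date
    FROM   orders o
    WHERE  o.order_date = (SELECT MAX(o2.order_date)
                           FROM   orders o2
                           WHERE  o2.customer_id = o.customer_id);

    -- Approach 2: one pass over the data with an analytic function.
    SELECT customer_id, order_id, order_date
    FROM  (SELECT o.customer_id, o.order_id, o.order_date,
                  ROW_NUMBER() OVER (PARTITION BY o.customer_id
                                     ORDER BY o.order_date DESC) rn
           FROM   orders o)
    WHERE  rn = 1;

Same answer, a lot less work for the database.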

A nice take away (that makes you go “duh”):

More code = More bugs

Less code = Less bugs

Moral of the story: find the simplest solution. If the code is really complex, you are probably wrong (or at least overcomplicating it). Try again.

Last symposium session for the day (for me) was Maria Colgan (Oracle) talking about tips to get the most out of the Oracle Cost Based Optimizer.

Maria is the queen of the optimizer. She explained what the optimizer will do in several situations and why and if it is wrong, what you need to change to get it right.

Okay – already on brain overload (and it is just day 1!).

Need sleep.

Have my own presentation tomorrow.

And Chi Gung at 7AM.

C’ya

Kent

P.S. There were lots of tweets all day with more pictures of the event. To see them look for #kscope and @ODTUG on Twitter (or follow me @kentgraziano).

Is Data Vault Agile?

You bet it is!

Years ago I wrote an article about Agile Data Warehousing and proposed using Data Vault Data Modeling as a way to get there. Dan Linstedt recently published an article with more details on how it fits. Here are the good parts:

1. Individuals and Interactions over processes and tools

The Data Vault is technology agnostic AND focuses VERY heavily on customer interaction. In fact, it’s really the only methodology where I’ve seen a very heavy emphasis on the fact that the business owns the data.

Also, you have to start with the Hub entities, and they require identification of the business keys, as specified step-by-step on page 54 of Super Charge Your Data Warehouse.

2. Working Software over Comprehensive Documentation

With the pattern-based architecture in a Data Vault model, and with the business rules downstream of the Data Warehouse, you can start building extremely fast and even use code-generation tools or scripts to get the first cut of your model.

I’ve in fact used code generation for Data Warehouses that have been in production for quite a few years. They’re even running today.

The Data Vault Model & Methodology, in my opinion, is the fastest way to get something delivered to a Data Warehouse, and it dramatically reduces project timelines and risk.

3. Customer Collaboration over Contract Negotiation

The Data Vault Methodology emphasizes the ownership of the project and data by the business and makes them a partner on any Business Intelligence project.

And the fact that it’s easy to address change makes them happy, which, interestingly enough, leads to the next one:

4. Responding to Change over Following a Plan

This makes some people cringe. But it’s a reality of most projects. The first time out, neither you nor the business REALLY knows what they want. It’s only after they see something that they realize the value of the information, and their brains start churning.

In the traditional forms of Data Warehousing, it takes scoping, project budgeting, planning, resource allocation, and other fun stuff before you can even get creative and give them what they think they want. The problem is, most business users don’t REALLY know. The DW team ends up thinking (and even assuming) for them, often incorrectly. You can end up with something that is really fancy and beautiful and still… useless!

To add to the complication: if it’s in fact a bad idea, it will be money ill spent; and if it’s a great idea, the time it takes to build may make them lose out on the competitive edge they’re looking for. Either way, it’s a big deal.

With the Data Vault, the model is built for change from the ground up. Since the core data NEVER ever changes, creating business-level user-interface layers on top is just so easy, and many architects and modelers think it’s ideal.
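To tie this back to point 1: the reason change is cheap is that a Hub is tiny and stable. Here is a minimal sketch of one (my own illustration with hypothetical names; see Dan's book for the full pattern):

    -- Minimal sketch of a Data Vault Hub: a surrogate key, the
    -- business key, and standard load metadata. Names are made up.
    CREATE TABLE hub_customer (
        customer_hkey  NUMBER        NOT NULL, -- surrogate key
        customer_bk    VARCHAR2(30)  NOT NULL, -- business key from the source
        load_dts       TIMESTAMP     NOT NULL, -- when the key was first seen
        record_source  VARCHAR2(50)  NOT NULL, -- which system it came from
        CONSTRAINT hub_customer_pk PRIMARY KEY (customer_hkey),
        CONSTRAINT hub_customer_uk UNIQUE (customer_bk)
    );

New descriptive data hangs off the Hub as Satellites, and new relationships arrive as Links, so the Hub itself (and everything already loaded) never has to change.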

Check out the full post – Agile Data Warehousing

(and don’t forget to buy the book).

BTW – if you are going to ODTUG KScope12 this June in San Antonio, be sure to stop by for a chat. I will be giving two talks, one on Data Vault and one on using SQL Developer Data Modeler.

See ya.

Kent

P.S. I am now on twitter! You can follow me there @KentGraziano.

The best FREE data modeling tool ever

Yup, I said FREE!

Oracle just released the latest and greatest version of SQL Developer Data Modeler (SDDM), and it is free to the world not only to download but to use in your production environment to develop all your models.

As many of you know, I have been using this tool for several years now and have mentioned it many times on various LinkedIn forums (just search for me and check out my activity). I have used SDDM for both Oracle and SQL Server. For forward engineering and reverse engineering. For conceptual, logical, and physical data models.

I think it is a great tool (even if it was not free).

I loved Oracle Designer and got quite good at it, but once shops stopped buying and using Designer (and Oracle pretty much sunsetted the tool), I suffered for a few years using other tools.

I was a very happy camper when Oracle came out with this new data modeling tool. I am even happier now with the new features they have added.

The one I like the most, so far, is the visual editor they added for defining views. The previous version had a decent declarative approach that allowed you to specify tables, columns, and joins, but you could not really “see” the implied data model.

The newest version of SDDM (version 3.1) has added a visual editor that shows you a diagram of the tables, columns, and joins. So now when you open (or define) a view and press the “query” property button, you get a drag-and-drop interface to build the view and a nice visual diagram.
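For context, here is the kind of view the Query Builder lets you assemble by dragging tables onto the canvas instead of typing the SQL (a hypothetical example; the table and column names are made up):

    -- The resulting view is still ordinary DDL; you just build it visually.
    CREATE OR REPLACE VIEW customer_orders_v AS
    SELECT c.customer_id,
           c.customer_name,
           o.order_id,
           o.order_date,
           o.order_total
    FROM   customers c
    JOIN   orders    o
           ON o.customer_id = c.customer_id;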

And the best part is that when you upgrade your existing models from previous versions, the old views automatically get diagrammed.

To get the best out of the new version, you need to run a one-time utility labeled “Parse Older Style Views”. You can find it on the right-click menu in any diagram with views. It runs very fast: it basically reads the SQL for your views and parses it out so the views show up properly in the diagram.

One nice new feature with the parsed views is that, if the underlying tables in the view are part of the same design file (hopefully you did not drop those), the view object on your diagram will now list those tables below all the columns. This is nice because now I do not have to open the view definition to see which tables the view is pulling from.

The other great new feature is the “Test Query” button on the view property dialog.

No more writing views that do not work. You press the button, specify a database connection to use, and the base query for the view fires.

If there is an error in your syntax, or the query references a table you don’t have access to, you find out immediately.

So gone are the days of writing the view in your modeling tool, logging into SQL*Plus or SQL Developer, testing the view, having it fail, then going back to SDDM to fix it.

Now you can do agile view development! In one tool!

Neat!

Oh, and if the view works, there is a data tab so you can see the actual data the view will produce – live. Right in the data modeling tool.

Pretty cool.

Nice job guys.

Convinced yet? Head over to the Oracle site and download your own copy and give it try.

UPDATE 2015: Data Modeler is now up to version 4.1 and going strong. Plus now there is an Oracle Press book available on Amazon: http://www.amazon.com/Oracle-Developer-Modeler-Database-Mastery-ebook/dp/B00VMMR9EA/

And I even have a tips and tricks Kindle book out on SDDM. You can find that here.

Let me know what you think in the blog comments.

Talk to you all later.

Kent

P.S. For all the new features in SDDM 4.1 check out the full list over here.

P.P.S. Need training on SDDM? Check out my post about my new workshop.
