The Data Warrior

Changing the world, one data model at a time. How can I help you?

Archive for the tag “agile”

Oracle Designer Lives!

Amazing as it seems, I picked this article up on Twitter today.

An up-to-date, current, and NEW article about automating builds of applications from the Oracle Designer repository.

How very agile…

Thanks to all the gang over at AMIS (http://technology.amis.nl/) for keeping the technology alive and for being innovative enough to adapt it for the modern agile development world.

Running Oracle Designer Generation from Ant and Hudson

Introduction

Oracle Designer is a Windows client-server development tool that is meant to be operated manually by a developer. Anyone trying to integrate Designer with an automatic build environment will find that it does not provide an API or a command-line option to kick off any generation automatically.

There is however a hook that can be exploited by generating so-called GBU files directly from the Designer Repository. These GBU files are then fed to an executable called dwzrun61.exe that executes the actual generation of DDL scripts and forms.

This article describes how this can be done, using examples from a real-world situation. It shows how to generate the GBU files, the different strategies that can be followed, and some of the pitfalls you might run into trying to pull this off yourself.

The code of the program we wrote can be found here and is free to be adjusted to fit situations other than ours.

via Running Oracle Designer Generation from Ant and Hudson.
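Just to give a flavor of the hook they describe, here is a minimal sketch (my own illustration, not the AMIS code) of feeding exported GBU files to dwzrun61.exe from a build step. The paths, file names, exact command-line arguments, and the little helper function are all assumptions; see their article and download for the real thing.

    import subprocess
    from pathlib import Path

    # Assumed locations -- adjust for your own Designer install and workspace.
    # (The exact arguments dwzrun61.exe expects are covered in the AMIS article;
    # this just shows the shape of the build loop.)
    DWZRUN = r"C:\orant\BIN\dwzrun61.exe"    # Designer batch generator (path assumed)
    GBU_DIR = Path(r"C:\build\gbu")          # where the exported GBU files land (assumed)

    def run_generation(gbu_file):
        """Feed one GBU file to dwzrun61.exe and fail the build if it errors."""
        result = subprocess.run([DWZRUN, str(gbu_file)], capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError("Generation failed for {}:\n{}".format(gbu_file.name, result.stdout))

    if __name__ == "__main__":
        # An Ant or Hudson build step would call something equivalent to this loop.
        for gbu in sorted(GBU_DIR.glob("*.gbu")):
            run_generation(gbu)
            print("Generated from", gbu.name)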

If you want to meet some of the guys from AMIS and pick their brains, be sure to sign up for KScope13 and come meet them in person.

See you there.

Kent

ODTUG KScope: Day 5 – Happy Trails

Well, the final day of KScope12 arrived, and it was another hot one, with the last sessions and the Texas heat. Another bright red sunrise greeted us, as it had all week.


Today I managed to get a picture of the group that showed up for Chi Gung every day at 7 AM. We even had some new people today (officially the last day). They all enjoyed the sessions and learned (hopefully) enough to practice a bit once they return home.

I am grateful to all the participants for showing up early each morning with enthusiasm and a willingness to try something new. It made my job of leading them much easier. (There will be a YouTube video sometime next week for people to review, so stay tuned.)

The first order of business for the day (after Chi Gung) was the official KScope closing session. Even though there were still two sessions to go afterward, we had the closing at 9:45 AM. We were entertained, yet again, with some photo and video footage taken throughout the week, including one interview with me! We also learned who got the presenter awards for each track and for the entire event.

Then we all got beads to remind us to go to KScope13 in New Orleans.

Next was my final session for the event: Reverse Engineering (and Re-Engineering) an Existing Database with Oracle SQL Developer Data Modeler.

I had a surprising number of people for the last day after the closing session. I think there were about 70 people wanting to learn more about SDDM. Apparently most people are unaware of the features of the tool (which I have written about in several posts).

So, that was nice.

Finally, I went to JP Dijcks’s talk about Big Data and Predicting the Future.

His basic premise is that, these days, we should never throw away any data, as it can all be used to extend the depth of analytics. We can react to events in real time and proactively change the outcomes of those events.

He showed a diagram illustrating the basics of one way that data moves through the world and into the Hadoop file system. I am oversimplifying, but it is a cool diagram.

Part of the challenge is uncovering un-modeled data. I guess that is where the recent Oracle acquisition, Endeca, comes in with their Data Discovery tool (again, oversimplifying).

And that was pretty much it for the show. It was a great week with lots of learning and networking (and tweeting). We all had a good time and learned enough to make our heads explode.

I look forward to meeting folks again next year at KScope13 in New Orleans.

Kent

ODTUG KScope12: Day 3 Recap. More Fun in the San Antonio Sun

Well it was another HOT day in San Antonio, Texas at the 2012 ODTUG KScope conference.

Really… it was.

It was something like 104 degrees outside with a Heat Index of 107.

Yikes.

But it was more like 65 degrees in the session rooms.

They do like to keep it cold inside here in Texas.

But the topics and speakers were hot anyway.

After an energizing session of Chi Gung this morning, my first session to attend was Mark Rittman talking about Exalytics and the TimesTen in-memory database. Based on the number of people in the room at 8:30 AM, I would call this a hot topic for sure.

Inquiring minds want to know if this Exalytics stuff is all it is cracked up to be (and worth the $$).


Mark did his best to give us the low-down, candid truth. Mostly it was good news.

With the Summary Advisor, it is pretty easy to create an In-Memory Adaptive Data Mart that will hold commonly used aggregates. It leverages the existing Aggregate Assistance Wizard.

So what, you ask? Well, that technology tracks all the queries run in your OBIEE server and figures out which summaries would help speed up your performance.

Now you won’t get your entire data warehouse into memory, but you will get the most heavily used data set up to return results faster.

The biggest gotcha is that it does not handle automatic incremental refreshes, so you have to use ODI or some scripting to refresh the TimesTen database yourself.
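To give a rough idea of what “some scripting” could look like, here is a hypothetical sketch (not anything Mark showed) that brute-force rebuilds a single aggregate table in TimesTen from the warehouse over ODBC. The DSNs, table, and query are all assumptions for illustration; a real setup would more likely use ODI, as Mark suggested.

    import pyodbc

    # Hypothetical scripted refresh of one TimesTen aggregate table.
    # The DSNs, table name, and source query are illustrative assumptions only.
    SOURCE_DSN = "DSN=warehouse_dsn"   # the source data warehouse (assumed ODBC DSN)
    TARGET_DSN = "DSN=timesten_dsn"    # the TimesTen in-memory database (assumed ODBC DSN)

    def refresh_aggregate(table, source_query):
        """Fully rebuild one aggregate table in TimesTen from a warehouse query."""
        with pyodbc.connect(SOURCE_DSN) as src, pyodbc.connect(TARGET_DSN) as tgt:
            rows = src.cursor().execute(source_query).fetchall()
            cur = tgt.cursor()
            cur.execute("DELETE FROM " + table)   # full refresh, not incremental
            if rows:
                placeholders = ", ".join("?" for _ in rows[0])
                cur.executemany(
                    "INSERT INTO {} VALUES ({})".format(table, placeholders), rows)
            tgt.commit()

    if __name__ == "__main__":
        refresh_aggregate(
            "agg_sales_by_month",
            "SELECT product_id, month_id, SUM(amount) FROM sales "
            "GROUP BY product_id, month_id")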

Anyway, the future does look bright for Exalytics.

Next up was Ashley Chen, Oracle Product Manager, talking about the new features in the 3.1 release of SQL Developer and SQL Developer Data Modeler.

Notably, SQL Developer now has some APEX integration and TimesTen integration, along with improved DB Copy and DB Diff utilities. Plus, they have redone the Oracle web site for SQL Dev to segment the tool into more logical groupings of functionality.

On the Data Modeler side, new features include Easy Sync to DB, better versioning support, a better, modifiable metadata reporting engine, and the new Query Builder for developing and testing the code for an Oracle view (I wrote about that here).

Then it was a bit of a break while I interviewed JP Dijcks in the ODTUG Social Media Lounge and then got my set of ODTUG tattoos.

Next up was lunch and the Oracle ACE and ACE Director Lunch and Learn sessions, where we divided the rooms by topic area and had the various Oracle ACEs answer questions and lead a discussion about topics in their area. A few of the BI ACEs were plotting their strategy for the panel.

They did end up asking me to join the panel, so I got to field a few questions about data modeling, big data, and whether to build a metrics model in the OBI RPD or the database. It depends…

After lunch I attended Ron Crisco’s talk about Agile and Data Design. A favorite topic of mine!

It is often a contentious topic, and Ron challenged us with some very good questions:

  • Is Agile the enemy of good design?
  • What is data design?
  • Who does it?
  • How do you keep it in sync with ongoing changes and implementation?

He kept this all in context of the principles in the Agile Manifesto and the goal of delivering useful software to the business.

Best quote: “Agile is an Attitude”

I completely agree!

I finished the day hanging out with Ashley Chen and Jeff Smith in the Data Modeler lab session as folks tried out the new features on a pre-configured Oracle VM.

Ashley and Jeff kept busy helping folks while I tried to get the new VM running on my laptop. No luck. Maybe tomorrow.

I did get to help a bit and answer a few questions for some of the participants.

No official KScope events tonight, so I got to spend a little time relaxing by the pool and in the lazy river with my friend JP and his family. I saw several other friends and colleagues as well, with their spouses and kids playing in the pool. Then we all got to watch Despicable Me projected on a sheet on the far side of the pool.

Pretty neat. Nice way to end the day.

Tomorrow should be another exciting day of sessions, and then we have the BIG EVENT: we all saddle up and head out to the Knibbe Ranch for BBQ and a real rodeo.

Yee haw!

See ya at the round-up tomorrow, y’all.

Kent

Is Data Vault Agile?

You bet it is!

Years ago I wrote an article about Agile Data Warehousing and proposed using Data Vault Data Modeling as a way to get there. Dan Linstedt recently published an article with more details on how it fits. Here are the good parts:

1. Individuals and Interactions over Processes and Tools

The Data Vault is technology agnostic AND focuses VERY heavily on customer interaction. In fact, it’s really the only methodology where I’ve seen a very heavy emphasis on the fact that the business owns the data.

Also, you have to start with the Hub entities, and they require identification of the business keys, as specified step-by-step on page 54 of Super Charge Your Data Warehouse.

2. Working Software over Comprehensive Documentation

With the pattern-based architecture in a Data Vault model, and with the business rules downstream of the Data Warehouse, you can start building extremely fast and even use code-generation tools or scripts to get the first cut of your model.

In fact, I’ve used code generation for Data Warehouses that have been in production for quite a few years. They’re even running today.

The Data Vault Model & Methodology, in my opinion, is the fastest way to get something delivered to a Data Warehouse, and it dramatically reduces project timelines and risk.
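To make the code-generation point concrete, here is a tiny, hypothetical sketch of the kind of script that can stamp out a first cut of the Hub tables once the business keys are identified. It just applies the standard Hub pattern (surrogate hub key, business key, load date, record source); the table and column names are mine for illustration, not from Dan’s post or the book.

    # Hypothetical first-cut generator: stamps out Hub DDL from a list of
    # business keys using the standard Data Vault Hub pattern. Names and
    # column types below are illustrative assumptions only.
    HUB_TEMPLATE = """CREATE TABLE hub_{name} (
        {name}_hkey     NUMBER        NOT NULL,  -- surrogate hub key
        {bk}            VARCHAR2(100) NOT NULL,  -- the business key
        load_dts        DATE          NOT NULL,  -- when we first saw the key
        record_source   VARCHAR2(50)  NOT NULL,  -- where the key came from
        CONSTRAINT pk_hub_{name} PRIMARY KEY ({name}_hkey),
        CONSTRAINT uk_hub_{name} UNIQUE ({bk})
    )"""

    def generate_hub_ddl(hubs):
        """hubs maps each hub name to its business key column; returns CREATE TABLE DDL."""
        return [HUB_TEMPLATE.format(name=name, bk=bk) for name, bk in hubs.items()]

    if __name__ == "__main__":
        # Two hubs identified from business keys the business gave us (examples only).
        for ddl in generate_hub_ddl({"customer": "customer_number", "product": "product_code"}):
            print(ddl + ";\n")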

3. Customer Collaboration over Contract Negotiation

The Data Vault Methodology emphasizes the ownership of the project and data by the business and makes them a partner on any Business Intelligence project.

And the fact that it’s easy to address change makes them happy, which, interestingly enough, is the next one:

4. Responding to Change over Following a Plan

This makes some people cringe. But it’s a reality of most projects. The first time out, neither you nor the business REALLY knows what they want. It’s only after they see something that they realize the value of the information and their brains start churning.

In the traditional forms of Data Warehousing, it takes scoping, project budgeting, planning, resource allocation, and other fun stuff before you can even get creative and give them what they think they want. The problem is, most business users don’t REALLY know. The DW team ends up thinking and even assuming for them, often incorrectly. You can end up with something that is really fancy and beautiful and still… useless!

To add to the complication, if it’s in fact a bad idea, it will be money ill spent, which can be as big a deal as when it’s a great idea but the time to build makes them lose out on the competitive edge they’re looking for.

With the Data Vault, the model is built for change from the ground up. Since the core data NEVER ever changes, creating business-level user-interface layers on top is just so easy – and many architects and modelers think it’s ideal.

Check out the full post – Agile Data Warehousing

(and don’t forget to buy the book).

BTW – if you are going to ODTUG KScope12 this June in San Antonio, be sure to stop by for a chat. I will be giving two talks, one on Data Vault and one on using SQL Developer Data Modeler.

See ya.

Kent

P.S. I am now on twitter! You can follow me there @KentGraziano.

A Data Architect’s Initial View of Data Vault

Wow, this is really cool! A long-time, experienced, Kimball-style architect had this to say (and more!) about the Data Vault:

The more I thought about it, the more I began thinking a traditional staging area and its complexities are a huge headache! The simpler design, using the data vault methodology as the persistent staging area, offers huge benefits over the traditional Kimball-style data warehouse staging area. This includes repeatable code use in building and populating the data vault, as well as the ability to easily account for and validate the data.

(see more at A Data Architect’s Initial View of Data Vault | Making Data Meaningful.)

That pretty much says it all.

Ready to learn Data Vault now?

Well then, get to it! Go to the learning portal and sign up, or at least go buy the book!

Later.

Kent
