The Data Warrior

Changing the world, one data model at a time. How can I help you?


Oracle OpenWorld 2012: User Group Sunday

Yes, today was the first day of #OOW 2012, affectionately known to many of us as User Group Sunday. Along with a ton of other activities, this is the day the various Oracle user groups get to “own” the agenda and put together the sessions they think Oracle customers, and their members, might want to see.

By users; for users.

For the 2nd year, ODTUG asked me to curate their agenda. I was fortunate enough to “recruit” some great track leads who invited and vetted speakers and sessions to fill five rooms for most of the day. It was quite successful. (Thanks for the hard work, guys.)

I attended quite a few sessions myself and captured a few photos and thoughts. I was tweeting all day, so you can also go to Twitter and search on @Kentgraziano to see my Twitter stream.

After checking in at the User Group kiosk, I went to my first session, given by Gwen Shapira and Robyn Sands, who spoke about Flexible Design and Data Modeling. Great topic. They gave some very practical advice on do’s and don’ts if you want to be more agile.

“Just good enough” does not scale.

Plan for Change

Worst Practices for Database Design

If you want some more modeling best practices, check out my ebook on Amazon: http://www.amazon.com/Check-Doing-Design-Reviews-ebook/dp/B008RG9L5E/.

Next I went on to see Kellyn Pot’vin and Stewart Bryson do a DBA vs. Developer showdown with No Surprises Development.

Release Planning Questions

Best advice – practice your deployments several times before going live…

Next: Guy Harrison talked about Hadoop, Big Data, and Exadata. This was a very helpful intro talk about the space. I have been trying to wrap my mind around Hadoop, NoSQL, unstructured data, etc., and how we deal with it all. Lots of great diagrams and examples helped explain things.

Google’s Software Architecture

The Hadoop Ecosystem

Sigh…more to learn.

Next was a very interesting session by Mark Rittman about the Oracle Endeca software, how it can be used in a BI environment, and how it complements OBIEE.

This gives a quick view of what is involved with the Oracle Endeca Platform.

Oracle Endeca Information Discovery Platform

It looks like a very interesting platform that uses key-value pairs to store the data. This enables search and analytics on some relatively unstructured data stores (i.e., not relational tables).
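To make the key-value idea concrete, here is a toy Python sketch (my own illustration, not Endeca code): each record is just a bag of key-value pairs with no shared schema, which is enough to support faceted counts and attribute search without relational tables.

```python
from collections import defaultdict

# Toy illustration (not Endeca itself): each "record" is a set of
# key-value pairs, and records need not share the same attributes.
records = [
    {"type": "shirt", "color": "blue", "size": "M"},
    {"type": "shirt", "color": "red"},               # no size attribute
    {"type": "hat", "color": "blue", "brim": "wide"},
]

def facet_counts(records, key):
    """Count the distinct values of one attribute across all records."""
    counts = defaultdict(int)
    for rec in records:
        if key in rec:
            counts[rec[key]] += 1
    return dict(counts)

def search(records, **criteria):
    """Return the records matching all of the given key-value criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

print(facet_counts(records, "color"))   # -> {'blue': 2, 'red': 1}
print(search(records, color="blue"))
```

The point of the sketch is just that nothing forced the three records into one table layout, yet both operations still work.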

The final talk of the day (for me) was Jon Mead telling us how they helped a customer develop event-driven analytics using ODI, OBIEE, and the Oracle Reference Architecture for data warehousing.

After all this, a little break and networking, then on to the opening keynote.

It started with the Corporate Sr VP of Fujitsu, who talked about some cloud applications they have deployed in Japan. They have the Agricultural Cloud project to help farmers be more efficient and bring more and better crops to market. They have also developed a Healthcare Cloud Service for optimizing patient care and early diagnosis.

Very cool cloud applications.

Last up was CEO Larry Ellison, who announced Oracle 12c and Pluggable Databases (to support cloud deployments). I had heard about these (under NDA) at the ACE Directors meeting, so now I can share a few pictures related to those since it is public information.

Oracle 12c

Bigger, badder, faster…

Oracle Cloud Ecosystem

Pluggable Database Architecture

With PDBs, you can develop a plug-and-play database. Many cool applications for this one.

To end out the day, I went to the 9th annual Oracle ACE dinner hosted by Oracle at the St. Francis Yacht Club. Great food, drinks, and networking were had by all. Then back to the hotel to write this blog post.

Now off to bed so I can swim the bay with some other crazy people tomorrow morning. Wish me luck. Brrr.

Later.

Kent

Five Days Only – Get it Free: A Check List for Doing Data Model Design Reviews

Later this week I travel to Oracle HQ for my first product briefing as an Oracle ACE Director. In celebration of this momentous event, I have decided to give all my readers and followers a gift:

For the next five days (Sept 24 – 28, 2012), my first solo Kindle book will be ON SALE for the low, low price of FREE!

Don’t delay. You can get it here: A Check List for Doing Data Model Design Reviews: Kent Graziano: Kindle Store.

In case you missed my earlier post about the book, here is a brief description:

Tired of crappy data models and whiney data modelers? Need to deliver a high quality design in a short period of time? Need a better way to enforce standards? As part of trying to be more “agile” in my approach to developing databases, I have adopted a concept from the agile world: peer reviews. Before any data model moves from analysis (logical model) into development (physical model), the development team needs to gather to review what the modeler has done. If the model passes the review (almost never on the first round), the physical model is constructed. The physical model is then subjected to a rigorous review as well (including metadata). Only then can DDL be produced and deployed. This guide book will discuss the actual modeling and design process I follow and give you a check list of questions to ask in any model review session. This is a “take no prisoners” approach that has left many a would-be data modeler in a withering heap, but in the end you will have solid models and designs that deliver value.

The book has been doing pretty well (it sells for $2.99 normally), but it could do better. 😉 Currently it is #32 if you search for Data Modeling under Kindle ebooks.

Will you help me get it into the top 10?

[ Update: as of Sept 24, 2012 at 12:45 PM CDT the book is now #2 in the Kindle store for Databases! Thanks everyone. Let’s keep it rollin’]

[ Update #2: as of Sept 25, 2012 at 12:45 PM CDT the book is now #1 in the Kindle store for Databases! How long can we keep it there?]

Head on over to Amazon and get it today: A Check List for Doing Data Model Design Reviews.

Thanks a bunch. Hope you can put the information to good use.

Oracle ACE Director

Kent

P.S. Do me another favor? After you get the book (for FREE), please log back into Amazon and leave a review so other data modelers know if it is a worthwhile book for them to read.

P.P.S. Don’t forget to like this post! And click the Follow button (upper right) if you want to get my posts sent to your email directly.

Five ways to make Data Modeling Fun

While on my recent family vacation, I happened to mention I needed ideas for a blog post.

My son, all of nine years old, suggested the above title.

Hmmm…I said…not bad. That might work.

After all, most people think data modeling is booooorrring, right?

But for a few of us, it is kind of fun.

So then I asked him if he had any ideas how we could make it fun.

My son does not actually know how to do any data modeling (yet), but he has looked over my shoulder a few times and knows I draw pictures with boxes and connecting lines and words in the boxes.

With that bit of knowledge, he did come up with a few good ideas that really could make data model review sessions a bit more fun, and maybe more effective.

Here they are:

Word Search

Put up a large version of a data model on the wall. Give the reviewers a list of words to find on the model diagram (you produce the list from your data dictionary). Have them go to the diagram and highlight or circle the words on their list.

This will help get everyone familiar with the model and the layout of the diagram.

For more fun – form teams and keep score! Maybe even add a time limit per word.

Silly Sentences

If you don’t know how this works, you start with sentences that have blanks in strategic spots. So the sentences may be missing nouns, verbs, adverbs, etc. You have someone fill in the blanks out of context: you ask for a noun, but they have no idea what the sentence looks like until after all the blanks are filled in. (This game is in my son’s Nat Geo magazine.) It can be quite funny.

One of the hardest parts of a logical model is naming the relationships. Use this game to figure out the right sentences.

Start by writing the relationships with completely silly or even wrong verbs:

Each Customer must be found squatting at one or more Addresses.

Use your creativity to come up with goofy verbs for the relationships. Then get the users to “validate” the sentences.

I am sure they will be more than willing to correct your errors. 😉
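If you want to crank out a batch of these sentences for a review session, a tiny script will do it. This is just an illustrative sketch of my own (including the crude pluralizer), not part of any modeling tool:

```python
# Hypothetical sketch: generate "silly" relationship sentences for a
# model review session. The verbs are deliberately wrong -- the whole
# point of the game is that the users will correct them.

def pluralize(noun):
    """Crude pluralizer -- good enough for a review-session game."""
    return noun + ("es" if noun.endswith(("s", "x", "ch", "sh")) else "s")

def silly_sentence(entity, optionality, verb, cardinality, other):
    """Build one relationship sentence in the classic reading style."""
    if cardinality == "one or more":
        other = pluralize(other)
    return f"Each {entity} {optionality} {verb} {cardinality} {other}."

relationships = [
    ("Customer", "must be", "found squatting at", "one or more", "Address"),
    ("Order", "may be", "hurled toward", "exactly one", "Warehouse"),
]

for rel in relationships:
    print(silly_sentence(*rel))
# -> Each Customer must be found squatting at one or more Addresses.
# -> Each Order may be hurled toward exactly one Warehouse.
```

Swap in your own entities and goofy verbs, print the list, and let the users argue you down to the correct ones.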

Jeopardy

You all know how this game works – you get the answer and have to come up with the questions.

This is an interesting way to validate your entity and attribute definitions. Use entity definitions as the answers. Users have to guess the entity name.

For example: What is a customer?

Of course it will be really interesting to see if they can link definitions you got from them with the entity names in the model. You might get some clarifications in the process.

Data Model Haiku

You can do this with definitions or maybe relationship sentences. Trying to put the words in a specific form will make you really think about your understanding of the concepts (and force you to be succinct).

Each customer may

Be contacted by one or

More customer reps

Note for my friends in the UK: feel free to do sonnets in iambic pentameter.

Data Model Telephone

This is pretty much what happens anyway – you attend a meeting with the customer, they give you requirements, you take notes then try to build a model from those notes. You write out definitions and get them to review those. Chances are good you did not get it quite right.

So for fun, and to make a point about recording details carefully, get your team in a room and start at one end whispering a definition to the first person and have them pass it on. Write down the end result to compare to the definition in the model.

If the result is really funny, tell the customer at the next review meeting.

So what do you think? Can we make data modeling more fun?

Let me know your thoughts in the comments below.

If you have any fun ideas, please share those too!

Game on!

Kent

P.S. If you would like some other ideas on how to get better data models, check out my recent Kindle book on best practices for data model design reviews.

Best Practice: How to Create the Best Data Model Ever

A good data model, done right the first time, can save you time and money.

We have all seen the charts on the increasing cost of finding a mistake/bug/error late in a software development cycle.

Would you like to reduce, or even eliminate, your risk of finding one of those errors late in the game?

Of course you would! Who wouldn’t? Nobody plans to miss a requirement or make a bad design decision (well, nobody sane anyway).

No modeler worth their salt wants to leave a model incomplete or incorrect.

So what can you do to minimize the risk?

Well, if you are designing relational database or data warehouse systems, you can do your part by implementing a best practice approach to developing your data models.

What you need is a simple, repeatable process for reviewing your models.

Conceptual. Logical. Physical.

Years ago, a client asked me to help them develop a review process for their new data architecture committee. One that even a non-modeler could follow.

It had to be easy to follow and repeatable.

A checklist of what to look for and what to ask the modeler to make sure they got the best possible model.

It worked like a charm.

I have been using and refining that check list ever since.

It is amazing how many issues I have found over the years using this approach.

And I usually found them in early stages. They were also usually pretty small issues that were easy to fix at that stage.

A missing attribute definition.

A missing business key.

Incorrect cardinality or optionality on a relationship.

Small, but they would have been costly to fix if we had built the database with the original design and started coding the application, then found the mistake.

I imagine that you could probably benefit from using my process and having this checklist handy to set up your very own data model design review process. Am I right?

So I decided to publish it and make it available to all my loyal readers and followers (even you lurkers out there!). 😉

As of today you can get your very own copy of the process details, pre-review questions, and the review checklist for both logical and physical models in the convenient Kindle format for a crazy low price.

This is way less than you would pay for me or any other data model consultant to build one for you.

Even better, if you have Amazon Prime you can get it for free via the lending library. So try before you buy (you really do want your own copy to keep, honest).

So head on over to Amazon and check it out.

Will you do me a favor?

If you like it and think it can help your friends and colleagues at other companies, then please post a review and be sure to tell them about it over email, LinkedIn, or Twitter.

BTW – You don’t have to own a Kindle to get my book. You can download a FREE Kindle reader to your PC, MAC, iPhone, or Android device. So don’t worry…just get the book and tell your friends.

Happy Modeling!

Kent

P.S. If you have any ideas for other little reports I could provide, leave me a comment on the blog. Thanks!

ODTUG KScope12: Day 3 Recap. More Fun in the San Antonio Sun

Well it was another HOT day in San Antonio, Texas at the 2012 ODTUG KScope conference.

Really.. it was.

It was something like 104 degrees outside with a Heat Index of 107.

Yikes.

But it was more like 65 degrees in the session rooms.

They do like to keep it cold inside here in Texas.

But the topics and speakers were hot anyway.

After an energizing session of Chi Gung this morning, my first session to attend was Mark Rittman talking about Exalytics and the TimesTen in-memory database. Based on the number of people in the room at 8:30 AM, I would call this a hot topic for sure.

Inquiring minds want to know if this Exalytics stuff is all it is cracked up to be (and worth the $$).


Mark did his best to give us the low down, candid truth. Mostly it was good news.

With the Summary Advisor, it is pretty easy to create an In-Memory Adaptive Data Mart that holds commonly used aggregates. It leverages the existing Aggregate Persistence Wizard.

So what, you ask? Well, that technology tracks all the queries run in your OBIEE server and figures out which summaries would help speed up your performance.

Now you won’t get your entire data warehouse up in memory, but you will get the most used data set up to return faster.
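The core idea is simple, and here is a toy sketch of my own (definitely not Oracle's actual algorithm): tally which grouping levels the query workload actually hits, then recommend pre-built aggregates for the most frequent ones.

```python
from collections import Counter

# Toy sketch (not Oracle's algorithm): log the grouping level each
# query aggregates at, then recommend pre-built summaries for the
# levels that are queried most often.
query_log = [
    ("month", "product"),
    ("month", "product"),
    ("day", "store"),
    ("month", "product"),
    ("quarter", "region"),
]

def recommend_aggregates(log, top_n=2):
    """Return the most frequently queried grouping levels."""
    return [level for level, _count in Counter(log).most_common(top_n)]

print(recommend_aggregates(query_log, top_n=1))
# -> [('month', 'product')]
```

In this toy workload, month-by-product queries dominate, so that is the summary worth materializing in memory first.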

The biggest gotcha is that it does not do automatic incremental refreshes, so you have to use ODI or some scripting to refresh the TimesTen database.

Anyway, the future does look bright for Exalytics.

Next up was Ashley Chen, Oracle Product Manager, talking about the new features in the 3.1 release of SQL Developer and SQL Developer Data Modeler.

Notably, SQL Developer now has some APEX integration and TimesTen integration, along with improved DB Copy and DB Diff utilities. Plus they have redone the Oracle web site for SQL Dev to segment the tool into more logical groupings of functionality.

On the Data Modeler side, new features include Easy Sync to DB, better versioning support, a better, modifiable metadata reporting engine, and the new Query Builder for developing and testing the code for an Oracle view (I wrote about that here).

Then it was a bit of a break while I interviewed JP Dijcks in the ODTUG Social Media Lounge and then got my set of ODTUG tattoos.

Next it was lunch and the Oracle ACE and ACE Directors Lunch and Learn sessions where we divided the rooms by topic area and had the various Oracle ACEs answer questions and lead a discussion about topics in their area. Here are a few of the BI ACEs plotting their strategy for the panel.

They did end up asking me to join the panel, so I got to field a few questions about data modeling, big data, and whether to build a metrics model in the OBI RPD or the database. (It depends….)

After lunch I attended Ron Crisco’s talk about Agile and Data Design. A favorite topic of mine!

Often a contentious topic, Ron challenged us with some very good questions:

  • Is Agile the enemy of good design?
  • What is data design?
  • Who does it?
  • How do you keep it in sync with ongoing changes and implementation?

He kept this all in context of the principles in the Agile Manifesto and the goal of delivering useful software to the business.

Best quote: “Agile is an Attitude”

I completely agree!

I finished the day hanging out with Ashley Chen and Jeff Smith in the Data Modeler lab session as folks tried out the new features on a pre-configured Oracle VM.

Ashley and Jeff kept busy helping folks while I tried to get the new VM running on my laptop. No luck. Maybe tomorrow.

I did get to help a bit and answer a few questions for some of the participants.

No official KScope events tonight, so I got to spend a little time relaxing by the pool and in the lazy river with my friend JP and his family. Saw several other friends and colleagues as well, with their spouses and kids playing in the pool. Then we all got to watch Despicable Me projected on a sheet on the far side of the pool.

Pretty neat. Nice way to end the day.

Tomorrow should be another exciting day of sessions and then we have the BIG EVENT: we all saddle up and head out to the Knibbe Ranch for BBQ and a real rodeo.

Yee haw!

See ya at the round-up tomorrow, y’all.

Kent
