The Data Warrior

Changing the world, one data model at a time. How can I help you?


Early Christmas: The New #SQLDev Data Modeler is Here!

Thanks to the gang at Oracle for an early Christmas present – the newest version of Oracle SQL Developer Data Modeler (SDDM) is ready for download and use.

The best FREE data modeling tool on the planet just got better!

To be clear, this is Early Adopter (EA) version 2 of SDDM 4.2. You can get it here right now!

#SQLDev Data Modeler New Features

Of course, there are some bug fixes from EA1, but there are also some new features for you to enjoy:

Import from Oracle Database

  •   performance and filtering enhancements
  •   ability to define Oracle Client for thick connections
  •   the driving query and columns for views and materialized views are now parsed and validated

Versioning

  •   improvements in performance
  •   new models are shown as a single node in the Pending Changes window

Reporting

  • PDF reports can now embed diagrams, with links from the diagram to the corresponding detail sections of the report
  • HTML reports for tables now include diagrams

 

SQL Developer Data Modeler EA2 HTML report with diagrams embedded

So go download and unwrap that present!

Cheers!

Kent

The Data Warrior

P.S. If you need training on Oracle Data Modeler, be sure to check out my online video training course along with my tips and tricks ebook. (HINT: Buy them now, and you may be able to deduct the cost from your 2016 taxes as an educational expense.)

Drill to Detail Podcast: Data Modeling, Data Vault, and Snowflake!

My good friend Mark Rittman has embarked on a new adventure as an independent analyst and consultant. As part of his new venture, Mark has started a podcast series on iTunes called Drill to Detail, where he features interviews discussing a range of topics related to data warehousing, business intelligence, analytics, and big data.

I was honored to be asked to take part in this new venture and got to spend an hour with Mark a few weeks back recording what is now Episode 5 of the series. In this interview, we talk about data modeling, Data Vault, and Snowflake.

The podcast runs about 60 minutes, with each topic getting about 20 minutes (so feel free to skip ahead if you are short on time). Please have a listen and let us know what you think in the comments below.

Cheers!

Kent

The Data Warrior

P.S. I will be speaking on these and related topics at a bunch of events over the next few weeks. Check out my speaking schedule and join me in person if you can!

Maintaining disabled FK’s, wisdom or farce?

A while back, I wrote a post about having FKs (foreign keys) in your data warehouse.

Well, a similar question came up recently on an Oracle forum with the above title. It is a fair question and it does surface fairly regularly in a variety of contexts (not just data warehousing).

Of course, as The Data Warrior, I felt it was my duty to respond.

The Question

Is there any reason to maintain a permanently disabled FK in the data model?  I’m not envisioning a reason to do it.  If it is not going to be enabled, then from my perspective, it would not make any sense to have it defined.  If anything, provide the definition of the relationship in the comment of the child column.

My Answer

Yes, by all means keep the FK please!

I see three good reasons for doing so:

  1. It is valuable metadata (& documentation). If somebody reverse engineers the database (say with ERwin or Oracle Data Modeler), the FK shows up in the diagram, which is way better than having to read a column comment to find out.

     Data Vault 2.0 Example

     A picture is worth a thousand words!

  2. BI Metadata – If you want to use any sort of reporting or BI tool against the database, most tools will import the FK definition along with the tables and build the proper join conditions. Way better than having someone guess what the join should be and then manually add it to the metadata layer in the reporting tool. Examples that can read the Oracle data dictionary include OBIEE, Business Objects, COGNOS, Looker, and many others. (Note that since the FK is not enforced in the database, you might want to make sure these are treated as outer joins, lest you lose some transactions in the reports.)
  3. The Oracle optimizer will use disabled constraints to improve the query performance of joins. Again, this is metadata in the data dictionary which the optimizer can read. This is documented in the Oracle Data Warehouse guide and I have validated it on multiple occasions with Oracle product management. (A minimal example follows this list.)
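
For illustration, here is a minimal sketch (hypothetical table and column names, not from the forum thread) of how such a disabled constraint might be declared in Oracle so the optimizer can still take advantage of it:

    -- Hypothetical example: the FK is never enforced (DISABLE NOVALIDATE),
    -- but RELY tells the optimizer it can trust the relationship
    -- for query rewrite and join elimination.
    ALTER TABLE sales_fact
      ADD CONSTRAINT fk_sales_customer
      FOREIGN KEY (customer_key)
      REFERENCES customer_dim (customer_key)
      RELY DISABLE NOVALIDATE;

Reverse-engineering tools and BI metadata imports will still pick a constraint like this up from the data dictionary, which is what points #1 and #2 rely on.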

While #3 applies specifically to Oracle, for other databases like MS SQL Server and Snowflake, #1 and #2 still apply.

Even if only one of the above is true for a given database, that, in my opinion, still justifies keeping the disabled constraint around.

Final Answer = Wisdom

What do you think? Feel free to comment below.

And please share on your favorite social media platform!

Model on!

Kent

The Data Warrior

 

Introduction to Snowflake!

Heads up! I will be giving a webinar next week, called Enabling Cloud-Native Elastic Data Warehousing, to introduce folks to the Snowflake Elastic Data Warehouse. Sign up here and join me on July 12th!

Special thanks to DAMA International for inviting me to do this!

See you there!

Kent

The Data Warrior

Better Data Modeling: Customizing Oracle SQL Developer Data Modeler (#SQLDevModeler) to Support Custom Data Types

On a recent customer call (for Snowflake), the data architects were asking if Snowflake provided a data model diagramming tool to design and generate data warehouse tables or to view a data model of an existing Snowflake data warehouse. Or if we knew of any that would work with Snowflake.

Well, we do not provide one of our own – our service is the Snowflake Elastic Data Warehouse (#ElasticDW).

The good news is that there are data modeling tools in the broader ecosystem that you can of course use (since we are ANSI SQL compliant).

If you have read my previous posts on using JSON within Snowflake, you also know that we have a new data type called VARIANT for storing semi-structured data like JSON, AVRO, and XML.

In this post, I will bring it all together and show you the steps to customize SDDM so you can model and generate table DDL containing columns that use the VARIANT data type.
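
To give you a rough idea of the kind of DDL the customized modeler ends up generating, here is a minimal sketch (hypothetical table and column names, not the exact output from the post):

    -- Hypothetical Snowflake table DDL with a VARIANT column
    -- for semi-structured data (JSON, Avro, or XML stored natively)
    CREATE TABLE customer_events (
      event_id    NUMBER,
      event_ts    TIMESTAMP_NTZ,
      raw_payload VARIANT
    );

Once data is loaded, the attributes inside the VARIANT column can be queried directly with Snowflake's path notation, for example raw_payload:customer.name::STRING (again, hypothetical names).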

Read the details of how I did it here on my Snowflake blog:

Snowflake SQL: Customizing Oracle Sql Developer Data Modeler (SDDM) to Support Snowflake VARIANT – Snowflake

Enjoy!

Kent

The Data Warrior

P.S. If you are in Austin, Texas this weekend, I will be speaking at Data Day Texas (#DDTX16). Snowflake will have a booth there too, so come on by and say howdy!

