The Data Warrior

Changing the world, one data model at a time. How can I help you?

Archive for the category “Oracle”

Agile Amped Interview

Last week I had a great time speaking and networking at the 1st ever Agile Alliance Technical Conference (#AATC2016), which was held at the newly renovated Marriott Crabtree.

The conference was a great success (IMHO), with some very interesting talks about TDD (Test Driven Development), Pair Programming, and Mob Programming (new to me!) among others. And of course my talk on Agile Data Engineering. You can catch up on all the action by reading the Tweet stream on #AATC2016.

While there I had the honor of being interviewed about my thoughts related to agile, data warehousing, and data vault. (And of course I mentioned Snowflake too!). The interview was hosted by SolutionsIQ and is now included in their online library of video podcasts about agile topics, called Agile Amped (a FREE resource).

Here is the interview:


Here is a direct link to the interview on the SolutionsIQ site.

Thanks to SolutionsIQ and Neville Poole for doing the interview and to Agile Alliance for inviting me to speak.

Agile On!

Kent

The Data Warrior

P.S. Keep your eyes open for #AATC2017 as I am pretty sure we will do this again next year!

P.P.S. Before the conference I gave another interview where I talked about why we need Agile Data Engineering.

The Data Warrior Speaks 2016: Updated

As expected, I have been booked to speak at a few more places this year.

Here is my updated speaking schedule as of today:

RMOUG Training Days 2016 – February 9-11 in Denver, CO (I have a 2-hour deep dive on Feb 9th). Register here.

TDWI Nashville – March 8th in Nashville (of course). I will be discussing how to apply Agile Methods to Data Warehousing. You can get more details (soon) and sign up here.

Tampa Analytics Professionals – March 22 at the St Pete College Epicenter. Again talking about how to apply Agile Methods to Data Warehousing. You can get details and sign up here.

Agile Alliance Technical Conference 2016 – April 7-9 at the Raleigh Marriott Crabtree Valley in Raleigh, North Carolina. I will present Agile Data Engineering: Introduction to Data Vault Data Modeling on Thursday, April 7th. The Super Early Bird and Early Bird rates are still available. Register here.

Enterprise Data World – April 17-22 at the Sheraton Marina in San Diego, California. Register early for discounts. My talk here will be Agile Data Warehousing: Building a Virtualized ODS.

Data Science Maryland Meetup – May 16th (Tentative). I expect to be talking about how to apply Agile Methods to Data Warehousing. Keep your eyes on the meetup page for details and to sign up.

World Wide Data Vault Consortium (WWDVC) – May 25-28 in Stowe, Vermont. I am now confirmed to speak at WWDVC for the 3rd time! And this year, Snowflake Computing will also be a sponsor. My talk this year will be Agile Data Warehousing: Building a Virtualized ODS with Oracle SDDM. Register here soon, as this event has limited space.

ODTUG KScope16 – June 26-30 in Chicago, IL. Register early and be sure to book the hotel! My talk this year will be Data Warehousing in the Real World. I will also be running my annual Morning Chi Gung sessions.

And that is the first half of the year. I have nothing confirmed yet for the 2nd half, but am sure something will pop up.

Stay tuned.

I look forward to seeing y’all at one of these events.

Kent

The Data Warrior

P.S. I will also be working the Snowflake booth at both the Gartner BI and Analytics Summit this March in Dallas, and the HIMSS event in Las Vegas at the end of February. Stop by and say “hi” if you will be at either of these events.

Better Data Modeling: Customizing Oracle SQL Developer Data Modeler (#SQLDevModeler) to Support Custom Data Types

On a recent customer call (for Snowflake), the data architects were asking if Snowflake provided a data model diagramming tool to design and generate data warehouse tables or to view a data model of an existing Snowflake data warehouse. Or if we knew of any that would work with Snowflake.

Well, we do not provide one of our own – our service is the Snowflake Elastic Data Warehouse (#ElasticDW).

The good news is that there are data modeling tools in the broader ecosystem that you can of course use (since we are ANSI SQL compliant).

If you have read my previous posts on using JSON within Snowflake, you also know that we have a new data type called VARIANT for storing semi-structured data like JSON, AVRO, and XML.
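
To make that concrete, here is a minimal sketch of what a table with a VARIANT column looks like in Snowflake SQL. The table and column names are made up purely for illustration; the VARIANT type and the colon/cast path syntax are standard Snowflake SQL:

CREATE TABLE customer_events (
  event_id      NUMBER        NOT NULL,  -- ordinary relational columns
  customer_id   NUMBER        NOT NULL,
  event_ts      TIMESTAMP_NTZ,
  event_payload VARIANT                  -- holds the semi-structured document (e.g., JSON)
);

-- Elements inside the VARIANT column can then be queried with path notation and a cast:
SELECT event_payload:source::STRING AS source_system
FROM customer_events;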

In this post I will bring it all together and show you the steps to customize SDDM so you can model and generate table DDL with columns that use the VARIANT data type.

Read the details of how I did it here on my Snowflake blog:

Snowflake SQL: Customizing Oracle Sql Developer Data Modeler (SDDM) to Support Snowflake VARIANT – Snowflake

Enjoy!

Kent

The Data Warrior

P.S. If you are in Austin, Texas this weekend, I will be speaking at Data Day Texas (#DDTX16). Snowflake will have a booth there too, so come on by and say howdy!

Better Data Modeling: Discovering Foreign Keys (FK) in #SQLDevModeler (SDDM)

A while back I had an interesting situation when I was attempting to reverse engineer and document a database for a client.

I had a 3rd party database that had PKs defined on every table but no FKs in the database. The question I posed (on the Data Modeler Forum) was:

How do I get the FK Discover utility to find FK columns with this type of pattern:

Parent PK column = TABCUSTNUM

Child FK column = ABCCUSTNUM

So the root column name (CUSTNUM) is standard, but in every table the column name has a different 3-character “prefix” that is effectively the table short name. Is there a way to get the utility to ignore the first three characters of the column names?

This was in SDDM 4.1.873.

No easy answer.

Well, the ever helpful Philip was very kind and wrote me a slick custom Transformation Script that did the trick! (Check the post if you want to see the code.)

But wait there’s more!

In his response he mentioned a feature coming in 4.1.888 – the ability to include a table prefix as part of a FK column naming template (just like this app had done).

Cool, I thought, but how does that help?

Well, with the template in place, it turns out that you can have the FK Discovery utility search based on the naming template rather than just looking for exact matching column names.

Using the Custom Naming Template

So recently (today in fact) I was trying to add FKs to the Snowflake DB model I reverse engineered a few weeks back (Jeff pointed out they were missing). I noticed the model had that pattern of a prefix on both the FK and PK column names.

In the CUSTOMER table the PK is C_CUSTKEY. In the ORDER table it is O_CUSTKEY. Nice simple pattern (see the diagram below for more). That reminded me of the previous issue and Philip’s script.

Snowflake Schema

Off to OTN to find that discussion and refresh my memory.

In that thread, Philip had posted an example of a template that fit my previous problem:

{table abbr}SUBSTR(4,30,FRONT,{ref column})

With the note that {table abbr} would be the equivalent of what I called the table prefix. So first I went to the table properties and put in the prefixes using the Abbreviation property:

Add Table Abbrev

Then all I had to do was modify his example to account for the underscore and the fact that the main column text would start at character #3 instead of #4:

{table abbr}_SUBSTR(3,30,FRONT,{ref column})
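To see how this template resolves to the column names in my model, here is the expansion for the ORDER-to-CUSTOMER relationship (my own illustration of how I read the template):

{table abbr} -> O (the abbreviation I set on the child ORDER table)
SUBSTR(3,30,FRONT,{ref column}) -> CUSTKEY (characters 3 through 30 of the referenced C_CUSTKEY)
Combined with the underscore, that gives O_CUSTKEY, which matches the existing column, so the utility can pair it with CUSTOMER’s C_CUSTKEY.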

I input that by going to Properties -> Settings -> Naming Standards -> Templates and then editing the default template:

Set up FK template

Discover FKs!

Now it was just a matter of running the utility. You find it with a right-click on the Relational Design node:

Discover FK tool

Next you get the list of candidate FKs:

Create FKs

Note that the utility also suggested some FKs based on unique constraints (UKs). I did not want those, so I unchecked them before hitting “OK”.

The result was that all the FKs I wanted were added to my model! Voila!

Snowflake with RI

So I can happily report that Philip’s little enhancement works just fine in 4.1.3. WooHoo! I can see this being very useful in lots of cases in the future.

In a future post (early next year), I will continue by showing how we implemented Referential Integrity constraints in Snowflake DB and whether I can generate the DDL from #SQLDevModeler.
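
In the meantime, here is a rough sketch of what such a constraint could look like in Snowflake SQL. This is just my illustration (not SDDM-generated DDL), assuming TPC-H-style CUSTOMER and ORDERS tables like those in the diagrams above; keep in mind that Snowflake records foreign key constraints as metadata but does not enforce them:

ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (o_custkey) REFERENCES customer (c_custkey);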

Happy New Year Y’all

Kent

The Data Warrior & Snowflake Technical Evangelist

Data Modeler 4.1.3 is OUT

A new, updated version of Oracle SQL Developer Data Modeler is available with some important fixes.

HeliFromFinland

The development team has been busy! 4.1.3 is now out, with plenty of bugs fixed.

http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html

Bugs fixed (from the download pages):

22350565 – String index out of bounds exception while importing structured type method bodies
22311829 – Table report generating duplicate FK columns
22350611 – Cannot select relationship/foreign key line
22339253 – Change request report does not generate data in the report
22187312 – Error when running design rules using Data Modeler version 4.1.2.895

View original post
