The last and coming weeks are busy times. Apart from my daily work for my client, I’m moving to the Netherlands, both personally and professionally.
My company in Belgium will cease to exist by the end of the first quarter of 2018, while a Dutch company is being set up as we speak. The name will stay the same, as will the services delivered.
Meanwhile, I’m looking for a permanent residence in The Hague, where I used to live before I moved to Belgium.
I’m not going to bother you with all the administrative details that are involved, but it’s a bit overwhelming. It’s also my main focus at the moment; other things are on hold until everything has been arranged.
This site will stay and be maintained in the future but I will drop the .be domain. The .com domain will remain.
In part 1 of this series, I showed three approaches to modeling the business concepts. The third attempt, shown below, is the best one, and I’ll explain why.
The business key issues
After discussions with the business and by looking at the sources, the business keys of the concepts turned out to be as follows for a “study site”:
Site sequence number
This is a legacy business key, however, because the concept of “master site” was introduced later. It also poses some issues with the first and second attempts:
Region isn’t foreseen anywhere but could potentially be modeled as a hub on its own;
The site sequence number is meaningless on its own and can’t be modeled as a hub, resulting in a degenerate column in the “study site” link, as it’s still part of the key.
The second issue is still valid according to Dan’s book1, but will require a slightly different pattern for loading (and thus automation) than a standard link that is the intersection of actual hubs.
If the region isn’t important enough to the business to be modeled as its own hub, it would also need to be modeled as a degenerate column, bringing us back to the second issue.
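To make the second issue concrete, here is a minimal Python sketch of a “study site” link whose unique key includes the site sequence number as a degenerate (dependent-child) column rather than a hub reference. All names and values are invented for illustration:

```python
# Sketch of a "study site" link whose unique key includes a degenerate
# column: the first two key parts reference hubs, the site sequence
# number does not, which is why the loading pattern differs from a
# standard hub-intersection link.
link_study_site = {}

def load_study_site(study_key, master_site_key, site_seq_nr):
    """Insert the association once per full key, sequence number included."""
    key = (study_key, master_site_key, site_seq_nr)
    link_study_site.setdefault(key, {"record_source": "src"})

load_study_site("STUDY-9", "MS-1", 1)
load_study_site("STUDY-9", "MS-1", 2)  # same hub keys, new seq nr: new row
load_study_site("STUDY-9", "MS-1", 1)  # duplicate: ignored
```

The same pair of hub keys can appear more than once, so the automation has to treat the sequence number as part of the uniqueness check.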
The extensibility issues
Our model isn’t complete yet. During discussions with the business, it turned out there are also “subjects” and “findings”.
A subject is a person (but not identifiable as such) taking part in a study at a particular study site. The same person taking part in another study (probably at another site), is a different subject.
The business key is nothing more than a sequence number (within the “study site”).
A finding is a side effect that a subject experienced during the study. Although not synonymous, think of it as a symptom.
The business key is nothing more than a sequence number or a date within the subject.
Do you see the problem here? Everything seems to have some kind of hierarchical relationship. Using the denormalization technique, you would end up with the following model:
To me this looks a bit messy, with too many dependencies. I know this is subjective again, but if you need to extend the model even further, it will only get messier.
The alternative extension
If we however follow the idea2 of the third attempt in the previous post, subject and finding are just new hubs. Adding those to the model is easy, see below:
This model is much cleaner and easier to extend. You can argue that some information is lost, because you can’t immediately see the dependencies that lie within the business keys. But the question is whether these are really business keys or just a technical solution in the source that the business adopted as business keys (because they had no other choice).
Even if the composition of these business keys changes, the definition of each of the hubs remains the same. The relationships between these hubs most likely don’t change either. And if they do, you can easily create other links between the hubs without further impact on the rest of the model.
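A tiny Python sketch can show why the hub-centric model extends so easily: relationships are separate links, so a changed or new relationship is just a new entry. The hub and link names here are illustrative, not the actual model:

```python
# Hubs are stable business concepts; links are the relationships between
# them. Extending the model means adding entries, not restructuring.
model = {
    "hubs": {"study", "master_site", "study_site", "subject", "finding"},
    "links": {
        "study_site_master_site": ("study_site", "master_site"),
        "study_site_study": ("study_site", "study"),
    },
}

# A new business relationship touches nothing that already exists:
model["links"]["subject_study_site"] = ("subject", "study_site")
```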
Furthermore – though this is only a recent development in the DataVault community, not necessarily supported by Dan Linstedt and DV2.0 – there is no necessity for link satellites that contain descriptive attributes.
In reality there is no conclusion. It’s partially a matter of preference. If you want to stick to pure DV2.0 recommendations, go for the denormalization technique as link-to-link is still not recommended.
But if you truly understand the modeling part of DataVault, go for the alternative2. It’s more flexible and easier to extend. And the DV2.01 book doesn’t forbid you to model an event or transaction as a hub either, so you don’t have to worry about that.
“Building a scalable data warehouse with DataVault 2.0”, http://a.co/aynqjSK ↩︎
“Modeling the agile data warehouse with DataVault”, http://a.co/a0hyHWh ↩︎
In this series of two articles I’m going to discuss different approaches to modeling business concepts using DataVault. It is based on past discussions at one of my clients in the pharmaceutical industry.
The backbone of a DataVault
The backbone of a DataVault consists of hubs and links. Satellites are part of the core concepts but of less significance for this series of articles.
Let me start by giving the definitions of hubs and links, as stated in literature about this subject.
“Hubs are defined using a unique list of business keys and provide a soft-integration point of raw data that is not altered from the source system, but is supposed to have the same semantic meaning.1”
“The Hub represents the key of a Core Business Concept and is established the first time a new instance of that concept’s Business Key is introduced to the data warehouse.2”
“The link entity type is responsible for modeling transactions, associations, hierarchies, and redefinitions of business terms. The next sections of this chapter define Data Vault links more formally. A link connects business keys; therefore links are modeled between hubs. Links capture and record the past, present, and future relationships between data elements at the lowest possible granularity.1”
“A Link represents an association between core concepts and is established the first time this new unique association is presented to the EDW. Just as a Hub is based on a core business concept, the Link is based on a natural business relationship between business concepts.2”
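The definitions quoted above boil down to simple loading semantics: a hub records each business key only the first time it is seen, and a link records each unique combination of hub keys once. A minimal Python sketch, with all names and values invented:

```python
# Minimal illustration of hub and link loading semantics per the
# definitions above. Real implementations add hash keys, load dates
# per batch, etc.; this only shows the "insert on first sight" rule.
LOAD_DATE = "2018-01-01"

def load_hub(hub, business_keys, record_source):
    """Insert each business key the first time it appears."""
    for bk in business_keys:
        hub.setdefault(bk, {"load_date": LOAD_DATE, "record_source": record_source})
    return hub

def load_link(link, associations):
    """Insert each unique association between hub keys once."""
    for assoc in associations:
        link.setdefault(assoc, {"load_date": LOAD_DATE})
    return link

hub_facility = load_hub({}, ["HOSP-1", "LAB-7", "HOSP-1"], "src")  # duplicate ignored
hub_professional = load_hub({}, ["DR-42"], "src")
link_master_site = load_link({}, [("HOSP-1", "DR-42")])
```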
Now that the core constructs are defined, let’s define the business concepts that form the basis of this article and the data model.
“A place where healthcare is being practiced. This can be a hospital, a department of a hospital, a laboratory or another place.”
“A person that has followed some form of medical studies and practices healthcare.”
“A Master Site is the assignment of a Healthcare Professional to a Healthcare Facility.”
“A Study is a formally followed research process in the development of medicine.”
“A Study Site is the assignment of a Master Site to a Study.”
A first attempt to model the business concepts
The following data model uses the “colors” of the DataVault as introduced in “Modeling the agile data warehouse with DataVault”2.
Hubs are blue
Links are green
Satellites are yellow
Based on the business definitions given above, there is probably no doubt that “healthcare facility”, “healthcare professional” and “study” should be represented as hubs.
As you can see, both “master site” and “study site” are modeled as links. The reason is that the definitions of these concepts describe a kind of association, and links are used to represent associations.
But this also poses an immediate problem. We now have a link-to-link relation in the model. This is not recommended practice: “This dependency does not scale nor perform well in a high volume, high velocity (big data) situation. The problem with link-to-link entities is that a change to the parent link requires changes to all dependent child links.”1
A second attempt to model the business concepts
One way to get rid of the link-to-link relation is by using (a kind of) denormalization1.
If you apply that principle, you’ll get this:
Even though this is a correct approach, I have two problems with it:
It starts to look like a dimensional model rather than a DataVault model, which is more fractal-like. This is of course very subjective, but it just doesn’t feel right to me;
The extensibility of the model is more difficult than with other approaches.
A third attempt to model the business concepts
Another approach is to take a closer look at the following statement: “This understanding is crucial to data vault modeling. A Link – by itself – cannot represent a business concept. So a Link – by itself – cannot represent an Event, a Transaction, or a Sale. Each of these event-oriented business concepts must be entire data vault constellations – including a Hub, Satellite(s) and Link(s).”2
Now think about that for a minute…
Both “master site” and “study site” are assignments, which is a kind of event. But both are business concepts too. In fact, these business concepts each have their own (composed) business keys. So according to the statement, these should be modeled as (keyed instance) hubs, not as links.
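Since “master site” and “study site” carry their own composed business keys, they can be keyed like any other hub. A DV2.0-style hash key over such a composed key might look like the sketch below; the delimiter, the normalization, and the key parts are all illustrative assumptions:

```python
import hashlib

def hash_key(*business_key_parts):
    """Hash a composed business key into a fixed-length hub key.
    Upper-casing, trimming and the '||' delimiter are illustrative
    choices, not a prescribed standard."""
    concatenated = "||".join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.md5(concatenated.encode("utf-8")).hexdigest()

# "Master site" hub: facility + professional as one composed key
hk_master_site = hash_key("HOSP-1", "DR-42")

# "Study site" hub: study + master site, keyed like any other hub
hk_study_site = hash_key("STUDY-9", hk_master_site)
```

The point is that a composed key behaves exactly like a single key once hashed, which is what makes the keyed-instance-hub approach uniform.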
Let’s try again:
In part 2 I will elaborate on why this third attempt is the better option.
“Building a scalable data warehouse with DataVault 2.0”, http://a.co/aynqjSK ↩︎
“Modeling the agile data warehouse with DataVault”, http://a.co/a0hyHWh ↩︎
This post should be taken a bit lightly but is nevertheless true.
Recently I bought an e-bike. You know, a bicycle with an electric motor that supports you. This e-bike, though a relatively simple (i.e. cheap) one, has some nice features:
– built in GPS tracking
– a SIM card in the frame somewhere to send your e-bike data over the air to a server owned by the manufacturer (I guess)
– other sensors that detect movement of some kind
So it comes with an app. Of course. Everything needs to be smart and connected to the internet nowadays.
The app shows me data about:
– which route I rode, using the GPS data and a map
– how many calories I burned
– how much CO2 I did not throw into the air as I took the bike and not the car
– average speed
For the above data I can see views for today, yesterday, the last few days, last month, etc.
The sensors also emit signals that the app will receive and show me as notifications on my phone or watch, such as:
– whether movement was detected
– whether the bike has left a predetermined geofence
– whether the bike fell over
– whether the bike is moving faster than 50 kilometres an hour
Wow, great! Isn’t it?
Well, not all of it, because:
– the sensors can’t tell that it’s me moving the bike
– the sensors can’t tell that it was me who fitted the bike to my car (hence the faster-than-50 notification) and not some stranger in a white van who isn’t the bike repairman either
– the sensors don’t know the difference between falling over and a speed bump that you ride over at about 20 km/h
As the bike is parked in a building with lots of concrete and other nearby buildings, I also sometimes get a notification that the bike has left its geofence. Yesterday it did 13 km on its own. But I didn’t get a movement notification. In fact, the bike never moved at all. Nobody borrowed it. I was at work.
So that data is a little inaccurate…
Which brings me to the reporting part of that data in the app. It tells me I saved an amount of CO2. But compared to what? To a car, it says on the help page. But which car? A Tesla doesn’t produce CO2 at all (apart from when you have a flat tire). Not very useful.
So my key points of this post are:
– is your sensor data trustworthy?
– is your reporting telling the right story?
– is your data lineage clear?
– do you know how your data relates to each other?
If the answer is no to most points, I wish you all the best interpreting your (big) data. You know where to find me to help out…
Yesterday I attended the initial awareness session of the “Full Scale” Data Architects. We had an open discussion on what it is, could be or should be.
One of the questions raised during that session and afterwards on LinkedIn was how to explain what we – data architects – do.
Although data architecture and architecture (in construction) have many differences, I still see an analogy.
When asked what I do, I also make that analogy. It doesn’t cover it completely of course but it is often enough for the first introduction.
I design “something”, make the blueprint and lay the foundation.
And that while taking into account all wishes, (legal) requirements, environmental factors, durability, change and – although in data architecture we try as much as possible to be technology-independent – available “building material”. It’s basically finding the right balance as Ronald Damhof put it.
In practice the architect may also be the contractor that takes the lead in the construction. This can be an incentive for some but not for others1.
But I always keep an eye – or delegate that – on whether the construction goes according to plan. When necessary I even change the plan (due to external changes or the available building materials).
I should therefore have an overview and be part of a whole team that I can trust.
And I shouldn’t make it more complex than strictly necessary, certainly not when I try to explain it to someone else.
Of course it can definitely help if you have ever constructed things yourself – and I have – but mainly from the point of view of the problems you can run into. Otherwise you risk starting with a technical bias (yes, it does happen to me occasionally).
Another question was how to make data architecture attractive to teenagers so that they will study it, if there were any real studies about it. ↩
Let me start this post by saying that I admit that standards are important. This is especially true for people that are new to the subject, didn’t get any proper education or have almost no experience.
But an evolution of those standards is just as important. Proof of this is given by Dan himself; otherwise we wouldn’t have DV 2.0.
I agree that any proposed change to the standard needs to be proven. Sometimes this can be done mathematically, but in other cases only time will tell.
Upfront statements that a change won’t hold over time, without the necessary arguments to prove it, are just as worthless as accepting a change without proof that it will hold.
Keep in mind that some of the proposed changes may not be valid in all circumstances but only within a certain context. Many of these changes are even proposed by people that do have the proper education, certification and experience.
I can’t help getting the feeling that if a proposed change wasn’t thought of by the inventor himself, it’s bad by default. That’s a shame, because it prevents wide adoption of a good methodology for building enterprise data warehouses.
So let’s evolve those standards together and know what the implications of each are.
I agree with Dan regarding his post on tools that generate DataVault models. Most of the tools that I have evaluated don’t go any further than generating a source DataVault as both he and Ronald Damhof have stated in posts long ago.
Without the necessary metadata and mathematical and technical stuff that Dan refers to, automatic model generation is not adding much value. And I wonder if we will ever get there. Even today – with the knowledge that we need lots of metadata – source systems get built without the badly needed metadata that would support automatic model generation from them. Most database systems don’t support storing all that metadata either as far as I know.
However I do see some added value with these (model generation) tools when we – the data modelers – create the needed metadata or semantic layer between the source system and the DataVault. Some of the tools even implicitly rely on that already. This semantic layer is the actual business view, the translation of the source model. It is data modeling by hand, just not necessarily with the usual data modeling tools. The model generation tools – when properly created – could support the data model governance lifecycle.
It’s been a while since I briefly looked into graph databases for the purpose of using them for metadata. As I have a metadata “problem” at hand, I decided to look into it again and just as last time, opted for Neo4j.
The problem I have is relatively simple. The BI solution at a client has many documents describing business data domains, data models (well, a sort of data models), mapping documents, mapping rules etc. These documents are stored on a file server in several directories, often arranged by change request. There is no good procedure in place yet – but in the works – to make sure there is a central place to store the documentation that actually describes the production state.
To make things worse, documents exist in different versions that were sometimes incorrectly “forked” into a newer version, resulting in changes from other versions getting lost. On top of that, the intent is to move all documentation over to Microsoft SharePoint (may the force be with us…).
In addition we have DDL in files and ETL in Informatica PowerCenter. Because naming conventions were sometimes forgotten, it is pretty difficult to tie all the documentation together. There are mapping rules/functions described in Word documents that are referenced in mapping specs in Excel sheets, but nobody knows which ones, etc. You get the drift.
So as we work our way through making a decent inventory of this metadata (at least that is how I see these documents, DDL and ETL code), I thought it would be a good time to make the relationships between this metadata explicit (as the way SharePoint is going to be used won’t help us here).
And that’s exactly where I see a good use of a graph database, simply because these are very strong in describing the relationships between things.
My first rough thoughts, without really thinking it all through, are a graph model like the following:
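Before committing to Neo4j and Cypher, the rough idea can be prototyped in plain Python: nodes for documents, rules, tables and ETL mappings, plus typed relationships between them. The node labels and relationship types below are my own guesses, not a finished model:

```python
# Rough metadata graph: (id, label, properties) nodes and
# (source, relationship_type, target) edges. Everything named here
# is illustrative; the real inventory would drive the actual model.
nodes = [
    ("doc1", "Document", {"name": "mapping_spec.xlsx"}),
    ("rule1", "MappingRule", {"name": "trim_codes"}),
    ("tbl1", "Table", {"name": "CUSTOMER"}),
    ("etl1", "EtlMapping", {"name": "m_load_customer"}),
]
relationships = [
    ("doc1", "DESCRIBES", "etl1"),
    ("etl1", "LOADS", "tbl1"),
    ("doc1", "REFERENCES", "rule1"),
]

def related(node_id, rel_type):
    """All targets reachable from node_id via rel_type."""
    return [t for s, r, t in relationships if s == node_id and r == rel_type]
```

Queries like “which documents describe this ETL mapping?” then become simple traversals, which is exactly the kind of question a graph database answers well.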
In my daily job as a BI consultant I need to plan my work and I’m using OmniFocus for that. In fact it is my trusted system for all my projects, not only work related ones.
I allocate blocks of time in my favorite calendar app Fantastical 2 to work on the tasks (or next actions) that I’ve stored in OmniFocus.
Sven Fechner wrote a great post how you can schedule OmniFocus tasks in Fantastical by dragging and dropping when working on your Mac.
I found out that this works perfectly. Strangely, the results are quite different when you use the macOS sharing extensions. When you select a task in OmniFocus and share it using the “Add to Fantastical” extension, the result is bad, to say the least.
As I don’t always have my Mac at hand, I was wondering if I could achieve the same on my iPhone or iPad. This turned out to be quite an adventure.
In OmniFocus for iOS you can share a task as well but this only seems to work well when you mail that task, as you can see below.
It is possible that other apps are listed when you share a task from OmniFocus, such as Trello for example. However, when you share to Trello, you’ll notice that the name of the card remains empty and is not filled in with the OmniFocus task name. In the animation below I filled in a name myself and then looked in Trello what had happened. As you see, the OmniFocus task is included as an attachment in Trello with the obscure name “FocusAction.ofaction”.
After opening that file in a plain text editor, it turns out to be an Apple property list (or plist) file.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<string>Write blog post about scheduling OmniFocus Tasks in Fantastical 2 … for iOS!</string>
```
If your OmniFocus task contains a defer (<key>start</key>) and/or due date, additional keys and strings will be present.
While you may be able to parse this using Drafts, I took a different approach and used Workflow instead. In my workflow I use the name of the OmniFocus task as the name of the event in Fantastical. If a defer (or start) date is present, the workflow will ask you whether to use that date as the event date in Fantastical. Unfortunately Workflow doesn’t provide a means (or at least I couldn’t find one) to show you a list of available calendars, so you will need to type it in manually in a prompt.
Finally, the event gets created in Fantastical and you edit it further there. The notes field of the event contains a link back to your original OmniFocus task (which will open OmniFocus with that task). This is obtained through the <key>uuid</key>.
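On a Mac, the same picking-apart of such a plist can be sketched with Python’s standard plistlib module. The sample payload below, including its key names and the omnifocus:/// URL scheme, is my own assumption of what a shared action file might contain, not the actual OmniFocus format:

```python
import plistlib

# Hypothetical stand-in for a shared "FocusAction.ofaction" file; the
# real structure and key names may differ, so treat this as a sketch.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>name</key>
    <string>Write blog post</string>
    <key>start</key>
    <date>2017-06-01T09:00:00Z</date>
    <key>uuid</key>
    <string>abc123</string>
</dict>
</plist>
"""

action = plistlib.loads(sample)
event_title = action["name"]          # becomes the event name
defer_date = action.get("start")      # plistlib parses <date> into a datetime
backlink = "omnifocus:///task/" + action["uuid"]  # URL scheme is an assumption
```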