My thoughts on the Data Vault Certification Training

Last Thursday and Friday, I joined the Data Vault certification training, organised by Genesee Academy and Centennium BI expertisehuis.

In this post, I want to share my reflections on this training and the certification exam that is part of it. There are good points and some less good ones. I'll start with a short description of how the training is set up.

Setup of the training

The training starts with a set of online videos that are made available two weeks before the actual two-day classroom training. They give a good introduction and even some in-depth information about Data Vault modeling and methodology.

The two-day classroom training uses a slightly adapted set of the slides presented in the online videos. You receive these as handouts in a nice binder.

During the first day, a lot of the slides are presented and additional context is given. Two small modeling cases are worked out and discussed. When time permits, a third, slightly bigger case is started.

On the second day, the third case is worked out and discussed in detail. Any questions raised during the first day are also addressed. The last slides are presented and a recap is given before the 2.5-hour exam starts.

The exam consists of true/false questions, open questions where you have to write down the answer, and some modeling. You need to answer 90% of the questions correctly to be granted the title of “Certified Data Vault Data Modeler”.

How I rate it

In the next sections I'll give my thoughts on the training. Of course this is highly subjective. I also need to add that I had already read a lot about Data Vault beforehand and had even started an implementation.

The Bad

Actually, it isn't that bad; it's just a manner of speaking. Here it comes:

  • the slides in the classroom training don't add any value beyond the ones presented in the online videos
  • there is more theoretical information in the online videos than in the classroom training

The Ugly

  • the fact that you need a score of 90% to get certified is not mentioned upfront
  • some of the true/false questions in the exam are ambiguous and require you to write down your assumptions to clarify your answer
  • some slides are already a little outdated given ongoing insights

The Good

As in any good presentation, it is the material that isn't on the slides that really adds value. This training is no different: the context around the material on the slides is provided by the trainer and explained in detail.

  • in-depth discussion on specific topics based on feedback and questions from the participants
  • the cases provide plenty of room for discussion on how to model in certain circumstances and why to do it like that

Conclusion

The classroom part provides an interaction that can't be matched by slides or online videos, even though some information is repeated. But repetition is part of learning as well. You do need to watch the online videos upfront if you are really new to the subject.

I still think it was worth the money.

Summary mindmap of book Rework by 37signals

I’m currently reading the book Rework, written by the guys from 37signals.com, makers of Basecamp and other very nice collaboration tools.

This book is really a must-read for everyone who runs or is planning to run their own business.

Included is an interactive mindmap in PDF format (with embedded Flash, so it needs to be viewed with Adobe Reader) that summarizes the book. I don't know the original author of this mindmap, but all credit should go to him/her.

Use the download link to get it, because Posterous cannot convert this type of PDF into something viewable.

Download: Rework_by_37_Signals.pdf


Limiting your number of inboxes using ifttt.com and OmniFocus

When applying Getting Things Done, you have at least one inbox to capture your stuff. Most of the time, you probably have more than one, especially digital ones. The more inboxes you have to manage, the less productive you are likely to be.

This post will show some small examples of how ifttt.com can come to the rescue in limiting the number of digital inboxes you have.

About ifttt.com

ifttt.com stands for “if this then that”. It helps you create tasks that are triggered by a particular event. This event can be a mail you receive, a new blog post you wrote, a tweet you favorited, a Google Reader item you starred, and many more. See this great article on Lifehacker.com for more information.

Your OmniFocus inbox setup

There are two ways you can setup your automated OmniFocus inbox:

  1. Using a Mac that acts as a kind of server, with a mail rule in Mail.app and OmniFocus running
  2. Using the send-to-omnifocus@omnigroup.com mail address (which requires an additional manual step)

The first method is explained in the OmniFocus help on your Mac.

With the second method, a mail is returned containing a special URL using the omnifocus:// scheme. Clicking it opens OmniFocus (even on your iDevices) and adds the task to your inbox once you confirm. I am using this setup for the moment, but will switch to the first one soon.

Typical use-case scenarios

  1. Add tweets you favorite to your OmniFocus inbox to process them later
  2. Add Google reader items you star to your OmniFocus inbox to process them later

There are many more cases you can think of; just take a look at the possibilities on ifttt.com.

Recipe for scenario 1: in the “To address” field, just fill in send-to-omnifocus@omnigroup.com

The rest is up to you…

How to load a #datavault hub from multiple sources

In this blog post I will describe a few scenarios for loading a Data Vault hub from multiple sources, each with their pros and cons. Many thanks to Martijn Evers (@DM_Unseen) and @RonaldKunenborg for giving their input.

The main reason for this blog post is to share knowledge about Data Vault, so that it becomes accessible to a broader audience.

Disclaimer: I am not a Certified Data Vault Modeler, but the aforementioned persons are.

Some considerations

One of the “principles” of the Data Vault methodology is that most of the loading can be done in parallel. First the hubs are loaded in parallel, then the links and finally the satellites (although satellites belonging to hubs can of course be loaded as soon as the hubs are loaded; they don't need to wait for the links).

It's exactly at this point that I initially was somewhat confused about loading a hub that has multiple sources. If you load that hub sequentially, as explained in scenario 2 below, aren't you defying this principle of parallel loading?

On the other hand, the idea is to load your data as soon as it becomes available. This poses a problem when using a union construct as explained in scenario 1 below: if one of the sources is ready a lot later, you have to sit and wait before you can load. Precious time is lost.

Scenario 1: using a union between sources

In this scenario the business keys from the different sources will be unioned together, while keeping information about the record source of each business key.

The following pseudo-SQL provides the basis for this scenario.

select distinct a.col1 as business_key
     , 'table1.col1'   as record_source
     , load_cycle_ts() as load_dts
from table1 a
union
select distinct b.col2 as business_key
     , 'table2.col2'   as record_source
     , load_cycle_ts() as load_dts
from table2 b

Note that the above is not entirely correct, as it can result in duplicate business keys due to the inclusion of the record_source. Most ETL tools can handle this, however; a pure-SQL way of doing it is sketched below.
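For illustration, here is a minimal SQL sketch of that deduplication, assuming a dialect with window functions; the table and column names and load_cycle_ts() are the same pseudo elements as above, and source_priority is just a made-up way to designate the master source:

select business_key
     , record_source
     , load_cycle_ts() as load_dts
from (
    select business_key
         , record_source
         , row_number() over (partition by business_key
                              order by source_priority) as rn
    from (
        select distinct a.col1 as business_key
             , 'table1.col1' as record_source
             , 1 as source_priority  -- master source wins on duplicate keys
        from table1 a
        union all
        select distinct b.col2 as business_key
             , 'table2.col2' as record_source
             , 2 as source_priority
        from table2 b
    ) src
) ranked
where rn = 1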

A typical setup for the above using Pentaho Data Integration would look like this:

Pros

  • All the sources of a particular hub are in one transformation, which can give a better overview
  • Principle of parallel loading is maintained

Cons

  • Not easy to generate via automation
  • Difficult to maintain if additional sources are needed
  • Additional steps are needed to prevent duplicate keys
  • Additional constructs are needed to designate the master source
  • Synchronization between sources is required: you have to wait until all sources are ready before you can start loading

Scenario 2: sequentially for each source

In this scenario the business keys from the different sources will be loaded sequentially, starting with the master source.

Step 1: master source first

select distinct a.col1 as business_key
     , 'table1.col1'   as record_source
     , load_cycle_ts() as load_dts
from table1 a

Step 2: next source

select distinct b.col2 as business_key
     , 'table2.col2'   as record_source
     , load_cycle_ts() as load_dts
from table2 b
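Conceptually, each of these steps boils down to inserting only those business keys that are not yet present in the hub. A minimal sketch for step 2, with a hypothetical hub table hub_example and a hypothetical next_sequence() for the surrogate key (load_cycle_ts() remains pseudo, as before):

insert into hub_example (hub_example_sqn, business_key, record_source, load_dts)
select next_sequence()
     , src.business_key
     , src.record_source
     , load_cycle_ts()
from (
    select distinct b.col2 as business_key
         , 'table2.col2' as record_source
    from table2 b
) src
where not exists (
    select 1
    from hub_example h
    where h.business_key = src.business_key  -- key already loaded by an earlier step
)

Because the master source is loaded first in step 1, a business key that occurs in both sources is recorded with the master's record source.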

A typical setup for each of these steps in Pentaho Data Integration would look like this:

With this kind of setup, you can also use “micro-batches” to load the data. This idea is explained by Martijn Evers (@DM_Unseen) in his reply to my original question about this subject on LinkedIn (you need to be a member of the Data Vault Discussions group to view it).

Pros

  • Can easily be generated via automation
  • Adding new sources is easy
  • None of the Cons of scenario 1

Cons

  • Slightly defies the principle of parallel loading of hubs, but this is really of minor importance

Conclusion

It should be clear that the second scenario, loading the hub sequentially from its sources, is normally the best one to choose. However, to quote @RonaldDamhof, “it depends on the context”. You can always come across a situation where another way is better…

How to link Evernote notes to OmniFocus

Sven Fechner (@simplicitybliss) is one of my major resources when it comes to using OmniFocus. He wrote about linking OmniFocus and Evernote together in at least two blog posts:

  1. Get Evernote and OmniFocus talking
  2. Linking Evernote notes to OmniFocus tasks

In the second post, there is an important comment made by Bryan and Diego: you can now link even more easily, using Evernote's ability to copy a note link (or more than one note link).

You get this note link by selecting one or more Evernote notes, right-clicking to open the context menu, and selecting “Copy Note Link” (or “Copy Note Links” if you have selected more than one note):

Copy Note Links in context menu

I prefer to assign a hot key to this in System Preferences. I use the same key for both the menu item “Copy Note Link” (when one note is selected) and the menu item “Copy Note Links” (when multiple notes are selected).

Shortcut keys

After copying the links with that shortcut (⌘K in my case), you can paste them into the note of your OmniFocus task. Simply clicking a link will reveal the Evernote note.

Waiting for DataVault implementation classes

I'm anxious to learn more about the implementation part of a Data Vault, something that is not covered in detail in Dan Linstedt's book “Supercharge Your Data Warehouse”.

For a while now, Dan has been announcing that he is working on classes that cover the implementation. You can register to be notified about them here.


Unfortunately, I keep receiving emails from that list with all kinds of info about Data Vault (but not about the implementation classes), telling me that I can be notified when the implementation classes are ready. But being notified of that is exactly why I registered in the first place. If I already did, why is that still in the emails I keep receiving?


Just now, I saw another tweet from him, saying that he is working on multiple things and that the implementation classes are almost ready for production and release.

Now I find it interesting that he's working on multiple things, but it would be nice if something actually got finished. We all know what “almost ready” means in IT: either something is ready, or it's not. “Almost” doesn't exist…


Dan, get those implementation classes out now. People are waiting for them. You can always fine-tune them later…


Data Vault and other approaches, my reflection on Frank Habers' article

Intro

I’m writing this blog post as an additional comment and reflection on the whole discussion that broke loose as a result of the article that Frank Habers wrote in XR magazine.

Before I continue, I want to make the following very clear:

  • I am an independent BI consultant with almost 15 years of experience
  • Most implementations I did or worked on are using Kimball’s approach
  • I am NOT a Certified Data Vault Modeler, but that does not mean that I haven’t read a lot of material that Dan Linstedt and others wrote about Data Vault (such as “Supercharge Your Data Warehouse”)
  • I have little practical experience in using Data Vault
  • The largest BI implementation (in terms of volume of data) I encountered was for a mobile telephone company
  • I have never (unfortunately) worked with an MPP database
  • I am not trying to sell anyone anything based on this post

My original comment on Frank's article addressed only one specific point he made about the difference in performance between a Dimensional and a Data Vault model with respect to joins. I mentioned that it lacked the clarification needed to make his point. He admitted this in his own comment and has since given some more clarification, which I appreciate.

However, the whole discussion in the comments on his article could easily turn into a “war”, which is not very helpful, as stated by Ronald Damhof and Rick van der Lans in their comments on Twitter.

I also find the article that Dan Linstedt wrote on his own blog to counter Frank's article a bit of an overheated response, even though Dan makes it clear that he has nothing against Frank personally. For some part I can understand that. There is nothing wrong with correcting statements that are false or not entirely true. And of course Data Vault is still Dan's “baby”, and we all know how we react when someone does something wrong to our children. But I do think that NOT being a Certified Data Vault Modeler doesn't mean you can't discuss it or don't know anything about it. There isn't such a thing as a Certified Dimensional Modeler either…

But we must make sure we don't actually start a war. We have been there before with Inmon's and Kimball's approaches, and it doesn't lead anywhere in the end. Having a sound and constructive discussion in which we elaborate on the pros and cons of certain approaches is a good thing, however. As Ronald Damhof mentioned in his comments, it all depends on the context (of the client).

And whether Frank's article has a commercial background or not, the approach he discusses is a good one. But again, it depends on the context.

Benefits of Data Vault

Based on my limited experience with Data Vault, there are some benefits in its modeling aspect that are less obvious in Dimensional Modeling. The whole idea of hubs and links and the fact that you have many-to-many relationships helps in at least two ways:

  1. Understanding the business and creating a sound model of the business processes
  2. Getting possibly crappy data from the source into your data warehouse and showing the business that they may have an issue

Note that the above does not mean you can’t accomplish this with Dimensional Modeling. Let me elaborate.

Understanding the business

When discussing business processes and the data used or produced by these processes, I have come to the conclusion that a Dimensional Model is fairly easy for the business to understand. However, creating the bus architecture with many fact tables (at the most detailed grain possible) and conformed dimensions can also easily result in losing the complete overview, even when you only present the entities without the attributes. Secondly, I find it more difficult to see the possible relationships that exist between fact tables.

Does a Data Vault model solve this? Yes and no. If you present a complete Data Vault model with all satellites and possible reference tables, you’re lost as well (both IT and business). But if you limit it to the hubs and links only, it becomes much clearer.

I can hear you say already: “this doesn't help”. You are partially right. In many cases there is not much difference between a Data Vault and a Dimensional Model. Let's look at the following simple example:

  • Customer
  • Shop
  • Sales

Whereas in a Dimensional Model you would have two dimensions and one fact, in a Data Vault model you would have two hubs, two satellites linked to those hubs, one link, and one satellite linked to that link table. Leave out the satellites and you get (basically) the same as the Dimensional Model: two hubs and one link, representing two dimensions and one fact.
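To make that concrete, here is a minimal DDL sketch of the Data Vault side of this example. All names and types are hypothetical, and the satellites are only indicated in comments:

create table hub_customer (
    customer_sqn  integer      not null primary key  -- surrogate key
  , customer_bk   varchar(50)  not null               -- business key
  , record_source varchar(100) not null
  , load_dts      timestamp    not null
);

create table hub_shop (
    shop_sqn      integer      not null primary key
  , shop_bk       varchar(50)  not null
  , record_source varchar(100) not null
  , load_dts      timestamp    not null
);

create table link_sales (
    sales_sqn     integer      not null primary key
  , customer_sqn  integer      not null references hub_customer (customer_sqn)
  , shop_sqn      integer      not null references hub_shop (shop_sqn)
  , record_source varchar(100) not null
  , load_dts      timestamp    not null
);

-- sat_customer and sat_shop hang off the hubs, sat_sales hangs off link_sales;
-- they hold the descriptive attributes and are left out here for brevity.

Strip away the satellites and the Data Vault housekeeping columns and the shape is indeed the same as two dimensions and one fact.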

But if you need to introduce a many-to-many relationship between dimensions, there are basically two ways of solving it:

  1. You use a factless fact table to capture that
  2. You alter the grain of an existing fact table by adding the additional dimension

With the second approach you will give yourself a headache when there is already data present, but it can be done.

The first approach, using the factless fact, is much easier. But wait, isn't that the same as creating another link table between two hubs in a Data Vault model? Sure it is! But to me it feels more natural in a Data Vault model to use a link between hubs than to use a factless fact in a Dimensional Model. The reason for this is purely psychological, because of the terminology: a factless fact. You're registering a fact without it being a fact. Weird terminology if you ask me. Maybe it should have been called an attribute-less fact.
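Put side by side (hypothetical names again), the two constructs are structurally almost identical:

-- Dimensional: factless fact capturing a many-to-many between dimensions
create table fact_customer_shop (
    customer_key integer not null  -- FK to dim_customer
  , shop_key     integer not null  -- FK to dim_shop
);

-- Data Vault: a link between the two hubs
create table link_customer_shop (
    customer_shop_sqn integer      not null primary key
  , customer_sqn      integer      not null  -- FK to hub_customer
  , shop_sqn          integer      not null  -- FK to hub_shop
  , record_source     varchar(100) not null
  , load_dts          timestamp    not null
);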

So in many cases there may not be much of a difference after all between a Dimensional Model and a Data Vault model, but I find a Data Vault model easier in terms of evolution. “Divide and conquer” is much easier to apply to it than to a Dimensional Model.

Another issue that I sometimes encounter with a Dimensional Model is the possibly changing cardinality of a relationship between dimensions. In a true Dimensional Model, snowflaking should be avoided (there are always exceptions), meaning you flatten or denormalize your tables. Great when there is a one-to-many hierarchy present, but a nightmare when this changes into a many-to-many relationship (in which case having snowflaked would have given you an easier way to recover).

Getting crappy data from your source in your data warehouse

Let's be honest, we have all encountered it. If not, let me know. There is a lot of crappy data in source systems: data that does not respect the cardinality rules given by the business, and all kinds of other data (quality) issues.

Having a Data Vault model with its many-to-many relationships provides a guarantee that you can at least load that crappy data into your data warehouse (maybe with a few exceptions). Having it there will of course still give you a headache when you need to process and present it to the business in a layer more suitable for presentation, either virtualized or with a Dimensional Model on top of your Data Vault.

But it does become much easier to confront the business with the fact that they have crappy data in their source!

I find this easier with a Data Vault model than with an HSA that is modeled after the source model. In fact, how often haven't you been in the situation that the source model is pretty much a black box and you only receive extracts from it? In such a case, the HSA is probably modeled after the extract, which may not reflect the actual source model.

When using a Dimensional Model, this crappy data is often hidden because it is cleaned by the (complex) ETL along the way from source to presentation. You lose track of it, and the business is possibly not even aware of it.

But Data Vault does not solve this; it only helps you make it more visible. In the end, there is still work to be done to clean it, either in the source itself or along the way to the presentation layer (whether that is a Dimensional Model, a cube or something else).

Cons of Data Vault

This is probably the part that may get readers and experts “excited”, to say the least 😉 Due to my limited experience, these cons could be false in some cases. Please correct me if I am wrong; I want to learn from the experts in the field.

One of the cons is that Data Vault indeed results in more tables and possibly more joins, which can make it more complex to maintain from the DBA's point of view.

Secondly, I do have some doubts about performance as well, but especially (and only) in the following situation: when you create a virtualized Dimensional Model suitable for presentation on top of the Data Vault model using views, and you do this on a plain non-MPP database that doesn't use column stores. If even a physically implemented Dimensional Model already gives performance issues, then using views with more joins on top of a Data Vault model on the same configuration won't be any quicker.
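To make that concern concrete: such a virtualized dimension is basically a view like the sketch below, reusing the hypothetical hub_customer from the earlier example and assuming a sat_customer satellite that is end-dated with a load_end_dts column. Every join in the view is executed at query time, on top of whatever joins the reporting queries themselves add.

create view dim_customer as
select h.customer_sqn as customer_key
     , h.customer_bk  as customer_number
     , s.name
     , s.address
from hub_customer h
join sat_customer s
  on  s.customer_sqn = h.customer_sqn
  and s.load_end_dts is null;  -- current satellite row only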

Thirdly… well, this is not related to the Data Vault model and methodology as such, but more to the advocacy of it. With any new or merely evolutionary approach, there is a hurdle to take: we are afraid of change. Sometimes Data Vault is presented as the holy grail. That's not true, period. It doesn't even depend on the context; the holy grail has never been found. Data Vault can help you solve particular issues that we encounter now and maybe in the next ten years. But by then, we may have evolved so much in how we handle data that even Data Vault doesn't provide a solution for the issues we encounter.

I also have issues with the continuous hammering on getting certified in Data Vault. What is really the benefit of it? Of course, I can show it off on my CV and increase my hourly rate a bit so that I can earn it back. I can also see the benefit for Dan and Hans: they make money out of it. Those are valid reasons of course, but do I really gain much more knowledge by following the training and certification class than by getting field experience based on the articles, blog posts, books and (free) advice from certified experts (yes, I did get free advice), and by paying close attention to reviews done by a certified expert?

Conclusion

So what conclusion is there? Did I make a strong point somewhere? No, I just wanted to reflect on the discussion that Frank’s article started.

Data Vault can be useful, for sure, I can see that. But I have doubts as well. The most important thing is that we help our clients and choose the best approach given the context of those clients. Make them evolve.

I hope this post invites you to give your comments on my reflection. Please help me learn and evolve. Correct me if I am wrong etc.

And thanks for coming all the way down here to this last line, it means the post wasn’t boring 😉


Kanban With Evernote: A Household Example

In my previous article Setting Up Kanban With Evernote I wrote about a simple setup for Kanban using Evernote.

In that article I didn't give all the details on how you can eventually use such a setup and what it really looks like. In this article I will go a little further, give an example with screenshots, and share that notebook for public viewing.

For the examples I will use the Evernote web interface, but you can also do this with the desktop or mobile clients.

Assumptions

The workflow consists of the following states (represented by tags):

  • todo
  • doing
  • done

This household consists of three people (represented by tags):

  • John
  • Mary
  • Junior

To make it even a little bit more interesting, I will introduce some “areas of responsibility”, also represented by tags:

  • Cleaning
  • Payments
  • Shopping

Why not make it even a bit more interesting and add some contexts borrowed from David Allen's GTD? A context can be a location or a tool you need to accomplish the task (in fact a context can be much more than just that, but I'll keep it simple for this example):

  • @Hardware Store
  • @Supermarket
  • @Home
  • @Computer

Your setup will look like this:

Setup tags

Note that I put the tags in groups. This is not necessary; I did it just for illustration purposes to make things clearer.

Workflow

As mentioned before, the workflow is simple in this case and a task will go through the following states, in the order specified:

  1. todo
  2. doing
  3. done

Creating tasks

You enter a new task by simply creating a new note and giving it a title describing the thing that needs to be done. You assign it the todo tag, possibly the tag of the person who needs to do it (if known upfront), and a context tag if you know upfront where you need to do it or what tool you need.

The following example shows “Buy bread”, which is assigned the following tags:

  • todo
  • @Supermarket
  • Shopping

Entering a task

As you can see, you still need to buy the bread, you need to do it at the supermarket, and the area of responsibility is shopping. Anyone can do it; you haven't assigned anyone specific to it.

Now enter some other tasks. I will not give all the details here in the text, but you will be able to see them in the next screenshot:

Tasks in snippet view

However, to have a better overview in the web client, choose the View Options in the notes and show them as a list. This will immediately show the tags assigned to the notes as well, as can be seen below:

Tasks in list view

But you can see that if you have long tag names, not all tags may show; for example with Buy hammer, you don't see the todo tag. I haven't been able to change the width of the columns in the web interface, but there are other alternatives that I will address later.

Doing tasks: changing the tags

When someone in the household is ready to start a task, it merely involves changing the note's tags.

When John picks up the Pay bills task, the todo tag is removed from the note and the doing tag is assigned. When the task is done, the doing tag is removed and replaced with the done tag.

The Buy bread task hadn't been assigned to a specific person upfront, so anyone in the household can do it. If Mary decides to do so, she assigns her own tag, Mary, to it and changes the todo tag into doing.

More advanced views

You can use Evernote’s standard features to have more control over your workflow, by filtering the notes on one or more specific tags.

Let’s assume that John finished paying the bills and that Mary is buying the bread. Filtering on the todo tag will now show only the following tasks:

  1. Buy hammer
  2. Do homework for school
  3. Clean shower in bathroom

Remaining todo

Likewise, when you filter on the doing tag, it will show only the Buy bread task:

Tasks in progress

And when you filter on the done tag, you would only see the Pay bills tasks (not shown here).

Suppose one of the members of the household wants to see which tasks remain todo and are either assigned to him/her or not assigned to anyone specific (i.e. just something that is available). This is a very valid use case. Let's say Mary wants to see this.

The filter for this is easy to set up and will show todo tasks not assigned to John or Junior (i.e. assigned to Mary or to no one):

  • Notebook:“Household Kanban”
  • Tag:todo
  • -Tag:John
  • -Tag:Junior
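In Evernote's search grammar, that filter corresponds to a query along these lines (assuming the notebook is called Household Kanban, as in this example):

notebook:"Household Kanban" tag:todo -tag:John -tag:Junior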

Remaining tasks that Mary can decide to do

As these types of filters will be used often, it is recommended to store them as a Saved Search in Evernote, so that you can easily apply them again without having to write them from scratch.

Saved search

Conclusion

This is just a simple setup, but it gives enough hints for further extension and other applications.

I have shared the notebook for this setup publicly for viewing only. This means that you won’t be able to create, modify or delete notes/tasks. This notebook will remain shared until the end of March 2012.

It is shared via the following public URL: http://www.evernote.com/pub/estrenuo/householdkanban

One more thing

Want to have a “real” Kanban board like view? Try something like the following… ;–)

Using multiple browsers for a Kanban board view

Setting Up Kanban With Evernote

This article describes how you can use Evernote to set up a simple, yet easy to use Kanban “system” to manage your projects, workflows and tasks using (shared) notebooks, tags and notes. For optimal use, at least one premium subscriber Evernote account is needed.

Evernote was not intended to be used for this, so there are of course some drawbacks. The most important one is that you will miss the typical visual representation of a Kanban board, with its vertical lanes that each represent a state in the workflow.

What is Kanban?

Kanban, very simply put, is a way to manage and optimize workflow. It was originally invented by Toyota for their manufacturing, but nowadays it is also applied to software development and other kinds of processes, such as household tasks.

For more details, just google Kanban or use Wikipedia as a starting point to learn more about it. I strongly suggest you do some initial reading on this, so that you can easily understand the rest of the article and see the benefit of a setup using Evernote.

Minimalistic setup in Evernote

The most minimalistic setup is for just one person. This can be a free account, but in that case the normal limitations apply. With a free account you can only attach PDFs and images to a note. With a premium subscriber account you can also attach Word documents and basically any other type of attachment. PDFs are searchable, and even text in images is searchable.

What do you need?

You need the following:

  • an evernote account (free or premium)
  • a single (synchronized) notebook
  • tags to represent stages in the workflow, for example:
    • todo
    • doing
    • done
  • notes representing tasks (these are the Kanban cards)

The setup described will work with any modern browser. You can also use any of the evernote desktop clients (Mac/Windows) or one of the mobile apps (iPhone, Android, BlackBerry, Windows Phone).

Note that a synchronized notebook is not the same as a shared notebook. A synchronized notebook created in one of the desktop clients syncs with your online Evernote account. With the desktop client you can also create local notebooks, however. These are not synced with your online Evernote account and will not be accessible from a browser or one of the mobile apps.

How does it work?

The notebook you create basically represents the Kanban board, but without the same visual representation. It is the placeholder for your notes, which represent the Kanban cards; each note/card represents a task or any other kind of item you want it to represent, as long as it fits within the Kanban way of working.

Once you created the notebook, you can start adding notes that represent your tasks, such as:

  • buy bread
  • bring out the trash
  • clean garage

Each of these notes will be assigned one or more tags. With the example tags given above, a task can only have one of them, because the states of the workflow are mutually exclusive.

Initially, assuming you aren't doing any of those tasks yet, all these notes are tagged with todo. When you decide to take up a task, you change the note: remove the todo tag and assign the doing tag. And when you're done, well, you remove the doing tag and assign the done tag. After a while, you can decide to remove the notes that have the done tag, as you may not want to keep those forever.

Based on your tags, you can easily see in which state a particular note is and when it may be ready to be pulled into the next state of your workflow.

That’s it!

A more advanced setup: other people in the game

Setting up a Kanban approach just for yourself is nice, but could be a bit of overkill. It becomes very useful, however, when more people come into play. In the case of the household tasks given earlier, other people in your household/family might add new tasks or do them. So how would you do that?

One “shared” account

The simplest setup here is to use just one Evernote account that is shared by the other people in your household: they all know the account user name and password. You just create an extra set of tags representing the names of your household/family members, for example:

  • John
  • Mary
  • Junior

When you create a new task note, you assign it both the todo tag and one of the name tags if you already know upfront who is supposed to do it. But you can also leave it “blank”, i.e. not assign a name tag to it, meaning that anyone can do it. In that case, if someone picks it up, he/she removes the todo tag and assigns the doing tag and his/her name tag, for example John.

Sounds easy, doesn’t it?

However, there are situations where you don't want other people to use your account. You may have other notebooks in your account that you don't want other people peeking into, not even when they are your family members. Even if there is nothing confidential, there is always the risk that another member deletes or changes notes just for fun (you don't hear me laughing, however).

But there is an alternative to that, just read on…

One premium subscriber account and multiple other accounts

In this case each person involved needs his/her own Evernote account, but one of them needs to be a premium subscriber. The reason is that only a premium subscriber account can share a notebook with individuals who are able to create, modify or delete notes in that shared notebook. A free account can only share a notebook for viewing, which is not what you want in this case.

So how does this work?

The premium subscriber account needs to create a notebook as normal and then share it with individuals. Basic information on how to share a notebook from the desktop client of Evernote can be found here.

When you want to share a notebook from the desktop client, right-click on it and choose to share it. You will be presented with the following screen (or something similar):

Sharing Notebooks

Now choose Share with individuals and enter the email addresses of the persons you want to share the notebook with.

Don't forget to check the Modify this notebook setting and the Require log in to Evernote setting:

Settings shared notebook

The invited people will receive an email with a link to the shared notebook, which they can either access online with a browser or integrate into the desktop client. Note that if you want to access this shared notebook with one of the mobile apps, you need to integrate it into the desktop client and sync first, otherwise it won't show up. Further details are left for the reader to find out.

That’s all!

Additional thoughts

The above more advanced setup can of course be further extended. If you are working within a software development team, you could think of the following:

  • Using multiple shared notebooks to represent different teams
  • Using multiple shared notebooks to represent different projects (not recommended, see next item)
  • Using tags to identify a project
  • Using especially tagged notes to describe the projects, tagged with charter
  • Using tags for bug, incident, release, feature, story etc. (yes, the hint to Scrum is intentional ;–))
  • Adding comments in the body of a note to describe whatever you like
  • Attach files to notes with additional information
  • Create saved searches to quickly filter on specific tags
  • Create “template” notes for specific entries that are often needed, pre-tagged

Shortcomings

The above setup still has a lot of shortcomings:

  • You don’t get a nice visual representation of the Kanban board
  • It’s a manual process to set the tags (and ownership of a task)
  • In fact it is all manual…
  • No other advanced features that some of the online tools have to offer

Another shortcoming is that a lot of companies have their firewall block access to Evernote (and other cloud-based storage services such as Dropbox).

Advanced alternatives

If you need something more advanced, take a look at the following online services:

Or look at this article which lists 15 tools for Kanban.