Wednesday, December 24, 2008

Year 2008 in a nutshell

Picture courtesy of foxypar4@flickr
Christmas and New Year are coming, so it's time for a summary. I've never done this before, but this year was quite awesome and I have something to be proud of and loud about.

Let's start from January, when I moved from my hometown to the glamorous French Riviera, i.e. the Côte d'Azur (yes, the sea here really is pretty much azure).

I accomplished some cool stuff at work, and one thing I can share with you is the new-generation hotel search engine, Wallaby (which is working but still in beta).

In 2008 I became interested in the theory of Agile Software Development (after a few years of practicing it), which resulted in an invitation to contribute to the blog. It was a huge challenge for me, as I've never been a good or even decent writer (especially in my native language, Polish), but it turned out to be something of a success. I have some pretty cool posts on my account that turned out to be big hits, like these (each of the following posts was visited more than 5,000 times):
And the best posts were written pretty spontaneously, in less than thirty minutes, without any preparation. Just a kind of thunderbolt in my brain that triggered my grey matter to work - that was it.

I'd like to thank Artem for inviting me to become a regular contributor - joining ASD was a good decision for me. I hope I will keep up at least the same level of inventiveness next year :)

During 2008 I read over 18 technical books, and I briefly reviewed some of them on my private blog.
More on 2008 in numbers: my FeedBurner account shows quite an increase in regular readers of my private blog - 160%, from 25 to 65 and counting. That's cool. Thank you, readers!

Last but not least, submitting my old and fresh posts to DZone gave my private blog a lot of visibility: the number of visits grew roughly tenfold, from ~930 to over 9K, in just one month. (I won't be able to keep these numbers up, but it still shows me that what I write is interesting and a lot of people like it.) This is also thanks to the ASD blog, which gave me more exposure.

And now some conclusions. As I wrote earlier, I moved to southern France in January 2008, mostly because my previous employer, Intel, simply laid off our team at the end of 2007. The friends I worked with at Intel opened their own company, and I was supposed to be their partner, but I decided to move to France with my wife. It was one of the best decisions we have ever made.

We live in a great place, have awesome friends, and every week we do some cool stuff like skiing, hiking or just visiting beautiful places like Monaco or Eze. And this is much more important than the money I could have earned by staying in Poland.

I learned a lot during 2008, the sun hasn't burnt out my brain (yet), and I consider it a pretty awesome year, both privately and professionally. I hope next year will be even better - but even if it turns out a bit worse, it will still be awesome :) Yes, I'm an optimist. I'm not going to reveal my professional plans for next year, but I will probably share the results with you in 365 days.

As this is my last post this year, I'd like to wish all the people around the world:

Merry Christmas
Happy New Year 2009

Wednesday, December 17, 2008

Seven Principles of Lean Software Development - Build Quality In

Picture courtesy of WayTru@flickr
While doing quick research for this article, I used Google to try to find arguments supporting the claim that "quality is expensive" - resources saying that caring about quality in the early stages of a software project doesn't make sense and doesn't pay. I found it very difficult, and I cannot share any reasonable links with you (except maybe this one, which refers to another post saying that "Quality Sucks").

What does this mean? It means that software people know quality is very important. Why, then, does the quality of software products suck more often than it rocks?

In many big corporations following the waterfall model there is a special team called QA (Quality Assurance). This team's responsibility is (not surprisingly) to assure an appropriate level of quality in the software product. This is fine, perfectly OK, except that QA is treated as a separate activity - the "Verification" or "Testing" box in the waterfall model diagram.
The biggest problem with this attitude is that the QA team gets the product after it has been implemented, and this is the root of all evil. QA is considered something separate: first we build the product, then we care about quality. This way of thinking is wrong. You should care about quality from day one, before you write a single line of code.

Software teams should "build quality in" to their products, and QA should not be treated as a separate activity. Quality assurance should be a constant process of improving the product - QA activities and people should be involved during development, not afterwards, when the developers have already moved on to other projects or even other teams.

In this post I will try to explain the "Build Quality In" principle from the book "Implementing Lean Software Development: From Concept to Cash". I will present a few practices that will help your team build software with quality built in.

To achieve high quality in your software, you should start worrying about it before you write a single line of working code. That means writing tests first and using the frameworks that support your test suite (e.g. mock objects). Track your code coverage - don't be too obsessive about it, because 100% coverage should not be the goal, but use it as an indicator of which parts of your system need more testing. And use whatever other tools you feel are necessary to test your software thoroughly - unit testing alone is very often not enough.
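To make the test-first idea concrete, here is a minimal sketch in Java. All the names below (RateService, BookingCalculator, the hotel-search flavour) are mine, not from the book: the point is only that the test and a hand-rolled mock exist before any production implementation does.

```java
// Test-first sketch: the interface and the mock are written before the real
// service exists, so BookingCalculator can be designed and tested in isolation.
interface RateService {
    double nightlyRate(String hotelId);
}

// A hand-rolled mock: always returns a fixed rate, standing in for the
// yet-unwritten production service.
class FixedRateService implements RateService {
    private final double rate;

    FixedRateService(double rate) {
        this.rate = rate;
    }

    public double nightlyRate(String hotelId) {
        return rate;
    }
}

class BookingCalculator {
    private final RateService rates;

    BookingCalculator(RateService rates) {
        this.rates = rates;
    }

    double totalFor(String hotelId, int nights) {
        return rates.nightlyRate(hotelId) * nights;
    }
}

public class BookingCalculatorTest {
    public static void main(String[] args) {
        // The mock pins the rate, so the expected total is fully determined here.
        BookingCalculator calc = new BookingCalculator(new FixedRateService(80.0));
        if (calc.totalFor("hotel-42", 3) != 240.0) {
            throw new AssertionError("expected 240.0");
        }
    }
}
```

In a real project a framework like JUnit plus a mocking library would replace the hand-written pieces, but the shape is the same: the test and the mock come first, the implementation follows.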

Reduce partially done work - tasks that are 90% done usually take another 90% of the total time to finish. Stay focused on one task and complete it; then you can move on to the next one. Try not to put defects on a list - stop and fix them the moment you discover them. Known bugs residing in your software will cause more defects in the future - don't let that happen (although an issue-tracking system can still be useful, e.g. for collecting requests from your customers).

Integrate your code as soon and as often as possible - commit your changes to CVS, SVN, etc. at least once a day and make sure all the tests give you a green light. Don't postpone synchronization, because it will hurt - you will spend more time on integration than on development. And you will get frustrated.

Even the best engineers make mistakes - you cannot avoid it; they are not robots (and robots may make mistakes too, btw). Eliminate the risk of mistakes by automating everything that is routine work. Almost everything repetitive can be automated, and you should do it as soon as possible - ideally, have a continuous integration engine in place before committing a single line of code.

You should automate testing, building, installation - anything that is routine - but do it smartly: do it in a way that lets people improve the process and change anything they want without worrying that the software will stop working afterwards. Automate so that people feel comfortable improving the software, the tests, the installation process, etc. by changing whatever they feel is necessary.

The code in your software product should be as clean and as simple as possible. You can easily enforce this using static code analysers - they really work, and they can be a real pain for sloppy developers (which is good, because those developers will learn to write clean code and follow the conventions).

Reduce code duplication to ZERO - every time it shows up, refactor the code, the tests and the documentation to minimize complexity. With modern IDEs it's pretty simple, and it can even be fun for developers.
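A tiny sketch of what eliminating duplication looks like in practice (the names and rates below are mine, purely illustrative): a "round to cents" expression that used to be copy-pasted wherever money was computed is extracted into one helper, so a change to the rounding rule happens in exactly one place.

```java
// After the refactoring: one helper instead of a copy-pasted expression.
class Prices {
    // Before, "Math.round(x * 100.0) / 100.0" appeared in every money method.
    static double roundToCents(double amount) {
        return Math.round(amount * 100.0) / 100.0;
    }

    static double withTax(double net) {
        return roundToCents(net * 1.23); // 23% VAT, an illustrative rate
    }

    static double withDiscount(double gross) {
        return roundToCents(gross * 0.9); // 10% discount, also illustrative
    }
}
```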

Build Quality In
I hope the advice given above makes it easier to understand how to put the "Build Quality In" principle into practice. If you need a more detailed description, with more examples and a more sophisticated explanation, you should definitely go to the "Implementing Lean Software Development: From Concept to Cash" book.

PS. The four principles described earlier can be found here:
  1. Respect People

  2. Deliver Fast

  3. Optimize the Whole

  4. Defer Commitment

Originally published on

Friday, December 12, 2008

Null Object - design pattern

Joshua Bloch, in his excellent book Effective Java (2nd Edition), advises that you should never return a null collection/map/array from your code, i.e. instead of code like this:

public List<String> returnCollection() {
    // remainder omitted
    if (/*some condition*/) {
        return null;
    } else {
        // return collection
    }
}

you should always use this pattern:

public List<String> returnCollection() {
    // remainder omitted
    if (/*some condition*/) {
        return Collections.emptyList();
    } else {
        // return collection
    }
}

This basically prevents the caller of your code from getting an NPE while trying to do things like this:

if (obj.returnCollection().size() > 0) {
    // remainder omitted
}

Robert C. Martin, in his book Agile Software Development: Principles, Patterns, and Practices, gives another, very similar pattern, but one that applies to ALL objects, not only collections/maps/arrays. This design pattern is called Null Object.

Here is an example - let's assume you have an application that checks whether a user is authenticated:

public class User {
    private String username;
    private boolean authenticated;
    // remainder omitted

    public boolean isAuthenticated() {
        return authenticated;
    }
    // remainder omitted
}

and the code that returns a reference to the User object looks like this:

public User getUser() {
    if (/*some condition*/) {
        return user;
    } else {
        return null;
    }
}

This way, the code that checks whether our user is authenticated looks like the following snippet:

if (obj.getUser() != null && obj.getUser().isAuthenticated()) {
    // allow
    // remainder omitted
}

Checking whether the object is null is not only boilerplate code - it can also give you a lot of bugs, e.g. when you forget the null check somewhere.

And here the Null Object can help you:

public class NullUser extends User {

    public static final NullUser INSTANCE = new NullUser();

    private NullUser() {
    }

    public static NullUser getInstance() {
        return INSTANCE;
    }

    public boolean isAuthenticated() {
        return false;
    }
}

and getUser() now returns the Null Object instead of null:

public User getUser() {
    if (/*some condition*/) {
        return user;
    } else {
        return NullUser.getInstance();
    }
}
plus cleaner client code:

if (obj.getUser().isAuthenticated()) {
    // allow
    // remainder omitted
}

I find this pattern very useful and really helpful. With it you can save yourself a lot of NPEs.

The remaining question is whether User should be a class or an interface, and accordingly whether NullUser should extend the base class or implement the interface. I will leave that decision for your consideration.
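For what it's worth, here is a minimal sketch of the interface route (the enum-based singleton is my choice for the sketch, not necessarily what the book recommends):

```java
// Interface-based variant: NullUser implements the same interface as the
// real user, as an always-safe, do-nothing implementation.
interface User {
    boolean isAuthenticated();
    String getUsername();
}

class RealUser implements User {
    private final String username;

    RealUser(String username) {
        this.username = username;
    }

    public boolean isAuthenticated() {
        return true; // simplified for the sketch
    }

    public String getUsername() {
        return username;
    }
}

// A single-element enum gives a serialization-safe singleton for free.
enum NullUser implements User {
    INSTANCE;

    public boolean isAuthenticated() {
        return false;
    }

    public String getUsername() {
        return "";
    }
}
```

With the interface in place, client code still calls `obj.getUser().isAuthenticated()` without any null checks.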

What do you think about Null Object pattern?

PS. The example I presented is not necessarily applicable to real systems - it is here just to illustrate the design pattern's idea. Please don't treat the provided code as a solution (I myself can think of many improvements/changes to it, depending on the context) - think about it at the pattern level, not the code level.

Thursday, December 11, 2008

"I know it doesn't work but it's done" - a story about the definition of done

Picture courtesy of orinrobertjohn@flickr
Some time ago I was talking to the engineers responsible for some part of our software, asking when they would be ready for production - in other words, when their features would be ready. They told me they were ready now. Imagine my surprise when I tried to test their software and discovered that about 50% of the cases didn't work at all. So I asked them why they had told me they were "done" when they hadn't even implemented half of the planned features. They answered: "We know it doesn't work, but it's done - we implemented something, so it's done. Now we have to work on quality, i.e. implement the rest of the features."

When I heard this, I think I might have looked like the lady in the picture. I couldn't believe someone could think like this - if we implement one use case out of a hundred, can we consider the project done? The rest is the "quality"? I don't think so.

In this post I'll try to explain once again what the definition of done is and why it's so important to share the same definition, at least among all the people involved in the development of a single project.

Let's define "Done"

I would say there is no single, good, universal definition of done. You can find discussions about it on the Internet, but everyone has their own variation. So do I - in my view the most important points are (I will use the term "user story" to mean any variation of request, use case, user story, etc.):
  • user story has to be implemented, today (99.9% done is not accepted)

  • user story has to be tested and no known bugs should exist

  • user story is ready to go into production, today

  • user story has to be ready to be presented to customer, today

Some explanation

User story has to be implemented - means the code has been committed to the version control system (CVS, SVN, etc.), the documentation is available on the wiki or in the VCS, and so on. The output of the work done (whatever that work was) must be available for anyone in the company to download and check. There must be no "I have it on my box - will publish it soon". Work must be committed and available to others.

User story has to be tested and no known bugs should exist - means that if you know about any bugs in the user story you're going to deliver, it's not done. If a bug exists in some subpart of the user story and you really need to deliver the working stuff, maybe you should split the story into two smaller ones. You must not deliver bugs to your customer - I'm talking about bugs you're aware of.

User story is ready to go into production - means it is ready to be deployed at any moment from the time you stop talking. The wise thing would be to have the working software already deployed and tested in the production system - if it works there, it's really done.

User story has to be ready to be presented to the customer - means that within 30 minutes (max) you can prepare a demonstration of working software for your customer. Of course, this requires you to have the list of acceptance tests ready and to know how to demo your software. This last point is very, very important. Remember it when defining all your user stories - you have to know HOW TO DEMO the user story, and it will probably help you define the acceptance tests (e.g. the user adds a new item to the database using an HTML form, then goes to the search panel and is able to find the newly created item by its name, ...).
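The "add an item, then find it by name" acceptance test above can be sketched as an automated check. This is purely illustrative - ItemStore is a made-up stand-in for whatever sits behind the real HTML form and search panel:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the system under test: the acceptance test drives
// the same two steps the demo script describes - add an item, then search for it.
class ItemStore {
    private final List<String> items = new ArrayList<String>();

    void add(String name) {           // "user adds a new item via the form"
        items.add(name);
    }

    boolean findByName(String name) { // "user finds the item in the search panel"
        return items.contains(name);
    }
}
```

An automated test then exercises both steps: add "red bicycle", assert that a search for "red bicycle" succeeds and a search for an item never added fails.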

Wrap up

As I mentioned above, a good, universal definition of done probably does not exist, but at least many resources agree on the basic principles. My definition of done is simple, but I consider it quite powerful.

If you are interested in diving deeper into the subject, I would recommend these two links from the ScrumAlliance:
What do you think about my definition of done? If you have your own, I would gladly read about it. Please share your opinions here.

Originally published on

Wednesday, December 10, 2008

Postcard from Auron

Christmas time is coming, and I just have to decorate my blog with some snow :)


Thursday, December 04, 2008

Developers Aren't Gonna Read It

Picture courtesy of austinevan@flickr
Developers are customers too - from time to time. They are the customers of the product definition/specification team that prepares the technical specification documents. It doesn't really matter whether you work in an agile or non-agile environment - I'm sure you have some technical documentation, and its main goal is to answer developers' questions on technical issues (e.g. how to configure some components to work with others, how to map fields from GUI forms to an XML message, etc.). It also helps the test or QA team prepare acceptance tests and verify that what the developers implemented is what was specified (I know this smells a bit of waterfall, but stay tuned - I will say something about agile documentation soon).

I suggest you read the article on TAGRI (They Aren't Gonna Read It) by Scott W. Ambler - it's really great. In this post I'm going to give you a real-life example of what I experienced regarding documentation. I will share my opinions on what kind of documentation sucks and which documents are really cool and useful. (Btw, my dream documentation is one in which I can quickly find accurate information on the topic of my interest and get it into my head in less than 10 minutes - the picture you see is the total opposite of my dream; it's a waterfall process.)

Developers won't read it

At the beginning of the project I met with the specification's creators to discuss what we were going to deliver. They described the project goals, the business value and, of course, the requirements. They also showed me two big books (sorry, specification docs) of around 300 pages in total. And the project was quite simple - a basic CRU (Create, Read, Update) application with one type of object to be stored and queried. Believe me - the system was simple.

I was responsible for the web UI part of this system (100 pages) but also had to understand how the backend works (200 pages). Wow! That's a lot to read for a simple system - imagine how much time you need to get through it. And that's nothing compared to how much time and effort it cost to produce - while you still don't have a single line of working code.

So, did I read the documentation? I didn't.

I didn't have to, because I preferred direct communication with the guys who knew the system thoroughly. I didn't have to read the documentation, because the guy who specified the GUI had prepared screen mockups, so I knew exactly what the pages should look like. I didn't need 100 pages of documentation for that. I just needed a couple of screen mockups - that was enough for me to deliver the software.

Tests and examples are the best documentation ever

And they are not only the best documentation - they also define the design of the system to a great extent. When I was integrating the UI forms with the XML sent to the backend, I still wasn't referring to any documentation. I just asked the guys responsible for the specification and test preparation to give me example messages (requests and responses). I got what I needed, and based on the examples I was able to integrate the UI with the backend - simple. Here I will give kudos to the part of the specification that explained how to map the UI form fields into the XML message (the XML schema was not self-explanatory in many places, e.g. I could store the phone number in different XML tags - I had to know which one to fill). I also used a piece of documentation that explained the required format of the input data - I had to validate the user's input somehow. But that was all I needed - roughly 5% of the overall documentation stack.
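To illustrate the kind of ambiguity the example messages resolved (the class, tags and fields below are invented for this sketch, not from the real project): the schema allowed a phone number in more than one place, and only the examples showed which tag the backend actually reads.

```java
// Illustrative sketch: building the request XML the way the example messages
// showed it. The tag names are made up; in the real project only the spec
// examples revealed which of the candidate tags the backend expects.
class ContactRequestBuilder {
    static String toXml(String fullName, String phone) {
        return "<contactRequest>"
             + "<fullName>" + fullName + "</fullName>"
             // the examples used <contactPhone>, not the other
             // phone-ish tag the schema would also have allowed
             + "<contactPhone>" + phone + "</contactPhone>"
             + "</contactRequest>";
    }
}
```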

KISS (Keep It Stupid Simple)

In that simple project we had three different documents, and I was finding discrepancies between them almost every day - yes, some of the fields, data types, etc. were specified in all three documents (often differently). The part I developed was quite resistant to this, because I kept to my one source of information. If I didn't know something, I just asked the specification team - I didn't look for the answer in the documentation, because that would take more time, and it turned out that most of my questions were not covered by the documentation anyway. Again, verbal communication was the best choice for me - I was able to fix some mistakes in the documentation on the fly, because I reported every technical problem I had to those guys, and they adjusted the details according to the technical constraints.

To summarize this point - keep it simple. Have one source of information and you will win. I do - every time I follow this rule.

Not all documentation sucks

That's true - some level of documentation is necessary, as I showed before. Some mappings are sometimes required: lists of required fields, error codes, etc. Much of this can and should be covered by unit tests and automated GUI tests - but it's pretty hard to record GUI tests without having it written down. In a waterfall-like process, documentation is usually considered a good - a very important good. I don't understand why. It doesn't represent any business value for the customer (except the user's guides), but a lot of people value documentation more than working software. I think this is the reason they fail so often.

In my opinion the best documentation is documentation that is very easy to change. For example, Word documents stored in Lotus Notes folders are a bad idea: developers cannot easily change or even comment on the specification/documentation - and they should be able to. I often faced the problem of seeing something really, really stupid in the docs and wanting to change it, but not being able to. A wiki is the best option here - it's easily searchable (across many projects) and changeable. And if you want a Word doc - your business - just export the page to the format of your choice.


My main recommendation: communicate verbally with the people who know your product, as often as you need; keep one source of information, and you will not be surprised by different requirements for the same thing; start with unit testing your code and keep your UI tests up to date; automate as many acceptance tests as possible and consult the results with customers (or customer proxies).

I'm not going to copy-paste the whole article, but I fully support the Solution part of the TAGRI article - I strongly recommend you read it. I have nothing more to add.

What is your experience with documentation? Do you read huge stacks of papers your team lead gives you? What is the value of the documentation? Please, share your opinions here.

Originally published on