Run a QML application in CodeXL


I just tried to run my QML-based application in CodeXL to get an overview of my hotspots. You have to set up the path to your executable in the CodeXL project.

Unfortunately, with a qmake-based build the executable and the Qt libraries are not in the same place. So by default your app will not start.

Solving this is really easy: if you are using only one Qt installation, just add your Qt installation path to the PATH variable.

If you have to switch between two different installations, you can use a startup script which adds your Qt installation path to the PATH variable on Windows.

Never rebuild a product from scratch to erase your technical debt


I am sure every developer has seen the following situation once in his career: you have to support an older product which, for instance, is based on an outdated technology like the Motif UI framework, a legacy codebase, spaghetti code or whatever. Of course a lot of users are still using this product; it supports a really rich feature set, even if the way to use these features seems much too complicated. And of course you have to add new features and bugfixes all the time. Unfortunately, caused by a really tough project plan, you are only able to add another workaround to bring the new feature up. There is no time to deal with all the legacy code, build some automatic tests or even do some urgent refactoring tasks. In other words: you have to deal with legacy code.

So how can we deal with stuff like this? Let's rebuild it from scratch, for instance. Then you can use some more modern technologies and avoid all the old mistakes made by the old product team. Sounds good, right …

Nice idea. Other companies tried this as well, Netscape for instance. They tried to rebuild their whole product from scratch, and they failed. The old company Netscape is history. But why? Why is a rewrite not the right way to bring a great idea onto a modern foundation? Just keep in mind: the product has users, and these users normally have reasons to use it even if the underlying technology and codebase are legacy. And normally these products are stable in their own special way.

Do you know the idea of technical debt? If not, take a look at this great article: . You increase your technical debt when you insert a quick hack to bring up a new feature in time without cleaning up your code base, specification, documentation or tests. Every time you clean up your code, you decrease your technical debt.

The feature debt coming from the users decreases when you have finished a feature request, even if it is done by a hack. Users are normally interested in features, even if the underlying code quality is really bad. And this is what gives you an income at the end of the day.

So let's restart your great idea with a new implementation from scratch. You will not have any technical debt at the beginning, but a really big feature debt, caused by the old users whose core interest is to solve a specific problem with your tool. They are not interested in your fancy new tech with a really small feature set. And this is the reason you will have to support the old product as well, until the new one is really “finished”. Of course the old product still gets new requests and bugfixes. And of course the new one also has bugs and needs refactorings. In other words, new technical debt will be added as well.

So what has happened? In the beginning you tried to solve the issue of your technical debt by recreating everything from scratch. And now you have earned a new feature debt on top of the original technical debt, plus new technical debt caused by the fancy new tech, repeated mistakes, …

Of course the sum of all of these is much bigger than the old debt. And when you have finished the new product? Normally not all users will switch to the new one.

So is this really a win-win situation? From my point of view: no. It makes much more sense to decrease the technical debt of the original product and bring it into a state where you are able to deal with the issues. And if you really do want to rebuild from scratch: stop supporting the old product immediately.

One of the things I learned about Scrum: It's all about communication


I have been working as a Scrum Master for about one and a half years now. In this time a lot of colleagues asked me: why are we using Scrum? Where are those big advantages?

Is it the fact that you have shorter time slots to plan? Shorter iterations give you the opportunity to react faster to changes like new requirements, a new focus or just a new technology.

Or does the benefit come from the standup? In the standup you are forced to tell your team members your current task and the stuff which is blocking you.

Is it the fact that your product owner tells you where your implementation seems to be right or where it seems to be wrong (or completely rubbish, sometimes)?

In my opinion the simple answer is: the main advantage of Scrum is that it forces you to communicate. If you follow the methodology, you run into a lot of situations where you are forced to show results and to communicate that you are currently blocked. And you also quickly get the information that the customer needs something different.

And the Scrum Master has to kick some asses if this doesn't work, until people are talking to each other again.

And this is the main thing I learned in the last year: it's all about communication. If you want to be successful you have to communicate with your developer colleagues, your product owners, the testers and the customer. And Scrum is good at this.

Chicken Korma


For my English training I just translated my favourite recipe for Chicken Korma into English. And of course I want to share it:

Chicken Korma:

800 g chicken breasts, cut in little pieces, without skin or bones
One big onion, cut in half rings
One red chilli pepper
Three cloves of garlic
One piece of ginger (the size of your thumb)
Almond slices
Shredded coconut
1 bunch of coriander
Curry paste (Korma; you can make this on your own)
1 knob of butter
400 g chickpeas
400 g coconut milk
Black pepper
200 g natural yoghurt

At first make the curry paste on your own or just buy it. Separate the leaves and the stalks of the coriander. Now take your wok, put some oil in and heat it up until it is hot. Now you can sear the chicken meat until it gets a light brown color all around. Put the chilli, the mashed ginger, the onions and the coriander stalks with the butter into the wok. Cook this for 10 minutes or so while you are stirring the whole soup. Mix the curry paste, the coconut milk, the almond slices, the chickpeas, the shredded coconut and 200 ml of water into the wok.

Let it cook for about 30 minutes. Now season to taste with salt and black pepper.

Serve it with the leaves of coriander, the natural yoghurt and rice.

Enjoy your dinner!

How to improve the architecture of your application


If you have to work with a brown-field application, like an older framework which happened to be developed in your company, and you have recognized that the underlying architecture is not really good, normally you want to improve this step by step. Because working with such a big package can be sporty and is not really fun.

But here the fun starts: how can we get to a better architecture for an already existing software product, even if it is already in use? Of course you can try to change the internal stuff by refactoring it, but one big sign of a brown-field project is that there are no tests in place at all. And without any tests you cannot start refactoring work at all.

So how can we find a starting point for this?

  • Wait until you have to fix a bug yourself. You have to change some code anyway, so you can also spend some time building a small unit test for it.
  • Look out for big classes and methods.
  • If you have recognized that there is a place where a lot of bugs appear or changes are made all the time, this is also a good starting point for making things better. Metrics over your code changes can help a lot here.
  • Look out for code duplication. Fixing a bug in one place and missing the other two is already a bad sign, and you should change this. One thing I like to do is run Lint and look for patterns in the warnings. If you get the same Lint errors over and over again, it is possible that you have found a code duplication.
To write a unit test you need code whose interface offers the ability for testing. For instance, if you have to test an internal calculation, you need a way to get the results afterwards. You also need to know which dependencies are necessary to create a test case for your small piece of code.
And normally in a brown-field project the interface will not offer this. So you have to do your first refactorings to get a better testable interface and the possibility to write tests at all.
If you do this for a while you should end up with more testable code which:
  • offers interfaces which are not hiding any magical stuff
  • has dependencies that are visible, mockable and can be controlled in a test environment
  • is decoupled, which makes the tests much easier
  • has documentation in the form of unit tests, which can be really good, readable documentation for your interfaces
  • is under test and can be refactored
And all of these points are signs of a better architecture. Of course these are not the only signs, but as a starting point this can be really helpful. And because more and more tests are in place, you can refactor much more easily.
So if someone asks me how to improve the architecture of his code, I can give him one strong recommendation: make your code testable.

What is the best solution for a UI


Currently I am thinking about using a “new” framework for a user-interface-based application for the ZFXCE2. The stuff shall run on a Windows system at first (and this could even remain the only target platform).

Does it make sense to use something like WPF or even Silverlight? Especially if you are thinking about all the statements from the Microsoft front about which framework will be the best in the future, you will normally get more answers than you ever wanted to have:

  • Use HTML5 with JavaScript (one year ago)
  • Use WPF, especially for the desktop
  • Use Silverlight, it can do so much for you, even working on an embedded device
  • Use WinRT, for Windows 8 it will be “the” thing
  • Use the Win32 API for basic applications, because Microsoft will never break their dependency on this old but really stable API
  • Use a portable framework, like Qt
So what should I do? Maybe in the future I want to be able to switch to Linux easily, so Win32 is not an option anymore. WinRT also seems to be out. WPF and Silverlight are able to run on a Linux system using Mono; hopefully all the features I am using will be supported then.
HTML5 seems to be a really useful option, but I want to have control over some 3D stuff, which is hard to realize with JavaScript.
So I think the only real option will be Qt? What do you think, any other suggestions?

Asset Importer Library news


I am really happy to announce that we made a new major release of the Asset-Importer-Library. Currently we only have a new source package out, but the SDK is on its way as well.
The guys from Debian asked for a new stable release because they wanted to add the assimp package to the next Ubuntu release, and assimp changed the API. This API change will cause binary incompatibilities with applications which were built against the 2.0 version.
The API changes are already documented and you can find them here:

So what is new in Asset Importer Library 3.0? A lot, I guess:

New features:

  • New export interface similar to the import API. Supported export formats are: Collada, OBJ, PLY and STL.
  • New import formats: XGL/ZGL, M3 (experimental)
  • New postprocessing step: Debone
  • Vastly improved IFC (Industry Foundation Classes) support
  • Introduced API to query importer meta information (such as supported format versions, full name, maintainer info).
  • Reworked Ogre XML import
House keeping / refactorings:
  • The API changed to improve usability
  • Unified naming and cleanup of public headers
  • Support for a Debian package
  • Improved CMake build system
  • Better CMake support for Linux and MacOS
We will publish the release notes as soon as possible. Currently we are all really short on time.
Thanks to all the helping hands, for the patches and for all the nice and really constructive feedback on our work.

My first Android “Hello World” app


After reading a lot of stuff regarding the SDK for Android apps I wanted to start with a small app, which will just show a window with the simple text “Hello World” on it. After reading so much documentation this seemed to be an easy task: just download the current version of Eclipse, install the Android SDK, install a test target for the Android emulator and run the app. Sounds really easy, right?

Installing the SDK and Eclipse was done as easily as expected. I already had some experience with Eclipse, so setting up a simple project was not so hard. The SDK works just fine as well. You have everything in place if you use the wizard to create your first project. The SDK also creates a test application for you, which is great. So let's start the app. At first I tried to define a simple target in the AVD manager. After setting this up you can try to start it. And here the problems began. I got an error message like:

Cannot find the image at c:\users\kimmi\Desktop\.andoid\avd ...

Of course, this is not the path for my user. Some years ago I made a small mistake with my personal directory on Windows 7: I accidentally moved my personal folder to a different place on my hard drive. I fixed this by adding another directory. So instead of


I am currently using something like


The AVD manager looks in the common path, unfortunately not in mine. I thought about fixing my environment, but I wanted to learn something about the underlying mechanisms of the Android SDK.
And this issue was really easy to fix. By setting the environment variable


the issue was gone. Another try to start the manager, and the next error happened: not enough memory left for the emulator. But I was also able to fix this issue easily by adapting the size of the simulation target's memory. And then everything worked fine. After waiting for the emulator to start up, I was able to see my first “Hello World!” example built with the Android SDK.

So my next step will be to set up a simple native package. My plan is to use the Asset-Importer-Library to load a simple asset and render it in the emulator.

My experiences with reference counting


Some years ago I started to use reference counting as a way to find object leaks in my code, especially in my private projects, to find memory leaks much easier and faster: I played around with the get/release pattern like COM objects use.

So I wrote a small base class called IObject, which supports a simple internal reference count:

class IObject {
public:
    explicit IObject( const std::string &name );
    void get();      // increases the reference count by one
    void release();  // decreases it; deletes the instance when it reaches zero

    IObject &operator = ( const IObject &r );

protected:
    virtual ~IObject();  // not public: explicit destructor calls are forbidden

private:
    std::string m_Name;
    unsigned int m_NumRefs;
};

A user can create an object instance on the heap. If you want to share it, the second client working with this instance has to call get(). This will increase the reference count by one. After finishing the work, the client must call release(), which will decrease the reference count by one. If the number of references reaches zero, the release() method will delete the instance itself.
The constructor will initialize the reference count with one. If you have finished your work you can release the instance. Explicit calls of the destructor are forbidden.
You see, it is pretty simple. The base object has an attribute to store its name. You can use this to see which objects still have unreleased references.
It sounds pretty easy at first glance, but after a while I saw more and more leaked references I wasn't able to locate. So I introduced a simple smart pointer whose responsibility is to manage the release automatically using RAII. But the issue was not solved with this approach either.
After a while I learned what went wrong:

  • Circular dependencies: the parent node owns a child node, and the child node owns a reference to the parent as well. Not good! The solution was pretty simple: the child just keeps a weak reference to the parent without incrementing the reference count.
  • When an object cache created a new instance and stored a pointer to it, I incremented the number of references, too. So the object itself at the beginning of its life has a reference count of one, and the cache increments it again. When the cache is destroyed, its reference is dropped and the reference count is decremented by one, but one reference is still left. After I understood this, the solution was pretty simple: avoid the unnecessary get() call.

I haven't fixed all my leaks yet, but I was surprised how many errors I made with such a simple approach and how much time it takes to understand and fix them.