
Monitor the QObject Tree of a Qt App

Because it is still being reported that the ownCloud Client has an increasing memory footprint when running for a long time, I am trying to monitor the QObject tree of the client. Valgrind does not report any memory problems with it, so my suspicion was that somewhere QObjects are created with valid parent pointers referencing a long-living object. These objects might accumulate unexpectedly over time and waste memory.

So I tried to investigate the app with Robert Knight's Qt Inspector. That's a great tool, but it does not yet do exactly what I need, because it only shows QWidget-based objects. But Robert was kind enough to put me on the right track, thanks a lot for that!

I tried this naive approach:

In the client's main.cpp, I implemented these two callback functions:

 QSet<QObject*> mObjects;

 // called by Qt whenever a QObject is created
 extern "C" Q_DECL_EXPORT void qt_addObject(QObject *obj)
 {
    mObjects.insert(obj);
 }

 // called by Qt whenever a QObject is deleted
 extern "C" Q_DECL_EXPORT void qt_removeObject(QObject *obj)
 {
    mObjects.remove(obj);
 }

Qt calls these callbacks whenever a QObject is created or deleted, respectively. When an object is created, its pointer is added to the QSet mObjects; when it is deleted, it is removed from the QSet again. My idea was that after the QApp::exec() call returns, I could inspect which QObjects are still in the mObjects QSet. After a longer run of the client, I hoped to see an unexpectedly large number of objects being left over.

Well, what should I say… no success so far: after first tests, it seems that the number of left-over objects is pretty constant. Also, I don't see any objects that I would not more or less expect.

So this little experiment left more questions than answers: Is the suspicion correct that QObjects with a valid parent pointer can cause the memory growth? Is my test code, as it stands, able to detect that at all? Is it correct to do the analysis after the app.exec() call has returned?

If you have any hints for me, please let me know! How would you tackle the problem?

Thanks!

This is the link to my modified main.cpp:
https://github.com/owncloud/mirall/blob/qobject_monitor/src/main.cpp

  1. Harri
    August 14, 2014 at 18:39

Even if Valgrind does not report a definite leak, you can still use it to determine whether an ever-growing number of objects gets allocated. Compare its output between a medium and a long run and watch which places possibly show a rising amount of "still reachable" memory.

  2. August 14, 2014 at 18:40

You can take a look with GammaRay ( https://github.com/KDAB/GammaRay ), it is more complete than Qt-Inspector. You can also count objects by writing a GDB Python script, but the API is not well documented, so keep that as a last resort. Another (bad) option would be to git grep for "new " and remove every single parent pointer from each MyClass(…) : QObject() call; that way, it might generate proper memory leaks.

Another source of memory increase in long-running applications is bloated QLists and QHashes. This can be checked with a GDB Python script and some manual data mining.

  4. Harri Porten
    August 14, 2014 at 18:48

Valgrind may still be useful even though it shows no definite leak. Monitor the "still reachable" numbers. Do they grow indefinitely the longer you run the application? Initially they'll be unstable, but if they continue to grow you can find the problematic spots (or at least the symptoms).

  5. hwti
    August 14, 2014 at 19:33

Did you try Valgrind's Massif tool (with the massif-visualizer GUI)?
    It groups allocated heap blocks by callstack, so if you can reproduce the leak you will hopefully see one or several categories grow over time.

  6. Karellen
    August 15, 2014 at 00:29

    Leave the client running for a long time, so that its memory usage is 10x, or even 100x the startup memory usage. Then dump the memory of the process. Whatever is filling up 90% or 99% of the dump is whatever it is that you’re “leaking”.

    Hopefully, there will be something you can recognise. If you’re lucky, there will be strings in there that give the game away.

    If not, load the memory into a hex editor for which you can vary the number of bytes/line. (Or, run the memory through “od -t x4 -w N” for varying N, and view with a text editor capable of handling *large* files). Start by viewing the memory with line widths that are powers of 2, but experiment if that doesn’t help.

    At some point, you might see the same few bytes repeated at regular intervals, which will show up in your hex view as a group of columns that contain the same values row after row. That could be things like vtable pointers, or even member variables which are frequently the same for most instances of the leaked object type. By examining the size of the repeated pattern, which columns (i.e. member variables) are the same, and what their values are, you might be able to figure out what type of object is being leaked.

    Once you know what’s being leaked, finding it should become easier.

  7. jpetso
    August 16, 2014 at 19:35

    After QApp::exec(), most objects would have been deleted, right? If you’re suspecting objects in the QObject tree with no lost references, wouldn’t those objects get properly deleted on tree destruction as well? If so, that would explain why you can’t see them anymore afterwards.

    As an addendum to Karellen’s suggestion, you can also make a string-to-object hash multimap instead of a set, and key it by QObject type. Then you can print out the amount of objects of each type at any point, and potentially find which type is rising on a regular basis.
