I'm currently running oprofile sessions on my netbook (currently on OpenSUSE 13.1), trying to isolate the causes of sluggishness. I don't really know what I'm doing. OTOH I've found some interesting stuff. For example, performance periodically drops to complete rubbish. The responsible program seems to be the package manager, invoked as a cron job or similar: whenever the package repos are refreshed, everything slows down. The performance loss is apparently due to LZMA, the compression algorithm it uses. LZMA is brutal on low-spec CPUs; calls to liblzma eat up over 80% of available CPU time when decompressing any given package. And Zypper often runs liblzma and gzip at the same time (perhaps for different parts of certain packages or metadata files).

That makes me wonder how much of the performance loss I've seen in recent years is due to better algorithms (cryptographic, hashing, compression, whatever) that need more memory or CPU time to do their work. Or algorithms that scale differently: today, O(n) with a large constant might be favored over O(n^2) with a smaller constant, but in days of yore the opposite might have been true for some tasks, on the assumption that a desktop would never see a large value of n.

Anyway, I'm curious now. Can anyone in the know offer some commentary on where in the Linux userspace stack performance is typically lost these days? What sorts of userspace tasks, and what associated libraries, are the biggest CPU and memory hogs on a modern Linux distro, and how does that compare with the situation ~5 years ago?
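To make the gzip-vs-LZMA point concrete, here's a rough sketch (not a rigorous benchmark) using Python's stdlib bindings to zlib and liblzma. The sample data and repetition count are made up for illustration; real package payloads and timings will differ, but on low-end hardware the decompression gap is what oprofile tends to surface.

```python
import gzip
import lzma
import time

# Hypothetical, highly compressible sample data; real packages differ.
data = b"the quick brown fox jumps over the lazy dog\n" * 100000

gz = gzip.compress(data, compresslevel=9)
xz = lzma.compress(data, preset=9)

def time_decompress(func, blob, reps=20):
    # Wall-clock timing; a crude proxy for the CPU cost oprofile reports.
    start = time.perf_counter()
    for _ in range(reps):
        out = func(blob)
    return time.perf_counter() - start, out

gz_t, gz_out = time_decompress(gzip.decompress, gz)
xz_t, xz_out = time_decompress(lzma.decompress, xz)

assert gz_out == data and xz_out == data  # both round trips are lossless
print(f"gzip: {len(gz):>8} compressed bytes, decompress x20: {gz_t:.3f}s")
print(f"lzma: {len(xz):>8} compressed bytes, decompress x20: {xz_t:.3f}s")
```

LZMA usually buys a noticeably smaller compressed size at the price of more decompression work, which is exactly the trade-off that hurts on a netbook CPU.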
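The big-O trade-off I mentioned has a concrete crossover point: with made-up cost constants, c1*n only beats c2*n^2 once n exceeds c1/c2. A toy sketch (the constants are invented for the example):

```python
# Toy illustration of the constant-factor trade-off: the asymptotically
# better algorithm can lose badly at the small n a desktop used to see.
def cost_fancy(n, c_fancy=1000):
    # O(n) algorithm with a large per-item constant
    return c_fancy * n

def cost_naive(n, c_naive=1):
    # O(n^2) algorithm with a small per-item constant
    return c_naive * n * n

# The fancy algorithm only wins once n > c_fancy / c_naive.
crossover = next(n for n in range(1, 10**6) if cost_fancy(n) < cost_naive(n))
print(crossover)  # 1001: below this, the "worse" O(n^2) algorithm is cheaper
```

So an algorithm choice that is clearly right for server-sized n can be exactly the wrong one for the small n a low-spec desktop actually handles.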