Keeping SSDs fresh

With a new SSD the laptop is quieter and feels faster than before. I want to keep it that way, which (still) means keeping the number of writes to it down. OpenSUSE has some tips, as does Fedora, but they leave a few bits untouched which might be useful, so I’m taking note here.

 - Make /tmp a tmpfs filesystem. This means no longer relying on /tmp across reboots, but those are pretty rare since I usually just suspend-to-RAM.
 - Make /var/log tmpfs, too. This is an aggressive optimization, but I think it’s acceptable for a laptop.
 - Disable the I/O scheduler on disk sda, and force syslog to write to the (now RAM-backed) /var/log.
 - Set syslog to log warn and above only; a rough sketch of these settings follows this list.
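
For the record, here is roughly what those items boil down to on my machine. Treat it as a sketch, not gospel: the device name (sda), the log file name and the scheduler your kernel actually offers depend on your setup.

    # /etc/fstab: keep /tmp and /var/log in RAM (contents are lost on reboot)
    tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777  0  0
    tmpfs  /var/log  tmpfs  defaults,noatime,mode=0755  0  0

    # "disable" the scheduler for sda by switching it to noop
    # (run as root from a boot script; adjust if your kernel lists other schedulers)
    echo noop > /sys/block/sda/queue/scheduler

    # /etc/rsyslog.conf: only log priority warn(ing) and above, into the tmpfs-backed /var/log
    *.warn    /var/log/messages
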
The hard part is getting rid of a .xsession-errors that keeps growing (and getting written to). KDM can be configured to write the file elsewhere (and that’s documented) but you still need to hack the Xsession script to stop X from (re-)creating that file. I kept meaning to write down what I did, but .. good intentions and all.
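
For completeness, the Xsession hack amounts to redirecting the error file before X (re-)creates it. This is only a sketch for Debian-style Xsession scripts that route output through an ERRFILE variable; the script location and variable name vary per distribution, so check yours first.

    # in the distribution's Xsession script (e.g. /etc/X11/Xsession),
    # the line that normally reads
    #     ERRFILE=$HOME/.xsession-errors
    # can be pointed somewhere harmless instead:
    ERRFILE=/dev/null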

Speaking of good intentions: I’ll be at FOSDEM, mostly at the KDE booth (everyone at the booth has also written “but I hope to attend some talks, too” on the schedule), so we’ll see. It’s been quite some time since I remember sitting with Anne-Marie at the bar across from Manneken Pis, ordering all the beers we couldn’t pronounce.


10 Responses to Keeping SSDs fresh

  1. ascarpino says:

    If you use Fedora or OpenSuSE I guess you are already using systemd. So there’s no need for syslog, and instead of mounting the whole /var/log/ as tmpfs, I suggest you set the Storage option of the systemd-journald service to ‘volatile’.
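
    Concretely, that means something like this (the stock config file path; with Storage=volatile the journal is kept only under /run, i.e. in RAM, and never written to disk):

    # /etc/systemd/journald.conf
    [Journal]
    Storage=volatile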

    • adridg says:

      @ascarpino ha, interesting. I searched around a bit, but nothing shows up (well, with my search terms like “ssd write reduce linux”) mentioning systemd and its facilities. The greatest failing of the modern internet: the information you find is nearly always out-of-date and old results swamp new (or corrected) interesting things. There was even a bit recently on the Register, I think, about error propagation in song lyrics.

  2. Fri13 says:

    Going by the amount of writes that a few-year-old SSD could stand, you could write 2 GB worth of data every day for 4 years before you ran into the SSD’s write limit.
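
    (Spelled out as back-of-the-envelope arithmetic: 2 GB/day × 365 days × 4 years ≈ 2.9 TB of writes in total.)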

  3. Serafean says:

    What about simply making .xsession-errors a symlink to a file in /tmp? Wouldn’t that work too?

    • adridg says:

      @serafean @shoes No, ~/.xsession-errors is re-created on login. I tried ln -s /dev/null .xsession-errors, and after re-logging-in it was back to being a regular file.

  4. shoes says:

    You can point .xsession-errors to /dev/null if it really causes problems, no?

  5. No need to pay attention to write cycles; modern SSDs are not that fragile anymore, and they should come with plenty of reserve blocks to make sure even you won’t see them degrading within a reasonable lifetime.

    I just would not mind.

  6. SSL says:

    You could also move the browser’s cache to tmpfs; I bet that folder in particular gets a lot of writes. And since you suspend rather than power off, it shouldn’t be a problem.
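
    One way to do that, sketched under the assumption that the browser keeps its cache under ~/.cache (as Firefox does) and that /tmp is already tmpfs:

    # keep a copy of the old cache, then point ~/.cache into /tmp
    mkdir -p /tmp/$USER-cache
    mv ~/.cache ~/.cache.bak
    ln -s /tmp/$USER-cache ~/.cache
    # note: /tmp is empty again after a reboot, so the mkdir has to be
    # re-run (e.g. from a login script) before the browser starts

    Firefox can also be told directly where to put its disk cache via the about:config preference browser.cache.disk.parent_directory.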

  7. Jan says:

    For .xsession-errors there’s a solution:
    sudo chattr +iu .xsession-errors
    Should make it immutable and undeletable (on ext4 at least).

    I’d still second Sebastian Kügler here though. Crippling your system is not worth the effort with modern devices.

  8. Geka says:

    Unfortunately, those making SSDs aren’t that experienced at making drives in general, so they don’t have important past experience to guide them. One SSD we’ve used, much to our chagrin, locked up the POST on a motherboard when it failed. The only possible solution was to remove it. Another had a soft fail, in that parts would respond correctly, so we saw the RAID mechanism doing what it was supposed to. But because the failure wasn’t staged correctly, the RAID mechanism was, er, abused. In this case, the RAID attempted something like 200+ rebuilds in ~5 seconds due to the soft failure. Basically, the RAID-level functionality is working fine. It’s the idea that an SSD should be indistinguishable from a disk drive in operation that many of the vendors appear not to get. During failure, we need the drive not to lock POST, nor to soft fail; remove it and be done. Not a RAID issue, it’s a drive implementation issue. As for self-healing file systems, yes, I’d advise using them with SSDs. RAID1 or RAID10 at minimum at the block level. ZFS has issues on Linux right now, and isn’t suitable for boot drives under Linux. WAFL isn’t available.