
Hackday

We just finished our 2nd Hackday at PowerReviews. Google has 20% time and PowerReviews has Hackday. All the software engineers stay up for 24 hours hacking on their pet projects that never made the cut to formal release. Perhaps it is my new reality as a parent of small children but I no longer see the appeal of staying up all night and depriving myself of precious sleep. Nevertheless the engineering staff was enthusiastic about the event and it was well attended although not everyone made it out (nor did everyone stay all night).

The rationale for Hackday is that programming projects are easier to bang out in large contiguous blocks of time. Once you get into the flow you become much more productive, the overall project gets easier, and it is actually delivered faster. Supposedly programming is more conducive to 12-hour chunks, while MBA-type management and marketing tasks are suited to much smaller intervals. As a rare programmer/MBA I am not so sure that I buy into this line of thinking.

Certainly there are programming tasks that can be solved in a short period of time. Also there are plenty of difficult marketing and management jobs that take much longer than 1 hour.

Yet the Hackday was wildly successful. Not only did the software engineers love it (I’m sure all the free junk food didn’t hurt) but they prototyped some really nifty stuff.

I think the real reason Hackday is so successful comes down to two simple things:

  1. Something different. Change it up. Work at night instead of during the day. Work in the conference room instead of at your desk. Play bad techno music on speakers (Yes!) instead of those headphones that we are all forced to wear so we don’t bother others with our eclectic musical tastes.
  2. Provide freedom for people to unleash their creativity. Finally work on that killer feature that your boss hates. Morale is up.

Hackday was fun. Try it with your team.


iostat graphs

If your computer system is slow there are four things to look at:

  • CPU
  • memory
  • I/O
  • network

Most database systems are I/O bound. (Although SSD drives may change this paradigm soon.) My database systems are I/O bound too. Since I am addicted to data pr0n like John Allspaw I decided to whip up some iostat graphs. Use whatever tools you have handy. I used bash scripting, a little bit of perl, nagios, rrdtool and nagiosgraph to roll up some trend analysis of my system I/O performance (or lack thereof).

After spending some quality time with the iostat man page I figured out which version and flags I wanted:

iostat -dx 1 2

Here are the fields I want (see man page):

  • r/s
  • w/s
  • avgrq-sz
  • await
  • svctm
  • %util

I put together a shell script to gather these stats and spit them out in a nagios-friendly format. This plugin works with nagios 2.x & 3.0.
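
Here is roughly what the heart of such a plugin can look like. This is a minimal sketch rather than the actual plugin from the post; the field parsing assumes the 12-column sysstat layout of the era (Device rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util), so adjust the read for your iostat version:

#!/bin/bash
# Hypothetical sketch: sample iostat and print a nagios-style line with perfdata.
DEVICE=${1:-sda}
# Two samples one second apart; the first is the since-boot average, so keep
# only the last (current) line for the device.
LINE=$(iostat -dx 1 2 | grep "^$DEVICE " | tail -1)
if [ -z "$LINE" ]; then
    echo "UNKNOWN - no iostat data for $DEVICE"
    exit 3
fi
read -r dev rrqm wrqm rs ws rsec wsec avgrq avgqu await svctm util <<< "$LINE"
# Nagios plugin output: status text, then a pipe, then label=value perfdata.
echo "OK - $dev await=${await}ms util=${util}% | rps=$rs wps=$ws avgrq_sz=$avgrq await=$await svctm=$svctm util=$util"
exit 0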

After integrating the plugin into Nagios I used our existing nagiosgraph setup to automagically dump the incoming data into rrds. One of our RSGs** has developed a collection of perl scripts that parse the rrd storage directory and create html-friendly graphs. We are currently evaluating whether or not we want to replace the perl scripts with Cacti.  At this point, the perl is working fine.  In the words of Donald Knuth, “Premature optimization is the root of all evil (or at least most of it) in programming.”

** RSGs = Really Smart Guys
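
For anyone rolling their own graphs, the core of such a script is just a call to rrdtool graph. A minimal sketch, with a hypothetical rrd path and data source name (run rrdtool info on one of your rrd files to find the real ones):

# rrd path and DS name ("data") below are assumptions, not the real layout
RRD=/var/lib/nagiosgraph/rrd/dbhost/iostat___await.rrd
rrdtool graph /var/www/html/iostat/await-day.png \
    --start -1d --title "dbhost await (ms)" --vertical-label ms \
    DEF:await=$RRD:data:AVERAGE \
    LINE2:await#FF0000:await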

Anyway, after staring at these graphs for a while I started to get very concerned about sustained periods of 100% utilization for some of the disk devices.  This implied the RAID10 volume was running flat out 100% busy.  However, I found that is not necessarily true and that is where it gets interesting.

It turns out that the most critical iostat metric for our OLTP DB load is await.  Await is the average time a request waits in the I/O queue plus the time it takes to service the request (svctm).  Await will always be greater than svctm.  At the end of the day, we want the database to be fast.  Since we are not storing BLOBs in the DB it is very rare to see high sustained data transfer rates, because the access patterns do not tend towards sequential I/O.  Instead the RAID10 volume is bound by the seek time of the disk spindles.  If I add more disks I would expect await to decrease; the benefit would not be linear, but it would still be worthwhile.

Here is the interesting part.  The periods of sustained 100% utilization in my graphs often had very low await times.  So while the disk was busy it was still performant.  I also noticed that await spikes seemed to occur more often when writing data (w/s) or when attempting very large numbers of reads (r/s).  From my pgfouine reports I know that my DB load is 85% SELECT or read.  I hope to see significant performance improvements from being able to migrate read-only application traffic onto the replicated databases.  This should show up in decreased I/O load on my critical RAID10 volumes.



We all have the same problems

“You are not a beautiful and unique snowflake” – Fight Club

I am going to give you a secret. This is how I solve 90% of all technical issues. It is amazingly simple and you can do it too.

Google

Get the EXACT TEXT for the error message or situation you find yourself in and type it into the search engine. Try some variations. Click through the search results and look beyond the top 10 on the first page. Read the PDFs. Check out the slides. Spend a little bit of time. It is amazing what you can learn. It seems to me that many of us are too impatient to pay our dues by actually doing a little research. The Internet is useful for activities other than LOLcats and myfacespace. Many well-intentioned souls have chronicled their tales of IT woe and hard-won solutions and are just waiting for you to discover their heady prose.

UNIQUE

I used to work with someone I will call McCoy. (Not his real name.) McCoy was trying to break into the exciting world of system administration (little did he know) and he came to me for some tips. The secret above is what I gave him. He was skeptical. But he did try it out and lo and behold it worked for him.

Now you can refer folks like McCoy to this handy site: Let me Google That For You


Percona Performance Conference

I attended the Percona Performance Conference today.  It was an interesting event since it was like a mini-conference within the MySQL 2009 conference which I had not registered for (so I felt a bit like a gate-crasher).  Anyway, the price was right (free!) and the content was good so I thought I would share some notes:

YouTube Disk Array Performance

My favorite talk was by Paul Tuckfield of YouTube on disk array performance.  And fittingly I missed the beginning since I needed to grab a sandwich.  Unfortunately, Paul packed a lot of dense text onto his slides and had a nasty habit of switching from slide to slide faster than anyone in the audience could read them.  Here are the good bits I managed to scribble down.

His recommendations for improving I/O:

  1. Don’t do I/O – tune queries instead.
  2. Tune DB cache
  3. Disable read-ahead for OLTP workloads
  4. Make sure you are really doing concurrent I/O vs. serial I/O

Cache only writes in the RAID controller.  Disable read caching on the controller.  Disable read caching on the disks too.  Don’t do readahead anywhere but in the DB.
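
The controller and on-disk cache settings are vendor-specific (MegaCli, hdparm and friends), but the OS-level readahead piece is easy to check with blockdev; sda here is just an example device:

blockdev --getra /dev/sda     # current readahead, in 512-byte sectors
blockdev --setra 0 /dev/sda   # turn OS-level readahead off (run as root)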

How to check concurrent vs. serial I/O:  Make sure the stripe/chunk size is much larger than the DB block size.  This makes it more likely that an I/O request can be serviced by a single spindle.  This provides the kind of scalability you want from a RAID10 volume, as additional I/O requests can hopefully be serviced by other spindles within the volume (i.e. you actually engage all of your drives, but instead of having every drive busy servicing a single request they can be busy servicing multiple requests in parallel, which is concurrent I/O).

Check iostat avgrq-sz to see if some layer is doing >1 block read.

The magic algorithm for testing to see if you are really doing concurrent I/O:

(r/s + w/s) * svctm < 1 with %util ~= 100%
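
Here is a rough way to compute that from iostat output, assuming the 12-column layout mentioned above (r/s=$4, w/s=$5, svctm=$11, %util=$12), assuming the formula intends svctm in seconds (iostat reports milliseconds, hence the divide by 1000), and using sda as an example device:

iostat -dx 1 2 | awk '/^sda / {c = ($4 + $5) * $11 / 1000; u = $12}
    END {printf "(r/s + w/s) * svctm = %.2f  %%util = %.1f\n", c, u}'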

The recommendation (#2 above) to tune the DB cache and make it larger than the other system caches conflicts somewhat with the Postgresql tuning advice I have found.   The Postgresql crowd seem to like the filesystem cache within the Linux kernel.  I’m not sure I can tell who is correct at this point but it is certainly leading me down into the weeds as I examine things like the different Red Hat Linux I/O schedulers available.  The default CFQ scheduler I am using now is certainly not the best.  Noop seemed to be the favorite in the room this afternoon.
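
For reference, checking and switching the scheduler on a running box is just a sysfs read/write (sda is an example device, and the change does not survive a reboot unless you also set the elevator= kernel parameter):

cat /sys/block/sda/queue/scheduler        # e.g. noop anticipatory deadline [cfq]
echo noop > /sys/block/sda/queue/scheduler
cat /sys/block/sda/queue/scheduler        # [noop] anticipatory deadline cfq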

Hopefully Paul will post his slides somewhere so I can get the rest of this.  I wish he had a longer talk since it didn’t seem he had enough time to cover all of his material.

Replication Lag

Peter Zaitsev talked about MySQL replication lag and it felt like a bit of a rehash of a presentation of his that I had already seen the slides for somewhere.  There was a bit of common sense in there about avoiding monster long-running deletes.  Instead of a single DELETE transaction use many DELETE LIMIT n statements (where n equals some number that you change depending on how much replica lag you can tolerate).  Wrap this into a loop.  As long as the statement deletes some rows, run it again.  When it does not delete anything you are done.  Nice and simple.
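
A minimal sketch of that loop in shell; the table name, WHERE clause, and batch size are made up, and credentials are assumed to come from ~/.my.cnf:

BATCH=10000
while :; do
    # ROW_COUNT() runs on the same connection as the DELETE, so it reports
    # how many rows the DELETE just removed
    ROWS=$(mysql -N -e "DELETE FROM purge_me WHERE created < NOW() - INTERVAL 90 DAY LIMIT $BATCH; SELECT ROW_COUNT();" mydb)
    [ "$ROWS" -le 0 ] && break
    sleep 1    # breathing room between batches so the slaves can keep up
done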

There was also some good advice to benchmark your slave replication capacity by pausing replication and seeing how long it takes for the slave to catch up.  I’m not sure exactly how to do this safely with Londiste yet but it seems like a bit of info that would be nice to have before everything hits the fan.  He suggests having 3x capacity (i.e. 1 hour of lag takes < 20 minutes to catch up).
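
For plain MySQL replication the test might look something like this sketch (with Londiste the commands would be different); stopping only the SQL thread lets the relay log keep filling while the slave stops applying changes:

mysql -e "STOP SLAVE SQL_THREAD;"
sleep 3600    # let an hour of master traffic queue up in the relay log
mysql -e "START SLAVE SQL_THREAD;"
# time how long Seconds_Behind_Master takes to drop back to 0
watch -n 10 'mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master'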

Interesting factoid: During the keynote he mentioned that craigslist fits their full text index into 20GB.   Must be nice.

QA Performance Engineering

George Bennet of Atlassian had the best T-shirt:  “Performance It works bitches.”  He also had a compelling argument for building performance testing into the CI environment.  We already use many of the tools he discussed at PowerReviews but I need to look into Chronos and antrun.  Chronos automates JMeter and antrun provides shell-like functionality to exec system commands.

SQL Session Consistency

Robert Hodges of Continuent presented their Java-based Tungsten SQL router, which provides session consistency.  The use case for session consistency occurs when a user updates their profile and needs to see the change reflected immediately, while other users don’t necessarily need to see it right away.  They do this by inspecting the SQL to see if a write is occurring and then checking the lag for each of the slaves to see if it is safe to route queries to them or not.  This sounds fantastic and from the presentation it sounds like it will work with Postgresql too.  I wonder how much it costs?

TokuTek & Covering Indexes

Dr. Bradley Kuszmaul of Tokutek presented some impressive performance boosts from the TokuDB storage engine.  Their cache-oblivious system relies upon covering indexes (read data directly from the index) and Fractal Tree Indexes (speed up inserts).  It seems like their “clustering index” is simply a materialized table sorted in a different order and clustered on disk.  The clustered covering index can be implemented in Postgresql and I am sure this would be blazing fast for SELECTs.  However, for write-intensive workloads the insert overhead might get a bit unwieldy so this could be a Very Bad Idea™.  Anyway, I thought this was a remarkably clear presentation of some pretty arcane stuff.  Bradley did a good job holding the audience’s interest and I enjoyed his O(n) humor.

SSDs

Everyone had the obligatory Oracle/Sun jokes, but it seemed that every speaker was talking about SSDs too.  These are certainly going to be a part of more system deployments soon.  Use of SSDs will change DB optimization strategies to make random I/O OK and perhaps even faster than sequential I/O.  The implications of this are just beginning.  Price points for these devices are still coming down but the figures quoted today were:

  • 15k RPM 146GB – $400
  • Intel X25M SSD 64GB – $800
  • Fusion-IO – >$10K

Still, if you can fit the DB in RAM it will be faster than an SSD…

Object Oriented CSS

Nicole Sullivan presented a talk that seemed to be aimed at front-end engineers, of which there were probably not too many in the audience.  Since I am not a front-end engineer either, some of her content was lost on me as it was on many others in the room, but I did get some good stuff:

My favorite quote, “O(n) natural to you but not designers.”  The core of her talk was calculating complexity in CSS using these metrics:

  1. http requests
  2. size of images
  3. size of CSS

Create a component library of re-usable code; her example structure broke a page down into counter blocks x background blocks x content objects.  Separate structure from skin.

Try not to use text as images. Use a web-native font instead.  Otherwise you will end up duplicating your button/tab images within your sprites and your page will end up bloated.


WordPress Backups

First post! I haven’t blogged for a long while and now that I have decided to get back into it the first order of business is getting a backup of wordpress and the MySQL database. I don’t want to spend a lot of time writing only to lose everything when the inevitable catastrophe occurs.  Since I spent some time setting these backups up tonight I thought I would post about it in case it is useful to someone else.  (And if something goes wrong, this post will help me remember what I was thinking when I set this up…)

Here is my backup strategy. I have three cron jobs. Two run on the web server and one runs on the mac.

Here is what the two crons on the web server are for:

  1. Dump data from MySQL
  2. Create a gzip’d tar of the web dir, wordpress & MySQL dump
  3. Move this file to an archive directory
  4. Keep 7 days of backups on my web server

An overview of the mac cronjob:

  1. Connects to the web server via sftp and downloads the backup archives.

This shell script does most of the work on the web server (replace the words in CAPS with appropriate values for your environment):

#!/bin/bash
# backup the wordpress mysql db dump (SQL statements)
# backup the wordpress blog & site
# Version 0.1 - 04/19/2009
cd ~
mkdir -p ~/backups/mysql
mkdir -p ~/backups/archives
suffix=$(date +%y%m%d)
mysqldump --opt -uUSER -pPASSWD -h HOSTNAME DBNAME > ~/backups/mysql/DBNAME.$suffix.sql
tar zcf ~/backups/archives/backup.$suffix.tgz backups/mysql/* bin/* DOMAIN.com/*
# remove the uncompressed dump; the tarball in ~/backups/archives is what we keep
rm -r ~/backups/mysql/

Here are the cron entries on the server:

MAILTO="your.email@somewhere.com"
0 8 * * * /home/USER/bin/backup.sh
0 9 * * * find /home/USER/backups/archives -type f -mtime +7 -exec rm -f {} \;

On the mac, I use an sftp batchfile to specify a series of commands to run on the server to download the file. I also configured password-less login using ssh. (I won’t cover that part here but there are several ways to set that up.)

Here are the contents of the sftp batchfile:

lcd DOMAIN/
cd backups/archives/
mget *
quit

And the cronjob entry on the mac:

MAILTO="your.email@somewhere.com"
0 13 * * 1-5 sftp -b ~/bin/sftp-batchfile USER@webhost.com

With this setup in place I always have a rolling 7 day window of backups on the production web server. These backups get copied onto the mac every weekday so I have a copy of the data on another computer. And since I backup the mac using Time Machine I get a third copy of my data on an external drive.

There is some room for improvement with these scripts. The cronjob on the mac will copy duplicate data since it grabs each daily backup file 5 times before it gets deleted by the find command on the server. Since the backups are very small at this point I am perfectly happy to copy redundant data over the network since it keeps the scripts simple and therefore easy to understand and debug. At some point I will probably need to revise them.
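
When I do revise them, something as simple as an rsync pull (an untested sketch) would only fetch the archives the mac does not already have:

rsync -av --ignore-existing USER@webhost.com:backups/archives/ ~/DOMAIN/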
