Multiple Sites Driven By One WordPress Installation Part II: Please Be Careful, You Really Might Mess Up ;)

This is just a little update on what I’ve been doing and what I’ve achieved so far. So, what we have: two different hosts, example.org and example-two.org, both on one webserver right next to each other, let’s say in the /home/username/www directory (we’ll just call it www from here on). The Apache vhosts are configured correctly, the document roots are set, and everything’s working fine. There were two WordPress installations, one in /www/example.org and the second one in /www/example-two.org, and that second copy is the one I got rid of. Here’s how.

So, first, create a new directory for the installation somewhere unreachable. I created mine in /www/wordpress_public/ and copied all the files and folders from /www/example.org to /www/wordpress_public. Remember to copy, not move, because you might mess up. Always keep your backup right by your side in case you destroy everything ;) Also, do not copy the .htaccess file: it has to stay in your example.org root folder, while the files in wordpress_public should be accessible by everyone.

Next, you have to create a symbolic link in the example.org directory that points to ../wordpress_public (I called mine “core”; I know, it’s a fuzzy name ;). On a Linux system, from inside the example.org directory, it goes like this:

ln -s ../wordpress_public ./core

So now /www/example.org/core is pretty much the same as /www/wordpress_public, get it? Note that if you are running suPHP you’ll get an “is not in document root of Vhost” error when trying to access example.org/core. To get rid of that error, edit the suPHP config file (probably located at /etc/suphp.conf), find the check_vhost_docroot option and set it to false, then restart the httpd daemon.
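For reference, assuming the stock config has the option written as check_vhost_docroot=true, the edit and the restart boil down to something like this:

sed -i 's/^check_vhost_docroot=true/check_vhost_docroot=false/' /etc/suphp.conf
service httpd restart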

Open up index.php in /www/example.org and replace this line:

require('./wp-blog-header.php');

With something like this:

require('./core/wp-blog-header.php');

Before proceeding, delete your .htaccess file in /www/example.org. Then open up your usual WordPress admin panel on example.org (yes, it’s still accessible), probably at example.org/wp-admin, browse to the Settings – General screen and change the WordPress address (URL) to http://example.org/core instead of http://example.org. No trailing slashes! Click save and bang! You’re logged out. Now browse to http://example.org/core/wp-admin and you should be able to log in using your old credentials. Go to your Permalinks settings and refresh them (that should create a new .htaccess file). Done! Almost..

Now, if you’re going to have more than one website (which is actually the whole point of me writing all this stuff, duh!) then you should keep a little order in your wp-content directory. I’m talking about the uploads, photos and other post attachments. The generic WordPress uploads path is quite easy to change in the settings: switch it to wp-content/uploads/example.org, then, using an FTP client or SSH, move all the files and folders from /www/wordpress_public/wp-content/uploads to /www/wordpress_public/wp-content/uploads/example.org. That should do the trick. To run a test, try creating a new post and uploading a photo or whatever, and see where it lands.
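Roughly, over SSH, that shuffle looks like this (the target folder is created outside of uploads first, so the glob below doesn’t swallow it):

cd /www/wordpress_public/wp-content
mkdir uploads-example.org
mv uploads/* uploads-example.org/
mv uploads-example.org uploads/example.org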

If there were already posts on your blog with images attached, their paths will not change automatically to the new uploads folder. You’d have to do that manually, unless you know some basic SQL ;) Run this in phpMyAdmin or whatever:

UPDATE wp_posts w SET `post_content` =
    REPLACE(`post_content`, 'http://example.org/wp-content/uploads',
    'http://example.org/core/wp-content/uploads/example.org')
    WHERE `post_content` LIKE '%http://example.org/wp-content/uploads%';

The WHERE condition is just there to make sure other posts aren’t messed up. Make sure everything’s working and proceed.

Now we’d like the example-two.org website to use the same WordPress installation located in /www/wordpress_public. First, copy the themes and plugins from the example-two.org directory to the wordpress_public folder (so that wordpress_public ends up containing all the plugins and themes that both websites use). Over SSH you’d type something like this:

cp -R www/example-two.org/wp-content/plugins/* www/wordpress_public/wp-content/plugins/
cp -R www/example-two.org/wp-content/themes/* www/wordpress_public/wp-content/themes/

Then you have to somehow tell wordpress_public which website it’s going to show, right? I wrote this little hack you can use in wp-config.php (the one located in /www/wordpress_public), and it can handle as many websites as you like:

$wp_multi = array(
	"example.org" => array(
		"DB_NAME" => "example_database",
		"DB_USER" => "examlpe_username",
		"DB_PASSWORD" => "example_password",
		"DB_HOST" => "example_host"
	),

	"example-two.org" => array(
		"DB_NAME" => "example2_database",
		"DB_USER" => "example2_username",
		"DB_PASSWORD" => "example2_password",
		"DB_HOST" => "example2_host"
	)
);
$server_name = $_SERVER["SERVER_NAME"];
$wp_settings = $wp_multi[$server_name];

// ** MySQL settings ** //
define('DB_NAME', $wp_settings["DB_NAME"]);
define('DB_USER', $wp_settings["DB_USER"]);
define('DB_PASSWORD', $wp_settings["DB_PASSWORD"]);
define('DB_HOST', $wp_settings["DB_HOST"]);

I hope you get my point and don’t delete the rest of the wp-config.php file, which I didn’t list here ;) Oh, and you do have to change example.org and example-two.org, unless you happen to own those domains … Now, save the file and check back on example.org to see if it still works (yes, it should).

Back to example-two.org. Create the symbolic link (core) in /www/example-two.org pointing to the wordpress_public folder, just like we did for example.org, remember? Make sure it’s accessible. Browse to your old example-two.org admin panel, switch the WordPress address (URL) to http://example-two.org/core (no trailing slash!), remove the .htaccess file from /www/example-two.org and edit index.php (it should read the same as the index.php of the example.org website). Browse to your new admin panel (http://example-two.org/core/wp-admin) and refresh your permalinks settings. Then browse to example-two.org and see if it works. If it does, you did everything right. Remember to set your upload directory to wp-content/uploads/example-two.org, and don’t forget to copy all your previous uploads to the new place.
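For the record, the symlink and .htaccess bit over SSH is just:

cd /www/example-two.org
ln -s ../wordpress_public ./core
rm .htaccess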

Now that everything’s working fine, you can go ahead and remove all the files and folders related to WordPress from your /www/example.org and /www/example-two.org directories (except, of course, the .htaccess files, the edited index.php files and the core symlinks, which are what keep the sites running).

Congrats! You now have two websites, driven by one single WordPress installation!

Here’s a list of rules to follow when using this method:

  • You update your WordPress core only once, on any one of the two (or more) hosts. If there’s a database structure change in the update, each website will ask about updating its database separately (when you visit its admin panel)
  • Same with the plugins. Update only once, and if you do encounter any issues after a certain plugin update, deactivate and reactivate it on each and every site
  • If you’re using the NextGEN Gallery plugin, change the settings to store your galleries in /wp-content/gallery/example.org and /wp-content/gallery/example-two.org for the two websites, not in one single folder, otherwise you’ll have a complete mess by the time you reach 5 or 10 websites
  • I’ve no idea what will happen with cache plugins. I’m still trying to figure out what WP Super Cache is up to, and it’s not my fault if you mess everything up, alright? ;)
  • All this might be pretty dangerous, so don’t experiment on super popular blogs unless you know what you’re doing

Guess that’s it. I’ll keep running the tests and stuff. I’m really worried about the cache plugin, so I’ll get back to you on that later this week. Also, for your information, kovshenin.com and blog.foller.me are now running this method, so if you notice anything strange, please let me know, okay?

Multiple Sites Driven By One WordPress Installation

This is early, experimental stuff. I’ve also filed this post under the “personal” category, because you wouldn’t want your clients to have too much access, especially if they share a single WordPress installation. Now, I know there’s the WordPress MU project, but I guess I can’t use it in this case, because WordPress MU assumes your URLs will be within the same domain (either subdomains or directories).

The reason I want multiple sites driven by one single WordPress installation is that I’m really tired of upgrading every time. Upgrading the WordPress core once in a while is okay, but when you’ve got a list of 30 plugins, it’s a pain in the neck upgrading two or three of them every day on every single blog and website you run. Automatic updates are not an option, as I want to take a look at what I’m updating to before actually doing it, at least once.

I won’t be doing this from scratch. I’ll start by merging this blog and the Foller.me blog into a single installation. Single doesn’t mean they share the same database; all they share is the WordPress core files, plugins and themes. Yes, this may be dangerous, because not all plugins store their data in the database (though I believe they should, at least when they’re capable of doing that). Now imagine NextGEN Gallery (or perhaps any other gallery plugin) being shared by two websites within one WordPress installation. The albums are stored in one folder called gallery, so there might be a conflict if two albums have the same name. There might be an option to store the files in a different directory, and hopefully that option is stored in the database; I’ll check on that later.

One more issue.. Remember I said personal projects? And assigned the post to the personal category? If you’ve got clients who are hosted on WordPress, and you’re doing some admin things for them but they DO have admin rights in their admin panels, then I wouldn’t go with this stuff, as it’ll be quite difficult to restrict them from changing each other’s shared themes and plugins. Get my point?

Okay, now the trick will be in the wp-config.php file. We’ll basically look at the incoming address using a regular expression or whatever. If it’s kovshenin.com, we connect to database 1; if it’s blog.foller.me, we connect to the second database, and so on. Pretty simple, huh? If you’re a total freak you might wanna try changing just the table prefix instead, thus having multiple websites, one WordPress installation, one database and a whole bunch of tables ;)

I’ve no idea if this will alter the overall performance, but keeping total visitors under ~20,000 per day should be just fine ;) I’ll get back to you with another post next week, hopefully with some tests and some results. Cheers!

Working With Amazon EC2: Tips & Tricks

It’s been a while now since I started hosting on Amazon Web Services, and I’d just like to point out some issues I had and quick ways of solving them. We’re going to talk about setting up a server that serves not only you, but your clients too, because $100/mo is quite expensive, isn’t it? So let’s begin and keep this as straightforward as possible. If you don’t understand something, it’s probably because you haven’t read the official EC2 docs and haven’t searched the forums. This is not a tutorial; it’s just a set of rules you may want to follow to get things right.

Once you start a new instance from an Amazon predefined AMI (Fedora Core 8, for example), I suggest you start building your structure right away. Attach an EBS volume to your instance (I mount mine at /ebs) and start creating your users with their home directories in /ebs/home/kovshenin rather than the regular /home/kovshenin. Also point your MySQL server to keep its database files in /ebs/mysql. There are plenty of tutorials out there on how to do that.
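A minimal sketch of that first bit, assuming the EBS volume was attached as /dev/sdf and is brand new (the device name and username here are just the ones from my setup):

mkfs.ext3 /dev/sdf                         # only on a fresh, empty volume!
mkdir /ebs && mount /dev/sdf /ebs
mkdir -p /ebs/home /ebs/mysql              # then point datadir in /etc/my.cnf to /ebs/mysql
useradd -d /ebs/home/kovshenin kovshenin   # the home directory lives on the EBS volume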

Now edit your httpd.conf, add your vhosts, point them to the right users’ directories, install an FTP server and make sure you chroot the users to their home directories. That way they won’t be able to mess with each other’s files and folders, peek at passwords, etc. You might want to change the root user’s home directory to / instead of /root in case you ever want to use FTP via your root user (which is quite dangerous).
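The vhost bit, as a rough sketch (the domain and paths are placeholders for whatever your clients’ sites are):

cat >> /etc/httpd/conf/httpd.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /ebs/home/kovshenin/www/example.org
</VirtualHost>
EOF
service httpd restart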

Now comes the fun part. The HTTP server runs under the apache user by default in FC8, and I recommend you don’t touch that. Damn, it took me quite some time to figure out how on earth the apache user could execute and write to files not belonging to apache. I messed up big time with the groups, adding apache to all my clients’ user groups, but thank god I found mod_suphp in the end. Install that one and make sure you use it; there’s no need to change the users’ umasks anymore.

Note: there’s a little issue with mod_suphp in Fedora as far as I know: it doesn’t let you use the suPHP_UserGroup directive in httpd.conf, yelling that it does not exist. Most of the man pages on the net say you have to use that directive, but I’m fine without it. It seems that suPHP can figure out which user to run as on its own; look closely at the config files, and also make sure you’re running php-cgi, not the CLI version. By the way, this is the part where WordPress stops asking for your FTP credentials on plugin/theme installs, updates and removals, and on core upgrades too. Speeds up the whole process ;)

I used the following code to test how mod_suphp works (or doesn’t):

<?php echo system("id"); ?>

Which should output the current user. Make sure everything works before going public, and do not set your min_uid and min_gid in suphp.conf lower than 50. It’s safer to chown -R files and folders than to let suPHP run your scripts as root or some other powerful user.
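For example, handing a client’s web root over to their own (non-privileged) user would look something like this:

chown -R kovshenin:kovshenin /ebs/home/kovshenin/www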

Backing up your EC2 and EBS

This is very important. Once you have everything set up and running, DO back up. Backing up the EBS volume is quite simple: just create a snapshot from the Amazon EC2 Management Console. Backing up the running AMI (instance) is a little bit more complex. You have to use the EC2 command line tools to bundle a new volume, upload it to an Amazon S3 bucket and register the AMI. There are plenty of tutorials on the net on how to do that; it shouldn’t take you more than half an hour to figure it out.
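A very rough outline of that bundling dance with the ec2-ami-tools (the key paths, account ID and bucket name are placeholders):

ec2-bundle-vol -d /mnt -k /path/to/pk.pem -c /path/to/cert.pem -u YOUR_AWS_ACCOUNT_ID
ec2-upload-bundle -b my-backup-bucket -m /mnt/image.manifest.xml -a ACCESS_KEY -s SECRET_KEY
ec2-register my-backup-bucket/image.manifest.xml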

Just make sure you have copies of all the major config files (httpd.conf, crontab, fstab, ..) backed up, on /ebs/config for instance. You might need them in the future (when you lose everything, haha ;) Restoring a backed-up AMI instance is simple: launch a new instance from the AMI you generated, attach the Amazon Elastic IP address to it and voila. Way too simple.

As for the EBS, there are quite a few things you should be able to do with it before continuing. Restoring a backed-up snapshot: create a volume from the snapshot, umount /ebs, detach the old volume, attach the new volume, mount /ebs. Cool? Be careful when you’re resizing your EBS. The xfs filesystem grows automatically as far as I know, but in my case I use the ext3 filesystem. So if you need to grow your ext3 EBS you’d go:

  1. Create a Snapshot
  2. Create a new EBS Volume from that Snapshot you created (say 10 GB if you were running 5 GB)
  3. Attach it to your Instance, say /dev/sdg
  4. Use the resize2fs command to grow the filesystem to 10 GB (see the command sketch right after this list)
  5. Mount it to /ebs2 or whatever
  6. Check to see if everything’s in place
  7. Unmount /ebs2, detach the new volume, then unmount /ebs and detach the old one
  8. Attach the 10GB volume to where /ebs was attached (/dev/sdf)
  9. Mount /ebs and start your services
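Here’s roughly what steps 4 and 5 look like from the shell, assuming the new volume came up as /dev/sdg and isn’t mounted yet:

e2fsck -f /dev/sdg            # resize2fs insists on a clean filesystem check first
resize2fs /dev/sdg            # with no size argument it grows ext3 to fill the whole volume
mkdir -p /ebs2 && mount /dev/sdg /ebs2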

There you go, back to work, server. By the way, when working with Amazon AWS, note that you should be working in the same region and availability zone where your AMI is (us/eu, 1a, 1c, …), otherwise some of the options (when attaching volumes, etc.) might just not come up. Beware of that.

Well, I guess those are pretty much all the basics. Don’t forget to read the Amazon S3 tutorials and API, pretty sweet stuff! Good luck.

Foller.me: MySQL Tweaking & Optimization

As I mentioned in the interview with @enked on his website Chidimar.com, I had serious problems with MySQL database optimization on the Foller.me project. The current public stable version (beta-1) uses the MyISAM engine and doesn’t hold much data: profiles, locations, and geo points for the followers’ geography.

In the new version (currently dev-1, and hopefully beta-2 in a few days) I changed most of the old tables and added new ones, using InnoDB this time. You see, it’s not very easy to scan through ~1,000,000 relations for the @mashable account ;) and I bumped into a ~10 second delay before the @mashable profile showed up on Foller.me. The slow query log showed that one of the simplest queries caused that slow-mo: it took 6 seconds to execute! The guys at Stack Overflow helped me optimize the query and the two tables I was having problems with, so I got down to ~2 seconds for that query. Neat!

Digging further, I managed to tune the MySQL server a little bit (caching, all sorts of buffers, etc.), which decreased the query execution time to 1 second. You should definitely take a look at MySQLTuner, a perl script that helps you tune pretty much the whole MySQL config. The peeps at Stack Overflow said it’s pretty okay for that kind of query to take 3 seconds on over 2 million rows, so I decided 1 second was final. Phew! :)

Now, think about the MySQL query cache. It doesn’t work in my situation, simply because I shoot UPDATEs and INSERTs at the relations table every five minutes or so (via a cron job), so there actually is room to perform even better. I thought of temporary tables, views and triggers (and even stored procedures). Nah.. Simply caching that query’s result would be good, right? I mean, if I cache the whole profile for an hour, why wouldn’t I cache the relations result set? Cache the query.. Aha, but then I thought slightly further: why not cache the whole page with memcached? I’ll keep you updated on the results.

Have You Tried the Amazon Web Services?

Amazon EC2, EBS, S3.. I’ve been looking for the perfect web hosting for over two years now. Is this it?

A few months ago I really liked MediaTemple because they offered pretty good US hosting starting from $20/mo, which was quite good for the Foller.me project, so at the starting point I chose them. Their service is cool, definitely worth the money, but… A few weeks passed, along with some major development on the service update, and I got stuck with MySQL and overall server performance. It’s pretty tough to scan through 2,000,000 relations for @cnnbrk and then geocode their locations, so I figured I needed to fine-tune MySQL and work out a more powerful caching system.

Yes, MediaTemple do offer dedicated MySQL grids for $50/mo, so that’s $70/mo overall. Not that bad, but thinking ahead, I’d also like to tweak my HTTP server, so that’d be a virtual dedicated plan for $50/mo, which makes $100/mo in total. Woah! And that’s just the start (around 500 megs of RAM, 20 GB of disk space and 1 TB of bandwidth).

Now, Amazon Web Services offers a 2 GB RAM, 1.6 GHz virtual machine for only $0.10/hr, which makes ~$70/mo. Put up an Elastic Block Store (EBS) volume of up to 1 TB and attach it to the instance for around $20/mo, and perhaps an Amazon S3 bucket for $10/mo. That makes about $100/mo in total. It’s not just the price though; I loved the way you’re in total control of whatever happens on your server. You tune it however you like, whenever you like. Save bundled volumes and start over at any time. One-click EBS volume backups, Elastic IP addresses and up to 20 instances running simultaneously (you can increase that limit by contacting Amazon). You also get to pick whatever OS you’d like to run (they’re called AMIs), and you can build your own bundled OS images and make them publicly available.

Oh, and one of the best things about Amazon EC2 (Elastic Compute Cloud) is that it’s so flexible! Switching servers has never been so easy. Start a new instance, attach an EBS volume, tune it up. Associate your old Elastic IP address with the new instance and voila! Go ahead and terminate your old instance, cause you’re riding your new mustang now!

I’m also sure you can set up multiple servers and load balancers, like clustered computing, y’know; the possibilities are endless! But I’m too far away from that at the moment, though I’m sure that whenever I have some free time, I’ll throw some experiments in that field ;) I already set up a Trac and SVN server a few days ago, and it works great!

Virtual Private Servers, Dedicated Servers, blah blah blah. Those are from the past. It’s Amazon Web Services. Go get your account right now ;)

Linux Dummy: Unscheduled Maintenance

If any of you tried to access the blog last night, you might have noticed that nothing was working. Sorry! I’ll say it straight: it’s completely my fault. Yesterday evening I decided to set up a cron job for automatic backups on my VPS: a full MySQL dump and a compressed archive of the www directory. I got a couple of error messages saying I didn’t have the right to access some files in the wp-content/uploads and wp-content/cache folders… I was frustrated!

Next… Never attempt to do this, okay? I logged in as root and changed the owner on all files and folders, including sub-folders, of the www directory, setting it to kovshenin:kovshenin. Voila, the backup worked! But within a couple of minutes my VPS ran out of memory and I couldn’t even log on via SSH to reboot the server!

Now that’s funny! I called my hosting provider this morning and asked them what happened. They said everything was fine and rebooted my server. I managed to log on via SSH, ran the “top” command and watched my memory usage grow. It hit 100% in 17 minutes, and bang! Disconnect. Two more calls to my provider didn’t help. They said the only thing they could do was completely reset my VPS to yesterday’s state.

So what really happened? I’m not sure, but I bet it was the WP Super Cache plugin for WordPress! You see, the cached files were created by the user the httpd (apache) daemon runs as, one called “webmaster”. The user “kovshenin” apparently didn’t have access to those files, and the change of owner spoiled the whole cache! Now the static files were owned by “kovshenin”, and “webmaster” (apache) didn’t have any rights to them. WP Super Cache must have ended up in an infinite loop trying to access them, with no luck of course, hence the memory leak.

After another reboot I managed to quickly get into the WordPress control panel, enable Maintenance Mode and disable all the other plugins, then enabled them one by one. Setting 0777 permissions on the cache directory and the two WP Super Cache config files solved the problem. The site was working fine again, and the newly generated cache files were owned by “webmaster”… The day was saved.

But what about the backups? Finally, I came to the conclusion that the “kovshenin” and “webmaster” users should be in the same groups, so I added “webmaster” to the “kovshenin” group, and “kovshenin” to the “webmaster” group. Everything’s great! Apart from the fact that my Google Analytics now shows 0 visitors for 21.05.2009. Jeez, what a dummy…

Internet Connection Sharing Via Wi-Fi On Fedora Linux

I was very tired yesterday evening, so I thought about tweeting from my iPhone while lying in bed. EDGE is pretty slow and expensive, and 3G hasn’t arrived in Moscow yet (military issues), so I decided to go with Wi-Fi. Good idea, huh? And it took me just a couple of hours to set the whole thing up. I’m running Fedora Linux 10, but you shouldn’t have much trouble on other distros.

Setting up a Wi-Fi hotspot at home using a simple Wi-Fi router is the easiest way to go, but that costs like a hundred bucks, so not worth it. Instead, I managed to set up an Ad-hoc (computer to computer) connection using the built-in Wi-Fi module in my laptop. If you’ve ever run a Windows OS (duh!) you might know that sharing an Internet connection over your LAN is quite simple there. My situation’s slightly different: a Vista box is already sharing a PPPoE connection over the LAN to two other laptops, one of which is my favourite Fedora 10 box.

Anyway, the wired network on Fedora is set up and works fine (the eth0 interface). I didn’t have to touch iptables for it, nor should you ;) Now, set up the wireless network. Make sure you choose an Ad-hoc (computer to computer) connection, enter a passkey and set up your IP settings: 192.168.1.1, 255.255.255.0, and use 192.168.1.1 as your route in case Fedora says it’s required (mine did). You might also need to enter your DNS information (you can grab it by running cat /etc/resolv.conf). This is all set up in the network manager (a Gnome utility AFAIK). Activate the connection and run ifconfig to make sure you’ve got a wireless connection available (you should see a wlan0 section).
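For reference, the two terminal checks mentioned above look like this:

cat /etc/resolv.conf      # your DNS servers, for the manual IP setup on the device
ifconfig wlan0            # should list the wireless interface once the connection is active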

At this point you might want to test your connection. Get some device to ping your computer and try to ping back. Remember that you’ll have to set the IP information on your device manually (unless you’ve got a DHCP server running on the wlan0 interface). Pings fine? Okay, good. Now all you’ve got left to do is run a simple iptables script. Go ahead and generate one: Easy Firewall Generator for iptables. Don’t forget to pick the Gateway/Firewall option. My settings were like this:

  • Internet interface: eth0 (this is my wired LAN)
  • Static Internet IP Address (my wired LAN address)
  • Internal Network Interface: wlan0 (the wireless network)
  • Internal Network IP Address: 192.168.1.1
  • Internal Network: 192.168.1.0/24
  • Internal Broadcast: 192.168.1.255

The generator will give out a shell script. Copy the contents and paste them into a file (/home/kovshenin/wifitables.sh). Then:

$ cd /home/kovshenin
$ chmod a+x wifitables.sh
$ ./wifitables.sh

All done! I can now tweet freely from the kitchen, balcony, bathroom and even the toilet! :) Now I’m thinking about setting up a VNC server, so I’ll never have to go back to my laptop again. Oh, and by the way, if you DO have a feeling that you’ve messed up iptables, just run the iptables-restore command and start over. If you’re sure you got everything right, use iptables-save, so you won’t need to re-run the script every time you boot the system. Good luck everyone, and happy tweeting!
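On Fedora, making those rules stick across reboots is something like this (a sketch, assuming the stock iptables init script is in use):

iptables-save > /etc/sysconfig/iptables    # dump the current rules where the init script expects them
chkconfig iptables on                      # and make sure the service loads them at boot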

P.S. I’m glad to have some more connections on Google FriendConnect. Welcome newcomers! Hope you enjoy your stay!

Gone Mobile: SSH Terminal on Your iPhone

I was on a bus this morning, stuck in a traffic jam, when I suddenly got a call from my colleague Alex. He said he had messed something up in our database on my virtual private server and didn’t know what to do, because everything had stopped working. Alex doesn’t know what SSH is or how to work with PuTTY, so I had to figure this out all by myself, and fast. Luckily I found TouchTerm, a free SSH client for the iPhone. Download available in the App Store.

It took me around thirty seconds to connect to my server, restore the database and make a couple of backups ;)

Benchmarking: Your Web Hosting is Not That Perfect

Today I realized that the VPS I’m renting for $20/mo is not as good as it seemed at first. Ever thought about high loads? Okay, this may sound like some DDoS hacking tool, but no! 100 requests, 10 of them concurrent, made my virtual private server think for ~1.5 minutes. Jeez!

It took me quite some time to find good software for running load tests against my webserver. Linux has some good utilities (linux.com/feature/143896), but I suggest you start with ApacheBench (ab), a command line utility bundled with the Apache distribution. It’s cross-platform, so you can use it on Windows too (I did). Anyway, here’s how you launch a test:

ab -n 100 -c 10 http://www.microsoft.com/

Why did I pick Microsoft? Well, if I get like 10,000 views tomorrow and everybody tries that command, that’d be a DDoS attack on Microsoft servers and I think they’re good enough to handle it. My server would just explode :)

Anyways, take a look at what the results may be like:

Benchmarking www.kovshenin.com (be patient).....done

Server Software:        Apache/2.2.8
Server Hostname:        www.kovshenin.com
Server Port:            80

Document Path:          /
Document Length:        84 bytes

Concurrency Level:      10
Time taken for tests:   90.984 seconds
Complete requests:      100
Failed requests:        1
   (Connect: 0, Receive: 0, Length: 1, Exceptions: 0)
Write errors:           0
Non-2xx responses:      100
Total transferred:      36564 bytes
HTML transferred:       8674 bytes
Requests per second:    1.10 [#/sec] (mean)
Time per request:       9098.438 [ms] (mean)
Time per request:       909.844 [ms] (mean, across all concurrent requests)
Transfer rate:          0.39 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   15   3.4     16      16
Processing:  2203 8866 8879.2   6188   48797
Waiting:     1969 8532 8664.9   5891   48750
Total:       2219 8880 8879.6   6203   48813

Percentage of the requests served within a certain time (ms)
  50%   6203
  66%   7281
  75%   8141
  80%   8313
  90%  17078
  95%  32266
  98%  43813
  99%  48813
 100%  48813 (longest request)

Ah.. and a failed request there, how sad… You might also want to keep an eye on your server’s load while benchmarking; the ‘top’ command is good for that.

Yup, although the Super Cache plugin is working, WordPress consumes a lot of memory… I also ran the test with 500 requests at a concurrency of 100 (ab -n 500 -c 100); that made my server go down for about 6 minutes, I had over 200 failed requests, and my blog kept showing a database connection error until the test had finished. Free memory dropped to 0! Scary? For more information about how ab works, read the Apache HTTP server benchmarking tool documentation at apache.org.

Three Linux Commands You Can't Live Without

Okay, we’re not going to talk about the shutdown, yum, etc. commands, though THEY are probably the ones nobody can live without. I’m talking about the web here, remember? And we all know that not everybody owns a VPS, a VDS or a dedicated server. Virtual hosting plans are quite cheap today, and most of them come with SSH access and basic privileges (although I DO highly recommend you get yourself a VPS; it’s not much more expensive than virtual hosting and I’m sure it’s worth it). These Linux commands are extremely helpful when making backups or moving from one server to another.

1. Create a compressed archive of the current directory:

tar -cvzf backup.tar.gz ./*

2. Create a compressed archive of a MySQL database dump (note there’s no space between -p and the password, otherwise mysqldump will prompt for one):

mysqldump -u username -ppassword -h host -P port databasename | gzip -c > mysql.sql.gz

3. Get directory contents from a remote FTP server:

wget -r "ftp://username:password@domain.com/directory/*"
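And for the other end of the move, once the archives land on the new server, restoring goes roughly like this (the names are just placeholders):

tar -xvzf backup.tar.gz
gunzip < mysql.sql.gz | mysql -u username -p databasename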

Hope that helped ;)