Driving the (ve) Server at Media Temple

It’s been a few weeks now since Media Temple launched their new (ve) Server, and I’ve been testing it out for the past few days. I’m actually hosting my blog there to get some real traffic load, and my first impressions are awesome!

I started off with the simplest 512 MB server and transferred a few websites to the new platform. I’m not too used to the Ubuntu Linux operating system, but I found my way around quickly. They do have other operating system options, but Ubuntu is the one they recommend. The first few tests showed that my load time decreased dramatically compared to my Amazon EC2 instance, which I was quite happy with. The next step was to run a few load tests using the Apache Benchmark tool (ab), and very soon I realized that I was getting quite a few failed requests, memory shortages and other strange behavior.

Media Temple’s (ve) servers are hosted on the Virtuozzo platform by Parallels, and after browsing their documentation I found out that there’s no swap space available for Virtuozzo containers. They do allow around 80% of burstable RAM (so you get around 1 GB when running 512 MB) but when that runs out, you’re left with nothing, not even some swap space on your hard drive. Some heavy load tests showed 30% request failure, which is quite horrible.

Media Temple doesn’t give much information on the new platform via the support system, and when memory shortage questions come up in their user forums, they advise you to upgrade, of course! Well, I wouldn’t like to upgrade just to run a couple of load tests, and what about Digg traffic? Should I predict that and upgrade before the spike? Then downgrade again to save some cash? Of course not.

A good option I found here is to tune Apache a little bit and reduce its resource limits. This will not increase performance, but it may guarantee a fail-safe workflow. We wouldn’t like our users to see a blank page (or a memory shortage error) when a spike hits; we’d rather have them wait a little longer and still get the requested page. The settings mostly depend on what software you’re running, which services, and the RAM available in your container.

You might want to reduce the KeepAliveTimeout in your Apache settings (mine’s now set to 5), and the rest is up to the mpm_prefork module. You’ll have to modify your settings and then run some tests until you’re comfortable with the results. Mine are the following:

<IfModule mpm_prefork_module>
    StartServers 3
    MinSpareServers 2
    MaxSpareServers 5
    MaxClients 10
    MaxRequestsPerChild 0
</IfModule>

This is on a 512 MB (~400 MB more burstable) container. An Apache Benchmark test showed that 100 concurrent (simultaneous) requests completed in 26 seconds with 0% failed requests, which makes 3.84 requests per second, and that’s quite good. For comparison, the same test run against the mashable.com website took 30 seconds at 3.32 requests per second, also with 0% failures. Also check out the other MPMs for Apache, which could give good results too.
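
For those who want to repeat the test, the ab call is a one-liner; something like this is what the numbers above came from (the hostname is a placeholder, and remember to restart Apache after changing the settings):

# 100 requests, all of them concurrent; watch the Failed requests and Requests per second lines
ab -n 100 -c 100 http://example.org/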

This definitely requires more fine-tuning, and if the page load time becomes too high then yes, there is a reason to upgrade, but don’t forget about other performance tricks such as CDNs, gzip (deflate) and so on. When you’re done with Apache, proceed to MySQL and PHP fine-tuning; there are some tricks there too that will give you some extra speed and performance.
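
As a quick sanity check for the gzip part, here’s a sketch (any of your URLs will do) that shows whether responses actually come back compressed:

# request a page with compression allowed and look for the Content-Encoding header
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://example.org/ | grep -i "content-encoding"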

I’ll keep playing around with this server, plus I’ve purchased a 1 GB (ve) this morning, so there are quite a lot of tests that have to be run. Anyways, if you’re looking for a good, high-performance VPS, then Media Temple is definitely a choice to consider. For only $30/mo you can get quite a good looking virtual server, and it’s more interesting than their old dedicated-virtual servers (although still in beta). Cheers, and don’t forget to retweet this post ;)

Cloud Tips: Amazon EC2 & Rejected Email

A few weeks ago I set up my email in /etc/aliases for the root user (and the others) and started to actually read my root email from time to time (I wonder why I never did that before). Anyways, what bugged me straight away is that I had some rejected emails that were not being delivered, yielding the following errors (I removed some numbers):

Deferred: 450 4.7.1 : Helo command rejected: Host not found
421 invalid sender domain 'domU.compute-1.internal' (misconfigured dns?)

And some others that looked alike. Tonnes of them, every four hours! Emails to other addresses were delivered fine though. I had WordPress notification messages delivered to my email and never lost a message. I also tried sending out a few using the mail command via SSH, and everything was okay. For a second I thought that maybe those addresses were simply invalid, but wouldn’t the server reply with an “Invalid recipient” error? Probably.. Here’s what I got from the Amazon Web Services support forums:

It seems that some remote mail servers complain about your server
identifying itself in the SMTP dialogue as domU.compute-1.internal,
while its external name is ec2.compute-1.amazonaws.com

Makes total sense. Perhaps some servers do try to see where the e-mail is coming from, and of course the .internal domain is unresolvable (thus the “misconfigured dns” error). I had to identify myself with an external, resolvable name, so I copied the external name into the /etc/mailname file and, hmm.. Well, it’s been a week now and I haven’t received any more delivery errors, so that must have worked.
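
For those wondering, one way to do that copy is via the instance metadata service; treat this as a sketch, since the restart command depends on which MTA you run:

# fetch the instance's public hostname and use it as the mail name
curl -s http://169.254.169.254/latest/meta-data/public-hostname > /etc/mailname
# restart the MTA so it picks the name up (sendmail here; use postfix/exim accordingly)
service sendmail restart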

Cloud Tips: Automatic Backups to S3

In a previous post about backing up EC2 MySQL to an Amazon S3 bucket we covered dumping MySQL datasets, compressing them and uploading them to S3. After a few weeks of test-driving the shell script, I came up with a new version that checks, fixes and optimizes all tables before generating the dump. This is pretty important, as mysqldump will fail on whatever step causes an error (data corruption, crashed tables, etc), and thus the archive you upload to S3 could end up more or less corrupt. Here’s the script:

#!/bin/sh
filename=mysql.`date +%Y-%m-%d`.sql.gz
echo Checking, Fixing and Optimizing all tables
# note: no space after -p; with a space, the tools prompt for a password instead of using it
mysqlcheck -u username -ppassword --auto-repair --check --optimize --all-databases
echo Generating MySQL Dump: ${filename}
mysqldump -u username -ppassword --all-databases | gzip -c9 > /tmp/${filename}
echo Uploading ${filename} to S3 bucket
php /ebs/data/s3-php/upload.php ${filename}
echo Removing local ${filename}
rm -f /tmp/${filename}
echo Complete

There you go. If you remember my previous example, I stored the temporary backup file on Amazon EBS (Elastic Block Storage), which isn't really appropriate. Amazon charges for EBS storage, reads and writes, so why the extra cost? Dump everything into your temp folder on EC2 and remove it afterwards. Don't forget to make changes in your upload.php script (the $local_dir setting). Also, just as a personal note, and for people who didn't figure out how to upload archives with data to S3, here's another version of the script which takes your public_html (www, htdocs, etc) directory, archives and compresses it, and uploads it to an Amazon S3 bucket:

filename=data.`date +%Y-%m-%d`.tar.gz
echo Collecting data
tar -czf /tmp/${filename} /ebs/home/yourusername/www
echo Uploading ${filename} to S3 bucket
php /ebs/data/s3-php/upload.php ${filename}
echo Removing local ${filename}
rm -f /tmp/${filename}
echo Complete

Oh, and have you noticed? Amazon has changed the design a little bit, and woah! They finally show the Secret Access Key without a trailing space character! Congrats Amazon, it took you only a few months.

FTP Breaking on FEAT (vsftpd on Fedora Core 8)

It’s been a while since I connected to my Amazon EC2 instance running Fedora Core 8 via FTP, and for no reason I tried connecting there today and badaboom! Strange though, it worked fine about a month ago; I was able to upload and download files, but this time I got a little crash. On one version of the FileZilla FTP client I received a simple “Unable to connect” error. On a newer version I noticed that the FEAT (feature list, or whatever) command was breaking the connection, so I googled that.

People say that the server is broken, but they don’t mention any tips on how to fix it. I logged on via SSH and restarted the vsftpd daemon, with no luck. Then I tried connecting to localhost via FTP (in SSH) using the ftp command. I got a connection, the LS and CWD commands worked just fine and I was able to see the files. So I sent a FEAT command and got an “invalid command” error. Humm?

Somebody on the Ubuntu forums mentioned that it’s an encoding issue: the client is unable to handle UTF-8 while the server runs only UTF-8. Does that make any sense? Guess not. Well, before you go digging into your encoding settings and messing up your configuration files, or shutting down the server and starting a new instance (I’m on Amazon EC2), you might wanna try this fix.

I have no idea how it got there, but in my /etc/vsftpd.conf I found a new strange line saying:

connect_from_port_20=YES

For one second there I thought that it was fair enough. But hey, wasn’t FTP supposed to work on port 21? Right. (Port 20 is only used for active-mode data connections; the control connection runs on port 21.) Comment out that line, restart your vsftpd daemon (service vsftpd restart) and voila! Worked for me.
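
If you want the whole fix as a one-liner (just a scripted version of what I did by hand):

# comment out the offending line and restart vsftpd
sed -i 's/^connect_from_port_20=YES/#connect_from_port_20=YES/' /etc/vsftpd.conf
service vsftpd restart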

I still think it’s strange though.. Ghosts? ;)

Linux Shell: Host to IP in Bulk

I’m very busy this week setting up a new router here at the office, but I do find some interesting stuff that might be somehow useful to you. Like this shell script I wrote a few hours ago, which reads a text file with a different host on each line (google.com, yahoo.com, etc) and returns a new file with their resolved IP addresses, one per line. I’m not sure where you’d want to use this, but I’ve set up a server with two internet connections.

One is pretty fast but expensive, the second one is slower but free. I made the most useful websites (google.com, etc) go fast and expensive (eth0), and less useful ones (facebook.com, etc) go slower and free of charge (wimax0). I store the useful hosts in a file called special and I use this script (which I called parsehosts.sh) to resolve it to special.resolved:

#!/bin/sh
filename=$1;
echo Parsing ${filename}
output_filename="${filename}.resolved";
# start with an empty output file (a plain `echo >` would leave a blank first line)
> ${output_filename}
while read -r LINE ; do
        # keep only the "has address" lines and strip everything but the IP
        host $LINE | grep "has address" | sed -e "s/.*has address //" >> ${output_filename} ;
done < ${filename}
echo Done

To invoke the script use:

[root@localhost ~] ./parsehosts.sh special

This generates special.resolved, which I then use in my iptables script to route certain destinations. Note that you should chmod +x parsehosts.sh before trying to execute it.
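
My iptables rules are out of scope for this post, but just to illustrate the idea, the resolved list can be fed into per-host routes like this (the gateway address below is made up, and a mark-based iptables setup would look quite different):

# push the "special" hosts out through the fast link (eth0)
FAST_GW=10.0.0.1   # hypothetical gateway of the fast connection
while read -r ip ; do
        ip route add ${ip}/32 via ${FAST_GW} dev eth0
done < special.resolved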

Cloud Tips: Backing Up MySQL on Amazon EC2 to S3

Now that I’m all set up in the Amazon cloud I’m starting to think about backups. Elastic Block Storage (EBS) on Amazon is great and the Snapshots (backups) can be generated with a few clicks from the Management Console, but I’d still like to set up my own backup scripts, and here’s why:

  • Amazon EBS snapshots are cool, but there might be a situation where I’d like to restore only one file, or a few rows from a MySQL dump, or whatever. EBS works in bundled mode, meaning you store an image of the hard drive you’re working with, no matter how many files there are. It might be painful setting up an extra hard drive just for backups and working with its snapshots
  • I don’t believe there’s a feature to schedule or automate EBS snapshots
  • I’d like a simple way to download backed up data onto my local PC
  • Some of my clients like to get their weekly backups by FTP

I don’t really care about my PHP files, images and other stuff that’s located on my EBS, cause I’m sure I have local copies of all that. The most important part in all my projects is the data stored in my MySQL database, thus I’m going to show you how to set up a simple shell script to generate daily MySQL backups and a simple PHP script to upload them to a secure S3 bucket.

Take a look at this simple shell script:

#!/bin/sh
# dump all databases, compress, upload to S3 and clean up
filename=mysql.`date +%d.%m.%Y`.sql.gz
echo Generating MySQL Dump: ${filename}
mysqldump -uyour_username -pyour_password --all-databases | gzip -c9 > /ebs/backups/${filename}
echo Uploading ${filename} to S3 bucket
php /ebs/data/s3-php/upload.php ${filename}
echo Removing local ${filename}
rm -f /ebs/backups/${filename}
echo Complete

I’m assuming you’re familiar with shell scripting, thus there’s no need to explain the first few lines. Don’t forget to type in your own username and password for MySQL access. Also, I used the path /ebs/backups/ for my daily MySQL backups; you choose your own.

There are a few scripts located in /ebs/data/s3-php/ including upload.php, which takes a single parameter – filename (don’t put your path there, let’s keep things simple). The script simply reads the given file and uploads it to a preset path into your preset S3 bucket. I’m working with the S3-php5-curl class by Donovan Schonknecht. It uses the Amazon S3 REST API to upload files and it’s just one php file called S3.php, which in my case is located in /ebs/data/s3-php right next to my upload.php script.

Before going on to upload.php take a look at this file which I called S3auth.php:

<?php
// S3auth.php: credentials and paths used by upload.php
$access_id = 'your_access_id';
$secret_key = 'your_secret_key';
$bucket_name = 'bucket';
$local_dir = '/ebs/backups/';
$remote_dir = 'backups/';

This is the settings file which I use in upload.php. These particular settings assume your backups will be located in /ebs/backups and will be backed up to an Amazon S3 bucket called ‘bucket’, in the ‘backups’ directory within that bucket. Using single quotes is quite important, especially with the secret_key, as Amazon secret keys often include special characters. Here’s the upload.php script:

<?php
require("S3.php");
require("S3auth.php");

// connect, make sure the bucket exists, then upload the file passed as the first argument
$s3 = new S3($access_id, $secret_key);
$s3->putBucket($bucket_name, S3::ACL_PRIVATE);
$s3->putObjectFile($local_dir.$argv[1], $bucket_name, $remote_dir.$argv[1], S3::ACL_PRIVATE);

All simple here, following the S3.php class documentation. Notice the $argv[1]; that’s the first argument passed to the upload.php script, i.e. the filename of the backup file.

That’s about everything. Try a few test runs with the shell script (remember to chmod +x it, otherwise you won’t be able to execute it) and finally set up a cron job running the script daily. Mine works like a charm!
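
For the record, the cron part is a single line; the script path below is just an example, point it at wherever you saved yours:

# edit root's crontab
crontab -e
# and add something like this to run the backup every night at 3:00
# 0 3 * * * /ebs/data/mysql-backup.sh >> /var/log/mysql-backup.log 2>&1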

Multiple Sites Driven By One WordPress Installation Part II: Please Be Careful, You Really Might Mess Up ;)

This is just a little update on what I’ve been doing and what I have achieved so far. So, what we have: two different hosts, example.org and example-two.org, both on one webserver right next to each other, let’s say in the /home/username/www directory (we’ll just call it www further on). The Apache vhosts are configured correctly, the document roots are set, and everything’s working fine. There are two WordPress installations, one in /www/example.org and the second one in /www/example-two.org, which I got rid of. Here’s how.

So, first, create a new directory for the installation somewhere unreachable. I created mine in /www/wordpress_public/ and copied all the files and folders from /www/example.org to /www/wordpress_public. Remember to copy, not move, cause you might mess up. Always keep your backup right by your side in case you destroy everything ;) Also, do not copy the .htaccess file; it has to stay in your example.org root folder, while the files in wordpress_public should be accessible by everyone.
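
In shell terms that first step is roughly this (paths as in the example above):

mkdir /www/wordpress_public
# copy, don't move, and keep permissions; the trailing dot makes cp include hidden files
cp -a /www/example.org/. /www/wordpress_public/
# the .htaccess belongs to example.org only
rm -f /www/wordpress_public/.htaccess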

Next, you have to create a symbolic link in the example.org directory that links to ../wordpress_public (I called mine “core”, I think it’s fuzzy ;) On a Linux system, from inside the example.org directory, it goes like this:

ln -s ../wordpress_public ./core

So now, /www/example.org/core is pretty much the same as /www/wordpress_public, get it? Note that if you are running suPHP you’ll get an “is not in document root of Vhost” error when trying to access example.org/core. To get rid of that error, edit the suPHP config file (probably located in /etc/suphp.conf), find the check_vhost_docroot parameter and set it to false. Restart the httpd daemon.
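
That is, something along these lines (the config location may differ on your distro):

# in /etc/suphp.conf, flip the docroot check off:
#   check_vhost_docroot=false
# then restart Apache
service httpd restart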

Open up index.php in /www/example.org and replace this line:

require('./wp-blog-header.php');

With something like this:

require('./core/wp-blog-header.php');

Before proceeding, delete your .htaccess file in /www/example.org. Then open up your usual WordPress admin panel on example.org (yes, it’s still accessible), probably at example.org/wp-admin, browse to your Settings – General screen, and change the WordPress address (URL) to http://example.org/core instead of http://example.org. No trailing slashes! Click save and bang! You’re logged out. Now browse to http://example.org/core/wp-admin and you should be able to log in using your old credentials. Browse to your Permalinks settings and refresh them (that should create a new .htaccess file). Done! Almost..

Now, if you’re going to have more than one website (which is actually the whole point of me writing all this stuff, duh!) then you should keep a little order in your wp-content directory. I’m talking about the uploads, photos & other post attachments. The generic WordPress upload path is quite easy to change in the settings: switch it to wp-content/uploads/example.org, then, using an FTP client or SSH, move all the files and folders in /www/wordpress_public/wp-content/uploads to /www/wordpress_public/wp-content/uploads/example.org. That should do the trick. To run a test, try creating a new post, uploading a photo or whatever, and see where it places it.
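
Over SSH the move itself is something like this (adjust the paths to your setup):

# park the existing uploads, then bring them back as a per-site folder
mv /www/wordpress_public/wp-content/uploads /tmp/uploads-example.org
mkdir /www/wordpress_public/wp-content/uploads
mv /tmp/uploads-example.org /www/wordpress_public/wp-content/uploads/example.org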

If there were already posts on your blog with images attached, their paths will not change automatically to your new uploads folder; you’d have to do that manually if you don’t know some basic SQL ;) Run this in phpMyAdmin or whatever:

UPDATE wp_posts w SET `post_content` =
    REPLACE(`post_content`, 'http://example.org/wp-content/uploads',
    'http://example.org/core/wp-content/uploads/example.org')
    WHERE `post_content` LIKE '%http://example.org/wp-content/uploads%';

The WHERE condition is just to make sure other posts aren’t messed up.. Make sure everything’s working and proceed.

Now we’d like the example-two.org website to use the same WordPress installation located in /www/wordpress_public. First, copy themes and plugins from the example-two.org directory to the wordpress_public folder (so that wordpress_public will contain all the plugins and themes that both websites use). In SSH you’d type something like this:

cp -R www/example-two.org/wp-content/plugins/* www/wordpress_public/wp-content/plugins/
cp -R www/example-two.org/wp-content/themes/* www/wordpress_public/wp-content/themes/

Then, you have to somehow distinguish in wordpress_public which website we’re gonna show, right? I wrote this little hack you could use for wp-config.php (the one located in /www/wordpress_public) which could handle as many websites as you like:

$wp_multi = array(
	"example.org" => array(
		"DB_NAME" => "example_database",
		"DB_USER" => "examlpe_username",
		"DB_PASSWORD" => "example_password",
		"DB_HOST" => "example_host"
	),

	"example-two.org" => array(
		"DB_NAME" => "example2_database",
		"DB_USER" => "example2_username",
		"DB_PASSWORD" => "example2_password",
		"DB_HOST" => "example2_host"
	)
);
$server_name = $_SERVER["SERVER_NAME"];
$wp_settings = $wp_multi[$server_name];

// ** MySQL settings ** //
define('DB_NAME', $wp_settings["DB_NAME"]);
define('DB_USER', $wp_settings["DB_USER"]);
define('DB_PASSWORD', $wp_settings["DB_PASSWORD"]);
define('DB_HOST', $wp_settings["DB_HOST"]);

I hope you get my point, and don’t delete the rest of the wp-config.php file which I didn’t list here ;) Oh, and you do have to change example.org and example-two.org, unless you own those domains … Now save the file and check back on example.org to see if it still works (yes, it should ..)

Back to example-two.org. Create the symbolic link (core) in /www/example-two.org pointing to the wordpress_public folder, just like we did for example.org, remember? Make sure it’s accessible. Browse to your old example-two.org admin panel, switch the WordPress address (URL) to http://example-two.org/core (no trailing slash!), remove your .htaccess file from /www/example-two.org and edit index.php (it should read the same as the index.php of the example.org website). Browse to your new admin panel (http://example-two.org/core/wp-admin) and refresh your permalinks settings. Browse to example-two.org and see if it works. If it does, then you did everything right, and remember to set your upload directory to wp-content/uploads/example-two.org and don’t forget to copy all your previous uploads to the new place.

Now that everything’s working fine, you can go ahead and remove all the files and folders that are somehow related to WordPress from your /www/example.org and /www/example-two.org directories (except, of course, .htaccess).

Congrats! You now have two websites, driven by one single WordPress installation!

Here’s a list of rules to follow when using this method:

  • You update your WordPress core only once on any of the two (or more) hosts. If there’s a database structure change in the update, then each website will ask about updating its database separately (when in the admin panel)
  • Same with the plugins. Update only once, and if you do encounter any issues after a certain plugin update, deactivate and reactivate it on each and every site
  • If you’re using the NextGEN Gallery plugin, change the settings to store your galleries in /wp-content/gallery/example.org and /wp-content/gallery/example-two.org for the two websites, not in one single folder, otherwise you’ll have a complete mess when you reach 5 or 10 websites
  • I’ve no idea what will happen with cache plugins, still trying to figure out what WP-Super Cache is up to, and it’s not my fault if you mess everything up, alright? ;)
  • All this might be pretty dangerous, so you shouldn’t experiment on super popular blogs unless you know what you’re doing

Guess that’s it. I’ll keep running the tests and stuff. I’m really worried about the cache plugin, so I’ll get back to you on that later this week. Also, for your information, kovshenin.com and blog.foller.me are now running this method, so if you notice anything strange, please let me know, okay?

Working With Amazon EC2: Tips & Tricks

It’s been a while now since I’ve been hosting on Amazon Web Services and I’d just like to point out some issues I had and quick ways of solving them. We’re gonna talk about setting up a server that would serve not only you, but your clients too, cause $100/mo is quite expensive, isn’t it? So let’s begin and keep this as straightforward as possible. If you don’t understand something, it’s probably because you haven’t read the official EC2 docs and haven’t searched the forums. This is not a tutorial, it’s just a set of rules you may want to follow to make things right.

Once you start a new instance from an Amazon predefined AMI (Fedora Core 8 for example), I suggest you start building your structure right away. Attach an EBS volume to your instance (I mount it at /ebs) and start creating your users with their home directories in /ebs/home/kovshenin rather than the regular /home/kovshenin. Also point your MySQL server to keep its database files in /ebs/mysql. There are plenty of tutorials out there on how to do that.

Now, edit your httpd.conf, add your vhosts, point them to the right users’ dirs, install an FTP server and make sure you chroot the users to their home directories. That way they won’t be able to mess with each other’s files and folders, peek at passwords, etc. You might want to change the root user’s home directory to / instead of /root in case you’ll want to use FTP via your root user (which is quite dangerous).
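
If you go with vsftpd (that’s what runs on my FC8 instance), the chroot part is a single setting; a sketch, and the config path differs between distros:

# in /etc/vsftpd.conf (or /etc/vsftpd/vsftpd.conf on Fedora):
#   chroot_local_user=YES
# then restart the daemon
service vsftpd restart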

Now comes the fun part. The HTTP server runs under the apache user by default in FC8, and I recommend you don’t touch this. Damn, it took me quite some time to figure out how the heck the apache user can execute and write to files not belonging to apache. I messed up big time with the groups, adding apache to all my clients’ users’ groups, but thank god I found mod_suphp in the end. Install that one and make sure you use it, and there’s no need to change the users’ umasks anymore.

Note: there’s a little issue with mod_suphp in Fedora as far as I know, which doesn’t let you use the suPHP_UserGroup directive in httpd.conf, yelling that it does not exist. Most of the man pages on the net say you have to use that directive, but I’m good without it. It seems that suPHP can figure out which user to run as on its own; look closely at the config files, and also make sure you’re running php-cgi, not the CLI version. By the way, this is the part where WordPress stops asking you for your FTP credentials on plugin/theme updates, installs, removals and core upgrades too. Speeds up the whole process ;)

I used the following code to test how mod_suphp works (or doesn’t):

<?php echo system("id"); ?>

That should output the current user. Make sure everything works before going public, and do not set min_uid and min_gid in suPHP lower than 50. It’s safer to chown -R files and folders than to let suPHP run your scripts as root or some other powerful user.

Backing up your EC2 and EBS

This is very important. Once you have everything set up and running, DO backup. Backing up the EBS is quite simple: just create a snapshot from the Amazon EC2 Management Console. Backing up the running AMI (instance) is a little bit more complex. You have to use the EC2 command line tools to bundle a new volume, upload it to an Amazon S3 bucket and register the AMI. There are plenty of tutorials on the net on how to do that; it shouldn’t take you more than half an hour to figure out.
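
From memory, the bundling dance looks roughly like this (key paths, bucket name and account ID are placeholders, and flags may differ between tool versions, so check the built-in help):

# bundle the running volume into an image on the ephemeral storage
ec2-bundle-vol -d /mnt -k pk.pem -c cert.pem -u 1234-5678-9012
# upload the bundle to an S3 bucket
ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml -a $ACCESS_KEY -s $SECRET_KEY
# register the manifest as a new AMI
ec2-register my-ami-bucket/image.manifest.xml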

Just make sure you have copies of all the major config files (httpd.conf, crontab, fstab, ..) backed up on your /ebs/config, for instance. You might need them in the future (when you lose everything, haha ;) Restoring a backed up AMI instance is simple: launch a new instance using the AMI you generated, attach the Amazon Elastic IP address to it and voila. Way too simple.
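
Going back to those config files, something as simple as this covers it (file locations are Fedora defaults; adjust to your setup):

mkdir -p /ebs/config
cp /etc/httpd/conf/httpd.conf /etc/crontab /etc/fstab /ebs/config/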

About the EBS, there are quite a few things you should be able to do with it before continuing. Restoring a backed up snapshot: create a volume from the snapshot, umount /ebs, detach the old volume, attach the new volume, mount /ebs. Cool? Be careful when you’re resizing your EBS. The xfs filesystem can be grown on the fly as far as I know, but in my case I use the ext3 filesystem. So if you need to grow your ext3 EBS, here’s how it goes (a command-line sketch follows the list):

  1. Create a Snapshot
  2. Create a new EBS Volume from that Snapshot you created (say 10 GB if you were running 5 GB)
  3. Attach it to your Instance, say /dev/sdg
  4. Use the resize2fs command to grow the filesystem to fill the 10 GB volume
  5. Mount it to /ebs2 or whatever
  6. Check to see if everything’s in place
  7. Unmount /ebs2 and detach it, then unmount /ebs and detach it too
  8. Attach the 10GB volume to where /ebs was attached (/dev/sdf)
  9. Mount /ebs and start your services
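
Command-line wise, steps 4 through 9 boil down to something like this (device names and mount points follow the example above; the snapshot and volume juggling happens in the Management Console):

# the bigger volume created from the snapshot is attached as /dev/sdg
e2fsck -f /dev/sdg                       # ext3 wants a clean check before resizing
resize2fs /dev/sdg                       # grow the filesystem to fill the new volume
mkdir -p /ebs2 && mount /dev/sdg /ebs2   # mount and eyeball the data
umount /ebs2 && umount /ebs              # then unmount both
# detach both volumes in the console and re-attach the new one as /dev/sdf
mount /dev/sdf /ebs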

There you go, back to work, server. By the way, when working with Amazon AWS, note that you should be working in the same region and availability zone as your AMI (us, eu, east, 1c, …), otherwise some of the options (when attaching volumes, etc) might just not come up. Beware of that.

Well, I guess those are pretty much all the basics. Don’t forget to read the Amazon S3 tutorials and API, pretty sweet stuff! Good luck.

Foller.me: MySQL Tweaking & Optimization

As I mentioned in the interview with @enked on his website Chidimar.com, I had serious problems with MySQL database optimization on the Foller.me project. The current public stable version (beta-1) is using the MyISAM engine and it’s not holding much data: profiles, locations, and geo points for the followers’ geography.

In the new version (currently dev-1 and hopefully beta-2 in a few days) I changed most of the old tables and added new ones, using InnoDB this time. You see, it’s not very easy to scan through ~1,000,000 relations for the @mashable account ;) and I bumped into a ~10 second delay before the @mashable profile showed up on Foller.me. The slow query log showed that one of the simplest queries caused that slow-mo: it took 6 seconds to execute! The guys at Stack Overflow helped me optimize the query and the two tables I was having problems with, so I got it down to ~2 seconds for that query. Neat!

Digging further, I managed to tune the MySQL server up a little bit (caching, all sorts of buffers, etc – you should definitely take a look at MySQLTuner, a Perl script that helps you tune pretty much the whole MySQL config), which decreased the query execution time to 1 second. The peeps at Stack Overflow said it’s pretty okay for that kind of query to take 3 seconds on over 2 million rows, so I decided that 1 second is final. Phew! :)
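
If you haven’t tried MySQLTuner, it’s literally a single Perl script you run on the database server once downloaded; the second line just peeks at the query cache variables it comments on:

perl mysqltuner.pl
mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache%';"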

Now, think about the MySQL query cache. It doesn’t work in my situation, simply because I shoot UPDATEs and INSERTs at the relations table every five minutes or so (via a cron job), so there actually is a way to perform even better. I thought of temporary tables, views and triggers (and even stored procedures). Nah.. Simply caching that query would be good, right? I mean, if I cache the whole profile for an hour, why wouldn’t I cache the relations result set? Cache the query.. Aha, but then I thought slightly further: why not cache the whole page with memcached? I’ll keep you updated with the results.

Linux Dummy: Unscheduled Maintenance

If any of you tried to access the blog last night, you might have noticed that nothing was working. Sorry! I’ll say it straight: it’s completely my fault. Yesterday evening I decided to set up a cron job for automatic backups on my VPS – a full MySQL dump and a compressed archive of the www directory. I got a couple of error messages stating that I didn’t have the right to access some files in the wp-content/uploads and wp-content/cache folders… I was frustrated!

Next… Never attempt to do this, okay? I logged in as root and changed the owner of all files and folders, including sub-folders, of the www directory, setting it to kovshenin:kovshenin. Voila, the backup worked! Within a couple of minutes my VPS ran out of memory and I couldn’t even log on via SSH to reboot the server!

Now that’s funny! I called my hosting provider this morning and asked them what had happened. They said everything was fine and rebooted my server. I managed to log on via SSH, ran the “top” command, and watched my memory usage grow! 100% was reached in 17 minutes, and bang! Disconnect. Two more calls to my provider didn’t help. They said that the only thing they could do was reset my VPS to yesterday’s state completely.

So what really happened? I’m not sure, but I bet it’s the WP-Super Cache plugin for WordPress! You see, the cached files were created by the user the httpd (apache) daemon runs as – one called “webmaster”. The user “kovshenin” apparently didn’t have access to those files, and the change-owner command spoiled the whole cache! Now the static files were owned by “kovshenin”, and “webmaster” (apache) didn’t have any rights to those files. WP-Super Cache must have been stuck in an infinite loop trying to access them, with no luck of course – hence the memory leak.

After another reboot I managed to quickly get into the WordPress control panel, enable Maintenance Mode and disable all the other plugins, then enabled them one by one. Setting 0777 permissions on the cache directory and the two WP-Super Cache config files solved the problem. The site was working fine again, and the newly generated cache files were owned by “webmaster”… The day has been saved.

But what about the backups? Finally, I came to the thought that the “kovshenin” and “webmaster” users should be in the same groups. So I added “webmaster” to the “kovshenin” group, and “kovshenin” to the “webmaster” group. Everything’s great! Apart from the fact that my Google Analytics now shows 0 visitors for 21.05.2009. Jeez, what a dummy…
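
For reference, those group changes are just two commands (usernames as in my setup):

# let each user see the other's files through group permissions
usermod -a -G kovshenin webmaster
usermod -a -G webmaster kovshenin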