Gear review - Lowepro Flipside 400 AW Backpack

Since I started shooting on a more professional basis some time ago, it didn't take long for two main issues to become apparent on an almost daily basis. Firstly, you can never have enough kit...

Just when you think you have all the gear you need, you suddenly find a lens you can't live without, or have something break at a critical moment - which inevitably means you start carrying two of the most irreplaceable or vital pieces of equipment to help prevent any embarrassing moments. As you acquire and need to carry around more equipment, the second problem rapidly appears: finding the right equipment bag to store, protect & transport your vital camera gear while keeping everything accessible when you're shooting.

When you're just getting started, a decent bag isn't really much of a priority. You probably don't have a huge amount of gear that needs to go out and about with you and most of the time lots of it can probably stay at home. Starting to shoot busy events or weddings etc. tends to change that, and before long you find yourself carrying a couple of pro camera bodies, lenses galore, two flashes, loads of batteries / memory cards etc. along with all the miscellaneous stuff like lens hoods, rain covers, flash triggers etc.

Over the years I've tried various styles of camera bag, from traditional backpacks to large shoulder bags and the more "modern" messenger-style bags which you can allegedly slide around on their strap to access your gear without putting everything down. The various styles all have their own strengths and weaknesses, along with varying degrees of actual practicality.


If you're a photographer who tends to shoot under studio conditions where quick access to a particular item or portability isn't a particular concern, or one who has an excessive amount of kit to transport, I'd suggest you stop reading here and go buy whatever's large enough to carry everything around and comes with some wheels!

If you're out and about on a regular basis however, or have a reasonable amount of kit that needs to go with you, then I thought I'd share a few thoughts on my latest purchase in a long succession of Lowepro bags - a black Flipside 400 AW.

I've always preferred backpack-style bags when transporting equipment, but previous models have all shared a number of somewhat annoying "features"; the most prevalent one being that to access your equipment you need to take the backpack off, remove its rain cover, lay the bag on the ground strap-side down, undo various clips & buckles and eventually unzip the equipment section - before reversing the whole process to get moving again. If you're indoors this doesn't normally present a problem, as floors tend to be dry.

Working outdoors however is a somewhat different proposition: when you pick the bag up again, although your equipment remains protected, the side of the bag that sits against your back is inevitably now wet, muddy, snowy or otherwise dirty... Not nice.

The Flipside changes the traditional backpack model enough to make a difference. In terms of style, it's definitely a backpack - complete with comfortable shoulder straps and a waist belt to help spread the load around if it's full.

However, unlike most other camera backpacks you access the equipment section from the back of the bag rather than the front. This instantly gives it a couple of major advantages over other bags - 1) your gear's secure against "tampering" while the bag is on your back, and 2) the surface that will be in contact with you remains clean & dry when you put the bag down to get something out.

This alone is enough of a difference if you like backpack style bags but don't like using them when out & about!

In terms of features, like all Lowepro bags the Flipside 400 AW is well constructed and feels like it's made to last. Lowepro's lifetime guarantee helps to reinforce this, and in practical terms the new bag replaces a 12-year-old Lowepro backpack of mine which is long past its retirement age. Over the years I might just have been lucky, but so far I have yet to have a Lowepro bag fall apart due to a fault. It's early days yet, but initial indications are that there's no reason this trend won't continue with the Flipside.

As mentioned earlier, there's little to fault in terms of the bag's straps. They're well padded, generously sized and comfortable to use - all vital requirements once the bag's full! The large waist strap serves two purposes. Firstly, it helps to distribute your equipment's weight around your body, making a fully laden bag significantly easier to carry. Secondly, it allows you to slip off the shoulder straps and pull the bag around to your side or front to access equipment without taking it off or putting it down. This sounds like a slightly odd concept and is one that's better tried for yourself than described. Suffice to say that once you've realised the strap can take the strain, it works surprisingly well when you need to grab something but don't want to put the bag down.

Like most other Lowepro bags, the Flipside 400AW is festooned with pockets and attachment points for the company's SlipLock range of add-on storage pouches and cases. It has a large zipped storage section which is ideal for holding shallower lens hoods, remotes & cables etc. along with memory cards & batteries.

The sides of the bag also feature handy exterior pouches for holding water bottles etc.

The main equipment compartment is secured by a pair of what look to be heavy-duty zippers, and can be reconfigured to suit your exact equipment storage requirements by simply moving the dividers around as needed. The compartment is deep enough to accommodate pro or gripped camera bodies and could probably be adjusted to fit a 300mm or 400mm lens, attached to a body, down the centre of the bag.

Many lenses can be stored vertically to save space and fit more in, and it's deep enough to store 1.4x and 2.0x teleconverters stacked - saving a useful lens slot which might come in handy.

Without anything being unduly crammed I currently have the following kit in mine:

  • Canon EOS 5D MK III and EOS 5D bodies with a grip attached to the 5D.
  • EF 70-200 f2.8L IS.
  • EF 24-70 f2.8L.
  • EF 17-40 f4L.
  • EF 50mm f1.4.
  • EF 100mm f2.8.
  • 1.4x & 2.0x teleconverters.
  • Canon 580EX II & 550EX Flashes (580EX in main compartment, 550EX in the front).
  • Lens hoods for the 70-200/24-70/17-40.
  • Batteries for the cameras & 4 sets of spare AA's for the flashes.
  • Numerous memory cards.
  • Radio flash triggers (when needed) & a remote shutter release.

Fully loaded, it looks something like this:

All in all, I can only recommend the Flipside 400 AW. It does exactly what I need a bag to do and somehow lets you carry all your gear around in a surprisingly comfortable manner. It's also apparently suitable for airline carry-on use and should fit in overhead bins without issue - I haven't personally tried this however, so I'd highly recommend you verify it before flying anywhere with the bag!


Just checking... it is Spring, right?

It's supposed to be spring time... right?


Must be winter…

With a growing collection of wellies by our door, a picture seemed in order...


Disaster Recovery - where do you start when planning for survival?

As some of you may be aware, I'm currently involved in building a cloud-based DR environment for a couple of core business systems. We opted to use Amazon EC2 for this, and I thought I'd share a few observations gleaned along the way.

Over the next few weeks I'm going to be taking a look at the area of disaster recovery from the IT perspective, with a focus on how you might be able to take advantage of cloud platforms to help ensure your business could survive a catastrophic disaster.

First, a quick disclaimer..
This & related posts are not to be considered a definitive guide to all things disaster recovery. Nor should they be considered as complete or appropriate advice to be applied wholesale within your environment without due care & attention. If you are not sufficiently technically competent to decide whether or not a solution is appropriate, please seek the advice of someone who is. Any disaster recovery solution must be appropriate for your organisation & meet your objectives.  

With the disclaimer out of the way, this & the following articles are intended to take a look at the process of planning for business survival in the event of a major, show-stopping disaster. The primary focus to start with is looking at the technical aspects involved, skipping over many of the associated business aspects.

While not the subject at hand, something to keep in mind throughout is that for every IT-focused business continuity plan, it's vitally important that your organisation has a corresponding BUSINESS-focused business continuity plan which addresses questions such as: where will your staff go to work if your primary office building becomes uninhabitable? How would you contact your staff to tell them not to come in, or to go to your alternative location? What happens to your customers? How do they get in touch with you?

Going back a few years, the slightest mention of the terms "Disaster Recovery" or "Business Continuity Planning" (especially when used by external auditors or insurers) had an unnerving ability to strike fear into both those who held budgetary responsibility for funding such an undertaking and those in the running to be responsible for its implementation.

Setting the implementation worries aside however, unless you are of the opinion that a catastrophic failure of your core systems, or having your main offices rendered unusable by something outside of your control, would actually be an "opportunity" in disguise, spending some time planning for such an event is usually time well spent.

Depending on the scale of your organisation, Disaster Recovery (from an IT perspective) may evoke thoughts of articulated trailers full of computer equipment arriving in your car park or mirrored data centres... For many other organisations however, it probably evokes thoughts of wild panic while you consider just what you'd actually do should your data centre & its contents of carefully managed servers disappear overnight.

Although disaster recovery and business continuity plans are generally plans that no one ever wants to actually invoke, consider for one moment what might happen to your company's day-to-day operation if its computer-based services or systems ceased to exist. What would happen if email services disappeared, file servers couldn't be accessed, or your order processing, stock control, payment or financial systems went offline? Or, in this age of social media, consider the potential damage to your audience engagement & online reputation. How would your business be affected if your public websites disappeared off the Internet? Permanently?

Generally speaking, none of those scenarios are what could be considered good things to happen. In the majority of cases, the day a major disaster strikes is probably one you'd regard as a BAD DAY (capital letters definitely required).

With enough forward planning and the appropriate level of investment however, it is perfectly possible to plan for such an eventuality and establish a degree of preparedness relevant to your business. Some businesses may have no viable option other than creating a 100% mirror of their production environments complete with real-time data replication; others might consider that keeping their email running would be enough.

Before the advent of managed, third party-hosted systems & commercially viable cloud-hosted Infrastructure-as-a-Service (IaaS) platforms, if you were unable to justify the expense of procuring or maintaining enough physical hardware to run your core systems for DR, you essentially had two options: trim back your DR plans until they fit your budget, or keep your fingers crossed and hope your luck was good and that you would never have a problem.

Many businesses, large and small, bravely opt for the second option. They sit back, get on with their daily trading and simply hope that disaster will never strike, or if it does, hope that they won't be so badly affected that they're unable to continue operating.

This approach may work if your organisation is small enough or not especially dependent on email or other computer-based systems. Over time however, it has the unfortunate effect of creating a false sense of security; it rapidly becomes business-as-usual until something major goes wrong or a natural disaster strikes... Perhaps they're hoping that their business insurers will cover the cost of replacement kit, and that they'll pay up immediately so that replacement hardware can be bought? It's a great idea, but it doesn't usually happen...

If your organisation is a subscriber to the "luck" approach yet couldn't actually continue operating if disaster struck, consider challenging the status quo. Full-scale disaster recovery planning might not be something the organisation is in a position to consider, yet many small steps can often be taken to increase its chances of survival with minimal disruption to normal operation.

So, assuming you're still reading by this point and are managing to avoid quivering under your desk or going to find that lucky rabbit's foot / 4-leaf-clover / handy chunk of wood / whatever-else-you're-relying-on, what can you do to help give yourself a fighting chance in a DR scenario?

The first task at hand is to ensure you understand, to the best degree possible, what you're trying to protect. Take a detailed look through your production environment and ensure you have enough technical documentation to understand what each element does and how it interacts with, or relates to, every other element. Unless your environment consists of a single machine, diagrams are usually key to this and can help identify what you'd need to reconstitute a particular system or service. This might seem an obvious thing to do, but the number of organisations which do not have this level of clarity is nothing short of staggering.

A significant note of caution here: attempting to create a disaster recovery solution without first gathering this knowledge can only be dangerous for your organisation - sometimes much more dangerous than adopting the rely-on-luck approach and doing nothing. By spending time planning you create a sense of security, and can easily end up believing you're fully prepared should something happen; yet your plans are highly likely to contain flaws - not necessarily through any technical error, but because unless you truly understand your systems and how they interact, you run the risk of missing something vital without which a system cannot function.

For each system, it's also vitally important to understand how important it is to your organisation, in terms of:
- Is it critical to daily operations, or only needed once a month / quarter / year?
- How long could your business viably operate without it?
- What impact does its unavailability have on the business, or on other systems?
- In the event of a major disaster, how much potential data loss could be tolerated?
- Is it vital that NOTHING is lost, or could your business continue with a few hours or more of lost data?
- How important is it compared to other systems?
- If you can only work on restoring one service at a time, which should you work on first?

These points can be considered a set of baseline recovery objectives for each of your systems, forming a set of guidelines against which to assess possible DR solutions.

It might be a slightly obvious statement to make, but it's also essential to ensure each system is fully considered from a technical perspective. Among many others, some key things to check include:

  • How many users rely on the system?
  • How do they access it? Citrix? Applications running on desktop computers? HTTP?
  • Are there alternative access routes/methods which could be used in a DR scenario?
  • How much data does it store, and how / where does it store it?
  • How frequently does the data change?
  • What % of its data is created/updated on a daily basis?
  • What does that % represent in terms of GB or TB of data? (A quick way of estimating this is sketched just after this list.)
  • What options do you have in terms of replicating it, mirroring its data or identifying & copying changed data?
  • How do you back it up, where are those backups stored & what devices/software would you need to restore them?
  • What other elements of your underlying infrastructure does the system depend on?
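
If you're not sure how to get a handle on the data-change questions above, a rough-and-ready starting point on a Linux file server is simply to total up everything modified in the last 24 hours. A minimal sketch - the path here is purely an example, so point it at your own storage:

find /srv/data -type f -mtime -1 -printf '%s\n' | awk '{ total += $1 } END { printf "%.1f GB modified in the last 24 hours\n", total/1024/1024/1024 }'

It's crude - it won't capture database changes or deletions - but it's usually enough to start sizing backup or replication requirements.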

Once you have a solid understanding of what your environment contains, double check that you've included the supporting infrastructure and underlying network services - DNS, DHCP, Active Directory, RADIUS etc...

Don't forget your telecoms either - communication in the event of a disaster tends to become even more important than normal. Your staff will need to be able to communicate with each other via phone, and are probably going to be working from different locations to their normal offices. Are mobile phones going to be enough? Do you need to be able to receive calls on your existing fixed-line numbers?

What happens if your email servers are unavailable? Will mobile devices still be able to send & receive messages?

Armed with your environment diagrams & documentation, spend some time identifying the key dependencies of each of your systems, applications or services. This is a vital step; otherwise it's all too easy to establish a great DR solution for one of your systems, only to find out in a DR situation that it can't actually do anything without three other systems up & running.

Having identified your key systems, the next challenge is to work through them and determine how best to approach establishing some form of DR provision.

The "best" approach to aim for is the one that fits your business's objectives for continuity, while delivering a supportable, maintainable & affordable solution. Don't be afraid to consider what may be new technology or new approaches for your organisation - Cloud and IaaS platforms can offer incredible benefits when compared to more traditional approaches to BCP, but come with their own set of costs & technical challenges.

Finally for this, keep in mind that there is no "perfect" one-size-fits-all solution to disaster recovery planning, as every business is different, with different priorities, different demands on its IT systems and different ways of interacting with its customers.

Every business's disaster recovery plans therefore need to be just as unique, reflecting its own, wholly individual set of business continuity objectives.

Next: A look at what to do with this information & some thoughts on where to start...


So.. how's your brain working today? Can YOU read this?


This was forwarded on to me earlier this week... I never usually forward stuff like this on, but this one amused me.

Can you read the following message...?

F1gur471v31y 5p34k1ng?

Good example of a Brain Study. If you can read this you have a strong mind.

7H15 M3554G3
53RV35 7O PR0V3
H0W 0UR M1ND5 C4N
D0 4M4Z1NG 7H1NG5!
1MPR3551V3 7H1NG5!
1N 7H3 B3G1NN1NG
17 WA5 H4RD BU7
N0W, 0N 7H15 LIN3
Y0UR M1ND 1S
R34D1NG 17
4U70M471C4LLY
W17H 0U7 3V3N
7H1NK1NG 4B0U7 17,
B3 PROUD! 0NLY
C3R741N P30PL3 C4N
R3AD 7H15.
PL3453 F0RW4RD 1F
U C4N R34D 7H15.


Cloud Hosting - What does Cloud actually mean?

"Cloud Hosting" is possibly one of the most used, yet also one of the most misunderstood, terms in current use across the Internet.

Cloud means many things to many people, yet has the ability to generate images of limitless potential & strictly imposed limitations - all at the same time.

Among those who have not really used cloud-based services in anger for anything substantial, there seems to be a common set of misconceptions that you can't run "proper" line-of-business applications in the cloud. A phrase that gets bounced around far too often only serves to reinforce this: "cloud - isn't it just for all that web stuff?"

Justification for this comment seems to always run along the lines of not being able to provision large storage capacities, not being able to scale up/down to meet demand, inflexibility in terms of network configuration or performance, or most amusingly, not being able to quickly add new servers when you need them.

In the world of "cloud", some of these misconceptions are true if you use one of the multitude of vendors offering "cloud web hosting" or "cloud" virtual servers, as nine times out of ten purchasing a "virtual private server" (VPS) to host a website buys you a fixed amount of resources provisioned on a chunk of hardware residing in someone's datacentre. If you're lucky, the hosting company is doing a decent job and supports its VPS host servers with reliable primary storage provisioned from high-performance SANs with plenty of underlying disks. If you're not so lucky, you'll soon find a world of limits and poor performance, which only becomes clear when you discover that the hosting outfit is running a collection of individual VPS host servers, cramming as many VPSs as it can onto each machine backed by a pair of (hopefully mirrored) disks resident in the host hardware. Between these two extremes lie a multitude of other options & models, some of which are better than others.

To get a feel for the market as a whole a great place to start is the Cloud forums over at WebHostingTalk. Virtual Private Servers / VPSs are also discussed at length on WHT with many reviews & comparisons posted from those who have experienced good or indeed not so good service from their chosen host.

Scenarios such as the above tend to form many people's initial view of "cloud", as unfortunately the majority of low-end / budget VPS / cloud web hosts fall somewhere towards the lower end of the quality-versus-price range, offering a thin slice of a server's resources for a few pounds per month or year.

While in some situations these budget VPSs can provide more than enough resource to run a personal website, as soon as you start needing better performance or more disk space you tend to hit their limits quickly, and inevitably start shopping around for a new host. At this point the difference between budget & high-end services becomes clear, as higher-end services can nearly always offer the option of increasing your allocated CPU, memory or disk resources - giving you a larger slice of the underlying host in exchange for more cash.


At the other extreme, services like Amazon's Elastic Compute Cloud (EC2), GoGrid or Rackspace Cloud offer a different type of solution, providing you with an Infrastructure as a Service (IaaS) environment that allows you to build out any combination of server resources to meet your exact needs. While several other (IaaS-style) cloud hosts provide a similar capability, Amazon takes it one step further by separating storage resources from server instances, meaning that you can easily scale storage for your servers up (and down) when you need to, rather than being tied into renting a server with a fixed amount of storage month after month.

Want to run a large Oracle database server with 30GB of RAM & 12TB of storage? Not a problem. Want to run a little personal webserver with 1 CPU core & 613MB? No issue - it might even be free if you qualify for Amazon's free usage tier in your first year. Need two of those Oracle servers & twenty 4-core, 16GB application servers? Go and launch them - and for many enterprises, more importantly, launching them doesn't require any capital expenditure or come with lengthy lead times to procure machines, scale storage infrastructure and so on.

For many companies, Amazon Web Services (AWS) provides them with somewhere to experiment; trying out new software or services in a self-contained environment which is completely separate to their business-critical production systems.

A new breed of business is emerging, however, which takes a different view.

Rather than being a plaything or testing platform, AWS is their production platform. They run little or no traditional on-site server/application hosting infrastructure of their own, instead using AWS to run everything from web hosting to HR/payroll systems.

Many Internet-scale organisations, for example, rely on AWS for their mission-critical operations, running a minimum of supporting infrastructure on-site and virtually everything else in the cloud - from Internet-scale file storage through to database environments, application & web servers and edge caching harnessing technologies such as Varnish.

Clearly, between the two extremes there are a multitude of other options - running a mixture of cloud & on-site services, or perhaps looking at cloud as the ultimate disaster recovery environment.

It's the subject of a future post, but cloud is also an ideal environment for disaster recovery.

Prior to the advent of IaaS environments, if your organisation did not wish to - or could not - justify the expenditure of establishing a traditional DR environment in an alternative data centre, DR would have been firmly off the agenda.

All in, if you've not yet explored what cloud could do for your business one thought to keep in mind is that your competitors probably have....


An Open Letter to Comment Spammers

Dear potential spammer,

In case you haven't noticed, it's a rare occasion that a piece of marketing junk submitted here as a comment against a post will actually appear.

We filter all comments before they're posted and if our automated spam filters fail to catch them, our manual review will. Either way, we do not post anything we don't want to appear online.

It's simple really. Submit a genuine comment and we'll post it. Submit spam, or anything we deem offensive or simply don't want to see, and we won't post it.

So, before you submit your next spam comment proclaiming that this is the best site you've ever visited (and, by the way, please go and visit our site hawking something unsavoury), trying to sell certain medications, offering adult chat services, something that doesn't make any grammatical sense, or anything that's just good old-fashioned spam: just don't bother.

If however you're someone with something to say which doesn't fall into any of the above, please post away :)

Look forward to hearing from you!



Website relaunched!

With traffic to this website steadily increasing, I've been spending some time looking at how I could get more relevant and useful content surfaced onto the site's homepage.


Get started with Wordpress & Nginx; 15 minutes from fresh OS to Wordpress

In earlier posts I briefly talked about the dramatic performance improvements you can get by using Nginx to run your PHP / Wordpress websites and touched on some of the vital steps that need to be done to start tuning Wordpress for performance.

If you're interested in the idea of using Nginx but wondering where to start or are perhaps finding it a challenge to find a consistent & accurate guide on the web to get you up and running, then hopefully this post will help demystify things a little for you.

The steps below work, and should get you up & running with an operational Nginx server along with support for Wordpress & PHP.

As an important set of caveats however: I'm not suggesting that this is everything you need to do in order to build a working server for your needs, guaranteeing that it will work on any random OS you want to use, or claiming that this is the only way to end up with something that works... or, for that matter, that the end result will do everything you need it to! Your mileage may well vary!

They also don't cover installation of popular control panels such as Webmin/Virtualmin/cPanel etc. (as I don't tend to use them), and won't deal with moving email around, for the simple reason that I always keep email & web services separate - normally handling email through a hosted Exchange solution or Google Apps.

The following instructions are based on starting with a fresh installation of CentOS 5.7 (or 6) and should result in a running web server using Nginx & PHP-FPM, ready to run Wordpress.

The same basic configuration will work for both single Wordpress instances & Wordpress MultiSite, assuming you use the right Nginx config :)

Without further ado, SSH into your fresh CentOS installation and complete the following steps. Commands that need to be executed via SSH or a console prompt are prefixed with ">".

Throughout, you can download copies of any mentioned configuration files by clicking the links.

Add the REMI Repository
While CentOS provides many packages through its own software repositories for easy installation via YUM - including essentials such as PHP & MySQL - unfortunately in many cases those packages are woefully outdated and you end up with old versions of software installed on your server. In the case of Apache, MySQL or PHP it's vital to stay up to date with current versions to ensure you're protected from known bugs & security holes.

Fortunately, it's very easy to set up an alternative repository to gain access to newer versions of software without needing to start compiling your own from source, and one of the most useful repositories for LAMP software is Remi's.

The commands to do this are a little different depending on whether you're running CentOS 6 or 5.x, so please pick the right ones:

CentOS 6
> rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm
> rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm

CentOS 5.x
> rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
> rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-5.rpm


With the repository installed, the next step is to install MySQL, PHP & Nginx:

Starting with MySQL, this command should install both MySQL Server and all of its various dependencies.
> yum --enablerepo=remi install mysql-server

Once installed, we now need to start the service, enable it & secure the installation.
> service mysqld start

Enable it as a service to automatically start whenever your machine is rebooted:
> chkconfig mysqld on

Next, the installation should be secured by setting a secure root password, removing the test user accounts & databases, and (if possible in your environment) preventing remote login using the root account. Just follow the prompts, supplying a suitable password & answering Yes to removing the test databases/users & preventing remote root access.
> mysql_secure_installation

Next, PHP & PHP-FPM should be installed:
> yum --enablerepo=remi install gcc php php-common php-pear php-pdo php-mysql php-pecl-memcache php-pecl-apc php-gd php-mbstring php-mcrypt php-xml curl php-devel php-fpm

Enable PHP-FPM & start the service:
> chkconfig php-fpm on
> service php-fpm start

Finally, install NGINX:
The simplest way to do this is to set up a new repository for NGINX - which, along with simplifying installation, also simplifies future updates.

First, we need to create a new YUM .repo file:
> vi /etc/yum.repos.d/nginx.repo
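
The file itself is minimal. As a sketch, based on the repository details the Nginx team publish for CentOS (do check nginx.org for the current URL & details before relying on this):

# /etc/yum.repos.d/nginx.repo - official Nginx packages for CentOS
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1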

Then:
> yum install nginx
> chkconfig nginx on
> service nginx start

(More detailed install instructions can be found on the Nginx wiki.)

At this point, you should be able to browse to your server's IP address and see a page welcoming you to Nginx.

Before going much further, I'd recommend that you take a moment to install & enable a firewall on your machine if it's not already running one, and ensure that you open only the ports you actually need.
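
As a minimal iptables sketch for a dedicated web server - allowing SSH, HTTP & HTTPS and dropping everything else - something along these lines is a reasonable starting point, though do adapt the ports and rules to your own requirements before applying it:

> iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> iptables -A INPUT -i lo -j ACCEPT
> iptables -A INPUT -p tcp --dport 22 -j ACCEPT
> iptables -A INPUT -p tcp --dport 80 -j ACCEPT
> iptables -A INPUT -p tcp --dport 443 -j ACCEPT
> iptables -P INPUT DROP
> service iptables save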

Summary: Assuming you've made it this far, you now have a server with a base installation of Nginx, PHP & MySQL, which will need further configuration to get a usable system up & running.

The two most vital steps here are to set up PHP-FPM to handle PHP requests from NGINX, and then to configure NGINX to host websites, passing PHP requests on to PHP-FPM.

In terms of configuration, PHP-FPM uses a service-based model, with a relatively simple base configuration file which then loads pool-specific configuration files for as many PHP process pools as you wish to start. For most installations a single "www" PHP pool is probably more than sufficient, and you should find a default configuration file waiting for you in /etc/php-fpm.d/www.conf.
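
The handful of settings in www.conf that are usually worth checking (and adjusting if needed) look something like the following - the values shown are illustrative rather than a recommendation, so tune the pool sizes to suit your server's memory:

; /etc/php-fpm.d/www.conf - key settings for the default "www" pool
listen = 127.0.0.1:9000     ; the address NGINX will pass PHP requests to
user = nginx                ; run the pool as your web server's user
group = nginx
pm = dynamic                ; let the pool grow & shrink with demand
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 4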

Restart the php-fpm service to pick up the new configuration:
> service php-fpm restart

Next, we need to configure NGINX.

A similar model is used for NGINX, with the base nginx.conf file defining your PHP handler (i.e. php-fpm) and site-specific .conf files handling anything more, er, specific.
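
By way of example, the only addition usually needed in nginx.conf itself is a named upstream pointing at PHP-FPM, placed inside the http { } block. The name "php" and the address are assumptions of mine - they simply need to match the pool address configured above and the fastcgi_pass directive used later:

# Inside the http { } block of /etc/nginx/nginx.conf
upstream php {
    server 127.0.0.1:9000;
}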

Finally, we need to configure NGINX for your site & Wordpress.

There are many, many ways to tackle this but assuming you're planning to run more than one site on this machine, I would recommend creating a generic Wordpress configuration file containing all necessary settings to run Wordpress and including it in any site specific configuration.

Within a "global" folder, create a wordpress.conf file:
> vi /etc/nginx/conf.d/global/wordpress.conf
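
As a minimal sketch of what that generic file might contain - it assumes the "php" upstream defined in nginx.conf above, is designed to be included inside a server { } block, and a real-world version (Wordpress MultiSite in particular) may well need more:

# /etc/nginx/conf.d/global/wordpress.conf - generic Wordpress handling

# Wordpress "pretty" permalinks: try the file, then the directory, then index.php
location / {
    try_files $uri $uri/ /index.php?$args;
}

# Pass PHP scripts through to the PHP-FPM pool
location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php;
}

# Let browsers cache static assets aggressively
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires max;
    log_not_found off;
}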

A number of restrictions should really also be defined in a global .conf file for potential re-use across multiple sites:
> vi /etc/nginx/conf.d/global/restrictions.conf
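
Again as a sketch rather than a definitive list, the sort of thing that belongs in there includes:

# /etc/nginx/conf.d/global/restrictions.conf - common-sense restrictions

# Keep the logs quiet for favicon & robots.txt requests
location = /favicon.ico { log_not_found off; access_log off; }
location = /robots.txt  { allow all; log_not_found off; access_log off; }

# Deny access to hidden files such as .htaccess
location ~ /\. { deny all; }

# Don't allow PHP to execute from the uploads or files directories
location ~* /(?:uploads|files)/.*\.php$ { deny all; }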

Then, finally, create a site-specific configuration file which defines the settings specific to that site, such as domain names, root folders etc.

> vi /etc/nginx/conf.d/your_website.conf
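
A bare-bones example for a single site might look like this - the server name, paths and log locations are all placeholders to change to suit your own site:

# /etc/nginx/conf.d/your_website.conf - site-specific settings
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.php;

    access_log /var/log/nginx/example.com-access.log;
    error_log  /var/log/nginx/example.com-error.log;

    # Pull in the generic restriction & Wordpress rules created above
    include /etc/nginx/conf.d/global/restrictions.conf;
    include /etc/nginx/conf.d/global/wordpress.conf;
}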

That's it...!

Restart your NGINX service to pick up your new configuration and, assuming you've taken care of the more mundane aspects such as creating a database for your Wordpress installation, uploading the necessary Wordpress files, and copying the default wp-config-sample.php file over to wp-config.php (adding in your database's name along with a username/password), you should now be up & running!


Replace radiator valve without draining system

Replacing radiator valves without draining your central heating system is something that can be achieved with a few simple additional tools (bungs), a heating system that's in reasonable condition,  and a small helping of luck!

This is a pretty safe & simple way to make small changes to pipework, or to carry out routine maintenance tasks such as removing radiators to clean or replace them, or replacing valves, without emptying the system and needing to flush & refill afterwards... Or so various plumbers seem to reckon, at any rate.

I've used this approach successfully several times now to replace radiators, upgrade valves and alter pipework on an open-vented system with a feed & expansion (F&E) tank in the loft.

If you've never had your heating system flushed or cleaned however, it's probably worth taking the time to do a quick DIY chemical clean before replacing a radiator. This is a DIY task that anyone with an elementary knowledge of plumbing can probably accomplish without too much effort, and can help prolong the lifetime of your heating system's components!

Without further ado, the process starts by either closing off the feed & expansion tank's outlet/feed into the heating system, or by fitting a "bung" into the tank's outlet pipe. Once done, repeat for the heating's open vent pipe.
I didn't have a suitable purpose-made bung available, so simply used a 15mm Speedfit stop end to seal the vent.

Next comes the first test of faith... With a suitable container strategically placed to catch water, crack open one of the old radiator valves, catch the water, and wait for the flow to stop.....

Assuming you've stopped the F&E tank supply from reaching the system & that the vent's sealed, once the water stops you have successfully produced a vacuum in the system & can start work!
If however the flow doesn't stop and rebunging / tightening valves etc doesn't help, I'd suggest you stop here & drain the system down below the level of whatever you need to work on!

As we're replacing the old rad with a new one that's about the same size, no other work is needed except replacing both valves & fitting the new rad.

Step 1 - cleanup the pipework.
Given the state of the old valves & pipes, it's a good idea to remove as much of the old paint as possible around where the new valve will seal as paint doesn't tend to help watertightness.
It's also much easier to do this before you've removed the old valves & fitted the rad to the wall....

Step 2 - brackets & test fit.
Determine where the radiator's going to be positioned & drill the necessary holes for your radiator brackets. Measure carefully - easier to get it right first time...

Once the brackets are mounted, fit the radiator & determine where the valves need to be positioned.
As a suggestion, attach radiator tails, bleed valves & blanking plate before mounting the radiator on its brackets.  Ensure a good 6-9 turns of PTFE tape are wrapped around the new radiator tails before screwing them into the radiator.

It also always helps to ensure the radiator's level...

Step 3 - remove old valves & replace

Assuming you've got a good vacuum in your system, go ahead & undo the old valves... I opted to simply cut them off, as the new radiator was a little taller than the original (and I therefore needed lower valves).

Clean up the pipe end, ensuring you've got bare copper where the valve's olive will seal; otherwise you'll be faced with remaking the joint as you start trying to refill the system... A scenario best avoided, really!

Make up the valve, ensuring retaining nut & olive are on the pipe. Position the new valve body suitably angled/aligned for your radiator's tails and hand-tighten the connections.

Tighten everything up using a suitable pair of grips or wrenches, and ensure the valve is actually closed...

Then, repeat for the other valve.

Assuming you're fitting a TRV to one end, ensure it's on the flow pipe (unless the one you've got works in either direction) & use the TRV head to close the valve.

Step 4 - feeling confident?
Unbung/reopen the F&E tank supply & unseal the vent pipe.
Assuming nothing's leaking, open both the rad valves.

Check for leaks as water starts to fill the new radiator, tightening any connections as needed.

As it's filling, open the radiator bleed valve to ensure all the air's removed and let the radiator fill. Once full, close the bleed valve and assuming everything's still dry, switch on the heating & check the new radiator heats up as you would expect.

Enjoy & file the idea away so that next time you need to replace a radiator or change a valve, you avoid emptying the system!


Sam's First Shoes

To mark a bit of a milestone in Sam's life, I thought I'd quickly post this.

His first pair of proper "cruising" shoes... now very necessary, as he's probably going to be walking independently within a few weeks at the latest.
Shot with an EOS 5D & EF 24-70 f2.8L, and processed through Aperture & Nik Software's Silver Efex Pro.

Obviously time moves on, but while it doesn't feel like more than a few short months have passed, he turns 11 months old in a few days' time and will be a year old come January 2012.

Roll on the next few years of adventure!


Web Hosting - you don't always get what you pay for...!

Just over a month ago now, we launched a pair of commercial Wordpress websites for a local business initially using a top-end "dual hosted" Business plan provided by 1&1.

We went this route for a number of reasons: one being the significant amount of advertising 1&1 seem to produce (applying the "they can't be that bad" logic), the second being a desire, in the current commercial climate, to try and reduce the operational cost of our web hosting.

The new sites were to replace two 4-5 year old sites running on a Windows 2008 VPS provided by Tagadab, which with all credit to Tagadab has been an extremely reliable box for the year it's been in use. One of the key drivers behind relaunching the sites however was to migrate to an open-source CMS environment which could run on a wider range of platforms rather than just Windows & SQL Server; allowing us to migrate onto commodity hosting and take full advantage of high performance alternative webservers to IIS such as Nginx.

Shortly after the new sites went live on 1&1, we realised that our choice of host was actually a massive mistake as we weren't getting anywhere remotely close to the levels of reliability, performance or support we needed and decided to plan a controlled move to a more professional service from MediaTemple (mt).

During another prolonged 1&1 outage, the idea of planning the process disappeared out of the window along with our websites, and in the space of an hour or so we moved over to (mt)'s Grid Service & cut our losses with 1&1. We'd opted for (mt) because, following some great feedback from other businesses, (gs) sounded ideal. After a couple of weeks however, it became apparent that although they consistently delivered great levels of reliability along with good support, performance on (gs) for UK visitors simply wasn't anywhere close to as good as we either expected or needed. Some investigation suggested that the basic problem was a lack of database responsiveness - something that could probably have been resolved by upgrading the account to use a dedicated MySQL container rather than the standard (gs) platform. If that didn't deliver, the next step was to upgrade to (mt)'s Dedicated Virtual (dv) platform, which is essentially an expensive / somewhat overpriced VPS solution.

The chart below shows average response times for one of the sites from September to November 2011, which, while illustrating the issues we encountered, actually hides the worst aspects behind the averages. At its peak, the site took something in the region of 4-5 seconds per page load with (mt) and 3-4 seconds with 1&1.

Before embarking on this "adventure" we'd been accustomed to sub-second page loads with the old sites, and although we weren't expecting a more complex Wordpress site to deliver the same level of performance, we needed something significantly better than 5 seconds per page.

Opting for the quickest route to a working platform, we decided to return to the VPS route to ring-fence resources for our sites, and began looking around for a reliable UK-based hosting company offering a fast VPS provisioned on the Xen virtualisation platform. (Xen, OpenVZ, VMware etc. are all different virtualisation technologies, with Xen often offering faster VPSs as it can help prevent unscrupulous hosts from overselling their server capacity & overcommitting VPS resources.)

Having initially signed up for a VPS in a large Netherlands datacentre, we started installing all the necessary software to run our sites, opting for Nginx in place of Apache... which led to a raft of additional issues as we worked our way through numerous tutorials and guides in an attempt to find a set of recommendations or instructions that would actually get everything working properly for Wordpress - documenting the build as we went, just in case we needed to do it again. Performance was great, as you'd expect with Nginx running on a fast VPS with plenty of resources, but unfortunately after six network outages in the space of seven days we discovered that our new host (or their datacentre) had a major problem with being on the receiving end of DDoS attacks.

Having planned ahead & documented the build process, moving the sites again (to another new VPS, this time UK-hosted) took under 30 minutes - from provisioning the new VPS, to moving the sites & databases across, to being ready to go live.

We always use separate services for web, email & DNS, so moving our websites around is a relatively simple exercise. If you can, keeping separation between key services is a great way forward, as it gives you significantly more flexibility than relying on one machine or host for everything!

I'll share our build process in my next post, as a hopefully relatively foolproof guide for getting a freshly installed machine up and running with Nginx, PHP, PHP-FPM, MySQL & Wordpress.

Coming back to the subject of this post, it just goes to show that the cost of a hosting solution is often far from the most important factor when deciding whether or not to use a particular hosting service. Cheap hosts are often a great demonstration of this, as you never tend to get much in the way of server resources, support or performance - often making them far from a good option for running anything important. At the other end of the scale, more expensive hosts don't always deliver either. Our DDoS-bound host, for example, offers a good range of VPSs from tiny to XXXL, costing between £2.99 & £40/month. Not top-end pricing by any means, but for £40/month you tend to expect a reliable solution which stays online...

In summary, finding a good host is not a simple process. It's something you can ease a little by checking out your prospective hosts before signing up (as a tip, search for your host's name on the WebHostingTalk forums), but unless you have a cast-iron recommendation from someone you trust, or other reasons for believing that your next host will be the one for a long time to come, it's worth sticking to a few guidelines:

  1. Know what you want in terms of server location, hosting type, disk space, bandwidth, CPU resource & server software etc.
  2. Shop around - Google for the type of hosting you're looking for and see what's out there.
  3. Search for independent reviews & recommendations on sites such as WebHostingTalk.com, and if you can't find anything positive on WHT, be wary of the many "everything's perfect"-style paid-for reviews elsewhere. Don't be afraid to ask whether anyone's used a particular host if you can't find anything.
  4. Stay flexible - don't tie yourself into long term 12, 18 or 24 month contracts as you can normally buy hosting services from reputable suppliers on rolling 30-day agreements. Be cautious if someone wants to tie you in for longer.
  5. Don't be afraid to switch hosts if you're not getting the performance, reliability or support you need.
  6. Finally, make sure you have a clear, documented set of instructions/notes for building your VPS from fresh OS install through to having sites online. Might seem overkill when you're doing it for the first time, but rest assured it makes subsequent builds or rebuilds significantly easier!

For now, if you take nothing else away from this slightly rambling tale, it would be to try and stick to the above guidelines :)

Check back soon for that Nginx build guide!


Domestic Central Heating System Wiring Diagrams; C, W, Y & S Plans

After trawling through some older posts on here to tidy up images & content following the recent relaunch of my site, I thought it might be useful to make available copies of the wiring diagrams I used to sort out our central heating system.

The diagram set includes wiring plans for a number of popular central heating system configurations - C Plan, W Plan, Y Plan, S Plan, S Plan+ etc. - and you should select the diagram that best matches the components installed in your system, along with what you're hoping to achieve in terms of controllability.

Obviously your mileage might vary, and if you're unsure how to proceed I'd strongly suggest you stop reading here and go and find yourself a competent plumber to advise.

This is a diagram for a C Plan system - a fairly typical UK setup with pumped, zone-controlled heating and gravity-fed domestic hot water, including a wall-mounted room thermostat, hot water cylinder thermostat and time controller. Click the diagram to download a PDF version, which will print better!

If C Plan doesn't fit your needs, click here to download a PDF containing diagrams for other popular systems.

Good luck!


Wordpress Performance Essentials - Shared Hosting or VPS

Following on from my last post, Getting Started with Wordpress & Nginx: assuming you've decided to go down the VPS or dedicated server route (rather than shared hosting), there are a few things you need to install as a priority to ensure good performance. If you're unable to use a virtual private server and are planning to run your site on shared hosting, getting good performance is still important - and still achievable.

Wordpress by itself is a bit of a poor performer, something which can get progressively worse with every plugin you install - especially if you use lots of database-call-heavy features such as querying posts, displaying related content, or retrieving random posts / pictures / whatever from the database on every page impression.

A great, simple way to improve this (aside from getting a faster database server or switching to Nginx rather than Apache) is to ensure you have tuned Wordpress as far as you can by minimising unnecessary plugins and using a good cache plugin. Once that's done, the next step is to ensure you're minifying your code as much as possible - a process which takes normal CSS, HTML & Javascript files full of (unnecessary) empty space and removes anything & everything that's not strictly necessary for the file to be understood by a user's browser. This is a great & simple way to reduce your bandwidth usage & improve site performance, simply through minimising the amount of data that needs to be transmitted with every page.

If you're using a shared hosting environment, all you can really do to achieve great Wordpress performance is to setup caching within Wordpress & enable minification. The two plugins I would always recommend to do this are W3 Total Cache & Better WP Minify.

W3 Total Cache is a comprehensive suite of tools designed to improve the experience of visiting your site by caching nearly everything that your Wordpress site does - reducing download times and also providing a simple way to integrate your site with content delivery networks (CDNs). It also offers some minify tools to help reduce the size of your pages.


Better WP Minify is a dedicated content minification plugin, which helps squeeze your files a little more than W3 Total Cache tends to do by itself.

I always tend to use both of these in tandem, which based on our experiences so far seems to give a pretty good result.

In terms of setting these up, Better WP Minify doesn't need much configuration beyond being installed & enabled, but W3 Total Cache can need some more work to give you the best results, as it benefits from being tuned to suit your web server's configuration.


Out of the box, W3 Total Cache does a fairly good job of assuming a sensible set of default values which will work on most hosts and give you a reasonable degree of benefit to get you going. If your site still isn't performing well at this point then, unless your current host has some suggestions, it's possibly time to rethink your hosting and perhaps consider changing to a better-performing web host or moving to a VPS.

There's a fantastic guide to getting started with W3 Total Cache available at c3mdigital.com which works through all the various configuration options, and is something I'd highly recommend taking a few minutes to read.

In essence, most shared hosts only offer relatively basic PHP installations, which tend to limit what you can actually do to improve performance. As a result, most WP caching tools take the same approach, with hidden generation of pages going on in the background - meaning that when a visitor browses to a cached page, WP just needs to retrieve the pre-generated page from disk and serve it to the visitor, rather than needing to process the PHP scripts, retrieve content from the database, run all the plugins that need to run for the page and eventually assemble everything into a cohesive page before serving it to the user.

Flexible, VPS or Dedicated Hosting...

If you either have a flexible host who's able to install software for you, or have decided to run your own dedicated or virtual private server, then it's possible to fully harness W3 Total Cache's capabilities with the addition of a PHP opcode cache such as APC. This little add-on for PHP gives you the ability to use in-memory caching within W3 Total Cache, meaning you can get away from using files on disk for your caches & instead hold frequently accessed data objects & pages within your server's memory. A quick way to produce significant performance improvements for a trivial amount of effort!

If you're running a Linux distribution that provides a package manager such as YUM, all you should need to do to get started is fire up an SSH or console session to your server & install a few pre-requisites:

yum install php-pear php-devel httpd-devel pcre pcre-devel


.. and then install APC using Pear:

pear install pecl/apc
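
One thing the pear/pecl build doesn't always do for you is actually enable the extension, so if APC doesn't appear after installation you may also need something along these lines (the path assumes a stock CentOS PHP layout), followed by a restart of PHP-FPM or Apache as appropriate:

echo "extension=apc.so" > /etc/php.d/apc.ini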


That's it... You should now have APC installed, which means if you return to your W3 Total Cache setup pages you'll now be able to select Opcode: Alternative PHP Cache (APC)  & watch your site's performance increase!



Get started with high performance Wordpress powered by Nginx

Wordpress started life as a simple blogging tool back in 2003, with a handful of users who gradually switched to it from other contemporary blogging tools, preferring Wordpress's flexibility. Now, Wordpress is one of the most popular web content management systems on the Internet, supporting millions of websites and apparently being used for 22% of all new websites.

All that flexibility & extensibility comes at a price however: Wordpress site performance can often be somewhat lacklustre, especially if your site makes heavy use of plugins or the plugins you do use are badly written.

Something else that often doesn't help is the use of inexpensive shared web hosting to run your site, as hosting is one arena where the old maxim of "you get what you pay for" always rings true.
Although there are many excellent web hosting services out there, some of the largest & cheapest hosts achieve their pricing by cramming as many websites as they can onto a handful of web server machines.
This tends to mean each site can only use a tiny fraction of the underlying server's resources, which in the case of Wordpress is one of the quickest routes to a slow site.

While you can't do much about the underlying technology if you're using a shared hosting service, there are many things you can do to help speed Wordpress up - most of which I'm not going to cover here, as there are many excellent resources available on the web that can help guide you in the right direction.

The basic principles to consider however are to use a decent caching plugin for Wordpress (W3 Total Cache, for example), remove any unnecessary plugins, minify your CSS & JS, and try using a content delivery network or a service such as CloudFlare.

If you're still not happy with your site after this, then you probably need to change hosts.... Look for one who uses the LiteSpeed or Nginx webservers in place of Apache as this can help demonstrate that the host has an interest in how sites they host perform!

If you've outgrown shared hosting and run your own web server somewhere, either as a physical machine or a virtual private server (VPS), something to seriously consider is replacing Apache with Nginx - an "alternative" web server which can offer significant performance improvements over Apache while using your server's resources more efficiently.

Fully compatible with PHP, it can be a little tricky to get running initially without a guide (I'm going to be publishing one soon covering my experience of getting Nginx running with PHP 5.3 via PHP-FPM & Wordpress on CentOS 6), but it's well worth trying out as it may go some way to helping you get much more out of your servers - along with much better performance when your site gets busy :)

For comparison purposes, one of our busy Wordpress sites was starting to struggle when it reached 80-90 active users. We'd tuned Apache pretty well for the machine & load, but as traffic increased much beyond this point, page load times reached 4-5 seconds regardless of the cached & tuned Wordpress install. CPU load on the server was also much higher than we wanted, meaning that without adding more caching ahead of Apache (such as the Varnish HTTP accelerator, but that's another post) we weren't going to get much more performance out of the machine.


After some successful trials, we migrated the site over to Nginx (no special tuning) and saw instant performance improvements. In load tests, the same machine now happily manages page load times of 1-2 seconds with 1500+ concurrent users while using greatly reduced CPU & memory - all suggesting that we could push much higher traffic volumes through the box without any issues.

The chart to the right shows a quick load test up to 500 concurrent users, demonstrating Nginx's somewhat flat response under load which continued all the way up to 1500 users.
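
If you want to run a similar comparison against your own setup, it doesn't take much - dedicated tools like ApacheBench or siege will do the job properly, but even a small script along these lines gives a rough idea of how page load times behave as concurrency climbs. This is just an illustrative sketch rather than the tool we used for the figures above; it assumes Python with the requests library installed, and the URL & user counts are placeholders:

    # Rough-and-ready concurrency test: simulate N "users" each fetching a
    # page repeatedly, then report the average & worst page load times.
    # Illustrative only - the target URL and numbers below are placeholders.
    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://www.example.com/"
    CONCURRENT_USERS = 100
    REQUESTS_PER_USER = 10

    def simulate_user(_):
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.time()
            requests.get(TARGET_URL, timeout=30)
            timings.append(time.time() - start)
        return timings

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_times = [t for user_times in pool.map(simulate_user, range(CONCURRENT_USERS))
                     for t in user_times]

    print("requests: %d  avg: %.2fs  max: %.2fs"
          % (len(all_times), sum(all_times) / len(all_times), max(all_times)))

Ramp the user count up in stages and keep an eye on the server's CPU & memory alongside the reported timings - the difference between Apache and Nginx under the same load tends to speak for itself.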

All our production Wordpress sites now run on Nginx rather than Apache, and to say the least, we've not looked back!

Check back soon for a working guide to get you started with Nginx-powered Wordpress.

In summary....

  • Run a Wordpress website using shared hosting or your own web server?
  • Ever had performance problems when the site gets busy? Plagued with slow page load times?
  • Often hit high server load for a handful of users?
  • Heard of Nginx.....?

Publishing with Wordpress on the go: Blogsy

 

Hello! This is what started out as a test post created via Blogsy on my iPad.
I have no idea why I haven't come across this app before, as it looks like it was released earlier in 2011. Instead, I've essentially been avoiding the other contemporary iPad apps in favour of the online Wordpress admin tools, as most of them offer a poor and often downright annoying user experience that leaves much to be desired.


Scam the computer helpdesk scammer... priceless :)

For those who haven't come across it yet, there's a scam doing the rounds at the moment with someone from a dubious overseas call centre phoning you and offering to sort out your computer.

Various reasons are offered by someone who vaguely sounds like they know what they're doing, with an end game of getting you to open up a remote support session to your computer granting them full & unrestricted access to do whatever they wish.

During the call they get you to open up Windows Event Viewer and read out some of the thousands of messages that any normally operating Windows machine will produce as a matter of course.

Every message is then pronounced critical / dangerous / corrupt, and you're warned not to click on anything on the basis that doing so will "activate it" and damage your computer. The real reason they don't want you clicking anything, of course, is that it would quickly reveal the messages to be completely benign.

After this "diagnosis" you're then guided to open up a remote support website (various sites are being used it seems), which accepts a LogMeIn Rescue access code and facilitates a remote support session.

Despite running all this in a ring-fenced VM, I opted not to proceed much further beyond this point - having noted down the access code so I could forward it over to LogMeIn :)


First impressions - new 2010 Macbook Air 13" (2.13Ghz, 4Gb)

Having taken the plunge, I'm writing this on a shiny new 2010 Macbook Air 13" (2.13Ghz Core2Duo, 4Gb ram & 256Gb SSD) - which UPS kindly delivered earlier today.

For the last few years I've been happily using an early 2008 non-unibody Macbook Pro equipped with a 2.5Ghz Intel C2D, 4Gb ram and possibly the world's slowest 250Gb harddisk. This machine's been able to handle most tasks thrown at it, but in common with most machines of this vintage it's been getting increasingly bogged down with any sort of IO-bound task – not good when the main job you need the machine to do is support a digital photography workflow typically involving moving many relatively large files around.

I'm guessing these ills could all be solved by replacing the machine's disk with something faster, or perhaps installing an SSD in place of its boot disk, but with fast 256Gb SSDs costing upwards of £400-500 I decided to take a look around - and almost by accident stumbled across the new 2010 Air.

Given the somewhat lacklustre performance of previous Macbook Air models, I was first surprised and then rather skeptical as I came across review after review of the new 2010 Airs – virtually all of which gushed great volumes of praise about the machines. While most mentioned the lack of a backlit keyboard and automatic screen brightness adjustment, practically all of them concluded that the new Airs are near-perfect laptops: ultraportable, yet able to handle virtually any day-to-day computing task most users are likely to throw at them.

Needless to say I took these conclusions with a substantial pinch of salt as my current Macbook at first glance seemed to be of similar spec to the top-end Air, so prior to purchase I thought it'd be useful to go and try one of these supposedly amazing machines out – and duly spent far too much time in our local Apple store.

As you're probably aware, the new 2010 Airs come in two sizes – 11" and 13". While the 11" is a simply incredible machine for its size, cramming a fully functional & reasonably powerful Mac into a sliver of unibody case that is so thin it has to be seen to be believed, in my view it's perhaps just a little too small if you need to use applications such as Aperture or Photoshop.


The 13" version however is still ultraportable, but offers a larger, higher resolution screen that's a joy to use along with faster CPUs and more storage.

Comparing the new Airs side by side with the 13 & 15" Macbook Pros was a truly surprising experience – with the Airs seemingly running rings around the larger Macbook Pros in many scenarios thanks to their significantly quicker flash storage. The Macbook Pros may have been equipped with newer and faster processors, but whereas something like Chrome or Safari opened in a heartbeat on the Air, the Pros sometimes took several seconds to load. This extended to more substantial applications such as Office 2011's Word – which opens in around 1-2 seconds on the Air vs 5-10 or thereabouts on the new Macbook Pros. My 2008 Pro, for comparison, comes in at closer to 15-30 seconds to start Word depending on what else it's running.

All in all it was rapidly becoming clear that the new Airs have a clear performance advantage thanks to their fast SSDs, and as far as portability is concerned there's not really any contest between either Air model and the Macbook or Macbook Pro ranges.

On tasks requiring pure CPU power, however, the higher-specification CPUs in the larger Macbook Pros have the edge – but unless you're continually processing HD video or similar, there's a fair chance you won't notice much day-to-day impact from the relatively slow Core2Duo CPUs in the Airs.

Cutting what could turn into a longer story short, I ended up buying the top-end 13" Air model – and so far, am amazed at just how well it's able to handle fairly substantial applications in the form of Adobe Photoshop, InDesign and Apple's Aperture.

Running virtualised Windows via Parallels doesn't present a problem either – with a 64-bit Windows 7 VM booting from cold to a fully loaded desktop with Coherence mode enabled in around 12 seconds.

One thing that is slightly disconcerting however is the noise it makes while running… or rather, the complete lack of it. Unless you're pushing the CPU hard enough for it to fire up the fan, the new MBA is silent in operation as there are no moving parts in the machine. Definitely not a bad thing, but it does take a little getting used to!

Oh, the lack of a backlit keyboard or light sensor? 
A non-backlit keyboard isn't really a problem or even an annoyance for me – I stopped needing to look at the keys while typing a long time ago, and the keycaps seem to be reflective enough that the light from the screen illuminates them if you're using the machine in a darkened room.

It would have been useful to have the ambient light sensor, I suppose, but then again I'd generally adjust my old MBP's screen brightness manually anyway, as it would usually end up too dim or too bright for my liking - so needing to keep doing that isn't much of an issue.

The new Macbook Air 13" so far truly seems to be as good a machine as the gushing reviews suggest, although these are early days… I'll revisit this review after a week or so with the machine to see whether I still agree with my initial thoughts.

One thing that is clear however is that the new MBA sets out a blueprint which we'll probably see the rest of the Macbook line evolve towards in the not too dim & distant future.


With 2011 rapidly approaching, it's time for a clean up of my Twitter account

Twitter can be a simply incredible communication & marketing tool, offering what can only be described as an unparalleled way of keeping in touch with your customers or readers.

It's somewhat comical however just how quickly it can change from being exceptionally helpful to an ongoing exercise in sorting the wheat from the chaff - with useful or interesting tweets quickly being lost under a sea of tweets you're simply not interested in.

In the early days of using Twitter, most people tend to follow anyone who looks interesting or operates in a similar field to themselves; which invariably leads to following a couple of hundred other accounts, of which you perhaps only ever actually read 50-100 or so…

The vital question, of course, is how best to approach the clean-up?

The first step would seem obvious: review the accounts you're following and unfollow anyone whose tweets you always ignore. It may be an obvious step, but unless you're actively thinking about it, it's highly unlikely you'll instantly unfollow someone the first time you ignore one of their tweets… There's often some mileage here for a quick win…

In my case this didn't help much, perhaps only accounting for 5 or 6 accounts.

The next step seems equally obvious: unfollow anyone who's no longer actually using Twitter – unless of course you feel a need to keep following them for some reason.

Once these two are done, if you're still not happy with the volume of tweets you're seeing then it's time for somewhat more drastic action – go through the accounts you're following and think about why you're following each of them. If you're left wondering and can't see a reason to keep them… Unfollow!

A good tool to help with this process seems to be Tweepi – it lets you search, filter and review the accounts you're following, gives you quick access to stats about when they last tweeted, and provides tools to follow/unfollow.

Your mileage may vary somewhat, but hopefully you're now only seeing content you're actually interested in – which may well make Twitter more useful to you on a daily basis :)


Been trying out Aperture 3: wow....!

Having been using Apple's Aperture 2 photographic workflow tool for what feels like a lifetime, I thought that the imminent arrival of a new Macbook was a prime opportunity to try out the latest & greatest version.

If you work with photos on a Mac and have never used Aperture before, I'd suggest going and taking a look at Apple's Aperture pages, as until you've tried it you don't know what you're missing...

Moving to Aperture from Lightroom & Photoshop made an incredible difference to my workflow - literally shaving hours off the time needed to process RAW image sets - but there were still many occasions when I needed to switch over to Photoshop to achieve a particular look for an image.

From what I've seen of Aperture 3 so far, I can now see myself firing up Photoshop even less - simply no need for it!

You can now use an array of non-destructive tools to tweak & adjust images in a ridiculous number of ways to handle most frequently used effects - along with all the usual abilities to copy & stamp effects between multiple images etc.

Being able to process images completely from RAW to cropped & retouched output within a single tool translates directly to shaving more time off my RAW workflow... Fantastic!

Just need the new Macbook to arrive now.... along with the Aperture 3 upgrade I'm probably about to order :)