if you're looking for a powerful all-in-one solution then something like the g4l livecd may be for you. it's a great little tool although i would say it's definitely aimed at users who are comfortable working with file systems and disks/partitions in the linux environment.
(warning: this software does not ask sanity-check questions... you can, very easily, write something to the wrong disk/partition and ruin a perfectly usable computer with it, so do be careful!)
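a quick way to double-check which device is which before writing anything is lsblk (just a suggested sanity check on my part, not part of g4l itself):

```shell
# list every block device with its size, model and mount point,
# so you can confirm /dev/sdX really is the disk you mean
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT
```

the MODEL column in particular makes it hard to confuse an SSD with a data disk.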
i won't go into a detailed explanation of how to use it here; i strongly suggest you read the documentation carefully and google for usage examples.
i find the most useful feature to be the RAW mode that can make a bit-for-bit clone of an entire disk. it is capable of backing up to/restoring from FTP which can be handy, and it can be fairly easily integrated into a TFTP server for machines that don't have optical drives (a netbook for example).
i've never felt the need to try and boot it from a usb stick but google it, someone's probably done it!
if you want something that runs in the windows environment then there are tools available that offer the same features, but be prepared to spend money.
i ran into an issue when trying to make an image of the SSD in my laptop. the g4l livecd recognised my secondary hard disks, but was not able to provide access to the RAID0 stripe contained on them.
the livecd was also not able to recognise either the LAN or WiFi hardware in the laptop so using an FTP server was also out of the question.
i decided to stick an ubuntu livecd in and see what it reckoned to the RAID stripe; it automatically mounted it with read/write access, no problem.
before i did this, though, while still in the g4l livecd environment, i mounted the SSD and made use of the lblank7 script. this creates 2GB files filled with zeroes and just keeps going until it fails because the disk is full; you can then delete the files. this helps if you plan to compress the image afterwards, because zeroed free space compresses down to almost nothing.
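if you're not in the g4l environment, the same zero-fill trick can be done by hand with dd. this is just a sketch of the idea, not lblank7 itself, and the mount point is whatever you mounted the disk on:

```shell
#!/bin/sh
# zero-fill the free space on a mounted filesystem so deleted-file areas
# compress to almost nothing (same idea as g4l's lblank7 script - a sketch).
# usage: zero_fill DIR [SIZE_MB]
zero_fill() {
    dir=$1
    size_mb=${2:-2048}   # lblank7 uses 2GB files; smaller works too
    i=0
    # dd fails with "no space left on device" once the disk is full,
    # and that failure is what ends the loop
    while dd if=/dev/zero of="$dir/zero$i.tmp" bs=1M count="$size_mb" 2>/dev/null
    do
        i=$((i + 1))
    done
    sync                    # make sure the zeroes actually hit the disk
    rm -f "$dir"/zero*.tmp  # delete the files to reclaim the space
}
# usage example: zero_fill /mnt/ssd
```

obviously run this on the disk you're about to image, not the one you're saving the image to.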
little tip to delete multiple files with the same extension straight from the command line (in this case they're named something like TEST001.TMP, TEST002.TMP etc; the important bit is the file extension .TMP):
for i in *.TMP; do rm "$i"; done
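for what it's worth, a plain rm *.TMP can fail with "argument list too long" when there are many thousands of files; the loop runs one rm per file so it avoids that, and find can do the whole job in one go as well (a sketch):

```shell
# delete every .TMP file in the current directory only (no recursion);
# -delete removes the files without spawning rm at all
find . -maxdepth 1 -type f -name '*.TMP' -delete
```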
in ubuntu, leaving the SSD unmounted, i simply used dd to make a clone of the entire disk:
dd if=/dev/sdc of=/media/DATA/M6600.img bs=1M
some people on the internet report that specifying a block size of around 1MB seems to improve performance, but i haven't tried it without that parameter so i can't comment.
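before trusting the image it's worth confirming it really matches the disk; since this was a raw whole-disk copy, the two checksums should be identical (a sketch using the device and path from the dd command above):

```shell
# read both the source disk and the image back and compare hashes;
# for a raw whole-disk clone the two lines should show the same checksum
sha256sum /dev/sdc /media/DATA/M6600.img
```

bear in mind this reads the whole disk twice, so it takes a while on a 128GB drive.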
dd doesn't give much in the way of output, but you can make it give you some information if you do the following...
open another terminal, and use
ps -a | grep dd
to find the pid of the dd process, then issue
kill -USR1 <pid>
and it will tell you how much it's done, how long it's been running and the average transfer rate. mine transferred at about 30MB/sec, and took about an hour and a quarter to do the 128GB disk.
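as an aside, if the dd on your livecd is GNU coreutils 8.24 or newer, you can skip the signal trick entirely and have dd report its own progress (same command as above, one extra option):

```shell
# status=progress makes GNU dd continuously print bytes copied,
# elapsed time and transfer rate on stderr while it runs
dd if=/dev/sdc of=/media/DATA/M6600.img bs=1M status=progress
```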
once that was done i rebooted back to windows and used 7-zip, set to compress very heavily (be aware you need lots of spare RAM for this, approx 10GB), to compress the image from 119GB down to a more respectable 7.56GB.
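for reference, if you'd rather compress before rebooting, xz on linux uses the same LZMA family of compression as 7-zip and gets comparable ratios (a sketch; -9 is the heaviest setting and memory use goes up with the number of threads):

```shell
# compress the image at maximum level, keeping the original around;
# -T0 uses every core, which speeds things up enormously
xz -9 -T0 -k /media/DATA/M6600.img   # produces M6600.img.xz
```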
et voila, a compressed backup image