Monthly Archives: March 2011

Kinect face recognition

Using the OpenCV Haar cascade classifier to find faces in the Kinect RGB camera stream in real time.
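Not the code from the kit, but roughly the idea, as a minimal sketch assuming Emgu CV's CascadeClassifier and one of the stock OpenCV frontal-face cascade XML files; the Kinect color frame is assumed to already be wrapped in an Image<Bgr, byte>:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

class FaceFinder
{
    // Stock OpenCV cascade file, assumed to be copied next to the executable.
    private readonly CascadeClassifier _cascade =
        new CascadeClassifier("haarcascade_frontalface_default.xml");

    // Takes one color frame (e.g. converted from the Kinect color stream)
    // and returns bounding boxes for any detected faces.
    public Rectangle[] FindFaces(Image<Bgr, byte> frame)
    {
        // Haar detection runs on grayscale; equalization helps in uneven lighting.
        using (Image<Gray, byte> gray = frame.Convert<Gray, byte>())
        {
            gray._EqualizeHist();
            return _cascade.DetectMultiScale(
                gray,
                1.1,                 // scale step between detection passes
                4,                   // minimum neighbours to accept a hit
                new Size(24, 24));   // ignore candidate faces smaller than this
        }
    }
}
```

With the Kinect SDK, the color frame bytes would simply be copied into the Image<Bgr, byte> buffer before each call.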


Source will be released, but I’m in the middle of some major refactoring, so hold on a bit longer… :)


Kinect OpenCV object recognition

OpenCV SURF object recognition, capable of detecting objects even when they are tilted or rotated. Built with Emgu CV, the OpenCV .NET wrapper.
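Again, not the code from the kit; just a rough sketch of the usual SURF matching step, assuming the Emgu.CV.XFeatures2D.SURF and BFMatcher classes from a newer Emgu CV than this post was written against. A real detector would follow this with a ratio test and homography estimation to locate the object in the scene:

```csharp
using Emgu.CV;
using Emgu.CV.Features2D;
using Emgu.CV.Util;
using Emgu.CV.XFeatures2D;

static class SurfMatcher
{
    // Detects SURF features in a model image and a scene image and returns
    // the k=2 nearest-neighbour matches, ready for ratio-test filtering.
    public static VectorOfVectorOfDMatch Match(Mat model, Mat scene)
    {
        var surf = new SURF(400);   // Hessian threshold; higher = fewer keypoints

        var modelKeyPoints = new VectorOfKeyPoint();
        var sceneKeyPoints = new VectorOfKeyPoint();
        var modelDescriptors = new Mat();
        var sceneDescriptors = new Mat();

        // SURF keypoints/descriptors are scale and rotation invariant,
        // which is what makes tilted and rotated objects detectable.
        surf.DetectAndCompute(model, null, modelKeyPoints, modelDescriptors, false);
        surf.DetectAndCompute(scene, null, sceneKeyPoints, sceneDescriptors, false);

        var matcher = new BFMatcher(DistanceType.L2);
        matcher.Add(modelDescriptors);

        var matches = new VectorOfVectorOfDMatch();
        matcher.KnnMatch(sceneDescriptors, matches, 2, null);
        return matches;
    }
}
```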

It’s part of my Kinect starter kit, a framework and a few sample modules for playing with Kinect programming in C#. Source will be released soon.


Should I defrag my SSD?

The answer is both “Yes!” and “No, never!”, or somewhere in between.

SSDs have overcome the physical limitations of traditional hard disks. An SSD is actually not a hard disk at all, because it doesn’t have a disk. Traditionally, when you access a fragmented file, the data is scattered around the disk, so the read heads have to move back and forth across a magnetic surface to read the whole file. With an SSD this is no longer a problem, as the data is stored in non-volatile flash memory (think “USB pen drive”). This is one of the most significant advantages of SSDs: fragmented files can be read nearly as fast as unfragmented ones.

However, the NTFS file system does some interesting things.

All hail NTFS; it was a very good file system at a very early time, and others have only recently started to catch up. Though there are some really good alternatives, none (as far as I know) has all the capabilities that NTFS has.

NTFS keeps track of all allocated and free space on the partition. When a file is stored, a certain number of blocks are allocated. If the file is small enough, it is actually stored directly inside its MFT record. But if the file requires multiple blocks, free blocks are located and used. NTFS tries to find areas with many consecutive free blocks to prevent fragmentation. Despite this, conditions such as heavy existing fragmentation (only small sequences of free blocks available, or low free space) and simultaneous writing to the disk can still lead to fragmentation. Most of the time Windows doesn’t know how big the file you are writing will be, and therefore can’t reserve the space up front. Other files can then be written in between, causing fragmentation.
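As a toy illustration of that last point (not how NTFS actually allocates), here is a tiny “next free block” allocator showing how two files written at the same time end up interleaved, and therefore fragmented:

```csharp
using System;
using System.Collections.Generic;

class AllocatorDemo
{
    static void Main()
    {
        // Toy model: each file simply gets whatever the next free block is,
        // in the order the write requests arrive.
        var blocks = new Dictionary<string, List<int>>
        {
            { "A", new List<int>() },
            { "B", new List<int>() }
        };
        int nextFreeBlock = 0;

        // Two files being appended to at the same time, one block per write.
        foreach (var file in new[] { "A", "B", "A", "B", "A", "B" })
            blocks[file].Add(nextFreeBlock++);

        foreach (var kv in blocks)
            Console.WriteLine("FILE {0}: blocks {1}", kv.Key, string.Join(",", kv.Value));
        // Prints: FILE A: blocks 0,2,4 and FILE B: blocks 1,3,5.
        // Both files end up fragmented because the allocator could not know
        // up front how large each file would become.
    }
}
```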

So if the time it takes to read a fragmented file isn’t an issue, how can fragmentation be an issue? One great feature of NTFS is that it stores allocated and free space as runs (extents). Traditional file systems such as FAT32 have a 1-to-1 mapping between file allocation entries and data blocks. NTFS is smarter: it stores sequences. Instead of recording “FILE 100 is allocated to blocks 10,11,12,31,32,33” it records “FILE 100 is allocated to blocks 10-12,31-33”. An unfragmented file needs only one allocation run, while a fragmented file needs many.
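The example above can be shown in code: collapsing a file’s block list into runs is trivial, and the fragmented file simply ends up with more runs. A rough sketch of the idea (NTFS’s real run-list encoding is more compact, but the principle is the same):

```csharp
using System;
using System.Collections.Generic;

static class RunListDemo
{
    // Collapses a sorted, non-empty list of allocated block numbers into
    // "runs" (start-end sequences), the way NTFS records extents instead
    // of one entry per block.
    static List<Tuple<int, int>> ToRuns(int[] blocks)
    {
        var runs = new List<Tuple<int, int>>();
        int start = blocks[0], prev = blocks[0];
        for (int i = 1; i < blocks.Length; i++)
        {
            if (blocks[i] != prev + 1)           // gap -> the current run ends here
            {
                runs.Add(Tuple.Create(start, prev));
                start = blocks[i];
            }
            prev = blocks[i];
        }
        runs.Add(Tuple.Create(start, prev));
        return runs;
    }

    static void Main()
    {
        // The fragmented FILE 100 from the text: two runs instead of one.
        foreach (var run in ToRuns(new[] { 10, 11, 12, 31, 32, 33 }))
            Console.WriteLine("{0}-{1}", run.Item1, run.Item2);   // 10-12 and 31-33
    }
}
```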

The end result is that a defragmented drive keeps the MFT (the NTFS file table) smaller, so less memory is consumed, more of the (frequently used) MFT can be cached in memory, and less CPU time is needed to process a file.

Now, the problem is that SSDs have a limited number of writes per block. Defragmenting the disk regularly will burn through those writes pretty quickly.

My recommendation is to defragment an SSD very rarely. You want to keep the files you never write to defragmented, and keep free space contiguous so new files can be written into it. A defragmentation program that leaves already-contiguous files alone and only touches the fragmented ones is a good choice; the defrag tool that comes with Windows will do fine.