Disadvantages of partitioning an SSD?

A wise guy who goes by the name of NickN maintains a lengthy forum post on his views about building a powerful computer (directed towards playing Microsoft’s Flight Simulator X, a very demanding piece of software).

He sums up points about SSD drives somewhere, and he concludes the list as follows:

DO NOT PARTITION SSD

He doesn’t elaborate on this unfortunately, but I wonder why he says this. What are the drawbacks of partitioning an SSD? (Partitioning in this context meaning >= 2 partitions)

Answer

SSDs do not, I repeat, do NOT work at the filesystem level!

There is no 1:1 correlation between how the filesystem sees things and how the SSD sees things.

Feel free to partition the SSD any way you want (assuming each partition is correctly aligned; a modern OS will handle all this for you). It will NOT hurt anything, it will NOT adversely affect the access times or anything else, and don’t worry about doing a ton of writes to the SSD either. Modern SSDs are rated such that you can write 50 GB of data a day and the drive will still last 10 years.
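To make “correctly aligned” concrete, here is a hypothetical sketch (the names and sizes are my own illustration, not from any tool) of the check a partitioning utility effectively performs: the partition’s starting offset should land on a NAND-page boundary so filesystem blocks don’t straddle two physical pages.

```python
# Hypothetical alignment check; sizes chosen for illustration.
PAGE_SIZE = 4096   # a common NAND page size, in bytes
SECTOR = 512       # logical sector size, in bytes

def is_aligned(start_sector):
    # A partition is aligned when its byte offset is a whole
    # multiple of the flash page size.
    return (start_sector * SECTOR) % PAGE_SIZE == 0

assert is_aligned(2048)      # 1 MiB offset: the modern OS default
assert not is_aligned(63)    # legacy CHS-style offset: misaligned
```

This is why modern installers start the first partition at sector 2048 (a 1 MiB offset) rather than the old sector 63.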

Responding to Robin Hood’s answer,

Wear leveling won’t have as much free space to play with, because write operations will be spread across a smaller space, so you “could” (but not necessarily will) wear out that part of the drive faster than you would if the whole drive were a single partition, unless you perform equivalent wear on the additional partitions (e.g., a dual boot).

That is totally wrong.
You cannot wear out part of the drive by reading and writing to only one partition. That is NOT even remotely how SSDs work.

An SSD works at a much lower level access than what the filesystem sees;
an SSD works with blocks and pages.

In this case, what actually happens is, even if you are writing a ton of data in a specific partition, the filesystem is constrained by the partition, BUT, the SSD is not.
The more writes the SSD gets, the more blocks/pages the SSD will be swapping out in order to do wear leveling. It couldn’t care less how the filesystem sees things! 
That means, at one time, the data might reside in a specific page on the SSD, but, another time, it can and will be different. The SSD will keep track of where the data gets shuffled off to, and the filesystem will have no clue where on the SSD the data actually are.
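To make the point concrete, here is a toy sketch (entirely hypothetical, nothing like real controller firmware) of a wear leveler: even when every write targets a handful of LBAs, as if one small partition took all the traffic, the physical blocks absorbing the wear are drawn from the whole device.

```python
# Toy wear leveler: a hypothetical sketch, not a real controller.
erase_counts = [0] * 16   # wear per physical block on the device
mapping = {}              # LBA -> physical block

def write(lba):
    # Always pick the least-worn block anywhere on the device;
    # the LBA (and hence the partition) plays no role in the choice.
    block = min(range(len(erase_counts)), key=erase_counts.__getitem__)
    erase_counts[block] += 1
    mapping[lba] = block

# Hammer a tiny LBA range, as if one small partition got all writes.
for _ in range(160):
    for lba in range(4):
        write(lba)

# Wear still ends up spread evenly across all 16 physical blocks.
assert max(erase_counts) - min(erase_counts) <= 1
```

Only 4 LBAs are ever written, yet all 16 physical blocks end up with essentially the same erase count, which is the whole point of wear leveling.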

To make this even easier: say you write a file on partition 1. The OS tells the filesystem about the storage needs, and the filesystem allocates the “sectors”, and then tells the SSD it needs X amount of space. The filesystem sees the file at a Logical Block Address (LBA) of 123 (for example). The SSD makes a note that LBA 123 is using block/page #500 (for example). So, every time the OS needs this specific file, the SSD will have a pointer to the exact page it is using.
Now, if we keep writing to the SSD, wear leveling kicks in and decides that the data in block/page #500 would be better placed at block/page #2300. Now, when the OS requests that same file, and the filesystem asks for LBA 123 again, THIS time the SSD will return block/page #2300, NOT #500.
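The LBA-to-page indirection described above can be sketched as a toy page-level FTL (a hypothetical class; the page numbers it picks are for illustration only and won’t match the #500/#2300 example):

```python
# Toy page-level FTL: hypothetical, for illustration only.
class ToyFTL:
    def __init__(self, total_pages):
        self.mapping = {}      # LBA -> physical page
        self.free_pages = list(range(total_pages))

    def write(self, lba, data):
        # Flash pages cannot be overwritten in place: every write
        # goes to a fresh page, and the old one becomes stale.
        page = self.free_pages.pop(0)
        self.mapping[lba] = page
        # (the data itself would be stored at `page`; omitted here)

    def read(self, lba):
        # The filesystem only ever sees the LBA; the physical page
        # behind it can change between writes.
        return self.mapping[lba]

ftl = ToyFTL(total_pages=4096)
ftl.write(123, b"file contents")
first = ftl.read(123)          # some physical page
ftl.write(123, b"updated")     # rewrite: the FTL picks a new page
second = ftl.read(123)
assert first != second         # same LBA, different physical page
```

The filesystem asks for LBA 123 both times and is none the wiser that the physical location changed underneath it.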

Like hard drives, NAND-flash SSDs are sequential access, so any data you write/read from the additional partitions will be farther away than it “might” have been if it were written in a single partition, because people usually leave free space in their partitions. This will increase access times for the data that is stored on the additional partitions.

No, this is again wrong! 
Robin Hood is reasoning in terms of the filesystem, instead of in terms of how an SSD actually works.
Again, there is no way for the filesystem to know how the SSD stores the data.
There is no “farther away” here; that is only in the eyes of the filesystem, NOT the actual way a SSD stores information. It is possible for the SSD to have the data spread out in different NAND chips, and the user will not notice any increase in access times. Heck, due to the parallel nature of the NAND, it could even end up being faster than before, but we are talking nanoseconds here; blink and you missed it.

Less total space increases the likelihood of writing fragmented files, and while the performance impact is small, keep in mind that it’s generally considered a bad idea to defragment a NAND-flash SSD because it will wear down the drive. Of course, depending on what filesystem you are using, some result in extremely low amounts of fragmentation, because they are designed to write files as a whole whenever possible rather than dump them all over the place to create faster write speeds.

Nope, sorry; again this is wrong. The filesystem’s view of files and the SSD’s view of those same files are not even remotely close.
The filesystem might see the file as fragmented in the worst case possible, BUT, the SSD view of the same data is almost always optimized.

Thus, a defragmentation program would look at the file’s LBAs and say: this file must really be fragmented! 
But since it has no clue about the internals of the SSD, it is 100% wrong. THAT is the reason a defrag program will not work on SSDs, and yes, a defrag program also causes unnecessary writes, as was mentioned.

If you want to get more technical about how SSDs work, the article series Coding for SSDs is a good overview of what is going on.

For some more “light” reading on how FTL (Flash Translation Layer) actually works, I also suggest you read Critical Role of Firmware and Flash Translation Layers in Solid State Drive Design (PDF) from the Flash Memory Summit site.

They also have lots of other papers available, such as:

Another paper on how this works: Flash Memory Overview (PDF). 
See the section “Writing Data” (pages 26-27).

If video is more your thing, see An efficient page-level FTL to optimize address translation in flash memory and related slides.

Attribution
Source: Link, Question Author: MarioDS, Answer Author: Community