By: iz (indan@nul.nu), January 10, 2009 1:09 am
Room: Moderated Discussions
Linus Torvalds (torvalds@linux-foundation.org) on 1/9/09 wrote:
---------------------------
>iz (indan@nul.nu) on 1/9/09 wrote:
>>
>>Every random write can cause a remapping table change,
>>which is basically a random write as well.
>
>No, it's a non-random write if you do it well. The remapping
>table doesn't have to be some array - that would be the
>last thing you want. You'd make it a smarter extent-based
>thing, along with a log of the last remapping events
>so that you can do those as basically a streaming area
>that is pre-erased (and then you write a more space-
>efficient long-term thing when that fills up).
>
>So no, you do not need to be stupid about the FTL at
>all. Although a lot of previous-generation flash apparently
>is pretty naive about it.
However smartly you do it, it only groups multiple random writes together. And yeah, that's probably pretty non-random in most loads, but with a truly random write load it's still pretty much an extra random write. Perhaps not one for every write, but still pretty bad. It can be done in parallel though.
Previous-generation flash didn't seem to even try to do anything smart at all. :-/
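To make that concrete, here's a rough sketch of the kind of thing Linus is describing, as I understand it: a compact extent-based map plus an append-only log of recent remap events written into a pre-erased area, so a burst of random host writes only costs sequential log appends until the log gets folded back into the map. All names and sizes below are invented for illustration; no real controller is claimed to work this way.

/* Purely illustrative sketch of an extent-based mapping plus a remap log;
 * names and sizes are invented, this is not how any particular FTL works. */
#include <stdint.h>
#include <stddef.h>

struct extent {                   /* one contiguous logical->physical run */
    uint32_t logical_start;       /* in 4KB sectors */
    uint32_t physical_start;      /* in 4KB flash pages */
    uint32_t length;              /* run length in sectors */
};

struct remap_event {              /* appended sequentially into a pre-erased area */
    uint32_t logical;             /* sector that was rewritten */
    uint32_t new_physical;        /* where its fresh copy went */
};

#define LOG_CAPACITY 1024

struct ftl {
    struct extent *extents;       /* compact long-term map, sorted by logical */
    size_t nr_extents;
    struct remap_event log[LOG_CAPACITY]; /* recent random remaps, streamed out */
    size_t log_used;
};

/* A random 4KB host write costs one sequential log append here; the
 * space-efficient extent map is only rewritten later, in bulk, when
 * the log area fills up. */
int ftl_record_remap(struct ftl *f, uint32_t logical, uint32_t new_phys)
{
    if (f->log_used == LOG_CAPACITY)
        return -1;                /* caller must fold the log into the extent map */
    f->log[f->log_used].logical = logical;
    f->log[f->log_used].new_physical = new_phys;
    f->log_used++;
    return 0;
}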
>>Larger extents reduce your theoretical maximum random
>>write performance (that or they waste a lot of space).
>
>That makes no sense. An extent-based allocation model
>means that if you can do the remapping in bigger blocks,
>then that remapping information can be done more densely
>and the lookup can be more efficient. Rather than having
>to have a huge translation table for all blocks, you have
>a more size-efficient (but yes, more complex) translation
>layer.
Bad wording on my part; what I meant is that a bigger virtual block size (the remapped chunk size) reduces random write performance. If you do remapping on 16KB chunks, then you can't write out a stream of random 4KB writes continuously at max flash speed.
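Back-of-the-envelope, assuming the worst case where every random 4KB write lands in a different 16KB chunk and the untouched 12KB of each chunk has to be copied along with it (the raw bandwidth figure below is just an assumed number):

/* Crude worst-case arithmetic; the raw bandwidth figure is an assumption. */
#include <stdio.h>

int main(void)
{
    double chunk_kb = 16.0;      /* remapped virtual block size */
    double write_kb = 4.0;       /* host random write size */
    double raw_mb_s = 80.0;      /* assumed raw flash program bandwidth */

    /* If every write hits a different chunk, the remaining 12KB of the
     * chunk must be copied along with it: 16/4 = 4x write amplification. */
    double amplification = chunk_kb / write_kb;
    printf("write amplification: %.1fx\n", amplification);
    printf("effective random 4KB write bandwidth: %.1f of %.1f MB/s raw\n",
           raw_mb_s / amplification, raw_mb_s);
    return 0;
}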
>It is true that nice streaming writes tend to not have
>nearly the same latency issues as the random ones. But
>I'm surprised that you would say "most" writes are random,
>because at least in my experience metadata writes do tend
>to be pretty random, but in most loads they are not the
>bulk of the data.
>
>(Well, metadata updates are often the bulk, but only when
>there are almost no other writes going on - under UNIX
>filesystems, atime updates are very common, and happen
>when there is just reading going on to update the access
>times. So yes, they can be the "bulk", but that's often
>only when the bulk is very little ;^)
>
>But it clearly does depend on the load.
Umm, git? ;-)
The closest I come to a streaming write load is probably a system update, but that's still all over the place. Everything else is pretty much synchronized random writes (sync in the sense that something is waiting until it's all written safely, not necessarily that it's all written in the same order).
And noatime should be the default mount option for all filesystems, seriously. Forget backward compatibility for a moment and think about forward compatibility for a change. ;-)
>But that's part of what any GC worth its salt should be
>doing: not just moving blocks around, but also moving them
>around so that the mapping tables become less fragmented.
>You want to move the right data around, so that next
>time you need an erase block, you've compacted all the
>used blocks, and have unused space that is all ready to
>be erased.
>
>So a good GC is (a) incremental - so that you don't get
>huge spikes when you suddenly have to do a lot and latency
>suffers and (b) compacting - so that you get big free
>areas for future erase cycles and don't have to spend all
>your time just copying stuff around to make them.
Seems like you're saying there's a (c): ordering. Basically putting adjacent logical blocks back next to each other physically as well. This would improve streamed reading of randomly written files as well. Or were you thinking about another way of compacting your remapping tables, like rebuilding the remapping tree or whatever data structure is used?
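As I read it, the "compacting" part mostly comes down to victim selection: erase the block with the least still-valid data, so each erase costs the least copying and frees the most space. A toy illustration (all numbers invented):

/* Toy greedy victim selection for a compacting GC: erase the block with the
 * fewest valid pages so you copy the least and free the most. */
#include <stdio.h>

#define NR_ERASE_BLOCKS  8
#define PAGES_PER_BLOCK  64          /* e.g. a 128KB block of 2KB pages */

static int valid_pages[NR_ERASE_BLOCKS] = { 60, 12, 64, 3, 40, 64, 7, 55 };

static int pick_victim(void)
{
    int best = 0;
    for (int i = 1; i < NR_ERASE_BLOCKS; i++)
        if (valid_pages[i] < valid_pages[best])   /* fewest valid pages wins */
            best = i;
    return best;
}

int main(void)
{
    int v = pick_victim();
    printf("GC victim: block %d, copy %d of %d pages, then erase\n",
           v, valid_pages[v], PAGES_PER_BLOCK);
    return 0;
}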
>No, no, no.
>
>You seem to really be talking about just a very stupid
>garbage collector that doesn't try to improve on block
>layout at all, and just desperately tries to find free
>blocks, and then in order to make those free blocks
>usable for re-writing you technically only ever need
>one single erase block.
I was thinking that you wouldn't need more than, say, 1 GB, or even 4 GB, to do anything smart you'd want, and that throwing in more free space wouldn't improve much, however smart the GC is. Having more flash chips dedicated to GC, on the other hand, would improve GC speed.
>But your job is going to be much simpler if the
>device isn't nearly full. If you are always at 99%
>capacity and only have one free erase block, then for
>every single random write, you will have to do a full
>erase cycle and rewrite a whole erase block.
>
>Your performance will suck.
There's a huge difference between one free erase block and 1 GB, or much more than that. Considering you want to do GC incrementally, I'm not sure how 8 GB of scratch area instead of 4 GB would improve things much.
>And quite frankly, from the performance data I have seen,
>this is exactly what the previous-gen flash disks did. It
>is why they count their write IOPS in tens of writes per
>second - if that.
>
>And that's simply not acceptable.
And mind-boggling as well that they did it that way at all...
Sadly there doesn't seem to be a decent 1.8" ATA SSD and I'm stuck with a painfully slow HD.
>No. You don't do less garbage collection. You do
>a better job at it!
If you do a better job at it you need to do it less, which is where the improvement comes from. ;-)
But it depends on what "less" means for GC. I meant doing fewer writes / more efficient GC.
>If you don't make your drive be 99% full, but you always
>know you have (say) at least 10-15% free, you can actually
>get away from that bad cycle of having to do a full erase
>block for every random write. If you have extra space to
>play with, you can make a generational GC that doesn't
>copy the old data immediately, but can do an erase cycle
>and then write multiple new writes to that - because you
>have extra space.
Ah, yes, I sort of assumed the drive had some extra space, but that isn't necessarily the case of course.
>Then, instead of trying to keep just ahead of the
>piper all the time, you try to keep quite a bit ahead, so
>that you always have tens (or hundreds) of pre-erased
>blocks available - and when you do end up having to copy
>old data in your GC, you try to defragment your block
>translations at the same time, so that you get new blocks
>that you can erase entirely.
It's easy to get decent performance while you're ahead, and I suppose in general you do stay ahead, as most people don't have continuous random writing going on. But with an SSD that can do 10K random writes a second, GC speed actually becomes the limiting factor when the thing is put under heavy load.
I'd love to know what the random write speed is for e.g. the Intel drive over a prolonged time, say hours or days (until it hits the bottom), with the full disk written and overwritten. It would be nice to be able to reset the drive after such a test, though.
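Something along these lines would show where the bottom is (a rough sketch I haven't actually run against the Intel drive; the device path and size are placeholders): hammer the raw device with O_DIRECT random 4KB writes and print the rate every few seconds, then watch it decay over hours.

/* WARNING: this overwrites the target device. Device path and size are
 * placeholders; this is a sketch, not a polished benchmark. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/sdX";                      /* the SSD under test */
    const off_t dev_size = 32LL * 1024 * 1024 * 1024;  /* assumed 32GB */
    const size_t bs = 4096;

    int fd = open(dev, O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, bs, bs)) return 1;        /* O_DIRECT needs alignment */
    memset(buf, 0xA5, bs);

    long writes = 0;
    time_t last = time(NULL);
    srand(last);
    for (;;) {
        off_t block = rand() % (dev_size / bs);        /* random 4KB-aligned offset */
        if (pwrite(fd, buf, bs, block * bs) != (ssize_t)bs) {
            perror("pwrite");
            break;
        }
        writes++;
        time_t now = time(NULL);
        if (now - last >= 10) {                        /* report every ~10 seconds */
            printf("%ld random 4KB writes/s\n", writes / (now - last));
            writes = 0;
            last = now;
        }
    }
    close(fd);
    return 0;
}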
>But this is all impossible to do if you don't have any
>scratch area. If you are constantly 99% full, there's
>simply no "buffer" to defragment into - you're always just
>having to solve the immediate problem of getting that one
>next erase-block.
Yes, yes, of course. But please explain how more scratch area makes things faster than just having a few GB, besides the fact that a bigger scratch area lets you run more GC in parallel. Easier, probably, but faster?
>More scratch area helps because you can do better block
>allocation when you have more freedom.
E.g. a 1 GB scratch area holds 8K 128KB erase blocks. That seems like enough freedom; how does increasing it help allocate blocks better?
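Putting my own crude numbers on it (assuming uniformly random writes and victim blocks no emptier than the average block, which is pessimistic): the block count is the easy part, but the copy cost per reclaimed erase block scales with the spare fraction rather than the absolute number of spare blocks, which I suppose is where the extra GBs would show up.

/* Crude model: victim erase blocks are assumed no emptier than the average
 * block, with uniformly random writes. Real GCs with hot/cold separation
 * do better; all numbers are illustrative. */
#include <stdio.h>

int main(void)
{
    double erase_block_kb = 128.0;
    double scratch_gb[] = { 1.0, 4.0, 8.0 };
    for (int i = 0; i < 3; i++)
        printf("%g GB scratch = %.0f spare 128KB erase blocks\n",
               scratch_gb[i], scratch_gb[i] * 1024 * 1024 / erase_block_kb);

    /* If a fraction 'spare' of the flash is unused, an average victim block
     * is ~(1 - spare) valid: each erase nets spare*block of free space but
     * costs copying (1 - spare)*block, i.e. roughly 1/spare total writes
     * per host write. */
    double spares[] = { 0.02, 0.10, 0.25 };
    for (int i = 0; i < 3; i++)
        printf("spare fraction %.0f%% -> ~%.0fx total writes per host write\n",
               spares[i] * 100, 1.0 / spares[i]);
    return 0;
}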
> There's a diminishing
>return, of course, and in the end you can never write faster
>than the flash itself can take data, but with a good
>block remapper, you generally should be able to approach
>writing data as quickly as the flash can take it, rather
>than spending all your time erasing and copying old data
>around just to make space for the (small) new data.
The first step has already been taken: getting random writes near raw flash write speed. The next one is having a GC that's fast enough to stay ahead all the time.
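The steady-state condition is simple enough (illustrative numbers only): whatever the GC copies comes out of the same raw program bandwidth the host writes use, so sustained host throughput can't exceed raw bandwidth divided by the write amplification.

/* Sustained-load balance: GC copies (wa - 1) bytes of old data per byte of
 * new host data, all out of the same raw program bandwidth. The bandwidth
 * figure is just an assumed example. */
#include <stdio.h>

int main(void)
{
    double raw_mb_s = 80.0;      /* assumed raw flash program bandwidth */
    for (double wa = 1.0; wa <= 5.0; wa += 1.0)
        printf("write amplification %.0fx -> sustained host writes <= %.0f MB/s\n",
               wa, raw_mb_s / wa);
    return 0;
}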