By: Linus Torvalds (torvalds.delete@this.linux-foundation.org), January 9, 2009 9:53 pm
Room: Moderated Discussions
iz (indan@nul.nu) on 1/9/09 wrote:
>
>Every random write can cause a remapping table change,
>which is basically a random write as well.
No, it's a non-random write if you do it well. The remapping
table doesn't have to be some array - that would be the
last thing you want. You'd make it a smarter extent-based
thing, along with a log of the most recent remapping events,
so that you can write those out to what is basically a
pre-erased streaming area (and then write a more space-
efficient long-term structure when that fills up).
So no, you do not need to be stupid about the FTL at
all. Although a lot of previous-generation flash apparently
is pretty naive about it.
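To make that concrete, here is a minimal sketch in C - with
invented names and sizes, not how any particular device
actually does it - of such a remap log: events get appended
to a pre-erased streaming area, and when it fills up they
get folded into the compact long-term table:

    /* Hypothetical sketch, not any real device's format: remapping
     * events are appended sequentially to a pre-erased log area, so
     * the "random" mapping updates turn into a streaming write.  When
     * the log fills up, it gets folded into the compact extent-based
     * long-term table and a fresh pre-erased log area is started. */
    #include <stdint.h>

    struct remap_event {           /* one log record: lba -> physical */
        uint32_t lba;              /* logical block address */
        uint32_t phys;             /* new physical location */
    };

    struct remap_log {
        struct remap_event *area;  /* points into a pre-erased region */
        uint32_t capacity;         /* events that fit before checkpoint */
        uint32_t used;
    };

    /* Append one remapping event; returns 1 when the log is full and
     * needs to be checkpointed into the long-term extent table. */
    static int remap_log_append(struct remap_log *log,
                                uint32_t lba, uint32_t phys)
    {
        log->area[log->used].lba  = lba;   /* sequential, no erase */
        log->area[log->used].phys = phys;
        log->used++;
        return log->used == log->capacity;
    }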
>Larger extends reduces your theoretical maximum random
>write performance (that or they waste a lot of space).
That makes no sense. An extent-based allocation model
means that if you can do the remapping in bigger blocks,
then that remapping information can be stored more densely
and the lookup can be more efficient. Rather than having
to have a huge translation table for all blocks, you have
a more size-efficient (but yes, more complex) translation
layer.
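As a rough illustration of the density argument (again just
a sketch, not taken from any particular FTL): one entry per
contiguously-mapped extent instead of one entry per block,
with a binary search for lookups:

    #include <stdint.h>
    #include <stddef.h>

    /* One extent maps a contiguous run of logical blocks onto a
     * contiguous run of physical blocks, so a single entry can stand
     * in for thousands of per-block table slots. */
    struct extent {
        uint32_t lba;     /* first logical block of the run */
        uint32_t phys;    /* first physical block of the run */
        uint32_t len;     /* number of blocks in the run */
    };

    /* Binary search over extents sorted by lba.  Returns the physical
     * block for 'lba', or UINT32_MAX if unmapped.  O(log n) in the
     * number of extents, which is much smaller than the number of
     * blocks as long as fragmentation stays under control. */
    static uint32_t extent_lookup(const struct extent *map, size_t n,
                                  uint32_t lba)
    {
        size_t lo = 0, hi = n;

        while (lo < hi) {
            size_t mid = lo + (hi - lo) / 2;

            if (lba < map[mid].lba)
                hi = mid;
            else if (lba >= map[mid].lba + map[mid].len)
                lo = mid + 1;
            else
                return map[mid].phys + (lba - map[mid].lba);
        }
        return UINT32_MAX;
    }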
>Most writes are random writes, at least on my system. The
>writes that aren't don't have to be quick either.
It is true that nice streaming writes tend to not have
nearly the same latency issues as the random ones. But
I'm surprised that you would say "most" writes are random,
because at least in my experience metadata writes do tend
to be pretty random, but in most loads they are not the
bulk of the data.
(Well, metadata updates are often the bulk, but only when
there are almost no other writes going on - under UNIX
filesystems, atime updates are very common, and happen
even when there is only reading going on, just to update
the access times. So yes, they can be the "bulk", but that's
often only when the bulk is very little ;^)
But it clearly does depend on the load.
>>So in most use, you'd have a mix of small random writes
>>and larger contiguous ones, and the realistic situation is
>>that the remapping never gets really bad - at least not as
>>bad as the extreme benchmarks make it.
>
>I'm not really convinced of this. It may take a bit of
>time, but you'll get there eventually.
Well, it's true that you'll get there "eventually" if you
never try to clean things up.
But that's part of what any GC worth its salt should be
doing: not just moving blocks around, but also moving them
around so that the mapping tables become less fragmented.
You want to move the right data around, so that next
time you need an erase block, you've compacted all the
used blocks, and have unused space that is all ready to
be erased.
So a good GC is (a) incremental - so that you don't get
huge spikes when you suddenly have to do a lot and latency
suffers, and (b) compacting - so that you get big free
areas for future erase cycles and don't have to spend all
your time just copying stuff around to make them.
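A toy sketch of what "incremental and compacting" could look
like - the names, the work budget and the victim policy are
all invented here: bounded work per invocation, and the
victim is the erase block with the fewest live blocks, so
every copy also compacts the layout:

    #include <stdint.h>

    #define WORK_BUDGET 4    /* erase blocks handled per GC invocation */

    struct erase_block {
        uint32_t live;       /* still-valid data blocks inside it */
        int      erased;     /* already pre-erased, ready for writes */
    };

    /* Hypothetical device state and helpers. */
    extern struct erase_block blocks[];
    extern uint32_t nr_blocks;
    extern void migrate_live_data(struct erase_block *eb);  /* copy+remap */
    extern void erase(struct erase_block *eb);

    void gc_step(void)
    {
        for (int work = 0; work < WORK_BUDGET; work++) {
            struct erase_block *victim = NULL;

            /* Fewest live blocks wins: copying it out costs the least
             * and frees up a whole erase block. */
            for (uint32_t i = 0; i < nr_blocks; i++) {
                if (blocks[i].erased)
                    continue;
                if (!victim || blocks[i].live < victim->live)
                    victim = &blocks[i];
            }
            if (!victim)
                return;                    /* nothing left to clean */
            if (victim->live)
                migrate_live_data(victim); /* copy out, update mappings */
            erase(victim);
            victim->erased = 1;            /* one more pre-erased block */
        }
    }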
>>- garbage collection is much easier if you have
>>lots of free space.
>
>I don't see how this is. Some free space is needed, sure,
>but not that much. Garbage collection is just not needed
>when there's free space left...
No, no, no.
You seem to really be talking about just a very stupid
garbage collector that doesn't try to improve the block
layout at all, and just desperately tries to find free
blocks - and in that case, to make those free blocks
usable for re-writing, you technically only ever need
one single erase block.
With that really basic and stupid model, you can always
make room for a new write by finding one free block in
your FTL tables, then doing an erase (bigger than the
block size), copying all the non-free blocks that shared
the erase block with the rewritten one to the newly
erased block, writing the new data, and updating the
translations.
Yes, yes, you can do it that way - but if you do, you'll
always actually be erasing and writing much more
than the blocks you actually want to write, since
you'll be copying all the other blocks around it too.
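Spelled out as a sketch - with invented helper names and
example sizes, say a 4kB write block and a 128kB erase block
just for illustration - the cost is easy to see: every 4kB
random write erases and rewrites 128kB, a 32x write
amplification:

    #include <stdint.h>
    #include <string.h>

    #define WRITE_BLOCK   (4 * 1024)          /* example sizes only */
    #define ERASE_BLOCK   (128 * 1024)
    #define BLOCKS_PER_EB (ERASE_BLOCK / WRITE_BLOCK)

    /* Hypothetical low-level helpers. */
    extern void read_live_blocks(uint32_t eb, void *buf);
    extern void flash_erase(uint32_t eb);
    extern void flash_program(uint32_t eb, const void *buf);
    extern void update_translations(uint32_t old_eb, uint32_t new_eb);

    void naive_random_write(uint32_t lba, const void *data,
                            uint32_t old_eb, uint32_t free_eb)
    {
        static uint8_t buf[ERASE_BLOCK];

        read_live_blocks(old_eb, buf);      /* all the data we did NOT
                                               actually want to write */
        memcpy(buf + (lba % BLOCKS_PER_EB) * WRITE_BLOCK,
               data, WRITE_BLOCK);
        flash_erase(free_eb);               /* one full erase cycle */
        flash_program(free_eb, buf);        /* rewrite a whole erase
                                               block for one small write */
        update_translations(old_eb, free_eb); /* every copied block gets
                                                 remapped too */
    }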
But your job is going to be much simpler if the
device isn't nearly full. If you are always at 99%
capacity and only have one free erase block, then for
every single random write, you will have to do a full
erase cycle and rewrite a whole erase block.
Your performance will suck.
And quite frankly, from the performance data I have seen,
this is exactly what the previous-gen flash disks did. It
is why they count their write IOPS in tens of writes per
second - if that.
And that's simply not acceptable.
>>Now, you could actually sell the exact same drive
>>with a capacity of just 75GB, and you'd essentially have
>>doubled your "scratch area" to do GC in. End result:
>>smoother garbage collection with fewer GC spikes.
>
>Only way that is possible is by allowing more space be
>wasted, or in other words, do less garbage collection by
>allowing more fragmentation to happen.
No. You don't do less garbage collection. You do
a better job at it!
If you don't let your drive get 99% full, but always
know you have (say) at least 10-15% free, you can actually
get away from that bad cycle of having to do a full erase
block for every random write. If you have extra space to
play with, you can make a generational GC that doesn't
copy the old data immediately, but instead does an erase
cycle and then sends multiple new writes to the freshly
erased block - because you have extra space.
Then, instead of trying to keep just ahead of the
piper all the time, you try to keep quite a bit ahead, so
that you always have tens (or hundreds) of pre-erased
blocks available - and when you do end up having to copy
old data in your GC, you try to defragment your block
translations at the same time, so that you get new blocks
that you can erase entirely.
But this is all impossible to do if you don't have any
scratch area. If you are constantly 99% full, there's
simply no "buffer" to defragment into - you're always just
having to solve the immediate problem of getting that one
next erase-block.
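One possible shape for the "keep quite a bit ahead" part -
purely a sketch, with invented names and arbitrary
watermarks, and assuming the compacting gc_step() above adds
each block it erases to the free pool:

    #include <stdint.h>

    #define LOW_WATER    16   /* start cleaning below this many free blocks */
    #define HIGH_WATER   64   /* stop once comfortably ahead again */
    #define SLOTS_PER_EB 32   /* write blocks per erase block (example) */

    /* Hypothetical device state and helpers. */
    extern uint32_t free_erase_blocks;     /* pre-erased, ready to write */
    extern void gc_step(void);             /* assumed to grow the pool */
    extern uint32_t pop_free_erase_block(void);
    extern void program_block(uint32_t eb, uint32_t slot, const void *data);

    /* Run from idle/background context so foreground writes rarely
     * have to wait for an erase. */
    void background_refill(void)
    {
        while (free_erase_blocks < HIGH_WATER)
            gc_step();
    }

    void handle_write(uint32_t lba, const void *data)
    {
        static uint32_t cur_eb, cur_slot;

        if (cur_slot == 0)
            cur_eb = pop_free_erase_block();   /* normally instant */
        program_block(cur_eb, cur_slot, data); /* append, no erase here */
        cur_slot = (cur_slot + 1) % SLOTS_PER_EB;

        (void)lba;  /* translation update omitted - see the remap-log
                       sketch near the top of the post */

        if (free_erase_blocks < LOW_WATER)
            background_refill();               /* or kick a worker */
    }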
Is it simple? No. SanDisk is now talking about their new
ExtremeFFS(tm) vs their old TrueFFS(tm), and I'm sure they
spent a lot of effort on all of this. I'm sure the new thing
is much more complex. But the thing is, it's worth it.
>More scratch area doesn't help a bit as far as I can see.
More scratch area helps because you can do better block
allocation when you have more freedom. There's a diminishing
return, of course, and in the end you can never write faster
than the flash itself can take data, but with a good
block remapper, you generally should be able to approach
writing data as quickly as the flash can take it, rather
than spending all your time erasing and copying old data
around just to make space for the (small) new data.
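As a back-of-the-envelope illustration (the numbers are made
up): effective write bandwidth is roughly the raw program
bandwidth divided by the write amplification, which is why
getting the amplification close to 1 is where all the win is:

    #include <stdio.h>

    int main(void)
    {
        double raw_mb_s = 80.0;   /* made-up raw program bandwidth */
        double wa[] = { 32.0, 4.0, 1.2, 1.0 };  /* naive .. well-remapped */

        /* effective write bandwidth ~= raw bandwidth / amplification */
        for (int i = 0; i < 4; i++)
            printf("WA %4.1f -> ~%5.1f MB/s effective\n",
                   wa[i], raw_mb_s / wa[i]);
        return 0;
    }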
Linus