By: iz (indan.delete@this.nul.nu), January 9, 2009 7:49 pm
Room: Moderated Discussions
Linus Torvalds (torvalds@linux-foundation.org) on 1/7/09 wrote:
---------------------------
>- almost any block mapping will be simplified by bigger
>extents.
>
>Result: especially after running benchmarks that just
>do small random writes for a long time, the block
>remapping tables will be maximally fragmented and have
>just single-block extents.
>
>This will likely cause a performance dip because the
>remapping tables don't fit in the RAM caches of the
>controller, so it will end up doing more lookups to the
>flash.
Every random write can cause a remapping table change, which is basically a random write as well. Larger extents reduce your theoretical maximum random write performance (that, or they waste a lot of space).
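To make that concrete, here's a toy sketch (plain C, made-up structures; real controllers certainly differ) of what a single-block random write does to an extent-based mapping table when it lands in the middle of a big extent:

    #include <stdint.h>

    /* Hypothetical extent entry: a run of logical blocks mapped
     * to a run of physical flash blocks. */
    struct extent {
        uint32_t logical;   /* first logical block */
        uint32_t physical;  /* first physical block */
        uint32_t count;     /* run length in blocks */
    };

    /* One-block write at 'lba', redirected to 'new_phys'.  The
     * containing extent splits into up to three entries, and the
     * table itself has to be rewritten -- yet another write. */
    static int split_extent(const struct extent *e, uint32_t lba,
                            uint32_t new_phys, struct extent out[3])
    {
        int n = 0;
        uint32_t off = lba - e->logical;

        if (off > 0)            /* untouched blocks before the write */
            out[n++] = (struct extent){ e->logical, e->physical, off };
        out[n++] = (struct extent){ lba, new_phys, 1 };
        if (off + 1 < e->count) /* untouched blocks after the write */
            out[n++] = (struct extent){ lba + 1, e->physical + off + 1,
                                        e->count - off - 1 };
        return n;
    }

Run the random-write benchmark long enough and every extent ends up single-block: the table is as big and as slow to search as it can possibly get.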
>It's quite possible that the flash remapping layer will
>end up running extra GC cycles at points to avoid this
>worst-case situation, and that will obviously show up
>as a performance dip.
>
>The good news is that while "random write performance" is
>actually meaningful, it's very seldom the case that it's
>dominant (ie it's important because it happens occasionally,
>not because it's a common case!)
Most writes are random writes, at least on my system. And the writes that aren't random don't have to be quick either.
>So in most use, you'd have a mix of small random writes
>and larger contiguous ones, and the realistic situation is
>that the remapping never gets really bad - at least not as
>bad as the extreme benchmarks make it.
I'm not really convinced of this. It may take a bit of time, but you'll get there eventually.
>The other issue is:
>
>- garbage collection is much easier if you have
>lots of free space.
I don't see how. Some free space is needed, sure, but not that much. Garbage collection just isn't needed as long as there's free space left...
>This is to some degree the bigger issue. It's also a
>possible "value add" issue, ie I would actually expect
>flash disk manufacturers to start differentiating their
>drives based on "performance vs capacity".
>
>You can effectively make a higher-performance drive by
>leaving more of the drive for internal use, in order to
>make GC be smoother/faster. For example, when the Intel
>drives are 80GB, that means that they really have 80GiB
>(binary) of flash, but only expose 80GB (decimal) of it
>as disk, so you have about 6GB of "free flash" to do
>GC with.
>
>Now, you could actually sell the exact same drive
>with a capacity of just 75GB, and you'd essentially have
>doubled your "scratch area" to do GC in. End result:
>smoother garbage collection with fewer GC spikes.
The only way that's possible is by allowing more space to be wasted, or in other words, by doing less garbage collection and allowing more fragmentation.
More scratch area doesn't help a bit as far as I can see, it's just more space wasted on fragmentation. If by "scratch area" you mean free space to get clean blocks from, then it only postpones the need to do GC.
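To spell out the arithmetic behind the quoted numbers: 80 GiB of raw flash is 80 x 2^30 ≈ 85.9 x 10^9 bytes, while the exposed 80 GB is 80 x 10^9 bytes, so roughly 5.9 GB (about 7%) is held back. Exposing only 75 GB would hold back about 10.9 GB, indeed close to double.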
>The second issue shows up when you do any writes to disk:
>out of the factory, the remapping tables are likely all
>empty, so the disk can actually use all of the drive
>as a scratch area, and thus have a much easier time doing
>GC.
You mean no GC has to happen at all.
>But once you've written to the drive enough, it will only
>have that small (well, not so small) 6GB scratch area. See
>how that goes? And this is actually totally independent of
>whether you did small or large writes, although large
>writes will make GC easier in general, so the relative
>performance degradation will probably hit the smaller
>writes more.
Yes and no. If the disk is filled with big writes, not much space is wasted on fragmentation and no GC is needed. Overwriting existing full (erase) blocks means the old ones can be reused without defragmentation.
If you fill the disk with random writes, those writes are either painfully slow, or as fast as streaming writes, in which case a hell of a lot of remapping has happened. When you start to partially overwrite existing blocks, a lot of fragmentation happens and GC is needed to find new blocks to write the new data to.
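A toy model of the erase-block bookkeeping (hypothetical C, nothing like real firmware) shows why the partial overwrite is the expensive case:

    #include <stdint.h>
    #include <string.h>

    #define PAGES_PER_ERASE_BLOCK 64

    /* Hypothetical per-erase-block state.  Flash is erased in
     * whole blocks, so overwriting a page only marks the old
     * copy stale; the new data goes somewhere else. */
    struct erase_block {
        uint8_t valid[PAGES_PER_ERASE_BLOCK]; /* 1 = live data */
        int     live;                         /* live page count */
    };

    /* Big sequential overwrite: every page goes stale at once,
     * so the block can be erased and reused directly, with no
     * copying at all. */
    static void overwrite_whole_block(struct erase_block *b)
    {
        memset(b->valid, 0, sizeof(b->valid));
        b->live = 0;
    }

    /* Small random overwrite: one page goes stale, the rest
     * stays live.  Before this block can be erased, GC must
     * copy the remaining 'live' pages elsewhere -- that copying
     * is the write amplification being argued about here. */
    static void overwrite_one_page(struct erase_block *b, int page)
    {
        if (b->valid[page]) {
            b->valid[page] = 0;
            b->live--;
        }
    }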
>So yes, performance will drop over time, down to a level
>where it stabilizes.
Theoretically it's limited either by your GC/defragmentation speed or by your remapping speed (you need to update the remapping table as well, which, if you're unlucky, can take as much effort as the tiny random writes themselves).
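A quick worst-case estimate: with 4KB pages and, say, 8-byte mapping entries, a single 4KB random write dirties one entry in some table page; if the controller persists that table page immediately, that's a second 4KB flash write for 4KB of user data. The mapping update alone doubles the write traffic before any GC copying is counted.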
>However, there is some good news: you can actually tell
>the drives to set aside more memory for scratch space.
That in itself won't help much.
>If your OS supports it, and if your filesystem
>is smart enough, it can actually do a "drop data" command
>when you delete files, and rather than remapping those
>blocks, the flash controller can then add them to the
>scratch area.
This really does help a lot, though. But that is telling the drive that blocks can be freely reused, which is slightly different from increasing the scratch space (which sounds more like reducing the drive's capacity).
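For what it's worth, recent Linux kernels grew exactly this hook on the block layer side: a discard request type, reachable from userspace through the BLKDISCARD ioctl. A minimal sketch of poking it by hand (the two-u64 range layout comes from linux/fs.h; a filesystem would issue the equivalent internally when it frees blocks):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKDISCARD */

    /* Tell the device that [start, start + len) holds no useful
     * data any more, so the FTL can reclaim the flash behind it
     * instead of dutifully copying stale blocks around during GC.
     * Careful: this throws the data away for real. */
    int discard_range(int fd, uint64_t start, uint64_t len)
    {
        uint64_t range[2] = { start, len };
        return ioctl(fd, BLKDISCARD, &range);
    }

Whether the drive actually does something useful with the hint is, of course, up to the FTL.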
>And I'm sure that people will improve on the Intel drives.
>I'm not saying that they are perfect. But yes, they do
>degrade a bit until they hit a baseline plateau of write
>performance, but if you look at the numbers, even that
>degraded plateau is a couple of orders of magnitude better
>than rotating media.
Just being much better than rotating media is not good enough.
An FTL and a filesystem do more or less the same thing, and if they merely work alongside each other instead of together, a lot of efficiency is wasted. Having both do a kind of remapping seems silly. The FS doesn't know what kind of reliability it can expect from the FTL, nor does the FTL know what the FS expects. One extreme is that both assume the worst and provide full reliability themselves, which slows things down a lot. The other is that one of them assumes too much, and data can get lost or corrupted.
Moving the FTL into the FS, or at least into the kernel, is one way of solving this. The other is to agree on an abstract enough interface between them which lets both know what to expect and what to do.
You want the drive to be able to do background GC/defragmentation as well as bad-block remapping, so let the controller be responsible for all the remapping. This should take the burden of doing any remapping away from the FS altogether. That implies that overwriting an existing (virtual) block should either fail or succeed, but never leave garbage behind.
As there's no direct relation between physical block location and block addresses anyway, let's go all the way and have the flash drive provide a virtual address space which is much bigger than the real one (say, 64 bits). After all, one of the tricky parts is making all the files fit on a disk. This way files never need to be fragmented any more.
Or not, but throw in a few virtual-address move commands to shuffle data around without actually touching the data, just the remapping table. And a command to get a free chunk of space, so the FTL can decide the address instead of the FTL and the FS making life harder for each other. Something like the sketch below.
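Roughly this kind of command set, say (completely made up; none of it exists in any standard, it's just the proposal written down):

    #include <stdint.h>

    /* Hypothetical FTL interface as seen by the FS. */
    struct ftl_ops {
        /* Let the FTL pick where 'len' blocks live and hand the
         * FS a virtual address, so allocation policy sits where
         * the real layout is known. */
        int (*alloc)(uint64_t len, uint64_t *vaddr);

        /* Remap [src, src + len) to [dst, dst + len) by editing
         * only the mapping table; no data is copied, so the FS
         * can shuffle or "defragment" files for free. */
        int (*move)(uint64_t src, uint64_t dst, uint64_t len);

        /* Declare a range dead so GC can simply drop it (the
         * "drop data" command from earlier in the thread). */
        int (*discard)(uint64_t vaddr, uint64_t len);
    };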
Or, as sequential reads/writes are still and always will be faster than random ones, let the FS have control over placement. That implies doing the FTL in the FS, or near it. Or add commands to allocate sequential chunks of flash and leave the FTL in the controller.
At least something better is possible than the current guessing around and toe-stepping. One random write can turn into many more random writes: one to update the FTL mapping table, and a few FS-related ones (metadata update, extent update, journalling). The basic way of speeding up writes is just doing fewer of them, and as sequentially as possible.
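Counting it out for one 4KB application write in the bad case: the data block itself, a journal commit, a metadata/inode update, an extent-tree update, plus the FTL's own mapping-table write. That's on the order of five flash writes for one logical write, and batching plus sequential layout attack exactly that multiplier.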
---------------------------