By: Linus Torvalds (torvalds.delete@this.linux-foundation.org), September 11, 2008 7:16 pm
Room: Moderated Discussions
Doug Siebert (foo@bar.bar) on 9/11/08 wrote:
>
>Regular RAM and a small (pre-erased) reserved section of
>flash along with a capacitor would be a much better
>solution.
If you have pre-erased flash, why bother with the whole
charade to begin with?
Guys, this is basic queuing theory. You cannot get write
bandwidth higher than what your flash is able to absorb
(and that includes all the GC and erase cycles necessary)
in the steady state anyway.
No amount of buffering will ever change that.
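To make that concrete, here's a toy queue model in Python (all
numbers invented): the host offers writes faster than the flash can
retire them, and past the warm-up period where the buffer fills, the
accepted write rate equals the flash drain rate no matter how big
the buffer is.

    DRAIN = 100      # bytes/tick the flash can retire (GC included)
    OFFERED = 150    # bytes/tick the host wants to write

    def steady_state(buffer_size, ticks=1_000_000):
        buffered, accepted = 0, 0
        for t in range(ticks):
            take = min(OFFERED, buffer_size - buffered)
            buffered += take
            if t >= ticks // 2:          # measure after warm-up only
                accepted += take
            buffered -= min(buffered, DRAIN)
        return accepted / (ticks - ticks // 2)

    for size in (1_000, 100_000, 10_000_000):
        print(size, steady_state(size))  # ~100.0 for every size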
Anybody who thinks that they can lower latency by just
adding buffers is a moron, because the fact is,
if you "lie" and tell the OS that the write has completed,
it will just write more. Until your buffer is full, and
you have to throttle the writes!
And at that point, you have to expose the real latency
(or worse - much bigger latencies because you then
end up waiting for all of the buffering you did!)
Buffering can only help temporary spikes.
And there is almost certainly no reason for those
temporary spikes, except for simplistic initial SSD
controller implementations.
Once you are clever and careful enough to do a
capacitor-backed scheme that writes back after power failure,
you're already spending more effort and engineering (on a hacky
workaround) than if you had just done incremental GC and
compaction in the first place.
Because once you do incremental compaction and GC, you'll
not need that special reserved pre-erased part of
the flash - because you just make sure that your compaction
and GC stays just ahead of the curve.
That's what realtime GC means, for chrissake!
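As a sketch of what that means (a made-up toy FTL, not any real
controller, and the flash writes that the page copies themselves
cost are ignored for brevity): do a bounded slice of GC on every
host write, so the pool of pre-erased blocks is replenished at
least as fast as it is consumed.

    PAGES_PER_BLOCK = 64
    FREE_TARGET = 4            # erased blocks we try to keep ready
    GC_PAGES_PER_WRITE = 2     # bounded work: at most 2 page copies

    class ToyFTL:
        def __init__(self, free_blocks=8):
            self.free_blocks = free_blocks  # erased, ready blocks
            self.open_pages = 0             # pages left in open block
            self.victim_left = 0            # live pages left to move

        def write_page(self):
            if self.open_pages == 0:        # open a fresh erased block
                self.free_blocks -= 1
                self.open_pages = PAGES_PER_BLOCK
            self.open_pages -= 1
            self.gc_slice()                 # constant work, every write

        def gc_slice(self):
            if self.free_blocks >= FREE_TARGET:
                return                      # ahead of the curve: idle
            if self.victim_left == 0:       # pick a half-live victim
                self.victim_left = PAGES_PER_BLOCK // 2
            self.victim_left -= min(self.victim_left, GC_PAGES_PER_WRITE)
            if self.victim_left == 0:
                self.free_blocks += 1       # victim erased, pool refilled

    ftl = ToyFTL()
    for _ in range(100_000):
        ftl.write_page()
    print(ftl.free_blocks)  # hovers near FREE_TARGET, never hits zero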
So you're all trying to solve the wrong problem. You're
looking at badly designed SSDs (in fairness, it always
takes a few generations for people to learn even the basic
"tricks", so "badly designed" is a bit harsh), and then
you are trying to make these insane workarounds for it,
when the right way to solve it is to just do it better
in the first place, and suddenly the workarounds aren't
needed any more.
See what I'm saying?
Let me put this another way, if it's unclear:
- you can never fundamentally lower the average latency
of one operation below the time that operation takes at
the device's average throughput (ie its size divided by
throughput).
Think about it.
If you could make the write latency lower than that,
the thing that feeds the device would just feed it
more, and if you could absorb that, then your throughput
would be higher than you started out claiming it
was.
- Ergo: assuming latencies are fairly stable (ie the
latencies aren't sometimes low and sometimes very high),
that means that you can never realistically win
by buffering more than the data you can write in one
such "latency time".
- Thinking that you can just make buffers bigger is silly.
The fundamental buffer size is going to be "throughput
times latency time". No more, no less. (Of course, for
implementation reasons you might want to do things like
switch buffers around so you'd do double buffering, but
we're talking about just small multipliers here).
- And people know how to do realtime GC and
compaction.
Another way of saying the same thing: in the absence of
things like mechanical arm movement that introduce
big latency jumps, it sure as h*ll should be entirely
possible to keep latencies stable. You don't need
to have events that suddenly rewrite big portions of the
disk in order to do GC and erase-cycles!
- once your buffers are in the ballpark of the above
"basic size", then trying to return success early is
not going to help anything anyway. If you lie and return
early (and have a complex battery back-up to protect you
from the downsides of lying), you're just going to be
stuck on the next IO instead.
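That last point is easy to see in a toy timeline (queue-depth-1,
invented service time): acking early doesn't change when the data
is actually on flash, it just moves the stall to the next command.

    SERVICE = 1.0    # time the flash really needs per write

    def finish_time(n_writes, early_ack):
        t = 0.0            # host-visible clock
        busy_until = 0.0   # when the media is actually free
        for _ in range(n_writes):
            if early_ack:
                t = max(t, busy_until)    # the stall moved here
                busy_until = t + SERVICE  # "done!" (it isn't)
            else:
                t = max(t, busy_until) + SERVICE
                busy_until = t
        return max(t, busy_until)         # data really on flash

    print(finish_time(1000, early_ack=False))  # 1000.0
    print(finish_time(1000, early_ack=True))   # 1000.0 -- no win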
So having some kind of super-capacitor is fundamentally
stupid. The "claim the IO is done early" is just stupid.
Much better to not lie, and thus not need battery
backup.
So why do people do high-performance controllers that
really do do battery backup etc? I just told you
it was stupid.
A couple of reasons:
- you can absorb load spikes. The big buffers
won't help you under heavy load, but they will help
you under spiky load.
- for rotational media, you can absorb the much longer
latencies for actual seek events. This is especially
true for RAID controllers that have huge bandwidth
over multiple disks (so the "latency times throughput"
number is actually reasonably large).
- again, mainly for rotational media: you can sort the
final requests and actually improve throughput by doing
the IO in a different order. IOW, you can actually
decrease the latencies by seeking less, but you need
lots of data to do that well.
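(That third point is just the classic elevator-scheduling argument.
A minimal sketch with invented track numbers: one sorted sweep cuts
total head travel by a lot compared to arrival order.)

    def seek_distance(order, start=0):
        pos, total = start, 0
        for track in order:
            total += abs(track - pos)
            pos = track
        return total

    requests = [830, 12, 455, 740, 90, 600, 31, 999]
    print(seek_distance(requests))          # 5073 in arrival order
    print(seek_distance(sorted(requests)))  # 999 as one sorted sweep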
But apart from the first one, those reasons are largely not
valid for solid-state media. The first one is true, but
quite frankly, it's much better done somewhere else
than on the disk itself. IOW, you're better off doing it
on the host controller, where the fundamental latencies are
lower, so you get a bigger win (ie for bursty IO, you may
be getting a fake latency to the host controller, not the
one all the way to the disk).
Doing it on the host controller is better for another
reason too: it allows your basic building block (the disk)
to be generic and cheap. Which is what you want. Because
people are going to buy cheap, and the people who want
something extra and are willing to pay for it are still
better off if they can use mass-produced basic building
blocks and then just add some "secret sauce".
(That "standard building block" detail is true for the
other cases too, for that matter, and probably explains
why nobody I know of actually does a disk with battery
back-up).
To recap:
- doing large buffers is stupid, and indicates that you
have way more latency variation than you should have
in a solid-state setup in the first place.
- and doing them on the disk, rather than closer to the
host, is doubly stupid even if you wanted to go for
an extra kicker.
Hmm? I may be wrong, but I don't think I am.
So please stop thinking that large RAM buffers "speed
things up". Because they do nothing of the kind. Yes, they
can smooth out any non-realtime issues you have in the
GC/compaction, but you really should see it as a
"smoothing" thing, not anything else.
And there really should be no reason why SSDs should act
particularly bursty in themselves over much bigger data
sets than the size of the erase block. So you should be
able to size the buffer by roughly doing the max throughput
over an average latency cycle, and adding one erase block
worth of data to that.
With a 250MB/s thing that has 0.1 millisecond average
latencies, you shouldn't need more than a 25kB buffer. Of
course, since the erase block size is probably bigger than
that, do say 64kB or 128kB.
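In code, with the erase block size just a guess:

    throughput = 250e6          # bytes/second
    latency = 0.1e-3            # seconds, ie 0.1 ms
    erase_block = 128 * 1024    # bytes

    print(throughput * latency)                     # 25000.0 -> ~25kB
    print(int(throughput * latency) + erase_block)  # plus one erase
                                                    # block of slack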
And then, if you support multiple outstanding commands, and
since RAM is cheap, you might decide to have separate
buffers for each outstanding command. But at that point,
you're really just wasting buffer space in order to avoid
managing it very carefully and worrying about
completion order etc.
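(Rough cost of that lazy approach, with the queue depth and buffer
size invented: per-command buffers are cheap enough in RAM terms
that nobody would bother being clever.)

    QUEUE_DEPTH = 32
    per_command = 128 * 1024          # one erase block each, say
    print(QUEUE_DEPTH * per_command)  # 4194304 -> 4MiB, mostly idle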
Of course, that all depends on how smooth you can make
your GC and compaction. Maybe you can't do it entirely
incrementally. Maybe you end up having batches of a few
erase blocks... Who knows? The point is, those latencies on
the order of a second really are the fundamental problem,
and they shouldn't exist, and buffering is not the way
to solve it.
Linus