By: Groo (charlie.delete@this.semiaccurate.com), June 30, 2022 12:56 pm
Room: Moderated Discussions
⚛ (0xe2.0x9a.0x9b.delete@this.gmail.com) on June 30, 2022 12:08 am wrote:
> It is possible to achieve program safety of any complexity purely in software, without any special hardware
> support for the safety guarantees, in the design of a secure operating system. Thus, from a theoretical
> viewpoint, it is completely unnecessary to implement any kind of security feature directly in hardware
> (hardware support for capabilities ... or even hardware support for virtual memory protection).
>
> The article's claim that "linear address space as a concept is unsafe at any speed" is false, because
> theory guarantees that there always exists a particular minimum "speed" (i.e: minimum cost, minimum
> slowdown) upwards of which the concept of a linear address space can be used to implement a safety guarantee
> of any particular complexity, via mechanisms implemented purely in software. Obviously, the minimum
> "speed" (i.e: cost, slowdown) depends on the complexity/definition of the safety features.
>
Can you clarify something for me? When you say that safety is achievable purely in software, do you assume that the entire software stack is controlled by a friendly party, i.e., the owner? If not, does your supposition hold when someone runs an intentionally malicious program on top of your fortress of security?
The second bit is about side channels. Does your correct-software ideal account for the misuse of correct, non-flawed behavior in the manner of modern side-channel attacks? If you have a rock-solid software stack and someone can pull the disk encryption keys with a side channel, is it still 'safe'?
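To make the side-channel point concrete: the classic toy illustration (not from the thread; the function and names here are hypothetical) is an early-exit comparison that is logically correct, yet whose amount of work depends on the secret data, so an attacker measuring time can recover a secret prefix by prefix.

```python
def check_password(secret: str, guess: str) -> tuple[bool, int]:
    """Correct equality check, but the comparison count (a proxy for
    timing) leaks how long a prefix of the guess matches the secret."""
    comparisons = 0
    if len(secret) != len(guess):
        return False, comparisons
    for s, g in zip(secret, guess):
        comparisons += 1
        if s != g:  # early exit: work done varies with the data
            return False, comparisons
    return True, comparisons

# A guess sharing a longer prefix with the secret does more work, even
# though the function's logic is flawless.
_, wrong_first_char = check_password("hunter2", "xunter2")
_, wrong_last_char = check_password("hunter2", "hunter9")
assert wrong_first_char < wrong_last_char
```

The usual mitigation is a constant-time comparison (e.g. Python's hmac.compare_digest), which is exactly the sort of fix that lives outside the "is the program logically correct?" question.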
I am not saying you are wrong, just wondering whether you took these and related scenarios into account when you claimed things could be solved in software. For the record, my view is that things can be made 'secure' at most levels against known attack vectors, but currently unknown or unexpected vectors are a different issue.
-Charlie