By: x (x.delete@this.example.com), June 30, 2013 3:54 am
Room: Moderated Discussions
In end-user systems like desktops, workstations and mobile phones, being able to run ANY workload decently is valuable in itself: people's tastes change a lot, and one may suddenly get curious about stuff like 3D modelling or raytracing that requires FP, or simply run "random FP code", which does exist (e.g. in JavaScript and Lua everything is FP by default).
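As a quick aside, here is a minimal illustration of the "everything is FP by default" point, written as TypeScript (the same holds for plain JavaScript, and Lua at the time had only a double-precision number type):

// Every number in JavaScript/TypeScript is an IEEE-754 double, so even
// innocent-looking "integer" code goes through FP (unless the JIT can
// prove otherwise and take an integer fast path).
const n: number = 1;
console.log(n / 2);        // 0.5 -- there is no integer division
console.log(0.1 + 0.2);    // 0.30000000000000004 (double-precision rounding)
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1); // true: precision runs out past 2^53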
And then, once we establish that the FPU is present, making it good is natural.
In servers, I guess the reason is that microservers don't have economies of scale yet, while big servers are not price-sensitive and in some cases also need to "run any workload" (think of servers rented out for unknown uses); they need a lot of OoO machinery and cache anyway, so the cost of an FPU is a small part of the total.
If massively multicore microservers become popular and start being used by Google & co., then THESE might not have an FPU.
That is, unless widespread AI comes first, and uses FP-based algorithms.