By: Andrey (andrey.semashev.delete@this.gmail.com), January 30, 2023 5:04 pm
Room: Moderated Discussions
--- (---.delete@this.redheron.com) on January 28, 2023 10:37 am wrote:
> rwessel (rwessel.delete@this.yahoo.com) on January 28, 2023 8:41 am wrote:
> > Chris G (chrisg.delete@this.chrisg.com) on January 28, 2023 3:24 am wrote:
> > > None of the Sapphire Rapids SKUs with High Bandwidth Memory (HBM) have In-memory Analytics Accelerators
> > > or QuickAssist Technology accelerators enabled. None of the SKUs with HBM support Intel On Demand
> > > so there is no way for customers to pay extra for these accelerators even if they want to. The
> > > SKUs with HBM have recommended customer prices of $8k (32 cores) to $13k (56 cores). Intel should
> > > enable all of the on-die accelerators on expensive parts like these. The silicon area of the dedicated
> > > accelerators is small so their effect on yield is insignificant.
> > >
> > > The Intel On Demand scheme is penny wise and pound foolish because it will accelerate adoption of
> > > AMD and ARM processors. This is the type of dumb idea that gets implemented when marketing people
> > > are in charge. Intel probably spent a similar amount of engineering implementing their scheme for
> > > metered usage of on-die accelerators as they spent on implementing the actual accelerators.
> >
> >
> > Without commenting on desirability and ethics, IBM, for example, has long done things like that with
> > the big POWER boxes and Z. On Z you can even do things like short term memory and CPU upgrades (mostly
> > non-disruptively too), so you can handle things like seasonal workload spikes or special situations.
> > You can even get a boost while booting/IPLing the system. The situations aren't perfectly parallel,
> > but there's clearly people out there who are willing to buy into that sort of thing.
>
> I think that seeing these things through a lens of "ethics" (which usually boils down
> to "I don't want to pay more for better stuff" is not useful or informational.
No, it's "I don't want to pay twice for the same thing". Because you already paid for a fully functional piece of silicon when you bought the CPU. Plus, you paid extra on top of that to compensate for the cost of this licensing system. A useless licensing system.
> If you want to engage in market segmentation, I think the trick is to do it by *capacity*, not by *capability*.
> Give everyone AVX512 (and BNNI, and the accelerators, and the rest of it) but either
> - at the low end make them work, but not as fast as at the high end. (AVX512 double- or even quad-pumped) OR
> - insert some sort of silly counter so that the user can essentially get fast
> "home use" of AVX512/accelerator for up to X billion instructions per day after
> which they are throttled, plus an On Demand key to end the throttling.
Unpredictable behavior makes technology unusable. And yes, "working fast for up to N runs" counts as unpredictable, because no user knows how many instructions of a given kind an application executes, or in what usage scenario. Nor should they have to care about that sort of thing.
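Just to put very rough numbers on how quickly such a cap would be hit (a back-of-envelope sketch with numbers I'm assuming, not anything Intel has published): a single server core with two 512-bit FMA pipes at 3 GHz retires on the order of 6 billion AVX-512 instructions per second, so even a cap of hundreds of billions of instructions per day would be gone within minutes of sustained use on one core.

# Back-of-envelope sketch with assumed numbers (nothing here comes from Intel):
# how long a hypothetical "X billion AVX-512 instructions per day" cap would last.
fma_pipes_per_core = 2      # assumed: two 512-bit FMA units on a server P-core
clock_hz = 3.0e9            # assumed: 3 GHz sustained clock
daily_cap = 500e9           # hypothetical cap of 500 billion instructions per day

insns_per_second = fma_pipes_per_core * clock_hz    # ~6e9 per core
seconds_to_cap = daily_cap / insns_per_second       # ~83 seconds on one core
print(f"Hypothetical daily cap exhausted after ~{seconds_to_cap:.0f} s of sustained use")

And that's per core; a full 56-core socket would burn through such a budget in a second or two, which is exactly why no ordinary user can be expected to reason about, let alone budget for, a limit like that.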
Intel On Demand is nothing but a cash grab, regardless of what kind of artificial limitations it imposes. I don't see how it can possibly widen the ecosystem. What does grow the ecosystem is more useful technology, more performance, at lower cost. It's when you can't deliver on those fronts that you start inventing ways to grab some cash here and now, before you sail off with your golden parachute. IMHO, Intel hasn't changed enough under Pat.