• Draconic NEO@lemmy.dbzer0.com · 2 points · 7 days ago

    As many people are saying, that's 2.6 W per core, which is actually very good.

    My laptop's Intel CPU (a Core i7-9750H) has a TDP of 45 W, which doesn't sound as bad until you realize it only has 6 cores, meaning it uses 7.5 W per core. Multiplying that by this CPU's core count gives 1440 W, which is what this chip would draw if it were as inefficient as my Intel one. And that's a conservative estimate, since it assumes my CPU is as efficient as Intel claims the i7-9750H is; it might actually be much worse, considering how hot this laptop gets, especially when gaming (though I no longer game on this laptop for that reason).
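    The arithmetic above can be sketched in a few lines (assuming a 500 W TDP and 192 cores for the EPYC, figures implied by the 2.6 W/core number but not stated outright in the thread):

```python
# Back-of-envelope per-core power comparison.
# The EPYC's 500 W TDP and 192-core count are assumptions
# inferred from the 2.6 W/core figure quoted above.
epyc_tdp_w, epyc_cores = 500, 192
i7_tdp_w, i7_cores = 45, 6  # Intel Core i7-9750H

epyc_per_core = epyc_tdp_w / epyc_cores  # about 2.6 W per core
i7_per_core = i7_tdp_w / i7_cores        # 7.5 W per core

# What a 192-core chip would draw at the i7's efficiency:
scaled_w = i7_per_core * epyc_cores      # 1440 W
print(epyc_per_core, i7_per_core, scaled_w)
```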

    Bottom line: this is a very efficient CPU, but it's also an insanely overpowered one that most people will never use or need. Only datacenters and extremely dedicated power users need a CPU anywhere near this powerful.

  • lud@lemm.ee · 1 point · 9 days ago

    384 threads is fucking bizarre!

    I would like to see the fans needed to cool this in a 2U or even 1U case. They must be comparable to a leaf blower…

  • AdrianTheFrog@lemmy.world · 0 points · 9 days ago

    Only half of a toaster (in the US at least)

    It's a nice, easy unit to compare against, because all of the ones I've seen draw basically exactly 1000 watts.

    It's also less than double what my desktop draws (11700K, RTX 3060), and those aren't particularly demanding components; I also only get 16 threads. That works out to basically exactly 6x more power draw per core, although the cores themselves perform differently, of course.

    It is slightly silly to have that many cores tho. I guess the main reasons not to just use a GPU would be that PCIe doesn't have enough bandwidth, or that you need a ton of RAM. For a pure compute application, I don't think there are many cases where a GPU isn't the obvious choice when you're going to have almost 400 threads anyway. An A100 has half the TDP, and there's no way the EPYC chip can even come close in performance (even if you assume the CPU can use AVX-512 while the GPU can't use its tensor cores, the EPYC having about a third of the memory bandwidth isn't exactly encouraging about the level of peak compute they're expecting).
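    The "about a third of the memory bandwidth" claim can be sanity-checked with ballpark numbers (both figures below are my assumptions, not from the thread: roughly 1.55 TB/s for an A100 40 GB, and roughly 576 GB/s for 12 channels of DDR5-6000 at about 48 GB/s each):

```python
# Rough peak-memory-bandwidth comparison; both numbers are
# approximate spec-sheet figures, not measurements.
a100_bw_gbs = 1555  # NVIDIA A100 40 GB, HBM2e
epyc_bw_gbs = 576   # 12-channel DDR5-6000, ~48 GB/s per channel

ratio = epyc_bw_gbs / a100_bw_gbs
print(round(ratio, 2))  # roughly a third, as the comment says
```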

    • pivot_root@lemmy.world · 1 point · edited · 9 days ago

      Another application of these things is virtualization. Throw in 3 × 4 TB NVMe SSDs, 384 GB of memory, and a 25G NIC.

      Off a single unit, you could sell 12 VPS instances, each with 16 cores, 32 GB of memory, 1 TB of storage, and a guaranteed 1.5 Gbps link.
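      A quick sanity check that 12 such instances actually fit on one box (assuming 192 physical cores, as implied by the 384-thread figure upthread; the host spec is otherwise taken from this comment):

```python
# Does 12 x (16 cores, 32 GB RAM, 1 TB disk, 1.5 Gbps) fit
# on one host? Host specs are from the comment above; the
# 192-core count is an assumption based on 384 threads.
host = {"cores": 192, "ram_gb": 384, "storage_tb": 12, "net_gbps": 25}
vps = {"cores": 16, "ram_gb": 32, "storage_tb": 1, "net_gbps": 1.5}
n = 12

for resource in host:
    used = n * vps[resource]
    assert used <= host[resource], resource
    print(f"{resource}: {used}/{host[resource]}")
```

    Cores, RAM, and storage are all exactly fully allocated; only the NIC has headroom (18 of 25 Gbps), which is what lets the 1.5 Gbps figure be "guaranteed" rather than oversubscribed.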

    • cabb@lemmy.dbzer0.com · 1 point · 8 days ago

      Maybe it's meant for hyperscalers, who will rent it out to customers in smaller units of, say, 16, 32, and 64 core instances.

  • Richard@lemmy.world · 0 points · 9 days ago

    Why do CPUs this power-hungry exist? I can barely stand the thought that my MODERN laptop sucks up to 40 W under heavy load.

    • pivot_root@lemmy.world · 1 point · 9 days ago

      That’s an EPYC. It’s a datacenter CPU, and it’s priced accordingly. Nobody uses these at home outside of hardcore homelab enthusiasts with actual rack setups.