That RAM is nice, but core count doesn’t say much at this point: there are different cores with different architectures, multithreading, pipelining, caches, speeds, etc.
I’d rather see a TOPS comparison:
M3: claims 18 TOPS
M4: up to 38 TOPS
NVIDIA H100: up to ~3,958 TOPS/TFLOPS (INT8/FP8, with sparsity)
Meta is claiming to have 350,000 H100s, to put things into perspective.
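Back-of-the-envelope, using those vendor peak numbers (theoretical paper peaks, and the H100 figure assumes sparsity):

    # Rough scale comparison from the advertised peak figures above.
    # These are paper peaks, not sustained throughput.
    m4_npu_tops = 38               # Apple's advertised M4 NPU figure
    h100_int8_tops = 3958          # H100 INT8 Tensor Core peak, with sparsity
    meta_h100_count = 350_000      # Meta's claimed fleet size

    fleet_tops = h100_int8_tops * meta_h100_count
    print(f"Fleet peak: ~{fleet_tops / 1e6:,.0f} exa-ops/s")             # ~1,385
    print(f"That's ~{fleet_tops / m4_npu_tops:,.0f} M4 NPUs, on paper")  # ~36,455,263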
I mean, sure, but TOPS from a largely GPU-based part isn’t a great comparison against a CPU+GPU mixture. Most tasks can’t be parallelized that well, so comparing TOPS between an APU and a TPU/GPU is not apples to apples (heh).
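(To put numbers on that intuition: Amdahl’s law is the usual back-of-the-envelope here; the parallel fraction below is invented purely for illustration.)

    # Amdahl's law: speedup is capped by the serial fraction of the workload,
    # no matter how many TOPS the accelerator advertises.
    def amdahl_speedup(parallel_fraction: float, n_units: float) -> float:
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / n_units)

    # A workload that's 90% parallelizable tops out at 10x speedup, ever:
    for n in (8, 128, 10_000):
        print(n, round(amdahl_speedup(0.90, n), 1))   # 4.7, 9.3, 10.0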
Agreed, but my point is that stating “x-core CPU, y-core GPU, z-core NPU” is basically non-information.
CPUs handle general-purpose, branchy logic
GPUs handle wide integer/float matrix math
NPUs handle low-precision matrix math for inference
I’d like to see the TOPS for each of those, instead of a “core count” that tells me nothing about actual performance. Even TOPS figures are only indicative… but they’d be a good start.
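For what it’s worth, peak TOPS is roughly units × ops-per-unit-per-cycle × clock, which is exactly why a bare core count is useless on its own. A sketch with made-up per-core numbers, just to show the formula:

    # Peak TOPS ~= units * ops_per_unit_per_cycle * clock_GHz / 1000
    # The per-unit throughputs and clocks below are placeholders, not real specs.
    def peak_tops(units: int, ops_per_cycle: int, clock_ghz: float) -> float:
        return units * ops_per_cycle * clock_ghz / 1000.0

    # Two hypothetical "16-core" parts, orders of magnitude apart:
    print(peak_tops(units=16, ops_per_cycle=32, clock_ghz=3.5))    # narrow CPU cores: ~1.8 TOPS
    print(peak_tops(units=16, ops_per_cycle=4096, clock_ghz=1.0))  # wide matrix/NPU cores: ~65 TOPS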