ROCm has made a lot of improvements. $2000 for 48GB of VRAM makes up for any minor performance decrease, compared with spending $2200 or more for 24GB of VRAM with NVIDIA.
ROCm certainly has gotten better, but the weird edge cases remain, and merely getting certain models to run at all is still a problem. I am hoping that RDNA4 is paired with some tooling improvements: no more massive custom container builds, no more versioning nightmares. At my last startup we tried very hard to get AMD GPUs to work, but there were too many issues.
It would cost less if NVIDIA and AMD weren't in an antitrust duopoly. You can get two XTXs for less than $2000, with 48GB of total VRAM.
Unfortunately getting an AI workload to run on those XTXs, and run correctly, is another story entirely.