Hacker News

> there is also second mover advantage, where they just need to copy-cat the best part of nvidia's ML stack and don't waste time figuring out what works/what doesn't work.

Seems like patents would stop that.



Didn't know Nvidia patented matmul and dot products.


There's a lot more involved than matmul and dot products.


Didn't the Google v Oracle lawsuit end up confirming that you can't patent an API?


AMD doesn't have to implement the CUDA API; they just need to make sure their compute framework works well with PyTorch/TF/MLIR or whatever high-level framework is being used.

CUDA itself will change over time, so there's no reason for AMD to chase CUDA. Nobody writes CUDA kernels by hand; they use high-level frameworks.
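The abstraction this comment is pointing at can be sketched as a device-dispatch registry: the framework routes each op to whichever vendor backend is registered, so user code never mentions CUDA or ROCm directly. This is a hypothetical plain-Python sketch of the idea, not real PyTorch or TF internals; all names here (`register_kernel`, `_kernels`) are made up for illustration.

```python
# Hypothetical sketch of framework-level backend dispatch.
# A vendor (Nvidia, AMD) registers an implementation per (op, device);
# callers only name the op and the device, never the vendor kernel.

_kernels = {}  # (op_name, device) -> implementation


def register_kernel(op, device):
    def wrap(fn):
        _kernels[(op, device)] = fn
        return fn
    return wrap


def _py_matmul(a, b):
    # Plain-Python stand-in for a real GPU kernel.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]


@register_kernel("matmul", "cuda")
def matmul_cuda(a, b):
    # A real framework would launch an Nvidia kernel here.
    return _py_matmul(a, b)


@register_kernel("matmul", "rocm")
def matmul_rocm(a, b):
    # AMD only needs to supply this entry; user code is unchanged.
    return _py_matmul(a, b)


def matmul(a, b, device="cuda"):
    return _kernels[("matmul", device)](a, b)


a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
# Same user call, either backend:
assert matmul(a, b, "cuda") == matmul(a, b, "rocm") == [[19, 22], [43, 50]]
```

The point of the sketch: if the registry (i.e., the framework's op dispatcher) is where integration happens, a second mover only has to fill in its own column of the table, not re-implement the competitor's API surface.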


But CUDA kernels are everywhere, not just in high-level frameworks. Look at DeepSpeed, for example, which is used in training LLMs.


So what? They can be replaced by AMD kernels if there is adequate tooling support.


No, it confirmed reimplementing an API is not copyright infringement.

The patent claims were rejected simply because the Google implementations were written in such a way that the patents did not apply.



