Machine Learning in C++ #191629
Replies: 3 comments
Hi, this is a really interesting project. Building tensors and autograd from scratch in C++ is a great way to understand ML internals. One idea you could explore is adding a small benchmarking suite to compare the performance of different tensor operations, especially memory allocation patterns. It could also be interesting to experiment with parallelization (e.g. OpenMP) for basic ops. From an infrastructure perspective, setting up CI (GitHub Actions) with tests and benchmarks could make it easier to track improvements over time. Happy to follow the progress; looks like a great learning project.
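To make the OpenMP suggestion concrete, here is a minimal sketch of a parallelized elementwise add. The function name and buffer layout are illustrative, not from the repo; the pragma is simply ignored if the compiler is invoked without OpenMP support (`-fopenmp`), so the code stays portable either way.

```cpp
#include <cstddef>
#include <vector>

// Elementwise add over two equally sized buffers.
// With -fopenmp the loop iterations are split across threads;
// without it, the pragma is a no-op and the loop runs serially.
std::vector<float> add(const std::vector<float>& a, const std::vector<float>& b) {
    std::vector<float> out(a.size());
    // OpenMP traditionally wants a signed loop variable.
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(a.size()); ++i)
        out[i] = a[i] + b[i];
    return out;
}
```

Worth noting: for small tensors the thread-spawn overhead usually outweighs the speedup, which is exactly the kind of thing a benchmarking suite would reveal.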
This is a really nice project. Building tensors and autograd in C++ is a great way to understand how ML works internally. Some simple ideas you could try next:
For performance:
For improvements:
Overall, this is a great learning project and will help you understand ML at a deeper level. Keep going!
Building autograd from scratch in C++ is a solid exercise. A few specific things worth exploring next:
- **Gradient checking**: before adding more features, implement numerical gradient verification. Compare your autodiff result against a central-difference approximation of each partial derivative.
- **Topological sort for the backward pass**: if your graph traversal isn't already sorted, you'll get wrong gradients on non-trivial computation graphs. Most autograd bugs trace back to this.
- **Expression templates (CRTP)**: once you're happy with correctness, look into lazy evaluation. Keeping tensor ops lazy lets you fuse them before allocating, which removes a lot of temporary allocations in the hot path. Eigen uses this heavily and is worth reading for the pattern.
- **Arena allocator for temporaries**: instead of heap-allocating every intermediate tensor, carve them out of a preallocated buffer that you reset after each step.

One thing to avoid: don't try to optimize before you have a working end-to-end training loop. Get XOR working with a 2-layer MLP first. If your gradients are right there, they're probably right everywhere. Then profile before guessing where the bottlenecks are.
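The gradient-checking idea above can be sketched as a small standalone helper. This is a generic illustration, not code from the repo: it assumes a scalar-valued function of a flat parameter vector and compares the analytic gradient against the central difference (f(x+eps) - f(x-eps)) / (2*eps) per coordinate, with a relative tolerance.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Returns true if the analytic gradient matches the central-difference
// numerical gradient at x, coordinate by coordinate. Hypothetical
// helper for illustration; eps and tol are typical default choices.
bool check_gradient(const std::function<double(const std::vector<double>&)>& f,
                    const std::vector<double>& x,
                    const std::vector<double>& analytic,
                    double eps = 1e-5, double tol = 1e-4) {
    for (std::size_t i = 0; i < x.size(); ++i) {
        std::vector<double> xp = x, xm = x;
        xp[i] += eps;
        xm[i] -= eps;
        double numeric = (f(xp) - f(xm)) / (2.0 * eps);
        // Relative tolerance so large and small gradients are treated fairly.
        if (std::abs(numeric - analytic[i]) > tol * (1.0 + std::abs(analytic[i])))
            return false;
    }
    return true;
}
```

For example, with f(x) = sum of squares the analytic gradient is 2x, and the check should pass for that and fail if any component is perturbed. Running a check like this after every new op you add catches backward-pass bugs immediately.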
🏷️ Discussion Type
Question
Body
Hey everyone,
I’m currently building a C++ machine learning framework — nothing too serious, just a fun project to understand things like tensors, memory handling, and autograd at a deeper level. I’d really appreciate any suggestions on what features, ideas, or improvements I should explore next. Whether it’s design decisions, performance tricks, or even things to avoid, I’m open to all of it.
If anyone’s interested in contributing, discussing ideas, or just experimenting together, feel free to jump in. Always happy to learn from people who’ve worked on similar systems.
🔗 Repo: https://github.com/spandan11106/GradCore-Tensor.git