ash-iiiiish/master-repo-maths-cs-ai-compendium

 
 

Master Repo for: Maths, CS & AI Compendium

Read online: henryndubuaku.github.io/maths-cs-ai-compendium

Overview

Most textbooks bury good ideas under dense notation, skip the intuition, assume you already know half the material, and quickly become outdated in fast-moving fields like AI. This is an open, unconventional textbook covering maths, computing, and artificial intelligence from the ground up, written for curious practitioners who want to deeply understand the material, not just survive an exam or interview.

Background

Over my years working in AI/ML, I filled notebooks with intuition-first, real-world-grounded, no-hand-waving explanations of maths, computing, and AI concepts. In 2025, a few friends used these notes to prepare for interviews at DeepMind, OpenAI, Nvidia, and others. They all got in and are performing well in their roles; meanwhile, I got into Y Combinator last year. So I'm sharing the notes with everyone.

Outline

| # | Chapter | Summary | Status |
|---|---------|---------|--------|
| 01 | Vectors | Spaces, magnitude, direction, norms, metrics, dot/cross/outer products, basis, duality | Available |
| 02 | Matrices | Properties, special types, operations, linear transformations, decompositions (LU, QR, SVD) | Available |
| 03 | Calculus | Derivatives, integrals, multivariate calculus, Taylor approximation, optimisation and gradient descent | Available |
| 04 | Statistics | Descriptive measures, sampling, central limit theorem, hypothesis testing, confidence intervals | Available |
| 05 | Probability | Counting, conditional probability, distributions, Bayesian methods, information theory | Available |
| 06 | Machine Learning | Classical ML, gradient methods, deep learning, reinforcement learning, distributed training | Available |
| 07 | Computational Linguistics | Syntax, semantics, pragmatics, NLP, language models, RNNs, CNNs, attention, transformers, text diffusion, text OCR, MoE, SSMs, modern LLM architectures, NLP evaluation | Available |
| 08 | Computer Vision | Image processing, object detection, segmentation, video processing, SLAM, CNNs, vision transformers, diffusion, flow matching, VR/AR | Available |
| 09 | Audio & Speech | DSP, ASR, TTS, voice & acoustic activity detection, diarisation, source separation, active noise cancellation, WaveNet, Conformer | Available |
| 10 | Multimodal Learning | Fusion strategies, contrastive learning, CLIP, VLMs, image/video tokenisation, cross-modal generation, unified architectures, world models | Available |
| 11 | Autonomous Systems | Perception, robot learning, VLAs, self-driving cars, space robots | Available |
| 12 | Graph Neural Networks | Geometric deep learning, graph theory, GNNs, graph attention, graph transformers, 3D equivariant networks | Available |
| 13 | Computing & OS | Discrete maths, computer architecture, operating systems, concurrency, parallelism, programming languages | Available |
| 14 | Data Structures & Algorithms | Big O, recursion, backtracking, DP, arrays, hashing, linked lists, stacks, trees, graphs, sorting, binary search | Available |
| 15 | Production Software Engineering | Linux, Git, codebase design, testing, CI/CD, Docker, model serving, MLOps, monitoring, best ways to use coding agents | Available |
| 16 | SIMD & GPU Programming | C++ for ML, how frameworks work, hardware fundamentals, ARM NEON/I8MM/SME2, x86 AVX, GPU/CUDA, Triton, TPUs, RISC-V, Vulkan, WebGPU | Available |
| 17 | AI Inference | Quantisation, efficient architectures, serving and batching, edge inference, speculative decoding, cost optimisation | Available |
| 18 | ML Systems Design | Systems fundamentals, cloud computing, distributed systems, ML lifecycle, feature stores, A/B testing, recommendation/search/ads/fraud design examples | Available |
| 19 | Applied AI | AI in finance, healthcare, proteins, drug discovery | Coming |
| 20 | Bleeding Edge AI | Quantum ML, neuromorphic ML, decentralised AI, datacenters in space, brain-machine interfaces | Coming |

Foreword

A newborn's brain is a newly initialised neural network that trains on real-world data and experience through adulthood and beyond. Exceptional command of French with a flawless accent implies sustained exposure to exceptional French spoken with a flawless accent. Similarly, great AI researchers and engineers with excellent problem-solving skills imply quality knowledge consumed and rich experience gained.

Kvashchev's experiment, a long-term Serbian study, demonstrated that intensive three-year training in creative problem-solving can significantly boost intelligence, particularly fluid intelligence, adding 10-15 IQ points. There is such a thing as a naturally high IQ, similar to how good weight initialisations yield better training, as nature-vs-nurture findings show.

However, the only real advantage a high-IQ individual has is the ability to recognise and learn patterns faster. With enough repetition, though, any pattern, and therefore any concept, is learnable. Charles Darwin was considered a very average, if not below-average, student by his teachers and his father. He described himself as not being quick-witted, a "slow processor" who needed time to soak in information.

Between the ages of 3 and 10, I performed well academically, grasping concepts naturally without ever taking notes or revising. I got a bit cocky between 11 and 13 and, sticking to that approach, dropped to the bottom half of an 80-student class. Between 14 and 15, I began reading like a normal student and finished 1st in my final secondary-school semester. Early school curricula reward natural IQ, but real-world talent is powered by quality knowledge consumption and execution intensity.

In fact, most students who perform well academically are simply more studious, yet the academic system is designed for fast learners. This compendium provides a rounded, well-connected flow of knowledge to facilitate better learning for the Darwins of the world. You only need elementary maths and basic Python programming; everything else is picked up along the way. Just read and trust the process!

How To Study Better

In my first semester at university I took 17 modules at once, and my grades suffered for it, so I developed a technique:

Phase 1: Cumulative reading after classes. Read only each slide/note title or header, close the book, then visualise and write an explanation of that concept. Re-read only what you missed, similar to masked-language modelling in machine learning. After the re-read, implement the concept in code. You develop muscle memory for each concept.

Phase 2: Shadow reading before exams. Read only each slide/note subtitle, close the book, then visualise and write an explanation of that concept. As before, re-read only what you missed, then implement the concept in code to cement the muscle memory.
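The masked-recall loop in both phases can itself be sketched in a few lines of Python. This is a hypothetical illustration, not part of the compendium's code: the notes, headers, and the `masked_recall` helper are all invented here, and the self-grading (did you recall the explanation from the header alone, book closed?) is passed in as a plain dict.

```python
import random

# Hypothetical notes: {header: the explanation you wrote for that concept}.
NOTES = {
    "Dot product": "Sum of elementwise products; measures alignment of two vectors.",
    "Gradient descent": "Iteratively step opposite the gradient to minimise a loss.",
    "Bayes' theorem": "P(A|B) = P(B|A) * P(A) / P(B); update beliefs with evidence.",
}

def masked_recall(notes, recalled):
    """Return the headers whose explanations you failed to recall.

    `recalled` maps header -> True/False: your honest self-grade after
    writing an explanation from the header alone, with the book closed.
    Like masked-language modelling, only the misses get "re-read".
    """
    headers = list(notes)
    random.shuffle(headers)  # avoid memorising the order instead of the concepts
    return [h for h in headers if not recalled.get(h, False)]

# One self-graded pass: two concepts stuck, one did not.
misses = masked_recall(NOTES, {"Dot product": True, "Gradient descent": True})
print("Re-read:", misses)
```

Running the pass above flags only "Bayes' theorem" for a re-read; on the next pass you would grade just that header again, shrinking the list until nothing is missed.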

This worked really well for friends of mine who were not very confident. In fact, one of them beat me in the advanced engineering mathematics module, where we covered Hessians and optimisation. She works at a big oil & gas firm today. The willingness of the soul matters more than the body we are working with (the Rosenthal experiment).

About

Become a cracked AI/ML Research Engineer
