Neural Network Coding Workshop
Details
Join us for our monthly hands-on coding workshop.
We're working through Andrej Karpathy's "Neural Networks: Zero to Hero" series to build a transformer architecture and reproduce GPT-2.
"We start with the basics of backpropagation and build up to modern deep neural networks, like GPT. Language models are an excellent place to learn deep learning, even if you intend to eventually go to other areas like computer vision. Most of what you learn will be immediately transferable."
This Month's Focus
First, we'll review the first two projects in the series:
- micrograd: A tiny autograd engine that implements backpropagation (see the first sketch after this list).
- makemore: An introduction to autoregressive language modeling, starting with a character-level approach, then adding an MLP and building up to a transformer (see the second sketch after this list).
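If you'd like a preview of what micrograd involves, here is a minimal sketch of a scalar autograd value in the same spirit. It is our own illustration, not code from the micrograd repository, and it covers only addition and multiplication:

```python
# A minimal scalar autograd engine in the spirit of micrograd.
# Illustrative only: the real micrograd supports more operations.

class Value:
    """A scalar that remembers how it was computed, for backpropagation."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # set by the op that produced this node
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))

        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))

        def _backward():
            self.grad += other.data * out.grad  # chain rule: d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()

        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)

        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()


# d(a*b + a)/da = b + 1 = 4 and d(a*b + a)/db = a = 2
a, b = Value(2.0), Value(3.0)
c = a * b + a
c.backward()
print(a.grad, b.grad)  # 4.0 2.0
```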
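And here, reduced to a toy, is the very first step makemore takes: counting character bigrams and sampling names from the counts. The five example names below are placeholders; makemore itself trains on a much larger names dataset:

```python
# A toy character-level bigram model in the spirit of makemore's first step.
import random
from collections import Counter, defaultdict

words = ["emma", "olivia", "ava", "isabella", "sophia"]  # toy dataset

# Count bigram transitions; '.' marks the start and end of a name.
counts = defaultdict(Counter)
for w in words:
    chars = ["."] + list(w) + ["."]
    for c1, c2 in zip(chars, chars[1:]):
        counts[c1][c2] += 1

def sample_name():
    """Sample one name by walking the bigram transition counts."""
    out, ch = [], "."
    while True:
        nxt = counts[ch]
        ch = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if ch == ".":
            return "".join(out)
        out.append(ch)

random.seed(0)
print([sample_name() for _ in range(3)])
```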
Then we'll introduce the final project, nanoGPT, which ties everything together in one package.
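To preview where the series ends up: the heart of nanoGPT (and GPT-2) is causal self-attention. The sketch below shows a single attention head under our own simplified naming; nanoGPT's real implementation fuses the heads into one module and adds output projections, dropout, layer norm, and an MLP around it:

```python
# A sketch of one causal self-attention head, the core block that
# nanoGPT stacks into GPT-2. Illustrative only; not nanoGPT's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttentionHead(nn.Module):
    def __init__(self, n_embd, head_size, block_size):
        super().__init__()
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        # Lower-triangular mask: position t may only attend to positions <= t.
        self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape  # batch, time, embedding dim
        k, q, v = self.key(x), self.query(x), self.value(x)
        # Scaled dot-product attention scores.
        att = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)  # (B, T, T)
        att = att.masked_fill(self.mask[:T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        return att @ v  # weighted sum of values, (B, T, head_size)

# Usage: 4 tokens with 32-dim embeddings through a 16-dim head.
head = CausalSelfAttentionHead(n_embd=32, head_size=16, block_size=8)
out = head(torch.randn(1, 4, 32))
print(out.shape)  # torch.Size([1, 4, 16])
```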
Finally, we'll check in on Eureka Labs to see whether any more code has been released for LLM101n, the next level of this course.
Preparation
Before the workshop, please:
- Watch the first six lessons of the "Neural Networks: Zero to Hero" series.
- Try to code up "micrograd" and "makemore" independently.
Prerequisites
- Solid programming skills in Python
Everyone is welcome!
Don't hesitate to reach out if you have any questions.
Every 4th Tuesday of the month until December 24, 2024