Who we are
The LLM Reading Club is for anyone interested in building both a theoretical intuition of large language models and practical expertise in their development and use. There’s momentum and fun in numbers! 😊 The club will collectively explore some of the canonical books and research papers pertaining to large language models.
The level of prerequisite knowledge needed to maximise benefit will vary depending on the specific book or paper being covered. Generally, the content will be most suitable for those proficient in Python and with some understanding of (or a willingness to quickly learn) the relevant core machine learning or mathematical concepts.
All are welcome - this includes, but is not limited to, data practitioners of all hues, enthusiasts, students, researchers, and professionals.
Upcoming events (4+)
- Hands-On Large Language Models (Ch 1 & 2) - Link visible to attendees
We are discussing the recently released Hands-On Large Language Models by Jay Alammar and Maarten Grootendorst. The book combines the essential theory of LLMs with a practical focus, and is written by two highly regarded experts in the LLM space.
In this meetup we'll review and discuss the first two chapters (a short illustrative code sketch follows at the end of this listing):
- An Introduction to Large Language Models
- Tokens and Embeddings
> Buy book on Amazon
Book overview
Through the book's visually educational nature, readers can learn the practical tools and concepts they need to use these capabilities today. You'll understand how to use pretrained large language models for use cases like copywriting and summarization; create semantic search systems that go beyond keyword matching; and use existing libraries and pretrained models for text classification, search, and clustering.
The book aims to help you:
- Understand the architecture of Transformer language models that excel at text generation and representation
- Build advanced LLM pipelines to cluster text documents and explore the topics they cover
- Build semantic search engines that go beyond keyword search, using methods like dense retrieval and rerankers
- Explore how generative models can be used, from prompt engineering all the way to retrieval-augmented generation
- Gain a deeper understanding of how to train LLMs and optimize them for specific applications using generative model fine-tuning, contrastive fine-tuning, and in-context learning
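To give a flavour of the chapter topics ahead of the session, here is a minimal sketch (not taken from the book) of tokenization and contextual embeddings. It assumes the Hugging Face transformers library, with bert-base-uncased as an illustrative model choice:

```python
# Minimal sketch (not from the book): tokenize text and inspect contextual
# embeddings with the Hugging Face `transformers` library. The model name is
# an illustrative assumption; any small encoder model would do.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

text = "Large language models are built on tokens and embeddings."
tokens = tokenizer.tokenize(text)            # subword tokens, e.g. ['large', 'language', ...]
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding vector per token: (batch, sequence_length, hidden_size)
print(tokens)
print(outputs.last_hidden_state.shape)
```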
- Hands-On Large Language Models (Ch 3 & 4) - Link visible to attendees
We continue our discussion of Hands-On Large Language Models by Jay Alammar and Maarten Grootendorst (see the first listing above for the full book overview). A short illustrative code sketch related to these chapters follows the chapter list.
In this meetup we'll review and discuss the following chapters:
3. Looking Inside Large Language Models
4. Text Classification
> Buy book on Amazon
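As a warm-up for the text classification chapter, here is a minimal sketch (not the book's code) of classifying text with a pretrained model through the Hugging Face pipeline API. The model name is an assumed, commonly used sentiment classifier:

```python
# Minimal sketch (not from the book): text classification with a pretrained
# model via the Hugging Face `pipeline` API. The model name is an assumption.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "A thoroughly enjoyable and practical read.",
    "The chapter was confusing and hard to follow.",
]
for review in reviews:
    result = classifier(review)[0]           # dict with 'label' and 'score'
    print(f"{result['label']:>8}  {result['score']:.3f}  {review}")
```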
- Hands-On Large Language Models (Ch 5 & 6) - Link visible to attendees
We continue our discussion of Hands-On Large Language Models by Jay Alammar and Maarten Grootendorst (see the first listing above for the full book overview). A short illustrative code sketch related to these chapters follows the chapter list.
In this meetup we'll review and discuss the following chapters:
5. Text Clustering and Topic Modelling
6. Prompt Engineering
> Buy book on Amazon
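As a taster for the clustering and topic modelling chapter, here is a minimal sketch (not from the book) of clustering short documents on their sentence embeddings. It assumes the sentence-transformers and scikit-learn packages; the model name and cluster count are illustrative choices:

```python
# Minimal sketch (not from the book): cluster short documents on their
# embeddings. The model name and number of clusters are illustrative choices.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

docs = [
    "How do transformers generate text?",
    "Dense retrieval improves semantic search.",
    "Tokenizers split text into subword units.",
    "Rerankers refine the results of a search engine.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs)              # one vector per document

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(embeddings)

# Print documents grouped by their cluster label
for label, doc in sorted(zip(labels, docs)):
    print(label, doc)
```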
- Hands-On Large Language Models (Ch 7 & 8) - Link visible to attendees
We continue our discussion of Hands-On Large Language Models by Jay Alammar and Maarten Grootendorst (see the first listing above for the full book overview). A short illustrative code sketch related to these chapters follows the chapter list.
In this meetup we'll review and discuss the following chapters:
7. Advanced Text Generation Techniques and Tools
8. Semantic Search and Retrieval-Augmented Generation
> Buy book on Amazon
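To preview the semantic search chapter, here is a minimal sketch (not from the book) of dense retrieval: cosine-similarity search over sentence embeddings. The top passage could then be passed to a generative model as context (the RAG step, omitted here); the sentence-transformers model name is an illustrative choice:

```python
# Minimal sketch (not from the book): dense retrieval for semantic search
# with `sentence-transformers`. The model name is an illustrative assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "Prompt engineering shapes model behaviour through the input text.",
    "Tokenizers convert text into subword tokens.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "How can an LLM answer questions using external documents?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine-similarity search over the corpus; returns the top-k closest passages
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
```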