What we’re about
We read Astral Codex Ten, a blog spanning rationality, science, psychiatry, medicine, ethics, genetics, AI, economics, and politics. https://astralcodexten.substack.com/about
You should join if you're interested in any of these topics and want to nerd out with other nerds! You don't have to read the blog to come to our events. Everyone is welcome - please feel free to come even if you feel like you're "not the type of person who normally would fit in at such events".
We mostly have very informal meetups, sometimes including dinner, games, or hiking. Occasionally we have more structured lectures or mini-workshops.
Astral Codex Ten was formerly called Slate Star Codex. The author, Scott Alexander, has written over 1,500 posts. Don't worry, you're not expected to have read all of them!
If you're not sure where to start, check out this compilation of some of his greatest works, put together by one of our meetup members:
https://jasoncrawford.org/guide-to-scott-alexander-and-slate-star-codex
Another compilation can be found here:
https://guzey.com/favorite/slate-star-codex/
See also Scott Alexander's top posts from Slate Star Codex:
https://slatestarcodex.com/top-posts/
Upcoming events (1)
A Conversation with Steve Omohundro
Microsoft New England Research and Development Center, Cambridge, MA
Join the Mind First Foundation for an enlightening conversation with Dr. Steve Omohundro, a pioneering figure in the fields of artificial intelligence and AI safety. The event will begin with an interview, exploring Dr. Omohundro’s groundbreaking early work in AI, with a special focus on his seminal contributions to modern fundamentals of AI safety theory and practice. Dr. Omohundro will then give a presentation on his recent research, titled "Obtaining the Benefits of Powerful AI While Limiting the Risks."
The conversation will then shift to an analysis and panel discussion of some of Steve’s foundational ideas in AI safety, with special focus on the emergence of superintelligent AI, and what that means for humanity.
Key Topics:
- Omohundro’s foundational contributions to the current landscape of AI development and AI safety research
- Rethinking past (and current) assumptions about AI alignment and control
- Examining human values, ideal and real…
- Exploring systems of AI motivation
- Examining AI safety through the lens of evolutionary science
RSVPs are required for this event!