**A 2-day virtual/in-person workshop intended to teach you how to architect reliable, scalable AI solutions with Ray**
Register at https://bit.ly/sfbay-acm-ray
Refunds are available up to **7 days** before the event; Eventbrite’s fee is nonrefundable.
To view the Introduction to Ray AIR for Scaling AI/ML and Python Workloads lecture on video: https://www.youtube.com/watch?v=XME90SGL6Vs
Distributed computing is becoming increasingly relevant for modern ML systems. OpenAI’s [AI and Compute](https://openai.com/blog/ai-and-compute/) post suggests that the amount of compute needed to train AI models has roughly doubled every 3.5 months since 2012.
However, distributed systems are hard to program. Scaling a Python ML application to a cluster introduces challenges in communication, scheduling, security, failure handling, heterogeneity, transparency, and much more.
This context drove the development of **Ray**: a solution to enable developers to run Python code on clusters without having to think about how to orchestrate and utilize individual machines. You do not need to have a PhD in distributed systems to run large-scale distributed AI training and can instead focus on developing machine learning workloads.
As model sizes grow, distributed computing has become a foundational underpinning of AI. In this 2-day workshop, we will cover Ray, an open-source framework used by many startups and top enterprises to run their deep learning workloads. Whether you are a data scientist, a DevOps engineer, or someone who wants to take your deep learning knowledge to production, this is the course for you.
The course is offered in-person to enhance your learning experience. It is also available via live-streaming if requested. **SF Bay ACM** is partnering with **Anyscale**, the company behind Ray, to introduce you to this exciting and fast-growing field: distributed ML applications.
### Event Description
This 2-day, 15-hour workshop is intended to teach you what Ray, Ray Core, and the Ray AI Runtime are, and how to use them to scale ML applications and utilize large compute clusters.
On **Day 1**, you will focus on Ray fundamentals. Through hands-on exercises and notebooks, you will explore the Ray APIs that allow you to run Python scripts and ML workloads in a distributed way.
On **Day 2**, you will focus on computer vision tasks and explore how to perform batch inference and model training in a distributed way. You will also learn about approaches to debugging, optimizing, and monitoring Ray clusters.
**NOTE**: This is NOT a hybrid event; it is strictly in-person. We will cap the classroom size so that there is a strong focus on learning. There is a nominal charge for the 2 days of lectures & hands-on exercises. Please sign up early, as we will keep the attendee count low. This is NOT a MOOC. **Registration also includes a 1-year SFBay ACM membership ($20 value); selected registrants will be awarded Ray Summit 2023 passes, and all registrants will get Anyscale swag :)**
**Interactive notebooks, hands-on exercises, slides and QA sessions** will help you understand relevant concepts, APIs and best practices.
Check the event agenda below for more details.
### Access to the training materials
* Access to the dedicated GitHub repository with all training resources.
* A dedicated Anyscale compute cluster that you will use for the duration of the training.
* After the event, you will always be able to run Ray on your laptop with the training material from the GitHub repo.
* Anyscale: Kamil & Jules
* SFBay ACM Prof Dev Chair: Yashesh Shroff
**Join us for a mix of Ray technical training and community building!**
Lunch, snacks, coffee, and community camaraderie included.
## Saturday, March 11th: 9:30am-5pm, Pacific Time
### Distributed Systems with Ray, Foundational
* Overview of Ray (1-hour lectures), Part 1 & Part 2 + labs + lunch
* Ray Core (1.5 hr), Part 1 & Part 2 + labs + coffee breaks
* Introduction to Ray AI Runtime (2 hr) + labs
## Sunday, March 12th: 9:30am-5pm, Pacific Time
### Advanced Concepts
* Batch inference (2hr lectures)
* Observability (debugging, optimizing, monitoring; 1hr lectures)
* Model training (1.5hr)
**Organizer & SFBay ACM Prof Dev Chair**: Yashesh Shroff [@yashroff](http://twitter.com/yashroff)
For more information about registration, please contact the SF Bay Chapter of the ACM: yshroff at g | m | a i l
**We look forward to seeing you at the workshop!**