By Dr. Junling Hu, founder and CEO of Coach.ai
855 Maude Ave,
Mountain View, CA 94043
You can also watch on Zoom:
6:30 Doors open; food and drinks
7:00 SFbayACM intro, upcoming events, speaker introduction
7:10 Presentation starts (~90 min including Q&A; the talk runs until 8:30 or as long as questions continue)
If you are considering building an LLM-powered product, such as customer support, internal document search, or a conversational assistant, how do you implement it? In this talk, I will review the basic components and the product pipeline. I will compare building in-house (using Llama 2) with using third-party solutions (such as ChatGPT). Another major component is semantic search, or document retrieval. I will review both in-house and third-party approaches to this problem as well. By the end of this talk, you will have a clear overview of the LLM product pipeline and understand where your skills can best be applied.
This talk describes a pipeline that can use different tools. For example, one can use the file system for saving embeddings, Pinecone for remotely hosted embeddings, or Weaviate as a self-hosted embedding database. I don't recommend LangChain, as it is overly complicated.
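To make the file-system option concrete, here is a minimal sketch of storing embeddings as JSON and retrieving documents by cosine similarity. The `embed` function is a placeholder (a deterministic pseudo-random vector, not a real model); in practice it would call an embedding model, and a vector database such as Pinecone or Weaviate would replace the JSON file for larger collections.

```python
import json
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Placeholder embedding: a deterministic pseudo-random unit vector seeded
    # by the text. A real pipeline would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def save_embeddings(docs, path="embeddings.json"):
    # File-system storage: one JSON record (text + vector) per document.
    records = [{"text": d, "vector": embed(d).tolist()} for d in docs]
    with open(path, "w") as f:
        json.dump(records, f)

def search(query, path="embeddings.json", top_k=1):
    # Cosine similarity over stored vectors; all vectors are unit-norm,
    # so a plain dot product suffices.
    with open(path) as f:
        records = json.load(f)
    q = embed(query)
    scored = [(float(np.dot(q, r["vector"])), r["text"]) for r in records]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]
```

The same save/search interface carries over to a hosted vector database; only the storage and query calls change, which is why the pipeline, not the specific tool, is the useful abstraction.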
Dr. Junling Hu is the founder and CEO of Coach.ai, which provides an LLM-powered conversational AI platform. Prior to founding the company, Junling was the Director of Applied AI at LivePerson, where she led a team building LLM-based customer support solutions. Junling is the author of the book The Evolution of Artificial Intelligence. She received her Ph.D. in Computer Science from the University of Michigan, Ann Arbor, with a thesis focused on reinforcement learning.