🎞️ Videos Q&A with Anthropic Team

Description

In this session, JB, an engineer from Anthropic’s Claude Code team, provides an inside look at the development and practical application of agentic coding tools. He discusses the design philosophy behind Claude Code, including prompt engineering, safety protocols, and the evolving role of "skills" as dynamic prompt libraries. JB addresses the technical balance between the underlying model and the software harness, while sharing how the team uses the tool to build itself. The discussion covers practical strategies for managing large codebases, such as using hierarchical context files and leveraging subagents to parallelize complex tasks like debugging.

Chapters

  • JB's background and role on the Claude Code team at Anthropic 0:00
  • How Anthropic designs Claude Code's personality and safety guardrails 0:46
  • Engineering vs. partnerships: directing community sponsorship requests 2:47
  • Agent skills: treating prompts like the programming libraries of the future 4:43
  • Model vs. Harness: how the AI and the application work together 7:24
  • Recursive development: building Claude Code with Claude Code 9:29
  • Tips for efficiency: using plugins and hierarchical claude.md files 10:52
  • Managing context by splitting tasks across multiple Claude instances 13:23
  • Planning locally and handing off to the web via the Teleport feature 15:43
  • Scaling context with subagents to parallelize large tasks like debugging 17:03
  • Closing thoughts: why AI-assisted coding is a vital skill for the future 20:24

Transcript

These community-maintained transcripts may contain inaccuracies.

JB's background and role on the Claude Code team at Anthropic 0:00

Yeah, of course. Hey everyone. I hope everyone's doing well. My name is JB. I work at Anthropic on the Claude Code team. So I built some of the features that you might see in Claude Code such as the VS Code extension, checkpointing,

some of the general performance work like speeding it up. So I hope you've been enjoying using Claude Code. Excited to hear your questions and hope I can answer any questions that you might have about Claude Code, about Anthropic, about AI. Happy to talk about anything.

How Anthropic designs Claude Code's personality and safety guardrails 0:46

Hello. Okay, let's start.

So I think you have a question here, right?

My question is about the system prompt design of Claude Code. I'm wondering who designed the personality and the morality of Claude Code, and the safety guardrails of your system.

Yeah, so we actually have several people working on it. A lot of the prompts come from Boris, who started Claude Code; I think he wrote the initial set of tool prompts as well as the system prompt. We also have a separate safeguards team and a red-teaming team who actually look through the prompts, read through them, and edit them for safety scenarios. And then we have teams that test the capability to see at what threat level

we put it. So right now we're at ASL-3, but it really depends on both the model and the Claude Code harness. The short answer is: several different groups of people. Usually the prompt starts with the Claude Code team; one of us comes up with the first draft, and then we edit it based on the feedback we get and on what the safeguards team comes back to us with. After a couple of rounds of editing and iteration, we release it. But it usually comes from us first.

Engineering vs. partnerships: directing community sponsorship requests 2:47

Next question.

Hi JB. I found you on LinkedIn. I'm coming from tonight's ThaiPy meetup, actually.

It's a monthly Python meetup that we have in Bangkok. 30 to 40 people every month. Several times a year we have a vibe coding event where we are given problems and we all just vibe code as a group. Actually we're looking for sponsors and we wanted to see if... Not a joke! Not a joke. No, but we were asking for potentially maybe a partnership with Anthropic. Even just as simple as keys for that night for us all to have access to an LLM just for a vibe coding problem. So what I'm going to do is I'm going to try to connect with you on LinkedIn and maybe we can talk there. Would you accept my request and can we talk there? Yeah, yeah. I mean I think you should speak with Eric. I think Eric sort of organizes some of this stuff and so he may be able to help you set that up. I myself am an engineer and so I don't have input on that kind of stuff.

I'm sorry, speak with who?

As an engineer, partnerships is not really my main focus for work and so it's going to be hard. Yeah, I thought I heard you say speak with Eric. Is that what you said?

Can you hear me? Yeah, we can hear you. Yes, I'm saying I'm an engineer and so this is not my main area of focus.

I'm happy to answer any questions around Claude Code and engineering, but I myself do not handle partnerships, hosting events, or sponsoring events. Okay, alright, maybe you can point me in the right direction of who to speak to after the event? Yeah, sure, let's go to the next question. Thank you.

Agent skills: treating prompts like the programming libraries of the future 4:43

So I have a question about agent skills. Can you show me some examples, or the best practices, for writing agent skills for Claude Code?

What do you mean by agent skill?

The agent skills that are used with Claude Code. Oh, okay, yeah, the skills. Yeah, so skills are interesting: they're supported both in Claude Code and outside of Claude Code. It's in general something that we train into our models. The idea behind skills is that there are so many different skills that it's one of the things we crowdsource, so you can download skills from online. And all a skill is, basically, is

a dynamically loaded prompt.

And so someone else can write the prompt, and it loads dynamically. At the very beginning, only the top part, the name and the description of the skill, actually sits in the system prompt. Then, when the skill is needed, the entire contents of the skill are loaded so it can be used as a prompt. And one thing that's interesting about skills is how we think about the future: in the future,

we may be writing much less code.

And instead of writing code, our jobs may just be writing prompts, right? Skills are one way of getting us to that world where we're focused more on writing prompts than on writing code itself. So instead of thinking, if I need to do something, I'm just going to write code, in the future it may become: I need to build a website, I need to make a system that does billing.

Instead of writing code, I just write a prompt, and then the LLM takes that prompt and makes working code out of it. So looking ahead to a world where the models keep getting better and the coding engines keep getting better, it may be that skills, and the way prompts are packaged, become the future of how you write code. Much like you use a programming library today, in the future you might use a skill as a library instead of writing the code yourself for certain functionality.
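As a rough sketch of that idea: a skill is typically packaged as a folder containing a SKILL.md file, where the frontmatter holds the name and description that sit in the system prompt, and the body is loaded in full only when the skill is invoked. The skill name and contents below are invented for illustration, and the exact format may differ between releases:

```markdown
---
name: billing-setup
description: Use when the user asks to add invoicing or billing to an app
---

# Billing setup

When asked to build billing features:
1. Store money as integer minor units (cents), never as floats.
2. Model invoices as immutable records; issue credit notes instead of edits.
3. Add an idempotency key to every payment request.
```

Only the `name` and `description` lines cost context up front; the numbered instructions are pulled in on demand, which is what makes large collections of skills cheap to carry around.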

Thank you.

Model vs. Harness: how the AI and the application work together 7:24

Okay, next question.

Someone want to ask the question first or...

Okay, so can you hear me? Yes, thank you. I have a maybe a funny question.

Which one is more important to you, model or harness? Like, which one between, you know what I mean, like Claude Code or the model itself, which one is more important? Right. I think they're both very important. I think what we've found is that the model and harness work together to achieve the goal.

I think in a lot of ways, yeah,

I think there is a distinction,

but I think in a lot of ways they're kind of the same, in the sense that if you look at the model itself, it's treated as one giant thing, but inside there are different stages and different components. So we group those together and call it a model, and we group something like Claude Code together and call it a harness, but they're ultimately just different stages in the pipeline of the product that we deliver.

In my opinion, both the model and harness are very important, and I think the model and harness work together to actually give you the agentic coding results that you see with Claude Code. In a lot of ways, I think both are extremely important, and so it's hard to say.

Although the model, as you might expect, is a little more general:

you can use it for more than just the harness. You can use it on claude.ai, you can use it through the API, so it's a little more general purpose than the specific Claude Code harness. Thank you.

Recursive development: building Claude Code with Claude Code 9:29

Okay, hi. So we've got a question from our YouTube audience: How many lines of code in Claude Code were generated by Claude Code? Oh, yeah. We use Claude Code a lot to write the code for Claude Code. I can't tell you the exact number of lines, but yeah, we use it a lot.

And a lot of it we obviously review and test ourselves. Like anything else that we code,

and like what we recommend: it usually makes sense to have the LLM write the code first, then you review it as a person, make sure everything makes sense, try to understand it, and then test it out to make sure all is good before you submit it. That's usually what we do as well. We rely on Claude Code a lot

in a lot of those areas,

but we do have humans involved in making sure the code is correct, making sure the code makes sense, checking for security, checking for safeguards, and checking all those things along the line. But yeah, we do use Claude Code a lot, like a lot, a lot. Hard to say what the percentage is, but a huge part of Claude Code is built by Claude Code. Okay. So the next question is:

Tips for efficiency: using plugins and hierarchical claude.md files 10:52

Any unpopular or unique techniques to use Claude Code or AI efficiently?

Let me repeat the question. The techniques to use Claude AI efficiently? Yeah, your techniques or your unpopular techniques.

Yeah, I think using Claude correctly or efficiently

is a skill in itself. When I first started, I actually struggled. Well, not struggled, but there are several things. The model wasn't as good when I started, but it has gotten much better since I started a year and a half ago. The models are much more capable and likely to get even better in the future. But it is a skill, and there are several things involved. One is understanding what MCP tools to add,

what MCP tools, skills, and commands to add to your workflow. We package these together and call them plugins. There's an official plugin store that we have, and there are other third-party plugin stores, and those package some of the skills we have, from having Claude write your git commit messages to doing a code review pass for you,

and all those skills are useful in my daily workflow. I think another thing that is quite important is actually understanding the context

that the model sees and really managing that context. And so we have several things that you use to control Claude, like claude.md.

And the claude.md files that you have are extremely important for curating what the model sees and what the model understands as you're working with it. We also load several claude.mds.

So we load the claude.md from the current directory up until your home directory. And so there is like a hierarchical structure to claude.md. And so you can actually structure it so that each folder in your repository

has different claude.mds that get loaded depending on which folder you start Claude in. And so understanding the context,

basically what's loaded into the conversation and the system prompt of Claude Code, I think is extremely important to use it effectively. And I think it's sort of a skill that you learn as you use it more.
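As a sketch of that hierarchy (the paths below are illustrative, and the file is conventionally named CLAUDE.md), starting Claude inside frontend/ would pick up both the repo-level file and the frontend-specific one:

```text
~/.claude/CLAUDE.md                # personal preferences, applies everywhere
~/work/myrepo/CLAUDE.md            # repo-wide: build commands, conventions
~/work/myrepo/frontend/CLAUDE.md   # frontend-only: component patterns
~/work/myrepo/backend/CLAUDE.md    # backend-only: API and DB conventions
```

Keeping each file short and folder-specific means the model only carries the context relevant to where it's working.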

Managing context by splitting tasks across multiple Claude instances 13:23

Hi, my name is Paul. I'd like you to explain more about context management, because everyone agrees it's a pain in the ass for everyone. Could you please give us quick tips on that? Thank you. Yeah, sure. As I said before, it's extremely important to curate a good claude.md. Another thing we tend to do quite a lot, at least within Anthropic, is to have multiple Claudes. Instead of continuing a single conversation on and on, whenever we have a task we can break down, we spin up a new Claude to do it. We might do it in a separate git worktree, which is effectively a copy of the repo. So instead of having one single conversation that does everything, we have multiple conversations, each with its own context, that break the task down. That way you don't have one single conversation holding all the context for all your problems, which makes it harder for the model to handle the context around the different problems. Instead, if you have multiple Claudes, maybe five different Claudes working on five different problems, each one solving its own problem,

then the context from the different problems doesn't build up and affect the task you're trying to solve. Effectively managing separate contexts, with a different Claude instance per problem, lets you break the work up and use the context wisely on just that one problem, rather than tackling a problem that's far too big for the context and carrying a lot of unnecessary information along.
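The worktree workflow described above can be sketched with plain git commands (the repo and branch names are illustrative, and the setup lines just create a throwaway demo repo; in real use you'd already be inside your project):

```shell
# Demo setup: a throwaway repo with one commit.
cd "$(mktemp -d)"
git init -q myrepo && cd myrepo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit" -q

# Create a sibling worktree: same history, separate files, its own branch.
# A second Claude instance started in that directory gets a fresh context.
git worktree add ../myrepo-fix-login -b fix-login
git worktree list

# (You would now run `claude` inside ../myrepo-fix-login.)

# Clean up once the branch is merged:
git worktree remove ../myrepo-fix-login
git branch -D fix-login
```

Because each worktree is a full checkout sharing one .git directory, the parallel Claude instances can't clobber each other's uncommitted changes.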

Any other questions for follow-up there? I'm happy to talk about it too.

Planning locally and handing off to the web via the Teleport feature 15:43

Hey, how's it going? Kelly. I guess my question is, first of all, I use Claude Code exclusively for my entire stack; maybe 10% now is actually hand-written code. I use worktrees and all that great stuff, and I use planning mode quite a bit, but I would really prefer to plan on my computer and hand off to Claude Web. But after you plan, there's only, like, "Do you want to bypass?" or "Do you want to whatever?" So how do you guys handle planning locally and pushing it to the web, is my question. Or are you coming out with that feature? Because that would be nice. Yeah, I think it's something that we'll have.

I need to check back with our release lead, but I think it's definitely a use case that's important that lets you push and pull from the web.

Yeah, I think it's something we've definitely been exploring.

That's a great call. I think you're right in that

the web is going to be a huge part for basically handing work off asynchronously and not having to worry about worktrees locally.

Scaling context with subagents to parallelize large tasks like debugging 17:03

Ok. So next question from our YouTube audience. The first question is: Can Claude Code have a context larger than 200,000 tokens?

Yeah, 200,000 is our normal context size, but we have released some experimental models with larger context through the API.

And so we are playing around with that. It certainly is possible, but it isn't in the main Claude Code for now; we're looking into it as a possibility, since some models can support larger context sizes. Ok. So the next question is: how do you use multi-agents in your work?

Yeah, so we use them. I think it really depends on the use case. As I said before, we can run multiple Claude instances; that's one way. And then within each Claude instance, there's also the idea of subagents. Subagents basically allow you to parallelize work.

What that means is, for example, one thing it does pretty well: let's say there's an issue, a crash somewhere in the codebase, and I need to ask Claude to look at the past 100 commits

to try to find out where we introduced the bug. That's a really common use case for subagents. What I'd usually ask it is: "Hey Claude, can you spin up 10 or 20 different subagents, look through the last 100 commits, and try to find the commits that are likely to have caused this bug."

And it basically goes in and it tries to look at the last 100 commits to see what potentially caused the issue. And then it might come back with like three or five possible commits. And then that way I can look into it and manually see which one potentially caused the bug. But something like that is really helpful because otherwise it would have taken me much longer to look through 100 different commits myself to try to find out what could have caused it.

That's one use case of subagents that's really useful. If you have easily parallelizable work that can be broken down, you can just tell Claude to spin up 10 different subagents. And the reason it's not as good to ask Claude to look through 100 commits without subagents is that if it looked through 100 different commits by itself,

it would be too much for the context to handle. By spinning up subagents, with each taking on a handful of commits, say 5 to 10, it isn't too much: you're basically breaking down the context again. And so each subagent will maybe identify one of the most

likely causes for the crash. And then that way you'll be able to really synthesize

and look through a larger set of data. And so that's a really common use case for subagents. Yeah, that would be all of our questions from YouTube.

Closing thoughts: why AI-assisted coding is a vital skill for the future 20:24

The time is almost over, right? Do you want to wrap up? Any words for the Claude Bangkok meetup? Yeah, thanks everyone for coming. I'm really excited to see so many Claude Code enthusiasts. We're working really hard to get some of these features to you, like larger context sizes and better context management, including what we call teleport internally, where you take your current session and send it to Claude Code web. I think it's possible from the command line already. I don't know if we've fully released it, but it's definitely coming if it's not released yet. We're working on tons of different features and we hope that you like them.

As I've said before, one thing that's quite important, and that I tell a lot of people about AI, is that the more I use AI,

the better I've gotten at it. Regardless of which AI agent you use, Claude Code or Cursor or Gemini or whatever, I think it is quite important.

I do think very strongly that the future of coding is through AI agents, because using them every day, the increase in productivity is crazy. I've been working in software for like 20 years, and it's hard to see a world where we go back to hand coding. So in a lot of ways, the best thing is to just start learning that skill of using AI effectively.

I love that everyone is excited about Claude Code, but I also understand that some people like different things. Regardless, my take is that AI coding is a skill worth investing in.

Ok thank you. Yeah, thank you so much everybody. Bye, have a good day.
