GitHub Copilot Coding Agent: Building an Agentic Workflow That Automatically Creates a Draft PR from a Single Issue Assignment
Honestly, when I first heard "the AI reads an issue and opens a PR on its own," I was skeptical. I was used to GitHub Copilot autocompleting code inside my IDE, but the idea that you could throw it an issue and it would branch, fix code, run tests, and open a PR on its own—that felt like a different beast entirely. So I wired it up myself, and the result was a noticeable reduction in time spent on repetitive bug issues across the whole team.
The core idea is simple. Just by assigning an issue to Copilot in the issue tracker, the entire flow—from branch creation to Draft PR—runs automatically. After reading this, you'll be able to connect this automated workflow to a real repository and introduce it to your team with a clear understanding of the security risks involved. The example code is TypeScript-based, but the concepts apply equally to any stack.
This is for any developer using GitHub, and you can get started immediately on a Copilot Pro plan or higher.
Core Concepts
Coding Agent vs. Agent Mode: Similar Names, Easy to Confuse
I initially thought they were the same thing. They're not. This table might look complex at first, but all you really need to remember is "where does it run?"
| | Agent Mode | Coding Agent (Cloud Agent) |
|---|---|---|
| Runs in | IDE (VS Code, JetBrains) | GitHub.com cloud |
| How it works | Real-time conversation with developer (synchronous) | Independent background execution (asynchronous) |
| When to use | When you want to stay in control | When you want to delegate a well-defined task |
| Output | Immediate code edits | Draft PR |
Coding Agent runs outside the IDE, on a temporary sandbox provided by GitHub Actions. Even while a developer is away, it handles the entire flow on its own: branch creation → codebase exploration → changes → test execution → Draft PR.
It's also worth knowing the core security guardrail upfront. The agent cannot Approve or Merge the PR it creates. It can only submit a Draft PR; the final gate always belongs to a human.
What Tasks It's Well-Suited For
It's not a silver bullet. Being clear about this from the start matters—the better-tested the codebase and the more clearly defined the task, the higher the quality of the output. Give it a vague issue and it will produce vague code.
| Good fit | Poor fit |
|---|---|
| Bug fixes with clear reproduction steps | New features with ambiguous requirements |
| Adding unit tests | Large-scale architectural refactoring |
| Documentation improvements | UX decisions requiring user feedback |
| Bulk application of lint rules | Changes requiring coordination across multiple teams |
| Dependency upgrades | Tasks that can't be validated without an external API |
The 4-Step Agentic Workflow
Here's the flow from when the agent receives an issue to when it creates a PR:
- Assign — Assign the issue to Copilot from a GitHub Issue or an external tracker (Jira, Azure Boards, etc.)
- Explore — The agent analyzes the codebase and builds an execution plan. Progress is shown in the issue exploration plan UI
- Execute — Modifies files, runs tests, lints, and calls MCP servers if needed
- Submit — Creates a Draft PR with code scan and secret scan results automatically attached to the PR body
Once a person reviews the Draft PR and converts it to "Ready for review," it enters the standard PR review process.
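The Assign step can also be triggered programmatically rather than through the UI. Below is a minimal TypeScript sketch of building the request you'd send through Octokit's `issues.addAssignees` endpoint. The assignee login `"copilot"` and the `my-org`/`my-repo` coordinates are assumptions for illustration — confirm the exact bot login your organization sees in the assignee picker before relying on it.

```typescript
// Hedged sketch: assigning an issue to Copilot via the GitHub REST API.
// The "copilot" login and repo coordinates are placeholder assumptions.
interface AssignParams {
  owner: string;
  repo: string;
  issue_number: number;
  assignees: string[];
}

function buildCopilotAssignment(owner: string, repo: string, issueNumber: number): AssignParams {
  return { owner, repo, issue_number: issueNumber, assignees: ["copilot"] };
}

// With Octokit, this payload would be sent as:
//   const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
//   await octokit.rest.issues.addAssignees(buildCopilotAssignment("my-org", "my-repo", 42));
```

Assigning via the API is handy for automations such as "auto-delegate every issue labeled `good-for-copilot`," but the human review gate on the resulting Draft PR still applies.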
Practical Application
Writing Good Issues: Kicking Off the Agent's First Task
This is the simplest starting point. Create a GitHub Issue and add Copilot as an Assignee—the agent starts working immediately.
This is the situation you'll encounter most often in practice: a weak issue description produces a weak PR. It's best to include specific context, like the example below. In particular, the pattern of adding a dedicated "Do Not" section isn't commonly seen in standard issue templates, but it has a significant impact.
```markdown
## Bug Description

The `/api/users/{id}` endpoint returns a 500 error when given a
user ID that doesn't exist.

## Reproduction Steps

1. Send `GET /api/users/99999`
2. Response: 500 Internal Server Error
3. Expected: 404 Not Found

## Scope of Fix

- `src/users/users.controller.ts` — Add existence check logic
- `src/users/users.service.ts` — Apply `findOneOrThrow` pattern

## Do Not

- Do not touch the `src/legacy/` folder
- Do not delete existing tests

## Requirements

- Unit tests must be added
- Include Zod schema validation
```

The more clearly the issue body specifies relevant file paths, forbidden patterns, and test requirements, the less time the agent spends exploring the codebase and the more consistent the output.
The example above is based on a TypeScript/NestJS backend project, but the same structure applies directly to other stacks. The key is that "where and how to reproduce, and what must not be touched" must be clearly stated.
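For context, here's the shape of the change a well-scoped issue like this tends to produce. This is a framework-free sketch of the `findOneOrThrow` pattern the issue asks for — the `UserNotFoundError` class and the in-memory store are illustrative stand-ins, not the project's real NestJS service.

```typescript
// Illustrative sketch of the requested fix: map a missing user to a
// 404-style error instead of letting an undefined value cause a 500.
class UserNotFoundError extends Error {
  readonly status = 404;
  constructor(id: number) {
    super(`User ${id} not found`);
  }
}

interface User {
  id: number;
  name: string;
}

// Stand-in for the real data layer.
const users = new Map<number, User>([[1, { id: 1, name: "Ada" }]]);

// The "findOneOrThrow" pattern: throw a typed error that the HTTP
// layer can translate into a 404 response.
function findOneOrThrow(id: number): User {
  const user = users.get(id);
  if (!user) throw new UserNotFoundError(id);
  return user;
}
```

In an actual NestJS service you would throw the framework's built-in `NotFoundException` instead, which Nest converts to a 404 response automatically.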
Codebase Onboarding: Writing .github/copilot-instructions.md
Teaching the agent "our team's rules" once means they'll automatically apply to every subsequent Copilot request. It feels similar to writing an onboarding document for a new team member. I recommend creating this file first—without it, every PR will have an inconsistent style.
```markdown
# Project Rules

## Code Conventions

- TypeScript strict mode required
- Use async/await (Promise.then() is forbidden)
- 2-space indentation

## Testing Policy

- PRs without tests are not allowed
- Test files should use `*.spec.ts` format and reside in the same directory as the source file

## API Development

- All API responses must include Zod schema validation
- Do not modify `src/legacy/` — legacy migration is in progress

## Commit Messages

- Example: `fix(auth): handle missing refresh logic for expired tokens`
- Example: `feat(users): add user search filters`
- Example: `test(api): add test for non-existent user ID handling`

## Dependencies

- Always verify peerDependencies compatibility before adding new packages
- Use pnpm (npm and yarn are forbidden)
```

This file uses a TypeScript-based example, but you can apply it to any project by simply updating the rules to match your language and framework.
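Rules like the commit message convention above can also be enforced mechanically in CI, which keeps the agent's output and human contributions consistent. Here's a small sketch of the kind of check a tool like commitlint performs; the type list and regex are my assumptions matching the examples in the file, not an official specification.

```typescript
// Hedged sketch: validate a commit message against the
// "type(scope): description" convention shown in the rules file.
// ALLOWED_TYPES is an assumption inferred from the examples.
const ALLOWED_TYPES = ["feat", "fix", "test", "docs", "chore", "refactor"];

function isValidCommitMessage(message: string): boolean {
  // type: lowercase word; scope: lowercase/digits/hyphens; then ": description"
  const match = /^([a-z]+)\(([a-z0-9-]+)\): (.+)$/.exec(message);
  if (!match) return false;
  return ALLOWED_TYPES.includes(match[1]);
}
```

A check like this accepts all three example messages from the rules file and rejects free-form messages such as `update stuff`, so a CI step can flag nonconforming commits regardless of whether a human or the agent wrote them.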
Preparing the Execution Environment: Defining .github/workflows/copilot-setup-steps.yml
This file defines the environment initialization steps the agent needs before it actually runs any code. It's written in GitHub Actions workflow syntax, and both the filename and the job name must be `copilot-setup-steps`—this naming convention is fixed.
If you're not familiar with GitHub Actions, skimming the GitHub Actions official documentation first will make this much easier to understand.
```yaml
# Filename: .github/workflows/copilot-setup-steps.yml
# This workflow is called automatically by Copilot internally.
# It is not intended to be triggered directly via workflow_dispatch.
name: Copilot Setup Steps

jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - uses: pnpm/action-setup@v4
        with:
          version: 9
      - name: Install dependencies
        run: pnpm install
      - name: DB migration (prep before agent runs)
        run: pnpm db:migrate
        env:
          # The agent's execution environment is an isolated temporary sandbox.
          # TEST_DATABASE_URL must point to a test DB completely separate from production.
          DATABASE_URL: ${{ secrets.TEST_DATABASE_URL }}
      - name: Environment validation
        run: pnpm typecheck
```

The agent's execution environment is an isolated temporary sandbox, but if you include commands like `pnpm db:migrate` that modify a real database, you must first verify that `TEST_DATABASE_URL` points to an environment completely separate from production. Connecting the wrong environment variable can lead the agent to touch unintended data.
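To reduce that wrong-environment risk, you can add a fail-fast guard ahead of the migration step. This is a minimal sketch; the hostname heuristics (`test`, `staging`, `localhost`) are assumptions you should adapt to your own naming scheme, and the function would run as a small pre-migration script.

```typescript
// Hedged sketch: refuse to run migrations unless DATABASE_URL looks
// like a test database. The allowed-host heuristics are illustrative.
function assertTestDatabase(rawUrl: string | undefined): string {
  if (!rawUrl) throw new Error("DATABASE_URL is not set");
  // WHATWG URL parses the authority of scheme://user:pw@host:port/db URLs.
  const host = new URL(rawUrl).hostname;
  // Allow only hosts that are explicitly test-like.
  if (!/(^|[.-])(test|staging|localhost)([.-]|$)/.test(host)) {
    throw new Error(`Refusing to migrate against suspicious host: ${host}`);
  }
  return rawUrl;
}
```

Calling `assertTestDatabase(process.env.DATABASE_URL)` before `pnpm db:migrate` turns a silently wrong secret into a loud, failed setup step.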
Integrating External Systems: Connecting MCP Servers
Starting in July 2025, the agent can connect to remote MCP (Model Context Protocol) servers to receive context from systems outside the repository. Register them as JSON under the repository settings at Copilot → Cloud agent → MCP configuration.
```json
{
  "mcpServers": {
    "slack": {
      "type": "http",
      "url": "https://your-slack-mcp-server.internal/mcp",
      "headers": {
        "Authorization": "Bearer ${SLACK_MCP_TOKEN}"
      }
    },
    "internal-db": {
      "type": "http",
      "url": "https://your-db-mcp-server.internal/mcp",
      "headers": {
        "Authorization": "Bearer ${DB_MCP_TOKEN}"
      }
    }
  }
}
```

Variable references like `${SLACK_MCP_TOKEN}` refer to values registered in GitHub repository Secrets. You must register secrets with the same name under Settings → Secrets and variables → Actions in your repository settings. If you just copy this in, authentication will fail—confirm this step first.
The `your-slack-mcp-server.internal` portion is a placeholder for your self-hosted MCP server address. This is something your team must operate on your own infrastructure; it is not an endpoint provided by GitHub.
MCP (Model Context Protocol) is a standard interface between agents and external systems. Connecting a Slack MCP Server lets the agent directly read a bug report thread, understand the context, and then modify the code accordingly.
GitHub also ships some built-in MCP servers:
| MCP Server | Functionality |
|---|---|
| GitHub MCP Server | Supplies issue, PR, and repository context |
| Playwright MCP Server | Read, interact with, and screenshot web pages |
| Azure MCP Server | Integrates Azure resource context |
Tracker Integration: Delegating Directly from Jira or Azure Boards
Jira integration launched in public preview in March 2026. A PM can write a technical spec in Jira, assign it to Copilot as the assignee, and the issue's description, comments, and labels are all passed as context—automatically creating a Draft PR in the GitHub repository. This enables AI delegation within your existing tracker without any context switching.
Azure Boards reached GA (General Availability) in early 2026. Assigning a Work Item to Copilot follows the same flow as Jira and produces a PR, but it requires a separate integration setup between Azure DevOps and your GitHub repository. For detailed setup, refer to the Azure Boards integration official documentation.
Pros and Cons
Advantages
| Item | Details |
|---|---|
| Asynchronous parallel processing | The agent handles tasks independently while developers focus on other work |
| Productivity gains | A joint GitHub–Accenture study reported a 15% increase in PR merge rate and an 84% improvement in build success rate |
| Built-in security scanning | Code scanning, secret scanning, and dependency vulnerability checks run automatically before a PR is created |
| Forced human review | Only Draft PRs are created; the agent cannot Approve or Merge directly—the final gate always belongs to a human |
| Tracker integration | AI delegation is possible without changing existing workflows in Jira, Azure Boards, Linear, etc. |
Drawbacks and Caveats
Honestly, no tool has only upsides. When I first saw the numbers below, I found them pretty alarming—but once I understood why, it made sense. As the attack surface expands, your mitigations need to match.
| Item | Details | Mitigation |
|---|---|---|
| Security vulnerabilities in AI code | Approximately 30% of Python and JavaScript suggestions contain security flaws (XSS, insufficient input validation, etc.) (source) | Make CodeQL static analysis mandatory at the PR stage and maintain a dedicated review checklist for AI-generated code |
| Secret leakage risk | Repositories using Copilot show a ~40% higher rate of secret leakage compared to those that don't (source) | Enable GitHub Advanced Security secret scanning and Dependabot without exception |
| Rules File Backdoor | A discovered attack vector where malicious instructions are embedded in `copilot-instructions.md` to trick the agent into generating malicious code (source) | Add a separate security reviewer step for PRs that change configuration files |
| Increased code duplication | Uncritical acceptance of AI suggestions has been observed to quadruple code clones (GitClear 2025) | Make duplicate code checks a routine part of PR reviews |
| GitHub Actions costs | Agent runs consume Actions minutes | Simulate monthly usage before enterprise adoption |
| Firewall restrictions by default | Internet access is restricted by default; tasks that require external API calls need separate configuration | Register only the minimum allowed domains on the org-level allowlist |
Seeing 30%, 40%, 4x back to back might make you wonder, "Should I even use this?" I felt the same way at first. But these numbers are closer to a baseline for "using it with no safeguards in place." Once you make CodeQL and secret scanning mandatory and establish an AI-generated code review checklist within your team, the actual risk level changes dramatically.
GitHub Advanced Security (GHAS) is a bundled set of security tools that includes CodeQL static analysis, secret scanning, and dependency review. Used alongside the Coding Agent, it can catch vulnerabilities in AI-generated code at the PR stage.
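To make the CodeQL gate concrete, a minimal workflow looks roughly like this. This is a sketch of the standard `github/codeql-action` setup, not project-specific configuration — adjust the branch names and language matrix to your repository.

```yaml
# Sketch: run CodeQL on every PR so AI-generated code is
# scanned before it reaches human review.
name: CodeQL
on:
  pull_request:
    branches: [main]
permissions:
  contents: read
  security-events: write
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript-typescript
      - uses: github/codeql-action/analyze@v3
```

Combined with a branch protection rule requiring this check to pass, no Draft PR from the agent can be merged without a static analysis result attached.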
The Most Common Mistakes in Practice
- Writing the issue body too briefly — If you throw in a one-liner like "fix login bug," the agent will flounder too. It's worth spelling out reproduction steps, relevant files, and forbidden patterns explicitly.
- Starting without `copilot-instructions.md` — Without telling the agent about your team's conventions, every PR will have an inconsistent style and may ignore existing patterns. Having this file in place from the start changes the quality of every subsequent request.
- Treating Draft PRs too casually — It's easy to forget the code was AI-generated and merge it quickly like a regular PR. It helps to review security scan results and check for duplicate or unnecessary changes using an AI-specific review checklist.
Closing Thoughts
The ability to write clear issues is the ability to use the Coding Agent effectively. Teams that already have a habit of documenting requirements precisely will get the most out of this tool.
Three steps you can take right now:
1. Verify Copilot is enabled — Go to `Copilot → Coding agent` in your repository settings to confirm the feature is on and that you're on a Copilot Pro plan or higher.
2. Write `copilot-instructions.md` — Create a `.github/copilot-instructions.md` file and document your team's coding conventions, forbidden patterns, and testing policies. These will be applied automatically to every future Copilot request.
3. Assign your first issue — Pick a bug issue with clear reproduction steps and add Copilot to the Assignees. Watching the agent build its exploration plan and reviewing the quality of the Draft PR firsthand is the fastest way to learn.
If the agent comes back with a PR that's not what you expected, the first thing to do is re-examine the issue body. In most cases, the cause is unclear reproduction steps, a scope that's too broad, or a missing "do not" list. Strengthen the issue and reassign—the results will be different.
References
- GitHub Copilot coding agent 101: Getting started with agentic workflows | GitHub Blog
- From idea to PR: A guide to GitHub Copilot's agentic workflows | GitHub Blog
- About GitHub Copilot coding agent | GitHub Docs
- Asking GitHub Copilot to create a pull request | GitHub Docs
- The difference between coding agent and agent mode | GitHub Blog
- Model Context Protocol (MCP) and GitHub Copilot cloud agent | GitHub Docs
- Copilot coding agent now supports remote MCP servers | GitHub Changelog
- GitHub Copilot coding agent for Jira is now in public preview | GitHub Changelog
- Azure Boards integration with GitHub Copilot | Azure DevOps Blog
- Risks and mitigations for GitHub Copilot cloud agent | GitHub Docs
- Onboarding your AI peer programmer | GitHub Blog
- New Vulnerability in GitHub Copilot and Cursor: Rules File Backdoor | Pillar Security
- Research: Quantifying GitHub Copilot's impact in the enterprise with Accenture | GitHub Blog
- Organization firewall settings for Copilot cloud agent | GitHub Changelog