Optimizing Colima for Apple Silicon (M1/M2/M3) Macs - Switch from Intel Emulation to Native Performance

Published: January 7, 2026

If you're running Colima on an Apple Silicon Mac and experiencing slow performance, you might be running it in Intel emulation mode. This guide will show you how to switch to native ARM64 performance for dramatically better speed and efficiency.

The Problem

Many developers unknowingly run Colima with Intel emulation (x86_64), which:

  • ❌ Uses software emulation (slow)
  • ❌ Consumes more battery
  • ❌ Increases CPU usage
  • ❌ Results in slower container operations

The Solution

Use Apple's native Virtualization.framework with VirtioFS for optimal performance.

The Optimal Command

colima start --vm-type=vz --mount-type=virtiofs
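Before running this, it can help to confirm your host really is Apple Silicon, since `vz` and `virtiofs` only apply there. Here is a minimal sketch with a hypothetical helper, `recommend_colima_flags`, that maps the CPU architecture to a sensible flag set (the qemu/sshfs fallback for Intel Macs is my assumption, not an official recommendation):

```shell
# Hypothetical helper: pick Colima flags based on the host CPU architecture.
# vz/virtiofs require Apple Silicon and macOS 13+; on Intel Macs a qemu
# fallback is assumed here.
recommend_colima_flags() {
  arch="$1"   # normally: "$(uname -m)"
  if [ "$arch" = "arm64" ] || [ "$arch" = "aarch64" ]; then
    echo "--vm-type=vz --mount-type=virtiofs"
  else
    echo "--vm-type=qemu --mount-type=sshfs"
  fi
}

recommend_colima_flags "$(uname -m)"
```

On an Apple Silicon Mac, `uname -m` reports `arm64`, so the helper prints the `vz`/`virtiofs` flags recommended in this post.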

Why This Works

--vm-type=vz (Apple Virtualization.framework):

  • ✅ Native ARM64 performance (no emulation)
  • ✅ Better macOS integration
  • ✅ Lower resource usage
  • ✅ Hardware acceleration
  • ✅ Improved battery life

--mount-type=virtiofs (VirtioFS):

  • ✅ Fastest file I/O compared to 9p or sshfs
  • ✅ Better performance for bind mounts
  • ✅ Optimal for development workflows

Step-by-Step Migration

1. Check Your Current Setup

colima list

Look for profiles running with x86_64 architecture.
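If you have several profiles, scanning the table by eye is error-prone. This sketch filters the output for you; it assumes the architecture appears in the third column of `colima list` (adjust the field number if your version's layout differs):

```shell
# Sketch: print the names of profiles still running x86_64.
# Assumes `colima list` columns are: PROFILE STATUS ARCH ...
find_intel_profiles() {
  awk 'NR > 1 && $3 == "x86_64" { print $1 }'
}

# Example against captured output (for real use: colima list | find_intel_profiles)
printf 'PROFILE  STATUS   ARCH     CPUS\ndefault  Running  x86_64   4\nfast     Running  aarch64  4\n' | find_intel_profiles
# → default
```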

2. Stop Intel Emulation Profiles

colima stop <profile-name>
# Example: colima stop intel

3. Delete Old Profiles (if needed)

colima delete <profile-name>
# Example: colima delete default

4. Start with Optimal Configuration

colima start --vm-type=vz --mount-type=virtiofs --arch=aarch64 --cpu=4 --memory=4 --disk=60

5. Verify the Configuration

colima status

You should see:

  • arch: aarch64
  • macOS Virtualization.Framework
  • mountType: virtiofs

Performance Comparison

Configuration      Architecture   Performance   Battery Usage
Intel Emulation    x86_64         🐌 Slow       🔋 High
Native ARM64       aarch64        ⚡ Fast        🔋 Low

Additional Optimizations

Memory and CPU Allocation

Adjust based on your system specs:

# For 16GB+ RAM systems
colima start --vm-type=vz --mount-type=virtiofs --cpu=6 --memory=8

# For 32GB+ RAM systems  
colima start --vm-type=vz --mount-type=virtiofs --cpu=8 --memory=12
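If you want these numbers derived automatically, here is a rough sizing sketch. The "give the VM about a quarter of RAM and half the cores, within caps" heuristic is my own assumption for matching the examples above, not an official Colima guideline:

```shell
# Rough sizing sketch (assumed heuristic): ~1/4 of RAM (clamped to 4-12 GB)
# and ~1/2 of the cores (minimum 2) for the Colima VM.
suggest_vm_size() {
  total_ram_gb="$1"
  total_cpus="$2"
  mem=$(( total_ram_gb / 4 ))
  if [ "$mem" -lt 4 ]; then mem=4; fi
  if [ "$mem" -gt 12 ]; then mem=12; fi
  cpu=$(( total_cpus / 2 ))
  if [ "$cpu" -lt 2 ]; then cpu=2; fi
  echo "--cpu=$cpu --memory=$mem"
}

# On macOS the real numbers come from sysctl:
#   suggest_vm_size "$(( $(sysctl -n hw.memsize) / 1073741824 ))" "$(sysctl -n hw.ncpu)"
suggest_vm_size 16 8   # → --cpu=4 --memory=4
suggest_vm_size 32 10  # → --cpu=5 --memory=8
```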

Persistent Configuration

Create a config file at ~/.colima/default/colima.yaml:

vmType: vz
mountType: virtiofs
cpu: 4
memory: 4
disk: 60
arch: aarch64
runtime: docker

Troubleshooting

"Cannot Update VM Type" Error

If you get this error, you need to delete the existing profile:

colima delete default --force
colima start --vm-type=vz --mount-type=virtiofs
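Deleting the profile is destructive, so you may want to guard the migration and only recreate when the VM type actually differs. A minimal sketch, assuming `colima status` mentions either `qemu` or `vz` somewhere in its output (the exact wording varies by version, so treat the `grep` as an assumption to adapt):

```shell
# Guarded migration sketch: recreate the profile only when its VM type
# actually differs from the one we want.
needs_recreate() {
  current="$1"
  desired="$2"
  [ -n "$current" ] && [ "$current" != "$desired" ]
}

# Assumption: `colima status` output contains "qemu" or "vz".
current_type="$(colima status 2>/dev/null | grep -oE 'qemu|vz' | head -n1)"
if needs_recreate "$current_type" "vz"; then
  colima delete default --force
  colima start --vm-type=vz --mount-type=virtiofs
fi
```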

Docker Context Issues

Ensure Docker is using the correct context:

docker context ls
docker context use colima

Verification Test

Run this to confirm everything is working:

docker run --rm hello-world

The output should show (arm64v8) indicating native ARM64 images are being used.
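To make that check scriptable rather than eyeballing the output, a tiny sketch (the `(arm64v8)` marker is what the official `hello-world` image prints for its architecture; the helper name is made up):

```shell
# Sketch: succeed only if the hello-world output reports a native arm64 image.
is_native_arm() {
  grep -q '(arm64v8)'
}

# Real usage: docker run --rm hello-world | is_native_arm && echo "native ARM64"
echo "This message shows that your installation appears to be working correctly. (arm64v8)" \
  | is_native_arm && echo "native ARM64"
```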

Benefits You'll Notice

  • 🚀 Faster container startup times
  • ⚡ Improved build performance
  • 🔋 Better battery life
  • 🏃‍♂️ Snappier development workflow
  • 💾 Lower memory usage

Conclusion

By switching to --vm-type=vz --mount-type=virtiofs, you're leveraging Apple's native virtualization technology for optimal Docker performance on Apple Silicon Macs. This simple change can dramatically improve your development experience.


System Requirements:

  • Apple Silicon Mac (M1, M2, M3, or newer)
  • macOS 13.0 (Ventura) or later
  • Colima 0.6.0 or later

Happy coding! 🎉

 

Monday, December 15, 2025

How to Manage Multiple Azure DevOps Boards with AI Agents in VS Code


Turn your IDE into a command center for multiple projects using Docker, MCP, and VS Code Profiles.

If you are a DevOps engineer or developer juggling multiple projects, you know the pain of context switching. You're coding in one repo, but you need to check a ticket in a different Azure DevOps project.

The Model Context Protocol (MCP) has changed the game. It allows AI agents (like GitHub Copilot or Cline) to read and edit your Azure Boards directly. But what if you have multiple boards?

In this guide, I’ll show you a clean, containerized way to connect your AI agent to specific Azure Boards using Docker and VS Code Profiles.


Why This Architecture?

We are using VS Code Profiles to solve the "Tool Collision" problem.

  • The Problem: If you connect one AI agent to 5 different boards, it gets confused. It might try to find a "Data Pipeline" bug in your "Frontend" board.

  • The Solution: We create a dedicated "Persona" (Profile) for each project. When you switch profiles, VS Code automatically swaps the AI's "brain" to focus only on that project's board.

  • Security: We use Docker to run the MCP server. This keeps your host machine clean and ensures no dependencies conflict.


Part 1: Prepare the Docker Image

First, we need a container that knows how to talk to Azure DevOps. We will build a simple image wrapping the standard azuredevops-mcp-server.

  1. Create a folder named ado-mcp-docker and add a Dockerfile:

    text
    # Dockerfile
    FROM node:18-alpine

    # Install the server globally
    RUN npm install -g @ryancardin/azuredevops-mcp-server

    # Set the entrypoint
    ENTRYPOINT ["azuredevops-mcp-server"]
  2. Build the image:

    bash
    docker build -t local/ado-mcp-server .

Part 2: Set Up VS Code Profiles

Now we will create a dedicated environment for your first project (e.g., "Frontend Shop").

  1. Open Profiles: Click the Gear Icon (bottom left) > Profiles > Create Profile...

  2. Name It: ADO-Frontend (or your project name).

  3. Customize: You can give it a specific icon or color (e.g., Blue for Frontend) to visually distinguish it.

Repeat this step for your second project (e.g., ADO-DataEngineers), giving it a different name and color.


Part 3: Configure the MCP Server

This is where the magic happens. We will tell the specific profile to run our Docker container with specific project credentials.

  1. Switch to your new profile (ADO-Frontend).

  2. Open the Command Palette (Ctrl+Shift+P) and type:
    MCP: Open User Configuration
    (This opens the mcp.json file unique to THIS profile).

  3. Paste the following configuration:

json
{
  "mcpServers": {
    "azure-board": {
      "command": "docker",
      "args": [
        "run",
        "-i",    // Interactive mode is REQUIRED
        "--rm",  // Delete container after use
        "-e", "AZURE_DEVOPS_ORG_URL=https://dev.azure.com/YourOrg",
        "-e", "AZURE_DEVOPS_PROJECT=ShopFrontend",  // <--- PROJECT A
        "-e", "AZURE_DEVOPS_AUTH_TYPE=pat",
        "-e", "AZURE_DEVOPS_PERSONAL_ACCESS_TOKEN=YOUR_PAT_TOKEN",
        "local/ado-mcp-server"
      ]
    }
  }
}

(Standard JSON does not allow // comments — remove them before saving.)
  4. Repeat for Profile B: Switch to your second profile (ADO-DataEngineers), open its mcp.json, and change the AZURE_DEVOPS_PROJECT variable to DataCore (or your second project's name).


Part 4: Testing Your AI Agent

Now, let's see it in action.

  1. Open your AI Chat (Copilot, Cline, or similar).

  2. Type: "What are the active bugs in this sprint?"

  3. Watch:

    • The agent will spin up the Docker container.

    • It will query only the board defined in your current profile.

    • It will return the tickets for that specific project.

  4. Switch Profiles: Change to your other profile, ask the same question, and you will get completely different tickets from the other board.


Troubleshooting Tips

  • "Connection Refused": If using Docker on Windows/Mac, ensure the Docker Desktop app is running.

  • "Authentication Failed": Double-check that your Personal Access Token (PAT) has Work Items (Read & Write) permissions.

  • Slow Response: The first request might take 2-3 seconds as the container starts. Subsequent requests will be faster if the session stays open.



Security Tip: Avoid hardcoding your PAT in the JSON file. Instead, pass it as an environment variable from your host system:

Change the args line to:
"-e", "AZURE_DEVOPS_PERSONAL_ACCESS_TOKEN=${env:MY_ADO_PAT}"
And set MY_ADO_PAT in your system's environment variables.
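A quick way to fail fast before launching VS Code is to check that the variable is actually set. A small sketch; `check_pat` and `MY_ADO_PAT` are just the example names used above, not anything VS Code or Colima provides:

```shell
# Sketch: fail fast if the PAT variable referenced by mcp.json is missing.
check_pat() {
  var_name="$1"
  eval "val=\${$var_name:-}"
  if [ -z "$val" ]; then
    echo "ERROR: $var_name is not set" >&2
    return 1
  fi
  echo "$var_name is set"
}

export MY_ADO_PAT="example-token"   # normally set in ~/.zshrc, not inline
check_pat MY_ADO_PAT
```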