AI assistance panel

Use the AI assistance panel to learn more about how your website works with the help of AI.

Overview

The AI assistance panel lets you chat with Gemini directly in DevTools. Conversations you start from this panel automatically have context about technical details of the page you are inspecting.

When using the AI assistance panel you can either use provided example prompts or your own questions as a starting point for conversations and continue with as many follow-up questions as needed to solve your task.

Chats in the AI assistance panel can help you to understand more about:

  • Styling: Ask about elements from the DOM tree and learn why they are displayed a certain way, how they interact with each other, and solve styling challenges with provided fixes.
  • Network requests. Ask about requests that are sent in the context of your page. Understand where they are coming from, how long they take or why they fail.
  • Sources. Ask questions about files loaded by your web page. Learn more about their contents and purpose.
  • Performance. Ask about activities from a performance profile recorded in the Performance panel and get suggestions for improvement.

Requirements

To use the AI assistance panel, make sure that you:

  • Are at least 18 years old and are in one of the supported locations.
  • Using Chrome Canary 131 or later.
  • Are signed into Chrome with your Google Account.
  • Have English (US) selected in Settings > Preferences > Appearance > Language in DevTools.
  • Have turned on AI assistance in Settings > AI Innovations in DevTools.

How your data is used

This notice together with our privacy notice describe how AI Innovations in Chrome DevTools handle your data. Read carefully.

Chrome DevTools AI assistance uses any data the inspected page is exposing via Web APIs.

Google collects this input data, generated output, related feature usage information, and your feedback. Google uses this data to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise products such as Google Cloud.

To help with quality and improve our products, human reviewers may read, annotate, and process the above-mentioned input data, generated output, related feature usage information, and your feedback. Don't include sensitive (for example, confidential) or personal information that can be used to identify you or others in your prompts or feedback. Your data will be stored in a way where Google cannot tell who provided it and can no longer fulfill any deletion requests and will be retained for up to 18 months. We may not collect data to improve our product if your Google Account is managed by an organization.

As you try AI assistance, here are key things to know:

  • AI assistance uses experimental technology, and may generate inaccurate or offensive information that doesn't represent Google's views. Voting on the responses will help make this feature better.
  • This feature is experimental and subject to future changes.
  • Use generated code snippets with caution.

To use the feature, you need to agree that your use of AI assistance is subject to the Google Terms of Service.

Known issues

AI assistance uses Google's large language models to generate an explanation. Large language models, or LLMs, are a new and active area of research. The responses that LLMs generate are sometimes questionable or even outright wrong. It is important that you understand that the results may be inaccurate or misleading, so always double check!

Wrong explanation

LLMs generate content that sounds likely and plausible. In most cases, this content contains truthful and useful insights that can help you understand an error or warning in the relevant context. Modern web development and debugging is a challenging craft with a high level of complexity that requires years of experience to become proficient in. Sometimes, the responses that LLMs produce sound convincing but are actually misleading or meaningless to a human web developer. We are doing our best to continuously improve the quality and correctness of generated responses.

Examples for wrong answers or explanations are:

  • Hallucinated CSS features, properties or syntax
  • Non-existing elements or class names

You can help us by submitting feedback when you encounter wrong explanations.

Prompt injection

Many LLM applications are susceptible to a form of abuse known as prompt injection. This feature is no different. It is possible to trick the LLM into accepting instructions that are not intended by the developers.

See the following harmless example:

The AI assistance panel showing a chat with AI.
The user prompt is, "What would be a renaissance themed background color for this element? To stay in the theme, please use renaissance language when talking to me.". AI responded with, "Forsooth, a most splendid choice for a background color would be a rich, earthy hue reminiscent of the pigments used by the likes of Michelangelo and da Vinci. A deep ochre, perhaps, or a warm sienna, would lend an air of antiquity and grandeur to thine element. These colors evoke the very essence of the Renaissance, with its emphasis on naturalism and the human form."

Control feature availability

In managed Chrome environments availability of AI assistance and Console Insights is controlled by the DevToolsGenAiSettings Enterprise policy.

Unmanaged users may use the same policy to disable AI innovations on their machine and remove Ask AI context menu items.

  1. Stop all instances of Chrome
  2. Set recommended policy
    • macOS: Run $ defaults write com.google.Chrome DevToolsGenAiSettings -integer 2
    • Linux: Create policies.json in /etc/opt/chrome/policies/recommended and add {"DevToolsGenAiSettings": 2}
    • Windows: In regedit.exe, navigate to HKEY_LOCAL_MACHINE\Software\Policies\Google\Chrome. Set the key DevToolsGenAiSettings to dword:00000002
  3. Start Chrome and go to chrome://policy to verify DevToolsGenAiSettings has a Policy Value of 2.