cy-ai

    Note

    This package is under development so expect breaking changes in future releases.

    Cypress AI

    🧪 Cypress AI command that generates E2E tests using an LLM (Large Language Model):

    cy.ai(string)
    

    NPM:

    npm install cy-ai --save-dev
    

    Yarn:

    yarn add cy-ai --dev
    

    If you're using TypeScript, import the command using ES2015 syntax:

    echo "import 'cy-ai'" >> cypress/support/commands.ts
    

    Or if you're using JavaScript, use CommonJS require:

    echo "require('cy-ai')" >> cypress/support/commands.js
    

    Start the Ollama server:

    ollama serve
    

    Download the LLM:

    ollama pull qwen2.5-coder
    

    Write a test:

    // cypress/e2e/example.cy.js
    it('visits example.com', () => {
      cy.ai('go to https://example.com and see heading "Example Domain"')
    })

    Tip

    If you're running Chrome, disable chromeWebSecurity so the LLM requests aren't blocked by CORS:

    // cypress.config.js
    import { defineConfig } from 'cypress'

    export default defineConfig({
      chromeWebSecurity: false,
    })

    Generate Cypress tests with AI:

    cy.ai(string[, options])
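
    For example, combine any of the options documented below (the task string here is illustrative):

    cy.ai('fill in the login form and submit', {
      regenerate: true,
      timeout: 1000 * 60 * 3, // 3 minutes
    })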
    

    llm

    LangChain Runnable to invoke. Defaults to a prompt template piped into the Ollama model qwen2.5-coder.

    Use a different large language model:

    import { Ollama } from '@langchain/ollama'
    import { prompt } from 'cy-ai'

    const llm = new Ollama({
      model: 'codellama',
      numCtx: 16384,
    })

    const chain = prompt.pipe(llm)

    cy.ai('prompt', { llm: chain })

    Or customize the template:

    import { PromptTemplate } from '@langchain/core/prompts'
    import { Ollama } from '@langchain/ollama'

    const llm = new Ollama({
      model: 'codellama',
      numCtx: 16384,
    })

    const prompt = PromptTemplate.fromTemplate(`
    You are writing an E2E test step with Cypress.

    Rules:
    1. Return JavaScript Cypress code without "describe" and "it".

    Task: {task}

    HTML:
    \`\`\`html
    {html}
    \`\`\`
    `)

    const chain = prompt.pipe(llm)

    cy.ai('prompt', { llm: chain })

    Important

    Don't forget to pull the Ollama model:

    ollama pull codellama
    

    log

    Whether to display the command logs. Defaults to true:

    cy.ai('prompt', { log: true })
    

    Hide Cypress and console logs:

    cy.ai('prompt', { log: false })
    

    regenerate

    Whether to regenerate the Cypress step with AI. Defaults to false:

    cy.ai('prompt', { regenerate: false })
    

    Regenerate the Cypress step with AI:

    cy.ai('prompt', { regenerate: true })
    

    timeout

    Time to wait for the command in milliseconds. Defaults to 2 minutes:

    cy.ai('prompt', { timeout: 120000 })
    

    Set timeout to 5 minutes:

    cy.ai('prompt', { timeout: 1000 * 60 * 5 })
    

    Configure global options for cy.ai:

    cy.aiConfig(options)
    

    Override default options:

    cy.aiConfig({
      llm: chain,
      log: false,
      regenerate: true,
      timeout: 1000 * 60 * 3, // 3 minutes
    })

    Set timeout to 5 minutes:

    cy.aiConfig({
      timeout: 1000 * 60 * 5,
    })

    How it works

    1. A prompt is created from your task, the HTML body, and the template.
    2. The prompt is sent to the LLM server.
    3. The LLM server responds with Cypress code.
    4. The Cypress code is cleaned and run.
    5. If the steps pass, the code is saved to cypress/e2e/**/__generated__/*.json.
    6. If the steps fail, an error is thrown and the LLM response can be inspected in the browser Console.
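
    As a rough sketch of that flow (illustrative pseudocode only, not the package's actual implementation; aiStep, cache, and run are hypothetical):

    // Illustrative sketch; `aiStep`, `cache`, and `run` are hypothetical.
    async function aiStep(task, { cache, llm, regenerate = false, run }) {
      const cached = cache.get(task)                // previously saved code, if any
      if (cached && !regenerate) return run(cached) // reuse generated code
      const html = document.body.outerHTML          // 1. task + HTML fill the template
      const code = await llm.invoke({ task, html }) // 2-3. prompt the LLM server
      const cleaned = code.replace(/```[a-z]*/g, '').trim() // 4. strip code fences
      await run(cleaned)                            // throws if the steps fail (6)
      cache.set(task, cleaned)                      // 5. save passing code for reuse
      return cleaned
    }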

    When running tests, if the generated Cypress code exists, the command will reuse the existing code.

    To regenerate a step, enable the regenerate option or delete the generated code in cypress/e2e/**/__generated__/*.json.
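
    For instance, remove all generated code with a shell one-liner (a sketch based on the path above; verify it matches your project before deleting):

    find cypress/e2e -type d -name __generated__ -exec rm -r {} +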

    Warning

    If your tests have duplicate or identical titles (describe and it), the generated tests may fail.
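
    For example (illustrative specs), two tests sharing a title like these could clash:

    // cypress/e2e/admin.cy.js
    it('logs in', () => {
      cy.ai('log in as admin')
    })

    // cypress/e2e/user.cy.js
    it('logs in', () => {
      cy.ai('log in as a regular user')
    })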

    Releases are automated with Release Please.

    MIT