
Secure Your Frontend: Updated DOM Based XSS Cheat Sheet

Introduction

A DOM-based XSS cheat sheet is your go-to reference when you want to locate, prevent, and automate protection against client-side script injection. In essence: identify where user-controlled input (a source) flows into a dangerous API (a sink), replace it with safe patterns (use textContent, createElement, or sanitizers), and integrate checks into your build, runtime, and pentest workflow. Because DOM XSS happens entirely in the browser and bypasses many traditional server-side filters, your front-end becomes the last line of defence, and the place where most teams still lack automation.

What is DOM‑Based XSS and why it matters

According to PortSwigger, DOM-based cross-site scripting arises when “JavaScript takes data from an attacker-controllable source, such as the URL, and passes it to a sink that supports dynamic code execution, such as eval() or innerHTML.” (portswigger.net) The OWASP DOM-based XSS Prevention Cheat Sheet underlines that the key difference from stored/reflected XSS is runtime client-side injection. (cheatsheetseries.owasp.org)

In modern applications — Single Page Apps (SPAs), heavy use of third‑party widgets, dynamic DOM building — the risk grows: payloads may never reach your server logs, traditional WAFs may miss them, and developers often insufficiently consider fragment identifiers, postMessage flows, or window.name, all common sources. Recognizing this shift is the first step toward measuring your security maturity.

Mapping the threat: sources → sinks

Secure coding begins with a mental map of sources (where attacker input enters) and sinks (where execution occurs). One security blog summarises: “The source is any place on a web page where user input can be added … The sink is where the data inserted in the source goes … and if not sanitized it can lead to a DOM-based XSS vulnerability.” (Medium)

Below is a compact table you should keep at your workstation and in review checklists:

| Source (entry point) | Sink (dangerous API) |
| --- | --- |
| location.hash, location.search, URLSearchParams | innerHTML, insertAdjacentHTML, outerHTML |
| postMessage, window.name | eval(), new Function(), setTimeout(string) |
| document.referrer, localStorage, sessionStorage | setAttribute('on…'), element.src = … |
| Unvetted third-party widget data | Any DOM insertion or implicit code execution |

As OWASP emphasises, no single technique prevents XSS: you must combine proper sinks, encoding, sanitization and safe APIs. (cheatsheetseries.owasp.org)
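Encoding is the complement to sanitization: when a value must land in an HTML text context, entity-encode it rather than trusting a blocklist. A minimal sketch of such an encoder (the function name is ours, not from the OWASP cheat sheet; prefer a vetted library in production):

```javascript
// Minimal HTML-entity encoder for text contexts — an illustrative sketch,
// not a substitute for a vetted encoding library.
function encodeHTML(value) {
  return String(value).replace(/[&<>"']/g, ch => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  }[ch]));
}

// e.g. encodeHTML('<img src=x onerror=alert(1)>')
//   → '&lt;img src=x onerror=alert(1)&gt;'
```

Note this is only safe for HTML text and quoted-attribute contexts; URL, CSS and JavaScript contexts each need their own encoding rules.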

Real‑World Attack Patterns with Code & Fixes

Example A – jQuery selector injection via URL fragment

Vulnerable snippet:

js

$(window).on('hashchange', () => {
  const target = $(location.hash); // user-controlled
  target[0].scrollIntoView();
});

An attacker may craft https://site/page.html#<img src="x" onerror="alert(1)">: jQuery treats the hash as HTML/selector input, and the onerror handler fires.

Fix:

js

const raw = location.hash.slice(1);
const safeId = /^[A-Za-z0-9_-]+$/.test(raw) ? raw : null;

if (safeId) {
  const target = document.getElementById(safeId);
  if (target) target.scrollIntoView();
}

Key: validate the value and treat the hash as an identifier, not HTML.

Example B – postMessage → eval chain

Vulnerable snippet:

js

window.addEventListener('message', e => {
  eval(e.data); // dangerous
});

Fixed version:

js

window.addEventListener('message', event => {
  if (event.origin !== 'https://trusted.example.com') return;
  try {
    const msg = JSON.parse(event.data);
    handleMessage(msg);
  } catch {
    console.warn('Invalid message format');
  }
});

Avoid eval, check origin, use safe parsing.

Example C – Editor/Preview XSS in markdown context

Vulnerable:

js

preview.innerHTML = marked(userInput);

Secure:

js

import DOMPurify from 'dompurify';

const dirty = marked(userInput);
const clean = DOMPurify.sanitize(dirty);
preview.innerHTML = clean;

When allowing user‑generated HTML, sanitization is required.

Deployable Implementations: making it operational

Safe DOM helper – safe‑dom.js

js

import DOMPurify from 'dompurify';

export function setSafeHTML(el, dirty) {
  const clean = DOMPurify.sanitize(dirty, {
    ALLOWED_TAGS: ['b', 'i', 'a', 'p', 'ul', 'li', 'code', 'pre', 'img'],
    ALLOWED_ATTR: ['href', 'src', 'alt', 'title', 'rel'],
    FORBID_ATTR: ['onerror', 'onclick', 'style']
  });
  el.innerHTML = clean;
}

export function setText(el, text) {
  el.textContent = String(text ?? '');
}

export function safeSetAttribute(el, name, val) {
  if (/^on/i.test(name)) throw new Error('Event handler attribute not allowed');
  el.setAttribute(name, String(val));
}

Use this library to centralise safe DOM operations and reduce human error.

Static enforcement – ESLint sample

js

// .eslintrc.js
rules: {
  'no-restricted-syntax': [
    'error',
    { selector: "AssignmentExpression[left.property.name='innerHTML']", message: "Use safe-dom.setSafeHTML or textContent." },
    { selector: "CallExpression[callee.name='eval']", message: "Avoid eval()" },
    { selector: "CallExpression[callee.property.name='write']", message: "Avoid document.write()" }
  ]
}

Combine with pre‑commit hooks (husky, lint‑staged) to block dangerous patterns.
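One possible wiring for that pre-commit step, assuming husky and lint-staged are installed (the file name and glob below are illustrative):

```javascript
// .lintstagedrc.js — run ESLint on staged files so the
// no-restricted-syntax rules above block dangerous sinks before commit.
const lintStaged = {
  '*.{js,jsx,ts,tsx}': ['eslint --max-warnings 0']
};

module.exports = lintStaged;
```

With husky v8 this can be registered via `npx husky add .husky/pre-commit "npx lint-staged"`, so every commit runs the check automatically.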

CI / Puppeteer test – GitHub Actions

.github/workflows/dom-xss.yml triggers a Puppeteer test:

js

// tests/puppeteer/dom-xss-test.js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const logs = [];
  page.on('console', msg => logs.push(msg.text()));

  const base = process.env.URL; // e.g. http://localhost:8080
  await page.goto(`${base}/page.html#<img src="x" onerror="console.log('xss')">`);
  await new Promise(resolve => setTimeout(resolve, 1000));

  if (logs.some(l => l.includes('xss'))) process.exit(2);
  await browser.close();
})();

Fail build on detection.

Runtime monitoring – MutationObserver

js

(function () {
  const obs = new MutationObserver(muts => {
    muts.forEach(m => {
      m.addedNodes.forEach(n => {
        if (n.nodeType === 1) {
          const html = n.innerHTML || '';
          if (/on(error|click|load)|<script\b/i.test(html)) {
            navigator.sendBeacon('/_monitoring/xss', JSON.stringify({
              url: location.href,
              snippet: html.slice(0, 200)
            }));
          }
        }
      });
    });
  });

  obs.observe(document.documentElement, { childList: true, subtree: true });
})();

Useful in staging to alert on unexpected DOM injections.

Browser security hardening – CSP & Trusted Types

CSP header:

http

Content-Security-Policy: default-src 'self'; script-src 'self' 'nonce-XYZ'; object-src 'none'; base-uri 'self';

Trusted Types snippet:

js

// The policy named 'default' intercepts raw string assignments to
// injection sinks once Trusted Types enforcement is enabled via CSP.
window.trustedTypes?.createPolicy('default', {
  createHTML: s => { throw new Error('Direct HTML assignment blocked'); },
  createScript: s => { throw new Error('Direct script creation blocked'); }
});

Combined with the CSP directive require-trusted-types-for 'script', this blocks untrusted string assignments to dangerous sinks by default.

Third‑party script safety – SRI

bash

openssl dgst -sha384 -binary vendor.js | openssl base64 -A

Use <script src="vendor.js" integrity="sha384-..." crossorigin="anonymous"></script> to pin and verify.

Integrating with Penligent for automation

If your team uses Penligent, you can elevate your DOM XSS protection into a continuous pipeline. Penligent’s research article notes how “detecting DOM-based XSS via runtime taint tracking … server-side techniques cannot reliably catch client-side injection.” (Penligent)

Example workflow:

  1. In CI trigger a Penligent scan with ruleset dom‑xss, supplying payloads like #<img src="x" onerror="alert(1)">.
  2. Penligent executes headless flows, generates PoC, returns findings via webhook.
  3. CI analyses findings: if severity ≥ high, fail the build and annotate the PR with payload + sink + fix recommendation (e.g., “replace innerHTML with safe-dom.setSafeHTML”).
  4. Developers fix, run CI again, merge only when green.
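The gate in step 3 can be as small as a severity filter. The findings shape below (a `severity` field per finding) is an assumption about the webhook payload for illustration, not a documented Penligent schema:

```javascript
// Fail the build when any finding meets or exceeds a severity threshold.
// The { severity: '...' } finding shape is a hypothetical webhook payload.
const RANK = { low: 0, medium: 1, high: 2, critical: 3 };

function shouldFailBuild(findings, threshold = 'high') {
  return findings.some(f => RANK[f.severity] >= RANK[threshold]);
}

// In a CI step, something like:
//   process.exit(shouldFailBuild(report.findings) ? 1 : 0);
```

Keeping the policy in one small function makes the threshold easy to review and adjust per repository.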

This closes the loop: from reference (this cheat sheet) → code policy → automated detection → organized remediation.

Conclusion

The front-end is no longer “just UI”. It is an attack surface. This cheat sheet walks you through understanding client-side injection, replacing dangerous sinks, building safe helper libraries, deploying static/CI/runtime detection, and automating with a platform like Penligent. Your next steps: scan your codebase for banned sinks (innerHTML, eval), adopt a safe-dom library, enforce lint rules, integrate headless tests and real pentest logic, monitor production/staging, and pin third-party resources. Protecting against DOM XSS is about making it impossible to slip through, not just relying on chance.
