User-Agent Parser
Decode any browser User-Agent string into browser, engine, OS, device type, and bot indicators. The input defaults to your own UA, so you can see exactly what websites learn from your User-Agent header.
How to Use
- The input loads with your own browser's User-Agent string by default.
- Paste any UA string to inspect it (from log files, bug reports, headers).
- View the parsed fields: browser name and version, rendering engine, operating system and version, device type (desktop/mobile/tablet), and whether the string appears to belong to a bot.
- A confidence indicator shows how sure the parser is — UA strings deliberately mislead, so be skeptical of unusual results.
- Use the preset buttons for common UA strings (latest Chrome, Safari, Firefox, common bots, etc.).
- Click any output to copy.
UA String Anatomy
A Brief History of the User-Agent String
The User-Agent header was specified in HTTP/1.0 (RFC 1945, 1996) as a way for clients to identify themselves to servers. The original purpose was simple: let server admins see what software was hitting their site, and let servers tailor responses (different content for Mosaic vs Lynx, for example).
The compatibility chain started immediately. Netscape's browser identified as Mozilla. Servers started checking for "Mozilla" as a proxy for "modern browser." When Microsoft Internet Explorer launched (1995) and was served the downgraded HTML, Microsoft added "Mozilla" to its UA. Every subsequent browser (Opera, Safari, Chrome, Edge) repeated the trick, adding tokens that look like other engines so servers don't downgrade them. The result is a UA string where every part claims to be everything: Chrome's UA includes Mozilla, AppleWebKit, KHTML, Gecko, Chrome, AND Safari simultaneously.
By the 2010s, UA-based browser sniffing had become a major source of bugs. Sites that parsed only the first two digits of the major version read Chrome 100 as Chrome 10. Modern browsers send deliberately misleading UAs (Edge claims Chrome; Brave claims Chrome) precisely because so much code makes wrong assumptions. The User-Agent Reduction initiative (phased in around Chrome 100, 2022) freezes most fields to a small fixed set and pushes detection to Client Hints: explicit opt-in headers that servers must request. The era of relying on the UA string is winding down.
About This Parser
This parser uses pattern-matching on the UA string to extract browser, engine, OS, and device hints. The accuracy varies — common browsers (Chrome, Safari, Firefox, Edge) on common platforms (Windows, macOS, iOS, Android, Linux) parse very reliably; obscure browsers, embedded WebViews, frozen UA strings, and intentionally misleading UAs are best-effort.
Bot detection is heuristic: it looks for explicit bot tokens, common scraper user-agent strings, headless browser markers, and HTTP library identifiers. Sophisticated scrapers will fake real browser UAs and won't be flagged.
Everything runs in your browser. The default value is navigator.userAgent — your own browser's UA. Pasted UAs are parsed locally; nothing is transmitted.
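The pattern-matching approach described above can be sketched as an ordered list of regex rules. This is an illustrative simplification, not this tool's actual rule set; the function name `parseUA` and the four rules shown are assumptions for the example. Rule order is the whole trick, since Chrome's UA also contains "Safari" and Edge's also contains "Chrome":

```javascript
// Simplified sketch of UA pattern-matching (not this tool's exact rules).
// Order matters: the most specific token must be tested first, because
// Chrome's UA contains "Safari" and Edge's UA contains "Chrome".
function parseUA(ua) {
  const rules = [
    { name: "Edge",    re: /Edg\/([\d.]+)/ },
    { name: "Chrome",  re: /Chrome\/([\d.]+)/ },
    { name: "Firefox", re: /Firefox\/([\d.]+)/ },
    { name: "Safari",  re: /Version\/([\d.]+).*Safari/ },
  ];
  for (const { name, re } of rules) {
    const m = ua.match(re);
    if (m) return { browser: name, version: m[1] };
  }
  return { browser: "Unknown", version: null };
}
```

A real parser has hundreds of such rules plus engine, OS, and device tables, but the first-specific-match-wins structure is the same.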
Frequently Asked Questions
What is a User-Agent string?
An HTTP header (User-Agent) that browsers send with every request to identify themselves. The format began as a simple product/version token (<code>Mozilla/1.0</code>); strings like <code>Mozilla/4.0 (compatible; MSIE 5.0)</code> already show the compatibility tokens accreting. A typical 2026 UA looks like <code>Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36</code>, where every part exists for some historical reason.
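Structurally, a UA string is an alternating sequence of product tokens (<code>Name/Version</code>) and parenthesized comments carrying platform detail. A rough tokenizer makes the anatomy visible (the function name `tokenizeUA` is illustrative, and this sketch ignores edge cases like nested parentheses):

```javascript
// Rough UA tokenizer: product tokens are "Name/Version", comments are
// parenthesized blobs of platform detail. Illustrative only; does not
// handle nested parentheses or versionless tokens.
function tokenizeUA(ua) {
  const tokens = [];
  const re = /\(([^)]*)\)|(\S+)\/(\S+)/g;
  let m;
  while ((m = re.exec(ua)) !== null) {
    if (m[1] !== undefined) tokens.push({ type: "comment", value: m[1] });
    else tokens.push({ type: "product", name: m[2], version: m[3] });
  }
  return tokens;
}
```

Run against the Chrome example above, this yields four product tokens (Mozilla, AppleWebKit, Chrome, Safari) and two comments, which is the compatibility chain laid bare.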
Why are UA strings so weird?
Compatibility tokens. Early servers checked for 'Mozilla' to decide whether to send modern HTML. Other browsers added 'Mozilla' to their UAs to avoid being downgraded. Microsoft added 'compatible' and 'MSIE' inside Mozilla. WebKit added 'KHTML, like Gecko' to look like Konqueror/Mozilla. Chrome includes Safari and AppleWebKit tokens. The whole thing is a 30-year compatibility chain that nobody can break.
Is the User-Agent reliable?
No. The UA is purely client-supplied — easy to spoof, change, or remove. Browsers themselves are reducing what they reveal: Chrome's 'User-Agent Reduction' (2022) freezes most UA fields. The future is <strong>Client Hints</strong> (Sec-CH-UA-* headers) which the server requests explicitly and the browser may or may not provide. Treat UA-based decisions as advisory, not authoritative.
What are Client Hints?
Modern HTTP headers (Sec-CH-UA, Sec-CH-UA-Mobile, Sec-CH-UA-Platform, etc.) that replace the role of the User-Agent. Servers explicitly request specific hints via Accept-CH; clients respond with low-entropy info by default and high-entropy info on request. Most major browsers send Client Hints in 2026; this tool focuses on the legacy UA string but Client Hints are the path forward.
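A Sec-CH-UA value is a structured-field list of quoted brand/version pairs. As a rough sketch of what a server-side consumer does (the function name `parseSecChUa` is an assumption; a production parser should follow RFC 8941 structured fields rather than a regex):

```javascript
// Parse a Sec-CH-UA header value into brand/version pairs.
// Sketch only: real parsing should follow RFC 8941 structured fields.
// Note the intentional junk brand ("Not?A_Brand" etc.) that Chromium
// includes to discourage exact-match sniffing.
function parseSecChUa(header) {
  const brands = [];
  const re = /"([^"]+)";v="([^"]+)"/g;
  let m;
  while ((m = re.exec(header)) !== null) {
    brands.push({ brand: m[1], version: m[2] });
  }
  return brands;
}
```

In the browser itself, Chromium exposes the same data to page scripts via `navigator.userAgentData`.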
How does bot detection work?
The parser looks for explicit bot tokens (Googlebot, Bingbot, AhrefsBot, etc.), unusual UA structure (missing Mozilla/, no engine/version), HTTP fetch libraries (Python-requests, curl, Wget, Go-http-client), and headless-browser markers. None of these are reliable; sophisticated scrapers fake real browser UAs. For real bot detection, combine UA inspection with behavior analysis, IP reputation, and CAPTCHAs.
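Those heuristics amount to a few pattern lists plus a structure check. A minimal sketch (the name `looksLikeBot` and the exact pattern lists are assumptions for illustration, not this tool's implementation):

```javascript
// Heuristic bot check: explicit bot tokens, HTTP fetch libraries,
// headless markers, and missing "Mozilla/" prefix. Easy to evade;
// treat a true result as a hint, never proof.
const BOT_PATTERNS = [
  /bot|crawler|spider|slurp/i,                         // Googlebot, Bingbot, AhrefsBot...
  /python-requests|curl|wget|go-http-client|okhttp/i,  // HTTP libraries
  /headlesschrome|phantomjs|puppeteer|playwright/i,    // headless markers
];
function looksLikeBot(ua) {
  if (!ua || !ua.startsWith("Mozilla/")) return true;  // unusual structure
  return BOT_PATTERNS.some((re) => re.test(ua));
}
```

Note the false-negative bias: anything that fakes a full browser UA passes, which is exactly why the answer above recommends combining this with behavioral signals.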
Can I trust the OS version reported?
Approximately. Major OS family (Windows, macOS, Linux, iOS, Android) is reliable. Specific version is often frozen (Chrome on Windows 10/11 reports 'Windows NT 10.0' regardless of patch level). Mobile UAs sometimes lie about device model. Use the OS info as a hint, not as ground truth.
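The "family yes, version no" rule shows up directly in how OS extraction works. A sketch (the function name `parseOS` and rule set are illustrative; ordering matters because iPhone UAs contain "like Mac OS X" and Android UAs contain "Linux"):

```javascript
// Extract the OS family from UA tokens. "Windows NT 10.0" covers both
// Windows 10 and Windows 11 because the version is frozen, so only the
// family is trustworthy. Sketch, not exhaustive.
function parseOS(ua) {
  if (/Windows NT 10\.0/.test(ua)) return { family: "Windows", note: "10 or 11" };
  if (/iPhone|iPad/.test(ua)) return { family: "iOS" };     // before macOS: "like Mac OS X"
  if (/Mac OS X/.test(ua)) return { family: "macOS" };
  if (/Android/.test(ua)) return { family: "Android" };     // before Linux
  if (/Linux/.test(ua)) return { family: "Linux" };
  return { family: "Unknown" };
}
```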
What's WebView vs full browser?
WebView is an embedded browser inside a native app (e.g., Facebook's in-app browser, Twitter's, ad SDK browsers). The UA usually contains both the OS info AND a marker like <code>FBAN/FBIOS</code>, <code>Twitter</code>, or <code>Instagram</code>. WebView traffic often behaves differently — limited features, different cookie scope, no extensions.
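Detecting those markers is a simple substring check. A small illustrative subset (the function name `detectWebView` is an assumption; `FBAN`/`FBIOS`, `Instagram`, and the Android <code>; wv)</code> token are real markers, but a complete list is much longer):

```javascript
// Detect common in-app WebView markers. The Android WebView adds a
// literal "; wv)" token to its UA comment; Facebook and Instagram add
// their own app tokens. Small illustrative subset only.
function detectWebView(ua) {
  if (/FBAN|FBAV|FBIOS/.test(ua)) return "Facebook";
  if (/Instagram/.test(ua)) return "Instagram";
  if (/; wv\)/.test(ua)) return "Android WebView";
  return null;
}
```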
Common Use Cases
Bug report triage
Decode the User-Agent string from a customer bug report to know which browser/OS combination to test.
Log analysis
Identify the source of suspicious traffic by parsing UA strings from access logs.
Privacy audit
See exactly what your browser tells every website about your environment.
Bot identification
Distinguish search engine crawlers from scrapers from headless browsers in your traffic.
Frontend feature gating
Decide whether to enable a feature based on browser detection (use feature detection instead when possible).
Browser compatibility testing
Validate that the UAs reported by your test matrix match what production users actually send.
Last updated: